Replace `feature(never_type)` with `feature(exhaustive_patterns)`.
`feature(exhaustive_patterns)` covers only the pattern-exhaustiveness
checks that used to be gated by `feature(never_type)`.
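For context, this is the kind of code the gate now covers (an illustrative sketch, not taken from the commit):

```rust
// Requires a nightly compiler. `exhaustive_patterns` lets a match omit
// arms for values of uninhabited types.
#![feature(exhaustive_patterns)]

enum Void {}

fn extract(res: Result<u32, Void>) -> u32 {
    // `Void` has no values, so the `Err` arm can be omitted and the
    // match is still considered exhaustive.
    match res {
        Ok(n) => n,
    }
}

fn main() {
    println!("{}", extract(Ok(7)));
}
```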
Move ascii::escape_default to libcore
As requested in #46409, the `ascii::escape_default` method has been added to the core library. All I did was copy over the `std::ascii` module file, remove the (redundant) `AsciiExt` trait, and change some of the documentation to match. None of the tests were changed.
I wasn't sure how to handle the stability annotations. For `EscapeDefault` and `escape_default()`, I changed them to `#[unstable(feature = "core_ascii", issue = "46409")]`. Is that alright? Or should I leave them as they were?
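For reference, a minimal usage sketch, assuming the moved API mirrors `std::ascii` as described above (on nightly at the time this would additionally need `#![feature(core_ascii)]` per the unstable attribute):

```rust
use core::ascii;

fn main() {
    // `escape_default` yields the escaped form byte by byte:
    // a tab escapes to the two bytes `\` and `t`.
    let escaped: Vec<u8> = ascii::escape_default(b'\t').collect();
    assert_eq!(escaped, b"\\t".to_vec());
}
```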
This commit updates rustc to embed bitcode, by default, in each object file it
generates when compiling for iOS. This was determined in #35968 as a step
towards better compatibility with the iOS toolchain, so let's give it a spin and
see how it turns out!
Note that this also updates the `cc` dependency, which should propagate this
change of embedding bitcode to C dependencies as well.
* Pass `opt_level(2)` when calculating CFLAGS to get the right flags on iOS (see the build-script sketch after this list)
* Unconditionally pass `-O2` when compiling libbacktrace
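For illustration, here is roughly what the `opt_level(2)` setting looks like in a build script using the `cc` crate (a hypothetical `build.rs` with made-up file names, not code from this PR; per the note above, a `cc` version with this change adds the bitcode-embedding flags itself when targeting iOS):

```rust
// build.rs sketch (hypothetical paths and crate layout).
fn main() {
    cc::Build::new()
        .file("src/backtrace_shim.c") // hypothetical C source
        .opt_level(2)                 // corresponds to `-O2`
        .compile("backtrace_shim");
}
```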
This should...
Close #48903
Close #48906
introduce canonical queries, use for normalization and dropck-outlives
This branch adds the concept of a **canonicalized trait query** and uses it for three specific operations (a toy sketch of the canonicalization idea follows the list):
- `infcx.at(cause, param_env).normalize(type_foldable)`
  - normalizes all associated types in `type_foldable`
- `tcx.normalize_erasing_regions(param_env, type_foldable)`
  - like normalize, but erases regions first and in the result; this leads to better caching
- `infcx.at(cause, param_env).dropck_outlives(ty)`
  - produces the set of types that must be live when a value of type `ty` is dropped
  - used from dropck but also NLL outlives
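To make that concrete, here is a toy sketch of the canonicalization idea (a hypothetical `Ty` type, not rustc's real API): inference variables are renumbered in order of first appearance, so structurally identical queries hit the same cache entry no matter which fresh variables the caller happened to use.

```rust
use std::collections::HashMap;

// Toy type grammar with inference variables like `?7`.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
enum Ty {
    Int,
    Infer(u32),
    Ref(Box<Ty>),
}

// Renumber inference variables in order of first appearance, so `&?7`
// and `&?3` both canonicalize to `&?0`.
fn canonicalize(ty: &Ty, map: &mut HashMap<u32, u32>) -> Ty {
    match ty {
        Ty::Int => Ty::Int,
        Ty::Infer(v) => {
            let next = map.len() as u32;
            Ty::Infer(*map.entry(*v).or_insert(next))
        }
        Ty::Ref(inner) => Ty::Ref(Box::new(canonicalize(inner, map))),
    }
}

fn main() {
    let a = canonicalize(&Ty::Ref(Box::new(Ty::Infer(7))), &mut HashMap::new());
    let b = canonicalize(&Ty::Ref(Box::new(Ty::Infer(3))), &mut HashMap::new());
    assert_eq!(a, b); // both are `Ref(Infer(0))`
}
```

The real queries also record the mapping back to the caller's variables so that the canonical answer can be substituted into the original inference context.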
This is a kind of "first step" towards a more Chalk-ified approach. It leads to a **big** speedup for NLL, which is basically dominated by the dropck-outlives computation. Here are some timing measurements for the `syn` crate (pre-branch measurements coming soon):
| Commit | NLL disabled | NLL enabled |
| ------- | --- | --- |
| Before my branch | 5.43s | 8.99s |
| After my branch | 5.36s | 7.25s |
(Note that NLL enabled still does *all the work* that NLL disabled does, so this is not really a way to compare the performance of NLL versus the AST-based borrow checker directly.) Since this affects all codepaths, I'd like to do a full perf run before we land anything.
Also, this is not the "final point" for canonicalization etc. I think canonicalization can be made substantially faster, for one thing. But it seems like a reasonable starting point for a branch that's gotten a bit larger than I would have liked.
**Commit convention:** First of all, this entire branch ought to be a "pure refactoring", I believe, not changing anything about external behavior. Second, I've tagged the most important commits with `[VIC]` (very important commit), so you can scan for those. =)
r? @eddyb
Before, the identifier naming the argument's type was also reused as the
binding when generating a pattern to match against the dep-node. So
`Foo(DefId)` would generate a match pattern like:
```rust
match foo {
    Foo(DefId) => ...
}
```
This does not scale to more general types like `&'tcx Ty<'tcx>`. Therefore, we
now require *exactly one* argument (the macro was internally tupling anyway,
and no actual nodes use more than one argument), and then we can generate a
fixed pattern like:
```rust
match foo {
    Foo(arg) => ...
}
```
Huzzah. (Also, hygiene is nice.)
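As a toy illustration of why the fixed name helps (a hypothetical `define_nodes!` macro, far simpler than the real dep-node macro):

```rust
// Hypothetical, simplified macro: the fixed binding name `arg` works
// for any argument type, including ones like `&'tcx Ty<'tcx>` that are
// not usable as binding identifiers.
macro_rules! define_nodes {
    ($($name:ident($arg_ty:ty)),* $(,)?) => {
        #[allow(dead_code)]
        enum DepNode { $($name($arg_ty)),* }

        fn describe(node: &DepNode) -> String {
            match node {
                $(DepNode::$name(arg) =>
                    format!("{}({:?})", stringify!($name), arg)),*
            }
        }
    };
}

define_nodes! {
    Foo(u32),
    Bar(&'static str),
}

fn main() {
    println!("{}", describe(&DepNode::Foo(42))); // prints `Foo(42)`
}
```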
In general, we've been moving towards a semantics where you can have
contradictory where-clauses, and we try to honor them. There are already
run-pass tests that take that philosophy (e.g.,
`compile-fail/issue-36839.rs`). The current behavior of `and`, which strips
the environment, breaks that code.
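For context, a hedged sketch of what a contradictory environment can look like (an assumed example, not the cited test):

```rust
// Assumed illustration: no `T` can satisfy both bounds, so this
// function is uncallable, but under the philosophy above the
// definition itself should still type-check with the clauses honored.
fn contradictory<T>(_t: T)
where
    T: Iterator<Item = u8>,
    T: Iterator<Item = u16>, // contradicts the clause above
{
}

fn main() {}
```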