Some assorted region hashing fixes.
This PR contains three changes.
1. It changes what we implement `HashStable` for. Previously, the trait was implemented for things in the local `TyCtxt`. That was OK, since we only invoked hashing with a `TyCtxt<'_, 'tcx, 'tcx>` where there is no difference. With query result hashing this becomes a problem though. So we now implement `HashStable` for things in `'gcx`.
2. The PR makes the regular `HashStable` implementation *not* anonymize late-bound regions anymore. It's a waste of computing resources and it's not clear that it would always be correct to do so.
3. The PR adds an option for stable hashing to treat all regions as erased and uses this new option when computing the `TypeId`. This should help with https://github.com/rust-lang/rust/issues/41875.
I did not add a test case for (3) since that's not possible yet. But it looks like @zackmdavis has something in the pipeline there `:)`.
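As a rough, self-contained illustration of the idea behind (3) (this is toy code, not rustc's actual `HashStable` machinery): treating all regions as erased before hashing makes types that differ only in their lifetimes hash identically.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-ins for rustc's region and type representations.
#[derive(Hash)]
enum Region {
    Named(&'static str),
    Erased,
}

#[derive(Hash)]
enum Ty {
    Ref(Region, Box<Ty>),
    U32,
}

// Replace every region with `Erased` before hashing.
fn erase_regions(ty: &Ty) -> Ty {
    match ty {
        Ty::Ref(_, inner) => Ty::Ref(Region::Erased, Box::new(erase_regions(inner))),
        Ty::U32 => Ty::U32,
    }
}

fn type_hash(ty: &Ty) -> u64 {
    let mut hasher = DefaultHasher::new();
    erase_regions(ty).hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let a = Ty::Ref(Region::Named("'a"), Box::new(Ty::U32));
    let b = Ty::Ref(Region::Named("'b"), Box::new(Ty::U32));
    // With regions erased, `&'a u32` and `&'b u32` hash the same.
    assert_eq!(type_hash(&a), type_hash(&b));
}
```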
r? @eddyb
Make a disable-jemalloc build work
Fixes #43510. I've tested this up to building a stage1 compiler.
r? @alexcrichton
cc @cuviper @vorner
@cuviper, your fix was almost correct; you just had a stray `!` in there, which caused the second error you saw.
Non-lexical lifetimes region renumberer
Regenerates region variables for all regions in a cloned MIR in the NLL MIR pass. This is part of the work for #43234.
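A standalone sketch of what "renumbering" means here (toy types, not rustc's MIR or region-inference machinery): every region encountered in the cloned body is replaced by a freshly created region variable.

```rust
#[derive(Debug, Clone, Copy)]
enum Region {
    Named(&'static str),
    Var(u32),
}

struct Renumberer {
    next_var: u32,
}

impl Renumberer {
    // Hand out a new, unconstrained region variable.
    fn fresh(&mut self) -> Region {
        let var = Region::Var(self.next_var);
        self.next_var += 1;
        var
    }

    // Replace every region, whatever it was, with a fresh variable.
    fn renumber(&mut self, regions: &mut [Region]) {
        for region in regions {
            *region = self.fresh();
        }
    }
}

fn main() {
    let mut regions = [Region::Named("'a"), Region::Named("'static")];
    let mut renumberer = Renumberer { next_var: 0 };
    renumberer.renumber(&mut regions);
    println!("{:?}", regions); // [Var(0), Var(1)]
}
```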
Hint correct extern constant syntax
The error message for `extern "C" { const … }` is terse, and the right syntax is hard to guess, given the unfortunate difference between the meaning of `static` in C and in Rust.
I've added a hint for the right syntax.
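For illustration, the rejected form and the hinted form (the item name and type are made up):

```rust
// Rejected: `const` items are not allowed inside an `extern` block.
// extern "C" {
//     const MAX_LEN: usize;
// }

// The hinted form: declare a foreign `static` instead.
extern "C" {
    static MAX_LEN: usize;
}
```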
rustc: Rearchitect lints to be emitted more eagerly
In preparation for incremental compilation this commit refactors the lint
handling infrastructure in the compiler to be more "eager" and overall more
incremental-friendly. Many passes of the compiler can emit lints at various
points but before this commit all lints were buffered in a table to be emitted
at the very end of compilation. This commit changes these lints to be emitted
immediately during compilation using pre-calculated lint level-related data
structures.
Linting today is split into two phases, one set of "early" lints run on the
`syntax::ast` and a "late" set of lints run on the HIR. This commit moves the
"early" lints to running as late as possible in compilation, just before HIR
lowering. This notably means that we're catching resolve-related lints just
before HIR lowering. The early linting remains a pass very similar to how it was
before, maintaining context of the current lint level as it walks the tree.
Post-HIR, however, linting is structured as a method on the `TyCtxt` which
transitively executes a query to calculate lint levels. Each request to lint on
a `TyCtxt` will query the entire crate's 'lint level data structure' and then go
from there about whether the lint should be emitted or not.
The query depends on the entire HIR crate but should be very quick to calculate
(just a quick walk of the HIR) and the red-green system should notice that the
lint level data structure rarely changes, and should hopefully preserve
incrementality.
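As a standalone sketch of that structure (toy types, not rustc's actual query API): the lint-level data is computed once for the crate, and each attempt to emit a lint consults it instead of buffering the diagnostic until the end of compilation.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct NodeId(u32);

#[derive(Clone, Copy)]
enum Level {
    Allow,
    Warn,
    Deny,
}

// Built once per crate by walking the HIR and recording attribute-driven
// lint levels; analogous to the query result described above.
struct LintLevelMap {
    levels: HashMap<(NodeId, &'static str), Level>,
    default: Level,
}

impl LintLevelMap {
    fn level(&self, node: NodeId, lint: &'static str) -> Level {
        *self.levels.get(&(node, lint)).unwrap_or(&self.default)
    }

    // Emitting a lint becomes a lookup against the precomputed map.
    fn emit(&self, node: NodeId, lint: &'static str, msg: &str) {
        match self.level(node, lint) {
            Level::Allow => {}
            Level::Warn => eprintln!("warning: {}", msg),
            Level::Deny => eprintln!("error: {}", msg),
        }
    }
}

fn main() {
    let mut levels = HashMap::new();
    levels.insert((NodeId(1), "unused_variables"), Level::Allow);
    let map = LintLevelMap { levels, default: Level::Warn };
    map.emit(NodeId(1), "unused_variables", "unused variable `x`"); // allowed, stays silent
    map.emit(NodeId(2), "unused_variables", "unused variable `y`"); // warns immediately
}
```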
Overall this resulted in a pretty big change to the test suite now that lints
are emitted much earlier in compilation (on-demand vs only at the end). This in
turn necessitated the addition of many `#![allow(warnings)]` directives
throughout the compile-fail test suite and a number of updates to the UI test
suite.
Closes https://github.com/rust-lang/rust/issues/42511
Fixed mutable vars being marked used when they weren't
#### NB: bootstrapping is slow on my machine, even with `keep-stage` - fixes for occurrences in the current codebase are <s>in the pipeline</s> done. This PR is being put up for review of the fix for the issue.
Fixes #43526, Fixes #30280, Fixes #25049
### Issue
Whenever the compiler detected a mutable deref being used mutably, it marked an associated value as being used mutably as well. In the case of dereferencing local variables which were mutable references, this incorrectly marked the reference itself as being used mutably, instead of its contents, with the consequence that the following code emits no warnings:
```rust
fn do_thing<T>(mut arg: &mut T) {
    // ... don't touch arg - just deref it to access the T
}
```
### Fix
Make dereferences no longer count as a mutable use, but only when the thing being dereferenced is a borrow held in a local variable.
#### Why not on things other than local variables?
* Whenever you capture a variable in a closure, it gets turned into a hidden reference - when you use it in the closure, it gets dereferenced. If the closure uses the variable mutably, that is actually a mutable use of the value being dereferenced, so it has to be counted.
* If you deref a mutable `Box` to access the contents mutably, you are using the `Box` mutably - so it has to be counted.
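For concreteness, standalone examples of those two cases, where the dereference really is a mutable use and the `mut` binding stays justified:

```rust
fn main() {
    // Closure capture: `x` is captured by a hidden mutable reference; using it
    // inside the closure dereferences that reference and mutates `x` itself.
    let mut x = 0;
    {
        let mut bump = || x += 1;
        bump();
    }
    assert_eq!(x, 1);

    // Box: writing through the box uses the box itself mutably.
    let mut boxed = Box::new(0);
    *boxed += 1;
    assert_eq!(*boxed, 1);
}
```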
Avoid calling the column!() macro in panic
Closes #43057
This "fix" adds a new macro called `__rust_unstable_column` and to use it instead of the `column` macro inside panic. The new macro can be shadowed as well as `column` can, but its very likely that there is no code that does this in practice.
There is no real way to make "unstable" macros that are usable by stable macros, so we do the next best thing and prefix the macro with `__rust_unstable` to make sure people recognize it is unstable.
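A small illustration of the shadowing hazard being sidestepped (user code, not the standard library's actual macro definitions):

```rust
// A user-defined `column!` shadows the built-in one for code in its scope.
// Before this change, `panic!`'s internal use of `column!()` could resolve to
// a definition like this one; with `__rust_unstable_column!` an accidental
// collision is far less likely.
macro_rules! column {
    () => {
        "not a column number"
    };
}

fn main() {
    let c = column!(); // resolves to the macro above
    println!("{}", c);
}
```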
r? @alexcrichton
Point at return type always when type mismatch against it
Before this, the diagnostic errors would only point at the return type
when changing it would be a possible solution to a type error. Add a label to the return type, without a suggestion to change it, in order to make the source of the expected type obvious.
Follow-up to #42850; fixes #25133, fixes #41897.
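A minimal snippet (made up for illustration, and intentionally failing to compile) of the situation the new label is for:

```rust
fn next_id() -> usize {
    // error[E0308]: mismatched types.
    // The `usize` return type is now labeled as the origin of the expected
    // type, even when changing it is not the suggested fix.
    "not an id".to_string()
}
```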
It's more pleasing to use the inner-attribute syntax (`#!` rather than
`#`) in the error message, as that is how `feature` attributes in
particular will be declared (as they apply to the entire crate).
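For example, the suggested form as it applies to the whole crate (the feature name is a placeholder):

```rust
// At the top of the crate root:
#![feature(some_unstable_feature)]
```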
#[must_use] for functions
This implements [RFC 1940](https://github.com/rust-lang/rfcs/pull/1940).
The RFC and discussion thereof seem to suggest that tagging `PartialEq::eq` and friends as `#[must_use]` would automatically lint for unused comparisons, but it doesn't work out that way (at least the way I've implemented it): unused `.eq` method calls get linted, but not `==` expressions. (The lint operates on the HIR, which sees binary operations as their own thing, even if they ultimately just call `.eq` _&c._.)
What do _you_ think??
Resolves #43302.
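For illustration, how the attribute behaves on a free function under this implementation (the function is made up):

```rust
#[must_use]
fn checksum(data: &[u8]) -> u64 {
    data.iter().map(|&b| b as u64).sum()
}

fn main() {
    let data = [1, 2, 3];
    checksum(&data);         // warned: unused return value that must be used
    let _ = checksum(&data); // explicitly discarding the value silences the lint
    1u8.eq(&2);              // an `.eq` call would be linted if `eq` were #[must_use]...
    let _ = 1u8 == 2;        // ...but a `==` expression would not be, as noted above
}
```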
Add an overflow check in the Iter::next() impl for Range<_> to help with vectorization.
This helps with vectorization in some cases, such as `(0..u16::MAX).collect::<Vec<u16>>()`,
as LLVM is able to change the loop condition to use equality instead of less-than, and it should help with #43124. (See also my [last comment](https://github.com/rust-lang/rust/issues/43124#issuecomment-319098625) there.) This PR makes `collect` on ranges of `u16`, `i16`, `i8`, and `u8` **significantly** faster (at least on x86-64 and i686), and pretty close to, though not quite matching, a [manual unsafe implementation](https://is.gd/nkoecB). 32-bit values (and 64-bit values on x86-64) were already vectorized without this change, and they still are. This PR doesn't seem to help with 64-bit values on i686, as they still don't vectorize well compared to doing a manual loop.
I'm a bit unsure whether this was the best way of implementing it; I tried to make as few changes as possible and avoided changing the `Step` trait and the behavior of `RangeFrom` (I'll leave wider changes to the trait for others, e.g. #43127, to discuss). I tried simply changing the comparison to `self.start != self.end`, though that made the compiler segfault when compiling stage0, so I went with this method instead for now.
As for `next_back()`, reverse ranges seem to optimise properly already.
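A self-contained sketch of the shape of the change (this is not the actual libcore code, which goes through the `Step` machinery; the type and numbers are illustrative):

```rust
struct SimpleRange {
    start: u16,
    end: u16,
}

impl Iterator for SimpleRange {
    type Item = u16;

    fn next(&mut self) -> Option<u16> {
        if self.start < self.end {
            // The checked add can never fail here, but making the overflow
            // check explicit lets LLVM turn the loop condition into an
            // equality test, which enables vectorization for narrow types.
            let next = self.start.checked_add(1)?;
            let current = self.start;
            self.start = next;
            Some(current)
        } else {
            None
        }
    }
}

fn main() {
    let v: Vec<u16> = SimpleRange { start: 0, end: 1000 }.collect();
    assert_eq!(v.len(), 1000);
}
```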
Update the list of confusable characters
Also reorders and spaces the list to make it clearer for future updates
and to bring it closer to the original list.
This was tedious but somewhat rewarding!
Thanks @est31 for the instructions.
Fixes #43629.
r? @est31
make `for_all_relevant_impls` O(1) again
A change in #41911 made `for_all_relevant_impls` do a linear scan over
all impls instead of using a HashMap. Use a HashMap again to avoid
quadratic blowup when there is a large number of structs with impls.
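A standalone sketch of the data-structure change (toy types, not the actual rustc code): impls are bucketed by a simplified form of their self type, so finding the impls relevant to one type no longer scans every impl in the crate.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum SimplifiedType {
    Uint,
    Struct(u32), // a DefId-like index identifying the struct
}

struct ImplSet {
    // Impls whose self type cannot be simplified (e.g. blanket impls):
    // these always have to be visited.
    blanket_impls: Vec<u32>,
    // Impls bucketed by simplified self type: looked up in O(1).
    non_blanket_impls: HashMap<SimplifiedType, Vec<u32>>,
}

impl ImplSet {
    fn for_all_relevant_impls<F: FnMut(u32)>(&self, self_ty: SimplifiedType, mut f: F) {
        for &imp in &self.blanket_impls {
            f(imp);
        }
        if let Some(impls) = self.non_blanket_impls.get(&self_ty) {
            for &imp in impls {
                f(imp);
            }
        }
    }
}

fn main() {
    let mut non_blanket_impls = HashMap::new();
    non_blanket_impls.insert(SimplifiedType::Uint, vec![200]);
    non_blanket_impls.insert(SimplifiedType::Struct(7), vec![100, 101]);
    let set = ImplSet { blanket_impls: vec![1], non_blanket_impls };
    set.for_all_relevant_impls(SimplifiedType::Struct(7), |imp| println!("impl {}", imp));
}
```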
I think this fixes #43141 completely, but I want better measurements in
order to be sure. As a perf patch, please don't roll this up.
r? @eddyb
beta-nominating because regression
[libstd_unicode] Change UNICODE_VERSION to use u32
Looks like there's no strong reason to keep these values at `u64`.
With the current plans for the Unicode Standard, `u8` should be enough for the next 200 years. To stay on the safe side, I'm using `u16` here. I don't see a reason to go with anything machine-dependent/more-efficient.
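For illustration, the kind of definition in question, using `u32` as in the title (the struct layout and the version numbers are placeholders, not the actual libstd_unicode source):

```rust
pub struct UnicodeVersion {
    pub major: u32,
    pub minor: u32,
    pub micro: u32,
}

// The Unicode version the generated tables correspond to (numbers are placeholders).
pub const UNICODE_VERSION: UnicodeVersion = UnicodeVersion {
    major: 10,
    minor: 0,
    micro: 0,
};
```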