This is the same idea as #92533, but for `AssocItem` instead
of `VariantDef`/`FieldDef`.
With this change, we no longer have any uses of `#[stable_hasher(project(...))]`.
In two cases where this ordering was used, I've replaced the sort key with one
that does not include the `DefId`. I'm not sure this is correct
in terms of our goals from #90317, or otherwise.
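As a rough illustration of that sorting change (a hedged sketch with hypothetical stand-in types, not the actual compiler code), the idea is to sort by a key that does not depend on `DefId` numbering:

```rust
// Hypothetical stand-in for an item carrying a DefId-like numeric id.
#[derive(Debug)]
struct Item {
    def_id: u32,
    name: &'static str,
}

fn main() {
    let mut items = vec![
        Item { def_id: 7, name: "len" },
        Item { def_id: 3, name: "iter" },
    ];
    // Before: sorting by the DefId, whose numbering is not stable.
    // items.sort_by_key(|item| item.def_id);
    // After: a key that does not include the DefId.
    items.sort_by_key(|item| item.name);
    println!("{items:?}");
}
```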
Pretty printer algorithm revamp step 2
This PR follows #92923 as a second chunk of modernizations backported from https://github.com/dtolnay/prettyplease into rustc_ast_pretty.
I've broken this up into atomic commits that hopefully are sensible in isolation. At every commit, the pretty printer is compilable and has runtime behavior that is identical to before and after the PR. None of the refactoring so far changes behavior.
The general theme of this chunk of commits is: the logic in the old pretty printer is doing some very basic things (pushing and popping tokens on a ring buffer) but expressed in a too-low-level way that I found makes it quite complicated/subtle to reason about. There are a number of obvious invariants that are "almost true" -- things like `self.left == self.buf.offset` and `self.right == self.buf.offset + self.buf.data.len()` and `self.right_total == self.left_total + self.buf.data.sum()`. The reason these things are "almost true" is the implementation tends to put updating one side of the invariant unreasonably far apart from updating the other side, leaving the invariant broken while unrelated stuff happens in between. The following code from master is an example of this:
e5e2b0be26/compiler/rustc_ast_pretty/src/pp.rs (L314-L317)
In this code the `advance_right` is reserving an entry into which to write a next token on the right side of the ring buffer, the `check_stack` is doing something totally unrelated to the right boundary of the ring buffer, and the `scan_push` is actually writing the token we previously reserved space for. Much of what this PR is doing is rearranging code to shrink the amount of stuff in between when an invariant is broken to when it is restored, until the whole thing can be factored out into one indivisible method call on the RingBuffer type.
The end state of the PR is that we can entirely eliminate `self.left` (because it's now just equal to `self.buf.offset` always) and `self.right` (because it's equal to `self.buf.offset + self.buf.data.len()` always) and the whole `Token::Eof` state which used to be the value of tokens that have been reserved space for but not yet written.
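As a rough sketch of that end state (simplified, not the actual `rustc_ast_pretty` type), the ring buffer can hand out absolute indices, so the old `left`/`right` cursors are always derivable from `offset` and `data.len()`, and pushing a token reserves and writes its slot in one indivisible step:

```rust
use std::collections::VecDeque;

struct RingBuffer<T> {
    data: VecDeque<T>,
    // Absolute index of the element currently at the front of `data`.
    offset: usize,
}

impl<T> RingBuffer<T> {
    fn new() -> Self {
        RingBuffer { data: VecDeque::new(), offset: 0 }
    }

    // Reserving and writing a right-hand entry happens in one step, so there
    // is no window where a slot exists but holds a placeholder (the old
    // `Token::Eof` state).
    fn push(&mut self, value: T) -> usize {
        let index = self.offset + self.data.len();
        self.data.push_back(value);
        index
    }

    // Popping on the left advances `offset`, which is what the old
    // `self.left` cursor tracked separately.
    fn pop_first(&mut self) -> Option<T> {
        let first = self.data.pop_front()?;
        self.offset += 1;
        Some(first)
    }
}
```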
I found the pretty printer implementation hard to reason about without these changes, and I wasn't able to confidently introduce improvements like trailing commas in `prettyplease` until after this refactor. The logic here is 43 years old at this point (Graydon translated it as directly as possible from the 1979 pretty printing paper), and while there are advantages to following the paper as closely as possible, in `prettyplease` I decided that if we're going to adapt the algorithm to work better for Rust syntax, it was worthwhile making it easier to follow than the original.
mangling_v0: Skip extern blocks during mangling
There's no need to include the dummy `Nt` in the symbol name; items in extern blocks belong to their parent modules for all purposes except inheriting the ABI and attributes.
Follow up to https://github.com/rust-lang/rust/pull/92032
(There's also a drive-by fix to the `rust-demangler` tool's tests, which don't run on CI; I initially attempted to use them for testing this PR.)
Remove some unused ordering derivations based on `DefId`
Like #93018, this removes some unused/unneeded ordering derivations as part of ongoing work on #90317. Here, these changes are aimed at making https://github.com/rust-lang/rust/pull/90749 easier to review, test, and merge.
r? `@cjgillot`
Move expr- and item-related pretty printing functions to modules
Currently *compiler/rustc_ast_pretty/src/pprust/state.rs* is 2976 lines on master. The `tidy` limit is 3000, which is blocking #92243.
This PR adds a `mod expr;` and `mod item;` to move logic related to those AST nodes out of the single huge file.
Use iterator instead of recursion in `codegen_place`
This PR fixes the FIXME in `codegen_place` about using an iterator instead of recursion when processing the `projection` field of `mir::PlaceRef`. At the same time, it also reduces rightward drift in the code.
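A hedged sketch of the general shape of that transformation, with toy types standing in for `mir::PlaceRef` and the codegen value (not the real signatures):

```rust
#[derive(Clone, Copy)]
enum ProjectionElem {
    Deref,
    Field(usize),
}

// Toy stand-in for codegen of a place: start from the base local's value and
// apply each projection element in turn, instead of recursing on a place with
// one fewer projection element each time.
fn codegen_place_iterative(base: i64, projection: &[ProjectionElem]) -> i64 {
    projection.iter().fold(base, |value, elem| match elem {
        ProjectionElem::Deref => -value,               // placeholder operation
        ProjectionElem::Field(i) => value + *i as i64, // placeholder operation
    })
}

fn main() {
    let result = codegen_place_iterative(10, &[ProjectionElem::Field(2), ProjectionElem::Deref]);
    assert_eq!(result, -12);
}
```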
Formally implement let chains
## Let chains
My longest and hardest contribution since #64010.
Thanks to `@Centril` for creating the RFC and special thanks to `@matthewjasper` for helping me since the beginning of this journey. In fact, `@matthewjasper` did much of the complicated MIR stuff, so it's true to say that this feature wouldn't be possible without him. Thanks again, `@matthewjasper`!
With the changes proposed in this PR, it will be possible to chain let expressions alongside local variable declarations or ordinary conditional expressions. In other words, do much of what the `if_chain` crate already does.
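For example (an illustrative snippet; at the time of this PR it needs `#![feature(let_chains)]` at the crate root on a nightly compiler):

```rust
fn first_char_if_long(input: Option<&str>) -> Option<char> {
    // Two `let` bindings and an ordinary boolean condition chained with `&&`
    // in a single `if`, which previously required nesting or `if_chain!`.
    if let Some(s) = input
        && s.len() > 3
        && let Some(c) = s.chars().next()
    {
        Some(c)
    } else {
        None
    }
}

fn main() {
    assert_eq!(first_char_if_long(Some("chain")), Some('c'));
    assert_eq!(first_char_if_long(Some("no")), None);
    assert_eq!(first_char_if_long(None), None);
}
```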
## Other considerations
* `if let guard` and `let ... else` features need special care and should be handled in a following PR.
* Irrefutable patterns are allowed within a let chain context
* ~~Three Clippy lints were already converted to start dogfooding and help detect possible corner cases~~
cc #53667
Fixes #92987
During evaluation of an auto trait predicate, we may encounter a cycle.
This causes us to store the evaluation result in a special 'provisional
cache'. If we later end up determining that the type can legitimately
implement the auto trait despite the cycle, we remove the entry from
the provisional cache, and insert it into the evaluation cache.
Additionally, trait evaluation creates a special anonymous `DepNode`.
All queries invoked during the predicate evaluation are added as
outgoing dependency edges from the `DepNode`. This `DepNode` is then
stored in the evaluation cache - if a different query ends up reading
from the cache entry, it will also perform a read of the stored
`DepNode`. As a result, the cached evaluation will still end up
(transitively) incurring all of the same dependencies that it would
if it actually performed the uncached evaluation (e.g. a call to
`type_of` to determine constituent types).
Previously, we did not correctly handle the interaction between the
provisional cache and the created `DepNode`. Storing an evaluation
result in the provisional cache would cause us to lose the `DepNode`
created during the evaluation. If we later moved the entry from the
provisional cache to the evaluation cache, we would use the `DepNode`
associated with the evaluation that caused us to 'complete' the cycle,
not the evaluation where we first discovered the cycle. As a result,
future reads from the evaluation cache would miss some incremental
compilation dependencies that would have otherwise been added if the
evaluation was *not* cached.
Under the right circumstances, this could lead to us trying to force
a query with a no-longer-existing `DefPathHash`, since we were missing
the (red) dependency edge that would have caused us to bail out before
attempting forcing.
This commit makes the provisional cache store the `DepNode` created
during the provisional evaluation. When we move an entry from the
provisional cache to the evaluation cache, we create a *new* `DepNode`
that has dependencies going to *both* of the evaluation `DepNodes` we
have available. This ensures that cached reads will incur all of
the necessary dependency edges.
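A rough sketch of the shape of the fix, using hypothetical stand-in types rather than the real trait-selection and dep-graph APIs:

```rust
// Hypothetical stand-ins; the real types live in rustc's dep-graph and
// trait-selection code.
#[derive(Clone, Copy)]
struct DepNodeIndex(u32);

struct ProvisionalEntry<R> {
    result: R,
    // New: remember the DepNode created while this result sat in the
    // provisional cache.
    dep_node: DepNodeIndex,
}

// Promote a provisional entry into the evaluation cache. A *new* DepNode is
// created with edges to both the provisional evaluation's node and the node
// of the evaluation that completed the cycle, so later cache hits replay
// every dependency either evaluation produced.
fn promote<R>(
    entry: ProvisionalEntry<R>,
    completing_dep_node: DepNodeIndex,
    combine_into_new_node: impl FnOnce(DepNodeIndex, DepNodeIndex) -> DepNodeIndex,
    insert_into_eval_cache: impl FnOnce(R, DepNodeIndex),
) {
    let merged = combine_into_new_node(entry.dep_node, completing_dep_node);
    insert_into_eval_cache(entry.result, merged);
}
```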
We previously weren't tracking partial re-inits while being too
aggressive around partial drops. With this change, we simply ignore
partial drops, which is the safer, more conservative choice.
This changes drop range analysis to handle uninhabited return types such
as `!`. Since calls to these functions do not return, we model
them as ending in an infinite loop.
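A small illustration of the kind of call being modeled (the `bail` function below is just an example of a `!`-returning function):

```rust
fn bail(msg: &str) -> ! {
    panic!("{msg}");
}

fn f(flag: bool) -> u32 {
    if flag {
        // `bail` has type `!`, so control never continues past this call;
        // the analysis described above treats this arm like an infinite loop.
        bail("unsupported");
    }
    42
}

fn main() {
    assert_eq!(f(false), 42);
}
```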
This reduces the amount of work done, especially in later iterations,
by only processing nodes whose predecessors changed in the previous
iteration, or earlier in the current iteration. This also has the side
effect of completely ignoring all unreachable nodes.
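A minimal worklist sketch of that iteration strategy (illustrative only, not the actual drop-range analysis code):

```rust
use std::collections::{HashSet, VecDeque};

// `join(pred, node)` merges `pred`'s state into `node` and reports whether
// `node`'s state changed; only changed nodes' successors get revisited, and
// nodes unreachable from `entry` are never processed at all.
fn propagate(succs: &[Vec<usize>], entry: usize, mut join: impl FnMut(usize, usize) -> bool) {
    let mut queue = VecDeque::from([entry]);
    let mut queued: HashSet<usize> = HashSet::from([entry]);
    while let Some(node) = queue.pop_front() {
        queued.remove(&node);
        for &succ in &succs[node] {
            if join(node, succ) && queued.insert(succ) {
                queue.push_back(succ);
            }
        }
    }
}

fn main() {
    // Toy graph: 0 -> 1 -> 2, with node 3 unreachable (never visited).
    let succs = vec![vec![1], vec![2], vec![], vec![0]];
    let mut visits = vec![0u32; succs.len()];
    propagate(&succs, 0, |_pred, node| {
        visits[node] += 1;
        visits[node] == 1 // report "changed" only on the first visit
    });
    assert_eq!(visits, vec![0, 1, 1, 0]);
}
```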