Replace `NestedVisitorMap` with generic `NestedFilter`
This is an attempt to make the `intravisit::Visitor` API simpler and "more const" with regard to nested visiting.
With this change, `intravisit::Visitor` does not visit nested things by default; to opt in, specify `type NestedFilter = nested_filter::OnlyBodies` (or `All`). `nested_visit_map` now returns `Self::Map` instead of `NestedVisitorMap<Self::Map>`, and it panics by default (the panic is unreachable if `type NestedFilter` is left at its default).
One somewhat tricky thing here is that `nested_filter::{OnlyBodies, All}` live in `rustc_middle` so that they may have `type Map = map::Map` and so that `impl Visitor`s never need to specify `type Map`, which has a default of `Self::NestedFilter::Map`.
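A minimal sketch of what an `impl Visitor` looks like after this change (the visitor type and its `tcx` field are made up for illustration; the paths assume the usual rustc-internal crates):

```rust
// Sketch only: compiles inside rustc (or with rustc_private), not standalone.
use rustc_hir::intravisit::Visitor;
use rustc_middle::hir::nested_filter;
use rustc_middle::ty::TyCtxt;

struct MyVisitor<'tcx> {
    tcx: TyCtxt<'tcx>,
}

impl<'tcx> Visitor<'tcx> for MyVisitor<'tcx> {
    // Opt in to visiting nested bodies; leaving this out keeps the new
    // "don't visit nested things" default.
    type NestedFilter = nested_filter::OnlyBodies;

    // `Self::Map` defaults to `Self::NestedFilter::Map`, so no `type Map`
    // declaration is needed.
    fn nested_visit_map(&mut self) -> Self::Map {
        self.tcx.hir()
    }
}
```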
Remove deprecated LLVM-style inline assembly
The `llvm_asm!` macro was deprecated back in #87590 (1.56.0), with the intention of removing it once `asm!` was stabilized, which already happened in #91728 (1.59.0). Now it is time to remove `llvm_asm!` to avoid continued maintenance cost.
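As a rough illustration of the migration (a hypothetical snippet, x86_64 only, not taken from this PR):

```rust
// Hypothetical example: code that previously used `llvm_asm!` with LLVM-style
// constraint strings is rewritten with the stabilized `asm!` macro.
use std::arch::asm;

fn add_one(mut x: u64) -> u64 {
    unsafe {
        // Operands are declared explicitly instead of via constraint strings
        // such as "+r".
        asm!("add {x}, 1", x = inout(reg) x);
    }
    x
}

fn main() {
    assert_eq!(add_one(41), 42);
}
```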
Closes #70173.
Closes #92794.
Closes #87612.
Closes #82065.
cc `@rust-lang/wg-inline-asm`
r? `@Amanieu`
[code coverage] Fix missing dead code in modules that are never called
The issue here is that the logic used to determine which CGU to put the dead function stubs in doesn't handle cases where a module is never assigned to a CGU (which is what happens when all of the code in the module is dead).
The partitioning logic also caused issues in #85461 where inline functions were duplicated into multiple CGUs resulting in duplicate symbols.
This commit fixes the issue by removing the complex logic used to assign dead code stubs to CGUs and replacing it with a much simpler model: we pick one CGU to hold all the dead code stubs. We pick a CGU that has exported items, which increases the likelihood that the linker won't throw away our dead functions, and we pick the smallest of those to minimize the impact on compilation times for crates with very large CGUs.
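A simplified sketch of that selection strategy (hypothetical types, not the actual rustc code):

```rust
// Stand-in type; the real code works on rustc's codegen units.
struct CodegenUnit {
    name: String,
    has_exported_items: bool,
    size_estimate: usize,
}

// Pick the CGU that will host all dead-code stubs: prefer CGUs with exported
// items (less likely to be dropped by the linker), and among those take the
// smallest to limit the compile-time impact.
fn pick_dead_code_cgu(cgus: &[CodegenUnit]) -> Option<&CodegenUnit> {
    cgus.iter()
        .filter(|cgu| cgu.has_exported_items)
        .min_by_key(|cgu| cgu.size_estimate)
}

fn main() {
    let cgus = vec![
        CodegenUnit { name: "a".into(), has_exported_items: true, size_estimate: 500 },
        CodegenUnit { name: "b".into(), has_exported_items: true, size_estimate: 120 },
        CodegenUnit { name: "c".into(), has_exported_items: false, size_estimate: 10 },
    ];
    assert_eq!(pick_dead_code_cgu(&cgus).map(|c| c.name.as_str()), Some("b"));
}
```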
Fixes #91661, fixes #86177, fixes #85718, fixes #79622
r? `@tmandry`
cc `@richkadel`
This PR is not urgent so please don't let it interrupt your holidays! 🎄🎁
Normalize generator-local types with unevaluated constants
Normalize generator-interior types in addition to (i.e. instead of just) erasing regions, since sometimes we collect types with unevaluated const exprs.
Fixes #84737, fixes #88171, fixes #92091, fixes #92634
Probably also fixes #73114, but that one has no code I could test. It looks like it's the same issue, though.
Delay remaining `span_bug`s in drop elaboration
This follows changes from #67967 and converts remaining `span_bug`s into
delayed bugs, since for const items drop elaboration might be executed
on a MIR which failed borrowck.
Fixes#81708.
Fixes#91816.
Remove `NullOp::Box`
Follow up of #89030 and MCP rust-lang/compiler-team#460.
~1 month later nothing seems to be broken, apart from a small regression that #89332 (1aac85bb716c09304b313d69d30d74fe7e8e1a8e) shows could be regained by removing the diverging path, so it should be safe to continue and remove `NullOp::Box` completely.
r? `@jonas-schievink`
`@rustbot` label T-compiler
Continue supporting -Z instrument-coverage for compatibility for now,
but show a deprecation warning for it.
Update uses and documentation to use the -C option.
Move the documentation from the unstable book to stable rustc
documentation.
CTFE eval_fn_call: use FnAbi to determine argument skipping and compatibility
This makes use of the `FnAbi` type in CTFE/Miri, which `@eddyb` has been saying for years is what we should do.^^ `FnAbi` is used to
- determine which arguments to skip (rather than the previous heuristic of skipping ZST arguments with the Rust ABI)
- impose further restrictions on whether caller and callee are consistent in how a given argument is passed
I was hoping it would also simplify the code, but that is not the case -- the previous type compatibility checks are still required (AFAIK); only the ZST skipping is gone, and that took barely any code.

We also need some hacks because `FnAbi` assumes a certain way of implementing `caller_location` (by passing extra arguments), but Miri can just read the caller location from the call stack, so it doesn't need those arguments. (The fact that every backend has to separately implement support for these arguments seems suboptimal -- looks like this might have been better implemented on the MIR level.) To avoid having to implement those unnecessary arguments in Miri, we just compute *whether* the argument is present on the caller/callee side, but don't actually pass that argument around.
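A simplified, self-contained sketch of the argument-skipping part (stand-in types, not the real `FnAbi`/Miri code):

```rust
// Stand-in types only; the real code uses rustc's `FnAbi` and `PassMode`.
#[derive(PartialEq)]
enum PassMode {
    Ignore, // the ABI says this argument is not passed at all (e.g. a ZST)
    Direct,
    Indirect,
}

struct ArgAbi {
    mode: PassMode,
}

// Arguments the interpreter actually passes: everything the ABI does not mark
// as `Ignore`. This replaces the old heuristic of "skip ZST arguments when
// the Rust ABI is used".
fn passed_args(args: &[ArgAbi]) -> impl Iterator<Item = &ArgAbi> {
    args.iter().filter(|arg| arg.mode != PassMode::Ignore)
}

fn main() {
    let args = [
        ArgAbi { mode: PassMode::Direct },
        ArgAbi { mode: PassMode::Ignore },
        ArgAbi { mode: PassMode::Indirect },
    ];
    assert_eq!(passed_args(&args).count(), 2);
}
```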
I have no idea if this looks the way `@eddyb` thinks it should look... but it makes Miri's test suite pass. ;)
One of rustc's tests fails unfortunately (`ui/const-generics/issues/issue-67739.rs`), some const generic code that is evaluated too early -- I think that should raise `TooGeneric` but instead it ICEs. My assumption is this is some FnAbi code that has not been properly tested on polymorphic code, but it might also be me calling that FnAbi code the wrong way.
r? `@oli-obk` `@eddyb`
Fixes https://github.com/rust-lang/rust/issues/56166
Miri PR at https://github.com/rust-lang/miri/pull/1928
Store a `DefId` instead of an `AdtDef` in `AggregateKind::Adt`
The `AggregateKind` enum ends up in the final mir `Body`. Currently,
any changes to `AdtDef` (regardless of how significant they are)
will legitimately cause the overall result of `optimized_mir` to change,
invalidating any codegen re-use involving that mir.
This will get worse once we start hashing the `Span` inside `FieldDef`
(which is itself contained in `AdtDef`).
To try to reduce these kinds of invalidations, this commit changes
`AggregateKind::Adt` to store just the `DefId`, instead of the full
`AdtDef`. This allows the result of `optimized_mir` to be unchanged
if the `AdtDef` changes in a way that doesn't actually affect any
of the MIR we build.
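A self-contained illustration of the underlying idea (stand-in types, not the real MIR or query system): the fingerprint of a structure that stores only an ID is unaffected by edits to the definition the ID points to.

```rust
// Stand-in types; real incremental compilation uses stable hashing, not
// `DefaultHasher`, but the invalidation argument is the same.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Hash, Clone, Copy)]
struct DefId(u32);

#[derive(Hash)]
struct AdtDef {
    def_id: DefId,
    field_span: (u32, u32), // e.g. a span that shifts on unrelated edits
}

#[derive(Hash)]
struct AggregateById(DefId); // what the PR switches to

#[derive(Hash)]
struct AggregateByDef(AdtDef); // what it switches away from (simplified)

fn fingerprint<T: Hash>(value: &T) -> u64 {
    let mut h = DefaultHasher::new();
    value.hash(&mut h);
    h.finish()
}

fn main() {
    let before = AdtDef { def_id: DefId(7), field_span: (10, 20) };
    let after = AdtDef { def_id: DefId(7), field_span: (11, 21) }; // span moved

    // Storing only the ID: the fingerprint is unaffected by the span change.
    assert_eq!(
        fingerprint(&AggregateById(before.def_id)),
        fingerprint(&AggregateById(after.def_id))
    );
    // Embedding the full definition: the fingerprint changes.
    assert_ne!(fingerprint(&AggregateByDef(before)), fingerprint(&AggregateByDef(after)));
}
```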
Move generator check earlier in inlining.
Inlining into a generator may create references to other generators. For instance, inlining `Pin::<&mut from_generator::GenFuture<[generator1]>>::new_unchecked` into `generator2`. This cross-reference can then create cycles when computing inlining for `generator1`.
In order to avoid this kind of surprise, we forbid all inlining into generators and rely on LLVM to do the right thing. The existing `remove-zst-query-cycle` test already ICEs in inline-mir mode, so we use it as the test case.
Split from #91743.
Previously some code paths would fail to evaluate the rvalue while
incorrectly indicating success with `Ok`. As a result, the previous value
of the lhs could have been incorrectly const-propagated.
This optimization pass previously made excessive assumptions as to the nature of
the blocks being optimized. We remove those assumptions and make sure to
rigorously justify all changes that are made to the MIR. Details can be found
in the file.
Remove `in_band_lifetimes` from `rustc_mir_transform`
Like #91580, this was inspired by the conversation in #44524 about possibly removing the feature from the compiler. This crate is a heavy `'tcx` user, so is a nice case study.
r? `@petrochenkov`
Three interesting ones:
This one had the `'tcx` declared on the function, despite the trait taking a `'tcx`:
```diff
-impl Visitor<'_> for UsedLocals {
+impl<'tcx> Visitor<'tcx> for UsedLocals {
fn visit_statement(&mut self, statement: &Statement<'tcx>, location: Location) {
```
This one used in-band for one lifetime and an underscore for the other:
```diff
-pub fn remove_dead_blocks(tcx: TyCtxt<'tcx>, body: &mut Body<'_>) {
+pub fn remove_dead_blocks<'tcx>(tcx: TyCtxt<'tcx>, body: &mut Body<'tcx>) {
```
A spurious name, since there's no single-use-lifetime warning:
```diff
-pub fn run_passes(tcx: TyCtxt<'tcx>, body: &'mir mut Body<'tcx>, passes: &[&dyn MirPass<'tcx>]) {
+pub fn run_passes<'tcx>(tcx: TyCtxt<'tcx>, body: &mut Body<'tcx>, passes: &[&dyn MirPass<'tcx>]) {
```
Address some FIXMEs left over from #91475
This shouldn't change behavior, only clarify what we're currently doing. I filed #91576 to see if the treatment of generator drop shims is intentional.
cc #91475
Add a MIR pass manager (Taylor's Version)
The final draft of #91386 and #77665.
While the compile-time constraints in #91386 are cool, I decided on a more minimal approach for now. I want to explore phase constraints and maybe relative-ordering constraints in the future, though. This should preserve existing behavior **exactly** (please let me know if it doesn't) while making the following changes to the way we organize things today:
- Each `MirPhase` now corresponds to a single MIR pass. `run_passes` is not responsible for listing the correct MIR phase.
- `run_passes` no longer silently skips passes if the declared MIR phase is greater than or equal to the body's. This has bitten me multiple times. If you want this behavior, you can always branch on `body.phase` yourself.
- If your pass is solely to emit errors, you can use the `MirLint` interface instead, which gets a shared reference to `Body` instead of a mutable one (see the sketch after this list). By differentiating the two, I hope to make it clearer in the short term where lints belong in the pipeline. In the long term perhaps we could enforce this at compile-time?
- MIR is no longer dumped for passes that aren't enabled, or for lints.
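A minimal sketch of the `MirLint`/`MirPass` split (stand-in types; the real traits also take a `TyCtxt` and have provided methods):

```rust
// Stand-in for the MIR body; the real one is `rustc_middle::mir::Body<'tcx>`.
struct Body;

// A lint: may inspect the MIR and emit diagnostics, but cannot change it.
trait MirLint {
    fn run_lint(&self, body: &Body);
}

// A transformation pass: gets exclusive access so it can rewrite the MIR.
trait MirPass {
    fn run_pass(&self, body: &mut Body);
}

struct CheckSomething;

impl MirLint for CheckSomething {
    fn run_lint(&self, _body: &Body) {
        // inspect only; emit errors here
    }
}

fn main() {
    CheckSomething.run_lint(&Body);
}
```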
I tried to check that `-Zvalidate` still works correctly, since the MIR phase is now updated as soon as the associated pass is done, instead of at the end of all the passes in `run_passes`. However, it looks like `-Zvalidate` is broken with current nightlies anyways 😢 (it spits out a bunch of errors).
cc `@oli-obk` `@wesleywiser`
r? rust-lang/wg-mir-opt
Looks like Generator drop shims already have `post_borrowck_cleanup` run
on them. That's a bit surprising, since it means they're getting const-
and maybe borrow-checked? This merits further investigation, but for now
just preserve the status quo.
Move `#![feature(const_precise_live_drops)]` checks earlier in the pipeline
Should mitigate the issues found during MCP on #73255.
Once this is done, we should clean up the queries a bit, since I think `mir_drops_elaborated_and_const_checked` can be merged back into `mir_promoted`.
Fixes #90770.
cc `@rust-lang/wg-const-eval`
r? `@nikomatsakis` (since they reviewed #71824)
Instead we run `RemoveFalseEdges` and `RemoveUninitDrops` at the
appropriate time. The extra `SimplifyCfg` avoids visiting unreachable
blocks during `RemoveUninitDrops`.
Otherwise dataflow state will propagate along false edges and cause
things to be marked as maybe init unnecessarily. These should be
separate, since `SimplifyBranches` also makes `if true {} else {}` into
a `goto`, which means we wouldn't lint anything in the `else` block.
Previously, for enums using the `Variants::Single` layout, the variant
index was confused with the discriminant, for example in the case of
`enum E { A = 1 }`.
Use `discriminant_for_variant` to avoid the issue.
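For reference, a plain-Rust illustration of the difference (not compiler code): the single variant below has index 0 in the variant list, but its discriminant is 1.

```rust
// Using the variant index (0) as the discriminant would be wrong here.
enum E {
    A = 1,
}

fn main() {
    assert_eq!(E::A as i32, 1);
}
```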
The change is limited to iterating over indices instead of using
`basic_blocks_mut()` directly, in case the previous implementation
intentionally avoided invalidating the caches stored in the MIR body.
Remove hir::map::blocks and use FnKind instead
The principal tool is `FnLikeNode`, which is not often used and can be easily implemented using `rustc_hir::intravisit::FnKind`.
Adopt let_else across the compiler
This performs a substitution of code following the pattern:
```
let <id> = if let <pat> = ... { identity } else { ... : ! };
```
To simplify it to:
```
let <pat> = ... else { ... : ! };
```
By adopting the `let_else` feature (cc #87335).
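For instance (a hypothetical example, not a diff from this PR), the substitution turns an `if let`/`else` binding into a `let`-`else`:

```rust
// Hypothetical example; on nightly at the time this required
// `#![feature(let_else)]`, which is what the PR adopts.
fn first_char(s: &str) -> char {
    // Before:
    // let c = if let Some(c) = s.chars().next() { c } else { return '?' };

    // After:
    let Some(c) = s.chars().next() else { return '?' };
    c
}

fn main() {
    assert_eq!(first_char("abc"), 'a');
    assert_eq!(first_char(""), '?');
}
```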
The PR also updates the syn crate because the currently used version of the crate doesn't support `let_else` syntax yet.
Note: Generally I'm the person who *removes* usages of unstable features from the compiler, not adds more usages of them, but in this instance I think it hopefully helps the feature get stabilized sooner and in a better state. I have written a [comment](https://github.com/rust-lang/rust/issues/87335#issuecomment-944846205) on the tracking issue about my experience and what I feel could be improved before stabilization of `let_else`.
Index and hash HIR as part of lowering
Part of https://github.com/rust-lang/rust/pull/88186
~Based on https://github.com/rust-lang/rust/pull/88880 (see merge commit).~
Once HIR is lowered, it is later indexed by the `index_hir` query and hashed for `crate_hash`. This PR moves those post-processing steps to lowering itself. As a side objective, the HIR crate data structure is refactored as an `IndexVec<LocalDefId, Option<OwnerInfo<'hir>>>` where `OwnerInfo` stores all the relevant information for an HIR owner.
r? `@michaelwoerister`
cc `@petrochenkov`
Stabilize `const_panic`
Closes #51999
FCP completed in #89006
`@rustbot` label +A-const-eval +A-const-fn +T-lang
cc `@oli-obk` for review (not `r?`'ing as not on lang team)
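A small example of what the stabilization allows (illustrative, not taken from the PR): `panic!` with a string literal is now permitted during const evaluation, turning bad inputs into compile-time errors.

```rust
const fn checked_shift(x: u32, by: u32) -> u32 {
    if by >= 32 {
        // Stable with `const_panic`: panicking aborts const evaluation with a
        // compile-time error instead of being rejected outright.
        panic!("shift amount out of range");
    }
    x << by
}

const SHIFTED: u32 = checked_shift(1, 3);

fn main() {
    assert_eq!(SHIFTED, 8);
}
```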
Rework HIR API to make invocations of the hir_crate query harder.
`hir_crate` forces the recomputation of queries that depend on it.
This PR aims at avoiding useless invocations of `hir_crate` by making dependent code go through `tcx.hir()`.
Correct caller/callsite confusion in inliner message
`callee_body` is the MIR `Body` for `callsite.callee`, so this message basically says `"Inline {bar span} into bar"` when it should say `"Inline bar into foo"`.
Extracted out of #82280