rename LocalState::Uninitialized to Unallocated
This is to avoid confusion with `Uninit` as in `ScalarMaybeUninit`, which is very different.
r? `@oli-obk`
There are a few places where we have to construct it, though, and a few
places that are more invasive to change. To do this, we create a
constructor with a long, obvious name.
Improve `AdtDef` interning.
This commit makes `AdtDef` use `Interned`. Much of the commit is tedious
changes to introduce getter functions. The interesting changes are in
`compiler/rustc_middle/src/ty/adt.rs`.
r? `@fee1-dead`
`Layout` is another type that is sometimes interned, sometimes not, and
we always use references to refer to it so we can't take any advantage
of the uniqueness properties for hashing or equality checks.
This commit renames `Layout` as `LayoutS`, and then introduces a new
`Layout` that is a newtype around an `Interned<LayoutS>`. It also
interns more layouts than before. Previously layouts within layouts
(via the `variants` field) were never interned, but now they are. Hence
the lifetime on the new `Layout` type.
Unlike other interned types, these ones are in `rustc_target` instead of
`rustc_middle`. This reflects the existing structure of the code, which
does layout-specific stuff in `rustc_target` while `TyAndLayout` is
generic over the `Ty`, allowing the type-specific stuff to occur in
`rustc_middle`.
The commit also adds a `HashStable` impl for `Interned`, which was
needed. It hashes the contents, unlike the `Hash` impl which hashes the
pointer.
Currently some `Allocation`s are interned, some are not, and it's very
hard to tell at a use point which is which.
This commit introduces `ConstAllocation` for the known-interned ones,
which makes the division much clearer. `ConstAllocation::inner()` is
used to get the underlying `Allocation`.
In some places it's natural to use an `Allocation`, in some it's natural
to use a `ConstAllocation`, and in some places there's no clear choice.
I've tried to make things look as nice as possible, while generally
favouring `ConstAllocation`, which is the type that embodies more
information. This does require quite a few calls to `inner()`.
The commit also tweaks how `PartialOrd` works for `Interned`. The
previous code was too clever by half, building on `T: Ord` to make the
code shorter. That caused problems with deriving `PartialOrd` and `Ord`
for `ConstAllocation`, so I changed it to build on `T: PartialOrd`,
which is slightly more verbose but much more standard and avoided the
problems.
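To make the `Interned` behaviour discussed above concrete, here is a minimal sketch (simplified and hypothetical, not the rustc source) of the pointer-based `PartialEq`/`Hash` that interning makes valid, the `T: PartialOrd`-based ordering, and a contents-based stable hash:
```rs
use std::cmp::Ordering;
use std::hash::{Hash, Hasher};

struct Interned<'a, T>(&'a T);

impl<'a, T> PartialEq for Interned<'a, T> {
    fn eq(&self, other: &Self) -> bool {
        // Pointer identity suffices because interning guarantees uniqueness.
        std::ptr::eq(self.0, other.0)
    }
}

impl<'a, T> Hash for Interned<'a, T> {
    fn hash<H: Hasher>(&self, s: &mut H) {
        // Hash the pointer, not the contents.
        (self.0 as *const T as usize).hash(s)
    }
}

// Ordering builds on `T: PartialOrd` and compares contents, with pointer
// equality as a cheap fast path.
impl<'a, T: PartialOrd> PartialOrd for Interned<'a, T> {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        if std::ptr::eq(self.0, other.0) {
            Some(Ordering::Equal)
        } else {
            self.0.partial_cmp(other.0)
        }
    }
}

// A stable (cross-session) hash, by contrast, must look at the contents.
fn stable_hash<T: Hash, H: Hasher>(i: &Interned<'_, T>, s: &mut H) {
    i.0.hash(s)
}
```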
Miri/CTFE: properly treat overflow in (signed) division/rem as UB
To my surprise, it looks like LLVM treats overflow of signed div/rem as UB. From what I can tell, MIR `Div`/`Rem` directly lowers to the corresponding LLVM operation, so to make that correct we also have to consider these overflows UB in the CTFE/Miri interpreter engine.
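As a standalone illustration (a sketch, not the interpreter code): signed integer division overflows only for `MIN / -1`, and that is exactly the case that must now be reported as UB rather than wrapped:
```rs
fn ctfe_like_signed_div(l: i32, r: i32) -> Result<i32, &'static str> {
    if r == 0 {
        return Err("division by zero");
    }
    if l == i32::MIN && r == -1 {
        // `i32::MIN / -1` does not fit in i32; LLVM treats it as UB, so the
        // interpreter must raise UB here instead of returning a wrapped value.
        return Err("overflow in signed division");
    }
    Ok(l / r)
}
```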
r? `@oli-obk`
Only create a single expansion for each inline integration.
The inlining integrator used to create one expansion for each span from the callee body.
This PR reverses the logic to create a single expansion for the whole call,
which is more consistent with how expansions work for macros.
This should remove the large memory regression in #91743.
Remove an unnecessary restriction in `dest_prop`
I had asked about this [on Zulip](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Do.20unions.20have.20active.20fields.3F) but didn't receive a response, so putting up this PR that makes the change I think we can. If it turns out that this is wrong, hopefully I'll find out here. Reposting my Zulip comment:
> Not sure what channel to put this into, so using this as a fallback. The dest prop MIR opt has this comment:
>
> ```rust
> //! Subtle case: If `dest` is a, or projects through a union, then we have to make sure that there
> //! remains an assignment to it, since that sets the "active field" of the union. But if `src` is
> //! a ZST, it might not be initialized, so there might not be any use of it before the assignment,
> //! and performing the optimization would simply delete the assignment, leaving `dest`
> //! uninitialized.
> ```
>
> In particular, the claim seems to be that we can't take
> ```
> x = ();
> y.field = x;
> ```
> where `y` is a union having `field: ()` as one of its variants, and optimize the entire thing away (assuming `x` is unused otherwise). As far as I know though, Rust unions don't have active fields. Is this comment correct and am I missing something? Is there a worry about this interacting poorly with FFI code/C unions/LTO or something?
This PR just removes that comment and the associated code. Also it fixes one unrelated comment that did not match the code it was commenting on.
r? rust-lang/mir-opt
Convert `newtype_index` to a proc macro
The `macro_rules!` implementation was becoming excessively complicated
and difficult to modify. The new proc macro implementation should make
it much easier to add new features (e.g. skipping certain `#[derive]`s).
rustc_errors: let `DiagnosticBuilder::emit` return a "guarantee of emission".
That is, `DiagnosticBuilder` is now generic over the return type of `.emit()`, so we'll now have:
* `DiagnosticBuilder<ErrorReported>` for error (incl. fatal/bug) diagnostics
  * can only be created via a `const L: Level`-generic constructor that limits allowed variants via a `where` clause, so not even `rustc_errors` can accidentally bypass this limitation
* asserts `diagnostic.is_error()` on emission, just in case the construction restriction was bypassed (e.g. by replacing the whole `Diagnostic` inside `DiagnosticBuilder`)
* `.emit()` returns `ErrorReported`, as a "proof" token that `.emit()` was called
(though note that this isn't a real guarantee until after completing the work on
#69426)
* `DiagnosticBuilder<()>` for everything else (warnings, notes, etc.)
* can also be obtained from other `DiagnosticBuilder`s by calling `.forget_guarantee()`
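A greatly simplified sketch of the two variants above (hypothetical code, not the actual `rustc_errors` API):
```rs
use std::marker::PhantomData;

struct ErrorReported;

struct DiagnosticBuilder<G> {
    message: String,
    _guarantee: PhantomData<G>,
}

impl DiagnosticBuilder<ErrorReported> {
    fn emit(&self) -> ErrorReported {
        eprintln!("error: {}", self.message);
        ErrorReported // proof token: an error was emitted
    }

    fn forget_guarantee(self) -> DiagnosticBuilder<()> {
        DiagnosticBuilder { message: self.message, _guarantee: PhantomData }
    }
}

impl DiagnosticBuilder<()> {
    fn emit(&self) {
        eprintln!("warning: {}", self.message);
    }
}
```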
This PR is a companion to other ongoing work, namely:
* #69426
and it's ongoing implementation:
#93222
the API changes in this PR are needed to get statically-checked "only errors produce `ErrorReported` from `.emit()`", but don't themselves provide any really strong guarantees without those other `ErrorReported` changes
* #93244
would make the choices of API changes (esp. naming) in this PR fit better overall
In order to be able to let `.emit()` return anything trustable, several changes had to be made:
* `Diagnostic`'s `level` field is now private to `rustc_errors`, to disallow arbitrary "downgrade"s from "some kind of error" to "warning" (or anything else that doesn't cause compilation to fail)
* it's still possible to replace the whole `Diagnostic` inside the `DiagnosticBuilder`, sadly, that's harder to fix, but it's unlikely enough that we can paper over it with asserts on `.emit()`
* `.cancel()` now consumes `DiagnosticBuilder`, preventing `.emit()` calls on a cancelled diagnostic
* it's also now done internally, through `DiagnosticBuilder`-private state, instead of having a `Level::Cancelled` variant that can be read (or worse, written) by the user
* this removes a hazard of calling `.cancel()` on an error then continuing to attach details to it, and even expect to be able to `.emit()` it
* warnings were switched to *only* checking `can_emit_warnings` on emission (instead of pre-cancelling early)
* `struct_dummy` was removed (as it relied on a pre-`Cancelled` `Diagnostic`)
* since `.emit()` doesn't consume the `DiagnosticBuilder` <sub>(I tried and gave up, it's much more work than this PR)</sub>,
we have to make `.emit()` idempotent wrt the guarantees it returns
* thankfully, `err.emit(); err.emit();` can return `ErrorReported` both times, as the second `.emit()` call has no side-effects *only* because the first one did do the appropriate emission
* `&mut Diagnostic` is now used in a lot of function signatures, which used to take `&mut DiagnosticBuilder` (in the interest of not having to make those functions generic)
* the APIs were already mostly identical, allowing for low-effort porting to this new setup
* only some of the suggestion methods needed some rework, to have the extra `DiagnosticBuilder` functionality on the `Diagnostic` methods themselves (that change is also present in #93259)
* `.emit()`/`.cancel()` aren't available, but IMO calling them from an "error decorator/annotator" function isn't a good practice, and can lead to strange behavior (from the caller's perspective)
* `.downgrade_to_delayed_bug()` was added, letting you convert any `.is_error()` diagnostic into a `delay_span_bug` one (which works because in both cases the guarantees available are the same)
This PR should ideally be reviewed commit-by-commit, since there is a lot of fallout in each.
r? `@estebank` cc `@Manishearth` `@nikomatsakis` `@mark-i-m`
These links never worked, but the lint was suppressed due to the fact
that the span was pointing into the macro. With the new macro
implementation, the span now points directly to the doc comment in the
macro invocation, so it's no longer suppressed.
Introduce `ChunkedBitSet` and use it for some dataflow analyses.
This reduces peak memory usage significantly for some programs with very
large functions.
r? `@ghost`
This reduces peak memory usage significantly for some programs with very
large functions, such as:
- `keccak`, `unicode_normalization`, and `match-stress-enum`, from
the `rustc-perf` benchmark suite;
- `http-0.2.6` from crates.io.
The new type is used in the analyses where the bitsets can get huge
(e.g. 10s of thousands of bits): `MaybeInitializedPlaces`,
`MaybeUninitializedPlaces`, and `EverInitializedPlaces`.
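A rough sketch of the chunked idea (hypothetical; the real implementation's chunk size and bookkeeping differ): mostly-empty or mostly-full bitsets pay O(1) space per uniform chunk.
```rs
const CHUNK_BITS: usize = 2048;

enum Chunk {
    Zeros,                              // no bit in this chunk is set
    Ones,                               // every bit in this chunk is set
    Mixed(Box<[u64; CHUNK_BITS / 64]>), // explicit words only when needed
}

struct ChunkedBitSet {
    chunks: Vec<Chunk>,
}

impl ChunkedBitSet {
    fn contains(&self, bit: usize) -> bool {
        match &self.chunks[bit / CHUNK_BITS] {
            Chunk::Zeros => false,
            Chunk::Ones => true,
            Chunk::Mixed(words) => {
                let i = bit % CHUNK_BITS;
                (words[i / 64] & (1u64 << (i % 64))) != 0
            }
        }
    }
}
```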
Some refactoring was required in `rustc_mir_dataflow`. All existing
analysis domains are either `BitSet` or a trivial wrapper around
`BitSet`, and access in a few places is done via `Borrow<BitSet>` or
`BorrowMut<BitSet>`. Now that some of these domains are `ChunkedBitSet`,
that no longer works. So this commit replaces the `Borrow`/`BorrowMut`
usage with a new trait `BitSetExt` containing the needed bitset
operations. The impls just forward these to the underlying bitset type.
This required fiddling with trait bounds in a few places.
The commit also:
- Moves `static_assert_size` from `rustc_data_structures` to
`rustc_index` so it can be used in the latter; the former now
re-exports it so existing users are unaffected.
- Factors out some common "clear excess bits in the final word"
functionality in `bit_set.rs`.
- Uses `fill` in a few places instead of loops.
Main motivation: fixes some issues with the current behavior. This PR
more-or-less completely re-implements the unused_unsafe lint; it’s also only
done in the MIR version of the lint, and the set of tests for the `-Zthir-unsafeck`
version no longer succeeds (and is thus disabled, see `lint-unused-unsafe.rs`).
On current nightly,
```rs
unsafe fn unsf() {}
fn inner_ignored() {
    unsafe {
        #[allow(unused_unsafe)]
        unsafe {
            unsf()
        }
    }
}
```
doesn’t create any warnings. This situation is not unrealistic to come by; the
inner `unsafe` block could e.g. come from a macro. Actually, this PR even
includes the removal of one unused `unsafe` in the standard library that was missed
in a similar situation. (The inner `unsafe` coming from an external macro hides
the warning, too.)
The reason behind this problem is how the check currently works:
* While generating MIR, it already skips nested unsafe blocks (i.e. unsafe
nested in other unsafe) so that the inner one is always the one considered
unused
* To differentiate the cases of no unsafe operations inside the `unsafe` vs.
a surrounding `unsafe` block, there’s some ad-hoc magic walking up the HIR to
look for surrounding used `unsafe` blocks.
There are a lot of problems with this approach besides the one presented above.
E.g. MIR building uses checks for the `unsafe_op_in_unsafe_fn` lint to decide
early whether or not `unsafe` blocks in an `unsafe fn` are redundant and ought
to be removed.
```rs
unsafe fn granular_disallow_op_in_unsafe_fn() {
    unsafe {
        #[deny(unsafe_op_in_unsafe_fn)]
        {
            unsf();
        }
    }
}
```
```
error: call to unsafe function is unsafe and requires unsafe block (error E0133)
  --> src/main.rs:13:13
   |
13 |             unsf();
   |             ^^^^^^ call to unsafe function
   |
note: the lint level is defined here
  --> src/main.rs:11:16
   |
11 |         #[deny(unsafe_op_in_unsafe_fn)]
   |                ^^^^^^^^^^^^^^^^^^^^^^
   = note: consult the function's documentation for information on how to avoid undefined behavior

warning: unnecessary `unsafe` block
  --> src/main.rs:10:5
   |
 9 | unsafe fn granular_disallow_op_in_unsafe_fn() {
   | --------------------------------------------- because it's nested under this `unsafe` fn
10 |     unsafe {
   |     ^^^^^^ unnecessary `unsafe` block
   |
   = note: `#[warn(unused_unsafe)]` on by default
```
Here, the intermediate `unsafe` was ignored, even though it contains an unsafe
operation that is not allowed to happen in an `unsafe fn` without an additional `unsafe` block.
Closures were also problematic, and the workaround/algorithms used on current
nightly didn’t work properly. (I skipped trying to fully understand what it was
supposed to do, because this PR uses a completely different approach.)
```rs
fn nested() {
    unsafe {
        unsafe { unsf() }
    }
}
```
```
warning: unnecessary `unsafe` block
  --> src/main.rs:10:9
   |
 9 |     unsafe {
   |     ------ because it's nested under this `unsafe` block
10 |         unsafe { unsf() }
   |         ^^^^^^ unnecessary `unsafe` block
   |
   = note: `#[warn(unused_unsafe)]` on by default
```
vs
```rs
fn nested() {
    let _ = || unsafe {
        let _ = || unsafe { unsf() };
    };
}
```
```
warning: unnecessary `unsafe` block
 --> src/main.rs:9:16
  |
9 |     let _ = || unsafe {
  |                ^^^^^^ unnecessary `unsafe` block
  |
  = note: `#[warn(unused_unsafe)]` on by default

warning: unnecessary `unsafe` block
  --> src/main.rs:10:20
   |
10 |         let _ = || unsafe { unsf() };
   |                    ^^^^^^ unnecessary `unsafe` block
```
*note that this warning kind-of suggests that **both** unsafe blocks are redundant*
--------------------------------------------------------------------------------
I also dislike the fact that it always suggests keeping the outermost `unsafe`.
E.g. for
```rs
fn granularity() {
    unsafe {
        unsafe { unsf() }
        unsafe { unsf() }
        unsafe { unsf() }
    }
}
```
I would prefer `rustc` to suggest removing the coarser outer-level `unsafe`
instead of the fine-grained inner `unsafe` blocks, which is what nightly currently suggests:
```
warning: unnecessary `unsafe` block
  --> src/main.rs:10:9
   |
 9 |     unsafe {
   |     ------ because it's nested under this `unsafe` block
10 |         unsafe { unsf() }
   |         ^^^^^^ unnecessary `unsafe` block
   |
   = note: `#[warn(unused_unsafe)]` on by default

warning: unnecessary `unsafe` block
  --> src/main.rs:11:9
   |
 9 |     unsafe {
   |     ------ because it's nested under this `unsafe` block
10 |         unsafe { unsf() }
11 |         unsafe { unsf() }
   |         ^^^^^^ unnecessary `unsafe` block

warning: unnecessary `unsafe` block
  --> src/main.rs:12:9
   |
 9 |     unsafe {
   |     ------ because it's nested under this `unsafe` block
...
12 |         unsafe { unsf() }
   |         ^^^^^^ unnecessary `unsafe` block
```
--------------------------------------------------------------------------------
Needless to say, this PR addresses all these points. For context, as far as my
understanding goes, the main advantage of skipping inner unsafe blocks was that
a test case like
```rs
fn top_level_used() {
    unsafe {
        unsf();
        unsafe { unsf() }
        unsafe { unsf() }
        unsafe { unsf() }
    }
}
```
should generate some warning because there’s redundant nested `unsafe`, however
every single `unsafe` block _does_ contain some statement that uses it. Of course
this PR doesn’t aim to change the warnings on this kind of code example, because
the current behavior, warning on all the inner `unsafe` blocks, makes sense in this case.
As mentioned, during MIR building all the unsafe blocks *are* kept now, and usage
is attributed to them. The way to still generate a warning like
```
warning: unnecessary `unsafe` block
  --> src/main.rs:11:9
   |
 9 |     unsafe {
   |     ------ because it's nested under this `unsafe` block
10 |         unsf();
11 |         unsafe { unsf() }
   |         ^^^^^^ unnecessary `unsafe` block
   |
   = note: `#[warn(unused_unsafe)]` on by default

warning: unnecessary `unsafe` block
  --> src/main.rs:12:9
   |
 9 |     unsafe {
   |     ------ because it's nested under this `unsafe` block
...
12 |         unsafe { unsf() }
   |         ^^^^^^ unnecessary `unsafe` block

warning: unnecessary `unsafe` block
  --> src/main.rs:13:9
   |
 9 |     unsafe {
   |     ------ because it's nested under this `unsafe` block
...
13 |         unsafe { unsf() }
   |         ^^^^^^ unnecessary `unsafe` block
```
in this case is by emitting an `unused_unsafe` warning for all of the `unsafe`
blocks that are _within a **used** unsafe block_.
The previous code already had a little HIR traversal to collect a set of
all the unsafe blocks (in order to afterwards determine which ones are unused).
This PR uses such a traversal to do additional things, including logic
like _always_ warning for an `unsafe` block that’s inside of another **used**
unsafe block. The traversal is expanded to include nested closures in the same go,
which simplifies a lot of things.
The whole logic around `unsafe_op_in_unsafe_fn` is a little complicated; there are
some test cases for corner cases in this PR. (The implementation involves
differentiating based on whether a used unsafe block was used exclusively by
operations where `allow(unsafe_op_in_unsafe_fn)` was active.) The main goal was
to make sure that code should compile successfully if all the `unused_unsafe`-warnings
are addressed _simultaneously_ (by removing the respective `unsafe` blocks)
no matter how complicated the patterns of `unsafe_op_in_unsafe_fn` being
disallowed and allowed throughout the function are.
--------------------------------------------------------------------------------
One noteworthy design decision I took here: An `unsafe` block
with `allow(unused_unsafe)` **is considered used** for the purposes of
linting about redundant contained unsafe blocks. So while
```rs
fn granularity() {
    unsafe { //~ ERROR: unnecessary `unsafe` block
        unsafe { unsf() }
        unsafe { unsf() }
        unsafe { unsf() }
    }
}
```
warns for the outer `unsafe` block,
```rs
fn top_level_ignored() {
    #[allow(unused_unsafe)]
    unsafe {
        #[deny(unused_unsafe)]
        {
            unsafe { unsf() } //~ ERROR: unnecessary `unsafe` block
            unsafe { unsf() } //~ ERROR: unnecessary `unsafe` block
            unsafe { unsf() } //~ ERROR: unnecessary `unsafe` block
        }
    }
}
```
warns on the inner ones.
Extend uninhabited enum variant branch elimination to also affect fallthrough
The `uninhabited_enum_branching` mir opt eliminates branches on variants where the data is uninhabited. This change extends this pass to also ensure that the `otherwise` case points to a trivially unreachable bb if all inhabited variants are present in the non-otherwise branches.
I believe it was `@scottmcm` who said that LLVM eliminates some of this information in its SimplifyCFG pass. This is unfortunate, but this change should still be at least a small improvement in principle (I don't think it will show up on any benchmarks).
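For illustration, a sketch (not from the PR) of the shape this targets: with an uninhabited payload, the inhabited arm is the only reachable one, so the `otherwise` edge can point at a trivially unreachable block.
```rs
enum Void {}

enum E {
    A(u32),
    B(Void), // uninhabited payload: this variant can never be constructed
}

fn f(e: E) -> u32 {
    match e {
        E::A(x) => x,
        // This arm, and the switch's `otherwise` edge, are unreachable.
        E::B(_) => unreachable!(),
    }
}
```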
Adopt let else in more places
Continuation of #89933, #91018, #91481, #93046, #93590, #94011.
I have extended my clippy lint to also recognize tuple passing and match statements. The diff caused by fixing it is well above a thousand lines. Thus, I split it up into multiple pull requests to make reviewing easier. This is the biggest of these PRs and handles the changes outside of rustdoc, rustc_typeck, rustc_const_eval, and rustc_trait_selection, which were handled in PRs #94139, #94142, #94143, #94144.
Do not ICE when inlining a function with un-satisfiable bounds
Fixes #93008
This is kinda a hack... but it's the fix I thought had the least blast-radius.
We use `normalize_param_env_or_error` to verify that the predicates in the param env are self-consistent, since with RevealAll, a bad predicate like `<&'static () as Clone>` will be evaluated with an empty ParamEnv (since it references no generics), and we'll raise an error for it.
The `uninhabited_enum_branching` miropt now also checks whether the fallthrough
case is inhabited, and if it is not, ensures that it points to an unreachable
block.
Specifically, rename the `Const` struct as `ConstS` and re-introduce `Const` as
this:
```
pub struct Const<'tcx>(Interned<'tcx, ConstS<'tcx>>);
```
This now matches `Ty` and `Predicate` more closely, including using
pointer-based `eq` and `hash`.
Notable changes:
- `mk_const` now takes a `ConstS`.
- `Const` was `Copy`, despite being 48 bytes. Now `ConstS` is not, so we
  need a separate arena for it, because we can't use the `Dropless` one any
  more.
- Many `&'tcx Const<'tcx>`/`&Const<'tcx>` to `Const<'tcx>` changes
- Many `ct.ty` to `ct.ty()` and `ct.val` to `ct.val()` changes.
- Lots of tedious sigil fiddling.
Specifically, change `Region` from this:
```
pub type Region<'tcx> = &'tcx RegionKind;
```
to this:
```
pub struct Region<'tcx>(Interned<'tcx, RegionKind>);
```
This now matches `Ty` and `Predicate` more closely.
Things to note:
- Regions have always been interned, but we haven't been using pointer-based
`Eq` and `Hash`. This is now happening.
- I chose to impl `Deref` for `Region` because it makes pattern matching a lot
  nicer, and `Region` can be viewed as just a smart wrapper for `RegionKind`.
  (See the sketch after this list.)
- Various methods are moved from `RegionKind` to `Region`.
- There are a lot of tedious sigil changes.
- A couple of types like `HighlightBuilder`, `RegionHighlightMode` now have a
`'tcx` lifetime because they hold a `Ty<'tcx>`, so they can call `mk_region`.
- A couple of test outputs change slightly, I'm not sure why, but the new
outputs are a little better.
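A small sketch of the `Deref` point above (simplified, with hypothetical variants; not the rustc types):
```rs
use std::ops::Deref;

enum RegionKind {
    ReStatic,
    ReErased,
}

struct Region<'tcx>(&'tcx RegionKind);

impl<'tcx> Deref for Region<'tcx> {
    type Target = RegionKind;
    fn deref(&self) -> &RegionKind {
        self.0
    }
}

fn is_static(r: Region<'_>) -> bool {
    // `Deref` lets callers match on `*r` as if they had the kind directly.
    matches!(*r, RegionKind::ReStatic)
}
```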
Specifically, change `Ty` from this:
```
pub type Ty<'tcx> = &'tcx TyS<'tcx>;
```
to this:
```
pub struct Ty<'tcx>(Interned<'tcx, TyS<'tcx>>);
```
There are two benefits to this.
- It's now a first class type, so we can define methods on it. This
means we can move a lot of methods away from `TyS`, leaving `TyS` as a
barely-used type, which is appropriate given that it's not meant to
be used directly.
- The uniqueness requirement is now explicit, via the `Interned` type.
E.g. the pointer-based `Eq` and `Hash` comes from `Interned`, rather
than via `TyS`, which wasn't obvious at all.
Much of this commit is boring churn. The interesting changes are in
these files:
- compiler/rustc_middle/src/arena.rs
- compiler/rustc_middle/src/mir/visit.rs
- compiler/rustc_middle/src/ty/context.rs
- compiler/rustc_middle/src/ty/mod.rs
Specifically:
- Most mentions of `TyS` are removed. It's very much a dumb struct now;
`Ty` has all the smarts.
- `TyS` now has `crate` visibility instead of `pub`.
- `TyS::make_for_test` is removed in favour of the static `BOOL_TY`,
which just works better with the new structure.
- The `Eq`/`Ord`/`Hash` impls are removed from `TyS`. `Interned`'s impls
  of `Eq`/`Hash` now suffice. `Ord` is now partly on `Interned`
  (pointer-based, for the `Equal` case) and partly on `TyS`
  (contents-based, for the other cases).
- There are many tedious sigil adjustments, i.e. adding or removing `*`
or `&`. They seem to be unavoidable.
Stabilize `-Z instrument-coverage` as `-C instrument-coverage`
(Tracking issue for `instrument-coverage`: https://github.com/rust-lang/rust/issues/79121)
This PR stabilizes support for instrumentation-based code coverage, previously provided via the `-Z instrument-coverage` option. (Continue supporting `-Z instrument-coverage` for compatibility for now, but show a deprecation warning for it.)
Many, many people have tested this support, and there are numerous reports of it working as expected.
Move the documentation from the unstable book to stable rustc documentation. Update uses and documentation to use the `-C` option.
Addressing questions raised in the tracking issue:
> If/when stabilized, will the compiler flag be updated to -C instrument-coverage? (If so, the -Z variant could also be supported for some time, to ease migrations for existing users and scripts.)
This stabilization PR updates the option to `-C` and keeps the `-Z` variant to ease migration.
> The Rust coverage implementation depends on (and automatically turns on) -Z symbol-mangling-version=v0. Will stabilizing this feature depend on stabilizing v0 symbol-mangling first? If so, what is the current status and timeline?
This stabilization PR depends on https://github.com/rust-lang/rust/pull/90128 , which stabilizes `-C symbol-mangling-version=v0` (but does not change the default symbol-mangling-version).
> The Rust coverage implementation implements the latest version of LLVM's Coverage Mapping Format (version 4), which forces a dependency on LLVM 11 or later. A compiler error is generated if attempting to compile with coverage, and using an older version of LLVM.
Given that LLVM 13 has now been released, requiring LLVM 11 for coverage support seems like a reasonable requirement. If people don't have at least LLVM 11, nothing else breaks; they just can't use coverage support. Given that coverage support currently requires a nightly compiler and LLVM 11 or newer, allowing it on a stable compiler built with LLVM 11 or newer seems like an improvement.
The [tracking issue](https://github.com/rust-lang/rust/issues/79121) and the [issue label A-code-coverage](https://github.com/rust-lang/rust/labels/A-code-coverage) link to a few open issues related to `instrument-coverage`, but none of them seem like showstoppers. All of them seem like improvements and refinements we can make after stabilization.
The original `-Z instrument-coverage` support went through a compiler-team MCP at https://github.com/rust-lang/compiler-team/issues/278 . Based on that, `@pnkfelix` suggested that this needed a stabilization PR and a compiler-team FCP.
Store def_id_to_hir_id as variant in hir_owner.
If `hir_owner` is `Owner(_)`, the `LocalDefId` is pointing to an owner, so the `ItemLocalId` is 0.
If the HIR node does not exist, we store `Phantom`.
Otherwise, we store the `HirId` associated to the `LocalDefId`.
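A sketch of this encoding (simplified types; names illustrative):
```rs
struct ItemLocalId(u32);

struct HirId {
    local_id: ItemLocalId,
    // owner elided in this sketch
}

enum MaybeOwner<T> {
    Owner(T),        // the LocalDefId is itself an owner, so its ItemLocalId is 0
    NonOwner(HirId), // the HirId associated with the LocalDefId
    Phantom,         // no HIR node exists for this LocalDefId
}
```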
Related to #89278
r? `@oli-obk`
Create `core::fmt::ArgumentV1` with generics instead of fn pointer
Split from (and prerequisite of) #90488, as this seems to have perf implication.
`@rustbot` label: +T-libs
Fix the unsoundness in the `early_otherwise_branch` mir opt pass
Closes #78496.
This change is a significant rewrite of much of the pass. Exactly what it does is documented in the source file (with ascii art!), and all the changes that are made to the MIR that are not trivially sound are carefully documented. That being said, this is my first time touching MIR, so there are probably some invariants I did not know about that I broke.
This version of the optimization is also somewhat more flexible than the original; for example, we do not care how or where the value on which the parent is switching is computed. There is no requirement that any types be the same. This could be made even more flexible in the future by allowing a wider range of statements in the bodies of `BBC, BBD` (as long as they are all the same of course). This should be a good first step though.
Probably needs a perf run.
r? `@oli-obk` who reviewed things the last time this was touched
rustc_mir_itertools: Avoid needless `collect` with itertools
I don't think this should have measurable perf impact (at least not on perf.rlo benchmarks), it's mostly for readability.
Update some rustc dependencies to deduplicate them
This PR updates `rand` and `itertools` in rustc (not the whole workspace) in order to deduplicate them (and hopefully slightly improve compile times).
~~Currently, `object` is still duplicated, but https://github.com/rust-lang/thorin/pull/15 and updating `thorin` in the future will remove the use of version 0.27.~~ Update: Thorin 0.2 has now been released, and this PR updates `rustc_codegen_ssa` to use it and deduplicate the `object` crate.
There's a final tiny rustc dependency, `cfg-if`, which will be left: as both versions 0.1.x and 1.0 looked to be heavily depended on, they will require a few cascading updates to be removed.
Fix variant index / discriminant confusion in uninhabited enum branching
Fix confusion between variant index and variant discriminant. The pass
incorrectly assumed that for `Variants::Single` variant index is the same as
variant discriminant.
r? `@wesleywiser`
Replace `NestedVisitorMap` with generic `NestedFilter`
This is an attempt to make the `intravisit::Visitor` API simpler and "more const" with regard to nested visiting.
With this change, `intravisit::Visitor` does not visit nested things by default, unless you specify `type NestedFilter = nested_filter::OnlyBodies` (or `All`). `nested_visit_map` returns `Self::Map` instead of `NestedVisitorMap<Self::Map>`. It panics by default (unreachable if `type NestedFilter` is omitted).
One somewhat tricky thing here is that `nested_filter::{OnlyBodies, All}` live in `rustc_middle` so that they may have `type Map = map::Map` and so that `impl Visitor`s never need to specify `type Map` - it has a default of `Self::NestedFilter::Map`.
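A minimal sketch of the associated-type scheme (hypothetical and simplified; the real trait also carries the `Map` associated type):
```rs
trait NestedFilter {
    const INTER: bool; // visit nested items?
    const INTRA: bool; // visit nested bodies?
}

mod nested_filter {
    pub struct None;
    impl super::NestedFilter for None {
        const INTER: bool = false;
        const INTRA: bool = false;
    }

    pub struct OnlyBodies;
    impl super::NestedFilter for OnlyBodies {
        const INTER: bool = false;
        const INTRA: bool = true;
    }

    pub struct All;
    impl super::NestedFilter for All {
        const INTER: bool = true;
        const INTRA: bool = true;
    }
}
```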
Remove deprecated LLVM-style inline assembly
The `llvm_asm!` macro was deprecated back in #87590 (Rust 1.56.0), with the intention to remove
it once `asm!` was stabilized, which already happened in #91728 (Rust 1.59.0). Now it
is time to remove `llvm_asm!` to avoid continued maintenance cost.
Closes #70173.
Closes #92794.
Closes #87612.
Closes #82065.
cc `@rust-lang/wg-inline-asm`
r? `@Amanieu`
[code coverage] Fix missing dead code in modules that are never called
The issue here is that the logic used to determine which CGU to put the dead function stubs in doesn't handle cases where a module is never assigned to a CGU (which is what happens when all of the code in the module is dead).
The partitioning logic also caused issues in #85461 where inline functions were duplicated into multiple CGUs resulting in duplicate symbols.
This commit fixes the issue by removing the complex logic used to assign dead code stubs to CGUs and replaces it with a much simpler model: we pick one CGU to hold all the dead code stubs. We pick a CGU which has exported items, which increases the likelihood the linker won't throw away our dead functions, and we pick the smallest to minimize the impact on compilation times for crates with very large CGUs.
Fixes #91661 Fixes #86177 Fixes #85718 Fixes #79622
r? ```@tmandry```
cc ```@richkadel```
This PR is not urgent so please don't let it interrupt your holidays! 🎄🎁
Normalize generator-local types with unevaluated constants
Normalize generator-interior types in addition to (i.e. instead of just) erasing regions, since sometimes we collect types with unevaluated const exprs.
Fixes #84737 Fixes #88171 Fixes #92091 Fixes #92634
Probably also fixes #73114, but that one has no code I could test. It looks like it's the same issue, though.
Delay remaining `span_bug`s in drop elaboration
This follows changes from #67967 and converts remaining `span_bug`s into
delayed bugs, since for const items drop elaboration might be executed
on a MIR which failed borrowck.
Fixes #81708.
Fixes #91816.
Remove `NullOp::Box`
Follow up of #89030 and MCP rust-lang/compiler-team#460.
~1 month later nothing seems to be broken, apart from a small regression that #89332 (1aac85bb716c09304b313d69d30d74fe7e8e1a8e) shows could be regained by removing the diverging path, so it shall be safe to continue and remove `NullOp::Box` completely.
r? `@jonas-schievink`
`@rustbot` label T-compiler
CTFE eval_fn_call: use FnAbi to determine argument skipping and compatibility
This makes use of the `FnAbi` type in CTFE/Miri, which `@eddyb` has been saying for years is what we should do.^^ `FnAbi` is used to
- determine which arguments to skip (rather than the previous heuristic of skipping ZST arguments with the Rust ABI)
- impose further restrictions on whether caller and callee are consistent in how a given argument is passed
I was hoping it would also simplify the code, but that is not the case -- the previous type compatibility checks are still required (AFAIK), only the ZST skipping is gone and that took barely any code. We also need some hacks because `FnAbi` assumes a certain way of implementing `caller_location` (by passing extra arguments), but Miri can just read the caller location from the call stack so it doesn't need those arguments. (The fact that every backend has to separately implement support for these arguments seems suboptimal -- looks like this might have been better implemented on the MIR level.) To avoid having to implement those unnecessary arguments in Miri, we just compute *whether* the argument is present on the caller/callee side, but don't actually pass that argument around.
I have no idea if this looks the way `@eddyb` thinks it should look... but it makes Miri's test suite pass. ;)
One of rustc's tests fails unfortunately (`ui/const-generics/issues/issue-67739.rs`), some const generic code that is evaluated too early -- I think that should raise `TooGeneric` but instead it ICEs. My assumption is this is some FnAbi code that has not been properly tested on polymorphic code, but it might also be me calling that FnAbi code the wrong way.
r? `@oli-obk` `@eddyb`
Fixes https://github.com/rust-lang/rust/issues/56166
Miri PR at https://github.com/rust-lang/miri/pull/1928
Store a `DefId` instead of an `AdtDef` in `AggregateKind::Adt`
The `AggregateKind` enum ends up in the final mir `Body`. Currently,
any changes to `AdtDef` (regardless of how significant they are)
will legitimately cause the overall result of `optimized_mir` to change,
invalidating any codegen re-use involving that mir.
This will get worse once we start hashing the `Span` inside `FieldDef`
(which is itself contained in `AdtDef`).
To try to reduce these kinds of invalidations, this commit changes
`AggregateKind::Adt` to store just the `DefId`, instead of the full
`AdtDef`. This allows the result of `optimized_mir` to be unchanged
if the `AdtDef` changes in a way that doesn't actually affect any
of the MIR we build.
Move generator check earlier in inlining.
Inlining into a generator may create references to other generators. For instance, inlining `Pin::<&mut from_generator::GenFuture<[generator1]>>::new_unchecked` into `generator2`. This cross-reference can then create cycles when computing inlining for `generator1`.
In order to avoid this kind of surprise, we forbid all inlining into generators, and rely on LLVM to do the right thing. The existing `remove-zst-query-cycle` test already ICEs in inline-mir mode, so we use it as a test.
Split from #91743.
Previously some code paths would fail to evaluate the rvalue, while
incorrectly indicating success with `Ok`. As a result the previous value
of lhs could have been incorrectly const propagated.
This optimization pass previously made excessive assumptions as to the nature of
the blocks being optimized. We remove those assumptions and make sure to
rigorously justify all changes that are made to the MIR. Details can be found
in the file.
Remove `in_band_lifetimes` from `rustc_mir_transform`
Like #91580, this was inspired by the conversation in #44524 about possibly removing the feature from the compiler. This crate is a heavy `'tcx` user, so is a nice case study.
r? ``@petrochenkov``
Three interesting ones:
This one had the `'tcx` declared on the function, despite the trait taking a `'tcx`:
```diff
-impl Visitor<'_> for UsedLocals {
+impl<'tcx> Visitor<'tcx> for UsedLocals {
     fn visit_statement(&mut self, statement: &Statement<'tcx>, location: Location) {
```
This one used in-band for one lifetime, and an underscore for the other:
```diff
-pub fn remove_dead_blocks(tcx: TyCtxt<'tcx>, body: &mut Body<'_>) {
+pub fn remove_dead_blocks<'tcx>(tcx: TyCtxt<'tcx>, body: &mut Body<'tcx>) {
```
A spurious name, since there's no single-use-lifetime warning:
```diff
-pub fn run_passes(tcx: TyCtxt<'tcx>, body: &'mir mut Body<'tcx>, passes: &[&dyn MirPass<'tcx>]) {
+pub fn run_passes<'tcx>(tcx: TyCtxt<'tcx>, body: &mut Body<'tcx>, passes: &[&dyn MirPass<'tcx>]) {
```
Address some FIXMEs left over from #91475
This shouldn't change behavior, only clarify what we're currently doing. I filed #91576 to see if the treatment of generator drop shims is intentional.
cc #91475
Add a MIR pass manager (Taylor's Version)
The final draft of #91386 and #77665.
While the compile-time constraints in #91386 are cool, I decided on a more minimal approach for now. I want to explore phase constraints and maybe relative-ordering constraints in the future, though. This should preserve existing behavior **exactly** (please let me know if it doesn't) while making the following changes to the way we organize things today:
- Each `MirPhase` now corresponds to a single MIR pass. `run_passes` is not responsible for listing the correct MIR phase.
- `run_passes` no longer silently skips passes if the declared MIR phase is greater than or equal to the body's. This has bitten me multiple times. If you want this behavior, you can always branch on `body.phase` yourself.
- If your pass is solely to emit errors, you can use the `MirLint` interface instead, which gets a shared reference to `Body` instead of a mutable one. By differentiating the two, I hope to make it clearer in the short term where lints belong in the pipeline. In the long term perhaps we could enforce this at compile-time? (See the sketch after this list.)
- MIR is no longer dumped for passes that aren't enabled, or for lints.
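A rough sketch of the `MirPass`/`MirLint` split (placeholder types; the real traits are generic over `'tcx` and take `TyCtxt<'tcx>` and `Body<'tcx>`):
```rs
struct Tcx;
struct Body;

trait MirPass {
    // A transformation: gets mutable access to the MIR it runs on.
    fn run_pass(&self, tcx: &Tcx, body: &mut Body);
}

trait MirLint {
    // A lint: shared access only, so it cannot change the MIR, which makes
    // "this pass only emits errors" something the compiler itself enforces.
    fn run_lint(&self, tcx: &Tcx, body: &Body);
}
```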
I tried to check that `-Zvalidate` still works correctly, since the MIR phase is now updated as soon as the associated pass is done, instead of at the end of all the passes in `run_passes`. However, it looks like `-Zvalidate` is broken with current nightlies anyways 😢 (it spits out a bunch of errors).
cc `@oli-obk` `@wesleywiser`
r? rust-lang/wg-mir-opt
Looks like Generator drop shims already have `post_borrowck_cleanup` run
on them. That's a bit surprising, since it means they're getting const-
and maybe borrow-checked? This merits further investigation, but for now
just preserve the status quo.
Move `#![feature(const_precise_live_drops)]` checks earlier in the pipeline
Should mitigate the issues found during MCP on #73255.
Once this is done, we should clean up the queries a bit, since I think `mir_drops_elaborated_and_const_checked` can be merged back into `mir_promoted`.
Fixes#90770.
cc ``@rust-lang/wg-const-eval``
r? ``@nikomatsakis`` (since they reviewed #71824)
Instead we run `RemoveFalseEdges` and `RemoveUninitDrops` at the
appropriate time. The extra `SimplifyCfg` avoids visiting unreachable
blocks during `RemoveUninitDrops`.
Otherwise dataflow state will propagate along false edges and cause
things to be marked as maybe init unnecessarily. These should be
separate, since `SimplifyBranches` also makes `if true {} else {}` into
a `goto`, which means we wouldn't lint anything in the `else` block.
Previously for enums using the `Variants::Single` layout, the variant
index was being confused with its discriminant. For example, in the case
of `enum E { A = 1 }`.
Use `discriminant_for_variant` to avoid the issue.
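A standalone illustration of the distinction (not the pass itself):
```rs
enum E {
    A = 1, // variant *index* 0, but *discriminant* 1
}

fn main() {
    // Casting yields the discriminant, not the index.
    assert_eq!(E::A as isize, 1);
}
```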
The change is limited to iterating over indices instead of using
`basic_blocks_mut()` directly, in case the previous implementation
intentionally avoided invalidating the caches stored in the MIR body.
Remove hir::map::blocks and use FnKind instead
The principal tool is `FnLikeNode`, which is not often used and can be easily implemented using `rustc_hir::intravisit::FnKind`.
Adopt let_else across the compiler
This performs a substitution of code following the pattern:
```
let <id> = if let <pat> = ... { identity } else { ... : ! };
```
To simplify it to:
```
let <pat> = ... else { ... : ! };
```
By adopting the `let_else` feature (cc #87335).
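A tiny illustration of the rewrite (a sketch; `let_else` was unstable at the time, so this assumes a nightly compiler):
```rs
#![feature(let_else)]

fn first_char(s: &str) -> char {
    // Before: let c = if let Some(c) = s.chars().next() { c } else { return ' ' };
    let Some(c) = s.chars().next() else { return ' ' };
    c
}
```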
The PR also updates the syn crate because the currently used version of the crate doesn't support `let_else` syntax yet.
Note: Generally I'm the person who *removes* usages of unstable features from the compiler, not adds more usages of them, but in this instance I think it hopefully helps the feature get stabilized sooner and in a better state. I have written a [comment](https://github.com/rust-lang/rust/issues/87335#issuecomment-944846205) on the tracking issue about my experience and what I feel could be improved before stabilization of `let_else`.
Index and hash HIR as part of lowering
Part of https://github.com/rust-lang/rust/pull/88186
~Based on https://github.com/rust-lang/rust/pull/88880 (see merge commit).~
Once HIR is lowered, it is later indexed by the `index_hir` query and hashed for `crate_hash`. This PR moves those post-processing steps to lowering itself. As a side objective, the HIR crate data structure is refactored as an `IndexVec<LocalDefId, Option<OwnerInfo<'hir>>>` where `OwnerInfo` stores all the relevant information for an HIR owner.
r? `@michaelwoerister`
cc `@petrochenkov`
This performs a substitution of code following the pattern:
let <id> = if let <pat> = ... { identity } else { ... : ! };
To simplify it to:
let <pat> = ... { identity } else { ... : ! };
By adopting the let_else feature.
Stabilize `const_panic`
Closes #51999
FCP completed in #89006
```@rustbot``` label +A-const-eval +A-const-fn +T-lang
cc ```@oli-obk``` for review (not `r?`'ing as not on lang team)
Rework HIR API to make invocations of the hir_crate query harder.
`hir_crate` forces the recomputation of queries that depend on it.
This PR aims at avoiding useless invocations of `hir_crate` by making dependent code go through `tcx.hir()`.
Correct caller/callsite confusion in inliner message
`callee_body` is the MIR `Body` for `callsite.callee`, so this message basically says `"Inline {bar span} into bar"` when it should say `"Inline bar into foo"`.
Extracted out of #82280
When remapping a resume argument with projections, rebase them on top of
the new base.
The case where resume argument has projections is unusual, but might
arise with box syntax where the assignment is performed directly into
the box without an intermediate temporary.
Introduce `Rvalue::ShallowInitBox`
Polished version of #88700.
Implements MCP rust-lang/compiler-team#460, and should allow #43596 to go forward.
In short, creating an empty box is split from a nullary-op `NullOp::Box` into two steps: first a call to `exchange_malloc`, then an `Rvalue::ShallowInitBox` which transmutes `*mut u8` to a shallow-initialized `Box<T>`. This allows the `exchange_malloc` call to unwind. Details can be found in the MCP.
`NullOp::Box` is not yet removed, purely to make reverting easier in case anything goes wrong as the result of this PR. If a revert is needed, a reversion of the "Use Rvalue::ShallowInitBox for box expression" commit followed by a test bless should be sufficient.
Experiments in #88700 showed a very slight compile-time perf regression due to (supposedly) slightly more time spent in LLVM. We could omit unwind edge generation (in non-`oom=panic` case) in box expression MIR construction to restore perf; but I don't think it's necessary since runtime perf isn't affected and perf difference is rather small.
This PR allows applying a `#[track_caller]` attribute to a
closure/generator expression. The attribute is interpreted as applying
to the compiler-generated implementation of the corresponding trait
method (`FnOnce::call_once`, `FnMut::call_mut`, `Fn::call`, or
`Generator::resume`).
This feature does not have its own feature gate - however, it requires
`#![feature(stmt_expr_attributes)]` in order to actually apply
an attribute to a closure or generator.
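A small usage sketch (assuming a nightly compiler with this change applied):
```rs
#![feature(stmt_expr_attributes)]

fn main() {
    let f = #[track_caller] || {
        // With this change, `Location::caller()` reports the closure's
        // call site rather than this line.
        println!("called from {}", std::panic::Location::caller());
    };
    f();
}
```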
This is implemented in the same way as for functions - an extra
location argument is appended to the end of the ABI. For closures,
this argument is *not* part of the 'tupled' argument storing the
parameters - the final closure argument for `#[track_caller]` closures
is no longer a tuple.
For direct (monomorphized) calls, the necessary support was already
implemented - we just needed to adjust some assertions around checking
the ABI and argument count to take closures into account.
For calls through a trait object, more work was needed.
When creating a `ReifyShim`, we need to create a shim
for the trait method (e.g. `FnOnce::call_mut`) - unlike normal
functions, closures are never invoked directly, and always go through a
trait method.
Additional handling was needed for `InstanceDef::ClosureOnceShim`. In
order to pass location information through a direct (monomorphized) call
to `FnOnce::call_once` on an `FnMut` closure, we need to make
`ClosureOnceShim` aware of `#[track_caller]`. A new field
`track_caller` is added to `ClosureOnceShim` - this is used by
`InstanceDef::requires_caller_location`, allowing codegen to
pass through the extra location argument.
Since `ClosureOnceShim.track_caller` is only used by codegen,
we end up generating two identical MIR shims - one for
`track_caller == true`, and one for `track_caller == false`. However,
these two shims are used by the entire crate (i.e. it's two shims total,
not two shims per unique closure), so this shouldn't be a big deal.
If any block on a goto chain has more than one predecessor, then the new
start block would have basic block predecessors.
Skip the transformation for the start block altogether, to avoid
violating the new invariant that the start block does not have any basic
block predecessors.
Rollup of 10 pull requests
Successful merges:
- #88292 (Enable --generate-link-to-definition for rustc's docs)
- #88729 (Recover from `Foo(a: 1, b: 2)`)
- #88875 (cleanup(rustc_trait_selection): remove vestigial code from rustc_on_unimplemented)
- #88892 (Move object safety suggestions to the end of the error)
- #88928 (Document the closure arguments for `reduce`.)
- #88976 (Clean up and add doc comments for CStr)
- #88983 (Allow calling `get_body_with_borrowck_facts` without `-Z polonius`)
- #88985 (Update clobber_abi list to include k[1-7] regs)
- #88986 (Update the backtrace crate)
- #89009 (Fix typo in `break` docs)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Querying the layout of a generator requires its optimized MIR. Thus
computing the layout during MIR optimization of a generator might create a
query cycle. Disable RemoveZsts in generators to avoid the issue
(a similar approach is already used in the ConstProp transform).
Introduce NullOp::AlignOf
This PR introduces `Rvalue::NullaryOp(NullOp::AlignOf, ty)`, which will be lowered from `align_of`, similar to `size_of` lowering to `Rvalue::NullaryOp(NullOp::SizeOf, ty)`.
The changes are originally part of #88700 but since it's not dependent on other changes and could have performance impact on its own, it's separated into its own PR.
Add -Z panic-in-drop={unwind,abort} command-line option
This PR adds an option under which a panic that attempts to unwind out of a `Drop` impl aborts the process instead. This has several benefits:
- The current behavior when unwinding out of `Drop` is very unintuitive and easy to miss: unwinding continues, but the remaining drops in scope are simply leaked.
- A lot of unsafe code doesn't expect drops to unwind, which can lead to unsoundness:
- https://github.com/servo/rust-smallvec/issues/14
- https://github.com/bluss/arrayvec/issues/3
- The current unwinding behavior has a code size and compilation time cost: LLVM needs to generate extra landing pads for all calls in a drop implementation. This can compound when functions are inlined, since unwinding will then continue on to process drops in the callee, which can itself unwind, etc.
- Initial measurements show a 3% size reduction and up to 10% compilation time reduction on some crates (`syn`).
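For illustration, a sketch of the kind of code whose behavior the flag changes:
```rs
struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        // Today this unwinds out of `drop` and the remaining drops in scope
        // are leaked; with `-Z panic-in-drop=abort` the process aborts here.
        panic!("panic escaping a Drop impl");
    }
}

fn main() {
    let _n = Noisy;
}
```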
One thing to note about `-Z panic-in-drop=abort` is that *all* crates must be built with this option for it to be sound since it makes the compiler assume that dropping `Box<dyn Any>` will never unwind.
cc https://github.com/rust-lang/lang-team/issues/97
generic_const_exprs: use thir for abstract consts instead of mir
Changes `AbstractConst` building to use `thir` instead of `mir` so that there's less chance of consts unifying when they shouldn't because lowering to mir dropped information (see `abstract-consts-as-cast-5.rs` test)
r? `@lcnr`
Encode spans relative to the enclosing item
The aim of this PR is to avoid recomputing queries when code is moved without modification.
MCP at https://github.com/rust-lang/compiler-team/issues/443
This is achieved by :
1. storing the HIR owner LocalDefId information inside the span;
2. encoding and decoding spans relative to the enclosing item in the incremental on-disk cache;
3. marking a dependency to the `source_span(LocalDefId)` query when we translate a span from the short (`Span`) representation to its explicit (`SpanData`) representation.
Since all client code uses `Span`, step 3 ensures that all manipulations
of span byte positions actually create the dependency edge between
the caller and the `source_span(LocalDefId)`.
This query returns the actual absolute span of the parent item.
As a consequence, any source code motion that changes the absolute byte position of a node will either:
- modify the distance to the parent's beginning, so change the relative span's hash;
- dirty `source_span`, and trigger the incremental recomputation of all code that
depends on the span's absolute byte position.
With this scheme, I believe the dependency tracking to be accurate.
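A toy sketch of the representation idea (hypothetical; not rustc's actual span encoding):
```rs
#[derive(Hash)]
struct LocalDefId(u32);

// Moving an item wholesale changes neither `parent`'s relative data nor
// `lo`/`len`, so this hash is unaffected by whole-item code motion.
#[derive(Hash)]
struct RelativeSpan {
    parent: LocalDefId, // the enclosing item (tracked via `source_span`)
    lo: u32,            // byte offset from the parent's span start
    len: u32,
}
```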
For the moment, the spans are marked during lowering.
I'd rather do this during def-collection,
but the AST MutVisitor is not practical enough just yet.
The only difference is that we attach macro-expanded spans
to their expansion point instead of the macro itself.