I have found this code confusing for years. I've always roughly
understood it, but never exactly. I just made my fourth(?) attempt and
finally cracked it.
This commit improves the comments. In particular, it explicitly
describes how you can't do a custom fold/visit of just any type; there are
actually only a handful of "types of interest" (e.g. `Ty`, `Predicate`,
`Region`, `Const`) that can be custom folded/visited, and all other types
just get a generic traversal. I think this was the part that eluded me
on all my prior attempts at understanding.
The commit also updates comments to account for some newer changes such
as the fallible/infallible folding distinction, does some minor
reorderings, and moves one `impl` to a better place.
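To illustrate the "types of interest" point, here is a minimal standalone sketch (toy types and trait names, not rustc's real `TypeFoldable`/`TypeFolder` API): only a couple of types get overridable hooks, and everything else is traversed generically.
```rust
// Toy model: `fold_ty`/`fold_region` are the only custom hooks; all other
// structure is handled by the generic `super_fold` traversal.
#[derive(Clone, Debug)]
enum Ty {
    Bool,
    Ref(Region, Box<Ty>),
    Tuple(Vec<Ty>),
}

#[derive(Clone, Copy, Debug)]
struct Region(u32);

trait Folder: Sized {
    // Hooks for the "types of interest"; the defaults just keep traversing.
    fn fold_ty(&mut self, ty: Ty) -> Ty {
        ty.super_fold(self)
    }
    fn fold_region(&mut self, r: Region) -> Region {
        r
    }
}

impl Ty {
    // Entry point: dispatch through the custom hook...
    fn fold<F: Folder>(self, folder: &mut F) -> Ty {
        folder.fold_ty(self)
    }
    // ...while `super_fold` is the generic traversal of the contents.
    fn super_fold<F: Folder>(self, folder: &mut F) -> Ty {
        match self {
            Ty::Bool => Ty::Bool,
            Ty::Ref(r, inner) => Ty::Ref(folder.fold_region(r), Box::new((*inner).fold(folder))),
            Ty::Tuple(tys) => Ty::Tuple(tys.into_iter().map(|t| t.fold(folder)).collect()),
        }
    }
}

// A folder that only overrides the region hook: it erases every region.
struct EraseRegions;
impl Folder for EraseRegions {
    fn fold_region(&mut self, _: Region) -> Region {
        Region(0)
    }
}

fn main() {
    let ty = Ty::Ref(Region(3), Box::new(Ty::Tuple(vec![Ty::Bool])));
    println!("{:?}", ty.fold(&mut EraseRegions));
}
```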
Fix inconsistent symbol mangling with -Zverbose
Always skip arguments that are the defaults of their respective
parameters, to avoid generating inconsistent symbols for builds
with `-Zverbose` flag and without it.
Support pretty printing of invalid constants
Make it possible to pretty print invalid constants by introducing a
fallible variant of `destructure_const` and falling back to debug
formatting when it fails.
Closes #93688.
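As a rough illustration of the fallback shape (entirely hypothetical types and function names, not the real `destructure_const` signature):
```rust
// Toy model: try structured destructuring first; if the constant is invalid,
// fall back to debug formatting instead of failing.
#[derive(Debug)]
struct Const {
    fields: Vec<u32>,
    valid: bool,
}

fn try_destructure(ct: &Const) -> Result<&[u32], ()> {
    if ct.valid { Ok(&ct.fields) } else { Err(()) }
}

fn pretty_print(ct: &Const) -> String {
    match try_destructure(ct) {
        Ok(fields) => format!("({:?})", fields),
        // Invalid constant: debug-format the whole thing instead.
        Err(()) => format!("{:?}", ct),
    }
}

fn main() {
    println!("{}", pretty_print(&Const { fields: vec![1, 2], valid: true }));
    println!("{}", pretty_print(&Const { fields: vec![], valid: false }));
}
```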
Treat static refs as `mir::ConstantKind::Val`
With the upcoming introduction of Valtrees we want to treat more values as `mir::ConstantKind::Val` directly.
r? `@lcnr`
cc `@oli-obk`
Specifically, rename the `Const` struct as `ConstS` and re-introduce `Const` as
this:
```
pub struct Const<'tcx>(&'tcx Interned<ConstS>);
```
This now matches `Ty` and `Predicate` more closely, including using
pointer-based `eq` and `hash`.
Notable changes:
- `mk_const` now takes a `ConstS`.
- `Const` was `Copy`, despite being 48 bytes. Now `ConstS` is not `Copy`, so we
need a separate arena for it, because we can't use the `Dropless` one any
more.
- Many `&'tcx Const<'tcx>`/`&Const<'tcx>` to `Const<'tcx>` changes
- Many `ct.ty` to `ct.ty()` and `ct.val` to `ct.val()` changes.
- Lots of tedious sigil fiddling.
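For readers unfamiliar with the pattern, here is a tiny standalone sketch of the interned-wrapper shape mentioned above (lifetimes and the real interner elided; `Box::leak` stands in for arena allocation):
```rust
// Toy model of the `Const`/`ConstS` split: the public handle is a thin,
// `Copy` pointer wrapper, and fields are reached via accessor methods.
struct ConstS {
    ty: &'static str,
    val: u64,
}

#[derive(Clone, Copy)]
struct Const(&'static ConstS);

impl Const {
    fn ty(self) -> &'static str { self.0.ty }   // call sites change from `ct.ty` ...
    fn val(self) -> u64 { self.0.val }          // ... and `ct.val` to `ct.ty()`/`ct.val()`
}

fn main() {
    // In rustc the `ConstS` lives in an arena; a leaked Box stands in here.
    let ct = Const(Box::leak(Box::new(ConstS { ty: "usize", val: 7 })));
    println!("{}: {}", ct.ty(), ct.val());
}
```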
The variant names are exported, so we can use them directly (possibly
with a `ty::` qualifier). Lots of places already do this, this commit
just increases consistency.
Specifically, change `Region` from this:
```
pub type Region<'tcx> = &'tcx RegionKind;
```
to this:
```
pub struct Region<'tcx>(&'tcx Interned<RegionKind>);
```
This now matches `Ty` and `Predicate` more closely.
Things to note:
- Regions have always been interned, but we haven't been using pointer-based
`Eq` and `Hash`. This is now happening.
- I chose to impl `Deref` for `Region` because it makes pattern matching a lot
nicer (see the sketch after this list), and `Region` can be viewed as just a
smart wrapper for `RegionKind`.
- Various methods are moved from `RegionKind` to `Region`.
- There are a lot of tedious sigil changes.
- A couple of types like `HighlightBuilder`, `RegionHighlightMode` now have a
`'tcx` lifetime because they hold a `Ty<'tcx>`, so they can call `mk_region`.
- A couple of test outputs change slightly, I'm not sure why, but the new
outputs are a little better.
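As a sketch of the `Deref` point above (standalone toy types, not the real `RegionKind`):
```rust
// Toy model: `Region` is a thin wrapper around an interned `RegionKind`, and
// `Deref` lets callers pattern-match through the wrapper.
use std::ops::Deref;

enum RegionKind {
    ReStatic,
    ReVar(u32),
}

#[derive(Clone, Copy)]
struct Region(&'static RegionKind);

impl Deref for Region {
    type Target = RegionKind;
    fn deref(&self) -> &RegionKind {
        self.0
    }
}

fn describe(r: Region) -> String {
    // Matching on `*r` as if we had a `RegionKind` in hand.
    match *r {
        RegionKind::ReStatic => "'static".to_string(),
        RegionKind::ReVar(vid) => format!("'_#{vid}"),
    }
}

fn main() {
    println!("{}", describe(Region(&RegionKind::ReStatic)));
    println!("{}", describe(Region(&RegionKind::ReVar(2))));
}
```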
Specifically, change `Predicate` from this:
```
pub struct Predicate<'tcx> { inner: &'tcx PredicateInner<'tcx> }
```
to this:
```
pub struct Predicate<'tcx>(&'tcx Interned<PredicateS<'tcx>>);
```
where `PredicateInner` is renamed as `PredicateS`.
This (plus a few other minor changes) makes the parallels with `Ty` and
`TyS` much clearer, and makes the uniqueness more explicit.
Specifically, change `Ty` from this:
```
pub type Ty<'tcx> = &'tcx TyS<'tcx>;
```
to this:
```
pub struct Ty<'tcx>(Interned<'tcx, TyS<'tcx>>);
```
There are two benefits to this.
- It's now a first class type, so we can define methods on it. This
means we can move a lot of methods away from `TyS`, leaving `TyS` as a
barely-used type, which is appropriate given that it's not meant to
be used directly.
- The uniqueness requirement is now explicit, via the `Interned` type.
E.g. the pointer-based `Eq` and `Hash` comes from `Interned`, rather
than via `TyS`, which wasn't obvious at all.
Much of this commit is boring churn. The interesting changes are in
these files:
- compiler/rustc_middle/src/arena.rs
- compiler/rustc_middle/src/mir/visit.rs
- compiler/rustc_middle/src/ty/context.rs
- compiler/rustc_middle/src/ty/mod.rs
Specifically:
- Most mentions of `TyS` are removed. It's very much a dumb struct now;
`Ty` has all the smarts.
- `TyS` now has `crate` visibility instead of `pub`.
- `TyS::make_for_test` is removed in favour of the static `BOOL_TY`,
which just works better with the new structure.
- The `Eq`/`Ord`/`Hash` impls are removed from `TyS`. `Interned`'s impls
of `Eq`/`Hash` now suffice. `Ord` is now partly on `Interned`
(pointer-based, for the `Equal` case) and partly on `TyS`
(contents-based, for the other cases); see the sketch after this list.
- There are many tedious sigil adjustments, i.e. adding or removing `*`
or `&`. They seem to be unavoidable.
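A standalone sketch of that `Ord` split (toy `TyS`; pointer equality as the fast path, contents otherwise):
```rust
use std::cmp::Ordering;

#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct TyS {
    data: u32,
}

#[derive(Clone, Copy)]
struct Interned<'a>(&'a TyS);

impl PartialEq for Interned<'_> {
    fn eq(&self, other: &Self) -> bool {
        std::ptr::eq(self.0, other.0) // pointer-based; sound because of interning
    }
}
impl Eq for Interned<'_> {}

impl PartialOrd for Interned<'_> {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for Interned<'_> {
    fn cmp(&self, other: &Self) -> Ordering {
        if std::ptr::eq(self.0, other.0) {
            Ordering::Equal // the `Equal` case is pointer-based
        } else {
            self.0.cmp(other.0) // the other cases compare the contents on `TyS`
        }
    }
}

fn main() {
    let (a, b) = (TyS { data: 1 }, TyS { data: 2 });
    assert_eq!(Interned(&a).cmp(&Interned(&a)), Ordering::Equal);
    assert!(Interned(&a) < Interned(&b));
}
```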
make `find_similar_impl_candidates` even fuzzier
continues the good work of `@BGR360` in #92223. I might have overshot a bit and we're now slightly too fuzzy 😅
with this we can now also simplify `simplify_type`, which is nice :3
Make `Res::SelfTy` a struct variant and update docs
I found pattern matching on a `(Option<DefId>, Option<(DefId, bool)>)` not super readable. Additionally, the doc comments on the types in a tuple variant aren't visible anywhere at use sites as far as I can tell (using rust-analyzer + VS Code).
The docs incorrectly assumed that the `DefId` in `Option<(DefId, bool)>` would only ever be for an impl item, and I also found the code examples somewhat unclear about which `DefId` was being talked about.
r? `@lcnr` since you reviewed the last PR changing these docs
Improve chalk integration
- Support subtype bounds in chalk lowering
- Handle universes in canonicalization
- Handle type parameters in chalk responses
- Use `chalk_ir::LifetimeData::Empty` for `ty::ReEmpty`
- Remove `ignore-compare-mode-chalk` for tests that no longer hang (they may still fail or ICE)
This is enough to get a hello world program to compile with `-Zchalk` now. Some of the remaining issues that are needed to get Chalk integration working on larger programs are:
- rust-lang/chalk#234
- rust-lang/chalk#548
- rust-lang/chalk#734
- Generators are handled differently in chalk and rustc
r? `@jackh726`
Apply noundef attribute to &T, &mut T, Box<T>, bool
This doesn't handle `char` because it's a bit awkward to distinguish it from `u32` at this point in codegen.
Note that this _does not_ change whether or not it is UB for `&`, `&mut`, or `Box` to point to undef. It only applies to the pointer itself, not the pointed-to memory.
Fixes (partially) #74378.
r? `@nikic` cc `@RalfJung`
This is required to avoid creating large numbers of universes from each
Chalk query, while still having enough universe information for lifetime
errors.
Make all `hir::Map` methods consistently by-value
`hir::Map` only consists of a single reference (as part of the contained `TyCtxt`) anyways, so copying is literally zero overhead compared to passing a reference
Ensure that queries only return Copy types.
This should prevent the perf footgun of returning a result with an expensive `Clone` impl (like a `Vec` or a hash map).
I went for the stupid solution of arena-allocating everything that was not `Copy`. Some query results could be made `Copy` easily, but I did not really investigate.
Refactor query system to maintain a global job id counter
This replaces the per-shard counters with a single global counter, simplifying
the JobId struct down to just a u64 and removing the need to pipe a DepKind
generic through a bunch of code. The performance implications on non-parallel
compilers are likely minimal (this switches to `Cell<u64>` as the backing
storage over a `u64`, but the latter was already inside a `RefCell` so it's not
really a significant divergence). On parallel compilers, the cost of a single
global u64 counter may be more significant: it adds a serialization point in
theory. On the other hand, we can imagine changing the counter to have a
thread-local component if it becomes worrisome or some similar structure.
The new design is sufficiently simpler that it warrants the potential for slight
changes down the line if/when we get parallel compilation to be more of a
default.
A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid
possibly overflowing it and causing problems; it is effectively impossible that
we would overflow a u64 counter in this context.
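A minimal sketch of that scheme (illustrative names only; the real counter lives inside the query system):
```rust
use std::cell::Cell;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct QueryJobId(u64);

// One global, monotonically increasing counter instead of per-shard counters.
struct JobIdCounter(Cell<u64>);

impl JobIdCounter {
    fn new() -> Self {
        JobIdCounter(Cell::new(0))
    }
    fn next(&self) -> QueryJobId {
        // A u64 is wide enough that overflow is not a practical concern here.
        let id = self.0.get();
        self.0.set(id + 1);
        QueryJobId(id)
    }
}

fn main() {
    let counter = JobIdCounter::new();
    assert_eq!(counter.next(), QueryJobId(0));
    assert_eq!(counter.next(), QueryJobId(1));
}
```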
Add more *-unwind ABI variants
The following *-unwind ABIs are now supported:
- "C-unwind"
- "cdecl-unwind"
- "stdcall-unwind"
- "fastcall-unwind"
- "vectorcall-unwind"
- "thiscall-unwind"
- "aapcs-unwind"
- "win64-unwind"
- "sysv64-unwind"
- "system-unwind"
cc `@rust-lang/wg-ffi-unwind`
Lazy type-alias-impl-trait
Previously opaque types were processed by
1. replacing all mentions of them with inference variables
2. memorizing these inference variables in a side-table
3. at the end of typeck, resolve the inference variables in the side table and use the resolved type as the hidden type of the opaque type
This worked okayish for `impl Trait` in return position, but required lots of roundabout type inference hacks and processing.
This PR instead stops this process of replacing opaque types with inference variables, and just keeps the opaque types around.
Whenever an opaque type `O` is compared with another type `T`, we make the comparison succeed and record `T` as the hidden type. If `O` is compared to `U` while there is a recorded hidden type for it, we grab the recorded type (`T`) and compare that against `U`. This makes implementing
* https://github.com/rust-lang/rfcs/pull/2515
much simpler (previous attempts on the inference based scheme were very prone to ICEs and general misbehaviour that was not explainable except by random implementation defined oddities).
r? `@nikomatsakis`
Fixes #93411, fixes #88236
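A very rough standalone sketch of the recording scheme described above (toy types; the real logic lives in the inference machinery and handles far more cases):
```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash, Debug)]
enum Ty {
    Opaque(u32), // an opaque type `O`, identified by an id
    Bool,
    Int,
}

#[derive(Default)]
struct InferCtxt {
    hidden_types: HashMap<u32, Ty>,
}

impl InferCtxt {
    fn eq_tys(&mut self, a: &Ty, b: &Ty) -> Result<(), String> {
        match (a, b) {
            (Ty::Opaque(id), other) | (other, Ty::Opaque(id)) => {
                match self.hidden_types.get(id).cloned() {
                    // First comparison of `O` with `T`: record `T` as the hidden type.
                    None => {
                        self.hidden_types.insert(*id, other.clone());
                        Ok(())
                    }
                    // `O` already has a recorded hidden type `T`: compare `T` with `U`.
                    Some(recorded) => self.eq_tys(&recorded, other),
                }
            }
            _ if a == b => Ok(()),
            _ => Err(format!("mismatched types: {a:?} vs {b:?}")),
        }
    }
}

fn main() {
    let mut infcx = InferCtxt::default();
    assert!(infcx.eq_tys(&Ty::Opaque(0), &Ty::Bool).is_ok()); // records `Bool`
    assert!(infcx.eq_tys(&Ty::Opaque(0), &Ty::Int).is_err()); // `Bool` vs `Int`
}
```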
use `fold_list` in `try_super_fold_with` for `SubstsRef`
split out from #93505 as this by itself is responsible for most of the perf improvements there
r? `@michaelwoerister`
This doesn't handle `char` because it's a bit awkward to distinguish it
from u32 at this point in codegen.
Note that for some types (like `&Struct` and `&mut Struct`),
we already apply `dereferenceable`, which implies `noundef`,
so the IR does not change.
Stabilize `-Z instrument-coverage` as `-C instrument-coverage`
(Tracking issue for `instrument-coverage`: https://github.com/rust-lang/rust/issues/79121)
This PR stabilizes support for instrumentation-based code coverage, previously provided via the `-Z instrument-coverage` option. (Continue supporting `-Z instrument-coverage` for compatibility for now, but show a deprecation warning for it.)
Many, many people have tested this support, and there are numerous reports of it working as expected.
Move the documentation from the unstable book to stable rustc documentation. Update uses and documentation to use the `-C` option.
Addressing questions raised in the tracking issue:
> If/when stabilized, will the compiler flag be updated to -C instrument-coverage? (If so, the -Z variant could also be supported for some time, to ease migrations for existing users and scripts.)
This stabilization PR updates the option to `-C` and keeps the `-Z` variant to ease migration.
> The Rust coverage implementation depends on (and automatically turns on) -Z symbol-mangling-version=v0. Will stabilizing this feature depend on stabilizing v0 symbol-mangling first? If so, what is the current status and timeline?
This stabilization PR depends on https://github.com/rust-lang/rust/pull/90128 , which stabilizes `-C symbol-mangling-version=v0` (but does not change the default symbol-mangling-version).
> The Rust coverage implementation implements the latest version of LLVM's Coverage Mapping Format (version 4), which forces a dependency on LLVM 11 or later. A compiler error is generated if attempting to compile with coverage, and using an older version of LLVM.
Given that LLVM 13 has now been released, requiring LLVM 11 for coverage support seems like a reasonable requirement. If people don't have at least LLVM 11, nothing else breaks; they just can't use coverage support. Given that coverage support currently requires a nightly compiler and LLVM 11 or newer, allowing it on a stable compiler built with LLVM 11 or newer seems like an improvement.
The [tracking issue](https://github.com/rust-lang/rust/issues/79121) and the [issue label A-code-coverage](https://github.com/rust-lang/rust/labels/A-code-coverage) link to a few open issues related to `instrument-coverage`, but none of them seem like showstoppers. All of them seem like improvements and refinements we can make after stabilization.
The original `-Z instrument-coverage` support went through a compiler-team MCP at https://github.com/rust-lang/compiler-team/issues/278 . Based on that, `@pnkfelix` suggested that this needed a stabilization PR and a compiler-team FCP.
Fix ret > 1 bound if shadowed by const
Prior to a change, it would only look at types in bounds. When it started looking for consts,
shadowing type variables with a const would cause an ICE, so we now only look at consts if
there are no types present.
cc `@compiler-errors`
Should fix #93553
Temporary fix for the layout of aligned enums
Fix for the issue #92464
~~I have been after this issue for quite some time now, and I have a temporary fix for it.
I think the current problem is [here](e75f96763f/compiler/rustc_middle/src/ty/layout.rs (L1305-L1310)): the created `tag` value might be wrong, because when I checked the `min` and `max` values they were always between 0..1, which results in a wrong size comparison a few lines further down.
I think the `min` and `max` values don't take `#[repr(align(8))]` into consideration and just act from the base values assigned inside the enum. If what I am saying is true, aligned enums have been created with the wrong layout for some time.~~
~~As stated in the title this is only a temporary fix and I think this needs further investigation; if someone wants to mentor it, I would like to work on that too.~~ 😸
**Edit: Weird some tests fail now going to close this for now...**
**Edit2: I made it work again.**
I think I figured out the main problem of the issue: layouts of aligned enums with custom discriminant types were not handled, which resulted in confusing behavior (such as this issue) down the line. This is a kinda hacky fix for the issue.
by using an opaque type obligation to bubble up comparisons between opaque types and other types
Also uses proper obligation causes so that the body id works, because for some reason NLL uses body ids for logic instead of just diagnostics.
Return an indexmap in `all_local_trait_impls` query
The data structure previously used here required that `DefId` be `Ord`. As part of #90317, we do not want `DefId` to implement `Ord`.
Fix two incorrect "it's" (typos in comments)
Found one of these while reading the documentation online. The other came up because it's in the same file.
Make dead code check a query.
The dead code check is run for each invocation of the compiler, even if no modifications were involved.
This PR makes the dead code check a query keyed on the module. This allows skipping the check when a module has not changed.
To perform this, a query `live_symbols_and_ignored_derived_traits` is introduced to encapsulate the global analysis of finding live symbols. The second query `check_mod_deathness` outputs diagnostics for each module based on this first query's results.
Continue work on associated const equality
This actually implements some more complex logic for assigning associated consts to values.
Inside projection candidates, it now defers to a separate function for either consts or
types. To reduce the amount of code, projections are now generic over `T`, where `T` is either a type or
a const. I can add some comments back later, but this was the fastest way to implement it.
It also now finds the correct type of consts in type_of.
---
The current main TODO is finding the const of the def id for the LeafDef.
Right now it works if the function isn't called, but once you use the trait impl with the bound it fails inside projection.
I was hoping to get some help in getting the `&'tcx ty::Const<'tcx>`, in addition to a bunch of other `todo!()`s which I think may not be hit.
r? `@oli-obk`
Updates #92827
rustc_errors: only box the `diagnostic` field in `DiagnosticBuilder`.
I happened to need to do the first change (replacing `allow_suggestions` with equivalent functionality on `Diagnostic` itself) as part of a larger change, and noticed that there's only two fields left in `DiagnosticBuilderInner`.
So with this PR, instead of a single pointer, `DiagnosticBuilder` is two pointers, which should work just as well for passing *it* by value (and may even work better wrt some operations, though probably not by much).
But anything that was already taking advantage of `DiagnosticBuilder` being a single pointer, and wrapping it further (e.g. `Result<T, DiagnosticBuilder>` w/ non-ZST `T`), ~~will probably see a slowdown~~, so I want to do a perf run before even trying to propose this.
Store def_id_to_hir_id as variant in hir_owner.
If `hir_owner` is `Owner(_)`, the `LocalDefId` is pointing to an owner, so the `ItemLocalId` is 0.
If the HIR node does not exist, we store `Phantom`.
Otherwise, we store the `HirId` associated with the `LocalDefId`.
Related to #89278
r? `@oli-obk`
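A sketch of the encoding described above (simplified standalone types; the actual variant and type names in rustc may differ):
```rust
#[derive(Clone, Copy, Debug)]
struct ItemLocalId(u32);

#[derive(Clone, Copy, Debug)]
struct HirId {
    local_id: ItemLocalId,
}

// What `hir_owner` conceptually stores per `LocalDefId`.
enum MaybeOwner<T> {
    Owner(T),        // the LocalDefId is itself an owner, so its ItemLocalId is 0
    NonOwner(HirId), // the HirId associated with the LocalDefId
    Phantom,         // no HIR node exists for this LocalDefId
}

impl<T> MaybeOwner<T> {
    // `def_id_to_hir_id` can be answered from this, without a separate table.
    fn item_local_id(&self) -> Option<ItemLocalId> {
        match self {
            MaybeOwner::Owner(_) => Some(ItemLocalId(0)),
            MaybeOwner::NonOwner(hir_id) => Some(hir_id.local_id),
            MaybeOwner::Phantom => None,
        }
    }
}

fn main() {
    let entries: Vec<MaybeOwner<()>> = vec![
        MaybeOwner::Owner(()),
        MaybeOwner::NonOwner(HirId { local_id: ItemLocalId(3) }),
        MaybeOwner::Phantom,
    ];
    for entry in &entries {
        println!("{:?}", entry.item_local_id());
    }
}
```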
Add note suggesting that predicate may be satisfied, but is not `const`
Not sure if we should be printing this in addition to, or perhaps _instead_ of the help message:
```
help: the trait `~const Add` is not implemented for `NonConstAdd`
```
Also added `ParamEnv::is_const` and `PolyTraitPredicate::is_const_if_const` and, in a separate commit, used those in other places instead of `== hir::Constness::Const`, etc.
r? `@fee1-dead`
Store a `Symbol` instead of an `Ident` in `AssocItem`
This is the same idea as #92533, but for `AssocItem` instead
of `VariantDef`/`FieldDef`.
With this change, we no longer have any uses of
`#[stable_hasher(project(...))]`
Check `const Drop` impls considering `~const` Bounds
This PR adds logic to trait selection to account for `~const` bounds in custom `impl const Drop` for types, elaborates the `const Drop` check in `rustc_const_eval` to check those bounds, and steals some drop linting fixes from #92922, thanks `@DrMeepster`.
r? `@fee1-dead` `@oli-obk` <sup>(edit: guess I can't request review from two people, lol)</sup>
since each of you wrote and reviewed #88558, respectively.
Since the logic here is more complicated than what existed, it's possible that this is a perf regression. But it works correctly with tests, and that makes me happy.
Fixes#92881
rustc_lint: Some early linting refactorings
The first one removes and renames some fields and methods from `EarlyContext`.
The second one uses the set of registered tools (for tool attributes and tool lints) in a more centralized way.
The third one removes creation of a fake `ast::Crate` from `fn pre_expansion_lint`.
Pre-expansion linting is done with per-module granularity on freshly loaded modules, and it previously synthesized an `ast::Crate` to visit non-root modules; now they are visited as modules.
The node ID used for pre-expansion linting is also made more precise (the loaded module ID is used).
Make `Decodable` and `Decoder` infallible.
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it
currently panics on failure (e.g. if the input is too short, or on a
bad `Result` discriminant), and in some places it returns an error
(e.g. on a bad `Option` discriminant). The number of places where
either happens is surprisingly small, just because the binary
representation has very little redundancy and a lot of input reading
can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the
`.rlink` file production, and there's a `FIXME` comment suggesting it
should change to a binary format, and (b) in a few tests in
non-fundamental ways. Indeed #85993 is open to remove it entirely.
And the top-level places in the compiler that call into decoding just
abort on error anyway. So the fallibility is providing little value, and
getting rid of it leads to some non-trivial performance improvements.
Much of this PR is pretty boring and mechanical. Some notes about
a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same
optimization for small counts that the impl for `Result<T, E>` has,
because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, VecDeque now all use
`collect`, which is nice; the one for `Vec` uses unsafe code, because
that gave better perf on some benchmarks.
r? `@bjorn3`
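To show the shape of the change, a self-contained sketch of an infallible decoder (illustrative trait definitions, not the real `rustc_serialize` ones):
```rust
// Decoding returns the value directly; malformed input panics instead of
// threading `Result` through every impl.
trait Decoder {
    fn read_u32(&mut self) -> u32;
}

trait Decodable<D: Decoder>: Sized {
    fn decode(d: &mut D) -> Self;
}

struct OpaqueDecoder<'a> {
    data: &'a [u8],
    pos: usize,
}

impl Decoder for OpaqueDecoder<'_> {
    fn read_u32(&mut self) -> u32 {
        // The binary format has little redundancy; a short input is simply a
        // bug, so panic rather than return an error.
        assert!(self.pos + 4 <= self.data.len(), "input too short");
        let mut bytes = [0u8; 4];
        bytes.copy_from_slice(&self.data[self.pos..self.pos + 4]);
        self.pos += 4;
        u32::from_le_bytes(bytes)
    }
}

impl<D: Decoder> Decodable<D> for u32 {
    fn decode(d: &mut D) -> Self {
        d.read_u32()
    }
}

fn main() {
    let mut d = OpaqueDecoder { data: &[1, 0, 0, 0, 2, 0, 0, 0], pos: 0 };
    let (a, b) = (u32::decode(&mut d), u32::decode(&mut d));
    println!("{a} {b}");
}
```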
I have found this code very confusing at times. This commit clarifies
things.
In particular, the commit explains the requirements that the `Borrow`
impls put on the `Eq` and `Hash` impls, which are non-obvious. And it
puts the `Borrow` impls first, since they force `Eq` and `Hash` to have
particular forms.
The commit also notes `TyS`'s uniqueness requirements.
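The same interplay can be seen in a tiny generic interner (plain `std` types, nothing rustc-specific): the set is queried through `Borrow`, so `Eq` and `Hash` must agree on the borrowed and owned forms.
```rust
use std::collections::HashSet;

// A tiny string interner. `HashSet<Box<str>>` can be queried with a plain
// `&str` only because `Box<str>: Borrow<str>`, and because the `Eq`/`Hash`
// impls of `Box<str>` and `str` hash and compare exactly the same data.
#[derive(Default)]
struct Interner {
    set: HashSet<Box<str>>,
}

impl Interner {
    fn intern(&mut self, s: &str) -> &str {
        if !self.set.contains(s) {
            self.set.insert(s.into());
        }
        self.set.get(s).unwrap()
    }
}

fn main() {
    let mut interner = Interner::default();
    let a = interner.intern("Ty") as *const str;
    let b = interner.intern("Ty") as *const str;
    assert_eq!(a, b); // interning: the same pointer comes back both times
}
```
If `Eq`/`Hash` compared something that `Borrow` does not expose (say, a pointer), lookups through the borrowed form would silently miss existing entries, which is exactly the kind of requirement the new comments spell out.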
Rollup of 17 pull requests
Successful merges:
- #91032 (Introduce drop range tracking to generator interior analysis)
- #92856 (Exclude "test" from doc_auto_cfg)
- #92860 (Fix errors on blanket impls by ignoring the children of generated impls)
- #93038 (Fix star handling in block doc comments)
- #93061 (Only suggest adding `!` to expressions that can be macro invocation)
- #93067 (rustdoc mobile: fix scroll offset when jumping to internal id)
- #93086 (Add tests to ensure that `let_chains` works with `if_let_guard`)
- #93087 (Fix src/test/run-make/raw-dylib-alt-calling-convention)
- #93091 (⬆ chalk to 0.76.0)
- #93094 (src/test/rustdoc-json: Check for `struct_field`s in `variant_tuple_struct.rs`)
- #93098 (Show a more informative panic message when `DefPathHash` does not exist)
- #93099 (rustdoc: auto create output directory when "--output-format json")
- #93102 (Pretty printer algorithm revamp step 3)
- #93104 (Support --bless for pp-exact pretty printer tests)
- #93114 (update comment for `ensure_monomorphic_enough`)
- #93128 (Add script to prevent point releases with same number as existing ones)
- #93136 (Backport the 1.58.1 release notes to master)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Show a more informative panic message when `DefPathHash` does not exist
This should hopefully make it easier to debug incremental compilation
bugs like #93096 without affecting performance.
Introduce drop range tracking to generator interior analysis
This PR addresses cases such as this one from #57478:
```rust
struct Foo;
impl !Send for Foo {}
let _: impl Send = || {
    let guard = Foo;
    drop(guard);
    yield;
};
```
Previously, the `generator_interior` pass would unnecessarily include the type `Foo` in the generator because it was not aware of the behavior of `drop`. We fix this issue by introducing a drop range analysis that finds portions of the code where a value is guaranteed to be dropped. If a value is dropped at all suspend points, then it is no longer included in the generator type. Note that we are using "dropped" in a generic sense to include any case in which a value has been moved. That is, we do not only look at calls to the `drop` function.
There are several phases to the drop tracking algorithm, and we'll go into more detail below.
1. Use `ExprUseVisitor` to find values that are consumed and borrowed.
2. `DropRangeVisitor` uses consume and borrow information to gather drop and reinitialization events, as well as build a control flow graph.
3. We then propagate drop and reinitialization information through the CFG until we reach a fix point (see `DropRanges::propagate_to_fixpoint`).
4. When recording a type (see `InteriorVisitor::record`), we check the computed drop ranges to see if that value is definitely dropped at the suspend point. If so, we skip including it in the type.
## 1. Use `ExprUseVisitor` to find values that are consumed and borrowed.
We use `ExprUseVisitor` to identify the places where values are consumed. We track both the `hir_id` of the value, and the `hir_id` of the expression that consumes it. For example, in the expression `[Foo]`, the `Foo` is consumed by the array expression, so after the array expression we can consider the `Foo` temporary to be dropped.
In this process, we also collect values that are borrowed. The reason is that the MIR transform for generators conservatively assumes anything borrowed is live across a suspend point (see `rustc_mir_transform::generator::locals_live_across_suspend_points`). We match this behavior here as well.
## 2. Gather drop events, reinitialization events, and control flow graph
After finding the values of interest, we perform a post-order traversal over the HIR tree to find the points where these values are dropped or reinitialized. We use the post-order index of each event because this is how the existing generator interior analysis refers to the position of suspend points and the scopes of variables.
During this traversal, we also record branching and merging information to handle control flow constructs such as `if`, `match`, and `loop`. This is necessary because values may be dropped along some control flow paths but not others.
## 3. Iterate to fixed point
The previous pass found the interesting events and locations, but now we need to find the actual ranges where things are dropped. Upon entry, we have a list of nodes ordered by their position in the post-order traversal. Each node has a set of successors. For each node we additionally keep a bitfield with one bit per potentially consumed value. The bit is set if the value is dropped along all paths entering this node.
To compute the drop information, we first reverse the successor edges to find each node's predecessors. Then we iterate through each node, and for each node we set its dropped value bitfield to the intersection of all incoming dropped value bitfields.
If any bitfield for any node changes, we re-run the propagation loop again.
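A toy sketch of this propagation step (plain `u64` bitfields, reinitialization ignored; the real `DropRanges` code is more involved):
```rust
// One bit per tracked value; bit i set at a node means "value i is definitely
// dropped here", taking both local drop events and all predecessors into account.
fn propagate_to_fixpoint(preds: &[Vec<usize>], local_drops: &[u64]) -> Vec<u64> {
    let mut dropped = local_drops.to_vec();
    loop {
        let mut changed = false;
        for node in 0..dropped.len() {
            // Intersection of all incoming "dropped" bitfields.
            let incoming = if preds[node].is_empty() {
                0
            } else {
                preds[node].iter().fold(u64::MAX, |acc, &p| acc & dropped[p])
            };
            let new = local_drops[node] | incoming;
            if new != dropped[node] {
                dropped[node] = new;
                changed = true;
            }
        }
        if !changed {
            return dropped;
        }
    }
}

fn main() {
    // Diamond CFG: 0 -> {1, 2} -> 3. Value 0 is dropped on both branches,
    // value 1 only on one branch, so only bit 0 is set at node 3.
    let preds = vec![vec![], vec![0], vec![0], vec![1, 2]];
    let local_drops = vec![0b00, 0b01, 0b11, 0b00];
    assert_eq!(propagate_to_fixpoint(&preds, &local_drops)[3], 0b01);
}
```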
## 4. Ignore dropped values across suspend points
At this point we have a data structure where we can ask whether a value is guaranteed to be dropped at any post order index for the HIR tree. We use this information in `InteriorVisitor` to check whether a value in question is dropped at a particular suspend point. If it is, we do not include that value's type in the generator type.
Note that we had to augment the region scope tree to include all yields in scope, rather than just the last one as we did before.
r? `@nikomatsakis`
improve `_` constants in item signature handling
removing the "type" from the error messages does slightly worsen the error messages for types, but figuring out whether the placeholder is for a type or a constant and correctly dealing with that seemed fairly difficult to me so I took the easy way out ✨ Imo the error message is still clear enough.
r? `@BoxyUwU` cc `@estebank`
- Also rename `trivial_const_drop` to match the style of other functions in
the util module.
- Also add a test for `const Drop` that doesn't depend on a `~const`
bound.
- Also comment a bit why we remove the const bound during dropck impl
check.
Remove some unused ordering derivations based on `DefId`
Like #93018, this removes some unused/unneeded ordering derivations as part of ongoing work on #90317. Here, these changes are aimed at making https://github.com/rust-lang/rust/pull/90749 easier to review, test, and merge.
r? `@cjgillot`
Formally implement let chains
## Let chains
My longest and hardest contribution since #64010.
Thanks to `@Centril` for creating the RFC and special thanks to `@matthewjasper` for helping me since the beginning of this journey. In fact, `@matthewjasper` did much of the complicated MIR stuff so it's true to say that this feature wouldn't be possible without him. Thanks again, `@matthewjasper`!
With the changes proposed in this PR, it will be possible to chain let expressions alongside local variable declarations or ordinary conditional expressions. In other words, do much of what the `if_chain` crate already does.
## Other considerations
* `if let guard` and `let ... else` features need special care and should be handled in a following PR.
* Irrefutable patterns are allowed within a let chain context
* ~~Three Clippy lints were already converted to start dogfooding and help detect possible corner cases~~
cc #53667
Directly use ConstValue for single literals in blocks
Addresses the minimal repro in https://github.com/rust-lang/rust/issues/92186, but doesn't fix the underlying problem (which would be solved by solving the anon subst problem afaict).
I do, however, think that it makes sense in general to treat single literals in anon blocks as const values directly, especially in light of the problem that the issue refers to (anon const evaluation being postponed until infer variables in substs can be resolved, which was introduced by https://github.com/rust-lang/rust/pull/90023), i.e. while we do get warnings for those unnecessary braces, we should try to avoid errors caused by those braces if possible.
Fix ICEs related to `Deref<Target=[T; N]>` on newtypes
1. Stash a const infer's type into the canonical var during canonicalization, so we can recreate the fresh const infer with that same type.
For example, given `[T; _]` we know `_` is a `usize`. If we go from infer => canonical => infer, we shouldn't forget that variable is a usize.
Fixes #92626, fixes #83704
2. Don't stash the autoderef'd slice type that we get from method lookup, but instead recreate it during method confirmation. We need to do this because the type we receive back after picking the method references a type variable that does not exist after probing is done.
Fixes#92637
... A better solution for the second issue would be to actually _properly_ implement `Deref` for `[T; N]` instead of fixing this autoderef hack to stop leaking inference variables. But I actually looked into this, and there are many complications with const impls.
Replace use of `ty()` on term and use it in more places. This will allow more flexibility in the
future, but I'm slightly worried that it allows items which are consts in places which only accept types.
Implement `#[rustc_must_implement_one_of]` attribute
This PR adds a new attribute — `#[rustc_must_implement_one_of]` that allows changing the "minimal complete definition" of a trait. It's similar to GHC's minimal `{-# MINIMAL #-}` pragma, though `#[rustc_must_implement_one_of]` is weaker atm.
Such an attribute has long been wanted. It can, for example, be used in the `Read` trait to make the transition to the recently added `read_buf` easier:
```rust
#[rustc_must_implement_one_of(read, read_buf)]
pub trait Read {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
        let mut buf = ReadBuf::new(buf);
        self.read_buf(&mut buf)?;
        Ok(buf.filled_len())
    }

    fn read_buf(&mut self, buf: &mut ReadBuf<'_>) -> Result<()> {
        default_read_buf(|b| self.read(b), buf)
    }
}

impl Read for Ty0 {}
//^ This will fail to compile even though all `Read` methods have default implementations

// Both of these will compile just fine
impl Read for Ty1 {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize> { /* ... */ }
}

impl Read for Ty2 {
    fn read_buf(&mut self, buf: &mut ReadBuf<'_>) -> Result<()> { /* ... */ }
}
```
For now, this is implemented as an internal attribute to start experimenting on the design of this feature. In the future we may want to extend it:
- Allow arbitrary requirements like `a | (b & c)`
- Allow multiple requirements like
  ```rust
  #[rustc_must_implement_one_of(a, b)]
  #[rustc_must_implement_one_of(c, d)]
  ```
- Make it appear in rustdoc documentation
- Change the syntax?
- Etc
Eventually, we should make an RFC and make this (or rather similar) attribute public.
---
I'm fairly new to compiler development and not at all sure if the implementation makes sense, but at least it passes tests :)
ProjectionPredicate should be able to handle both associated types and consts, so this adds the
first step of that. It mainly just pipes types all the way down; I'm not entirely sure how to handle
consts, but hopefully that'll come with time.
Replace `NestedVisitorMap` with generic `NestedFilter`
This is an attempt to make the `intravisit::Visitor` API simpler and "more const" with regard to nested visiting.
With this change, `intravisit::Visitor` does not visit nested things by default, unless you specify `type NestedFilter = nested_filter::OnlyBodies` (or `All`). `nested_visit_map` returns `Self::Map` instead of `NestedVisitorMap<Self::Map>`. It panics by default (unreachable if `type NestedFilter` is omitted).
One somewhat tricky thing here is that `nested_filter::{OnlyBodies, All}` live in `rustc_middle` so that they may have `type Map = map::Map` and so that `impl Visitor`s never need to specify `type Map` - it has a default of `Self::NestedFilter::Map`.
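A standalone toy model of the filter idea (plain Rust, not the real `intravisit` traits, which additionally fix the `Map` type through an associated-type default):
```rust
// The visitor opts into nested visiting by picking a filter type; the choice
// is made statically through an associated type rather than a runtime value.
trait NestedFilter {
    const VISIT_NESTED: bool;
}
struct NoNested;
struct OnlyBodies;
impl NestedFilter for NoNested { const VISIT_NESTED: bool = false; }
impl NestedFilter for OnlyBodies { const VISIT_NESTED: bool = true; }

trait Visitor: Sized {
    type NestedFilter: NestedFilter;

    fn visit_item(&mut self, name: &str, nested: &[&str]) {
        println!("visiting {name}");
        if <Self::NestedFilter as NestedFilter>::VISIT_NESTED {
            for n in nested {
                self.visit_item(n, &[]);
            }
        }
    }
}

struct Shallow;
impl Visitor for Shallow { type NestedFilter = NoNested; }

struct Deep;
impl Visitor for Deep { type NestedFilter = OnlyBodies; }

fn main() {
    Shallow.visit_item("outer", &["inner"]); // prints only "visiting outer"
    Deep.visit_item("outer", &["inner"]);    // also prints "visiting inner"
}
```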
Remove deprecated LLVM-style inline assembly
The `llvm_asm!` macro was deprecated back in #87590 (1.56.0), with the intention to remove
it once `asm!` was stabilized, which already happened in #91728 (1.59.0). Now it
is time to remove `llvm_asm!` to avoid continued maintenance cost.
Closes #70173.
Closes #92794.
Closes #87612.
Closes #82065.
cc `@rust-lang/wg-inline-asm`
r? `@Amanieu`
Link impl items to corresponding trait items in late resolver.
Hygienically linking trait impl items to declarations in the trait can be done directly by the late resolver. In fact, it is already done to diagnose unknown items.
This PR uses this resolution work and stores the `DefId` of the trait item in the HIR. This avoids having to do this resolution manually later.
r? `@matthewjasper`
Related to #90639. The added `trait_item_id` field can be moved to `ImplItemRef` to be used directly by your PR.
Prefer projection candidates instead of param_env candidates for Sized predicates
Fixes#89352
Also includes some drive-by logging and verbose printing changes that I found useful when debugging this, but I can remove them if needed.
This is a little hacky - but imo no more than the rest of `candidate_should_be_dropped_in_favor_of`. Importantly, in a Chalk-like world, both candidates should be completely compatible.
r? `@nikomatsakis`
Closure capture cleanup & refactor
Follow up of #89648
Each commit is self-contained and the rationale/changes are documented in the commit message, so it's advisable to review commit by commit.
The code is significantly cleaner (at least IMO), but that could have some perf implication, so I'd suggest a perf run.
r? `@wesleywiser`
cc `@arora-aman`
[code coverage] Fix missing dead code in modules that are never called
The issue here is that the logic used to determine which CGU to put the dead function stubs in doesn't handle cases where a module is never assigned to a CGU (which is what happens when all of the code in the module is dead).
The partitioning logic also caused issues in #85461 where inline functions were duplicated into multiple CGUs resulting in duplicate symbols.
This commit fixes the issue by removing the complex logic used to assign dead code stubs to CGUs and replaces it with a much simpler model: we pick one CGU to hold all the dead code stubs. We pick a CGU which has exported items, which increases the likelihood the linker won't throw away our dead functions, and we pick the smallest to minimize the impact on compilation times for crates with very large CGUs.
Fixes #91661, fixes #86177, fixes #85718, fixes #79622
r? `@tmandry`
cc `@richkadel`
This PR is not urgent so please don't let it interrupt your holidays! 🎄🎁
The field is also renamed from `ident` to `name`. In most cases,
we don't actually need the `Span`. A new `ident` method is added
to `VariantDef` and `FieldDef`, which constructs the full `Ident`
using `tcx.def_ident_span()`. This method is used in the cases
where we actually need an `Ident`.
This makes incremental compilation properly track changes
to the `Span`, without all of the invalidations caused by storing
a `Span` directly via an `Ident`.
Normalize struct tail type when checking Pointee trait
Let's go ahead and implement the FIXMEs by properly normalizing the struct-tail type when satisfying a Pointee obligation. This should fix the ICE when we try to calculate a layout depending on `<Ty as Pointee>::Metadata` later.
Fixes #92128, fixes #92577
Additionally, mark the obligation as ambiguous if there are any infer types in that struct-tail type. This has the effect of causing `<_ as Pointee>::Metadata` to be properly replaced with an infer variable ([here](https://github.com/rust-lang/rust/blob/master/compiler/rustc_trait_selection/src/traits/project.rs#L813)) and registered as an obligation... this turns out to be very important in unifying function parameters with formals that are assoc types.
Fixes#91446
Ensure that `Fingerprint` caching respects hashing configuration
Fixes#92266
In some `HashStable` impls, we use a cache to avoid re-computing
the same `Fingerprint` from the same structure (e.g. an `AdtDef`).
However, the `StableHashingContext` used can be configured to
perform hashing in different ways (e.g. skipping `Span`s). This
configuration information is not included in the cache key,
which will cause an incorrect `Fingerprint` to be used if
we hash the same structure with different `StableHashingContext`
settings.
To fix this, the configuration settings of `StableHashingContext`
are split out into a separate `HashingControls` struct. This
struct is used as part of the cache key, ensuring that our caches
always produce the correct result for the given settings.
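A small sketch of the resulting cache shape (toy types and names; the real cache lives behind the `HashStable` impls):
```rust
use std::collections::HashMap;

// The knobs that change how hashing behaves (e.g. whether Spans are hashed).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct HashingControls {
    hash_spans: bool,
}

#[derive(Default)]
struct FingerprintCache {
    // The configuration is part of the key, so different settings can never
    // observe each other's cached value.
    cached: HashMap<(usize, HashingControls), u64>,
}

impl FingerprintCache {
    fn get_or_compute(
        &mut self,
        id: usize,
        opts: HashingControls,
        compute: impl FnOnce() -> u64,
    ) -> u64 {
        *self.cached.entry((id, opts)).or_insert_with(compute)
    }
}

fn main() {
    let mut cache = FingerprintCache::default();
    let with_spans = HashingControls { hash_spans: true };
    let without_spans = HashingControls { hash_spans: false };
    let a = cache.get_or_compute(1, with_spans, || 111);
    let b = cache.get_or_compute(1, without_spans, || 222);
    assert_ne!(a, b); // same structure, different settings, different cache entries
}
```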
With this in place, we now turn off `Span` hashing during the
entire process of computing the hash included in legacy symbols.
This currently has no effect, but will matter when a future PR
starts hashing more `Span`s that we currently skip.
Make DefId to AccessLevel map in resolve for export
hir_id to accesslevel in resolve and applied in privacy
using local def id
removing tracing probes
making function not recursive and adding comments
Move most of Exported/Public res to rustc_resolve
moving public/export res to resolve
fix missing stability attributes in core, std and alloc
move code to access_levels.rs
return for some kinds instead of going through them
Export correctness, macro changes, comments
add comment for import binding
add comment for import binding
rename to access level visitor, remove comments, move fn as closure, remove new_key
fmt
fix rebase
fix rebase
fmt
fmt
fix: move macro def to rustc_resolve
fix: reachable AccessLevel for enum variants
fmt
fix: missing stability attributes for other architectures
allow unreachable pub in rustfmt
fix: missing impl access level + renaming export to reexport
Missing impl access level was found thanks to a test in clippy
rustdoc: Introduce a resolver cache for sharing data between early doc link resolution and later passes
The refactoring parts of https://github.com/rust-lang/rust/pull/88679, shouldn't cause any slowdowns.
r? `@jyn514`
Region info is completely unnecessary for upvar capture kind computation
and is only needed to create the final upvar tuple ty. Doing so makes
creation of `UpvarCapture` very cheap and exposes further cleanup opportunities.
Remove `NullOp::Box`
Follow up of #89030 and MCP rust-lang/compiler-team#460.
~1 month later nothing seems to be broken, apart from a small regression that #89332 (1aac85bb716c09304b313d69d30d74fe7e8e1a8e) shows could be regained by removing the diverging path, so it shall be safe to continue and remove `NullOp::Box` completely.
r? `@jonas-schievink`
`@rustbot` label T-compiler
Continue supporting -Z instrument-coverage for compatibility for now,
but show a deprecation warning for it.
Update uses and documentation to use the -C option.
Move the documentation from the unstable book to stable rustc
documentation.
Instead of special-casing mutable pointers/references, we
now support general generic types (currently, we handle
`ty::Ref`, `ty::RawPtr`, and `ty::Adt`)
When a `ty::Adt` is involved, we show an additional note
explaining which of the type's generic parameters is
invariant (e.g. the `T` in `Cell<T>`). Currently, we don't
explain *why* a particular generic parameter ends up becoming
invariant. In the general case, this could require printing
a long 'backtrace' of types, so doing this would be
more suitable for a follow-up PR.
We still only handle the case where our variance switches
to `ty::Invariant`.
rustc_metadata: Encode list of all crate's traits into metadata
While working on https://github.com/rust-lang/rust/pull/88679 I noticed that rustdoc is casually doing something quite expensive, something that is used only for error reporting in rustc - collecting all traits from all crates in the dependency tree.
This PR trades some minor extra time spent by metadata encoder in rustc for major gains for rustdoc (and for rustc runs with errors, which execute the `all_traits` query for better diagnostics).
CTFE eval_fn_call: use FnAbi to determine argument skipping and compatibility
This makes use of the `FnAbi` type in CTFE/Miri, which `@eddyb` has been saying for years is what we should do.^^ `FnAbi` is used to
- determine which arguments to skip (rather than the previous heuristic of skipping ZST arguments with the Rust ABI)
- impose further restrictions on whether caller and callee are consistent in how a given argument is passed
I was hoping it would also simplify the code, but that is not the case -- the previous type compatibility checks are still required (AFAIK), only the ZST skipping is gone and that took barely any code. We also need some hacks because `FnAbi` assumes a certain way of implementing `caller_location` (by passing extra arguments), but Miri can just read the caller location from the call stack so it doesn't need those arguments. (The fact that every backend has to separately implement support for these arguments seems suboptimal -- looks like this might have been better implemented on the MIR level.) To avoid having to implement those unnecessary arguments in Miri, we just compute *whether* the argument is present on the caller/callee side, but don't actually pass that argument around.
I have no idea if this looks the way `@eddyb` thinks it should look... but it makes Miri's test suite pass. ;)
One of rustc's tests fails unfortunately (`ui/const-generics/issues/issue-67739.rs`), some const generic code that is evaluated too early -- I think that should raise `TooGeneric` but instead it ICEs. My assumption is this is some FnAbi code that has not been properly tested on polymorphic code, but it might also be me calling that FnAbi code the wrong way.
r? `@oli-obk` `@eddyb`
Fixes https://github.com/rust-lang/rust/issues/56166
Miri PR at https://github.com/rust-lang/miri/pull/1928
Store a `DefId` instead of an `AdtDef` in `AggregateKind::Adt`
The `AggregateKind` enum ends up in the final mir `Body`. Currently,
any changes to `AdtDef` (regardless of how significant they are)
will legitimately cause the overall result of `optimized_mir` to change,
invalidating any codegen re-use involving that mir.
This will get worse once we start hashing the `Span` inside `FieldDef`
(which is itself contained in `AdtDef`).
To try to reduce these kinds of invalidations, this commit changes
`AggregateKind::Adt` to store just the `DefId`, instead of the full
`AdtDef`. This allows the result of `optimized_mir` to be unchanged
if the `AdtDef` changes in a way that doesn't actually affect any
of the MIR we build.
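Schematically, the change looks like this (toy types; the real enum variants carry more fields than shown):
```rust
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct DefId(u32);

#[derive(Hash)]
struct AdtDef {
    did: DefId,
    // variants, field defs (and soon their Spans), flags, ...
}

// Before: the MIR hash covered the whole AdtDef, so any AdtDef change
// invalidated cached `optimized_mir` results.
enum AggregateKindBefore {
    Adt(AdtDef), // plus variant index, substs, ... in the real enum
}

// After: only the DefId is stored, so unrelated AdtDef changes no longer
// perturb the MIR hash.
enum AggregateKindAfter {
    Adt(DefId), // plus variant index, substs, ... in the real enum
}

fn main() {
    let _before = AggregateKindBefore::Adt(AdtDef { did: DefId(7) });
    let _after = AggregateKindAfter::Adt(DefId(7));
}
```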
Update chalk to 0.75.0
- Compute flags in `intern_ty`
- Remove `tracing-serde` from `PERMITTED_DEPENDENCIES`
- Bump `tracing-tree` to 0.2.0
- Bump `tracing-subscriber` to 0.3.3
The issue here is that the logic used to determine which CGU to put the
dead function stubs in doesn't handle cases where a module is never
assigned to a CGU.
The partitioning logic also caused issues in #85461 where inline
functions were duplicated into multiple CGUs resulting in duplicate
symbols.
This commit fixes the issue by removing the complex logic used to assign
dead code stubs to CGUs and replaces it with a much simpler model: we
pick one CGU to hold all the dead code stubs. We pick a CGU which has
exported items which increases the likelihood the linker won't throw
away our dead functions and we pick the smallest to minimize the impact
on compilation times for crates with very large CGUs.
Fixes #86177, fixes #85718, fixes #79622
This makes `Obligation` two words bigger, but avoids allocating a lot of
the time.
I previously tried this in #73983 and it didn't help much, but local
timings look more promising now.
Remove `in_band_lifetimes` from `rustc_middle`
See #91867
This was mostly straightforward. In several places, I take advantage
of the fact that lifetimes are non-hygienic: a macro declares the
'tcx' lifetime, which is then used in types passed in as macro
arguments.
Remove `SymbolStr`
This was originally proposed in https://github.com/rust-lang/rust/pull/74554#discussion_r466203544. As well as removing the icky `SymbolStr` type, it allows the removal of a lot of `&` and `*` occurrences.
Best reviewed one commit at a time.
r? `@oli-obk`
Add user seed to `-Z randomize-layout`
Allows users of `-Z randomize-layout` to provide `-Z layout-seed=<seed>` in order to further randomize type layout. An extension of [compiler-team/#457](https://github.com/rust-lang/compiler-team/issues/457); allows users to change struct layouts without changing code and hoping that item path hashes change, aiding in detecting layout errors.
hir: Do not introduce dummy type names for `extern` blocks in def paths
Use a separate nameless `DefPathData` variant instead.
Extracted from https://github.com/rust-lang/rust/pull/91795.
Implement normalize_erasing_regions queries in terms of 'try' version
Attempt to lessen performance regression caused by https://github.com/rust-lang/rust/pull/91255
r? `@jackh726`
extend `simplify_type`
might cause a slight perf improvement and IMO more accurately represents what types there are.
considering that I was going to use this in #85048 it seems like we might need this in the future anyways 🤷