ScalarMaybeUninit is explicitly hexadecimal in its formatting
This makes `ScalarMaybeUninit` consistent with `Scalar` after the changes in https://github.com/rust-lang/rust/pull/94189.
r? ``@oli-obk``
Fix several asm! related issues
This is a combination of several fixes, each split into a separate commit. Splitting these into PRs is not practical since they conflict with each other.
Fixes #92378, fixes #85247
r? ``@nagisa``
CTFE engine: Scalar: expose size-generic to_(u)int methods
This matches the size-generic constructors `Scalar::from_(u)int`, and it would have helped in https://github.com/rust-lang/miri/pull/1978.
r? `@oli-obk`
safely `transmute<&List<Ty<'tcx>>, &List<GenericArg<'tcx>>>`
This PR has 3 relevant steps, which are split into distinct commits.
The first commit now interns `List<Ty<'tcx>>` and `List<GenericArg<'tcx>>` together, potentially reusing memory while allowing free conversions between these two using `List<Ty<'tcx>>::as_substs()` and `SubstsRef<'tcx>::try_as_type_list()`.
Using this, we then use `&'tcx List<Ty<'tcx>>` instead of a `SubstsRef<'tcx>` for tuple fields, simplifying a bunch of code.
Finally, as tuple fields and other generic arguments now use a different `TypeFoldable<'tcx>` impl, we optimize the impl for `List<Ty<'tcx>>`, improving perf by slightly less than 1% in tuple-heavy benchmarks.
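As a rough sketch of the conversions this enables (names follow the PR text; the exact signatures and import paths are assumptions, not the real compiler API):
```rust
use rustc_middle::ty::subst::SubstsRef;
use rustc_middle::ty::{List, Ty};

fn example<'tcx>(tys: &'tcx List<Ty<'tcx>>) {
    // Viewing a list of types as a list of generic args is always possible...
    let substs: SubstsRef<'tcx> = tys.as_substs();
    // ...while the reverse only succeeds if every generic arg is in fact a type.
    let _back: Option<&'tcx List<Ty<'tcx>>> = substs.try_as_type_list();
}
```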
This reverts commit a240ccd81c, reversing
changes made to 393fdc1048.
This PR was likely responsible for a relatively large regression in
dist-x86_64-msvc-alt builder times, from approximately 1.7 to 2.8 hours,
bringing that builder into the pool of the slowest builders we currently have.
This seems to be limited to the alt builder because it requires the parallel compiler
to be enabled, which likely leads to slow LLVM compilation for some reason.
Improve `unused_unsafe` lint
I’m going to add some motivation and explanation below, particularly pointing out the changes in behavior from this PR.
_Edit:_ Looking for existing issues, it looks like this PR fixes #88260.
_Edit2:_ Now also contains code that closes #90776.
Main motivation: Fixes some issues with the current behavior. This PR
more-or-less completely re-implements the `unused_unsafe` lint; it’s also only
done in the MIR version of the lint, so the set of tests for the `-Zthir-unsafeck`
version no longer succeeds (and is thus disabled; see `lint-unused-unsafe.rs`).
On current nightly,
```rs
unsafe fn unsf() {}
fn inner_ignored() {
    unsafe {
        #[allow(unused_unsafe)]
        unsafe {
            unsf()
        }
    }
}
```
doesn’t create any warnings. This situation is not unrealistic to come by; the
inner `unsafe` block could e.g. come from a macro. Actually, this PR even
includes removal of one unused `unsafe` in the standard library that was missed
in a similar situation. (The inner `unsafe` coming from an external macro hides
the warning, too.)
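For illustration (a made-up example, not the actual std case), the nested block can be hidden behind a macro like this:
```rs
unsafe fn unsf() {}

// The inner `unsafe` block comes from the macro expansion...
macro_rules! call_unsf {
    () => {
        unsafe { unsf() }
    };
}

fn from_macro() {
    // ...so this outer block is the redundant one, yet current nightly
    // reports neither of them.
    unsafe {
        call_unsf!();
    }
}
```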
The reason behind this problem is how the check currently works:
* While generating MIR, it already skips nested unsafe blocks (i.e. unsafe
nested in other unsafe) so that the inner one is always the one considered
unused
* To differentiate the cases of no unsafe operations inside the `unsafe` vs.
a surrounding `unsafe` block, there’s some ad-hoc magic walking up the HIR to
look for surrounding used `unsafe` blocks.
There are a lot of problems with this approach besides the one presented above.
E.g. MIR building uses checks for the `unsafe_op_in_unsafe_fn` lint to decide
early whether or not `unsafe` blocks in an `unsafe fn` are redundant and ought
to be removed.
```rs
unsafe fn granular_disallow_op_in_unsafe_fn() {
    unsafe {
        #[deny(unsafe_op_in_unsafe_fn)]
        {
            unsf();
        }
    }
}
```
```
error: call to unsafe function is unsafe and requires unsafe block (error E0133)
--> src/main.rs:13:13
|
13 | unsf();
| ^^^^^^ call to unsafe function
|
note: the lint level is defined here
--> src/main.rs:11:16
|
11 | #[deny(unsafe_op_in_unsafe_fn)]
| ^^^^^^^^^^^^^^^^^^^^^^
= note: consult the function's documentation for information on how to avoid undefined behavior
warning: unnecessary `unsafe` block
--> src/main.rs:10:5
|
9 | unsafe fn granular_disallow_op_in_unsafe_fn() {
| --------------------------------------------- because it's nested under this `unsafe` fn
10 | unsafe {
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
```
Here, the intermediate `unsafe` was ignored, even though it contains an unsafe
operation that is not allowed to happen in an `unsafe fn` without an additional `unsafe` block.
Closures were also problematic, and the workaround/algorithms used on current
nightly didn’t work properly. (I skipped trying to fully understand what it was
supposed to do, because this PR uses a completely different approach.)
```rs
fn nested() {
    unsafe {
        unsafe { unsf() }
    }
}
```
```
warning: unnecessary `unsafe` block
--> src/main.rs:10:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
10 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
```
vs
```rs
fn nested() {
    let _ = || unsafe {
        let _ = || unsafe { unsf() };
    };
}
```
```
warning: unnecessary `unsafe` block
--> src/main.rs:9:16
|
9 | let _ = || unsafe {
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
warning: unnecessary `unsafe` block
--> src/main.rs:10:20
|
10 | let _ = || unsafe { unsf() };
| ^^^^^^ unnecessary `unsafe` block
```
*note that this warning kind-of suggests that **both** unsafe blocks are redundant*
--------------------------------------------------------------------------------
I also dislike the fact that it always suggests keeping the outermost `unsafe`.
E.g. for
```rs
fn granularity() {
    unsafe {
        unsafe { unsf() }
        unsafe { unsf() }
        unsafe { unsf() }
    }
}
```
I prefer it if `rustc` suggests removing the more-coarse outer-level `unsafe`
instead of the fine-grained inner `unsafe` blocks, which is what it currently suggests on nightly:
```
warning: unnecessary `unsafe` block
--> src/main.rs:10:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
10 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
warning: unnecessary `unsafe` block
--> src/main.rs:11:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
10 | unsafe { unsf() }
11 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
warning: unnecessary `unsafe` block
--> src/main.rs:12:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
...
12 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
```
--------------------------------------------------------------------------------
Needless to say, this PR addresses all these points. For context, as far as my
understanding goes, the main advantage of skipping inner unsafe blocks was that
a test case like
```rs
fn top_level_used() {
    unsafe {
        unsf();
        unsafe { unsf() }
        unsafe { unsf() }
        unsafe { unsf() }
    }
}
```
should generate some warning because there’s redundant nested `unsafe`; however,
every single `unsafe` block _does_ contain some statement that uses it. Of course
this PR doesn’t aim to change the warnings on this kind of code example, because
the current behavior, warning on all the inner `unsafe` blocks, makes sense in this case.
As mentioned, during MIR building all the unsafe blocks *are* kept now, and usage
is attributed to them. The way to still generate a warning like
```
warning: unnecessary `unsafe` block
--> src/main.rs:11:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
10 | unsf();
11 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
warning: unnecessary `unsafe` block
--> src/main.rs:12:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
...
12 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
warning: unnecessary `unsafe` block
--> src/main.rs:13:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
...
13 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
```
in this case is by emitting an `unused_unsafe` warning for all of the `unsafe`
blocks that are _within a **used** unsafe block_.
The previous code already had a little HIR traversal anyway to collect a set of
all the unsafe blocks (in order to determine afterwards which ones are unused).
This PR uses such a traversal to do additional things, including logic
like _always_ warning for an `unsafe` block that’s inside of another **used**
unsafe block. The traversal is expanded to include nested closures in the same go,
which simplifies a lot of things.
The whole logic around `unsafe_op_in_unsafe_fn` is a little complicated; there are
some test cases for corner cases in this PR. (The implementation involves
differentiating whether a used unsafe block was used exclusively by
operations where `allow(unsafe_op_in_unsafe_fn)` was active.) The main goal was
to make sure that code should compile successfully if all the `unused_unsafe`-warnings
are addressed _simultaneously_ (by removing the respective `unsafe` blocks)
no matter how complicated the patterns of `unsafe_op_in_unsafe_fn` being
disallowed and allowed throughout the function are.
--------------------------------------------------------------------------------
One noteworthy design decision I took here: An `unsafe` block
with `allow(unused_unsafe)` **is considered used** for the purposes of
linting about redundant contained unsafe blocks. So while
```rs
fn granularity() {
    unsafe { //~ ERROR: unnecessary `unsafe` block
        unsafe { unsf() }
        unsafe { unsf() }
        unsafe { unsf() }
    }
}
```
warns for the outer `unsafe` block,
```rs
fn top_level_ignored() {
    #[allow(unused_unsafe)]
    unsafe {
        #[deny(unused_unsafe)]
        {
            unsafe { unsf() } //~ ERROR: unnecessary `unsafe` block
            unsafe { unsf() } //~ ERROR: unnecessary `unsafe` block
            unsafe { unsf() } //~ ERROR: unnecessary `unsafe` block
        }
    }
}
```
warns on the inner ones.
Move ty::print methods to Drop-based scope guards
Primary goal is reducing codegen of the TLS access for each closure, which shaves ~3 seconds of bootstrap time over rustc as a whole.
This was largely just caching the shard value at this point, which is not
particularly useful -- in the use sites the key was being hashed nearby anyway.
Adopt let else in more places
Continuation of #89933, #91018, #91481, #93046, #93590, #94011.
I have extended my clippy lint to also recognize tuple passing and match statements. The diff caused by fixing it is well above a thousand lines. Thus, I split it up into multiple pull requests to make reviewing easier. This is the biggest of these PRs and handles the changes outside of rustdoc, rustc_typeck, rustc_const_eval, and rustc_trait_selection, which were handled in PRs #94139, #94142, #94143, #94144.
Suggest `impl Trait` return type when incorrectly using a generic return type
Address #85991
When there is a type mismatch error and the return type is generic, and that generic parameter is not used in the function parameters, suggest replacing that generic with the `impl Trait` syntax.
r? `@estebank`
Address #85991
Suggest the `impl Trait` return type syntax if the user tried to return a generic parameter and we get a type mismatch
The suggestion is not emitted if the param appears in the function parameters, and only the bounds that directly involve `T:` are used.
It also checks whether the generic param is contained in any where bound (where it isn't the self type), and if one is found (like `Option<T>: Send`), it is not suggested.
This also adds `TyS::contains`, which recursively visits the type and checks whether the other type is contained anywhere.
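A hypothetical example of the shape that should now get the suggestion (not a test from this PR):
```rust
use std::fmt::Display;

// `T` does not appear in the parameters and only has a `Display` bound,
// so on the type mismatch below the compiler can suggest replacing the
// generic return type with `-> impl Display`.
fn make<T: Display>() -> T {
    42 // ERROR: mismatched types; expected type parameter `T`, found integer
}
```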
Suggest copying trait associated type bounds on lifetime error
Closes #92033
Kind of the simplest suggestion to make - we don't try to be fancy. Turns out it's still pretty useful (the couple of existing tests that trigger this error end up fixed - for this error - upon applying the fix).
r? ``@estebank``
cc ``@nikomatsakis``
Improve comments about type folding/visiting.
I have found this code confusing for years. I've always roughly
understood it, but never exactly. I just made my fourth(?) attempt and
finally cracked it.
This commit improves the comments. In particular, it explicitly
describes how you can't do a custom fold/visit of any type; there are
actually a handful of "types of interest" (e.g. `Ty`, `Predicate`,
`Region`, `Const`) that can be custom folded/visited, and all other types
just get a generic traversal. I think this was the part that eluded me
on all my prior attempts at understanding.
The commit also updates comments to account for some newer changes such
as the fallible/infallible folding distinction, does some minor
reorderings, and moves one `impl` to a better place.
r? `@BoxyUwU`
Fix inconsistent symbol mangling with -Zverbose
Always skip arguments that are the defaults of their respective
parameters, to avoid generating inconsistent symbols for builds
with `-Zverbose` flag and without it.
Support pretty printing of invalid constants
Make it possible to pretty print invalid constants by introducing a
fallible variant of `destructure_const` and falling back to debug
formatting when it fails.
Closes#93688.
Treat static refs as `mir::ConstantKind::Val`
With the upcoming introduction of Valtrees we want to treat more values as `mir::ConstantKind::Val` directly.
r? `@lcnr`
cc `@oli-obk`
Specifically, rename the `Const` struct as `ConstS` and re-introduce `Const` as
this:
```
pub struct Const<'tcx>(&'tcx Interned<ConstS>);
```
This now matches `Ty` and `Predicate` more closely, including using
pointer-based `eq` and `hash`.
Notable changes:
- `mk_const` now takes a `ConstS`.
- `Const` was `Copy`, despite being 48 bytes. Now `ConstS` is not, so we need a
  separate arena for it, because we can't use the `Dropless` one any
  more.
- Many `&'tcx Const<'tcx>`/`&Const<'tcx>` to `Const<'tcx>` changes
- Many `ct.ty` to `ct.ty()` and `ct.val` to `ct.val()` changes.
- Lots of tedious sigil fiddling.
The variant names are exported, so we can use them directly (possibly
with a `ty::` qualifier). Lots of places already do this; this commit
just increases consistency.
Specifically, change `Region` from this:
```
pub type Region<'tcx> = &'tcx RegionKind;
```
to this:
```
pub struct Region<'tcx>(&'tcx Interned<RegionKind>);
```
This now matches `Ty` and `Predicate` more closely.
Things to note:
- Regions have always been interned, but we haven't been using pointer-based
`Eq` and `Hash`. This is now happening.
- I chose to impl `Deref` for `Region` because it makes pattern matching a lot
nicer, and `Region` can be viewed as just a smart wrapper for `RegionKind`.
- Various methods are moved from `RegionKind` to `Region`.
- There are a lot of tedious sigil changes.
- A couple of types like `HighlightBuilder`, `RegionHighlightMode` now have a
`'tcx` lifetime because they hold a `Ty<'tcx>`, so they can call `mk_region`.
- A couple of test outputs change slightly, I'm not sure why, but the new
outputs are a little better.
Specifically, change `Predicate` from this:
```
pub struct Predicate<'tcx> { inner: &'tcx PredicateInner<'tcx> }
```
to this:
```
pub struct Predicate<'tcx>(&'tcx Interned<PredicateS<'tcx>>)
```
where `PredicateInner` is renamed as `PredicateS`.
This (plus a few other minor changes) makes the parallels with `Ty` and
`TyS` much clearer, and makes the uniqueness more explicit.
Specifically, change `Ty` from this:
```
pub type Ty<'tcx> = &'tcx TyS<'tcx>;
```
to this:
```
pub struct Ty<'tcx>(Interned<'tcx, TyS<'tcx>>);
```
There are two benefits to this.
- It's now a first class type, so we can define methods on it. This
means we can move a lot of methods away from `TyS`, leaving `TyS` as a
barely-used type, which is appropriate given that it's not meant to
be used directly.
- The uniqueness requirement is now explicit, via the `Interned` type.
E.g. the pointer-based `Eq` and `Hash` comes from `Interned`, rather
than via `TyS`, which wasn't obvious at all.
Much of this commit is boring churn. The interesting changes are in
these files:
- compiler/rustc_middle/src/arena.rs
- compiler/rustc_middle/src/mir/visit.rs
- compiler/rustc_middle/src/ty/context.rs
- compiler/rustc_middle/src/ty/mod.rs
Specifically:
- Most mentions of `TyS` are removed. It's very much a dumb struct now;
`Ty` has all the smarts.
- `TyS` now has `crate` visibility instead of `pub`.
- `TyS::make_for_test` is removed in favour of the static `BOOL_TY`,
which just works better with the new structure.
- The `Eq`/`Ord`/`Hash` impls are removed from `TyS`. `Interned`s impls
of `Eq`/`Hash` now suffice. `Ord` is now partly on `Interned`
(pointer-based, for the `Equal` case) and partly on `TyS`
(contents-based, for the other cases).
- There are many tedious sigil adjustments, i.e. adding or removing `*`
or `&`. They seem to be unavoidable.
make `find_similar_impl_candidates` even fuzzier
continues the good work of `@BGR360` in #92223. I might have overshot a bit and we're now slightly too fuzzy 😅
with this we can now also simplify `simplify_type`, which is nice :3
Make `Res::SelfTy` a struct variant and update docs
I found pattern matching on a `(Option<DefId>, Option<(DefId, bool)>)` to not be super readable; additionally, the doc comments on the types in a tuple variant aren't visible anywhere at use sites as far as I can tell (using rust-analyzer + VS Code).
The docs incorrectly assumed that the `DefId` in `Option<(DefId, bool)>` would only ever be for an impl item and I also found the code examples to be somewhat unclear about which `DefId` was being talked about.
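Roughly, the shape of the change (the field names here are illustrative, not necessarily the ones the PR uses):
```rust
// before: positional fields whose meaning is easy to mix up at match sites
//     SelfTy(Option<DefId>, Option<(DefId, bool)>)
// after: named fields that document themselves at every use site
//     SelfTy { trait_: Option<DefId>, alias_to: Option<(DefId, bool)> }
```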
r? `@lcnr` since you reviewed the last PR changing these docs
Improve chalk integration
- Support subtype bounds in chalk lowering
- Handle universes in canonicalization
- Handle type parameters in chalk responses
- Use `chalk_ir::LifetimeData::Empty` for `ty::ReEmpty`
- Remove `ignore-compare-mode-chalk` for tests that no longer hang (they may still fail or ICE)
This is enough to get a hello world program to compile with `-Zchalk` now. Some of the remaining issues that are needed to get Chalk integration working on larger programs are:
- rust-lang/chalk#234
- rust-lang/chalk#548
- rust-lang/chalk#734
- Generators are handled differently in chalk and rustc
r? `@jackh726`
Apply noundef attribute to &T, &mut T, Box<T>, bool
This doesn't handle `char` because it's a bit awkward to distinguish it from `u32` at this point in codegen.
Note that this _does not_ change whether or not it is UB for `&`, `&mut`, or `Box` to point to undef. It only applies to the pointer itself, not the pointed-to memory.
Fixes (partially) #74378.
r? `@nikic` cc `@RalfJung`
This is required to avoid creating large numbers of universes from each
Chalk query, while still having enough universe information for lifetime
errors.
Make all `hir::Map` methods consistently by-value
`hir::Map` only consists of a single reference (as part of the contained `TyCtxt`) anyways, so copying is literally zero overhead compared to passing a reference
Ensure that queries only return Copy types.
This should prevent the perf footgun of returning a result with an expensive `Clone` impl (like a `Vec` or a hash map).
I went for the stupid solution of allocating on an arena everything that was not `Copy`. Some query results could be made Copy easily, but I did not really investigate.
Refactor query system to maintain a global job id counter
This replaces the per-shard counters with a single global counter, simplifying
the JobId struct down to just a u64 and removing the need to pipe a DepKind
generic through a bunch of code. The performance implications on non-parallel
compilers are likely minimal (this switches to `Cell<u64>` as the backing
storage over a `u64`, but the latter was already inside a `RefCell` so it's not
really a significant divergence). On parallel compilers, the cost of a single
global u64 counter may be more significant: it adds a serialization point in
theory. On the other hand, we can imagine changing the counter to have a
thread-local component if it becomes worrisome or some similar structure.
The new design is sufficiently simpler that it warrants the potential for slight
changes down the line if/when we get parallel compilation to be more of a
default.
A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid
possibly overflowing it and causing problems; it is effectively impossible that
we would overflow a u64 counter in this context.
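A minimal sketch of the idea on a non-parallel compiler (not the actual query-system code; the names are made up for illustration):
```rust
use std::cell::Cell;

// One global, monotonically increasing id instead of a u32 counter per shard.
struct JobIdCounter(Cell<u64>);

impl JobIdCounter {
    fn next(&self) -> u64 {
        let id = self.0.get();
        // With a u64, overflowing the counter is effectively impossible.
        self.0.set(id + 1);
        id
    }
}

fn main() {
    let counter = JobIdCounter(Cell::new(0));
    assert_eq!(counter.next(), 0);
    assert_eq!(counter.next(), 1);
}
```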
Add more *-unwind ABI variants
The following *-unwind ABIs are now supported:
- "C-unwind"
- "cdecl-unwind"
- "stdcall-unwind"
- "fastcall-unwind"
- "vectorcall-unwind"
- "thiscall-unwind"
- "aapcs-unwind"
- "win64-unwind"
- "sysv64-unwind"
- "system-unwind"
cc `@rust-lang/wg-ffi-unwind`
Lazy type-alias-impl-trait
Previously opaque types were processed by
1. replacing all mentions of them with inference variables
2. memorizing these inference variables in a side-table
3. at the end of typeck, resolving the inference variables in the side table and using the resolved type as the hidden type of the opaque type
This worked okayish for `impl Trait` in return position, but required lots of roundabout type inference hacks and processing.
This PR instead stops this process of replacing opaque types with inference variables, and just keeps the opaque types around.
Whenever an opaque type `O` is compared with another type `T`, we make the comparison succeed and record `T` as the hidden type. If `O` is compared to `U` while there is a recorded hidden type for it, we grab the recorded type (`T`) and compare that against `U`. This makes implementing
* https://github.com/rust-lang/rfcs/pull/2515
much simpler (previous attempts on the inference based scheme were very prone to ICEs and general misbehaviour that was not explainable except by random implementation defined oddities).
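As a simplified illustration of the new behavior (a hypothetical example; it needs the unstable `type_alias_impl_trait` feature on nightly):
```rust
#![feature(type_alias_impl_trait)]

type Answer = impl std::fmt::Debug;

// The first time the opaque type `Answer` is compared against a concrete type,
// the comparison succeeds and `u32` is recorded as the hidden type; any later
// comparison involving `Answer` is then checked against the recorded `u32`.
fn define() -> Answer {
    42u32
}

fn main() {
    println!("{:?}", define());
}
```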
r? `@nikomatsakis`
Fixes #93411, fixes #88236
use `fold_list` in `try_super_fold_with` for `SubstsRef`
split out from #93505 as this by itself is responsible for most of the perf improvements there
r? `@michaelwoerister`
This doesn't handle `char` because it's a bit awkward to distinguish it
from u32 at this point in codegen.
Note that for some types (like `&Struct` and `&mut Struct`),
we already apply `dereferenceable`, which implies `noundef`,
so the IR does not change.
Stabilize `-Z instrument-coverage` as `-C instrument-coverage`
(Tracking issue for `instrument-coverage`: https://github.com/rust-lang/rust/issues/79121)
This PR stabilizes support for instrumentation-based code coverage, previously provided via the `-Z instrument-coverage` option. (Continue supporting `-Z instrument-coverage` for compatibility for now, but show a deprecation warning for it.)
Many, many people have tested this support, and there are numerous reports of it working as expected.
Move the documentation from the unstable book to stable rustc documentation. Update uses and documentation to use the `-C` option.
Addressing questions raised in the tracking issue:
> If/when stabilized, will the compiler flag be updated to -C instrument-coverage? (If so, the -Z variant could also be supported for some time, to ease migrations for existing users and scripts.)
This stabilization PR updates the option to `-C` and keeps the `-Z` variant to ease migration.
> The Rust coverage implementation depends on (and automatically turns on) -Z symbol-mangling-version=v0. Will stabilizing this feature depend on stabilizing v0 symbol-mangling first? If so, what is the current status and timeline?
This stabilization PR depends on https://github.com/rust-lang/rust/pull/90128 , which stabilizes `-C symbol-mangling-version=v0` (but does not change the default symbol-mangling-version).
> The Rust coverage implementation implements the latest version of LLVM's Coverage Mapping Format (version 4), which forces a dependency on LLVM 11 or later. A compiler error is generated if attempting to compile with coverage, and using an older version of LLVM.
Given that LLVM 13 has now been released, requiring LLVM 11 for coverage support seems like a reasonable requirement. If people don't have at least LLVM 11, nothing else breaks; they just can't use coverage support. Given that coverage support currently requires a nightly compiler and LLVM 11 or newer, allowing it on a stable compiler built with LLVM 11 or newer seems like an improvement.
The [tracking issue](https://github.com/rust-lang/rust/issues/79121) and the [issue label A-code-coverage](https://github.com/rust-lang/rust/labels/A-code-coverage) link to a few open issues related to `instrument-coverage`, but none of them seem like showstoppers. All of them seem like improvements and refinements we can make after stabilization.
The original `-Z instrument-coverage` support went through a compiler-team MCP at https://github.com/rust-lang/compiler-team/issues/278 . Based on that, `@pnkfelix` suggested that this needed a stabilization PR and a compiler-team FCP.
Fix ret > 1 bound if shadowed by const
Prior to a change, it would only look at types in bounds. When it started looking for consts,
shadowing type variables with a const would cause an ICE, so we now only look at consts if
there are no types present.
cc ``````@compiler-errors``````
Should fix #93553
Temporary fix for the layout of aligned enums
Fix for the issue #92464
~~I was after this issue for quite some time now, I have a temporary fix for it.
I think the current problem is [here](e75f96763f/compiler/rustc_middle/src/ty/layout.rs (L1305-L1310)) created `tag` value might be wrong, because when I checked `min` and `max` values it's always between 0..1, which results in wrong size comparison in a few lines down below.
I think `min` and `max` values don't take `#[repr(aligned(8))]` into consideration and just act from base values assigned inside the enum. If what I am saying is true, aligned enums were created with the wrong layout for some time.~~
~~As stated in the title this is only a temporary fix and I think this needs further investigation, if someone wants to mentor it I would like to work on that too.~~ 😸
**Edit: Weird some tests fail now going to close this for now...**
**Edit2: I made it work again.**
I think I figured out the main problem of the issue: layout types of aligned enums with custom discriminant types were not handled, which resulted in confusing behavior (such as this issue) down the line; this is a kinda hacky fix for the issue.
by using an opaque type obligation to bubble up comparisons between opaque types and other types
Also uses proper obligation causes so that the body id works, because for some reason NLL uses body ids for logic instead of just diagnostics.
Return an indexmap in `all_local_trait_impls` query
The data structure previously used here required that `DefId` be `Ord`. As part of #90317, we do not want `DefId` to implement `Ord`.
Fix two incorrect "it's" (typos in comments)
Found one of these while reading the documentation online. The other came up because it's in the same file.
Make dead code check a query.
Dead code check is run for each invocation of the compiler, even if no modifications were involved.
This PR makes the dead code check a query keyed on the module. This allows skipping the check when a module has not changed.
To perform this, a query `live_symbols_and_ignored_derived_traits` is introduced to encapsulate the global analysis of finding live symbols. The second query `check_mod_deathness` outputs diagnostics for each module based on this first query's results.
Continue work on associated const equality
This actually implements some more complex logic for assigning associated consts to values.
Inside of projection candidates, it now defers to a separate function for either consts or
types. To reduce the amount of code, projections are now generic over T, where T is either a Type or
a Const. I can add some comments back later, but this was the fastest way to implement it.
It also now finds the correct type of consts in type_of.
---
The current main TODO is finding the const of the def id for the LeafDef.
Right now it works if the function isn't called, but once you use the trait impl with the bound it fails inside projection.
I was hoping to get some help in getting the `&'tcx ty::Const<'tcx>`, in addition to a bunch of other `todo!()`s which I think may not be hit.
r? `@oli-obk`
Updates #92827
rustc_errors: only box the `diagnostic` field in `DiagnosticBuilder`.
I happened to need to do the first change (replacing `allow_suggestions` with equivalent functionality on `Diagnostic` itself) as part of a larger change, and noticed that there are only two fields left in `DiagnosticBuilderInner`.
So with this PR, instead of a single pointer, `DiagnosticBuilder` is two pointers, which should work just as well for passing *it* by value (and may even work better wrt some operations, though probably not by much).
But anything that was already taking advantage of `DiagnosticBuilder` being a single pointer, and wrapping it further (e.g. `Result<T, DiagnosticBuilder>` w/ non-ZST `T`), ~~will probably see a slowdown~~, so I want to do a perf run before even trying to propose this.
Store def_id_to_hir_id as variant in hir_owner.
If hir_owner is Owner(_), the LocalDefId is pointing to an owner, so the ItemLocalId is 0.
If the HIR node does not exist, we store Phantom.
Otherwise, we store the HirId associated to the LocalDefId.
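A rough, self-contained sketch of that encoding (the variant names follow the description; the real compiler types differ):
```rust
// `HirId` here is just a stand-in for the real rustc type.
struct HirId(u32);

enum HirIdEntry {
    // The LocalDefId is itself an owner, so its ItemLocalId is 0.
    Owner,
    // The HIR node does not exist.
    Phantom,
    // Otherwise: the HirId associated with the LocalDefId.
    NonOwner(HirId),
}
```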
Related to #89278
r? `@oli-obk`
Add note suggesting that predicate may be satisfied, but is not `const`
Not sure if we should be printing this in addition to, or perhaps _instead_ of the help message:
```
help: the trait `~const Add` is not implemented for `NonConstAdd`
```
Also added `ParamEnv::is_const` and `PolyTraitPredicate::is_const_if_const` and, in a separate commit, used those in other places instead of `== hir::Constness::Const`, etc.
r? ````@fee1-dead````
Store a `Symbol` instead of an `Ident` in `AssocItem`
This is the same idea as #92533, but for `AssocItem` instead
of `VariantDef`/`FieldDef`.
With this change, we no longer have any uses of
`#[stable_hasher(project(...))]`
Check `const Drop` impls considering `~const` Bounds
This PR adds logic to trait selection to account for `~const` bounds in custom `impl const Drop` for types, elaborates the `const Drop` check in `rustc_const_eval` to check those bounds, and steals some drop linting fixes from #92922, thanks `@DrMeepster.`
r? `@fee1-dead` `@oli-obk` <sup>(edit: guess I can't request review from two people, lol)</sup>
since each of you wrote and reviewed #88558, respectively.
Since the logic here is more complicated than what existed, it's possible that this is a perf regression. But it works correctly with tests, and that makes me happy.
Fixes #92881
rustc_lint: Some early linting refactorings
The first one removes and renames some fields and methods from `EarlyContext`.
The second one uses the set of registered tools (for tool attributes and tool lints) in a more centralized way.
The third one removes creation of a fake `ast::Crate` from `fn pre_expansion_lint`.
Pre-expansion linting is done with per-module granularity on freshly loaded modules; it previously synthesized an `ast::Crate` to visit non-root modules, but now they are visited as modules.
The node ID used for pre-expansion linting is also made more precise (the loaded module ID is used).
Make `Decodable` and `Decoder` infallible.
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it
currently panics on failure (e.g. if the input is too short, or on a
bad `Result` discriminant), and in some places it returns an error
(e.g. on a bad `Option` discriminant). The number of places where
either happens is surprisingly small, just because the binary
representation has very little redundancy and a lot of input reading
can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the
`.rlink` file production, and there's a `FIXME` comment suggesting it
should change to a binary format, and (b) in a few tests in
non-fundamental ways. Indeed #85993 is open to remove it entirely.
And the top-level places in the compiler that call into decoding just
abort on error anyway. So the fallibility is providing little value, and
getting rid of it leads to some non-trivial performance improvements.
Much of this PR is pretty boring and mechanical. Some notes about
a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same
optimization for small counts that the impl for `Result<T, E>` has,
because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, VecDeque now all use
`collect`, which is nice; the one for `Vec` uses unsafe code, because
that gave better perf on some benchmarks.
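In terms of the `Decoder` interface, the change is essentially this (a simplified sketch; the real trait has many more methods):
```rust
// Before: every primitive read was fallible.
trait OldDecoder {
    type Error;
    fn read_u32(&mut self) -> Result<u32, Self::Error>;
}

// After: reads are infallible and panic on malformed input instead.
trait NewDecoder {
    fn read_u32(&mut self) -> u32;
}
```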
r? `@bjorn3`
I have found this code very confusing at times. This commit clarifies
things.
In particular, the commit explains the requirements that the `Borrow`
impls put on the `Eq` and `Hash` impls, which are non-obvious. And it
puts the `Borrow` impls first, since they force `Eq` and `Hash` to have
particular forms.
The commit also notes `TyS`'s uniqueness requirements.
Rollup of 17 pull requests
Successful merges:
- #91032 (Introduce drop range tracking to generator interior analysis)
- #92856 (Exclude "test" from doc_auto_cfg)
- #92860 (Fix errors on blanket impls by ignoring the children of generated impls)
- #93038 (Fix star handling in block doc comments)
- #93061 (Only suggest adding `!` to expressions that can be macro invocation)
- #93067 (rustdoc mobile: fix scroll offset when jumping to internal id)
- #93086 (Add tests to ensure that `let_chains` works with `if_let_guard`)
- #93087 (Fix src/test/run-make/raw-dylib-alt-calling-convention)
- #93091 (⬆ chalk to 0.76.0)
- #93094 (src/test/rustdoc-json: Check for `struct_field`s in `variant_tuple_struct.rs`)
- #93098 (Show a more informative panic message when `DefPathHash` does not exist)
- #93099 (rustdoc: auto create output directory when "--output-format json")
- #93102 (Pretty printer algorithm revamp step 3)
- #93104 (Support --bless for pp-exact pretty printer tests)
- #93114 (update comment for `ensure_monomorphic_enough`)
- #93128 (Add script to prevent point releases with same number as existing ones)
- #93136 (Backport the 1.58.1 release notes to master)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Show a more informative panic message when `DefPathHash` does not exist
This should hopefully make it easier to debug incremental compilation
bugs like #93096 without affecting performance.
Introduce drop range tracking to generator interior analysis
This PR addresses cases such as this one from #57478:
```rust
struct Foo;
impl !Send for Foo {}
let _: impl Send = || {
    let guard = Foo;
    drop(guard);
    yield;
};
```
Previously, the `generator_interior` pass would unnecessarily include the type `Foo` in the generator because it was not aware of the behavior of `drop`. We fix this issue by introducing a drop range analysis that finds portions of the code where a value is guaranteed to be dropped. If a value is dropped at all suspend points, then it is no longer included in the generator type. Note that we are using "dropped" in a generic sense to include any case in which a value has been moved. That is, we do not only look at calls to the `drop` function.
There are several phases to the drop tracking algorithm, and we'll go into more detail below.
1. Use `ExprUseVisitor` to find values that are consumed and borrowed.
2. `DropRangeVisitor` uses consume and borrow information to gather drop and reinitialization events, as well as build a control flow graph.
3. We then propagate drop and reinitialization information through the CFG until we reach a fix point (see `DropRanges::propagate_to_fixpoint`).
4. When recording a type (see `InteriorVisitor::record`), we check the computed drop ranges to see if that value is definitely dropped at the suspend point. If so, we skip including it in the type.
## 1. Use `ExprUseVisitor` to find values that are consumed and borrowed.
We use `ExprUseVisitor` to identify the places where values are consumed. We track both the `hir_id` of the value, and the `hir_id` of the expression that consumes it. For example, in the expression `[Foo]`, the `Foo` is consumed by the array expression, so after the array expression we can consider the `Foo` temporary to be dropped.
In this process, we also collect values that are borrowed. The reason is that the MIR transform for generators conservatively assumes anything borrowed is live across a suspend point (see `rustc_mir_transform::generator::locals_live_across_suspend_points`). We match this behavior here as well.
## 2. Gather drop events, reinitialization events, and control flow graph
After finding the values of interest, we perform a post-order traversal over the HIR tree to find the points where these values are dropped or reinitialized. We use the post-order index of each event because this is how the existing generator interior analysis refers to the position of suspend points and the scopes of variables.
During this traversal, we also record branching and merging information to handle control flow constructs such as `if`, `match`, and `loop`. This is necessary because values may be dropped along some control flow paths but not others.
## 3. Iterate to fixed point
The previous pass found the interesting events and locations, but now we need to find the actual ranges where things are dropped. Upon entry, we have a list of nodes ordered by their position in the post-order traversal. Each node has a set of successors. For each node we additionally keep a bitfield with one bit per potentially consumed value. The bit is set if the value is dropped along all paths entering this node.
To compute the drop information, we first reverse the successor edges to find each node's predecessors. Then we iterate through each node, and for each node we set its dropped value bitfield to the intersection of all incoming dropped value bitfields.
If any bitfield for any node changes, we re-run the propagation loop again.
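A minimal sketch of this propagation (the real code lives in `DropRanges::propagate_to_fixpoint`; the node and bitfield representation here is an assumption for illustration):
```rust
// `preds[n]` lists the predecessors of node `n`; `dropped[n]` is a bitfield
// with one bit per tracked value, set when the value is known to be dropped
// at node `n`.
fn propagate_to_fixpoint(preds: &[Vec<usize>], dropped: &mut [u64]) {
    let mut changed = true;
    while changed {
        changed = false;
        for node in 0..dropped.len() {
            // A value only counts as dropped here if it is dropped along
            // *every* incoming path: intersect the predecessors' bitfields.
            let incoming = if preds[node].is_empty() {
                0
            } else {
                preds[node].iter().fold(!0u64, |acc, &p| acc & dropped[p])
            };
            // Keep any drop events recorded at this node itself.
            let new = dropped[node] | incoming;
            if new != dropped[node] {
                dropped[node] = new;
                changed = true;
            }
        }
    }
}
```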
## 4. Ignore dropped values across suspend points
At this point we have a data structure where we can ask whether a value is guaranteed to be dropped at any post order index for the HIR tree. We use this information in `InteriorVisitor` to check whether a value in question is dropped at a particular suspend point. If it is, we do not include that value's type in the generator type.
Note that we had to augment the region scope tree to include all yields in scope, rather than just the last one as we did before.
r? `@nikomatsakis`
improve `_` constants in item signature handling
removing the "type" from the error messages does slightly worsen the error messages for types, but figuring out whether the placeholder is for a type or a constant and correctly dealing with that seemed fairly difficult to me so I took the easy way out ✨ Imo the error message is still clear enough.
r? `@BoxyUwU` cc `@estebank`
- Also rename a trivial_const_drop to match style of other functions in
the util module.
- Also add a test for `const Drop` that doesn't depend on a `~const`
bound.
- Also comment a bit why we remove the const bound during dropck impl
check.
Remove some unused ordering derivations based on `DefId`
Like #93018, this removes some unused/unneeded ordering derivations as part of ongoing work on #90317. Here, these changes are aimed at making https://github.com/rust-lang/rust/pull/90749 easier to review, test, and merge.
r? `@cjgillot`
Formally implement let chains
## Let chains
My longest and hardest contribution since #64010.
Thanks to `@Centril` for creating the RFC and special thanks to `@matthewjasper` for helping me since the beginning of this journey. In fact, `@matthewjasper` did much of the complicated MIR stuff so it's true to say that this feature wouldn't be possible without him. Thanks again `@matthewjasper!`
With the changes proposed in this PR, it will be possible to chain let expressions alongside local variable declarations or ordinary conditional expressions. In other words, do much of what the `if_chain` crate already does.
## Other considerations
* `if let guard` and `let ... else` features need special care and should be handled in a following PR.
* Irrefutable patterns are allowed within a let chain context
* ~~Three Clippy lints were already converted to start dogfooding and help detect possible corner cases~~
cc #53667
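For reference, a small example of what a let chain looks like (it requires the unstable `let_chains` feature at this point):
```rust
#![feature(let_chains)]

fn describe(opt: Option<i32>) {
    // A `let` binding chained with an ordinary boolean condition in one `if`.
    if let Some(x) = opt && x > 0 {
        println!("positive: {x}");
    }
}

fn main() {
    describe(Some(3));
    describe(None);
}
```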
Directly use ConstValue for single literals in blocks
Addresses the minimal repro in https://github.com/rust-lang/rust/issues/92186, but doesn't fix the underlying problem (which would be solved by solving the anon subst problem afaict).
I do, however, think that it makes sense in general to treat single literals in anon blocks as const values directly, especially in light of the problem that the issue refers to (anon const evaluation being postponed until infer variables in substs can be resolved, which was introduced by https://github.com/rust-lang/rust/pull/90023), i.e. while we do get warnings for those unnecessary braces, we should try to avoid errors caused by those braces if possible.
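An illustrative example of the affected shape (hypothetical, modeled on the "unnecessary braces" situation the description mentions):
```rust
fn takes_array<const N: usize>(arr: [u8; N]) -> usize {
    arr.len()
}

fn main() {
    // `{ 3 }` is an anonymous const block containing a single literal; treating
    // it as a ConstValue directly avoids postponing its evaluation until the
    // inference variables in the surrounding substs are resolved.
    let n = takes_array::<{ 3 }>([0; 3]);
    println!("{n}");
}
```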
Fix ICEs related to `Deref<Target=[T; N]>` on newtypes
1. Stash a const infer's type into the canonical var during canonicalization, so we can recreate the fresh const infer with that same type.
For example, given `[T; _]` we know `_` is a `usize`. If we go from infer => canonical => infer, we shouldn't forget that variable is a usize.
Fixes #92626, fixes #83704
2. Don't stash the autoderef'd slice type that we get from method lookup, but instead recreate it during method confirmation. We need to do this because the type we receive back after picking the method references a type variable that does not exist after probing is done.
Fixes#92637
... A better solution for the second issue would be to actually _properly_ implement `Deref` for `[T; N]` instead of fixing this autoderef hack to stop leaking inference variables. But I actually looked into this, and there are many complications with const impls.
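A hypothetical shape of the code involved (illustrative only), where method lookup autoderefs through a newtype to `[T; N]` and then to `[T]`:
```rust
use std::ops::Deref;

struct Wrapper<T, const N: usize>([T; N]);

impl<T, const N: usize> Deref for Wrapper<T, N> {
    type Target = [T; N];
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

fn main() {
    let w = Wrapper([1u8, 2, 3]);
    // Method lookup goes Wrapper<T, N> -> [T; N] -> [T]; the const infer for
    // the array length must keep its `usize` type across canonicalization.
    assert_eq!(w.len(), 3);
}
```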