There are a few places where we have to construct it, though, and a few
places that are more invasive to change. To do this, we create a
constructor with a long obvious name.
Only look for complete suffixes or prefixes of irrefutable let patterns, so
that an irrefutable let pattern in a chain surrounded by refutable ones is
not linted, as it is a useful pattern.
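For illustration, a small sketch of the kind of chain this is about (let chains were nightly-only behind `#![feature(let_chains)]` at the time; they are stable in edition 2024):
```rust
fn demo(a: Option<i32>, b: i32) {
    // The middle `let sum = ...` is irrefutable, but it is surrounded by
    // refutable `let Some(..)` patterns, so the chain as a whole still does
    // useful work and should not be linted. Only a chain whose complete
    // prefix or suffix is irrefutable gets the warning.
    if let Some(x) = a
        && let sum = b + 1
        && let Some(y) = x.checked_add(sum)
    {
        println!("{y}");
    }
}

fn main() {
    demo(Some(1), 2);
}
```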
This commit makes `AdtDef` use `Interned`. Much of the commit is tedious
changes to introduce getter functions. The interesting changes are in
`compiler/rustc_middle/src/ty/adt.rs`.
Treat constant values as mir::ConstantKind::Val
Another step that is necessary for the introduction of Valtrees: we don't want to treat `ty::Const` instances of kind `ty::ConstKind::Value` as `mir::ConstantKind::Ty` anymore.
r? `@oli-obk`
This makes the order of the output always consistent:
1. Place of the `match` missing arms
2. The `enum` definition span
3. The structured suggestion to add a fallthrough arm
safely `transmute<&List<Ty<'tcx>>, &List<GenericArg<'tcx>>>`
This PR has 3 relevant steps, which are split into distinct commits.
The first commit now interns `List<Ty<'tcx>>` and `List<GenericArg<'tcx>>` together, potentially reusing memory while allowing free conversions between these two using `List<Ty<'tcx>>::as_substs()` and `SubstsRef<'tcx>::try_as_type_list()`.
Using this, we then use `&'tcx List<Ty<'tcx>>` instead of a `SubstsRef<'tcx>` for tuple fields, simplifying a bunch of code.
Finally, as tuple fields and other generic arguments now use a different `TypeFoldable<'tcx>` impl, we optimize the impl for `List<Ty<'tcx>>` improving perf by slightly less than 1% in tuple heavy benchmarks.
This reverts commit a240ccd81c, reversing
changes made to 393fdc1048.
This PR was likely responsible for a relatively large regression in
dist-x86_64-msvc-alt builder times, from approximately 1.7 to 2.8 hours,
bringing that builder into the pool of the slowest builders we currently have.
This seems to be limited to the alt builder due to needing parallel-compiler
enabled, likely leading to slow LLVM compilation for some reason.
Improve `unused_unsafe` lint
I’m going to add some motivation and explanation below, particularly pointing out the changes in behavior from this PR.
_Edit:_ Looking for existing issues, looks like this PR fixes #88260.
_Edit2:_ Now also contains code that closes #90776.
Main motivation: fixes some issues with the current behavior. This PR is
more-or-less a complete re-implementation of the unused_unsafe lint; it’s also only
done in the MIR version of the lint, so the set of tests for the `-Zthir-unsafeck`
version no longer succeeds (and is thus disabled, see `lint-unused-unsafe.rs`).
On current nightly,
```rs
unsafe fn unsf() {}
fn inner_ignored() {
    unsafe {
        #[allow(unused_unsafe)]
        unsafe {
            unsf()
        }
    }
}
```
doesn’t create any warnings. This situation is not unrealistic to come by: the
inner `unsafe` block could e.g. come from a macro. Actually, this PR even
includes removal of one unused `unsafe` in the standard library that was missed
in a similar situation. (The inner `unsafe` coming from an external macro hides
the warning, too.)
The reason behind this problem is how the check currently works:
* While generating MIR, it already skips nested unsafe blocks (i.e. unsafe
nested in other unsafe) so that the inner one is always the one considered
unused
* To differentiate between the case of no unsafe operations inside the `unsafe` block
vs. the case of a surrounding `unsafe` block making it redundant, there’s some ad-hoc
magic walking up the HIR to look for surrounding used `unsafe` blocks.
There’s a lot of problems with this approach besides the one presented above.
E.g. MIR building uses checks for the `unsafe_op_in_unsafe_fn` lint to decide
early whether or not `unsafe` blocks in an `unsafe fn` are redundant and ought
to be removed.
```rs
unsafe fn granular_disallow_op_in_unsafe_fn() {
    unsafe {
        #[deny(unsafe_op_in_unsafe_fn)]
        {
            unsf();
        }
    }
}
```
```
error: call to unsafe function is unsafe and requires unsafe block (error E0133)
--> src/main.rs:13:13
|
13 | unsf();
| ^^^^^^ call to unsafe function
|
note: the lint level is defined here
--> src/main.rs:11:16
|
11 | #[deny(unsafe_op_in_unsafe_fn)]
| ^^^^^^^^^^^^^^^^^^^^^^
= note: consult the function's documentation for information on how to avoid undefined behavior
warning: unnecessary `unsafe` block
--> src/main.rs:10:5
|
9 | unsafe fn granular_disallow_op_in_unsafe_fn() {
| --------------------------------------------- because it's nested under this `unsafe` fn
10 | unsafe {
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
```
Here, the intermediate `unsafe` was ignored, even though it contains an unsafe
operation that is not allowed to happen in an `unsafe fn` without an additional `unsafe` block.
Closures were also problematic, and the workaround/algorithms used on current
nightly didn’t work properly. (I skipped trying to fully understand what it was
supposed to do, because this PR uses a completely different approach.)
```rs
fn nested() {
    unsafe {
        unsafe { unsf() }
    }
}
```
```
warning: unnecessary `unsafe` block
--> src/main.rs:10:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
10 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
```
vs
```rs
fn nested() {
    let _ = || unsafe {
        let _ = || unsafe { unsf() };
    };
}
```
```
warning: unnecessary `unsafe` block
--> src/main.rs:9:16
|
9 | let _ = || unsafe {
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
warning: unnecessary `unsafe` block
--> src/main.rs:10:20
|
10 | let _ = || unsafe { unsf() };
| ^^^^^^ unnecessary `unsafe` block
```
*note that this warning kind-of suggests that **both** unsafe blocks are redundant*
--------------------------------------------------------------------------------
I also dislike the fact that it always suggests keeping the outermost `unsafe`.
E.g. for
```rs
fn granularity() {
    unsafe {
        unsafe { unsf() }
        unsafe { unsf() }
        unsafe { unsf() }
    }
}
```
I’d prefer if `rustc` suggested removing the coarser outer-level `unsafe`
instead of the fine-grained inner `unsafe` blocks, which is what it currently does on nightly:
```
warning: unnecessary `unsafe` block
--> src/main.rs:10:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
10 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
warning: unnecessary `unsafe` block
--> src/main.rs:11:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
10 | unsafe { unsf() }
11 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
warning: unnecessary `unsafe` block
--> src/main.rs:12:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
...
12 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
```
--------------------------------------------------------------------------------
Needless to say, this PR addresses all these points. For context, as far as my
understanding goes, the main advantage of skipping inner unsafe blocks was that
a test case like
```rs
fn top_level_used() {
    unsafe {
        unsf();
        unsafe { unsf() }
        unsafe { unsf() }
        unsafe { unsf() }
    }
}
```
should generate some warning because there’s redundant nested `unsafe`; however,
every single `unsafe` block _does_ contain some statement that uses it. Of course
this PR doesn’t aim to change the warnings on this kind of code example, because
the current behavior, warning on all the inner `unsafe` blocks, makes sense in this case.
As mentioned, during MIR building all the unsafe blocks *are* kept now, and usage
is attributed to them. The way to still generate a warning like
```
warning: unnecessary `unsafe` block
--> src/main.rs:11:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
10 | unsf();
11 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
warning: unnecessary `unsafe` block
--> src/main.rs:12:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
...
12 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
warning: unnecessary `unsafe` block
--> src/main.rs:13:9
|
9 | unsafe {
| ------ because it's nested under this `unsafe` block
...
13 | unsafe { unsf() }
| ^^^^^^ unnecessary `unsafe` block
```
in this case is by emitting an `unused_unsafe` warning for all of the `unsafe`
blocks that are _within a **used** unsafe block_.
The previous code already had a little HIR traversal to collect a set of
all the unsafe blocks (in order to determine afterwards which ones are
unused). This PR uses such a traversal to do additional things, including logic
to _always_ warn for an `unsafe` block that’s inside of another **used**
unsafe block. The traversal is expanded to include nested closures in the same go;
this simplifies a lot of things.
The whole logic around `unsafe_op_in_unsafe_fn` is a little complicated; there are
some test cases for corner cases in this PR. (The implementation involves
differentiating between whether a used unsafe block was used exclusively by
operations where `allow(unsafe_op_in_unsafe_fn)` was active.) The main goal was
to make sure that code should compile successfully if all the `unused_unsafe`-warnings
are addressed _simultaneously_ (by removing the respective `unsafe` blocks)
no matter how complicated the patterns of `unsafe_op_in_unsafe_fn` being
disallowed and allowed throughout the function are.
--------------------------------------------------------------------------------
One noteworthy design decision I took here: An `unsafe` block
with `allow(unused_unsafe)` **is considered used** for the purposes of
linting about redundant contained unsafe blocks. So while
```rs
fn granularity() {
    unsafe { //~ ERROR: unnecessary `unsafe` block
        unsafe { unsf() }
        unsafe { unsf() }
        unsafe { unsf() }
    }
}
```
warns for the outer `unsafe` block,
```rs
fn top_level_ignored() {
    #[allow(unused_unsafe)]
    unsafe {
        #[deny(unused_unsafe)]
        {
            unsafe { unsf() } //~ ERROR: unnecessary `unsafe` block
            unsafe { unsf() } //~ ERROR: unnecessary `unsafe` block
            unsafe { unsf() } //~ ERROR: unnecessary `unsafe` block
        }
    }
}
```
warns on the inner ones.
Move ty::print methods to Drop-based scope guards
The primary goal is to reduce the codegen for the TLS access in each closure, which shaves ~3 seconds of bootstrap time over rustc as a whole.
Specifically, rename the `Const` struct as `ConstS` and re-introduce `Const` as
this:
```
pub struct Const<'tcx>(&'tcx Interned<ConstS>);
```
This now matches `Ty` and `Predicate` more closely, including using
pointer-based `eq` and `hash`.
Notable changes:
- `mk_const` now takes a `ConstS`.
- `Const` was `Copy`, despite being 48 bytes. Now `ConstS` is not, so we need
a separate arena for it, because we can't use the `Dropless` one any
more.
- Many `&'tcx Const<'tcx>`/`&Const<'tcx>` to `Const<'tcx>` changes
- Many `ct.ty` to `ct.ty()` and `ct.val` to `ct.val()` changes.
- Lots of tedious sigil fiddling.
Specifically, change `Region` from this:
```
pub type Region<'tcx> = &'tcx RegionKind;
```
to this:
```
pub struct Region<'tcx>(&'tcx Interned<RegionKind>);
```
This now matches `Ty` and `Predicate` more closely.
Things to note:
- Regions have always been interned, but we haven't been using pointer-based
`Eq` and `Hash`. This is now happening.
- I chose to impl `Deref` for `Region` because it makes pattern matching a lot
nicer, and `Region` can be viewed as just a smart wrapper for `RegionKind`.
- Various methods are moved from `RegionKind` to `Region`.
- There are a lot of tedious sigil changes.
- A couple of types like `HighlightBuilder`, `RegionHighlightMode` now have a
`'tcx` lifetime because they hold a `Ty<'tcx>`, so they can call `mk_region`.
- A couple of test outputs change slightly; I'm not sure why, but the new
outputs are a little better.
Specifically, change `Ty` from this:
```
pub type Ty<'tcx> = &'tcx TyS<'tcx>;
```
to this:
```
pub struct Ty<'tcx>(Interned<'tcx, TyS<'tcx>>);
```
There are two benefits to this.
- It's now a first class type, so we can define methods on it. This
means we can move a lot of methods away from `TyS`, leaving `TyS` as a
barely-used type, which is appropriate given that it's not meant to
be used directly.
- The uniqueness requirement is now explicit, via the `Interned` type.
E.g. the pointer-based `Eq` and `Hash` come from `Interned`, rather
than via `TyS`, which wasn't obvious at all.
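For intuition, here is a minimal standalone sketch (not the actual rustc type) of why wrapping a reference in an `Interned` newtype gives pointer-based `Eq` and `Hash`:
```rust
use std::hash::{Hash, Hasher};

#[derive(Clone, Copy)]
struct Interned<'a, T>(&'a T);

impl<'a, T> PartialEq for Interned<'a, T> {
    fn eq(&self, other: &Self) -> bool {
        // Pointer equality stands in for (expensive) structural equality;
        // this is only sound because each distinct value is interned once.
        std::ptr::eq(self.0, other.0)
    }
}
impl<'a, T> Eq for Interned<'a, T> {}
impl<'a, T> Hash for Interned<'a, T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Hash the address rather than the contents.
        (self.0 as *const T).hash(state)
    }
}

fn main() {
    let a = Box::leak(Box::new(42u32)); // stand-in for an arena allocation
    let x = Interned(&*a);
    let y = x;
    assert!(x == y); // same allocation, so equal
}
```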
Much of this commit is boring churn. The interesting changes are in
these files:
- compiler/rustc_middle/src/arena.rs
- compiler/rustc_middle/src/mir/visit.rs
- compiler/rustc_middle/src/ty/context.rs
- compiler/rustc_middle/src/ty/mod.rs
Specifically:
- Most mentions of `TyS` are removed. It's very much a dumb struct now;
`Ty` has all the smarts.
- `TyS` now has `crate` visibility instead of `pub`.
- `TyS::make_for_test` is removed in favour of the static `BOOL_TY`,
which just works better with the new structure.
- The `Eq`/`Ord`/`Hash` impls are removed from `TyS`. `Interned`'s impls
of `Eq`/`Hash` now suffice. `Ord` is now partly on `Interned`
(pointer-based, for the `Equal` case) and partly on `TyS`
(contents-based, for the other cases).
- There are many tedious sigil adjustments, i.e. adding or removing `*`
or `&`. They seem to be unavoidable.
Lazy type-alias-impl-trait
Previously opaque types were processed by
1. replacing all mentions of them with inference variables
2. memorizing these inference variables in a side-table
3. at the end of typeck, resolve the inference variables in the side table and use the resolved type as the hidden type of the opaque type
This worked okayish for `impl Trait` in return position, but required lots of roundabout type inference hacks and processing.
This PR instead stops this process of replacing opaque types with inference variables, and just keeps the opaque types around.
Whenever an opaque type `O` is compared with another type `T`, we make the comparison succeed and record `T` as the hidden type. If `O` is compared to `U` while there is a recorded hidden type for it, we grab the recorded type (`T`) and compare that against `U`. This makes implementing
* https://github.com/rust-lang/rfcs/pull/2515
much simpler (previous attempts on the inference based scheme were very prone to ICEs and general misbehaviour that was not explainable except by random implementation defined oddities).
r? `@nikomatsakis`
fixes #93411, fixes #88236
by using an opaque type obligation to bubble up comparisons between opaque types and other types
Also uses proper obligation causes so that the body id works, because for some reason NLL uses body ids for logic instead of just diagnostics.
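For intuition, a stable, user-level illustration of the compare-and-record mechanism described above, using return-position `impl Trait` (also an opaque type); the in-compiler handling is of course far more involved:
```rust
use std::fmt::Debug;

fn pick(flag: bool) -> impl Debug {
    if flag {
        // The first comparison of a concrete type against the opaque return
        // type records `u32` as the hidden type...
        return 1u32;
    }
    // ...and this later value is then compared against the recorded `u32`;
    // returning e.g. a `String` here would be rejected.
    2u32
}

fn main() {
    println!("{:?} {:?}", pick(true), pick(false));
}
```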
The unconditional recursion lint determines if all execution paths
eventually lead to a self-recursive call.
The implementation always follows unwinding edges which limits its
practical utility. For example, it would not lint function `f` because a
call to `g` might unwind. It also wouldn't lint function `h` because an
overflow check preceding the self-recursive call might unwind:
```rust
pub fn f() {
    g();
    f();
}
pub fn g() { /* ... */ }
pub fn h(a: usize) {
    h(a + 1);
}
```
To avoid the issue, assume that terminators that might continue
execution along non-unwinding edges do so.
Replace `NestedVisitorMap` with generic `NestedFilter`
This is an attempt to make the `intravisit::Visitor` API simpler and "more const" with regard to nested visiting.
With this change, `intravisit::Visitor` does not visit nested things by default, unless you specify `type NestedFilter = nested_filter::OnlyBodies` (or `All`). `nested_visit_map` returns `Self::Map` instead of `NestedVisitorMap<Self::Map>`. It panics by default (unreachable if `type NestedFilter` is omitted).
One somewhat tricky thing here is that `nested_filter::{OnlyBodies, All}` live in `rustc_middle` so that they may have `type Map = map::Map` and so that `impl Visitor`s never need to specify `type Map` - it has a default of `Self::NestedFilter::Map`.
Remove deprecated LLVM-style inline assembly
The `llvm_asm!` macro was deprecated back in #87590 (Rust 1.56.0), with the intention to remove
it once `asm!` was stabilized, which already happened in #91728 (Rust 1.59.0). Now it
is time to remove `llvm_asm!` to avoid continued maintenance cost.
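For reference, a small sketch of the stabilized `asm!` syntax that replaces `llvm_asm!` (x86_64 only; the instruction choice is purely illustrative, not taken from any removed code):
```rust
#[cfg(target_arch = "x86_64")]
fn add_one(x: u64) -> u64 {
    let result: u64;
    // `asm!` uses Intel syntax by default on x86 and named operands instead
    // of the LLVM constraint strings that `llvm_asm!` required.
    unsafe {
        std::arch::asm!("lea {r}, [{x} + 1]", x = in(reg) x, r = out(reg) result);
    }
    result
}

fn main() {
    #[cfg(target_arch = "x86_64")]
    assert_eq!(add_one(41), 42);
}
```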
Closes #70173.
Closes #92794.
Closes #87612.
Closes #82065.
cc `@rust-lang/wg-inline-asm`
r? `@Amanieu`
Closure capture cleanup & refactor
Follow up of #89648
Each commit is self-contained and the rationale/changes are documented in the commit message, so it's advisable to review commit by commit.
The code is significantly cleaner (at least IMO), but that could have some perf implication, so I'd suggest a perf run.
r? `@wesleywiser`
cc `@arora-aman`
The field is also renamed from `ident` to `name`. In most cases,
we don't actually need the `Span`. A new `ident` method is added
to `VariantDef` and `FieldDef`, which constructs the full `Ident`
using `tcx.def_ident_span()`. This method is used in the cases
where we actually need an `Ident`.
This makes incremental compilation properly track changes
to the `Span`, without all of the invalidations caused by storing
a `Span` directly via an `Ident`.
Region info is completely unnecessary for upvar capture kind computation
and is only needed to create the final upvar tuple ty. Doing so makes
creation of `UpvarCapture` very cheap and exposes further cleanup opportunities.
The `AggregateKind` enum ends up in the final mir `Body`. Currently,
any changes to `AdtDef` (regardless of how significant they are)
will legitimately cause the overall result of `optimized_mir` to change,
invalidating any codegen re-use involving that mir.
This will get worse once we start hashing the `Span` inside `FieldDef`
(which is itself contained in `AdtDef`).
To try to reduce these kinds of invalidations, this commit changes
`AggregateKind::Adt` to store just the `DefId`, instead of the full
`AdtDef`. This allows the result of `optimized_mir` to be unchanged
if the `AdtDef` changes in a way that doesn't actually affect any
of the MIR we build.
Implement let-else type annotations natively
Tracking issue: #87335. Fixes #89688, fixes #89807; edit: fixes #89960 as well.
As explained in https://github.com/rust-lang/rust/issues/89688#issuecomment-940405082, the previous desugaring moved the let-else scrutinee into a dummy variable, which meant if you wanted to refer to it again in the else block, it had moved.
This introduces a new hir type, ~~`hir::LetExpr`~~ `hir::Let`, which takes over all the fields of `hir::ExprKind::Let(...)` and adds an optional type annotation. The `hir::Let` is then treated like a `hir::Local` when type checking a function body, specifically:
* `GatherLocalsVisitor` overrides a new `Visitor::visit_let_expr` and does pretty much exactly what it does for `visit_local`, assigning a local type to the `hir::Let` ~~(they could be deduplicated but they are right next to each other, so at least we know they're the same)~~
* It reuses the code in `check_decl_local` to typecheck the `hir::Let`, simply returning `bool` for the expression type after doing that.
* ~~`FnCtxt::check_expr_let` passes this local type in to `demand_scrutinee_type`, and then imitates check_decl_local's pattern checking~~
* ~~`demand_scrutinee_type` (the blindest change for me, please give this extra scrutiny) uses this local type instead of creating a new one~~
* ~~Just realised the `check_expr_with_needs` was passing NoExpectation further down, need to pass the type there too. And apparently this Expectation API already exists.~~
Some other misc notes:
* ~~Is the clippy code supposed to be autoformatted? I tried not to give huge diffs but maybe some rustfmt changes simply haven't hit it yet.~~
* in `rustc_ast_lowering/src/block.rs`, I noticed some existing `self.alias_attrs()` calls in `LoweringContext::lower_stmts` seem to be copying attributes from the lowered locals/etc to the statements. Is that right? I'm new at this, I don't know.
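For reference, a user-level example of the annotated let-else this PR implements natively (let-else itself was still behind `#![feature(let_else)]` at the time; it is stable in current Rust):
```rust
fn parse_or_zero(input: &str) -> i32 {
    // The type annotation on the let-else is the new part handled natively;
    // the scrutinee is no longer moved into a hidden temporary first.
    let Ok(n): Result<i32, std::num::ParseIntError> = input.parse() else {
        return 0;
    };
    n
}

fn main() {
    assert_eq!(parse_or_zero("42"), 42);
    assert_eq!(parse_or_zero("not a number"), 0);
}
```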
rustc_mir_build: reorder bindings
No functional changes intended.
I'm playing around with building compiler components using nightly rust
(2021-11-02) in a non-standard way. I encountered the following error while
trying to build rustc_mir_build:
```
error[E0597]: `wildcard` does not live long enough
--> rust/src/nightly/compiler/rustc_mir_build/src/build/matches/mod.rs:1767:82
|
1767 | let mut otherwise_candidate = Candidate::new(expr_place_builder.clone(), &wildcard, false);
| ^^^^^^^^^ borrowed value does not live long enough
...
1799 | }
| -
| |
| `wildcard` dropped here while still borrowed
| borrow might be used here, when `guard_candidate` is dropped and runs the destructor for type `Candidate<'_, '_>`
|
= note: values in a scope are dropped in the opposite order they are defined
```
I believe this flags an issue that may become an error in the future.
Swapping the order of `wildcard` and `guard_candidate` resolves it.
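For context, a standalone illustration of the rule behind that error (unrelated to the rustc code itself): values in a scope are dropped in reverse declaration order, and a value with a `Drop` impl counts as using whatever it borrows when it is dropped.
```rust
struct Holder<'a>(&'a String);

impl<'a> Drop for Holder<'a> {
    fn drop(&mut self) {
        println!("dropping holder of {:?}", self.0);
    }
}

fn main() {
    // `text` is declared first, so it is dropped *after* `_holder`, and the
    // borrow is still valid when `Holder::drop` runs. Declaring them in the
    // opposite order is rejected with E0597, just like `wildcard` and
    // `guard_candidate` before the swap.
    let text = String::from("hello");
    let _holder = Holder(&text);
}
```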
Optimize pattern matching
These commits speed up the `match-stress-enum` benchmark, which is very artificial, but the changes are simple enough that it's probably worth doing.
r? `@Nadrieril`
Remove hir::map::blocks and use FnKind instead
The principal tool is `FnLikeNode`, which is not often used and can be easily implemented using `rustc_hir::intravisit::FnKind`.
This performs a substitution of code following the pattern:
let <id> = if let <pat> = ... { identity } else { ... : ! };
To simplify it to:
let <pat> = ... else { ... : ! };
By adopting the let_else feature.
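A concrete before/after instance of that substitution (my own toy example, not one of the actual changed call sites; let-else needed `#![feature(let_else)]` at the time and is stable now):
```rust
// Before: bind via `if let`, diverging in the else arm.
fn before(opt: Option<u32>) -> u32 {
    let x = if let Some(x) = opt { x } else { return 0 };
    x + 1
}

// After: the same logic using let-else.
fn after(opt: Option<u32>) -> u32 {
    let Some(x) = opt else { return 0 };
    x + 1
}

fn main() {
    assert_eq!(before(Some(1)), after(Some(1)));
    assert_eq!(before(None), after(None));
}
```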
Add test cases for unstable variants
Add test cases for doc hidden variants
Move is_doc_hidden to method on TyCtxt
Add unstable variants test to reachable-patterns ui test
Rename reachable-patterns -> omitted-patterns
Normalize after substituting via `field.ty()`
Back in https://github.com/rust-lang/rust/issues/72476 I hadn't understood where the problem was coming from, and only worked around the issue. What happens is that calling `field.ty()` on a field of a generic struct substitutes the appropriate generics but doesn't normalize the resulting type.
As a consumer of types I'm surprised that one would substitute without normalizing, feels like a footgun, so I added a comment.
Fixes https://github.com/rust-lang/rust/issues/89393.
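A user-level sketch of the substitute-vs-normalize distinction (hypothetical types, not the code from the linked issue):
```rust
trait Id {
    type Output;
}

impl Id for u8 {
    type Output = u8;
}

struct Wrap<T: Id> {
    field: T::Output,
}

fn main() {
    // The declared field type is `T::Output`; substituting `T = u8` yields
    // `<u8 as Id>::Output`, and only normalization turns that into `u8`.
    let w = Wrap::<u8> { field: 5 };
    let _: u8 = w.field;
}
```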
fix(lint): don't suggest refutable patterns to "fix" irrefutable bind
In function arguments and let bindings, do not suggest changing `C` to `Foo::C` unless `C` is the only variant of `Foo`, because it won't work.
The general warning is still kept, because code like this is confusing.
Fixes #88730
p.s. `src/test/ui/lint/lint-uppercase-variables.rs` already tests the one-variant case.
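A minimal sketch of the case the suggestion must no longer fire on (hypothetical enum, not from the test suite):
```rust
#[allow(dead_code)]
enum Foo {
    C,
    D,
}

// The uppercase variable `C` triggers the lint. Because `Foo` has a second
// variant, rewriting the binding as the pattern `Foo::C` would make it
// refutable and fail to compile, so that fix should only be suggested when
// `C` is the sole variant of `Foo`.
fn f(C: Foo) {
    let _ = C;
}

fn main() {
    f(Foo::C);
}
```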
Use larger span for adjustment THIR expressions
Currently, we use a relatively 'small' span for THIR
expressions generated by an 'adjustment' (e.g. an autoderef,
autoborrow, unsizing). As a result, if a borrow generated
by an adjustment ends up causing a borrowcheck error, for example:
```rust
let mut my_var = String::new();
let my_ref = &my_var;
my_var.push('a');
my_ref;
```
then the span for the mutable borrow may end up referring
to only the base expression (e.g. `my_var`), rather than
the method call which triggered the mutable borrow
(e.g. `my_var.push('a')`).
Due to a quirk of the MIR borrowck implementation,
this doesn't always get exposed in migration mode,
but it does in many cases.
This commit makes THIR building consistently use 'larger'
spans for adjustment expressions. These spans are recorded
when we first create the adjustment during typecheck. For
example, an autoref adjustment triggered by a method call
will record the span of the entire method call.
The intent of this change is to make it clearer to users
when it's the specific way in which a variable is
used (for example, in a method call) that produces
a borrowcheck error. For example, an error message
claiming that a 'mutable borrow occurs here' might
be confusing if it just points at a usage of a variable
(e.g. `my_var`), when no `&mut` is in sight. Pointing
at the entire expression should help to emphasize
that the method call itself is responsible for
the mutable borrow.
In several cases, this makes the `#![feature(nll)]` diagnostic
output match up exactly with the default (migration mode) output.
As a result, several `.nll.stderr` files end up getting removed
entirely.
In function arguments and let bindings, do not suggest changing `C` to `Foo::C`
unless `C` is the only variant of `Foo`, because it won't work.
The general warning is still kept, because code like this is confusing.
Fixes #88730
Add an intermediate representation to exhaustiveness checking
The exhaustiveness checking algorithm keeps deconstructing patterns into a `Constructor` and some `Fields`, but does so a bit all over the place. This PR introduces a new representation for patterns that already has that information, so we only compute it once at the start.
I find this makes code easier to follow. In particular `DeconstructedPat::specialize` is a lot simpler than what happened before, and more closely matches the description of the algorithm. I'm also hoping this could help for the project of librarifying exhaustiveness for rust_analyzer since it decouples the algorithm from `rustc_middle::Pat`.
Now `Fields` is just a `Vec` of patterns, with some extra info on the
side to reconstruct patterns when needed. This emphasizes that this
extra info is not central to the algorithm.
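A much-simplified standalone sketch of the representation (these are not rustc's actual types; the real `DeconstructedPat` covers every pattern kind and carries arena-allocated fields, spans, and type information):
```rust
#[derive(Clone)]
enum Ctor {
    Variant(usize), // which enum variant the pattern's head constructor is
    Wildcard,
}

#[derive(Clone)]
struct DeconstructedPat {
    ctor: Ctor,
    fields: Vec<DeconstructedPat>, // sub-patterns, already deconstructed
}

impl DeconstructedPat {
    fn wildcard() -> Self {
        DeconstructedPat { ctor: Ctor::Wildcard, fields: Vec::new() }
    }

    // Specializing w.r.t. a variant with `arity` fields: a matching variant
    // pattern just hands back its precomputed sub-patterns, while a wildcard
    // expands into one wildcard per field.
    fn specialize(&self, variant: usize, arity: usize) -> Option<Vec<DeconstructedPat>> {
        match self.ctor {
            Ctor::Variant(v) if v == variant => Some(self.fields.clone()),
            Ctor::Variant(_) => None,
            Ctor::Wildcard => Some(vec![DeconstructedPat::wildcard(); arity]),
        }
    }
}

fn main() {
    // A `Some(_)`-style pattern: variant 1 with a single wildcard field.
    let pat = DeconstructedPat {
        ctor: Ctor::Variant(1),
        fields: vec![DeconstructedPat::wildcard()],
    };
    assert!(pat.specialize(1, 1).is_some());
    assert!(pat.specialize(0, 0).is_none());
}
```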
Currently, we use a relatively 'small' span for THIR
expressions generated by an 'adjustment' (e.g. an autoderef,
autoborrow, unsizing). As a result, if a borrow generated
by an adjustment ends up causing a borrowcheck error, for example:
```rust
let mut my_var = String::new();
let my_ref = &my_var;
my_var.push('a');
my_ref;
```
then the span for the mutable borrow may end up referring
to only the base expression (e.g. `my_var`), rather than
the method call which triggered the mutable borrow
(e.g. `my_var.push('a')`).
Due to a quirk of the MIR borrowck implementation,
this doesn't always get exposed in migration mode,
but it does in many cases.
This commit makes THIR building consistently use 'larger'
spans for adjustment expressions.
The intent of this change is to make it clearer to users
when it's the specific way in which a variable is
used (for example, in a method call) that produces
a borrowcheck error. For example, an error message
claiming that a 'mutable borrow occurs here' might
be confusing if it just points at a usage of a variable
(e.g. `my_var`), when no `&mut` is in sight. Pointing
at the entire expression should help to emphasize
that the method call itself is responsible for
the mutable borrow.
In several cases, this makes the `#![feature(nll)]` diagnostic
output match up exactly with the default (migration mode) output.
As a result, several `.nll.stderr` files end up getting removed
entirely.
Rollup of 12 pull requests
Successful merges:
- #88795 (Print a note if a character literal contains a variation selector)
- #89015 (core::ascii::escape_default: reduce struct size)
- #89078 (Cleanup: Remove needless reference in ParentHirIterator)
- #89086 (Stabilize `Iterator::map_while`)
- #89096 ([bootstrap] Improve the error message when `ninja` is not found to link to installation instructions)
- #89113 (dont `.ensure()` the `thir_abstract_const` query call in `mir_build`)
- #89114 (Fixes a technicality regarding the size of C's `char` type)
- #89115 (⬆️ rust-analyzer)
- #89126 (Fix ICE when `indirect_structural_match` is allowed)
- #89141 (Impl `Error` for `FromSecsError` without foreign type)
- #89142 (Fix match for placeholder region)
- #89147 (add case for checking const refs in check_const_value_eq)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Fix ICE when `indirect_structural_match` is allowed
Fixes #89088. The ICE is caused by `delay_good_path_bug()`, which is called (indirectly) from a `format!()` macro invocation. I have moved the macro invocation into the `decorate` closure of `struct_span_lint_hir()`, so that the macro is only invoked if the lint is not allowed (i.e., causes at least a warning, and thus prevents `delay_good_path_bug()` from firing).
dont `.ensure()` the `thir_abstract_const` query call in `mir_build`
might fix an ICE seen in #89022 (note: this PR does not close that issue) about attempting to read stolen thir. I couldn't repro the ICE but this `.ensure` seems sus anyway.
r? `@lcnr`
This just applies the suggested fixes from the compatibility warnings,
leaving in any that are in practice spurious. This is primarily intended to
provide a starting point to identify possible fixes to the migrations (e.g., by
avoiding spurious warnings).
A secondary commit cleans these up where they are false positives (as is true in
many of the cases).
In some cases, we emit borrowcheck diagnostics pointing
at a particular field expression in a struct expression
(e.g. `MyStruct { field: my_expr }`). However, this
behavior currently relies on us choosing the
`ConstraintCategory::Boring` with the 'correct' span.
When adding additional variants to `ConstraintCategory`,
(or changing existing usages away from `ConstraintCategory::Boring`),
the current behavior can easily get broken, since a non-boring
constraint will get chosen over a boring one.
To make the diagnostic output less fragile, this commit
adds a `ConstraintCategory::Usage` variant. We use this variant
for the temporary assignments created for each field of
an aggregate we are constructing.
Using this new variant, we can emit a message mentioning
"this usage", emphasizing the fact that the error message
is related to the specific use site (in the struct expression).
This is preparation for additional work on improving NLL error messages
(see #57374)
Add linting on non_exhaustive structs and enum variants
Add ui tests for non_exhaustive reachable lint
Rename to non_exhaustive_omitted_patterns and avoid triggering on if let
generic_const_exprs: use thir for abstract consts instead of mir
Changes `AbstractConst` building to use `thir` instead of `mir` so that there's less chance of consts unifying when they shouldn't because lowering to mir dropped information (see `abstract-consts-as-cast-5.rs` test)
r? `@lcnr`
Each pattern in a match arm has its own copy of the match guard in MIR,
with its own temporary, so it has to be dropped before the guards
are joined to the single copy of the arm.
MIR lowering for `if let` expressions is more complicated now that
`if let` exists in HIR. This PR adds a scope for the variables bound in
an `if let` expression and then uses an approach similar to how we
handle loops to ensure that we reliably drop the correct variables.
Previously, we would set up the source lines for `match` expressions so
that the code generated to perform the test of the scrutinee was matched
to the line of the arm that required the test, and the jump from the arm
block to the "next" block was matched to all of the lines in the `match`
expression.
While that makes sense, it has the side effect of causing strange
stepping behavior in debuggers.
I've changed the source information so that all of the generated tests
are sourced to `match {scrutinee}` and the jumps are sourced to the last
line of the block they are inside. This resolves the weird stepping
behavior in all debuggers and resolves some instances of "ambiguous
symbol" errors in WinDbg preventing the user from setting breakpoints at
`match` expressions.
RFC2229 Only compute place if upvars can be resolved
Closes https://github.com/rust-lang/rust/issues/87987
This PR fixes an ICE when trying to unwrap an Err. This error appears when trying to convert a PlaceBuilder into Place when upvars can't yet be resolved. We should only try to convert a PlaceBuilder into Place if upvars can be resolved.
r? `@nikomatsakis`
Name the captured upvars for closures/generators in debuginfo
Previously, debuggers print closures as something like
```
y::main::closure-0 (0x7fffffffdd34)
```
The pointer actually references to an upvar. It is not very obvious, especially for beginners.
That's because upvars didn't have names before, as they are packed into a tuple. This PR names the upvars, so we can expect to see something like
```
y::main::closure-0 {_captured_ref__b: 0x[...]}
```
r? `@tmandry`
Discussed at https://github.com/rust-lang/rust/pull/84752#issuecomment-831639489 .
Places are usually shallow and quick to visit. By contrast, computing
`is_freeze` can be much costlier, involving inference and trait
solving. Making sure to call `is_freeze` only when necessary should be
beneficial for performance in most cases.
This commit intends to fill out some of the remaining pieces of the
C-unwind ABI. This has a number of other changes with it though to move
this design space forward a bit. Notably contained within here is:
* On `panic=unwind`, the `extern "C"` ABI is now considered as "may
unwind". This fixes a longstanding soundness issue where if you
`panic!()` in an `extern "C"` function defined in Rust that's actually
UB because the LLVM representation for the function has the `nounwind`
attribute, but then you unwind.
* Whether or not a function unwinds now mainly considers the ABI of the
function instead of first checking the panic strategy. This fixes a
miscompile of `extern "C-unwind"` with `panic=abort` because that ABI
can still unwind.
* The aborting stub for non-unwinding ABIs with `panic=unwind` has been
reimplemented. Previously this was done as a small tweak during MIR
generation, but this has been moved to a separate and dedicated MIR
pass. This new pass will, for appropriate functions and function
calls, insert a `cleanup` landing pad for any function call that may
unwind within a function that is itself not allowed to unwind. Note
that this subtly changes some behavior from before: previously, on an
unwind that was caught-to-abort, active destructors in the function would
run; now the process simply aborts immediately.
* The `#[unwind]` attribute has been removed and all users in tests and
such are now using `C-unwind` and `#![feature(c_unwind)]`.
I think this is largely the last piece of the RFC to implement.
Unfortunately I believe this is still not stabilizable as-is because
activating the feature gate changes the behavior of the existing `extern
"C"` ABI in a way that has no replacement. My thinking for how to enable
this is that we add support for the `C-unwind` ABI on stable Rust first,
and then after it hits stable we change the behavior of the `C` ABI.
That way anyone straddling stable/beta/nightly can switch to `C-unwind`
safely.
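A user-level sketch of the ABI distinction (this needed `#![feature(c_unwind)]` when the commit landed; `C-unwind` is stable in current Rust):
```rust
extern "C-unwind" fn may_unwind() {
    // A panic is allowed to cross this ABI boundary and unwind into the caller.
    panic!("unwinding across the boundary is fine here");
}

extern "C" fn must_not_unwind() {
    // With panic=unwind, an unwind escaping this fn would hit the abort shim
    // described above and abort the process instead of continuing to unwind.
    println!("no panic here");
}

fn main() {
    must_not_unwind();
    // The panic escaping the "C-unwind" fn behaves like any Rust panic.
    let caught = std::panic::catch_unwind(|| may_unwind());
    assert!(caught.is_err());
}
```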
Properly find owner of closure in THIR unsafeck
Previously, when encountering a closure in a constant, the THIR unsafeck gets invoked on the owner of the constant instead of the constant itself, producing cycles.
Supersedes #87492. ```@FabianWolff``` thanks for your work on that PR, I copied your test file and added you as a co-author.
Fixes#87414.
r? ```@oli-obk```
Since RFC 3052 soft deprecated the authors field anyway, hiding it from
crates.io, docs.rs, and making Cargo not add it by default, and it is
not generally up to date/useful information, we should remove it from
crates in this repo.
Support -Z unpretty=thir-tree again
Currently `-Z unpretty=thir-tree` is broken after some THIR refactorings. This re-implements it, making it easier to debug THIR-related issues.
We have to do analyses before getting the THIR, since trying to create THIR from invalid HIR can ICE. But doing those analyses requires the THIR to be built and stolen. We work around this by creating a separate query to construct the THIR tree string representation.
Closes https://github.com/rust-lang/project-thir-unsafeck/issues/8, fixes#85552.
Combine two loops in `check_match`
Suggested by Nadrieril in
https://github.com/rust-lang/rust/pull/79051#discussion_r548778186.
Opening to get a perf run. Hopefully this code doesn't require everything in the
first loop to be done before running the second! (It shouldn't though.)
cc `@Nadrieril`
CTFE/Miri engine Pointer type overhaul
This fixes the long-standing problem that we are using `Scalar` as a type to represent pointers that might be integer values (since they point to a ZST). The main problem is that with int-to-ptr casts, there are multiple ways to represent the same pointer as a `Scalar` and it is unclear if "normalization" (i.e., the cast) already happened or not. This leads to ugly methods like `force_mplace_ptr` and `force_op_ptr`.
Another problem this solves is that in Miri, it would make a lot more sense to have the `Pointer::offset` field represent the full absolute address (instead of being relative to the `AllocId`). This means we can do ptr-to-int casts without access to any machine state, and it means that the overflow checks on pointer arithmetic are (finally!) accurate.
To solve this, the `Pointer` type is made entirely parametric over the provenance, so that we can use `Pointer<AllocId>` inside `Scalar` but use `Pointer<Option<AllocId>>` when accessing memory (where `None` represents the case that we could not figure out an `AllocId`; in that case the `offset` is an absolute address). Moreover, the `Provenance` trait determines if a pointer with a given provenance can be cast to an integer by simply dropping the provenance.
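For intuition, a standalone sketch of the shape of that change (these are not the actual interpreter types, just the idea of making provenance a type parameter):
```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
struct AllocId(u64);

#[derive(Clone, Copy, Debug)]
struct Pointer<Prov> {
    provenance: Prov,
    // In the Miri use case this holds the full absolute address rather than
    // an offset relative to the allocation.
    offset: u64,
}

// Values held in `Scalar` keep `Pointer<AllocId>`; memory accesses work with
// `Pointer<Option<AllocId>>`, where `None` means no allocation could be
// identified (e.g. after an int-to-ptr cast) and only the address is known.
fn for_memory_access(ptr: Pointer<AllocId>) -> Pointer<Option<AllocId>> {
    Pointer { provenance: Some(ptr.provenance), offset: ptr.offset }
}

fn main() {
    let p = Pointer { provenance: AllocId(7), offset: 0x1000 };
    println!("{:?}", for_memory_access(p));
}
```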
I hope this can be read commit-by-commit, but the first commit does the bulk of the work. It introduces some FIXMEs that are resolved later.
Fixes https://github.com/rust-lang/miri/issues/841
Miri PR: https://github.com/rust-lang/miri/pull/1851
r? `@oli-obk`
Update Rust Float-Parsing Algorithms to use the Eisel-Lemire algorithm.
# Summary
Rust, although it implements a correct float parser, has major performance issues in float parsing. Even for common floats, the performance can be 3-10x [slower](https://arxiv.org/pdf/2101.11408.pdf) than external libraries such as [lexical](https://github.com/Alexhuszagh/rust-lexical) and [fast-float-rust](https://github.com/aldanor/fast-float-rust).
Recently, major advances in float-parsing algorithms have been developed by Daniel Lemire, along with others, and implement a fast, performant, and correct float parser, with speeds up to 1200 MiB/s on Apple's M1 architecture for the [canada](0e2b5d163d/data/canada.txt) dataset, 10x faster than Rust's 130 MiB/s.
In addition, [edge-cases](https://github.com/rust-lang/rust/issues/85234) in Rust's [dec2flt](868c702d0c/library/core/src/num/dec2flt) algorithm can lead to over a 1600x slowdown relative to efficient algorithms. This is due to the use of Clinger's correct, but slow [AlgorithmM and Bellepheron](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.45.4152&rep=rep1&type=pdf), which have been improved by faster big-integer algorithms and the Eisel-Lemire algorithm, respectively.
Finally, this algorithm provides substantial improvements in the number of floats the Rust core library can parse. Denormal floats with a large number of digits cannot be parsed, due to use of the `Big32x40`, which simply does not have enough digits to round a float correctly. Using a custom decimal class, with much simpler logic, we can parse all valid decimal strings of any digit count.
```rust
// Issue in Rust's dec2flt.
"2.47032822920623272088284396434110686182e-324".parse::<f64>(); // Err(ParseFloatError { kind: Invalid })
```
# Solution
This pull request implements the Eisel-Lemire algorithm, modified from [fast-float-rust](https://github.com/aldanor/fast-float-rust) (which is licensed under Apache 2.0/MIT), along with numerous modifications to make it more amenable to inclusion in the Rust core library. The following describes both features in fast-float-rust and improvements in fast-float-rust for inclusion in core.
**Documentation**
Extensive documentation has been added to ensure the code base may be maintained by others, which explains the algorithms as well as various associated constants and routines. For example, two seemingly magical constants include documentation to describe how they were derived as follows:
```rust
// Round-to-even only happens for negative values of q
// when q ≥ −4 in the 64-bit case and when q ≥ −17 in
// the 32-bit case.
//
// When q ≥ 0, we have that 5^q ≤ 2m+1. In the 64-bit case, we
// have 5^q ≤ 2m+1 ≤ 2^54 or q ≤ 23. In the 32-bit case, we have
// 5^q ≤ 2m+1 ≤ 2^25 or q ≤ 10.
//
// When q < 0, we have w ≥ (2m+1)×5^−q. We must have that w < 2^64
// so (2m+1)×5^−q < 2^64. We have that 2m+1 > 2^53 (64-bit case)
// or 2m+1 > 2^24 (32-bit case). Hence, we must have 2^53×5^−q < 2^64
// (64-bit) and 2^24×5^−q < 2^64 (32-bit). Hence we have 5^−q < 2^11
// or q ≥ −4 (64-bit case) and 5^−q < 2^40 or q ≥ −17 (32-bit case).
//
// Thus we have that we only need to round ties to even when
// we have that q ∈ [−4, 23] (in the 64-bit case) or q ∈ [−17, 10]
// (in the 32-bit case). In both cases, the power of five (5^|q|)
// fits in a 64-bit word.
const MIN_EXPONENT_ROUND_TO_EVEN: i32;
const MAX_EXPONENT_ROUND_TO_EVEN: i32;
```
This ensures maintainability of the code base.
**Improvements for Disguised Fast-Path Cases**
The fast path in float parsing algorithms attempts to use native, machine floats to represent both the significant digits and the exponent, which is only possible if both can be exactly represented without rounding. In practice, this means that the significant digits must be 53 bits or fewer and the exponent must be in the range `[-22, 22]` (for an f64). This is similar to the existing dec2flt implementation.
However, disguised fast-path cases exist, where there are few significant digits and an exponent above the valid range, such as `1.23e25`. In this case, powers-of-10 may be shifted from the exponent to the significant digits, discussed at length in https://github.com/rust-lang/rust/issues/85198.
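A small, self-contained illustration of why that shift is legitimate (this shows the idea, not the library's internal code path):
```rust
fn main() {
    // 1.23e25 is 123 * 10^23; the exponent 23 is just outside the fast-path
    // range [-22, 22]. Moving one power of ten into the significand gives
    // 1230 * 10^22: both factors are exactly representable as f64, so a
    // single correctly-rounded multiplication produces the right result.
    let via_disguised_fast_path = 1230.0_f64 * 1e22_f64;
    assert_eq!(via_disguised_fast_path, "1.23e25".parse::<f64>().unwrap());
    println!("{}", via_disguised_fast_path);
}
```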
**Digit Parsing Improvements**
Typically, integer digits are parsed from the string one at a time, requiring unnecessary multiplications which can slow down parsing. An approach to parse 8 digits at a time using only 3 multiplications is described at length [here](https://johnnylee-sde.github.io/Fast-numeric-string-to-int/). This leads to significant performance improvements, and is implemented for both big- and little-endian systems.
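A self-contained sketch of that trick, the same SWAR approach fast-float-rust uses (the function and constant names here are mine):
```rust
/// Parse eight ASCII digits, passed as a little-endian `u64`, with three multiplications.
fn parse_8digits(mut v: u64) -> u64 {
    const MASK: u64 = 0x0000_00FF_0000_00FF;
    const MUL1: u64 = 0x000F_4240_0000_0064; // 100 + (1_000_000 << 32)
    const MUL2: u64 = 0x0000_2710_0000_0001; // 1 + (10_000 << 32)
    v -= 0x3030_3030_3030_3030; // strip the ASCII '0' offset from every byte
    v = (v * 10) + (v >> 8); // every even byte now holds a two-digit value
    // Two more multiplications combine the four two-digit groups.
    let v1 = (v & MASK).wrapping_mul(MUL1);
    let v2 = ((v >> 16) & MASK).wrapping_mul(MUL2);
    ((v1.wrapping_add(v2) >> 32) as u32) as u64
}

fn main() {
    let chunk = u64::from_le_bytes(*b"12345678");
    assert_eq!(parse_8digits(chunk), 12_345_678);
}
```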
**Unsafe Changes**
Relative to fast-float-rust, this library makes less use of unsafe functionality and clearly documents it. This includes the refactoring and documentation of numerous unsafe methods undesirably marked as safe. The original code would look something like this, which is deceptively marked as safe for unsafe functionality.
```rust
impl AsciiStr {
    #[inline]
    pub fn step_by(&mut self, n: usize) -> &mut Self {
        unsafe { self.ptr = self.ptr.add(n) };
        self
    }
}
...
#[inline]
fn parse_scientific(s: &mut AsciiStr<'_>) -> i64 {
    // the first character is 'e'/'E' and scientific mode is enabled
    let start = *s;
    s.step();
    ...
}
```
The new code clearly documents safety concerns, and does not mark unsafe functionality as safe, leading to better safety guarantees.
```rust
impl AsciiStr {
    /// Advance the view by n, advancing it in-place to (n..).
    pub unsafe fn step_by(&mut self, n: usize) -> &mut Self {
        // SAFETY: same as step_by, safe as long as n is less than the buffer length
        self.ptr = unsafe { self.ptr.add(n) };
        self
    }
}
...
/// Parse the scientific notation component of a float.
fn parse_scientific(s: &mut AsciiStr<'_>) -> i64 {
    let start = *s;
    // SAFETY: the first character is 'e'/'E' and scientific mode is enabled
    unsafe {
        s.step();
    }
    ...
}
```
This allows us to trivially demonstrate the new implementation of dec2flt is safe.
**Inline Annotations Have Been Removed**
In the previous implementation of dec2flt, inline annotations exist practically nowhere in the entire module. Therefore, these annotations have been removed, which mostly does not impact [performance](https://github.com/aldanor/fast-float-rust/issues/15#issuecomment-864485157).
**Fixed Correctness Tests**
Numerous compile errors in `src/etc/test-float-parse` were present, due to deprecation of `time.clock()`, as well as the crate dependencies with `rand`. The tests have therefore been reworked as a [crate](https://github.com/Alexhuszagh/rust/tree/master/src/etc/test-float-parse), and any errors in `runtests.py` have been patched.
**Undefined Behavior**
An implementation of `check_len` which relied on undefined behavior (in fast-float-rust) has been refactored, to ensure that the behavior is well-defined. The original code is as follows:
```rust
#[inline]
pub fn check_len(&self, n: usize) -> bool {
    unsafe { self.ptr.add(n) <= self.end }
}
```
And the new implementation is as follows:
```rust
/// Check if the slice is at least `n` in length.
fn check_len(&self, n: usize) -> bool {
    n <= self.as_ref().len()
}
```
Note that this has since been fixed in [fast-float-rust](https://github.com/aldanor/fast-float-rust/pull/29).
**Inferring Binary Exponents**
Rather than explicitly store binary exponents, this new implementation infers them from the decimal exponent, reducing the amount of static storage required. This removes the requirement to store [611 i16s](868c702d0c/library/core/src/num/dec2flt/table.rs (L8)).
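A sketch of the inference idea (the exact constant and rounding offsets in the final dec2flt code may differ):
```rust
// 217706 / 2^16 approximates log2(10) closely enough that the floored result
// matches floor(q * log2(10)) over the range of decimal exponents the parser
// cares about, so no table of binary exponents is needed.
fn binary_exponent_estimate(q: i64) -> i64 {
    (217_706 * q) >> 16
}

fn main() {
    assert_eq!(binary_exponent_estimate(1), 3);    // 10^1  ~ 2^3.32
    assert_eq!(binary_exponent_estimate(10), 33);  // 10^10 ~ 2^33.2
    assert_eq!(binary_exponent_estimate(-5), -17); // 10^-5 ~ 2^-16.6, floor is -17
}
```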
# Code Size
The code size, for all optimizations, does not change considerably relative to before for stripped builds; however, it is **significantly** smaller prior to stripping the resulting binaries. These binary sizes were calculated on x86_64-unknown-linux-gnu.
**new**
Using rustc version 1.55.0-dev.
|opt-level|size|size (stripped)|
|:-:|:-:|:-:|
|0|400k|300K|
|1|396k|292K|
|2|392k|292K|
|3|392k|296K|
|s|396k|292K|
|z|396k|292K|
**old**
Using rustc version 1.53.0-nightly.
|opt-level|size|size (stripped)|
|:-:|:-:|:-:|
|0|3.2M|304K|
|1|3.2M|292K|
|2|3.1M|284K|
|3|3.1M|284K|
|s|3.1M|284K|
|z|3.1M|284K|
# Correctness
The dec2flt implementation passes all of Rust's unit tests and comprehensive float parsing tests, along with numerous other tests such as Nigel Tao's comprehensive float [tests](https://github.com/nigeltao/parse-number-fxx-test-data) and Hrvoje Abraham's [strtod_tests](https://github.com/ahrvoje/numerics/blob/master/strtod/strtod_tests.toml). Therefore, it is unlikely that this algorithm will incorrectly round parsed floats.
# Issues Addressed
This will fix and close the following issues:
- resolves #85198
- resolves #85214
- resolves #85234
- fixes #31407
- fixes #31109
- fixes #53015
- resolves #68396
- closes https://github.com/aldanor/fast-float-rust/issues/15
Implementation is based off fast-float-rust, with a few notable changes.
- Some unsafe methods have been removed.
- Safe methods with inherently unsafe functionality have been removed.
- All unsafe functionality is documented and provably safe.
- Extensive documentation has been added for simpler maintenance.
- Inline annotations on internal routines have been removed.
- Fixed Python errors in src/etc/test-float-parse/runtests.py.
- Updated test-float-parse to be a library, to avoid missing rand dependency.
- Added regression tests for #31109 and #31407 in core tests.
- Added regression tests for #31109 and #31407 in ui tests.
- Use the existing slice primitive to simplify shared dec2flt methods
- Remove Miri ignores from dec2flt, due to faster parsing times.
- resolves #85198
- resolves #85214
- resolves #85234
- fixes #31407
- fixes #31109
- fixes #53015
- resolves #68396
- closes https://github.com/aldanor/fast-float-rust/issues/15
Implement Mutation- and BorrowOfLayoutConstrainedField in thir-unsafeck
Since nobody has so far claimed Mutation- and BorrowOfLayoutConstrainedField in rust-lang/project-thir-unsafeck#7, I have taken the liberty of implementing them in thir-unsafeck.
r? `@LeSeulArtichaut`
- Closures in external crates may get compiled in because of
monomorphization. We should store names of captured variables
in `optimized_mir`, so that they are written into the metadata
file and we can use them to generate debuginfo.
- If there are breakpoints inside closures, the names of captured
variables stored in `optimized_mir` can be used to print them.
Now the name is more precise when disjoint fields are captured.