Add MIR validation for unwinding out of nounwind functions + fixes to make validation pass
`@Nilstrieb` This is the MIR validation you asked for in https://github.com/rust-lang/rust/pull/112403#discussion_r1222739722.
Two passes need to be fixed to get the validation to pass:
* `RemoveNoopLandingPads` currently unconditionally introduces a resume block (even when there is none to begin with!); changed it to not do that.
* The generator state transform introduces an `assert` that may unwind, and its drop elaboration also introduces many new `UnwindAction`s, so in this case we run `AbortUnwindingCalls` after the transformation.
I believe this PR should also fix Rust-for-Linux/linux#1016, cc `@ojeda`
r? `@Nilstrieb`
Operand types are now tracked explicitly, so there is no need to reserve ID 0
for the special always-zero counter.
As part of the renumbering, this change fixes an off-by-one error in the way
counters were counted by the `coverageinfo` query. As a result, functions
should now have exactly the number of counters they actually need, instead of
always having an extra counter that is never used.
Operand types are now tracked explicitly, so there is no need for expression
IDs to avoid counter IDs by descending from `u32::MAX`. Instead they can just
count up from 0, and can be used directly as indices when necessary.
Because the three kinds of operand are now distinguished explicitly, we no
longer need fiddly code to disambiguate counter IDs and expression IDs based on
the total number of counters/expressions in a function.
This does increase the size of operands from 4 bytes to 8 bytes, but that
shouldn't be a big deal since they are mostly stored inside boxed structures,
and the current coverage code is not particularly size-optimized anyway.
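For illustration, a minimal sketch of the resulting operand shape (the names here are assumptions, not the actual compiler types), showing why the size goes from 4 to 8 bytes:
```rust
// Hedged sketch: three explicitly distinguished operand kinds instead of a
// packed 4-byte ID space. A tagged enum with a u32 payload is 8 bytes.
#[derive(Copy, Clone, Debug)]
enum CoverageOperand {
    Zero,
    Counter(u32),
    Expression(u32),
}

fn main() {
    assert_eq!(std::mem::size_of::<CoverageOperand>(), 8);
}
```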
interpret: Unify projections for MPlaceTy, PlaceTy, OpTy
For ~forever, we didn't really have proper shared code for handling projections into those three types. This is mostly because `PlaceTy` projections require `&mut self`: they might have to `force_allocate` to be able to represent a projection part-way into a local.
This PR finally fixes that, by enhancing `Place::Local` with an `offset` so that such an optimized place can point into a part of a place without requiring an in-memory representation. If we later write to that place, we will still do `force_allocate` -- for now we don't have an optimized path in `write_immediate` that would avoid allocation for partial overwrites of immediately stored locals. But in `write_immediate` we have `&mut self`, so at least this no longer pollutes all our type signatures.
(Ironically, I seem to distantly remember that many years ago, `Place::Local` *did* have an `offset`, and I removed it to simplify things. I guess I didn't realize why it was so useful... I am also not sure if this was actually used to achieve place projection on `&self` back then.)
The `offset` has type `Option<Size>`, where `None` represents "no projection was applied". This is needed because locals *can* be unsized (when they are arguments) but `Place::Local` cannot store metadata: if the offset is `None`, this refers to the entire local, so we can use the metadata of the local itself (which must be indirect); if a projection gets applied, since the local is indirect, it will turn into a `Place::Ptr`. (Note that even for indirect locals we can have `Place::Local`: when the local appears in MIR, we always start with `Place::Local`, and only check `frame.locals` later. We could eagerly normalize to `Place::Ptr` but I don't think that would actually simplify things much.)
Having done all that, we can finally properly abstract projections: we have a new `Projectable` trait that has the basic methods required for projecting, and then all projection methods are implemented for anything that implements that trait. We can even implement it for `ImmTy`! (Not that we need that, but it seems neat.) The visitor can be greatly simplified; it doesn't need its own trait any more but it can use the `Projectable` trait. We also don't need the separate `Mut` visitor any more; that was required only to reflect that projections on `PlaceTy` needed `&mut self`.
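A minimal, self-contained sketch of the idea (the names here are illustrative, not the interpreter's real types): one trait provides the projection primitive, and the shared projection helpers are written once against it.
```rust
// Hedged sketch of the pattern, not rustc's actual `Projectable` trait.
trait Projectable: Sized {
    /// Narrow `self` to the byte range [offset, offset + len).
    fn project_bytes(&self, offset: usize, len: usize) -> Self;
}

#[derive(Clone, Debug, PartialEq)]
struct ByteRange { start: usize, len: usize }

impl Projectable for ByteRange {
    fn project_bytes(&self, offset: usize, len: usize) -> Self {
        assert!(offset + len <= self.len);
        ByteRange { start: self.start + offset, len }
    }
}

// Written once for every `Projectable` -- the point of the refactor.
fn project_field<P: Projectable>(place: &P, field_offset: usize, field_len: usize) -> P {
    place.project_bytes(field_offset, field_len)
}

fn main() {
    let local = ByteRange { start: 0, len: 16 };
    assert_eq!(project_field(&local, 8, 4), ByteRange { start: 8, len: 4 });
}
```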
It is possible that there are some more `&mut self` that can now become `&self`... I guess we'll notice that over time.
r? `@oli-obk`
Get `!nonnull` metadata on slice iterators, without `assume`s
This updates the non-ZST paths to read the end pointer through a pointer-to-`NonNull`, so that they all get `!nonnull` metadata.
That means that the last `assume(!ptr.is_null())` can be deleted, without impacting codegen -- the codegen tests confirm the LLVM-IR ends up exactly the same as before.
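For illustration, a hedged sketch of the trick (not the library code; the function name is made up): loading the end pointer through a `NonNull` type is what lets the backend attach `!nonnull` to the load.
```rust
use std::ptr::NonNull;

// Hypothetical helper: read an end pointer as `NonNull<T>` so the load is
// known to be non-null. SAFETY assumption: the caller guarantees the stored
// pointer really is non-null (true for non-ZST slice iterators).
unsafe fn read_end_nonnull<T>(end: &*const T) -> NonNull<T> {
    *(end as *const *const T).cast::<NonNull<T>>()
}

fn main() {
    let x = 42u8;
    let p: *const u8 = &x;
    let nn = unsafe { read_end_nonnull(&p) };
    assert_eq!(unsafe { *nn.as_ptr() }, 42);
}
```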
It makes it sound like `ExprKind` and `Rvalue` are supposed to represent all pointer-related
casts, when in reality they're just used to share some enum variants. Make it clear that these
are only coercions, which also makes it clear why only some pointer-related "casts" are in the enum.
mir opt + codegen: handle subtyping
fixes #107205
The same issue was caused in multiple places:
- mir opts: both copy and destination propagation
- codegen: assigning operands to locals (which also propagates values)
I changed codegen to always update the type in the operands used for locals which should guard against any new occurrences of this bug going forward. I don't know how to make mir optimizations more resilient here. Hopefully the added tests will be enough to detect any trivially wrong optimizations going forward.
Warn on unused `offset_of!()` result
The usage of `core::hint::must_use()` means that we don't get a specialized message. I figured that since there are plenty of other methods that just have `#[must_use]` with no message, it'll be fine, but it is a bit unfortunate that the error mentions `must_use` and not `offset_of!`.
Fixes #111669.
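A small example of the new warning (the `offset_of` feature gate name is an assumption based on the nightly of the time):
```rust
#![feature(offset_of)]
use std::mem::offset_of;

#[allow(dead_code)]
struct S { a: u8, b: u32 }

fn main() {
    // Now warns that the result is unused; the message mentions `must_use`
    // rather than `offset_of!`, as noted above.
    offset_of!(S, b);
}
```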
[libs] Simplify `unchecked_{shl,shr}`
There's no need for the `const_eval_select` dance here. And while I originally wrote the `.try_into().unwrap_unchecked()` implementation here, it's kinda a mess in MIR -- this new one is substantially simpler, as shown by the old one being above the inlining threshold but the new one being below it in the `mir-opt/inline/unchecked_shifts` tests.
We don't need `u32::checked_shl` doing a dance through both `Result` *and* `Option` 🙃
Remove `box_free` lang item
This PR removes the `box_free` lang item, replacing it with `Box`'s `Drop` impl. Box dropping is still slightly magic because the contained value is still dropped by the compiler.
To reproduce the changes in this commit locally:
- Run `./x test tidy` and remove all the output files not associated
with a test file anymore, as reported by tidy.
- Run `./x test tests/mir-opt --bless` to generate the new outputs.
Only check inlining counter after recursing.
This PR aims to reduce the strength of https://github.com/rust-lang/rust/pull/105119 even more.
In the current implementation, we check the inline count before recursing. This means that we never actually reach inlining depth 3.
This PR checks the counter after recursion, to give a chance to inline at depth >= 3.
r? `@scottmcm`
cc `@JakobDegen`
Enable ConstGoto and SeparateConstSwitch passes by default
These 2 passes implement a limited form of jump-threading.
Filing this PR to see if enabling them would be lighter than https://github.com/rust-lang/rust/pull/107009.
Enable ScalarReplacementOfAggregates in optimized builds
Like MatchBranchSimplification, this pass is known to produce significant runtime improvements in Cranelift artifacts, and I believe based on the perf runs here that the primary effect of this pass is to empower MatchBranchSimplification. ScalarReplacementOfAggregates on its own has little effect on anything, but when this was rebased up to include https://github.com/rust-lang/rust/pull/112001 we started seeing significant and majority-positive results.
Based on the fact that we see most of the regressions in debug builds (https://github.com/rust-lang/rust/pull/112002#issuecomment-1566270144) and some rather significant ones in cycles and wall time, I'm only enabling this in optimized builds at the moment.
All the implementations of the trait are already `Copy`, and this seems to simplify the implementations enough to make the MIR inliner willing to inline basics like `Range::next`.
MIR: opt-in normalization of `BasicBlock` and `Local` numbering
This doesn't matter at all for actual codegen, but after spending some time reading pre-codegen MIR, I was wishing I didn't have to jump around so much in reading post-inlining code.
So this adds two passes that are off by default for every MIR level, but can be enabled (`-Zmir-enable-passes=+ReorderBasicBlocks,+ReorderLocals`) for humans.
This only affects `PreCodegen` MIR, and it would be nice for that to be resilient to permutations of things that don't affect the actual semantic behaviours.
Stop turning transmutes into discriminant reads in mir-opt
Partially reverts #109612, as after #109993 these aren't actually equivalent any more, and I'm no longer confident this was ever an improvement in the first place.
Having this "simplification" meant that similar-looking code actually did somewhat different things. For example,
```rust
pub unsafe fn demo1(x: std::cmp::Ordering) -> u8 {
std::mem::transmute(x)
}
pub unsafe fn demo2(x: std::cmp::Ordering) -> i8 {
std::mem::transmute(x)
}
```
in nightly today is generating <https://rust.godbolt.org/z/dPK58zW18>
```llvm
define noundef i8 @_ZN7example5demo117h341ef313673d2ee6E(i8 noundef %x) unnamed_addr #0 {
%0 = icmp uge i8 %x, -1
%1 = icmp ule i8 %x, 1
%2 = or i1 %0, %1
call void @llvm.assume(i1 %2)
ret i8 %x
}
define noundef i8 @_ZN7example5demo217h5ad29f361a3f5700E(i8 noundef %0) unnamed_addr #0 {
%x = alloca i8, align 1
store i8 %0, ptr %x, align 1
%1 = load i8, ptr %x, align 1, !range !2, !noundef !3
ret i8 %1
}
```
Which feels too different when the original code is essentially identical.
---
Aside: that example is different *after* optimizations too:
```llvm
define noundef i8 @_ZN7example5demo117h341ef313673d2ee6E(i8 noundef returned %x) unnamed_addr #0 {
%0 = add i8 %x, 1
%1 = icmp ult i8 %0, 3
tail call void @llvm.assume(i1 %1)
ret i8 %x
}
define noundef i8 @_ZN7example5demo217h5ad29f361a3f5700E(i8 noundef returned %0) unnamed_addr #1 {
ret i8 %0
}
```
so turning the `Transmute` into a `Discriminant` was arguably just making things worse, so leaving it alone instead -- and thus having less code in rustc -- seems clearly better.
Debug format `Const`s less verbosely
Not a user-visible change; it's only visible to people debugging const generics.
Currently the debug output for `ty::Const` is super verbose (even for `-Zverbose` lol), with things like printing infer vars as `Infer(Var(?0c))` instead of just `?0c`, and bound vars and placeholders not using the `^0_1` or `!0_1` syntax respectively. With these changes it's imo better, but not perfect:
`Const { ty: usize, kind: ^0_1 }`
is still a lot for not much information. Not entirely sure what to do about that, so not dealing with it yet.
We also need to do formatting for `ConstKind::Expr` at some point, since right now it sucks (doesn't even print anything with `Display`); not gonna do that in this PR either.
r? `@compiler-errors`
Verify copies of mutable pointers in 2 stages in ReferencePropagation
Fixes#111422
In the first stage, we mark the copies as reborrows, to be checked later.
In the second stage, we walk the reborrow chains to verify that all stages are fully replaceable.
The replacement itself mirrors the check, and iterates through the reborrow chain.
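A hedged illustration of the kind of reborrow chain involved (a made-up example, not the reproduction from #111422):
```rust
// Each `&mut *…` below is a copy of a mutable pointer in MIR; the pass marks
// them as reborrows in the first stage and walks the chain in the second.
fn reborrow_chain(x: &mut i32) -> i32 {
    let a = &mut *x; // reborrow of `x`
    let b = &mut *a; // reborrow of `a`
    *b += 1;
    *b
}

fn main() {
    let mut v = 41;
    assert_eq!(reborrow_chain(&mut v), 42);
}
```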
r? `@RalfJung`
cc `@JakobDegen`
Implement builtin # syntax and use it for offset_of!(...)
Add `builtin #` syntax to the parser, as well as a generic infrastructure to support both item and expression position builtin syntaxes. The PR also uses this infrastructure for the implementation of the `offset_of!` macro, added by #106934.
cc `@petrochenkov` `@DrMeepster`
cc #110680 `builtin #` tracking issue
cc #106655 `offset_of!` tracking issue
Disable nrvo mir opt
See #111005 and #110902. The ICE can definitely be hit on stable, the miscompilation I'm not sure about. The pass makes some pretty sketchy assumptions though, and we should not have it on while that's the case.
I'm not going to work on actually fixing this, it's probably not excessively difficult though.
r? rust-lang/mir-opt
ConstProp into PlaceElem::Index.
Noticed this while looking at keccak output MIR.
This pass aims to replace `ProjectionElem::Index` with `ProjectionElem::ConstantIndex` during ConstProp.
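For illustration, a hedged example of the pattern this targets: once `i` is known to be a constant, the `Index(i)` projection in `a[i]` can become a constant index.
```rust
pub fn f(a: [u8; 4]) -> u8 {
    let i = 2;
    a[i] // `Index(i)` can be turned into a `ConstantIndex` after ConstProp
}

fn main() {
    assert_eq!(f([10, 20, 30, 40]), 30);
}
```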
r? `@ghost`
Rename InstCombine to InstSimplify
```
╭ ➜ ben@archlinux:~/rust
╰ ➤ rg -i instcombine
src/doc/rustc-dev-guide/src/mir/optimizations.md
134:may have been misapplied. Examples of this are `InstCombine` and `ConstantPropagation`.
src/ci/docker/host-x86_64/disabled/dist-x86_64-haiku/llvm-config.sh
38: instcombine instrumentation interpreter ipo irreader lanai \
tests/codegen/slice_as_from_ptr_range.rs
4:// min-llvm-version: 15.0 (because this is a relatively new instcombine)
```
r? `@scottmcm`
Reduce MIR dump file count for MIR-opt tests
As referenced in issue #109502, mir-opt tests previously used the `-Zdump-mir=all` flag, which generates very large output. This PR only dumps the passes under test, greatly reducing dump output.
Don't validate constants in const propagation
Validation is neither necessary nor desirable.
The constant validation is already omitted at mir-opt-level >= 3, so there are no changes in MIR test output (the propagation of invalid constants is covered by an existing test in `tests/mir-opt/const_prop/invalid_constant.rs`).
Make `mem::replace` simpler in codegen
Since they'd mentioned more intrinsics for simplifying stuff recently,
r? `@WaffleLapkin`
This is a continuation of me looking at foundational stuff that ends up with more instructions than it really needs. Specifically I noticed this one because `Range::next` isn't MIR-inlining, and one of the largest parts of it is a `replace::<usize>` that's a good dozen instructions instead of the two it could be.
So this means that `ptr::write` with a `Copy` type no longer generates worse IR than manually dereferencing (well, at least in LLVM -- MIR still has bonus pointer casts), and in doing so means that we're finally down to just the two essential `memcpy`s when emitting `mem::replace` for a large type, rather than the bonus-`alloca` and three `memcpy`s we emitted before this ([or the 6 we currently emit in 1.69 stable](https://rust.godbolt.org/z/67W8on6nP)). That said, LLVM does _usually_ manage to optimize the extra code away. But it's still nice for it not to have to do as much, thanks to (for example) not going through an `alloca` when `replace`ing a primitive like a `usize`.
(This is a new intrinsic, but one that's immediately lowered to existing MIR constructs, so not anything that MIRI or the codegen backends or MIR semantics needs to do work to handle.)
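As a rough sketch of what `mem::replace` amounts to semantically (the PR expresses this via the new intrinsic lowered to plain MIR; this is not the library implementation):
```rust
// Hedged sketch: read the old value out, write the new one in, return the old.
pub fn replace_sketch<T>(dest: &mut T, src: T) -> T {
    unsafe {
        let old = std::ptr::read(dest);
        std::ptr::write(dest, src);
        old
    }
}

fn main() {
    let mut x = 1usize;
    assert_eq!(replace_sketch(&mut x, 2), 1);
    assert_eq!(x, 2);
}
```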
Tweak await span to not contain dot
Fixes a discrepancy between method calls and await expressions where the latter are desugared to have a span that *contains* the dot (i.e. `.await`) but method call identifiers don't contain the dot. This leads to weird suggestions in borrowck -- see the linked issue.
Fixes #110761
This mostly touches a bunch of tests to tighten their `await` span.
Use MIR's `Offset` for pointer `add` too
~~Status: draft while waiting for #110822 to land, since this is built atop that.~~
~~r? `@ghost`~~
Canonical Rust code has mostly moved to `add`/`sub` on pointers, which take `usize`, instead of `offset` which takes `isize`. (And, relatedly, when `sub_ptr` was added it turned out it replaced every single in-tree use of `offset_from`, because `usize` is just so much more useful than `isize` in Rust.)
Unfortunately, `intrinsics::offset` could only accept `*const` and `isize`, so there's a *huge* amount of type conversions back and forth being done. They're identity conversions in the backend, but still end up producing quite a lot of unhelpful MIR.
This PR changes `intrinsics::offset` to accept `*const` *and* `*mut` along with `isize` *and* `usize`. Conveniently, the backends and CTFE already handle this, since MIR's `BinOp::Offset` [already supports all four combinations](adaac6b166/compiler/rustc_const_eval/src/transform/validate.rs (L523-L528)).
To demonstrate the difference, I added some `mir-opt/pre-codegen/` tests around slice indexing. Here's the difference to `[T]::get_mut`, since it uses `<*mut _>::add` internally:
```diff
@@ -79,30 +70,21 @@ fn slice_get_mut_usize(_1: &mut [u32], _2: usize) -> Option<&mut u32> {
StorageLive(_12); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageLive(_9); // scope 6 at $SRC_DIR/core/src/slice/index.rs:LL:COL
_9 = _8 as *mut u32 (PtrToPtr); // scope 11 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_13); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _13 = _2 as isize (IntToInt); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_14); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_15); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _15 = _9 as *const u32 (Pointer(MutToConstPointer)); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _14 = Offset(move _15, _13); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_15); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _7 = move _14 as *mut u32 (PtrToPtr); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_14); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_13); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
+ _7 = Offset(_9, _2); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
StorageDead(_9); // scope 6 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageDead(_12); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageDead(_11); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
```
1c1c8e442a (diff-a841b6a4538657add3f39bc895744331453d0625e7aace128b1f604f0b63c8fdR80)
More core::fmt::rt cleanup.
- Removes the `V1` suffix from the `Argument` and `Flag` types.
- Moves more of the format_args lang items into the `core::fmt::rt` module. (The only remaining lang item in `core::fmt` is `Arguments` itself, which is a public type.)
Part of https://github.com/rust-lang/rust/issues/99012
Follow-up to https://github.com/rust-lang/rust/pull/110616
`IntoFuture::into_future` is no longer unstable
We don't need to gate the `IntoFuture::into_future` call in `.await` lowering anymore.
`@bors` rollup
They're semantically the same, so this means the backends don't need to handle the intrinsic, and it means fewer MIR basic blocks in pointer arithmetic code.
Remove the size of locals heuristic in MIR inlining
This heuristic doesn't necessarily correlate to complexity of the MIR Body. In particular, a lot of straight-line code in MIR tends to never reuse a local, even though any optimizer would effectively reuse the storage or just put everything in registers. So it doesn't even necessarily make sense that this would be a stack size heuristic.
So... what happens if we just delete the heuristic? The benchmark suite improves significantly. Less heuristics better?
r? `@cjgillot`
Run various queries from other queries instead of explicitly in phases
These are just legacy leftovers from when rustc didn't have a query system. While there are more cleanups of this sort that can be done here, I want to land them in smaller steps.
This phased order of query invocations was already a lie, as any query that looks at types (e.g. the wf checks run before) can invoke e.g. const eval which invokes borrowck, which invokes typeck, ...
Add offset_of! macro (RFC 3308)
Implements https://github.com/rust-lang/rfcs/pull/3308 (tracking issue #106655) by adding the built in macro `core::mem::offset_of`. Two of the future possibilities are also implemented:
* Nested field accesses (without array indexing)
* DST support (for `Sized` fields)
I wrote this a few months ago, before the RFC merged. Now that it's merged, I decided to rebase and finish it.
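A hedged usage sketch of the nested-field support (the feature gate name and exact syntax are assumptions based on the nightly of the time):
```rust
#![feature(offset_of)]
use core::mem::offset_of;

#[allow(dead_code)]
#[repr(C)]
struct Inner { a: u8, b: u32 }

#[allow(dead_code)]
#[repr(C)]
struct Outer { x: u16, inner: Inner }

fn main() {
    // Nested field access (no array indexing): `x` (2 bytes) plus padding (2)
    // puts `inner` at offset 4, and `b` sits at offset 4 within `Inner`.
    assert_eq!(offset_of!(Outer, inner.b), 8);
}
```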
cc `@thomcc` (RFC author)
Deduplicate unreachable blocks, for real this time
In https://github.com/rust-lang/rust/pull/106428 (in particular 41eda69516) we noticed that inlining `unreachable_unchecked` can produce duplicate unreachable blocks. So we improved two MIR optimizations: `SimplifyCfg` was given a simplify to deduplicate unreachable blocks, then `InstCombine` was given a combiner to deduplicate switch targets that point at the same block. The problem is that change doesn't actually work.
Our current pass order is
```
SimplifyCfg (does nothing relevant to this situation)
Inline (produces multiple unreachable blocks)
InstCombine (doesn't do anything here, oops)
SimplifyCfg (produces the duplicate SwitchTargets that InstCombine is looking for)
```
So in here, I have factored out the specific function from `InstCombine` and placed it inside the simplify that produces the case it is looking for. This should ensure that it runs in the scenario it was designed for.
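For context, a hedged illustration of the kind of code affected: after inlining `unreachable_unchecked`, a `match` like this can end up with several duplicate `unreachable` blocks that ought to be merged.
```rust
pub unsafe fn one_or_two(x: u8) -> u8 {
    match x {
        1 => 10,
        2 => 20,
        // SAFETY: the caller promises `x` is 1 or 2.
        _ => std::hint::unreachable_unchecked(),
    }
}

fn main() {
    assert_eq!(unsafe { one_or_two(2) }, 20);
}
```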
Fixes https://github.com/rust-lang/rust/issues/110551
r? `@cjgillot`
Refactor unwind in MIR
This changes unwinding from the current `Option<BasicBlock>` into
```rust
enum UnwindAction {
Continue,
Cleanup(BasicBlock),
Unreachable,
Terminate,
}
```
cc `@JakobDegen` `@RalfJung` `@Amanieu`
Check pattern refutability on THIR
The current `check_match` query is based on HIR, but partially re-lowers HIR into THIR.
This PR proposes to use the results of the `thir_body` query to check matches, instead of re-building THIR.
Most of the diagnostic changes are spans getting shorter, or commas/semicolons not getting removed.
This PR degrades the diagnostic for confusing constants in patterns (`let A = foo()` where `A` resolves to a `const A` somewhere): it does not point to the definition of `const A` any more.
Insert alignment checks for pointer dereferences when debug assertions are enabled
Closes https://github.com/rust-lang/rust/issues/54915
- [x] Jake tells me this sounds like a place to use `MirPatch`, but I can't figure out how to insert a new basic block with a new terminator in the middle of an existing basic block, using `MirPatch`. (if nobody else backs up this point I'm checking this as "not actually a good idea" because the code looks pretty clean to me after rearranging it a bit)
- [x] Using `CastKind::PointerExposeAddress` is definitely wrong, we don't want to expose. Calling a function to get the pointer address seems quite excessive. ~I'll see if I can add a new `CastKind`.~ `CastKind::Transmute` to the rescue!
- [x] Implement a more helpful panic message like slice bounds checking.
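A hedged example of what the inserted check catches; note this program has undefined behaviour by design, and with debug assertions enabled the new check turns it into a panic with a helpful message instead:
```rust
fn main() {
    let data = [0u8; 8];
    // One byte past the start, so misaligned for a u32 read.
    let p = data.as_ptr().wrapping_add(1) as *const u32;
    // Panics under debug assertions (e.g. -Cdebug-assertions=yes).
    let _x = unsafe { *p };
}
```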
r? `@oli-obk`
Use span of placeholders in format_args!() expansion.
`format_args!("{}", x)` expands to something that contains `Argument::new_display(&x)`. That entire expression was generated with the span of `x`.
After this PR, `&x` uses the span of `x`, but the `new_display` call uses the span of the `{}` placeholder within the format string. If an implicitly captured argument was used like in `format_args!("{x}")`, both use the span of the `{x}` placeholder.
This fixes https://github.com/rust-lang/rust/issues/109576, and also allows for more improvements to similar diagnostics in the future, since the usage of `x` can now be traced to the exact `{}` placeholder that required it to be `Display` (or `Debug` etc.)
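A hedged illustration of the diagnostics this improves; the snippet intentionally does not compile, and the `Display` error for `x` can now point at the `{x}` placeholder inside the format string:
```rust
struct NotDisplay;

fn main() {
    let x = NotDisplay;
    // error[E0277]: `NotDisplay` doesn't implement `std::fmt::Display`
    println!("value: {x}");
}
```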
Thanks to the combination of #108246 and #108442 it could already remove identity transmutes.
With this PR, it can also simplify them to `IntToInt` casts, `Discriminant` reads, or `Field` projections.
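Hedged examples of the kinds of transmutes this is about (illustrative only; the exact set of accepted shapes lives in the pass itself):
```rust
pub fn int_to_int(x: i32) -> u32 {
    // Same-size integer transmute: a candidate for an `IntToInt` cast.
    unsafe { std::mem::transmute(x) }
}

pub fn transparent_newtype(x: std::num::Wrapping<u32>) -> u32 {
    // Transparent newtype: a candidate for a `Field` projection.
    unsafe { std::mem::transmute(x) }
}

fn main() {
    assert_eq!(int_to_int(-1), u32::MAX);
    assert_eq!(transparent_newtype(std::num::Wrapping(7)), 7);
}
```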
Permit the MIR inliner to inline diverging functions
This heuristic prevents inlining of `hint::unreachable_unchecked`, which in turn makes `Option/Result::unwrap_unchecked` a bad inlining candidate. I looked through the changes to `core`, `alloc`, `std`, and `hashbrown` by hand and they all seem reasonable. Let's see how this looks in perf...
---
Based on rustc-perf it looks like this regresses ctfe-stress, and the cachegrind diff indicates that this regression is in `InterpCx::statement`. I don't know how to do any deeper analysis because that function is _enormous_ in the try toolchain, which has no debuginfo in it. And a local build produces significantly different codegen for that function, even with LTO.
Simpler checked shifts in MIR building
Doing masking to check unsigned shift amounts is overcomplicated; just comparing the shift directly saves a statement and a temporary, as well as is much easier to read as a human. And shifting by unsigned is the canonical case -- notably, all the library shifting methods (that don't support every type) take shift RHSs as `u32` -- so we might as well make that simpler since it's easy to do so.
This PR also changes *signed* shift amounts to `IntToInt` casts and then uses the same check as for unsigned. The bit-masking is a nice trick, but for example LLVM actually canonicalizes it to an unsigned comparison anyway <https://rust.godbolt.org/z/8h59fMGT4> so I don't think it's worth the effort and the extra `Constant`. (If MIR's `assert` was `assert_nz` then the masking might make sense, but when the `!=` uses another statement I think the comparison is better.)
To review, I suggest looking at 2ee0468c49 first -- that's the interesting code change and has a MIR diff.
My favourite part of the diff:
```diff
- _20 = BitAnd(_19, const 340282366920938463463374607431768211448_u128); // scope 0 at $DIR/shifts.rs:+2:34: +2:44
- _21 = Ne(move _20, const 0_u128); // scope 0 at $DIR/shifts.rs:+2:34: +2:44
- assert(!move _21, "attempt to shift right by `{}`, which would overflow", _19) -> [success: bb3, unwind: bb7]; // scope 0 at $DIR/shifts.rs:+2:34: +2:44
+ _18 = Lt(_17, const 8_u128); // scope 0 at $DIR/shifts.rs:+2:34: +2:44
+ assert(move _18, "attempt to shift right by `{}`, which would overflow", _17) -> [success: bb3, unwind: bb7]; // scope 0 at $DIR/shifts.rs:+2:34: +2:44
```
Updates `interpret`, `codegen_ssa`, and `codegen_cranelift` to consume the new cast instead of the intrinsic.
Includes `CastTransmute` for custom MIR building, to be able to test the extra UB.
Custom MIR: Allow optional RET type annotation
This currently doesn't compile because the type of `RET` is inferred, which fails if RET is a composite type and fields are initialised separately.
```rust
#![feature(custom_mir, core_intrinsics)]
extern crate core;
use core::intrinsics::mir::*;

#[custom_mir(dialect = "runtime", phase = "optimized")]
fn fn0() -> (i32, bool) {
    mir! ({
        RET.0 = 0;
        RET.1 = true;
        Return()
    })
}
```
```
error[E0282]: type annotations needed
 --> src/lib.rs:8:9
  |
8 |         RET.0 = 0;
  |         ^^^ cannot infer type

For more information about this error, try `rustc --explain E0282`.
```
This PR allows the user to manually specify the return type with `type RET = ...;` if required:
```rust
#[custom_mir(dialect = "runtime", phase = "optimized")]
fn fn0() -> (i32, bool) {
    mir! (
        type RET = (i32, bool);
        {
            RET.0 = 0;
            RET.1 = true;
            Return()
        }
    )
}
```
The syntax is not optimal, I'm happy to see other suggestions. Ideally I wanted it to be a normal type annotation like `let RET: ...;`, but this runs into the multiple parsing options error during macro expansion, as it can be parsed as a normal `let` declaration as well.
r? `@oli-obk` or `@tmiasko` or `@JakobDegen`
move Option::as_slice to intrinsic
`@scottmcm` suggested on #109095 that I use a direct approach of unpacking the operation in MIR lowering, so here's the implementation.
cc `@nikic` as this should hopefully unblock #107224 (though perhaps other changes to the prior implementation, which I left for bootstrapping, are needed).
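For reference, a usage sketch of the method whose lowering changes here (the `option_as_slice` feature gate name is an assumption based on the nightly of the time):
```rust
#![feature(option_as_slice)]

fn main() {
    let some = Some(5i32);
    let none: Option<i32> = None;
    assert_eq!(some.as_slice(), &[5]);
    assert!(none.as_slice().is_empty());
}
```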
Use index based drop loop for slices and arrays
Instead of building two kinds of drop pair loops, of which only one will be eventually used at runtime in a given monomorphization, always use index based loop.
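A sketch in surface Rust of the index-based shape (drop elaboration emits MIR, not this code; the names here are purely illustrative):
```rust
// Hedged sketch: one forward loop over indices, dropping each element.
unsafe fn drop_slice_in_place<T>(ptr: *mut T, len: usize) {
    let mut i = 0;
    while i < len {
        std::ptr::drop_in_place(ptr.add(i));
        i += 1;
    }
}

fn main() {
    let mut v = vec![String::from("a"), String::from("b")];
    let (ptr, len) = (v.as_mut_ptr(), v.len());
    unsafe {
        // Take over the elements, then drop them via the sketch above.
        v.set_len(0);
        drop_slice_in_place(ptr, len);
    }
}
```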
Wrap the whole LocalInfo in ClearCrossCrate.
MIR contains a lot of information about locals. The primary purpose of this information is the quality of borrowck diagnostics.
This PR aims to drop this information after MIR analyses are finished, i.e. starting from post-cleanup runtime MIR.
Implement checked Shl/Shr at MIR building.
This does not require any special handling by codegen backends,
as the overflow behaviour is entirely determined by the rhs (shift amount).
This allows MIR ConstProp to remove the overflow check for constant shifts.
~~There is an existing different behaviour between cg_llvm and cg_clif (cc `@bjorn3`).
I took cg_llvm's one as reference: overflow if `rhs < 0 || rhs > number_of_bits_in_lhs_ty`.~~
EDIT: `cg_llvm` and `cg_clif` implement the overflow check differently. This PR uses `cg_llvm`'s implementation, based on a `BitAnd`, instead of `cg_clif`'s one, based on an unsigned comparison.
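For a `u8` shifted by a `u32` amount, the two checks are equivalent; a small hedged illustration (not the generated MIR):
```rust
// `BitAnd`-based check, as in cg_llvm: any bit outside the low 3 bits is set.
fn overflows_bitand(rhs: u32) -> bool {
    (rhs & !(u8::BITS - 1)) != 0
}

// Unsigned-comparison check, as in cg_clif.
fn overflows_cmp(rhs: u32) -> bool {
    rhs >= u8::BITS
}

fn main() {
    for rhs in 0..64 {
        assert_eq!(overflows_bitand(rhs), overflows_cmp(rhs));
    }
}
```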
Ensure `ptr::read` gets all the same LLVM `load` metadata that dereferencing does
I was looking into `array::IntoIter` optimization, and noticed that it wasn't annotating the loads with `noundef` for simple things like `array::IntoIter<i32, N>`. Trying to narrow it down, it seems that was because `MaybeUninit::assume_init_read` isn't marking the load as initialized (<https://rust.godbolt.org/z/Mxd8TPTnv>), which is unfortunate since that's basically its reason to exist.
The root cause is that `ptr::read` is currently implemented via the *untyped* `copy_nonoverlapping`, and thus the `load` doesn't get any type-aware metadata: no `noundef`, no `!range`. This PR solves that by lowering `ptr::read(p)` to `copy *p` in MIR, for which the backends already do the right thing.
Fortuitously, this also improves the IR we give to LLVM for things like `mem::replace`, and fixes a couple of long-standing bugs where `ptr::read` on `Copy` types was worse than `*`ing them.
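A hedged illustration of the equivalence being relied on: after the change, both functions lower to the same typed copy, so the backend sees the same load metadata for each.
```rust
pub unsafe fn via_read(p: *const bool) -> bool {
    std::ptr::read(p)
}

pub unsafe fn via_deref(p: *const bool) -> bool {
    *p
}

fn main() {
    let b = true;
    assert_eq!(unsafe { via_read(&b) }, unsafe { via_deref(&b) });
}
```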
Zulip conversation: <https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Move.20array.3A.3AIntoIter.20to.20ManuallyDrop/near/341189936>
cc `@erikdesjardins` `@JakobDegen` `@workingjubilee` `@the8472`
Fixes #106369
Fixes #73258
Remove `identity_future` indirection
This was previously needed because the indirection used to hide some unexplained lifetime errors, which it turned out were related to the `min_choice` algorithm.
Removing the indirection also solves a couple of cycle errors, large moves, and makes async blocks support the `#[track_caller]` annotation.
Fixes https://github.com/rust-lang/rust/issues/104826.
Remove `box_syntax`
r? `@Nilstrieb`
This removes the feature `box_syntax`, which allows the use of `box <expr>` to create a Box, and finalises removing use of the feature from the compiler. `box_patterns` (allowing the use of `box <pat>` in a pattern) is unaffected.
It also removes `ast::ExprKind::Box` - the only way to create a 'box' expression now is with the rustc-internal `#[rustc_box]` attribute.
As a temporary measure to help users move away, `box <expr>` now parses the inner expression and emits a `MachineApplicable` lint to replace it with `Box::new`.
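A hedged before/after of what the machine-applicable suggestion produces:
```rust
fn main() {
    // let b = box 5;    // no longer accepted
    let b = Box::new(5); // what the suggestion rewrites it to
    assert_eq!(*b, 5);
}
```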
Closes #49733
Strengthen state tracking in const-prop
Some/many of the changes are replicated between both the const-prop lint and the const-prop optimization.
Behaviour changes:
- const-prop opt does not give a span to propagated values. This was useless as that span's primary purpose is to diagnose evaluation failure in codegen.
- we remove the `OnlyPropagateInto` mode. It was only used for function arguments, which are better modeled by a write before entry.
- the tracking of assignments and discriminants make clearer that we do nothing in `NoPropagation` mode or on indirect places.
I was looking into `array::IntoIter` optimization, and noticed that it wasn't annotating the loads with `noundef` for simple things like `array::IntoIter<i32, N>`.
It turned out to be a more general problem, as `MaybeUninit::assume_init_read` isn't marking the load as initialized (<https://rust.godbolt.org/z/Mxd8TPTnv>), which is unfortunate since that's basically its reason to exist.
This PR lowers `ptr::read(p)` to `copy *p` in MIR, which fortuitiously also improves the IR we give to LLVM for things like `mem::replace`.
Rollup of 8 pull requests
Successful merges:
- #108754 (Retry `pred_known_to_hold_modulo_regions` with fulfillment if ambiguous)
- #108759 (1.41.1 supported 32-bit Apple targets)
- #108839 (Canonicalize root var when making response from new solver)
- #108856 (Remove DropAndReplace terminator)
- #108882 (Tweak E0740)
- #108898 (Set `LIBC_CHECK_CFG=1` when building Rust code in bootstrap)
- #108911 (Improve rustdoc-gui/tester.js code a bit)
- #108916 (Remove an unused return value in `rustc_hir_typeck`)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Do not consider `&mut *x` as mutating `x` in `CopyProp`
This PR removes an unfortunate overly cautious case from the current implementation.
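For illustration, a hedged example of the case this unblocks: `&mut *x` reborrows through `x` but does not overwrite the local `x` itself, so copies of `x` no longer have to be treated as invalidated.
```rust
fn through_reborrow(x: &mut i32) -> i32 {
    let y = &mut *x; // a reborrow, not a mutation of the local `x`
    *y += 1;
    *y
}

fn main() {
    let mut v = 41;
    assert_eq!(through_reborrow(&mut v), 42);
}
```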
Found by https://github.com/rust-lang/rust/pull/105274 cc `@saethlin`
This commit desugars the drop and replace deriving from an
assignment at MIR build, avoiding the construction of the
DropAndReplace terminator (which will be removed in a following PR).
In order to retain the same error messages for replaces, a new
DesugaringKind::Replace variant is introduced.
Stabilize `#![feature(target_feature_11)]`
## Stabilization report
### Summary
Allows for safe functions to be marked with `#[target_feature]` attributes.
Functions marked with `#[target_feature]` are generally considered as unsafe functions: they are unsafe to call, cannot be assigned to safe function pointers, and don't implement the `Fn*` traits.
However, calling them from other `#[target_feature]` functions with a superset of features is safe.
```rust
// Demonstration function
#[target_feature(enable = "avx2")]
fn avx2() {}

fn foo() {
    // Calling `avx2` here is unsafe, as we must ensure
    // that AVX is available first.
    unsafe {
        avx2();
    }
}

#[target_feature(enable = "avx2")]
fn bar() {
    // Calling `avx2` here is safe.
    avx2();
}
```
### Test cases
Tests for this feature can be found in [`src/test/ui/rfcs/rfc-2396-target_feature-11/`](b67ba9ba20/src/test/ui/rfcs/rfc-2396-target_feature-11/).
### Edge cases
- https://github.com/rust-lang/rust/issues/73631
Closures defined inside functions marked with `#[target_feature]` inherit the target features of their parent function. They can still be assigned to safe function pointers and implement the appropriate `Fn*` traits.
```rust
#[target_feature(enable = "avx2")]
fn qux() {
    let my_closure = || avx2(); // this call to `avx2` is safe
    let f: fn() = my_closure;
}
```
This means that in order to call a function with `#[target_feature]`, you must show that the target-feature is available while the function executes *and* for as long as whatever may escape from that function lives.
### Documentation
- Reference: https://github.com/rust-lang/reference/pull/1181
---
cc tracking issue #69098
r? `@ghost`
Correctly handle aggregates in DataflowConstProp
The previous implementation from https://github.com/rust-lang/rust/pull/107411 flooded the target of an aggregate assignment with `Bottom`, corresponding to the `deinit` that the interpreter does.
As a consequence, when assigning `target = Enum::Variant#i(...)` all the `(target as Variant#j)` were at `Bottom` while they should have been `Top`.
This PR replaces that flooding with `Top`.
Aside from that, it corrects a second bug where the wrong place would be used to assign to enum variant fields, resulting in nothing happening.
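A hedged illustration of the kind of assignment involved (a made-up example, not the reproduction from #108166):
```rust
#[allow(dead_code)]
enum E { A(u8), B(u8) }

fn f() -> u8 {
    let target = E::A(1);
    // After `target = E::A(1)`, the fields of the *other* variant,
    // `(target as B).0`, must be treated as unknown (Top), not as
    // unreachable (Bottom).
    match target {
        E::A(x) => x,
        E::B(x) => x,
    }
}

fn main() {
    assert_eq!(f(), 1);
}
```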
Fixes https://github.com/rust-lang/rust/issues/108166