Add offset_of! macro (RFC 3308)
Implements https://github.com/rust-lang/rfcs/pull/3308 (tracking issue #106655) by adding the built-in macro `core::mem::offset_of`. Two of the RFC's future possibilities are also implemented:
* Nested field accesses (without array indexing)
* DST support (for `Sized` fields)
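A minimal usage sketch, assuming the dot syntax for nested fields from the RFC; `Inner`/`Outer` are illustrative names, and the feature gate reflects the macro's nightly status at the time of this PR:
```rust
#![feature(offset_of)] // nightly feature gate at the time of this PR

use core::mem::offset_of;

#[repr(C)]
struct Inner {
    x: u16,
    y: u32,
}

#[repr(C)]
struct Outer {
    a: u8,
    inner: Inner,
}

fn main() {
    // Offset of a direct field: the first field of a #[repr(C)] struct is at 0.
    assert_eq!(offset_of!(Outer, a), 0);
    // Nested field access, one of the future possibilities implemented here.
    println!("offset of Outer.inner.y: {}", offset_of!(Outer, inner.y));
}
```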
I wrote this a few months ago, before the RFC merged. Now that it's merged, I decided to rebase and finish it.
cc `@thomcc` (RFC author)
Deduplicate unreachable blocks, for real this time
In https://github.com/rust-lang/rust/pull/106428 (in particular 41eda69516) we noticed that inlining `unreachable_unchecked` can produce duplicate unreachable blocks. So we improved two MIR optimizations: `SimplifyCfg` gained a simplification that deduplicates unreachable blocks, and `InstCombine` gained a combiner that deduplicates switch targets pointing at the same block. The problem is that this change doesn't actually work.
Our current pass order is
```
SimplifyCfg (does nothing relevant to this situation)
Inline (produces multiple unreachable blocks)
InstCombine (doesn't do anything here, oops)
SimplifyCfg (produces the duplicate SwitchTargets that InstCombine is looking for)
```
In this PR, I have factored the relevant function out of `InstCombine` and placed it inside the simplification that produces the case it is looking for. This should ensure that it runs in the scenario it was designed for.
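A hedged sketch of the kind of code affected (the function is illustrative, not from the PR): after `unreachable_unchecked` is inlined into both impossible arms, the switch ends up pointing at two separate `unreachable` blocks that should be deduplicated and then merged into a single switch target.
```rust
pub unsafe fn quarter(x: u32) -> u32 {
    // SAFETY (for illustration): the caller promises `x` is a multiple of 4.
    match x % 4 {
        0 => x / 4,
        1 => std::hint::unreachable_unchecked(),
        _ => std::hint::unreachable_unchecked(),
    }
}
```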
Fixes https://github.com/rust-lang/rust/issues/110551
r? `@cjgillot`
Refactor unwind in MIR
This changes the representation of unwinding in MIR from the current `Option<BasicBlock>` into
```rust
enum UnwindAction {
    Continue,
    Cleanup(BasicBlock),
    Unreachable,
    Terminate,
}
```
cc `@JakobDegen` `@RalfJung` `@Amanieu`
Check pattern refutability on THIR
The current `check_match` query is based on HIR, but partially re-lowers HIR into THIR.
This PR proposes to use the results of the `thir_body` query to check matches, instead of re-building THIR.
Most of the diagnostic changes are spans getting shorter, or commas/semicolons not getting removed.
This PR degrades the diagnostic for confusing constants in patterns (`let A = foo()` where `A` resolves to a `const A` somewhere): it does not point to the definition of `const A` any more.
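A hedged sketch of the "confusing constant in a pattern" case mentioned above (the names are illustrative):
```rust
const A: i32 = 1;

fn foo() -> i32 {
    2
}

fn main() {
    // `A` is not a fresh binding here: it resolves to the constant above, so
    // this `let` is a refutable pattern and is rejected by the compiler.
    let A = foo(); // error[E0005]: refutable pattern in local binding
}
```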
Insert alignment checks for pointer dereferences when debug assertions are enabled
Closes https://github.com/rust-lang/rust/issues/54915
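A hedged illustration of the kind of bug the new check catches (the code below is deliberately UB; the variable names are illustrative):
```rust
fn main() {
    let data = [0u32; 2];
    let base = data.as_ptr() as *const u8;
    // `data` is 4-aligned, so one byte past its start is misaligned for a u32.
    let misaligned = unsafe { base.add(1) }.cast::<u32>();
    // This dereference is UB. With debug assertions enabled, the inserted
    // alignment check panics here instead of letting the misaligned read
    // go unnoticed.
    let _x = unsafe { *misaligned };
}
```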
- [x] Jake tells me this sounds like a place to use `MirPatch`, but I can't figure out how to insert a new basic block with a new terminator in the middle of an existing basic block using `MirPatch`. (If nobody else backs up this point, I'm checking this off as "not actually a good idea", because the code looks pretty clean to me after rearranging it a bit.)
- [x] Using `CastKind::PointerExposeAddress` is definitely wrong, we don't want to expose. Calling a function to get the pointer address seems quite excessive. ~I'll see if I can add a new `CastKind`.~ `CastKind::Transmute` to the rescue!
- [x] Implement a more helpful panic message like slice bounds checking.
r? `@oli-obk`
Use span of placeholders in format_args!() expansion.
`format_args!("{}", x)` expands to something that contains `Argument::new_display(&x)`. That entire expression was generated with the span of `x`.
After this PR, `&x` uses the span of `x`, but the `new_display` call uses the span of the `{}` placeholder within the format string. If an implicitly captured argument was used like in `format_args!("{x}")`, both use the span of the `{x}` placeholder.
This fixes https://github.com/rust-lang/rust/issues/109576 and also allows for more improvements to similar diagnostics in the future, since the use of `x` can now be traced to the exact `{}` placeholder that required it to be `Display` (or `Debug`, etc.).
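A hedged example of a diagnostic this affects (`S` is an illustrative type with no `Display` impl); with this change the error can point at the exact placeholder rather than at one large generated expression:
```rust
struct S;

fn main() {
    let x = S;
    // error[E0277]: `S` doesn't implement `std::fmt::Display`
    // The error can now point at the `{}` placeholder and at `x`.
    println!("{}", x);
}
```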
Thanks to the combination of #108246 and #108442, it could already remove identity transmutes.
With this PR, it can also simplify them to `IntToInt` casts, `Discriminant` reads, or `Field` projections.
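A hedged sketch of transmutes that can now be simplified (the functions and the `Wrapper` type are illustrative, not taken from the PR):
```rust
use std::cmp::Ordering;
use std::mem::transmute;

pub struct Wrapper(u64);

pub fn int_to_int(x: u32) -> i32 {
    // Same-width integer transmute: can become an `IntToInt` cast.
    unsafe { transmute(x) }
}

pub fn enum_to_discriminant(x: Ordering) -> i8 {
    // Fieldless #[repr(i8)] enum to its discriminant type: can become a
    // `Discriminant` read.
    unsafe { transmute(x) }
}

pub fn struct_to_field(x: Wrapper) -> u64 {
    // Transmute to the type of the only field: can become a `Field` projection.
    unsafe { transmute(x) }
}
```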
Permit the MIR inliner to inline diverging functions
The heuristic removed here prevents inlining of `hint::unreachable_unchecked`, which in turn makes `Option/Result::unwrap_unchecked` a bad inlining candidate. I looked through the changes to `core`, `alloc`, `std`, and `hashbrown` by hand and they all seem reasonable. Let's see how this looks in perf...
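A hedged illustration of the chain this unblocks (the function is illustrative): `unwrap_unchecked` calls the diverging `unreachable_unchecked` in its `None` arm, so refusing to inline diverging functions also made `unwrap_unchecked` a poor inlining candidate.
```rust
pub fn first(xs: &[u32]) -> u32 {
    // SAFETY (for illustration): assumes `xs` is non-empty.
    unsafe { *xs.first().unwrap_unchecked() }
}
```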
---
Based on rustc-perf it looks like this regresses ctfe-stress, and the cachegrind diff indicates that this regression is in `InterpCx::statement`. I don't know how to do any deeper analysis because that function is _enormous_ in the try toolchain, which has no debuginfo in it. And a local build produces significantly different codegen for that function, even with LTO.
Simpler checked shifts in MIR building
Using masking to check unsigned shift amounts is overcomplicated; comparing the shift amount directly saves a statement and a temporary, and is much easier for a human to read. Shifting by an unsigned amount is also the canonical case -- notably, all the library shifting methods (the ones that don't support every type) take shift RHSs as `u32` -- so we might as well make that case simpler since it's easy to do so.
This PR also changes *signed* shift amounts to `IntToInt` casts and then uses the same check as for unsigned. The bit-masking is a nice trick, but for example LLVM actually canonicalizes it to an unsigned comparison anyway <https://rust.godbolt.org/z/8h59fMGT4> so I don't think it's worth the effort and the extra `Constant`. (If MIR's `assert` was `assert_nz` then the masking might make sense, but when the `!=` uses another statement I think the comparison is better.)
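A hedged sketch, in ordinary Rust rather than MIR, of the two equivalent overflow checks for shifting a `u8` by `s` (the function names and types are illustrative):
```rust
pub fn overflows_masking(s: u32) -> bool {
    // Old form: the shift overflows if any bit outside 0..8 is set.
    (s & !(u8::BITS - 1)) != 0
}

pub fn overflows_compare(s: u32) -> bool {
    // New form: a direct comparison against the bit width.
    s >= u8::BITS
}
```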
To review, I suggest looking at 2ee0468c49 first -- that's the interesting code change and has a MIR diff.
My favourite part of the diff:
```diff
- _20 = BitAnd(_19, const 340282366920938463463374607431768211448_u128); // scope 0 at $DIR/shifts.rs:+2:34: +2:44
- _21 = Ne(move _20, const 0_u128); // scope 0 at $DIR/shifts.rs:+2:34: +2:44
- assert(!move _21, "attempt to shift right by `{}`, which would overflow", _19) -> [success: bb3, unwind: bb7]; // scope 0 at $DIR/shifts.rs:+2:34: +2:44
+ _18 = Lt(_17, const 8_u128); // scope 0 at $DIR/shifts.rs:+2:34: +2:44
+ assert(move _18, "attempt to shift right by `{}`, which would overflow", _17) -> [success: bb3, unwind: bb7]; // scope 0 at $DIR/shifts.rs:+2:34: +2:44
```
Updates `interpret`, `codegen_ssa`, and `codegen_cranelift` to consume the new cast instead of the intrinsic.
Includes `CastTransmute` for custom MIR building, to be able to test the extra UB.
Custom MIR: Allow optional RET type annotation
This currently doesn't compile because the type of `RET` is inferred, which fails if `RET` is a composite type whose fields are initialised separately.
```rust
#![feature(custom_mir, core_intrinsics)]
extern crate core;
use core::intrinsics::mir::*;
#[custom_mir(dialect = "runtime", phase = "optimized")]
fn fn0() -> (i32, bool) {
    mir! ({
        RET.0 = 0;
        RET.1 = true;
        Return()
    })
}
```
```
error[E0282]: type annotations needed
--> src/lib.rs:8:9
|
8 | RET.0 = 0;
| ^^^ cannot infer type
For more information about this error, try `rustc --explain E0282`.
```
This PR allows the user to manually specify the return type with `type RET = ...;` if required:
```rust
#[custom_mir(dialect = "runtime", phase = "optimized")]
fn fn0() -> (i32, bool) {
    mir! (
        type RET = (i32, bool);
        {
            RET.0 = 0;
            RET.1 = true;
            Return()
        }
    )
}
```
The syntax is not optimal; I'm happy to see other suggestions. Ideally I wanted it to be a normal type annotation like `let RET: ...;`, but this runs into the "multiple parsing options" error during macro expansion, as it can also be parsed as a normal `let` declaration.
r? `@oli-obk` or `@tmiasko` or `@JakobDegen`
Move Option::as_slice to intrinsic
`@scottmcm` suggested on #109095 that I use a direct approach of unpacking the operation in MIR lowering, so here's the implementation.
cc `@nikic` as this should hopefully unblock #107224 (though perhaps other changes to the prior implementation, which I left for bootstrapping, are needed).
Use index based drop loop for slices and arrays
Instead of building two kinds of drop pair loops, of which only one will eventually be used at runtime in a given monomorphization, always use an index-based loop.
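A hedged, purely illustrative sketch (written as ordinary Rust, not MIR) of what an index-based drop loop for an array or slice does:
```rust
use std::ptr;

/// Drops `len` initialized elements starting at `ptr`, by index.
unsafe fn drop_elements_in_place<T>(ptr: *mut T, len: usize) {
    let mut i = 0;
    while i < len {
        // SAFETY: the caller guarantees `ptr..ptr + len` is valid and
        // initialized; each element is dropped exactly once, in order.
        unsafe { ptr::drop_in_place(ptr.add(i)) };
        i += 1;
    }
}
```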
Wrap the whole LocalInfo in ClearCrossCrate.
MIR contains a lot of information about locals. The primary purpose of this information is to improve the quality of borrowck diagnostics.
This PR aims to drop this information after MIR analyses are finished, i.e. starting from post-cleanup runtime MIR.
Implement checked Shl/Shr at MIR building.
This does not require any special handling by codegen backends, as the overflow behaviour is entirely determined by the rhs (shift amount).
This allows MIR ConstProp to remove the overflow check for constant shifts.
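A hedged illustration of the behaviour being implemented (the function is illustrative; the shift amount is passed as an argument so the check is not resolved at compile time):
```rust
fn shift_left(x: u8, s: u32) -> u8 {
    // With overflow checks enabled (e.g. debug builds), the MIR built here
    // asserts that `s` is in range and panics with an
    // "attempt to shift left ... overflow" message when it is not.
    x << s
}

fn main() {
    let _ok = shift_left(1, 3);
    let _panics = shift_left(1, 9); // panics under overflow checks
}
```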
~There is an existing difference in behaviour between cg_llvm and cg_clif (cc `@bjorn3`).
I took cg_llvm's behaviour as the reference: overflow if `rhs < 0 || rhs > number_of_bits_in_lhs_ty`.~
EDIT: `cg_llvm` and `cg_clif` implement the overflow check differently. This PR uses `cg_llvm`'s implementation based on a `BitAnd` instead of `cg_clif`'s one based on an unsigned comparison.