Ignore allocation bytes in some mir-opt tests
This adds `rustc -Zdump-mir-exclude-alloc-bytes` to skip writing allocation bytes in MIR dumps, and applies it to tests that were failing on s390x due to its big-endian byte order.
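For example, a mir-opt test can opt in with a compile-flags directive (directive syntax assumed from current compiletest conventions; the test body is illustrative):
```rust
//@ compile-flags: -Zdump-mir-exclude-alloc-bytes

// With the flag, the MIR dump still shows the allocation's structure
// but omits its raw bytes, so the expected output no longer differs
// between little- and big-endian targets such as x86_64 and s390x.
static FOO: &[u16] = &[1, 2, 3];

fn main() {}
```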
Fixes #126261
In #126578 we ended up with more binary size increases than expected.
This change attempts to avoid inlining large things into small things, preventing that kind of increase in cases where top-down inlining will still be able to do that inlining later.
Remove more `PtrToPtr` casts in GVN
This addresses two things I noticed in MIR:
1. `NonNull::<T>::eq` does `(a as *mut T) == (b as *mut T)`, but it could just compare the `*const T`s directly. This removes `PtrToPtr` casts that appear on both sides of a pointer comparison, so long as they're not fat-to-thin casts (see the sketch after this list).
2. `NonNull::<T>::addr` does `transmute::<_, usize>(p as *const ())`, but so long as `T: Thin` that cast doesn't do anything, and thus we can directly transmute the `*const T` instead.
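As a concrete illustration of point 1, here's a minimal sketch (not the actual `NonNull` source) of the pattern whose casts GVN can now remove:
```rust
// Both operands of the comparison go through a `PtrToPtr` cast first.
// After this change, GVN compares the original `*const T`s directly,
// as long as neither cast is fat-to-thin.
fn eq_via_mut_casts<T>(a: *const T, b: *const T) -> bool {
    (a as *mut T) == (b as *mut T) // reduces to `a == b` in MIR
}
```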
r? mir-opt
`PtrMetadata` doesn't care about `*const`/`*mut`/`&`/`&mut`, so GVN away those casts in its argument.
This includes updating MIR to allow calling `PtrMetadata` on references too, not just raw pointers. That means that `[T]::len` can be just `_0 = PtrMetadata(_1)`, for example.
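For example, a function like the following sketch now lowers to a single `PtrMetadata` statement on the reference, with no raw-pointer cast first:
```rust
// Runtime MIR for this body becomes essentially `_0 = PtrMetadata(_1)`:
// the slice length is the reference's metadata, read directly.
pub fn slice_len<T>(s: &[T]) -> usize {
    s.len()
}
```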
Apparently MIR borrowck cares about at least one of these for checking variance.
In runtime MIR, though, there's no need for them as `PtrToPtr` does the same thing.
(Banning them simplifies passes like GVN that no longer need to handle multiple cast possibilities.)
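For illustration, assuming `MutToConstPointer`-style coercions are among the cast kinds in question, the conversion below needs only a plain `PtrToPtr` cast in runtime MIR:
```rust
// Dropping mutability from a raw pointer changes only the type, not
// the value, so runtime MIR can represent this as `PtrToPtr` instead
// of a dedicated cast kind.
fn demote<T>(p: *mut T) -> *const T {
    p as *const T
}
```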
The only non-obvious changes:
- `building/storage_live_dead_in_statics.rs` has a `#[rustfmt::skip]`
  attribute to avoid reformatting a table of data (see the sketch after this list).
- Two `.mir` files have slight changes involving line numbers.
- In `unusual_item_types.rs` an `EMIT_MIR` annotation is moved to
outside a function, which is the usual spot, because `tidy` complains
if such a comment is indented.
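A minimal example of the first item's technique (illustrative, not the actual test's contents):
```rust
// `#[rustfmt::skip]` keeps the hand-aligned table from being reflowed.
#[rustfmt::skip]
static TABLE: [(u32, u32); 3] = [
    (0,   1),
    (10,  2),
    (200, 3),
];
```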
The commit also tweaks the comments in `rustfmt.toml`.
Enable DestinationPropagation by default.
~~Based on https://github.com/rust-lang/rust/pull/115291.~~
This PR proposes to enable the destination propagation pass by default.
This pass is meant to reduce the number of copies present in MIR.
At the same time, this PR removes the `RenameReturnPlace` pass, as it is currently unsound.
`DestinationPropagation` is not limited to `_0`, but does not handle borrowed locals.
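A minimal sketch of the kind of copy the pass eliminates (illustrative; the pass operates on MIR locals, not source code):
```rust
// Without DestinationPropagation, MIR materializes `tmp` as its own
// local and then copies it into the return place; the pass merges the
// two locals so the copy disappears.
fn build() -> [u8; 1024] {
    let tmp = [0u8; 1024];
    tmp
}
```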
Casting to `*const ()` or `*mut ()` just bloats the MIR, so let's not.
If ACP#362 goes through we can keep calling `ptr::from_raw_parts(_mut)` in these also without the cast, but that hasn't had any libs-api attention yet, so I'm not waiting on it.
Account for immutably borrowed locals in MIR copy-prop and GVN
For the most part, we consider that immutably borrowed `Freeze` locals still fulfill SSA conditions. As the borrow is immutable, any use of the local will have the value given by the single assignment, and there can be no surprise.
This allows copy-prop to merge a non-borrowed local with a borrowed local. We chose to keep copy-class heads unborrowed, as those may be easier to optimize in later passes.
This also allows GVN to propagate the value behind an immutable borrow. If an SSA local is borrowed, dereferencing that borrow is equivalent to copying the local's value: re-executing the assignment between the borrow and the dereference would be UB.
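A minimal sketch of what this enables (hypothetical example, not from the PR):
```rust
// `x` is assigned exactly once and only borrowed immutably, and
// `i32: Freeze`, so `x` still counts as SSA. GVN can then treat `*r`
// as a copy of `x` and fold the whole body down to a constant.
fn f() -> i32 {
    let x = 5;
    let r = &x; // immutable borrow of an SSA local
    *r + x      // `*r` is known to equal `x`
}
```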
r? `@ghost` for perf
For things with easily pre-checked overflow conditions -- shifts and unsigned subtraction -- write the checked methods in such a way that we stop emitting wrapping versions of them.
For example, today <https://rust.godbolt.org/z/qM9YK8Txb> neither
```rust
a.checked_sub(b).unwrap()
```
nor
```rust
a.checked_sub(b).unwrap_unchecked()
```
actually optimizes to `sub nuw`. After this PR they do.
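The rewrite's shape is roughly the following sketch (illustrative, not the exact libcore code): test the cheap precondition first, then perform the operation in a way the backend knows cannot overflow.
```rust
// Because the success path is guarded by `a >= b`, LLVM can lower the
// subtraction as `sub nuw` instead of a wrapping sub plus overflow flag.
pub fn checked_sub_sketch(a: u32, b: u32) -> Option<u32> {
    if a >= b {
        // SAFETY: `a >= b`, so `a - b` cannot overflow.
        Some(unsafe { a.unchecked_sub(b) })
    } else {
        None
    }
}
```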
Add `Ord::cmp` for primitives as a `BinOp` in MIR
Update: most of this OP was written months ago. See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for the recent developments that made it ready for review.
---
There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches. Those differences are irrelevant at the Rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level (see the sketch after this list):
1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.
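For illustration, with `BinOp::Cmp` a source-level three-way comparison like the following lowers to a single MIR operation rather than a branch tree (sketch; exact MIR spelling aside):
```rust
use std::cmp::Ordering;

// The body becomes essentially one `Cmp` binop in MIR; how to lower
// that to machine code is now the backend's choice.
fn three_way(a: i32, b: i32) -> Ordering {
    a.cmp(&b)
}
```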
Having extremely careful `if` ordering to μoptimize resource usage on Broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic. Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical. Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)? But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)? And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers. Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.
As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR. The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks. Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues. (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)
---
r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
This saves some debug and scope metadata in every single function that calls it.
Normally that wouldn't be worth it, but with the derives there are *so* many of these.
refactor check_{lang,library}_ub: use a single intrinsic
This enacts the plan I laid out [here](https://github.com/rust-lang/rust/pull/122282#issuecomment-1996917998): use a single intrinsic, called `ub_checks` (in anticipation of https://github.com/rust-lang/compiler-team/issues/725), that just exposes the value of `debug_assertions` (consistently implemented in both codegen and the interpreter). Put the language vs library UB logic into the library.
This makes it easier to do something like https://github.com/rust-lang/rust/pull/122282 in the future: that just slightly alters the semantics of `ub_checks` (making it more approximating when crates built with different flags are mixed), but it no longer affects whether these checks can happen in Miri or at compile-time.
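A minimal sketch of the scheme, with a plain function standing in for the real intrinsic (names and shapes here are illustrative, not the actual libcore source):
```rust
// Stand-in for the `ub_checks` intrinsic: it only reports whether UB
// checks are enabled, mirroring `debug_assertions`.
fn ub_checks() -> bool {
    cfg!(debug_assertions)
}

// The language-vs-library distinction lives in the library: callers
// decide *what* to check; the intrinsic only gates *whether* to check.
fn assert_library_precondition(cond: bool, msg: &str) {
    if ub_checks() && !cond {
        panic!("unsafe precondition violated: {msg}");
    }
}
```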
The first commit just moves things around; I don't think these macros and functions belong in `intrinsics.rs` as they are not intrinsics.
r? `@saethlin`