Add cross-language LLVM CFI support to the Rust compiler
This PR adds cross-language LLVM Control Flow Integrity (CFI) support to the Rust compiler by adding the `-Zsanitizer-cfi-normalize-integers` option to be used with Clang `-fsanitize-cfi-icall-normalize-integers` for normalizing integer types (see https://reviews.llvm.org/D139395).
It provides forward-edge control flow protection for "mixed binaries" (i.e., for when C- or C++-compiled code and Rust-compiled code share the same virtual address space). For more information about LLVM CFI and cross-language LLVM CFI support for the Rust compiler, see the design document in the tracking issue #89653.
Cross-language LLVM CFI can be enabled with `-Zsanitizer=cfi` and `-Zsanitizer-cfi-normalize-integers`, and requires proper (i.e., non-rustc) LTO (i.e., `-Clinker-plugin-lto`).
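As a rough sketch of how the pieces fit together (the file layout and exact clang invocation are illustrative, not from this PR, and a given toolchain may need extra flags such as `-fuse-ld=lld`):

```rust
// lib.rs -- Rust side of a hypothetical mixed binary, built along the
// lines of:
//   rustc --crate-type=staticlib -Clinker-plugin-lto \
//         -Zsanitizer=cfi -Zsanitizer-cfi-normalize-integers lib.rs
// with the C side compiled as:
//   clang -flto -fsanitize=cfi -fsanitize-cfi-icall-normalize-integers ...
//
// With integer normalization, Rust's `i32` and C's `int` encode to the
// same CFI type id, so an indirect call to this function from C passes
// the forward-edge check instead of trapping.
#[no_mangle]
pub extern "C" fn add_one(x: i32) -> i32 {
    x + 1
}
```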
Thank you again, `@bjorn3`, `@nikic`, `@samitolvanen`, and the Rust community for all the help!
Improve niche placement by trying two strategies and picking the better result
Fixes #104807
Fixes #105371
Determining which sort order is better requires calculating the struct size (so we can calculate the niche offset), but that in turn depends on the field order, which is only known after sorting. The simple way to resolve this circularity is to run the whole layout computation twice and pick the better result.
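A self-contained toy of that two-pass shape (everything here is illustrative; the real logic lives in rustc's layout code):

```rust
// Toy model: lay out (size, align) fields in order, padding each field
// up to its alignment, and keep whichever candidate order is smaller.
fn toy_size(fields: &[(u64, u64)]) -> u64 {
    let mut offset = 0;
    for &(size, align) in fields {
        offset = (offset + align - 1) / align * align; // round up to align
        offset += size;
    }
    offset
}

fn main() {
    // Two sort strategies over the same fields, e.g. niche-field-first
    // vs. niche-field-last. The size (and thus the niche offset) is only
    // known after laying each candidate out, so compute both and compare.
    let strategy_a = [(1u64, 1u64), (8, 8), (4, 4)];
    let strategy_b = [(8u64, 8u64), (4, 4), (1, 1)];
    let (a, b) = (toy_size(&strategy_a), toy_size(&strategy_b));
    let better = if a <= b { "a" } else { "b" };
    println!("sizes: a={a}, b={b}; picking strategy {better}");
}
```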
The 1st commit is just code motion; the meat is in the later ones.
Add `intrinsics::transmute_unchecked`
This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does, it just doesn't have the fancy checking in `hir_typeck`.
Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested for dependently-sized array-vector conversions.
It also simplifies a couple places in `core`.
See also https://github.com/rust-lang/rust/pull/108442#issuecomment-1474777273, where `CastKind::Transmute` was added with exactly these semantics, before the lang meeting (which I wasn't in) independently expressed interest in it.
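A hedged usage sketch (nightly-only; the wrapper type here is made up) of the kind of dependently-sized conversion that `mem::transmute` rejects:

```rust
#![feature(core_intrinsics)]

#[repr(transparent)]
struct Vector<T, const N: usize>([T; N]);

// `mem::transmute` refuses this: it can't prove `[T; N]` and
// `Vector<T, N>` have the same size for all `N` at type-check time.
// `transmute_unchecked` compiles and leaves that obligation to the caller.
unsafe fn from_array<T, const N: usize>(arr: [T; N]) -> Vector<T, N> {
    // SAFETY: `Vector<T, N>` is `repr(transparent)` over `[T; N]`.
    unsafe { std::intrinsics::transmute_unchecked(arr) }
}

fn main() {
    let v = unsafe { from_array([1u8, 2, 3]) };
    assert_eq!(v.0, [1, 2, 3]);
}
```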
ci: add a runner for vanilla LLVM 16
Like #107044, this will let us track compatibility with LLVM 16 going
forward, especially after we eventually upgrade our own LLVM to the next
version.
This also drops `tidy` here and in `x86_64-gnu-llvm-15`, syncing with
that change in #106085.
tests: adapt for LLVM change 5b386b864c7619897c51a1da97d78f1cf6f3eff6
The above-mentioned change modified the output of `thread-local.rs` by changing some variable names. Rather than assume things get put in `%0`, we capture the variable so the test passes with both the old and new versions.
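Illustrative shape of the fix (not the actual contents of `thread-local.rs`): instead of pinning LLVM's autogenerated `%0`, the FileCheck directive captures whatever name the value gets and reuses it.

```rust
// Before: the test hard-coded LLVM's numbering, e.g. `// CHECK: ret i32 %0`.
// After: capture the name, so either old or new LLVM output matches:

// CHECK-LABEL: @read
// CHECK: [[RET:%.+]] = load i32
// CHECK: ret i32 [[RET]]
#[no_mangle]
pub fn read(p: &i32) -> i32 {
    *p
}
```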
Permit MIR inlining without #[inline]
I noticed that there are at least a handful of portable-simd functions that have no `#[inline]` but compile to an assign + return.
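Something of this hypothetical shape, for instance: no `#[inline]`, but the MIR is just an assignment plus a return, well under any reasonable inlining threshold.

```rust
// A newtype constructor like this compiles to "assign, then return";
// previously it was skipped by MIR inlining solely for lacking #[inline].
pub struct Mask(u32);

pub fn from_bits(bits: u32) -> Mask {
    Mask(bits)
}
```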
I locally benchmarked inlining thresholds between 0 and 50 in increments of 5, and 50 seems to be the best. Interesting. That didn't include check builds though, ~~maybe perf will have something to say about that~~.
Perf has little useful to say about this. We generally regress all the check builds, as best as I can tell, due to a number of small codegen changes in a particular hot function in the compiler. Probably this is because we've nudged the inlining outcomes all over, and uses of `#[inline(always)]`/`#[inline(never)]` might need to be adjusted.
Preserve argument indexes when inlining MIR
We store argument indexes on VarDebugInfo. Unlike the previous method of relying on the variable index to know whether a variable is an argument, this survives MIR inlining.
We also no longer check if `var.source_info.scope` is the outermost scope. When a function gets inlined, the arguments to the inner function will no longer be in the outermost scope. What we care about though is whether they were in the outermost scope prior to inlining, which we know by whether we assigned an argument index.
Fixes #83217
I considered using `Option<NonZeroU16>` instead of `Option<u16>` to store the index. I didn't because `TypeFoldable` isn't implemented for `NonZeroU16`, and because it looks like, due to padding, it currently wouldn't make any difference. But I indexed from 1 anyway because (a) it will make it easier if it later becomes worthwhile to use a `NonZeroU16`, and (b) the arguments were previously indexed from 1, so it made for a smaller change.
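A self-contained toy of the idea (names here are illustrative, not the compiler's actual types):

```rust
#[derive(Debug)]
struct VarDebugInfo {
    name: &'static str,
    /// 1-indexed argument number; `None` for ordinary locals. Recorded
    /// up front, so it survives MIR inlining, unlike inferring
    /// argument-ness from "is this variable in the outermost scope?".
    argument_index: Option<u16>,
}

fn is_argument(info: &VarDebugInfo) -> bool {
    info.argument_index.is_some()
}

fn main() {
    let arg = VarDebugInfo { name: "x", argument_index: Some(1) };
    let local = VarDebugInfo { name: "tmp", argument_index: None };
    assert!(is_argument(&arg));
    assert!(!is_argument(&local));
}
```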
This is my first PR on rust-lang/rust, so apologies if I've gotten anything not quite right.
These don't optimize with debug assertions. For one of them, this
is due to the new alignment checks; for the other, I'm not sure
what specifically blocks it.
Use SipHash-1-3 instead of SipHash-2-4 for StableHasher
Noticed this, and it seems easy and likely a perf win. IIUC we don't need DDoS resistance (just collision resistance), so ideally we'd have an even faster hash, but it's hard to beat this SipHash impl here, since it's been so highly tuned for the interface.
It wouldn't surprise me if there's some subtle reason changing this sucks, as it's so obvious it seems likely to have been tried already. Still, SipHash-1-3 seems to provide the guarantees StableHasher needs (and seemingly more), and is clearly less work. So it's worth a shot.
Not fully tested locally.
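For context, an illustration of what the numbers mean (this is the SipHash family's general structure, not the compiler's actual implementation): SipHash-c-d runs `c` rounds of the same permutation per 8-byte message word and `d` rounds at finalization, so 1-3 roughly halves the per-word work of 2-4.

```rust
// One SIPROUND permutation over the four 64-bit lanes of SipHash state.
fn sipround(v: &mut [u64; 4]) {
    v[0] = v[0].wrapping_add(v[1]); v[1] = v[1].rotate_left(13);
    v[1] ^= v[0]; v[0] = v[0].rotate_left(32);
    v[2] = v[2].wrapping_add(v[3]); v[3] = v[3].rotate_left(16);
    v[3] ^= v[2];
    v[0] = v[0].wrapping_add(v[3]); v[3] = v[3].rotate_left(21);
    v[3] ^= v[0];
    v[2] = v[2].wrapping_add(v[1]); v[1] = v[1].rotate_left(17);
    v[1] ^= v[2]; v[2] = v[2].rotate_left(32);
}

// SipHash-2-4 uses c_rounds = 2 (and 4 at finalization);
// SipHash-1-3 uses c_rounds = 1 (and 3 at finalization) -- one round per
// message word instead of two, which is where the speedup comes from.
fn compress(v: &mut [u64; 4], word: u64, c_rounds: u32) {
    v[3] ^= word;
    for _ in 0..c_rounds {
        sipround(v);
    }
    v[0] ^= word;
}

fn main() {
    // SipHash's standard initial state constants, pre-key-mixing.
    let mut v = [0x736f6d6570736575, 0x646f72616e646f6d,
                 0x6c7967656e657261, 0x7465646279746573];
    compress(&mut v, 0xdead_beef, 1); // SipHash-1-3 style compression
    println!("{v:016x?}");
}
```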
Validate `ignore` and `only` compiletest directives, and add human-readable ignore reasons
This PR adds strict validation for the `ignore` and `only` compiletest directives, failing if an unknown value is provided to them. Doing so uncovered 79 tests in `tests/ui` that had invalid directives, so this PR also fixes them.
Finally, this PR adds human-readable ignore reasons when tests are ignored due to `ignore` or `only` directives, like *"only executed when the architecture is aarch64"* or *"ignored when the operating system is windows"*. This was the original reason why I started working on this PR and #108659, as we need both of them for Ferrocene.
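For example, a test header using these directives (the file is hypothetical; the directive names are real compiletest syntax):

```rust
// Directives validated by this change; a misspelled value such as
// `only-aarch65` now fails validation instead of being silently accepted.

// ignore-windows
// only-aarch64

fn main() {}
```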
The PR is a draft because the code is extremely inefficient: it calls `rustc --print=cfg --target $target` for every rustc target (to gather the list of allowed ignore values), which on my system takes between 4s and 5s, and performs a lot of allocations of constant values. I'll fix both of them in the coming days.
r? `@ehuss`
Allow `transmute`s to produce `OperandValue`s instead of needing `alloca`s
LLVM can usually optimize these away, but especially for things like transmutes of newtypes it's silly to generate the `alloca`+`store`+`load` at all when it's actually a nop at the LLVM level.
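An illustrative case (the type is made up): a newtype transmute that is a no-op at the LLVM level, yet previously still went through a stack slot.

```rust
#[repr(transparent)]
pub struct Wrapper(u32);

// Previously codegen spilled `x` through an `alloca`+`store`+`load`
// sequence that LLVM then had to clean up; with this change the operand
// value is passed through directly.
pub fn wrap(x: u32) -> Wrapper {
    // SAFETY: `Wrapper` is `repr(transparent)` over `u32`.
    unsafe { std::mem::transmute(x) }
}
```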