Update the minimum external LLVM to 13
With this change, we'll have stable support for LLVM 13 through 15 (pending release).
For reference, the previous increase to LLVM 12 was #90175.
r? `@nagisa`
Add support for generating unique profraw files by default when using `-C instrument-coverage`
Currently, enabling the rustc flag `-C instrument-coverage` instruments the given crate and by default uses the name `default.profraw` for any instrumented profile files generated while running a binary linked against this crate. As a result, when multiple instrumented binaries are executed, each run overwrites the previous profile, and only the last executable run contains actual coverage results.
This can be overridden by manually setting the environment variable `LLVM_PROFILE_FILE` to use a unique naming scheme.
This PR gives rustc a reasonable default to use when coverage instrumentation is enabled, similar to how the Rust compiler already treats generating these same `profraw` files when PGO is enabled.
The new naming scheme is set to `default_%m_%p.profraw` to ensure the uniqueness of each generated file, using [LLVM's special pattern strings](https://clang.llvm.org/docs/SourceBasedCodeCoverage.html#running-the-instrumented-program).
Today the compiler sets the default for PGO `profraw` files to `default_%m.profraw` to ensure a unique file for each run. The same can be done for the instrumented profile files generated via the `-C instrument-coverage` flag, which LLVM has API support for.
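For reference, here is a minimal sketch of driving an instrumented binary with an explicit override today (the binary path and pattern below are illustrative, not taken from this PR):

```rust
use std::process::Command;

fn main() {
    // Run a coverage-instrumented binary (hypothetical path) with an explicit
    // profile pattern: %m expands to the instrumented binary's signature and
    // %p to the process id, so repeated or concurrent runs do not overwrite
    // each other's `.profraw` files.
    let status = Command::new("./target/debug/my_instrumented_bin")
        .env("LLVM_PROFILE_FILE", "my_run_%m_%p.profraw")
        .status()
        .expect("failed to run instrumented binary");
    assert!(status.success());
}
```

With this PR, a similar `%m`/`%p`-based pattern becomes the default, so this manual override is no longer needed just to keep runs from clobbering each other.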
Linked Issue: https://github.com/rust-lang/rust/issues/100381
r? `@wesleywiser`
debuginfo: Generalize C++-like encoding for enums.
The updated encoding should be able to handle niche layouts where more than one variant has fields (as introduced in https://github.com/rust-lang/rust/pull/94075).
The new encoding is more uniform as there is no structural difference between direct-tag, niche-tag, and no-tag layouts anymore. The only difference between those cases is that the "dataful" variant in a niche-tag enum will have a `(start, end)` pair denoting the tag range instead of a single value.
The new encoding now also supports 128-bit tags, which occur in at least some standard library types. These tags are represented as `u64` pairs so that debuggers (which don't always have support for 128-bit integers) can reliably deal with them. The downside is that this adds quite a bit of complexity to the encoding and especially to the corresponding NatVis.
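For illustration, here is a sketch (not taken from this PR or its tests) of the kind of enum shape the generalized encoding has to describe; whether a given enum actually gets a niche layout is up to the layout algorithm:

```rust
// With the layout optimization from #94075, the discriminant can live in the
// unused bit patterns ("niche") of one variant's field even though several
// variants carry data. In the C++-like debuginfo encoding, the variant that
// owns the niche is the "dataful" one and is identified by a (start, end)
// range of tag values rather than a single tag value.
enum Request {
    Get { keep_alive: bool },  // bool's niche can hold the tag
    Put { id: u32, len: u16 }, // another variant with fields
    Ping,                      // fieldless variant
}
```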
The new encoding seems to increase the size of (x86_64-pc-windows-msvc) debuginfo by 10-15%. The size of binaries is not affected (release builds were built with `-Cdebuginfo=2`, numbers are in kilobytes):

EXE | before | after | relative
-- | -- | -- | --
cargo (debug) | 40453 | 40450 | +0%
ripgrep (debug) | 10275 | 10273 | +0%
cargo (release) | 16186 | 16185 | +0%
ripgrep (release) | 4727 | 4726 | +0%

PDB | before | after | relative
-- | -- | -- | --
cargo (debug) | 236524 | 261412 | +11%
ripgrep (debug) | 53140 | 59060 | +11%
cargo (release) | 148516 | 169620 | +14%
ripgrep (release) | 10676 | 11804 | +11%
Given that the new encoding is more general, this is to be expected. Only platforms using C++-like debuginfo are affected -- which currently is only `*-pc-windows-msvc`.
*TODO*
- [x] Properly update documentation
- [x] Add regression tests for new optimized enum layouts as introduced by #94075.
r? `@wesleywiser`
https://reviews.llvm.org/D120026 changed atomics on thumbv6m to
use libatomic, to ensure that atomic load/store are compatible with
atomic RMW/CAS. However, Rust wants to expose only load/store
without libcalls.
https://reviews.llvm.org/D130480 added support for this behind
the +atomics-32 target feature, so enable that feature.
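A sketch of the distinction on such a target (e.g. `thumbv6m-none-eabi` with `+atomics-32`): plain atomic loads and stores lower to native instructions, while RMW/CAS operations are not provided at all:

```rust
use core::sync::atomic::{AtomicU32, Ordering};

static READY: AtomicU32 = AtomicU32::new(0);

pub fn publish() {
    // Compiles to an ordinary store instruction; no libatomic call is emitted.
    READY.store(1, Ordering::Release);
}

pub fn is_ready() -> bool {
    // Likewise a plain load.
    READY.load(Ordering::Acquire) == 1
}

// RMW operations such as `fetch_add` or `compare_exchange` are not available
// on thumbv6m at all, since the target has no atomic CAS support.
```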
Introduce an ArchiveBuilderBuilder
This avoids monomorphizing all linker code for each codegen backend and will allow passing extra information to the archive builder from the codegen backend. I'm going to use this in https://github.com/rust-lang/rust/pull/97485 to pass the right function for extracting symbols from object files to a generic archive builder shared by cg_llvm, cg_clif, and cg_gcc.
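A minimal sketch of the shape of such an API (names and signatures here are illustrative, not the exact rustc traits):

```rust
use std::path::{Path, PathBuf};

// Each codegen backend implements these traits; the generic linker code below
// only ever sees trait objects, so it is compiled once instead of being
// monomorphized per backend, while backends can still customize archive
// building (e.g. how symbols are extracted from object files).
pub trait ArchiveBuilder {
    fn add_file(&mut self, file: &Path);
    fn build(self: Box<Self>, output: &Path) -> bool;
}

pub trait ArchiveBuilderBuilder {
    fn new_archive_builder(&self) -> Box<dyn ArchiveBuilder>;
}

pub fn create_archive(
    abb: &dyn ArchiveBuilderBuilder,
    inputs: &[PathBuf],
    output: &Path,
) -> bool {
    let mut builder = abb.new_archive_builder();
    for input in inputs {
        builder.add_file(input);
    }
    builder.build(output)
}
```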
codegen: use new {re,de,}allocator annotations in llvm
This obviates the patch that teaches LLVM internals about the
__rust_{re,de}alloc functions by putting annotations directly in the IR
for the optimizer.
The sole test change is required to anchor FileCheck to the body of the
`box_uninitialized` method, so it doesn't see the `allocalign` on
`__rust_alloc` and get mad about the string `alloca` showing up. Since I
was there anyway, I added some checks on the attributes to prove the
right attributes got set.
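For context, this is roughly the shape of the function that the `box_uninitialized` test compiles (reconstructed here, not copied verbatim from the test file):

```rust
use std::mem::MaybeUninit;

// The FileCheck assertions for this body check that no `memset` (and no stack
// `alloca` for the contents) is emitted; anchoring them to the function keeps
// the new `allocalign` attribute on `__rust_alloc` out of their view.
pub fn box_uninitialized() -> Box<MaybeUninit<usize>> {
    Box::new(MaybeUninit::uninit())
}
```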
r? `@nikic`
While we're here, we also emit allocator attributes on
__rust_alloc_zeroed. This should allow LLVM to perform more
optimizations for zeroed blocks, and probably fixes #90032. [This
comment](https://github.com/rust-lang/rust/issues/24194#issuecomment-308791157)
mentions "weird UB-like behaviour with bitvec iterators in
rustc_data_structures" so we may need to back this change out if things
go wrong.
The new test cases require LLVM 15, so we copy them into LLVM
14-supporting versions, which we can delete when we drop LLVM 14.
Enable raw-dylib for bin crates
Fixes #93842
When `raw-dylib` is used in a `bin` crate, we need to collect all of the `raw-dylib` functions, generate the import library and add that to the linker command line.
I also changed the tests so that 1) the C++ DLLs are created after the Rust DLLs, so there is no chance of accidentally using them in the Rust linking process, and 2) import libraries are no longer generated when building with MSVC.
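A sketch of what this enables in a `bin` crate (the DLL and function are just the usual Windows example; at the time of this PR the `raw-dylib` kind was still unstable, so a nightly feature gate may also be needed):

```rust
// rustc now collects the raw-dylib imports of the binary itself, generates the
// import library, and adds it to the linker command line, so no kernel32
// import library has to be supplied at build time.
#[cfg(windows)]
#[link(name = "kernel32", kind = "raw-dylib")]
extern "system" {
    fn GetCurrentProcessId() -> u32;
}

#[cfg(windows)]
fn main() {
    println!("pid = {}", unsafe { GetCurrentProcessId() });
}

#[cfg(not(windows))]
fn main() {}
```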
Add fine-grained LLVM CFI support to the Rust compiler
This PR improves the LLVM Control Flow Integrity (CFI) support in the Rust compiler by providing forward-edge control flow protection for Rust-compiled code only, aggregating function pointers in groups identified by their return and parameter types.
Forward-edge control flow protection for "mixed binaries" (i.e., when C- or C++-compiled code and Rust-compiled code share the same virtual address space) will be provided in later work as part of this project, by identifying C char and integer type uses at the time types are encoded (see Type metadata in the design document in the tracking issue https://github.com/rust-lang/rust/issues/89653).
LLVM CFI can be enabled with `-Zsanitizer=cfi` and requires LTO (i.e., `-Clto`).
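A sketch of what the instrumentation guards (compiled with `-Zsanitizer=cfi -Clto` on a nightly toolchain): every indirect call may only reach functions whose signature matches the call site's function-pointer type:

```rust
fn double(x: i32) -> i32 {
    x * 2
}

// The indirect call below is CFI-instrumented: `f` may only point at functions
// of type `fn(i32) -> i32`; a pointer transmuted from a function with a
// different signature would fail the check at runtime instead of being
// silently allowed.
fn call_indirect(f: fn(i32) -> i32, x: i32) -> i32 {
    f(x)
}

fn main() {
    println!("{}", call_indirect(double, 21));
}
```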
Thank you again, `@eddyb`, `@nagisa`, `@pcc`, and `@tmiasko` for all the help!
Add support for LLVM ShadowCallStack.
LLVM's ShadowCallStack provides backward-edge control flow integrity protection by using a separate shadow stack to store and retrieve a function's return address.
LLVM currently only supports this for AArch64 targets. The x18 register is used to hold the pointer to the shadow stack, and therefore this only works on ABIs which reserve x18. Further details are available in the [LLVM ShadowCallStack](https://clang.llvm.org/docs/ShadowCallStack.html) docs.
# Usage
`-Zsanitizer=shadow-call-stack`
# Comments/Caveats
* Currently only enabled for the aarch64-linux-android target
* Requires the platform to define a runtime to initialize the shadow stack, see the [LLVM docs](https://clang.llvm.org/docs/ShadowCallStack.html) for more detail.
make vtable pointers entirely opaque
This implements the scheme discussed in https://github.com/rust-lang/unsafe-code-guidelines/issues/338: vtable pointers should be considered entirely opaque and not even readable by Rust code, similar to function pointers.
- We have a new kind of `GlobalAlloc` that symbolically refers to a vtable.
- Miri uses that kind of allocation when generating a vtable.
- The codegen backends, upon encountering such an allocation, call `vtable_allocation` to obtain an actually dataful allocation for this vtable.
- We need new intrinsics to obtain the size and alignment from a vtable (for some `ptr::metadata` APIs), since direct accesses are UB now; see the sketch after this list.
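A sketch of the `ptr::metadata` APIs in question (requires a nightly compiler with the unstable `ptr_metadata` feature):

```rust
#![feature(ptr_metadata)]

use std::fmt::Debug;
use std::ptr;

fn main() {
    let value: &dyn Debug = &123u64;
    // The metadata of a trait-object pointer is its vtable pointer; size_of()
    // and align_of() are answered via the new vtable intrinsics rather than by
    // reading the vtable's memory, which Rust code may no longer do.
    let meta = ptr::metadata(value as *const dyn Debug);
    assert_eq!(meta.size_of(), std::mem::size_of::<u64>());
    assert_eq!(meta.align_of(), std::mem::align_of::<u64>());
}
```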
I had to touch quite a bit of code that I am not very familiar with, so some of this might not make much sense...
r? `@oli-obk`
Allow disabling the thinLTO buffer to support the lto-embed-bitcode lld feature
Hello
This change fixes issue https://github.com/rust-lang/rust/issues/84395, in which passing "-lto-embed-bitcode=optimized" to lld when linking Rust code via linker-plugin-lto doesn't produce the expected result.
Instead of emitting a single unified module into the `.llvmbc` section of the linked ELF, it emits multiple submodules.
This is caused by rustc emitting the BC modules after running LLVM's `createWriteThinLTOBitcodePass` pass,
which in turn triggers a thinLTO linkage and causes the said issue.
This patch adds a compiler flag (`-Cemit-thin-lto=<bool>`) to select between running `createWriteThinLTOBitcodePass` and `createBitcodeWriterPass`.
Note this pattern of selecting between those 2 passes is common inside of LLVM code.
The default is to match the old behavior.
Only compile #[used] as llvm.compiler.used for ELF targets
This returns `#[used]` to how it worked prior to the LLVM 13 update. The intention is not that this is a stable promise.
I'll add tests later today. The tests will test things that we don't actually promise, though.
It's a deliberately small patch, mostly comments. And assuming it's reviewed and lands in time, IMO it should at least be considered for uplifting to beta (so that it can be in 1.59), as the change broke many crates in the ecosystem, even if they are relying on behavior that is not guaranteed.
# Background
LLVM has two ways of preventing removal of an unused variable: `llvm.compiler.used`, which must be present in object files, but allows the linker to remove the value, and `llvm.used` which is supposed to apply to the linker as well, if possible.
Prior to LLVM 13, `llvm.used` and `llvm.compiler.used` were the same on ELF targets, although they were different elsewhere. Prior to our update to LLVM 13, we compiled `#[used]` using `llvm.used` unconditionally, even though we only ever promised behavior like `llvm.compiler.used`.
In LLVM 13, ELF targets gained some support for preventing linker removal of `llvm.used` via the SHF_RETAIN section flag. This has some compatibility issues, though. Concretely, some older versions of `ld.gold` (specifically ones prior to v2.36, released in Jan 2021) had a bug where they would fail to place a `#[used] #[link_section = ".init_array"]` static in between `__init_array_start`/`__init_array_end`, leading to code that does this failing to run a static constructor. This is technically not a thing we guarantee will work, but it is a common use case and is needed in `libstd` (for example, to get access to `std::env::args()` even if Rust does not control `main`, such as when in a `cdylib` crate).
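For concreteness, a sketch of the pattern in question (ELF-specific; this is the shape of code the old `ld.gold` bug could break):

```rust
// The loader runs every function pointer placed between __init_array_start and
// __init_array_end before `main`; `#[used]` keeps the static from being
// optimized away by the compiler so the pointer reaches the object file's
// `.init_array` section.
#[cfg(target_os = "linux")]
#[used]
#[link_section = ".init_array"]
static EARLY_INIT: extern "C" fn() = early_init;

extern "C" fn early_init() {
    // Runs during program startup, before `main`.
}

fn main() {}
```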
As a result, when updating to LLVM 13, we unconditionally switched to using `llvm.compiler.used`, which mirrors the guarantees we make for `#[used]` and doesn't require the latest `ld.gold`. Unfortunately, this happened to break quite a few things in the ecosystem, as non-ELF targets had come to rely on `#[used]` being slightly stronger. In particular, there are cases where it will even break static constructors on these targets[^initinit] (and in fact, it breaks many more use cases, as Mach-O uses special sections as an interface to the OS/linker/loader in many places).
As a result, we only switch to `llvm.compiler.used` on ELF[^elfish] targets. The rationale here is:
1. It is (hopefully) identical to the semantics we had prior to the LLVM 13 update: before that update we unconditionally used `llvm.used`, but on ELF `llvm.used` was the same as `llvm.compiler.used`.
2. It seems to be how Clang compiles this, and given that they have similar (but stronger) compatibility promises, that makes sense.
[^initinit]: For Mach-O targets: it is not always guaranteed that `__DATA,__mod_init_func` is a GC root if it does not have the `S_MOD_INIT_FUNC_POINTERS` flag, which we cannot add. In most cases, when ld64 transforms this section into `__DATA_CONST,__mod_init_func`, the flag gets applied, but it's not clear that that is intentional (let alone guaranteed), and the logic is complex enough that breakage probably happens sometimes; people in the wild report it occurring.
[^elfish]: Actually, there's not a great way to tell if it's ELF, so I've approximated it.
This is pretty ad-hoc and hacky! We probably should have a firmer set of guarantees here, but this change should relax the pressure on coming up with that considerably, returning it to previous levels.
---
Unsure who should review so leaving it open, but for sure CC `@nikic`
Remove branch target prologues from `#[naked] fn`
This patch hacks around rust-lang/rust#98768 for now by injecting appropriate attributes into the LLVM IR we emit for naked functions. I intend to pursue this upstream so that these attributes can be removed in general, but it's slow going wading through C++ for me.
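A hedged sketch of the kind of function affected (unstable `naked_functions` feature; aarch64 is chosen purely for illustration, since BTI is an AArch64 branch-protection feature):

```rust
#![feature(naked_functions)]

// A naked function's body is exactly the asm written here; with branch
// protection enabled, the compiler could still insert a branch-target (BTI)
// prologue before it, which is what this patch suppresses via attributes on
// the emitted LLVM IR.
#[cfg(target_arch = "aarch64")]
#[naked]
pub unsafe extern "C" fn return_arg(x: u64) -> u64 {
    // Under the AArch64 ABI the argument and return value both live in x0.
    core::arch::asm!("ret", options(noreturn));
}

fn main() {}
```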
Revert "Work around invalid DWARF bugs for fat LTO"
Since September, the toolchain has not been generating reliable DWARF
information for static variables when LTO is on. This has affected
projects in the embedded space where the use of LTO is typical. In our
case, it has kept us from bumping past the 2021-09-22 nightly toolchain
lest our debugger break. This has been a pretty dramatic regression for
people using debuggers and static variables. See #90357 for more info
and a repro case.
This commit is a mechanical revert of
d5de680e20 from PR #89041, which caused
the issue. (Note that on that PR, the commit's author has requested it be
reverted.)
I have locally verified that this fixes #90357 by restoring the
functionality of both the repro case I posted on that bug, and debugger
behavior on real programs. There do not appear to be test cases for this
in the toolchain; if I've missed them, point me at 'em and I'll update
them.
Add an option to control from the rustc CLI
whether the resulting ".o" bitcode module files carry
thinLTO info or regular LTO info.
This allows using "-lto-embed-bitcode=optimized" during linkage
correctly.
Signed-off-by: Ziv Dunkelman <ziv.dunkelman@nextsilicon.com>
Keep unstable target features for asm feature checking
Inline assembly uses the target features to determine which registers
are available on the current target. However it needs to be able to
access unstable target features for this.
Fixes #99071
There are several indications that we should not represent a ZST as a ScalarInt:
- We had two ways to have ZST valtrees, either an empty `Branch` or a `Leaf` with a ZST in it.
`ValTree::zst()` used the former, but the latter could possibly arise as well.
- Likewise, the interpreter had `Immediate::Uninit` and `Immediate::Scalar(Scalar::ZST)`.
- LLVM codegen already had to special-case ZST ScalarInt.
So instead add new ZST variants to those types that did not have other variants
which could be used for this purpose.
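An illustrative sketch of the shape of this change (type and variant names are made up, not the actual rustc types):

```rust
// Before, a zero-sized value could be smuggled through the same variant as a
// real scalar, so every consumer had to remember the size-0 special case.
// Giving the zero-sized case its own variant makes it impossible to confuse
// with a genuine ScalarInt.
enum Value {
    Scalar(u128), // stand-in for a real, non-zero-sized ScalarInt
    Zst,          // dedicated variant for zero-sized values
}

fn size_in_bytes(v: &Value) -> u64 {
    match v {
        Value::Scalar(_) => 16, // placeholder; a real ScalarInt records its size
        Value::Zst => 0,
    }
}
```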