Enable raw-dylib for bin crates
Fixes #93842
When `raw-dylib` is used in a `bin` crate, we need to collect all of the `raw-dylib` functions, generate the import library and add that to the linker command line.
I also changed the tests so that 1) the C++ DLLs are created after the Rust DLLs, thus there is no chance of accidentally using them in the Rust linking process, and 2) generating import libraries is disabled when building with MSVC.
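For context, this is the kind of program that now works: a `bin` crate declaring an import directly, with no import library on disk. A minimal sketch; the DLL and function names are just illustrative, and the unstable `raw_dylib` feature gate is assumed.

```rust
#![feature(raw_dylib)]

// Link against the DLL by name only; rustc generates the import library.
#[link(name = "kernel32", kind = "raw-dylib")]
extern "system" {
    fn GetCurrentProcessId() -> u32;
}

fn main() {
    println!("pid: {}", unsafe { GetCurrentProcessId() });
}
```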
Simplify some code that depends on Deref
Now that we can assume #97025 works, it's safe to expect that `Deref` always comes first in a place's projections. With this, I was able to simplify some code that depended on `Deref`'s position in projections. Once we are able to move Derefer before `ElaborateDrops` successfully, we will be able to optimize more places.
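A minimal sketch of the kind of check this enables, using rustc-internal types and a hypothetical helper name:

```rust
use rustc_middle::mir::{Place, ProjectionElem};

// With Derefer having run, a `Deref` projection can only be the very first
// element, so a single check on the head of the projection list suffices.
fn starts_with_deref(place: Place<'_>) -> bool {
    matches!(place.projection.first(), Some(&ProjectionElem::Deref))
}
```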
r? `@oli-obk`
Add `sign-ext` target feature to the WASM target
Some target features are still missing from that list.
See #97808 for basically the same PR by `@alexcrichton`.
Related issue: #96472.
PR introducing this issue: #87402.
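A minimal sketch of how the feature is consumed once it is recognized; it is assumed to be enabled in the usual way, e.g. with `-Ctarget-feature=+sign-ext` on a wasm32 target:

```rust
// Gate on the feature like on any other known target feature.
fn has_sign_ext() -> bool {
    cfg!(target_feature = "sign-ext")
}

fn main() {
    println!("sign-ext enabled: {}", has_sign_ext());
}
```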
Upgrade indexmap and thorin-dwp to use hashbrown 0.12
This removes the last dependencies on hashbrown 0.11.
This also upgrades to hashbrown 0.12.3 to fix a double-free (#99372).
Add fine-grained LLVM CFI support to the Rust compiler
This PR improves the LLVM Control Flow Integrity (CFI) support in the Rust compiler by providing forward-edge control flow protection for Rust-compiled code only, by aggregating function pointers into groups identified by their return and parameter types.
Forward-edge control flow protection for "mixed binaries" of C/C++- and Rust-compiled code (i.e., for when C/C++-compiled code and Rust-compiled code share the same virtual address space) will be provided in later work as part of this project, by identifying C char and integer type uses at the time types are encoded (see "Type metadata" in the design document in the tracking issue https://github.com/rust-lang/rust/issues/89653).
LLVM CFI can be enabled with `-Zsanitizer=cfi` and requires LTO (i.e., `-Clto`).
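A small illustration of the granularity described above (a sketch, not a test from the PR): with `-Zsanitizer=cfi -Clto`, the indirect call below may only reach functions whose return and parameter types match `fn(i32) -> i32`.

```rust
fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    // The call through `f` is a checked call site under CFI.
    let f: fn(i32) -> i32 = add_one;
    println!("{}", f(41));
}
```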
Thank you again, `@eddyb`, `@nagisa`, `@pcc`, and `@tmiasko`, for all the help!
Fix unreachable coverage generation for inlined functions
To generate function coverage we need at least one coverage counter,
so coverage from unreachable blocks is retained only when some live
counters remain.
The previous implementation incorrectly retained unreachable coverage,
because it didn't account for the fact that those live counters can
belong to another function due to inlining.
Fixes #98833.
make vtable pointers entirely opaque
This implements the scheme discussed in https://github.com/rust-lang/unsafe-code-guidelines/issues/338: vtable pointers should be considered entirely opaque and not even readable by Rust code, similar to function pointers.
- We have a new kind of `GlobalAlloc` that symbolically refers to a vtable.
- Miri uses that kind of allocation when generating a vtable.
- The codegen backends, upon encountering such an allocation, call `vtable_allocation` to obtain an actually dataful allocation for this vtable.
- We need new intrinsics to obtain the size and alignment from a vtable (for some `ptr::metadata` APIs), since direct accesses are now UB.
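As a sketch of what user code looks like on top of this (the unstable `ptr_metadata` feature is assumed; the exact lowering to the new intrinsics is internal):

```rust
#![feature(ptr_metadata)]
use std::ptr;

// The vtable pointer itself stays opaque; size and alignment come from the
// `DynMetadata` accessors rather than from reading the vtable directly.
fn size_align_of_val(value: &dyn std::fmt::Debug) -> (usize, usize) {
    let meta = ptr::metadata(value as *const dyn std::fmt::Debug);
    (meta.size_of(), meta.align_of())
}

fn main() {
    assert_eq!(size_align_of_val(&0u32), (4, 4));
}
```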
I had to touch quite a bit of code that I am not very familiar with, so some of this might not make much sense...
r? `@oli-obk`
Allow disabling the ThinLTO buffer to support the lto-embed-bitcode lld feature
Hello
This change fixes issue https://github.com/rust-lang/rust/issues/84395, in which passing "-lto-embed-bitcode=optimized" to lld when linking Rust code via linker-plugin-lto doesn't produce the expected result.
Instead of emitting a single unified module into an `.llvmbc` section of the linked ELF, it emits multiple submodules.
This is caused by rustc emitting the BC modules after running LLVM's `createWriteThinLTOBitcodePass` pass,
which in turn triggers ThinLTO linkage and causes the said issue.
This patch adds a compiler flag (`-Cemit-thin-lto=<bool>`) to select between running `createWriteThinLTOBitcodePass` and `createBitcodeWriterPass`.
Note this pattern of selecting between those 2 passes is common inside of LLVM code.
The default is to match the old behavior.
Only compile #[used] as llvm.compiler.used for ELF targets
This returns `#[used]` to how it worked prior to the LLVM 13 update. The intention is not that this is a stable promise.
I'll add tests later today. The tests will test things that we don't actually promise, though.
It's a deliberately small patch, mostly comments. And assuming it's reviewed and lands in time, IMO it should at least be considered for uplifting to beta (so that it can be in 1.59), as the change broke many crates in the ecosystem, even if they are relying on behavior that is not guaranteed.
# Background
LLVM has two ways of preventing removal of an unused variable: `llvm.compiler.used`, which must be present in object files, but allows the linker to remove the value, and `llvm.used` which is supposed to apply to the linker as well, if possible.
Prior to LLVM 13, `llvm.used` and `llvm.compiler.used` were the same on ELF targets, although they were different elsewhere. Prior to our update to LLVM 13, we compiled `#[used]` using `llvm.used` unconditionally, even though we only ever promised behavior like `llvm.compiler.used`.
In LLVM 13, ELF targets gained some support for preventing linker removal of `llvm.used` via the SHF_RETAIN section flag. This has some compatibility issues, though. Concretely, some older versions of `ld.gold` (specifically ones prior to v2.36, released in January 2021) had a bug where they would fail to place a `#[used] #[link_section = ".init_array"]` static in between `__init_array_start`/`__init_array_end`, leading to code that does this failing to run a static constructor. This is technically not a thing we guarantee will work, but it is a common use case, and is needed in `libstd` (for example, to get access to `std::env::args()` even if Rust does not control `main`, such as when in a `cdylib` crate).
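For reference, a minimal sketch of that pattern (assumed to run on an ELF target that honors `.init_array`):

```rust
// A #[used] static placed in .init_array acts as a static constructor; whether
// the entry may be dropped by the linker is exactly the llvm.used /
// llvm.compiler.used distinction discussed here.
#[used]
#[link_section = ".init_array"]
static INIT: extern "C" fn() = {
    extern "C" fn init() {
        // runs before `main`
    }
    init
};

fn main() {}
```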
As a result, when updating to LLVM 13, we unconditionally switched to using `llvm.compiler.used`, which mirrors the guarantees we make for `#[used]` and doesn't require the latest ld.gold. Unfortunately, this happened to break quite a few things in the ecosystem, as non-ELF targets had come to rely on `#[used]` being slightly stronger. In particular, there are cases where it will even break static constructors on these targets[^initinit] (and in fact, breaks way more use cases, as Mach-O uses special sections as an interface to the OS/linker/loader in many places).
As a result, we only switch to `llvm.compiler.used` on ELF[^elfish] targets. The rationale here is:
1. It is (hopefully) identical to the semantics we used prior to the LLVM 13 update, as prior to that update we unconditionally used `llvm.used`, but on ELF `llvm.used` was the same as `llvm.compiler.used`.
2. It seems to be how Clang compiles this, and given that they have similar (but stronger) compatibility promises, that makes sense.
[^initinit]: For Mach-O targets: it is not always guaranteed that `__DATA,__mod_init_func` is a GC root if it does not have the `S_MOD_INIT_FUNC_POINTERS` flag, which we cannot add. In most cases, when ld64 transforms this section into `__DATA_CONST,__mod_init_func`, the flag gets applied, but it's not clear that that is intentional (let alone guaranteed), the logic is complex enough that breakage probably does happen sometimes, and people in the wild report it occurring.
[^elfish]: Actually, there's not a great way to tell if it's ELF, so I've approximated it.
This is pretty ad-hoc and hacky! We probably should have a firmer set of guarantees here, but this change should relax the pressure on coming up with that considerably, returning it to previous levels.
---
Unsure who should review so leaving it open, but for sure CC `@nikic`
Use constant eval to do strict mem::uninit/zeroed validity checks
I'm not sure about the code organisation here, I just dumped the check in rustc_const_eval at the root. Not hard to move it elsewhere, in any case.
Also, this means cranelift codegen intrinsics lose the strict checks, since they don't seem to depend on rustc_const_eval, and I didn't see a point in keeping around two copies.
I also left comments in the `is_zero_valid` methods about "uhhh help how do i do this"; those apply to both methods equally.
Also rustc_codegen_ssa now depends on rustc_const_eval... is this okay?
Pinging `@RalfJung` since you were the one who mentioned this to me, so I'm assuming you're interested.
Haven't had a chance to run full tests on this since it's really warm, and it's 1AM, I'll check out any failures/comments in the morning :)
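For illustration, this is the sort of misuse the strict check is meant to catch (a sketch; the exact panic message may differ):

```rust
use std::mem;

fn main() {
    // References have no valid all-zero bit pattern, so with the strict
    // validity check this panics instead of silently producing an invalid
    // value.
    let _r: &u8 = unsafe { mem::zeroed() };
}
```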
Stop keeping metadata in memory before writing it to disk
Fixes #96358
I created this PR according to the instructions given in the issue, except for the following points:
- While the issue says "Write metadata into the temporary file in `encode_and_write_metadata` even if `!need_metadata_file`", I could not do that. That is because when I tried to do that and ran `x.py test`, I got a lot of test failures, as follows.
<details>
<summary>List of failed tests</summary>
<pre>
<code>
failures:
[ui] src/test/ui/json-multiple.rs
[ui] src/test/ui/json-options.rs
[ui] src/test/ui/rmeta/rmeta-rpass.rs
[ui] src/test/ui/save-analysis/emit-notifications.rs
[ui] src/test/ui/svh/changing-crates.rs
[ui] src/test/ui/svh/svh-change-lit.rs
[ui] src/test/ui/svh/svh-change-significant-cfg.rs
[ui] src/test/ui/svh/svh-change-trait-bound.rs
[ui] src/test/ui/svh/svh-change-type-arg.rs
[ui] src/test/ui/svh/svh-change-type-ret.rs
[ui] src/test/ui/svh/svh-change-type-static.rs
[ui] src/test/ui/svh/svh-use-trait.rs
test result: FAILED. 12915 passed; 12 failed; 100 ignored; 0 measured; 0 filtered out; finished in 71.41s
Some tests failed in compiletest suite=ui mode=ui host=x86_64-unknown-linux-gnu target=x86_64-unknown-linux-gnu
Build completed unsuccessfully in 0:01:58
</code>
</pre>
</details>
- I could not resolve the extra tasks about `create_rmeta_file` and `create_compressed_metadata_file` due to my lack of ability.
Add an option, controlled from the rustc CLI, to choose whether the
resulting ".o" bitcode module files carry ThinLTO info or regular LTO
info.
Allows using "-lto-embed-bitcode=optimized" correctly during linking.
Signed-off-by: Ziv Dunkelman <ziv.dunkelman@nextsilicon.com>
Keep unstable target features for asm feature checking
Inline assembly uses the target features to determine which registers
are available on the current target. However it needs to be able to
access unstable target features for this.
Fixes #99071
There are several indications that we should not represent a ZST as a ScalarInt:
- We had two ways to have ZST valtrees, either an empty `Branch` or a `Leaf` with a ZST in it.
`ValTree::zst()` used the former, but the latter could possibly arise as well.
- Likewise, the interpreter had `Immediate::Uninit` and `Immediate::Scalar(Scalar::ZST)`.
- LLVM codegen already had to special-case ZST ScalarInt.
So instead add new ZST variants to those types that did not have other variants
which could be used for this purpose.
Use less string interning
This removes string interning in a couple of places where interning won't result in perf improvements. I also switched one place to use pre-interned symbols.
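A minimal sketch of the second kind of change, using rustc-internal APIs (the helper name is illustrative): comparing against a pre-interned symbol's string avoids creating a new interner entry just for a comparison.

```rust
use rustc_span::sym;

fn is_test_attr(name: &str) -> bool {
    // instead of: Symbol::intern(name) == sym::test
    name == sym::test.as_str()
}
```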
Make lowering a query
Split from https://github.com/rust-lang/rust/pull/88186.
This PR refactors the relationship between lowering and the resolver outputs in order to make lowering itself a query.
In a first part, lowering is changed to avoid modifying resolver outputs, by maintaining its own data structures for creating new `NodeId`s and so on.
Then, the `TyCtxt` is modified to allow creating new `LocalDefId`s from inside it. This is done by:
- enclosing `Definitions` in a lock, so as to allow modification;
- creating a query `register_def` whose purpose is to declare a `LocalDefId` to the query system.
See `TyCtxt::create_def` and `TyCtxt::iter_local_def_id` for more detailed explanations of the design.
Make MIR basic blocks field public
This makes it possible to mutably borrow different fields of the MIR
body without resorting to methods like `basic_blocks_local_decls_mut_and_var_debug_info`.
To preserve validity of control flow graph caches in the presence of
modifications, a new struct `BasicBlocks` wraps together basic blocks
and control flow graph caches.
`BasicBlocks` dereferences to `IndexVec<BasicBlock, BasicBlockData>`.
Mutable access, on the other hand, requires an explicit `as_mut()` call.
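A minimal sketch of the resulting usage, with rustc-internal types and the field name as described above:

```rust
use rustc_middle::mir::Body;

// Reads go through the Deref impl; mutation opts in via `as_mut()`, which is
// what invalidates the cached control-flow-graph information.
fn clear_all_statements(body: &mut Body<'_>) {
    let _count = body.basic_blocks.len(); // read-only access via Deref
    for block in body.basic_blocks.as_mut().iter_mut() {
        block.statements.clear(); // mutable access goes through as_mut()
    }
}
```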
incr: cache dwarf objects in work products
Cache DWARF objects alongside object files in work products when those exist so that DWARF object files are available for thorin in packed mode in incremental scenarios.
r? `@michaelwoerister`
rustc_codegen_ssa: use `project_index`, not `project_field`, for array literals.
See https://github.com/rust-lang/rust/pull/98615#issuecomment-1170082774 for some context.
In short, we were using `project_field` even for array `mir::Rvalue::Aggregate`s, which results in benchmarks like `deep-vector.rs` (and presumably also some real-world use cases?) being impacted by how we handle non-array aggregate fields.
(This is a separate PR so that we can measure the perf effects in isolation)
r? `@nikic`
Allow arithmetic and certain bitwise ops on AtomicPtr
This is mainly to support migrating from `AtomicUsize`, for the strict provenance experiment.
This is a pretty dubious set of APIs, but it should be sufficient to allow code that's using `AtomicUsize` to manipulate a tagged pointer atomically. It's under a new feature gate, `#![feature(strict_provenance_atomic_ptr)]`, but I'm not sure if it needs its own tracking issue. I'm happy to make one, but it's not clear that it's needed.
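A minimal sketch of the intended use, assuming only the unstable feature gate named above:

```rust
#![feature(strict_provenance_atomic_ptr)]
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let mut slot = 0u64;
    let ptr = AtomicPtr::new(&mut slot as *mut u64);

    // Set the lowest bit as a tag; u64's alignment guarantees it is unused.
    ptr.fetch_or(1, Ordering::Relaxed);
    // ...and clear it again, without ever converting the pointer to usize.
    ptr.fetch_and(!1, Ordering::Relaxed);
}
```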
I'm unsure if it needs changes in the various non-LLVM backends. Because we just cast things to integers anyway (and were already doing so), I doubt it.
API change proposal: https://github.com/rust-lang/libs-team/issues/60
Fixes #95492
Compiling with `-Csplit-debuginfo=packed` was leaving behind `.dwo`
files because either the metadata or allocator module contained a DWARF
object which was not being removed by the
`maybe_remove_temps_from_module` closure.
Cache DWARF objects alongside object files in work products when those
exist so that DWARF object files are available for thorin in packed mode
in incremental scenarios.
Signed-off-by: David Wood <david.wood@huawei.com>