foreign_modules query hash table lookups
When compiling a large monolithic crate, we're seeing huge times in the `foreign_modules` query due to repeated iteration over the foreign modules (in order to find a module by its id). This implements hash table lookups, which massively reduces the time spent in that query in this particular case. We'll need to see if the overhead of creating the hash table has a negative impact on performance in more normal compilation scenarios.
I'm working with `@wesleywiser` on this.
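A minimal sketch of the idea, using `std::collections::HashMap` in place of the compiler's internal map type and hypothetical simplified types:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the query's module data.
struct ForeignModule {
    def_id: u64,
    // ...
}

// Before: finding a module by id walked the whole list on every call.
fn find_linear(modules: &[ForeignModule], id: u64) -> Option<&ForeignModule> {
    modules.iter().find(|m| m.def_id == id)
}

// After: build the index once; each subsequent lookup is O(1).
fn build_index(modules: &[ForeignModule]) -> HashMap<u64, &ForeignModule> {
    modules.iter().map(|m| (m.def_id, m)).collect()
}
```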
This lets rustc users tweak whether the linker should relax ELF relocations,
namely whether it should emit R_X86_64_GOTPCRELX relocations instead of
R_X86_64_GOTPCREL, as the former is allowed by the ABI to be further
optimised. The default value is whatever the target defines.
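The flag-resolution logic boils down to something like this sketch (names are illustrative, not the actual rustc internals):

```rust
// -Z relax-elf-relocations=yes|no overrides the target's default;
// when the flag is absent, the target's own setting wins.
fn relax_elf_relocations(cli_override: Option<bool>, target_default: bool) -> bool {
    cli_override.unwrap_or(target_default)
}
```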
Implement -Z function-sections=yes|no
This lets rustc users tweak whether all functions should be put in their own TEXT section, using whatever default value the target defines if the flag is missing.
I'm having fun experimenting with musl libc and trying to implement the start symbol in Rust, which means avoiding code that requires relocations. AFAIK, putting everything in its own section makes the toolchain generate `GOTPCREL` relocations for symbols that could use plain old PC-relative addressing (at least on `x86_64`) if they were all in the same section.
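For illustration, the difference at the object-file level looks roughly like this (section names are mangled and toolchain-dependent; this is a sketch, not actual output):

```rust
// With -Z function-sections=yes, each function gets its own section,
// e.g. something like:
//   .text._ZN5crate3bar17h0123456789abcdefE
//   .text._ZN5crate3baz17h0123456789abcdefE
// With -Z function-sections=no, both land in a single .text section.
pub fn bar() -> u32 { 1 }
pub fn baz() -> u32 { 2 }
```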
rustc_mir: track inlined callees in SourceScopeData.
We now record which MIR scopes are the roots of *other* (inlined) functions' scope trees, which allows us to generate the correct debuginfo in codegen, similar to what LLVM inlining generates.
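Conceptually the change looks like this sketch (field and type names are simplified stand-ins, not the real MIR definitions):

```rust
struct Span;     // stand-in for rustc's source span
struct Instance; // stand-in for the monomorphized callee

struct SourceScopeData {
    parent_scope: Option<usize>,
    /// `Some((callee, call_site))` when this scope is the root of the
    /// scope tree of a function that was inlined into this body;
    /// codegen uses this to emit inlined-function debuginfo.
    inlined: Option<(Instance, Span)>,
}
```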
This PR makes the `ui` test `backtrace-debuginfo` pass, if the MIR inliner is turned on by default.
Also, `#[track_caller]` is now correct in the face of MIR inlining (cc `@anp`).
Fixes #76997.
r? `@rust-lang/wg-mir-opt`
Properly define va_arg and va_list for aarch64-apple-darwin
From [Apple][]:
> Because of these changes, the type `va_list` is an alias for `char*`,
> and not for the struct type in the generic procedure call standard.
With this change `./x.py test --stage 1 src/test/ui/abi/variadic-ffi`
passes.
Fixes #78092
[Apple]: https://developer.apple.com/documentation/xcode/writing_arm64_code_for_apple_platforms
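An illustrative sketch of the two representations (not the actual `core::ffi` definitions; struct field names follow the AAPCS64 description):

```rust
// On aarch64-apple-darwin, `va_list` is just a `char *`:
#[cfg(all(target_arch = "aarch64", target_vendor = "apple"))]
type VaList = *mut u8;

// Elsewhere on aarch64, the generic procedure call standard defines a
// struct with separate register-save areas:
#[cfg(all(target_arch = "aarch64", not(target_vendor = "apple")))]
#[repr(C)]
struct VaList {
    stack: *mut u8,
    gr_top: *mut u8,
    vr_top: *mut u8,
    gr_offs: i32,
    vr_offs: i32,
}
```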
Addresses Issue #78286
Libraries compiled with coverage and linked without enabling coverage
would fail when attempting to add the library's coverage statements to
the codegen coverage context (which was `None`).
Now, if coverage statements are encountered while compiling / linking
with `-Z instrument-coverage` disabled, codegen will *not* attempt to
add code regions to a coverage map, and it will not inject the LLVM
instrprof_increment intrinsic calls.
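A sketch of the resulting guard (hypothetical names; the real check consults the session's flags):

```rust
fn codegen_coverage_statement(instrument_coverage: bool) {
    if !instrument_coverage {
        // The library was built with coverage, but this session has
        // `-Z instrument-coverage` disabled: skip the coverage map and
        // don't inject llvm.instrprof.increment calls.
        return;
    }
    // ... add the code region to the coverage map and inject the
    // instrprof_increment intrinsic call ...
}
```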
Make set_span take &mut self
This was a mistake in https://github.com/rust-lang/rust/pull/77614
It's not a _huge_ deal, because backends can always implement this with interior mutability, but it's nice to avoid interior mutability when possible. For context, the `set_source_location` method, called alongside `set_span`, also takes `&mut self`.
r? `@eddyb`
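In sketch form (trait and method simplified from the backend interface):

```rust
struct Span; // stand-in for rustc's source span

trait Builder {
    // Before (from #77614): fn set_span(&self, span: Span);
    // After: takes `&mut self`, like `set_source_location`, so
    // backends don't need interior mutability to implement it.
    fn set_span(&mut self, span: Span);
}
```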
Use rebind instead of Binder::bind when possible
These are really only the easy places. I just searched for `Binder::bind` and replaced it where doing so was straightforward.
r? `@lcnr`
cc `@nikomatsakis`
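A toy illustration of the difference (not rustc's actual `Binder` type):

```rust
struct Binder<T> {
    value: T,
    // ... plus the list of bound variables in the real type ...
}

impl<T> Binder<T> {
    // `rebind` reuses the bound variables already attached to `self`
    // instead of constructing them again, as `Binder::bind` would.
    fn rebind<U>(&self, value: U) -> Binder<U> {
        Binder { value }
    }
}
```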
Set .llvmbc and .llvmcmd sections as allocatable
This marks both sections as allocatable rather than excluded, which matches what
clang does with the equivalent `-fembed-bitcode` flag.
Prevent miscompilation in trivial `loop {}`
Ideally, we would want to handle a broader set of cases to fully fix the
underlying bug here. That is currently relatively expensive at compile and
runtime, so we don't do that for now.
Performance results indicate this is not a major regression, if at all, so it should be safe to land.
cc #28728
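The pattern in question is as trivial as it sounds; a hypothetical minimal reproduction:

```rust
// Without the fix, LLVM's forward-progress assumptions could let it
// optimize this infinite loop away, miscompiling callers that rely on
// it never returning.
fn spin_forever() -> ! {
    loop {}
}
```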
The wrapper type led to tons of target.target
across the compiler. Its ptr_width field isn't
required any more, as target_pointer_width
is already present in parsed form.
Preparation for a subsequent change that replaces
rustc_target::config::Config with its wrapped Target.
On its own, this commit breaks the build. I don't like making
build-breaking commits, but in this instance I believe that it
makes review easier, as the "real" changes of this PR can be
seen much more easily.
Result of running:

```
find compiler/ -type f -exec sed -i -e 's/target\.target\([)\.,; ]\)/target\1/g' {} \;
find compiler/ -type f -exec sed -i -e 's/target\.target$/target/g' {} \;
find compiler/ -type f -exec sed -i -e 's/target.ptr_width/target.pointer_width/g' {} \;
./x.py fmt
```
Rename target_pointer_width to pointer_width, because it is already a
member of the Target struct.
The compiler supports only three valid values for target_pointer_width:
16, 32, 64. Thus it can safely be turned into an int.
This means fewer allocations and clones, as well as easier handling of the type.
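A sketch of the field change (simplified):

```rust
// Before: the width lived on `Target` as a string.
struct TargetBefore {
    target_pointer_width: String, // "16", "32" or "64"
}

// After: parsed once into an integer; no allocation, trivially Copy.
struct TargetAfter {
    pointer_width: u32, // 16, 32 or 64
}
```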
Remove unused code
Rustc has a builtin lint for detecting unused code inside a crate, but when an item is marked `pub`, the code, even if unused inside the entire workspace, is never marked as such. Therefore, I've built [warnalyzer](https://github.com/est31/warnalyzer) to detect unused items in a cross-crate setting.
Closes https://github.com/est31/warnalyzer/issues/2
Codegen backend interface refactor
This moves several things away from the codegen backend to rustc_interface. There are a few behavioral changes where previously the incremental cache (incorrectly) wouldn't get finalized, but now it does. See the individual commit messages.
Add LLVM flags to limit DWARF version to 2 on BSD
This has been a thorn in my side for a while; I can finally generate flamegraphs of Rust programs on BSD again. This fixes dtrace profiling on FreeBSD. I think it might help with lldb as well, but I can't test that because my current rust-lldb setup is messed up.
I'm limiting the DWARF version to 2 on all the BSDs (NetBSD/OpenBSD/FreeBSD) since it looks like this applies to all of them, but I have only tested on FreeBSD.
Let me know if there's anything I can improve!
---
Currently on FreeBSD, dtrace profiling does not work and shows jumbled/incorrect
symbols in the backtraces. FreeBSD does not yet support the latest versions of
DWARF in dtrace (and lldb?), and needs to be limited to DWARF2 in the same way as macOS.
This adds an is_like_bsd flag since it was missing. NetBSD/OpenBSD/FreeBSD all
match this.
This effectively copies #11864 but targets FreeBSD instead of macOS.
Certain platforms need to limit the DWARF version emitted (macOS, *BSD). This
change adds a dwarf_version entry to the options that allows a platform to
specify the DWARF version to use. By default this option is `None` and the
default DWARF version is selected.
Also adds an option for printing `Option<u32>` JSON keys.
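A sketch of the new option and how it would be consulted (the field name is from this change; the surrounding code is illustrative):

```rust
struct TargetOptions {
    /// `None` means "use the toolchain's default DWARF version".
    dwarf_version: Option<u32>,
    // ...
}

fn effective_dwarf_version(opts: &TargetOptions, default: u32) -> u32 {
    opts.dwarf_version.unwrap_or(default)
}
```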
Use llvm::computeLTOCacheKey to determine post-ThinLTO CGU reuse
During incremental ThinLTO compilation, we attempt to re-use the
optimized (post-ThinLTO) bitcode file for a module if it is 'safe' to do
so.
Up until now, 'safe' has meant that the set of modules that our current
modules imports from/exports to is unchanged from the previous
compilation session. See PR #67020 and PR #71131 for more details.
However, this turns out to be insufficient to guarantee that it's safe
to reuse the post-LTO module (i.e. that optimizing the pre-LTO module
would produce the same result). When LLVM optimizes a module during
ThinLTO, it may look at other information from the 'module index', such
as whether a (non-imported!) global variable is used. If this
information changes between compilation runs, we may end up re-using an
optimized module that (for example) had dead-code elimination run on a
function that is now used by another module.
Fortunately, LLVM implements its own ThinLTO module cache, which is used
when ThinLTO is performed by a linker plugin (e.g. when clang is used to
compile a C project). Using this cache directly would require extensive
refactoring of our code - but fortunately for us, LLVM provides a
function that does exactly what we need.
The function `llvm::computeLTOCacheKey` is used to compute a SHA-1 hash
from all data that might influence the result of ThinLTO on a module.
In addition to the module imports/exports that we manually track, it
also hashes information about global variables (e.g. their liveness)
which might be used during optimization. By using this function, we
shouldn't have to worry about new LLVM passes breaking our module re-use
behavior.
In LLVM, the output of this function forms part of the filename used to
store the post-ThinLTO module. To keep our current filename structure
intact, this PR just writes out the mapping 'CGU name -> Hash' to a
file. To determine if a post-LTO module should be reused, we compare
hashes from the previous session.
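In sketch form, the reuse decision becomes a hash comparison (types and names are illustrative; the real key is the SHA-1 produced by `llvm::computeLTOCacheKey`):

```rust
use std::collections::HashMap;

type CguName = String;
type LtoCacheKey = String; // SHA-1 hex digest from computeLTOCacheKey

// Reuse the post-ThinLTO module only if the key covering *all* of
// ThinLTO's inputs for this CGU (imports/exports, global-variable
// liveness, ...) is unchanged since the previous session.
fn can_reuse(
    prev: &HashMap<CguName, LtoCacheKey>,
    curr: &HashMap<CguName, LtoCacheKey>,
    cgu: &str,
) -> bool {
    match (prev.get(cgu), curr.get(cgu)) {
        (Some(a), Some(b)) => a == b,
        _ => false,
    }
}
```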
This should unblock PR #75199 - by sheer chance, it seems to have hit
this issue due to the particular CGU partitioning and optimization
decisions that end up getting made.
Add asm! support for mips64
- [x] Updated `src/doc/unstable-book/src/library-features/asm.md`.
- [ ] No vector type support. I don't know much about those types.
cc #76839
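A hypothetical example of what this enables (nightly `asm!` syntax as of this change; `daddu` is the MIPS64 doubleword add):

```rust
#![feature(asm)]

fn add(a: u64, b: u64) -> u64 {
    let out: u64;
    unsafe {
        // General-purpose register class `reg`; vector registers are
        // not covered by this change.
        asm!("daddu {0}, {1}, {2}", out(reg) out, in(reg) a, in(reg) b);
    }
    out
}
```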