Delete -Zquery-stats infrastructure
These statistics are computable from the self-profile data and/or ad-hoc collectable as needed, and in the meantime contribute to rustc bootstrap times -- locally, this PR shaves ~2.5% from rustc_query_impl builds in instruction counts.
If this does lose some functionality we want to keep, I think we should migrate it to self-profile (or a similar interface) rather than this ad-hoc reporting.
Stabilize `-Z instrument-coverage` as `-C instrument-coverage`
(Tracking issue for `instrument-coverage`: https://github.com/rust-lang/rust/issues/79121)
This PR stabilizes support for instrumentation-based code coverage, previously provided via the `-Z instrument-coverage` option. (Continue supporting `-Z instrument-coverage` for compatibility for now, but show a deprecation warning for it.)
Many, many people have tested this support, and there are numerous reports of it working as expected.
Move the documentation from the unstable book to stable rustc documentation. Update uses and documentation to use the `-C` option.
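For illustration, a typical end-to-end use of the stabilized option looks roughly like the following. The crate and binary names here are hypothetical, and the LLVM tools can come from the `llvm-tools-preview` rustup component or a matching LLVM installation:

```
$ RUSTFLAGS="-C instrument-coverage" cargo build
$ ./target/debug/my_program                 # writes default.profraw in the working directory
$ llvm-profdata merge -sparse default.profraw -o my_program.profdata
$ llvm-cov report ./target/debug/my_program --instr-profile=my_program.profdata
```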
Addressing questions raised in the tracking issue:
> If/when stabilized, will the compiler flag be updated to -C instrument-coverage? (If so, the -Z variant could also be supported for some time, to ease migrations for existing users and scripts.)
This stabilization PR updates the option to `-C` and keeps the `-Z` variant to ease migration.
> The Rust coverage implementation depends on (and automatically turns on) -Z symbol-mangling-version=v0. Will stabilizing this feature depend on stabilizing v0 symbol-mangling first? If so, what is the current status and timeline?
This stabilization PR depends on https://github.com/rust-lang/rust/pull/90128 , which stabilizes `-C symbol-mangling-version=v0` (but does not change the default symbol-mangling-version).
> The Rust coverage implementation implements the latest version of LLVM's Coverage Mapping Format (version 4), which forces a dependency on LLVM 11 or later. A compiler error is generated if attempting to compile with coverage, and using an older version of LLVM.
Given that LLVM 13 has now been released, requiring LLVM 11 for coverage support seems like a reasonable requirement. If people don't have at least LLVM 11, nothing else breaks; they just can't use coverage support. Given that coverage support currently requires a nightly compiler and LLVM 11 or newer, allowing it on a stable compiler built with LLVM 11 or newer seems like an improvement.
The [tracking issue](https://github.com/rust-lang/rust/issues/79121) and the [issue label A-code-coverage](https://github.com/rust-lang/rust/labels/A-code-coverage) link to a few open issues related to `instrument-coverage`, but none of them seem like showstoppers. All of them seem like improvements and refinements we can make after stabilization.
The original `-Z instrument-coverage` support went through a compiler-team MCP at https://github.com/rust-lang/compiler-team/issues/278 . Based on that, `@pnkfelix` suggested that this needed a stabilization PR and a compiler-team FCP.
Stabilize `-Z print-link-args` as `--print link-args`
We have stable options for adding linker arguments; we should have a
stable option to help debug linker arguments.
Add documentation for the new option. In the documentation, make it clear that
the *exact* format of the output is not a stable guarantee.
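For example, to see the linker invocation for a crate (file name hypothetical):

```
$ rustc --print link-args main.rs
```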
Implement raw-dylib support for windows-gnu
Add support for `#[link(kind = "raw-dylib")]` on windows-gnu targets. Work around the inability of the binutils linker to read import libraries produced by LLVM by calling out to the binutils `dlltool` utility, which creates an import library from a temporary .DEF file; this approach is effectively a slightly refined version of `@mati865's` earlier attempt at this strategy in PR #88801. (In particular, this version adds support for `#[link_ordinal(...)]` as well.)
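Conceptually, the workaround corresponds roughly to the following manual `dlltool` invocation, which turns a DEF file into a GNU-style import library; the file names are purely illustrative, and the compiler generates and consumes these files internally:

```
$ dlltool --input-def mylib.def --dllname mylib.dll --output-lib libmylib.dll.a
```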
In support of #58713.
This allows selecting `v0` symbol-mangling without an unstable option.
Selecting `legacy` still requires -Z unstable-options.
Continue supporting -Z symbol-mangling-version for compatibility for
now, but show a deprecation warning for it.
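For example, opting into the new mangling scheme (file name hypothetical):

```
$ rustc -C symbol-mangling-version=v0 main.rs
$ rustc -C symbol-mangling-version=legacy -Z unstable-options main.rs   # legacy still needs -Z unstable-options
```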
Add codegen option for branch protection and pointer authentication on AArch64
The branch-protection codegen option enables the use of hint-space pointer
authentication code for AArch64 targets.
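As a sketch of the intended usage (the target triple and the particular protection values `bti` and `pac-ret` are shown only as examples; see the unstable-book entry for the accepted values):

```
$ rustc --target aarch64-unknown-linux-gnu -Z branch-protection=bti,pac-ret main.rs
```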
- Changed the separator from '+' to ','.
- Moved the branch protection options from -C to -Z.
- Additional test for incorrect branch-protection option.
- Remove LLVM < 12 code.
- Style fixes.
Co-authored-by: James McGregor <james.mcgregor2@arm.com>
LLVM has built-in heuristics for adding stack canaries to functions. These
heuristics can be selected with LLVM function attributes. This patch adds a
rustc option `-Z stack-protector={none,basic,strong,all}` which controls the use
of these attributes. This gives rustc the same stack smash protection support as
clang offers through options `-fno-stack-protector`, `-fstack-protector`,
`-fstack-protector-strong`, and `-fstack-protector-all`. The protection this can
offer is demonstrated in test/ui/abi/stack-protector.rs. This fills a gap in the
current list of rustc exploit
mitigations (https://doc.rust-lang.org/rustc/exploit-mitigations.html),
originally discussed in #15179.
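For example, to request the same heuristic as clang's `-fstack-protector-strong` (file name hypothetical):

```
$ rustc -Z stack-protector=strong main.rs
```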
Stack smash protection adds runtime overhead and is therefore still off by
default, but now users have the option to trade performance for security as they
see fit. An example use case is adding Rust code in an existing C/C++ code base
compiled with stack smash protection. Without the ability to add stack smash
protection to the Rust code, the code base artifacts could be exploitable in
ways not possible if the code base remained pure C/C++.
Stack smash protection support is present in LLVM for almost all the current
tier 1/tier 2 targets: see
test/assembly/stack-protector/stack-protector-target-support.rs. The one
exception is nvptx64-nvidia-cuda. This patch follows clang's example, and adds a
warning message printed if stack smash protection is used with this target (see
test/ui/stack-protector/warn-stack-protector-unsupported.rs). Support for tier 3
targets has not been checked.
Since the heuristics are applied at the LLVM level, the heuristics are expected
to add stack smash protection to a fraction of functions comparable to C/C++.
Some experiments demonstrating how Rust code is affected by the different
heuristics can be found in
test/assembly/stack-protector/stack-protector-heuristics-effect.rs. There is
potential for better heuristics using Rust-specific safety information. For
example, it might be reasonable to skip stack smash protection in functions which
transitively only use safe Rust code, or which use only a subset of functions
the user declares safe (such as anything under `std.*`). Such alternative
heuristics could be added at a later point.
LLVM also offers a "safestack" sanitizer as an alternative way to guard against
stack smashing (see #26612). This could possibly also be included as a
stack-protection heuristic. An alternative is to add it as a sanitizer (#39699).
This is what clang does: safestack is exposed with option
`-fsanitize=safe-stack`.
The options are only supported by the LLVM backend, but as with other codegen
options it is visible in the main codegen option help menu. The heuristic names
"basic", "strong", and "all" are hopefully sufficiently generic to be usable in
other backends as well.
Reviewed-by: Nikita Popov <nikic@php.net>
Extra commits during review:
- [address-review] make the stack-protector option unstable
- [address-review] reduce detail level of stack-protector option help text
- [address-review] correct grammar in comment
- [address-review] use compiler flag to avoid merging functions in test
- [address-review] specify min LLVM version in fortanix stack-protector test
Only for Fortanix test, since this target specifically requests the
`--x86-experimental-lvi-inline-asm-hardening` flag.
- [address-review] specify required LLVM components in stack-protector tests
- move stack protector option enum closer to other similar option enums
- rustc_interface/tests: sort debug option list in tracking hash test
- add an explicit `none` stack-protector option
Revert "set LLVM requirements for all stack protector support test revisions"
This reverts commit a49b74f92a4e7d701d6f6cf63d207a8aff2e0f68.
Try all stable method candidates first before trying unstable ones
Currently we try methods in this order in each step:
* Stable by value
* Unstable by value
* Stable autoref
* Unstable autoref
* ...
This PR changes it to first try to pick methods without any unstable candidates, and if none is found, try again to pick unstable ones.
Fixes #90320.
CC #88971; this would hopefully allow us to rename the "unstable_*" methods for integer impls back.
`@rustbot` label T-compiler T-libs-api
Leave -Z strip available temporarily as an alias, to avoid breaking
cargo until cargo transitions to using -C strip. (If the user passes
both, the -C version wins.)
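For example (with the accepted values being `none`, `debuginfo`, and `symbols`):

```
$ rustc -C strip=debuginfo main.rs
```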
Add -Z no-unique-section-names to reduce ELF header bloat.
This change adds a new compiler flag that can help reduce the size of ELF binaries that contain many functions.
By default, when enabling function sections (which is the default for most targets), the LLVM backend will generate different section names for each function. For example, a function `func` would generate a section called `.text.func`. Normally this is fine because the linker will merge all those sections into a single one in the binary. However, starting with [LLVM 12](https://github.com/llvm/llvm-project/commit/ee5d1a04), the backend will also generate unique section names for exception handling, resulting in thousands of `.gcc_except_table.*` sections ending up in the final binary because some linkers like LLD don't currently merge or strip these EH sections (see discussion [here](https://reviews.llvm.org/D83655)). This can bloat the ELF headers and string table significantly in binaries that contain many functions.
The new option is analogous to Clang's `-fno-unique-section-names`, and instructs LLVM to generate the same `.text` and `.gcc_except_table` section for each function, resulting in a smaller final binary.
The motivation for adding this new option was that we have a binary that ended up with so many ELF sections (over 65,000) that it broke some existing ELF tools, which couldn't handle that many sections.
Here's our old binary:
```
$ readelf --sections old.elf | head -1
There are 71746 section headers, starting at offset 0x2a246508:
$ readelf --sections old.elf | grep shstrtab
[71742] .shstrtab STRTAB 0000000000000000 2977204c ad44bb 00 0 0 1
```
That's an 11MB+ string table. Here's the new binary using this option:
```
$ readelf --sections new.elf | head -1
There are 43 section headers, starting at offset 0x29143ca8:
$ readelf --sections new.elf | grep shstrtab
[40] .shstrtab STRTAB 0000000000000000 29143acc 0001db 00 0 0 1
```
The whole binary size went down by over 20MB, which is quite significant.
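Using the option is a single extra flag on the compiler invocation (file name hypothetical):

```
$ rustc -Z no-unique-section-names main.rs
```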
This change adds a new compiler flag that can help reduce the size of
ELF binaries that contain many functions.
By default, when enabling function sections (which is the default for most
targets), the LLVM backend will generate different section names for each
function. For example, a function "func" would generate a section called
".text.func". Normally this is fine because the linker will merge all those
sections into a single one in the binary. However, starting with LLVM 12
(llvm/llvm-project@ee5d1a0), the backend will
also generate unique section names for exception handling, resulting in
thousands of ".gcc_except_table.*" sections ending up in the final binary
because some linkers don't currently merge or strip these EH sections.
This can bloat the ELF headers and string table significantly in
binaries that contain many functions.
The new option is analogous to Clang's -fno-unique-section-names, and
instructs LLVM to generate the same ".text" and ".gcc_except_table"
section for each function, resulting in smaller object files and
potentially a smaller final binary.
This largely involves implementing the options debug-info-for-profiling
and profile-sample-use and forwarding them on to LLVM.
AutoFDO can be used on x86-64 Linux like this:
rustc -O -Cdebug-info-for-profiling main.rs -o main
perf record -b ./main
create_llvm_prof --binary=main --out=code.prof
rustc -O -Cprofile-sample-use=code.prof main.rs -o main2
Now `main2` will have feedback directed optimization applied to it.
The create_llvm_prof tool can be obtained from this github repository:
https://github.com/google/autofdo

Fixes #64892.
Introduce -Z remap-cwd-prefix switch
This switch remaps any absolute paths rooted under the current
working directory to a new value. This includes remapping the
debug info in `DW_AT_comp_dir` and `DW_AT_decl_file`.
Importantly, this flag does not require passing the current working
directory to the compiler, such that the command line can be
run on any machine (with the same input files) and produce the
same results. This is a critical property for debugging compiler
issues that crop up on remote machines.
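A sketch of the intended usage, remapping paths rooted in the current working directory to an arbitrary placeholder (the placeholder value and file name are hypothetical):

```
$ rustc -g -Z remap-cwd-prefix=/remapped-cwd main.rs
```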
This is based on adetaylor's dbc4ae7cba
Major Change Proposal: https://github.com/rust-lang/compiler-team/issues/450
Discussed on #38322. Would resolve issue #87325.
This was removed by #85284 in favor of -Zprofiler-runtime=<name>.
However, the suggested -Zprofiler-runtime=None doesn't work because
"None" is treated as a crate name.
Add flag to configure `large_assignments` lint
The `large_assignments` lint detects moves over a specified limit. The
limit is configured through a `move_size_limit = "N"` attribute placed at
the root of a crate. When the attribute is absent, the lint is disabled.
Make it possible to enable the lint without making any changes to the
source code, through a new flag `-Zmove-size-limit=N`. For example, to
detect moves exceeding 1023 bytes in a cargo crate, including all
dependencies, one could use:
```
$ env RUSTFLAGS=-Zmove-size-limit=1024 cargo build -vv
```
Lint tracking issue #83518.
This creates a CSV with name "closure_profile_XXXXX.csv", where the
variable part is the process id of the compiler.
To profile a cargo project you can run one of the following depending on
if you're compiling a library or a binary:
```
cargo +stage1 rustc --lib -- -Zprofile-closures
cargo +stage1 rustc --bin -- -Zprofile-closures
```
Allow loading of llvm plugins on nightly
Based on a discussion in #82734 / with `@wsmoses`.
Mainly moves [this](0149bc4e7e) behind a -Z flag, so it can only be used on nightly,
as requested by `@nagisa` in https://github.com/rust-lang/rust/issues/82734#issuecomment-835863940
This change allows loading of llvm plugins like Enzyme.
Right now it also requires a shared library LLVM build of rustc for symbol resolution.
```rust
// test.rs
extern { fn __enzyme_autodiff(_: usize, ...) -> f64; }

fn square(x: f64) -> f64 {
    return x * x;
}

fn main() {
    unsafe {
        println!("Hello, world {} {}!", square(3.0), __enzyme_autodiff(square as usize, 3.0));
    }
}
```
```
./rustc test.rs -Z llvm-plugins="./LLVMEnzyme-12.so" -C passes="enzyme"
./test
Hello, world 9 6!
```
I will try to figure out how to simplify the usage and get this into stable in a later iteration,
but having this on nightly will already help testing further steps.
Provide option for specifying the profiler runtime
Currently, if `-Zinstrument-coverage` is enabled, the target is linked
against the `library/profiler_builtins` crate (which pulls in LLVM's
compiler-rt runtime).
This option enables backends to specify an alternative runtime crate for
handling injected instrumentation calls.
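A sketch of how an alternative runtime might be selected; the crate name `my_profiler_rt` is hypothetical, and by default the `profiler_builtins` crate is used:

```
$ rustc -Z instrument-coverage -Z profiler-runtime=my_profiler_rt main.rs
```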
Previously, we sorted the vec prior to hashing, making the hash
independent of the original (command-line argument) order. However, the
original vec was still always kept in the original order, so we were
relying on the rest of the compiler always working with it in an
'order-independent' way.
This assumption was not being upheld by the `native_libraries` query -
the order of the entries in its result depends on the order of entries
in `Options.libs`. This led to an 'unstable fingerprint' ICE when the
`-l` arguments were re-ordered.
This PR removes the sorting logic entirely. Re-ordering command-line
arguments (without adding/removing/changing any arguments) seems like a
really niche use case, and correctly optimizing for it would require
additional work. By always hashing arguments in their original order, we
can entirely avoid a cause of 'unstable fingerprint' errors.
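For example, the following pair of incremental builds differs only in the order of the `-l` flags and previously could hit the 'unstable fingerprint' ICE (library names, incremental directory, and file name are hypothetical):

```
$ rustc -C incremental=incr -l static=foo -l static=bar main.rs
$ rustc -C incremental=incr -l static=bar -l static=foo main.rs
```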
Introduce the beginning of a THIR unsafety checker
This poses the foundations for the THIR unsafety checker, so that it can be implemented incrementally:
- implements a rudimentary `Visitor` for the THIR (which will definitely need some tweaking in the future)
- introduces a new `-Zthir-unsafeck` flag which tells the compiler to use THIR unsafeck instead of MIR unsafeck
- implements detection of unsafe functions
- adds revisions to the UI tests to test THIR unsafeck alongside MIR unsafeck
This uses a very simple query design, where bodies are unsafety-checked on a body-by-body basis. This however has some big flaws:
- the unsafety checker builds the THIR itself, which means a lot of work is duplicated, since MIR building constructs its own copy of the THIR
- unsafety-checking closures is currently completely wrong: closures should take into account the "safety context" in which they are created, but here we consider closures to always be in a safe context
I had intended to fix these problems in follow-up PRs since they are always gated under the `-Zthir-unsafeck` flag (which is explicitly noted to be unsound).
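The new checker can be exercised with the usual boolean `-Z` syntax (file name hypothetical):

```
$ rustc -Z thir-unsafeck=yes main.rs
```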
r? `@nikomatsakis`
cc https://github.com/rust-lang/project-thir-unsafeck/issues/3 and https://github.com/rust-lang/project-thir-unsafeck/issues/7
Fix `--remap-path-prefix` not correctly remapping `rust-src` component paths and unify handling of path mapping with virtualized paths
This PR fixes #73167 ("Binaries end up containing path to the rust-src component despite `--remap-path-prefix`") by preventing real local filesystem paths from reaching compilation output if the path is supposed to be remapped.
`RealFileName::Named` introduced in #72767 is now renamed as `LocalPath`, because this variant wraps a (most likely) valid local filesystem path.
`RealFileName::Devirtualized` is renamed as `Remapped`, to be used both for a path remapped from a real path via the `--remap-path-prefix` argument and for a real path inferred from a virtualized (during compiler bootstrapping) `/rustc/...` path. The `local_path` field is now an `Option<PathBuf>`, as it will be set to `None` before serialisation, so it never reaches any build output. Attempting to serialise a non-`None` `local_path` will cause an assertion failure.
When a path is remapped, a `RealFileName::Remapped` variant is created. The original path is preserved in the `local_path` field and the remapped path is saved in the `virtual_name` field. Previously, `local_path` was directly modified, which goes against its purpose of being "suitable for reading from the file system on the local host".
`rustc_span::SourceFile`'s fields `unmapped_path` (introduced by #44940) and `name_was_remapped` (introduced by #41508 when `--remap-path-prefix` feature originally added) are removed, as these two pieces of information can be inferred from the `name` field: if it's anything other than a `FileName::Real(_)`, or if it is a `FileName::Real(RealFileName::LocalPath(_))`, then clearly `name_was_remapped` would've been false and `unmapped_path` would've been `None`. If it is a `FileName::Real(RealFileName::Remapped{local_path, virtual_name})`, then `name_was_remapped` would've been true and `unmapped_path` would've been `Some(local_path)`.
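For reference, the stable flag involved takes a `FROM=TO` mapping, e.g. (paths hypothetical):

```
$ rustc -g --remap-path-prefix=/home/user/project=/project main.rs
```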
cc `@eddyb` who implemented `/rustc/...` path devirtualisation