Add support for LLVM coverage mapping format versions 5 and 6
This PR cherry-picks Swatinem's initial commit from the unsubmitted PR #90047.
My additional commit builds on Swatinem's great starting point and adds full support for LLVM
Coverage Mapping Format version 6, conditionally, when compiling with LLVM 13.
Version 6 requires adding the compilation directory when file paths are
relative, and since Rustc coverage maps use relative paths, we should
add the expected compilation directory entry.
Note, however, that with the compilation directory, coverage reports
from `llvm-cov show` can now report file names (when the report includes
more than one file) with the full absolute path to the file.
This would be a problem for test results, but the workaround (for the
rust coverage tests) is to include an additional `llvm-cov show`
parameter: `--compilation-dir=.`
Emit LLVM optimization remarks when enabled with `-Cremark`
The default diagnostic handler considers all remarks to be disabled by
default unless configured otherwise through LLVM internal flags:
`-pass-remarks`, `-pass-remarks-missed`, and `-pass-remarks-analysis`.
This behaviour makes `-Cremark` ineffective on its own.
Fix this by configuring a custom diagnostic handler that enables
optimization remarks based on the value of the `-Cremark` option;
`-Cremark=all` enables all remarks.
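For illustration (not part of the PR itself), a minimal example of requesting remarks; which remarks LLVM actually prints depends on the optimization level and LLVM version:

```rust
// Compile with optimizations and all remarks enabled, e.g.:
//   rustc -O -Cremark=all example.rs
// LLVM may then report, for instance, why `add` was not inlined.
#[inline(never)]
fn add(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    println!("{}", add(2, 3));
}
```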
Fixes #90924.
r? `@nikic`
LLVM has built-in heuristics for adding stack canaries to functions. These
heuristics can be selected with LLVM function attributes. This patch adds a
rustc option `-Z stack-protector={none,basic,strong,all}` which controls the use
of these attributes. This gives rustc the same stack smash protection support as
clang offers through options `-fno-stack-protector`, `-fstack-protector`,
`-fstack-protector-strong`, and `-fstack-protector-all`. The protection this can
offer is demonstrated in test/ui/abi/stack-protector.rs. This fills a gap in the
current list of rustc exploit
mitigations (https://doc.rust-lang.org/rustc/exploit-mitigations.html),
originally discussed in #15179.
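As a sketch (not from the PR), the kind of function the `strong` heuristic is likely to instrument, assuming a nightly compiler since the option is unstable:

```rust
// Compile with, e.g.:
//   rustc -Z stack-protector=strong -O example.rs
// Whether a canary is actually inserted is up to LLVM's heuristics; a local
// array whose address escapes into a callee is a typical trigger.
fn fill(buf: &mut [u8]) {
    for (i, byte) in buf.iter_mut().enumerate() {
        *byte = i as u8;
    }
}

fn main() {
    let mut buf = [0u8; 64]; // stack-allocated buffer
    fill(&mut buf);
    println!("{}", buf[63]);
}
```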
Stack smash protection adds runtime overhead and is therefore still off by
default, but now users have the option to trade performance for security as they
see fit. An example use case is adding Rust code in an existing C/C++ code base
compiled with stack smash protection. Without the ability to add stack smash
protection to the Rust code, the code base artifacts could be exploitable in
ways not possible if the code base remained pure C/C++.
Stack smash protection support is present in LLVM for almost all the current
tier 1/tier 2 targets: see
test/assembly/stack-protector/stack-protector-target-support.rs. The one
exception is nvptx64-nvidia-cuda. This patch follows clang's example and adds a
warning that is printed if stack smash protection is requested for this target (see
test/ui/stack-protector/warn-stack-protector-unsupported.rs). Support for tier 3
targets has not been checked.
Since the heuristics are applied at the LLVM level, they are expected to add
stack smash protection to a fraction of functions comparable to C/C++. Some
experiments demonstrating how Rust code is affected by the different
heuristics can be found in
test/assembly/stack-protector/stack-protector-heuristics-effect.rs. There is
potential for better heuristics using Rust-specific safety information. For
example, it might be reasonable to skip stack smash protection in functions which
transitively only use safe Rust code, or which use only a subset of functions
the user declares safe (such as anything under `std.*`). Such alternative
heuristics could be added at a later point.
LLVM also offers a "safestack" sanitizer as an alternative way to guard against
stack smashing (see #26612). This could possibly also be included as a
stack-protection heuristic. An alternative is to add it as a sanitizer (#39699).
This is what clang does: safestack is exposed with option
`-fsanitize=safe-stack`.
The options are only supported by the LLVM backend, but as with other codegen
options it is visible in the main codegen option help menu. The heuristic names
"basic", "strong", and "all" are hopefully sufficiently generic to be usable in
other backends as well.
Reviewed-by: Nikita Popov <nikic@php.net>
Extra commits during review:
- [address-review] make the stack-protector option unstable
- [address-review] reduce detail level of stack-protector option help text
- [address-review] correct grammar in comment
- [address-review] use compiler flag to avoid merging functions in test
- [address-review] specify min LLVM version in fortanix stack-protector test
Only for the Fortanix test, since this target specifically requests the
`--x86-experimental-lvi-inline-asm-hardening` flag.
- [address-review] specify required LLVM components in stack-protector tests
- move stack protector option enum closer to other similar option enums
- rustc_interface/tests: sort debug option list in tracking hash test
- add an explicit `none` stack-protector option
Revert "set LLVM requirements for all stack protector support test revisions"
This reverts commit a49b74f92a4e7d701d6f6cf63d207a8aff2e0f68.
std: Get the standard library compiling for wasm64
This commit goes through and updates various `#[cfg]` as appropriate to
get the wasm64-unknown-unknown target behaving similarly to the
wasm32-unknown-unknown target. Most of this is just updating various
conditions for `target_arch = "wasm32"` to also account for `target_arch
= "wasm64"` where appropriate. This commit also lists `wasm64` as an
allow-listed architecture to not have the `restricted_std` feature
enabled, enabling experimentation with `-Z build-std` externally.
The main goal of this commit is to enable playing around with
`wasm64-unknown-unknown` externally via `-Z build-std` in a way that's
similar to the `wasm32-unknown-unknown` target. These targets are
effectively the same and only differ in their pointer size, but wasm64
is much newer and has much less ecosystem/library support, so it will still
take time for wasm64 to become fully fledged.
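To illustrate the pattern (a sketch, not the actual standard library code), conditions previously written only for `wasm32` are widened to also cover `wasm64`, or rewritten in terms of the shared `target_family`:

```rust
// Illustrative only: widen a wasm32-only cfg to also match wasm64.
#[cfg(any(target_arch = "wasm32", target_arch = "wasm64"))]
fn wasm_pointer_width() -> u32 {
    // The two targets differ essentially only in pointer size.
    (core::mem::size_of::<usize>() * 8) as u32
}

#[cfg(not(any(target_arch = "wasm32", target_arch = "wasm64")))]
fn wasm_pointer_width() -> u32 {
    0 // not a wasm target
}

fn main() {
    // Where both architectures are meant, `#[cfg(target_family = "wasm")]`
    // can express the condition more directly.
    println!("wasm pointer width: {}", wasm_pointer_width());
}
```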
* Add wasm64 variants for inline assembly along the same lines as wasm32
* Update a few directives in libtest to check for `target_family`
instead of `target_arch`
* Update some rustc codegen and typechecks specialized for wasm32 to
also work for wasm64.
This commit works around a crash in LLVM when the
`-generate-arange-section` argument is passed to LLVM. An LLVM bug has been
filed for this, and the code in question is set to continue passing
this flag with LLVM 14, assuming that the issue is fixed by the time LLVM 14
comes out. Otherwise, this should work around debuginfo crashes on LLVM
13.
Record more artifact sizes during self-profiling.
This PR adds artifact size recording for
- "linked artifacts" (executables, RLIBs, dylibs, static libs)
- object files
- dwo files
- assembly files
- crate metadata
- LLVM bitcode files
- LLVM IR files
- codegen unit size estimates
Currently the identifiers emitted for these are hard-coded as string literals. Is it worth adding constants to https://github.com/rust-lang/measureme/blob/master/measureme/src/rustc.rs instead? We don't do that for query names and the like -- but artifact kinds might be more stable than query names.
Type inference for inline consts
Fixes #78132. Fixes #78174. Fixes #81857. Fixes #89964.
Perform type checking/inference of inline consts in the same context as the outer def, similar to what is currently done for closures.
Doing so requires `closure_base_def_id` of the inline const to return the outer def, and since `closure_base_def_id` can be called on a non-local crate (which has no HIR available), a new `DefKind` is created for inline consts.
The type of the generated anon const can capture lifetimes of the outer def, so we couldn't just use the typeck result as the type of the inline const's def. Closures have a similar issue and use extra type params `CK, CS, U` to capture the closure kind, input/output signature and upvars. I use a similar approach for inline consts, giving them an extra type param `R`, so that `typeof(InlineConst<[parent generics], R>)` is just `R`. In borrowck, region requirements are propagated to the outer MIR body just like is currently done for closures.
With this PR, inline consts in expression position are quite usable now; however, usage in pattern position is still incomplete -- since those do not remain in the MIR, borrowck couldn't verify the lifetimes there. I have left an ignored test as a FIXME.
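For illustration (not taken from the PR's test suite), a minimal nightly example, under the `inline_const` feature gate, of an expression-position inline const whose type is inferred from its surrounding context:

```rust
#![feature(inline_const)]

fn main() {
    // The inline const's extra type param `R` is inferred as `u64`
    // from the type of the binding.
    let x: u64 = const { 1 << 20 };
    println!("{}", x);
}
```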
Some discussions can be found on [this Zulip thread](https://rust-lang.zulipchat.com/#narrow/stream/260443-project-const-generics/topic/inline.20consts.20typeck).
cc `@spastorino` `@lcnr`
r? `@nikomatsakis`
`@rustbot` label A-inference F-inline_const T-compiler
The only reason to use `abort_if_errors` is when the program is so broken that either:
1. later passes get confused and ICE
2. any diagnostics from later passes would be noise
This is never the case for lints, because the compiler has to be able to deal with `allow`-ed lints.
So it can continue to lint and compile even if there are lint errors.
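A small illustration (not from the PR) of why lints cannot abort compilation eagerly: the very same lint may be `allow`-ed at the use site, so later passes must still run:

```rust
#[allow(unused_variables)]
fn main() {
    // `unused_variables` would fire here, but it is allowed, so the
    // compiler must be able to continue past the lint pass regardless.
    let unused = 42;
}
```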
Initialize LLVM time trace profiler on each code generation thread
In LLVM 11 (https://reviews.llvm.org/D71059), the time trace profiler was
extended to support multiple threads.
`timeTraceProfilerInitialize` creates a thread-local profiler instance.
When a thread finishes, `timeTraceProfilerFinishThread` moves the thread-local
instance into a global collection of instances. Finally, when all
codegen work is complete, `timeTraceProfilerWrite` writes data from the
current thread-local instance and the instances in the global collection.
Previously, the profiler was initialized on a single thread only. Since
this thread performs no code generation on its own, the resulting
profile was empty.
Update LLVM codegen to initialize & finish the time trace profiler on each
code generation thread.
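The per-thread lifecycle can be sketched as follows; the `time_trace_*` functions below are hypothetical stand-ins for the LLVM bindings, not rustc's actual FFI:

```rust
use std::thread;

// Hypothetical stand-ins for the LLVM time trace API described above.
fn time_trace_profiler_initialize() { /* creates a thread-local profiler */ }
fn time_trace_profiler_finish_thread() { /* moves it into the global collection */ }
fn time_trace_profiler_write(_path: &str) { /* writes current thread + collection */ }

fn main() {
    let workers: Vec<_> = (0..4)
        .map(|i| {
            thread::spawn(move || {
                // Each codegen thread sets up its own profiler instance...
                time_trace_profiler_initialize();
                let _codegen_unit = i; // ...does its codegen work...
                // ...and hands its data to the global collection on exit.
                time_trace_profiler_finish_thread();
            })
        })
        .collect();
    for worker in workers {
        worker.join().unwrap();
    }
    // A single write at the end gathers data from all threads.
    time_trace_profiler_write("trace.json");
}
```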
cc `@tmandry`
r? `@wesleywiser`
Add LLVM CFI support to the Rust compiler
This PR adds LLVM Control Flow Integrity (CFI) support to the Rust compiler. It initially provides forward-edge control flow protection for Rust-compiled code only by aggregating function pointers in groups identified by their number of arguments.
Forward-edge control flow protection for "mixed binaries" of C/C++ and Rust-compiled code (i.e., for when C/C++ and Rust-compiled code share the same virtual address space) will be provided in later work as part of this project by defining and using compatible type identifiers (see Type metadata in the design document in the tracking issue #89653).
LLVM CFI can be enabled with `-Zsanitizer=cfi` and requires LTO (i.e., `-Clto`).
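For illustration (not from the PR), an indirect call of the kind these checks guard; under `-Zsanitizer=cfi -Clto` on nightly, the call through `f` is verified against the group of functions taking the same number of arguments:

```rust
fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    // Indirect (forward-edge) call through a function pointer.
    let f: fn(i32) -> i32 = add_one;
    println!("{}", f(41));
}
```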
Thank you, `@eddyb` and `@pcc`, for all the help!
Add -Z no-unique-section-names to reduce ELF header bloat.
This change adds a new compiler flag that can help reduce the size of ELF binaries that contain many functions.
By default, when enabling function sections (which is the default for most targets), the LLVM backend will generate different section names for each function. For example, a function `func` would generate a section called `.text.func`. Normally this is fine because the linker will merge all those sections into a single one in the binary. However, starting with [LLVM 12](https://github.com/llvm/llvm-project/commit/ee5d1a04), the backend will also generate unique section names for exception handling, resulting in thousands of `.gcc_except_table.*` sections ending up in the final binary because some linkers like LLD don't currently merge or strip these EH sections (see discussion [here](https://reviews.llvm.org/D83655)). This can bloat the ELF headers and string table significantly in binaries that contain many functions.
The new option is analogous to Clang's `-fno-unique-section-names`, and instructs LLVM to generate the same `.text` and `.gcc_except_table` section for each function, resulting in a smaller final binary.
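To illustrate (a sketch, not part of the PR), two functions that would each normally get their own `.text.<symbol>` section in the object file when function sections are enabled, but share a single `.text` section name under `-Z no-unique-section-names`:

```rust
// Build with the flag (nightly, since it is a -Z option), e.g.:
//   rustc -Z no-unique-section-names -O example.rs
// Section names can be inspected with `readelf --sections`, as in the
// before/after comparison below.
#[no_mangle]
pub extern "C" fn alpha() -> u32 {
    1
}

#[no_mangle]
pub extern "C" fn beta() -> u32 {
    2
}

fn main() {}
```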
The motivation for adding this new option is that we have a binary that ended up with so many ELF sections (over 65,000) that it broke some existing ELF tools, which couldn't handle that many sections.
Here's our old binary:
```
$ readelf --sections old.elf | head -1
There are 71746 section headers, starting at offset 0x2a246508:
$ readelf --sections old.elf | grep shstrtab
[71742] .shstrtab STRTAB 0000000000000000 2977204c ad44bb 00 0 0 1
```
That's an 11MB+ string table. Here's the new binary using this option:
```
$ readelf --sections new.elf | head -1
There are 43 section headers, starting at offset 0x29143ca8:
$ readelf --sections new.elf | grep shstrtab
[40] .shstrtab STRTAB 0000000000000000 29143acc 0001db 00 0 0 1
```
The whole binary size went down by over 20MB, which is quite significant.
Cleanup LLVM multi-threading checks
Support for runtime multi-threading was removed from LLVM. Calls to
`LLVMStartMultithreaded` became no-ops equivalent to checking whether LLVM
was compiled with support for threads (http://reviews.llvm.org/D4216).
Add support for artifact size profiling
This adds support for profiling artifact file sizes (incremental compilation artifacts and query cache to begin with).
Eventually we want to track this in perf.rlo so we can ensure that file sizes do not change dramatically on each pull request.
This relies on support in measureme: https://github.com/rust-lang/measureme/pull/169. Once that lands we can update this PR to not point to a git dependency.
This was worked on together with `@michaelwoerister`.
r? `@wesleywiser`