//! Set and unset common attributes on LLVM values.

use rustc_codegen_ssa::traits::*;
use rustc_data_structures::small_str::SmallStr;
use rustc_hir::def_id::DefId;
use rustc_middle::middle::codegen_fn_attrs::CodegenFnAttrFlags;
use rustc_middle::ty::{self, TyCtxt};
use rustc_session::config::OptLevel;
use rustc_span::symbol::sym;
use rustc_target::spec::abi::Abi;
use rustc_target::spec::{FramePointer, SanitizerSet, StackProbeType, StackProtector};
use smallvec::SmallVec;

use crate::attributes;
use crate::errors::{MissingFeatures, SanitizerMemtagRequiresMte, TargetFeatureDisableOrEnable};
use crate::llvm::AttributePlace::Function;
use crate::llvm::{self, AllocKindFlags, Attribute, AttributeKind, AttributePlace, MemoryEffects};
use crate::llvm_util;
pub use rustc_attr::{InlineAttr, InstructionSetAttr, OptimizeAttr};

use crate::context::CodegenCx;
use crate::value::Value;

pub fn apply_to_llfn(llfn: &Value, idx: AttributePlace, attrs: &[&Attribute]) {
    if !attrs.is_empty() {
        llvm::AddFunctionAttributes(llfn, idx, attrs);
    }
}

pub fn apply_to_callsite(callsite: &Value, idx: AttributePlace, attrs: &[&Attribute]) {
    if !attrs.is_empty() {
        llvm::AddCallSiteAttributes(callsite, idx, attrs);
    }
}

/// Get LLVM attribute for the provided inline heuristic.
#[inline]
fn inline_attr<'ll>(cx: &CodegenCx<'ll, '_>, inline: InlineAttr) -> Option<&'ll Attribute> {
    if !cx.tcx.sess.opts.unstable_opts.inline_llvm {
        // disable LLVM inlining
        return Some(AttributeKind::NoInline.create_attr(cx.llcx));
    }
    match inline {
        InlineAttr::Hint => Some(AttributeKind::InlineHint.create_attr(cx.llcx)),
        InlineAttr::Always => Some(AttributeKind::AlwaysInline.create_attr(cx.llcx)),
        InlineAttr::Never => {
            if cx.sess().target.arch != "amdgpu" {
                Some(AttributeKind::NoInline.create_attr(cx.llcx))
            } else {
                None
            }
        }
        InlineAttr::None => None,
    }
}

/// Get LLVM sanitize attributes.
#[inline]
pub fn sanitize_attrs<'ll>(
    cx: &CodegenCx<'ll, '_>,
    no_sanitize: SanitizerSet,
) -> SmallVec<[&'ll Attribute; 4]> {
    let mut attrs = SmallVec::new();
    let enabled = cx.tcx.sess.opts.unstable_opts.sanitizer - no_sanitize;
    if enabled.contains(SanitizerSet::ADDRESS) || enabled.contains(SanitizerSet::KERNELADDRESS) {
        attrs.push(llvm::AttributeKind::SanitizeAddress.create_attr(cx.llcx));
    }
    if enabled.contains(SanitizerSet::MEMORY) {
        attrs.push(llvm::AttributeKind::SanitizeMemory.create_attr(cx.llcx));
    }
    if enabled.contains(SanitizerSet::THREAD) {
        attrs.push(llvm::AttributeKind::SanitizeThread.create_attr(cx.llcx));
    }
    if enabled.contains(SanitizerSet::HWADDRESS) {
        attrs.push(llvm::AttributeKind::SanitizeHWAddress.create_attr(cx.llcx));
    }
    if enabled.contains(SanitizerSet::SHADOWCALLSTACK) {
        attrs.push(llvm::AttributeKind::ShadowCallStack.create_attr(cx.llcx));
    }
    if enabled.contains(SanitizerSet::MEMTAG) {
        // Check to make sure the mte target feature is actually enabled.
        let features = cx.tcx.global_backend_features(());
        let mte_feature =
            features.iter().map(|s| &s[..]).rfind(|n| ["+mte", "-mte"].contains(&&n[..]));
        if let None | Some("-mte") = mte_feature {
            cx.tcx.sess.emit_err(SanitizerMemtagRequiresMte);
        }

        attrs.push(llvm::AttributeKind::SanitizeMemTag.create_attr(cx.llcx));
    }
    attrs
}
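The `let enabled = ... sanitizer - no_sanitize;` line above relies on `SanitizerSet` being a bit set whose subtraction clears the opted-out bits. A minimal standalone sketch of that set-difference semantics, using a hypothetical `Flags` type as a stand-in for the real `SanitizerSet` (not rustc's actual implementation):

```rust
// Minimal sketch of bit-set difference as used for sanitizer flags.
// `Flags` is a hypothetical stand-in for rustc's `SanitizerSet`.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Flags(u32);

const ADDRESS: Flags = Flags(1 << 0);
const MEMORY: Flags = Flags(1 << 1);
const THREAD: Flags = Flags(1 << 2);

impl Flags {
    fn union(self, other: Flags) -> Flags {
        Flags(self.0 | other.0)
    }
    // Set difference: keep the bits of `self` that are not in `other`.
    fn minus(self, other: Flags) -> Flags {
        Flags(self.0 & !other.0)
    }
    fn contains(self, other: Flags) -> bool {
        self.0 & other.0 == other.0
    }
}

fn main() {
    // Globally enabled sanitizers, minus those the function opts out of.
    let enabled = ADDRESS.union(THREAD).minus(THREAD);
    assert!(enabled.contains(ADDRESS));
    assert!(!enabled.contains(THREAD));
    assert!(!enabled.contains(MEMORY));
    println!("ok");
}
```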

/// Tell LLVM to emit or not emit the information necessary to unwind the stack for the function.
#[inline]
pub fn uwtable_attr(llcx: &llvm::Context) -> &Attribute {
    // NOTE: We should determine if we even need async unwind tables, as they
    // have more overhead, and if we can use sync unwind tables we probably
    // should.
    llvm::CreateUWTableAttr(llcx, true)
}

pub fn frame_pointer_type_attr<'ll>(cx: &CodegenCx<'ll, '_>) -> Option<&'ll Attribute> {
    let mut fp = cx.sess().target.frame_pointer;
    let opts = &cx.sess().opts;
    // "mcount" function relies on stack pointer.
    // See <https://sourceware.org/binutils/docs/gprof/Implementation.html>.
    if opts.unstable_opts.instrument_mcount || matches!(opts.cg.force_frame_pointers, Some(true)) {
        fp = FramePointer::Always;
    }
    let attr_value = match fp {
        FramePointer::Always => "all",
        FramePointer::NonLeaf => "non-leaf",
        FramePointer::MayOmit => return None,
    };
    Some(llvm::CreateAttrStringValue(cx.llcx, "frame-pointer", attr_value))
}

/// Tell LLVM what instrument function to insert.
#[inline]
fn instrument_function_attr<'ll>(cx: &CodegenCx<'ll, '_>) -> SmallVec<[&'ll Attribute; 4]> {
    let mut attrs = SmallVec::new();
    if cx.sess().opts.unstable_opts.instrument_mcount {
        // Similar to `clang -pg` behavior. Handled by the
        // `post-inline-ee-instrument` LLVM pass.

        // The function name varies on platforms.
        // See test/CodeGen/mcount.c in clang.
        let mcount_name = cx.sess().target.mcount.as_ref();

        attrs.push(llvm::CreateAttrStringValue(
            cx.llcx,
            "instrument-function-entry-inlined",
            &mcount_name,
        ));
    }
    if let Some(options) = &cx.sess().opts.unstable_opts.instrument_xray {
        // XRay instrumentation is similar to __cyg_profile_func_{enter,exit}.
        // Function prologue and epilogue are instrumented with NOP sleds;
        // a runtime library later replaces them with detours into tracing code.
        if options.always {
            attrs.push(llvm::CreateAttrStringValue(cx.llcx, "function-instrument", "xray-always"));
        }
        if options.never {
            attrs.push(llvm::CreateAttrStringValue(cx.llcx, "function-instrument", "xray-never"));
        }
        if options.ignore_loops {
            attrs.push(llvm::CreateAttrString(cx.llcx, "xray-ignore-loops"));
        }
        // LLVM will not choose the default for us, but rather requires a specific
        // threshold in the absence of "xray-always". Use the same default as Clang.
        let threshold = options.instruction_threshold.unwrap_or(200);
        attrs.push(llvm::CreateAttrStringValue(
            cx.llcx,
            "xray-instruction-threshold",
            &threshold.to_string(),
        ));
        if options.skip_entry {
            attrs.push(llvm::CreateAttrString(cx.llcx, "xray-skip-entry"));
        }
        if options.skip_exit {
            attrs.push(llvm::CreateAttrString(cx.llcx, "xray-skip-exit"));
        }
    }
    attrs
}

fn nojumptables_attr<'ll>(cx: &CodegenCx<'ll, '_>) -> Option<&'ll Attribute> {
    if !cx.sess().opts.unstable_opts.no_jump_tables {
        return None;
    }

    Some(llvm::CreateAttrStringValue(cx.llcx, "no-jump-tables", "true"))
}

fn probestack_attr<'ll>(cx: &CodegenCx<'ll, '_>) -> Option<&'ll Attribute> {
    // Currently stack probes seem somewhat incompatible with the address
    // sanitizer and thread sanitizer. With asan we're already protected from
    // stack overflow anyway so we don't really need stack probes regardless.
    if cx
        .sess()
        .opts
        .unstable_opts
        .sanitizer
        .intersects(SanitizerSet::ADDRESS | SanitizerSet::THREAD)
    {
        return None;
    }

    // probestack doesn't play nice either with `-C profile-generate`.
    if cx.sess().opts.cg.profile_generate.enabled() {
        return None;
    }

    // probestack doesn't play nice either with gcov profiling.
    if cx.sess().opts.unstable_opts.profile {
        return None;
    }

    let attr_value = match cx.sess().target.stack_probes {
        StackProbeType::None => return None,
        // Request LLVM to generate the probes inline. If the given LLVM version does not support
        // this, no probe is generated at all (even if the attribute is specified).
        StackProbeType::Inline => "inline-asm",
        // Flag our internal `__rust_probestack` function as the stack probe symbol.
        // This is defined in the `compiler-builtins` crate for each architecture.
        StackProbeType::Call => "__rust_probestack",
        // Pick from the two above based on the LLVM version.
        StackProbeType::InlineOrCall { min_llvm_version_for_inline } => {
            if llvm_util::get_version() < min_llvm_version_for_inline {
                "__rust_probestack"
            } else {
                "inline-asm"
            }
        }
    };
    Some(llvm::CreateAttrStringValue(cx.llcx, "probe-stack", attr_value))
}
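The `InlineOrCall` arm above picks between the two probe strategies by comparing the running LLVM version against a per-target minimum. A small standalone sketch of that selection, where the `(major, minor, patch)` tuples and the threshold are illustrative stand-ins for `llvm_util::get_version()` and `min_llvm_version_for_inline`, not rustc's real values:

```rust
// Sketch of the probe-stack attribute-value selection. The version tuples
// here are hypothetical; Rust tuples compare lexicographically, which is
// exactly what a (major, minor, patch) comparison needs.
fn probe_stack_value(
    llvm_version: (u32, u32, u32),
    min_for_inline: (u32, u32, u32),
) -> &'static str {
    if llvm_version < min_for_inline {
        // Older LLVM: call the `__rust_probestack` helper from compiler-builtins.
        "__rust_probestack"
    } else {
        // Newer LLVM: ask LLVM to emit the probes inline.
        "inline-asm"
    }
}

fn main() {
    assert_eq!(probe_stack_value((11, 0, 0), (14, 0, 0)), "__rust_probestack");
    assert_eq!(probe_stack_value((16, 0, 0), (14, 0, 0)), "inline-asm");
    println!("ok");
}
```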

fn stackprotector_attr<'ll>(cx: &CodegenCx<'ll, '_>) -> Option<&'ll Attribute> {
    let sspattr = match cx.sess().stack_protector() {
        StackProtector::None => return None,
        StackProtector::All => AttributeKind::StackProtectReq,
        StackProtector::Strong => AttributeKind::StackProtectStrong,
        StackProtector::Basic => AttributeKind::StackProtect,
    };

    Some(sspattr.create_attr(cx.llcx))
}
pub fn target_cpu_attr<'ll>(cx: &CodegenCx<'ll, '_>) -> &'ll Attribute {
    let target_cpu = llvm_util::target_cpu(cx.tcx.sess);
    llvm::CreateAttrStringValue(cx.llcx, "target-cpu", target_cpu)
}
pub fn tune_cpu_attr<'ll>(cx: &CodegenCx<'ll, '_>) -> Option<&'ll Attribute> {
    llvm_util::tune_cpu(cx.tcx.sess)
        .map(|tune_cpu| llvm::CreateAttrStringValue(cx.llcx, "tune-cpu", tune_cpu))
}
/// Get the `NonLazyBind` LLVM attribute,
/// if the codegen options allow skipping the PLT.
pub fn non_lazy_bind_attr<'ll>(cx: &CodegenCx<'ll, '_>) -> Option<&'ll Attribute> {
    // Don't generate calls through the PLT if it's not necessary.
    if !cx.sess().needs_plt() {
        Some(AttributeKind::NonLazyBind.create_attr(cx.llcx))
    } else {
        None
    }
}
/// Get the default optimization attributes for a function.
#[inline]
pub(crate) fn default_optimisation_attrs<'ll>(
    cx: &CodegenCx<'ll, '_>,
) -> SmallVec<[&'ll Attribute; 2]> {
    let mut attrs = SmallVec::new();
    match cx.sess().opts.optimize {
        OptLevel::Size => {
            attrs.push(llvm::AttributeKind::OptimizeForSize.create_attr(cx.llcx));
        }
        OptLevel::SizeMin => {
            attrs.push(llvm::AttributeKind::MinSize.create_attr(cx.llcx));
            attrs.push(llvm::AttributeKind::OptimizeForSize.create_attr(cx.llcx));
        }
        _ => {}
    }
    attrs
}
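As a sketch of the selection logic above, outside rustc: the enum below is a simplified stand-in for rustc's `OptLevel`, and plain strings stand in for the LLVM attribute objects (the names `optsize`/`minsize` mirror LLVM's attribute spelling; everything else here is illustrative).

```rust
// Minimal sketch of the size-attribute selection in
// `default_optimisation_attrs`. `OptLevel` is a simplified stand-in.
#[derive(Debug, PartialEq)]
pub enum OptLevel {
    Default,
    Size,    // -C opt-level=s
    SizeMin, // -C opt-level=z
}

pub fn size_attrs(opt: &OptLevel) -> Vec<&'static str> {
    match opt {
        // Prefer smaller code over faster code.
        OptLevel::Size => vec!["optsize"],
        // Minimize size aggressively; `minsize` is pushed alongside
        // `optsize`, matching the code above.
        OptLevel::SizeMin => vec!["minsize", "optsize"],
        // Speed-oriented levels add no size attributes.
        _ => vec![],
    }
}
```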
fn create_alloc_family_attr(llcx: &llvm::Context) -> &llvm::Attribute {
    llvm::CreateAttrStringValue(llcx, "alloc-family", "__rust_alloc")
}
/// Composite function which sets LLVM attributes for a function depending on its AST
/// (`#[attribute]`) attributes.
pub fn from_fn_attrs<'ll, 'tcx>(
    cx: &CodegenCx<'ll, 'tcx>,
    llfn: &'ll Value,
    instance: ty::Instance<'tcx>,
) {
    let codegen_fn_attrs = cx.tcx.codegen_fn_attrs(instance.def_id());

    let mut to_add = SmallVec::<[_; 16]>::new();
    match codegen_fn_attrs.optimize {
        OptimizeAttr::None => {
            to_add.extend(default_optimisation_attrs(cx));
        }
        OptimizeAttr::Size => {
            to_add.push(llvm::AttributeKind::MinSize.create_attr(cx.llcx));
            to_add.push(llvm::AttributeKind::OptimizeForSize.create_attr(cx.llcx));
        }
        OptimizeAttr::Speed => {}
    }

    let inline =
        if codegen_fn_attrs.inline == InlineAttr::None && instance.def.requires_inline(cx.tcx) {
            InlineAttr::Hint
        } else {
            codegen_fn_attrs.inline
        };
    to_add.extend(inline_attr(cx, inline));

    // The `uwtable` attribute according to LLVM is:
    //
    // This attribute indicates that the ABI being targeted requires that an
    // unwind table entry be produced for this function even if we can show
    // that no exceptions pass by it. This is normally the case for the
    // ELF x86-64 ABI, but it can be disabled for some compilation units.
    //
    // Typically when we're compiling with `-C panic=abort` (which implies this
    // `no_landing_pads` check) we don't need `uwtable` because we can't
    // generate any exceptions! On Windows, however, exceptions include other
    // events such as illegal instructions, segfaults, etc. This means that on
    // Windows we end up still needing the `uwtable` attribute even if the `-C
    // panic=abort` flag is passed.
    //
    // You can also find more info on why Windows always requires uwtables here:
    // https://bugzilla.mozilla.org/show_bug.cgi?id=1302078
    if cx.sess().must_emit_unwind_tables() {
        to_add.push(uwtable_attr(cx.llcx));
    }

    if cx.sess().opts.unstable_opts.profile_sample_use.is_some() {
        to_add.push(llvm::CreateAttrString(cx.llcx, "use-sample-profile"));
    }

    // FIXME: none of these functions interact with source level attributes.
    to_add.extend(frame_pointer_type_attr(cx));
    to_add.extend(instrument_function_attr(cx));
    to_add.extend(nojumptables_attr(cx));
    to_add.extend(probestack_attr(cx));
    to_add.extend(stackprotector_attr(cx));

    if codegen_fn_attrs.flags.contains(CodegenFnAttrFlags::COLD) {
        to_add.push(AttributeKind::Cold.create_attr(cx.llcx));
    }
    if codegen_fn_attrs.flags.contains(CodegenFnAttrFlags::FFI_RETURNS_TWICE) {
        to_add.push(AttributeKind::ReturnsTwice.create_attr(cx.llcx));
    }
    if codegen_fn_attrs.flags.contains(CodegenFnAttrFlags::FFI_PURE) {
        to_add.push(MemoryEffects::ReadOnly.create_attr(cx.llcx));
    }
    if codegen_fn_attrs.flags.contains(CodegenFnAttrFlags::FFI_CONST) {
        to_add.push(MemoryEffects::None.create_attr(cx.llcx));
    }
    if codegen_fn_attrs.flags.contains(CodegenFnAttrFlags::NAKED) {
        to_add.push(AttributeKind::Naked.create_attr(cx.llcx));
        // HACK(jubilee): "indirect branch tracking" works by attaching prologues to functions.
        // And it is a module-level attribute, so the alternative is pulling naked functions
        // into new LLVM modules. Otherwise LLVM's "naked" functions come with endbr prefixes
        // per https://github.com/rust-lang/rust/issues/98768
        to_add.push(AttributeKind::NoCfCheck.create_attr(cx.llcx));
        // Need this for AArch64.
        to_add.push(llvm::CreateAttrStringValue(cx.llcx, "branch-target-enforcement", "false"));
    }
    if codegen_fn_attrs.flags.contains(CodegenFnAttrFlags::ALLOCATOR)
        || codegen_fn_attrs.flags.contains(CodegenFnAttrFlags::ALLOCATOR_ZEROED)
    {
        if llvm_util::get_version() >= (15, 0, 0) {
            to_add.push(create_alloc_family_attr(cx.llcx));
            // apply to argument place instead of function
            let alloc_align = AttributeKind::AllocAlign.create_attr(cx.llcx);
            attributes::apply_to_llfn(llfn, AttributePlace::Argument(1), &[alloc_align]);
            to_add.push(llvm::CreateAllocSizeAttr(cx.llcx, 0));
            let mut flags = AllocKindFlags::Alloc | AllocKindFlags::Aligned;
            if codegen_fn_attrs.flags.contains(CodegenFnAttrFlags::ALLOCATOR) {
                flags |= AllocKindFlags::Uninitialized;
            } else {
                flags |= AllocKindFlags::Zeroed;
            }
            to_add.push(llvm::CreateAllocKindAttr(cx.llcx, flags));
        }
        // apply to return place instead of function (unlike all other attributes applied in this
        // function)
        let no_alias = AttributeKind::NoAlias.create_attr(cx.llcx);
        attributes::apply_to_llfn(llfn, AttributePlace::ReturnValue, &[no_alias]);
    }
    if codegen_fn_attrs.flags.contains(CodegenFnAttrFlags::REALLOCATOR) {
        if llvm_util::get_version() >= (15, 0, 0) {
            to_add.push(create_alloc_family_attr(cx.llcx));
            to_add.push(llvm::CreateAllocKindAttr(
                cx.llcx,
                AllocKindFlags::Realloc | AllocKindFlags::Aligned,
            ));
            // applies to argument place instead of function place
            let allocated_pointer = AttributeKind::AllocatedPointer.create_attr(cx.llcx);
            attributes::apply_to_llfn(llfn, AttributePlace::Argument(0), &[allocated_pointer]);
            // apply to argument place instead of function
            let alloc_align = AttributeKind::AllocAlign.create_attr(cx.llcx);
            attributes::apply_to_llfn(llfn, AttributePlace::Argument(2), &[alloc_align]);
            to_add.push(llvm::CreateAllocSizeAttr(cx.llcx, 3));
        }
        let no_alias = AttributeKind::NoAlias.create_attr(cx.llcx);
        attributes::apply_to_llfn(llfn, AttributePlace::ReturnValue, &[no_alias]);
    }
    if codegen_fn_attrs.flags.contains(CodegenFnAttrFlags::DEALLOCATOR) {
        if llvm_util::get_version() >= (15, 0, 0) {
            to_add.push(create_alloc_family_attr(cx.llcx));
            to_add.push(llvm::CreateAllocKindAttr(cx.llcx, AllocKindFlags::Free));
            // applies to argument place instead of function place
            let allocated_pointer = AttributeKind::AllocatedPointer.create_attr(cx.llcx);
            attributes::apply_to_llfn(llfn, AttributePlace::Argument(0), &[allocated_pointer]);
        }
    }
    if codegen_fn_attrs.flags.contains(CodegenFnAttrFlags::CMSE_NONSECURE_ENTRY) {
        to_add.push(llvm::CreateAttrString(cx.llcx, "cmse_nonsecure_entry"));
    }
    if let Some(align) = codegen_fn_attrs.alignment {
        llvm::set_alignment(llfn, align as usize);
    }
    to_add.extend(sanitize_attrs(cx, codegen_fn_attrs.no_sanitize));

    // Always annotate functions with the target-cpu they are compiled for.
    // Without this, ThinLTO won't inline Rust functions into Clang generated
    // functions (because Clang annotates functions this way too).
    to_add.push(target_cpu_attr(cx));
    // tune-cpu is only conveyed through the attribute for our purpose.
    // The target doesn't care; the subtarget reads our attribute.
    to_add.extend(tune_cpu_attr(cx));

    let function_features =
        codegen_fn_attrs.target_features.iter().map(|f| f.as_str()).collect::<Vec<&str>>();

    if let Some(f) = llvm_util::check_tied_features(
        cx.tcx.sess,
        &function_features.iter().map(|f| (*f, true)).collect(),
    ) {
        let span = cx
            .tcx
            .get_attrs(instance.def_id(), sym::target_feature)
            .next()
            .map_or_else(|| cx.tcx.def_span(instance.def_id()), |a| a.span);
        cx.tcx
            .sess
            .create_err(TargetFeatureDisableOrEnable {
                features: f,
                span: Some(span),
                missing_features: Some(MissingFeatures),
            })
            .emit();
        return;
    }

    let mut function_features = function_features
        .iter()
        .flat_map(|feat| {
            llvm_util::to_llvm_features(cx.tcx.sess, feat).into_iter().map(|f| format!("+{}", f))
        })
        .chain(codegen_fn_attrs.instruction_set.iter().map(|x| match x {
            InstructionSetAttr::ArmA32 => "-thumb-mode".to_string(),
            InstructionSetAttr::ArmT32 => "+thumb-mode".to_string(),
        }))
        .collect::<Vec<String>>();

    if cx.tcx.sess.target.is_like_wasm {
        // If this function is an import from the environment but the wasm
        // import has a specific module/name, apply them here.
        if let Some(module) = wasm_import_module(cx.tcx, instance.def_id()) {
            to_add.push(llvm::CreateAttrStringValue(cx.llcx, "wasm-import-module", &module));

            let name =
                codegen_fn_attrs.link_name.unwrap_or_else(|| cx.tcx.item_name(instance.def_id()));
            let name = name.as_str();
            to_add.push(llvm::CreateAttrStringValue(cx.llcx, "wasm-import-name", name));
        }

        // The `"wasm"` ABI on wasm targets automatically enables the
        // `+multivalue` feature because the purpose of the wasm ABI is to match
        // the WebAssembly specification, which has this feature. This won't be
        // needed when LLVM enables this `multivalue` feature by default.
        if !cx.tcx.is_closure(instance.def_id()) {
            let abi = cx.tcx.fn_sig(instance.def_id()).skip_binder().abi();
            if abi == Abi::Wasm {
                function_features.push("+multivalue".to_string());
            }
        }
    }

    let global_features = cx.tcx.global_backend_features(()).iter().map(|s| s.as_str());
    let function_features = function_features.iter().map(|s| s.as_str());
    let target_features =
        global_features.chain(function_features).intersperse(",").collect::<SmallStr<1024>>();
    if !target_features.is_empty() {
        to_add.push(llvm::CreateAttrStringValue(cx.llcx, "target-features", &target_features));
    }

    attributes::apply_to_llfn(llfn, Function, &to_add);
}
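The `target-features` attribute value is just a comma-joined list: global backend features first, then per-function features. The compiler uses the nightly `Iterator::intersperse` adapter into a `SmallStr`; a stable-Rust sketch of the same joining:

```rust
// Stable-Rust sketch of how the "target-features" attribute value is
// assembled: global backend features, then per-function features,
// comma-separated.
pub fn join_features(global: &[&str], function: &[&str]) -> String {
    global.iter().chain(function.iter()).copied().collect::<Vec<_>>().join(",")
}
```

For example, `join_features(&["+crt-static"], &["+thumb-mode"])` yields `"+crt-static,+thumb-mode"`, and an empty input produces an empty string, which is why the code above skips pushing the attribute when the joined string is empty.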

fn wasm_import_module(tcx: TyCtxt<'_>, id: DefId) -> Option<&String> {
    tcx.wasm_import_module_map(id.krate).get(&id)
}
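The `allockind` composition in `from_fn_attrs` ORs flag bits together depending on which allocator shim is being annotated. A sketch with the flags modeled as plain `u64` bits (the bit values here are made up for illustration, not LLVM's actual encoding):

```rust
// Illustrative model of the AllocKindFlags composition for `__rust_alloc`
// vs `__rust_alloc_zeroed`. Bit values are invented for this sketch.
pub const ALLOC: u64 = 1 << 0;
pub const ALIGNED: u64 = 1 << 1;
pub const UNINITIALIZED: u64 = 1 << 2;
pub const ZEROED: u64 = 1 << 3;

pub fn alloc_kind(zeroed: bool) -> u64 {
    // Every allocator entry point allocates and honors alignment...
    let mut flags = ALLOC | ALIGNED;
    // ...and returns either zeroed or uninitialized memory, mirroring the
    // ALLOCATOR vs ALLOCATOR_ZEROED branch above.
    if zeroed {
        flags |= ZEROED;
    } else {
        flags |= UNINITIALIZED;
    }
    flags
}
```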