Implements the following SIMD reduction intrinsics:
- simd_reduce_add_ordered
- simd_reduce_mul_ordered
- simd_reduce_min_nanless
- simd_reduce_max_nanless
- simd_reduce_xor
- simd_reduce_any
- simd_reduce_all
Also fixes the ordering of simd_reduce_min and simd_reduce_max,
which testing showed to be swapped.
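For reference, the ordered variants fold the lanes strictly left-to-right from an explicit accumulator, which fixes the floating-point rounding order. A minimal scalar model (illustrative only, not the intrinsic's actual signature):

```rust
// Illustrative scalar model of simd_reduce_add_ordered: fold the lanes
// left-to-right starting from an accumulator, so the operation order
// (and thus float rounding) is deterministic.
fn reduce_add_ordered(lanes: [f32; 4], acc: f32) -> f32 {
    lanes.iter().fold(acc, |a, &x| a + x)
}

fn main() {
    assert_eq!(reduce_add_ordered([1.0, 2.0, 3.0, 4.0], 0.0), 10.0);
}
```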
Both simd_reduce_min_nanless and simd_reduce_max_nanless are identical
to their non-nanless variants for the time being. An attempt was made
at more optimal codegen based on vector_reduce_op, but this ran into
masking issues with floating-point vector types; masking there appears
to be broken for the same reason that comparison operations such as
simd_lt are broken for those types. More investigation is required to
determine the root cause and an appropriate fix.
This should be enough to pass the generic-reduction-pass.rs ui tests
with the 'master' feature enabled.
Signed-off-by: Andy Sadler <andrewsadler122@gmail.com>
Currently they try to be very precise, but they are wrong: they don't
match what's actually happening in the loop below. This code isn't hot
enough for the imprecision to matter much.
This is because `PassMode::Cast` is by far the largest variant, yet it
is relatively rare.
This requires making `PassMode` not impl `Copy`, and `Clone` is no
longer necessary. This causes lots of sigil adjusting, but nothing very
notable.
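A minimal illustration of the motivation, with stand-in types rather than the compiler's own: boxing a rare, large variant shrinks every value of the enum.

```rust
use std::mem::size_of;

// Stand-in types: an enum is as big as its largest variant, so a large
// payload inflates every value, even the common small ones.
enum Unboxed { Common(u8), Rare([u64; 8]) }
// Boxing the rare payload keeps the whole enum small.
enum Boxed { Common(u8), Rare(Box<[u64; 8]>) }

fn main() {
    println!("unboxed: {} bytes", size_of::<Unboxed>()); // 72 on x86-64
    println!("boxed:   {} bytes", size_of::<Boxed>());   // 16 on x86-64
}
```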
Now that we require at least LLVM 13, the LLVM codegen backend always
uses the `fptosi.sat` and `fptoui.sat` intrinsics for these
conversions, so it no longer needs the manual implementation. The GCC
backend still needs it, however, so we can move all of that code down
there.
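For reference, these conversions implement the saturating semantics that Rust's float-to-int `as` casts have guaranteed since 1.45:

```rust
fn main() {
    // Out-of-range and NaN inputs saturate or go to zero instead of
    // being undefined behavior:
    assert_eq!(f32::INFINITY as i32, i32::MAX);
    assert_eq!(f32::NEG_INFINITY as i32, i32::MIN);
    assert_eq!(-1.0f32 as u8, 0);
    assert_eq!(f32::NAN as i32, 0);
}
```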
This avoids monomorphizing all linker code for each codegen backend and
will allow passing in extra information to the archive builder from the
codegen backend.
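The shape of the change, sketched with illustrative names (not the compiler's actual API): dispatching through a trait object means the linker code is compiled once instead of once per backend.

```rust
use std::path::{Path, PathBuf};

// Illustrative trait; the real compiler interface differs in detail.
trait ArchiveBuilder {
    fn add_file(&mut self, path: &Path);
    fn build(self: Box<Self>, output: &Path) -> bool;
}

// One non-generic instantiation serves every codegen backend, instead
// of monomorphizing this code over each backend's concrete builder.
fn build_archive(
    mut builder: Box<dyn ArchiveBuilder>,
    objs: &[PathBuf],
    out: &Path,
) -> bool {
    for obj in objs {
        builder.add_file(obj);
    }
    builder.build(out)
}
```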
Enable raw-dylib for bin crates
Fixes #93842
When `raw-dylib` is used in a `bin` crate, we need to collect all of the `raw-dylib` functions, generate the import library and add that to the linker command line.
I also changed the tests so that 1) the C++ DLLs are created after the Rust DLLs, so there is no chance of accidentally using them in the Rust linking process, and 2) import libraries are no longer generated when building with MSVC.
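For illustration (using a well-known kernel32 export), a `bin` crate can now link like this with no import library on disk, which is exactly why the compiler has to generate one itself at link time:

```rust
// Links against kernel32.dll directly from the DLL's export table; the
// compiler synthesizes the import library that the linker would
// otherwise require.
#[link(name = "kernel32", kind = "raw-dylib")]
extern "system" {
    fn GetCurrentProcessId() -> u32;
}

fn main() {
    println!("pid: {}", unsafe { GetCurrentProcessId() });
}
```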
Add fine-grained LLVM CFI support to the Rust compiler
This PR improves the LLVM Control Flow Integrity (CFI) support in the Rust compiler by providing forward-edge control flow protection for Rust-compiled code only, aggregating function pointers into groups identified by their return and parameter types.
Forward-edge control flow protection for "mixed binaries" (i.e., when C/C++-compiled and Rust-compiled code share the same virtual address space) will be provided in later work as part of this project, by identifying uses of the C char and integer types at the time types are encoded (see Type metadata in the design document in the tracking issue https://github.com/rust-lang/rust/issues/89653).
LLVM CFI can be enabled with `-Zsanitizer=cfi` and requires LTO (i.e., `-Clto`).
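A minimal sketch of the class of bug this catches: with `-Zsanitizer=cfi` and `-Clto`, an indirect call through a function pointer whose type does not match the callee's actual signature aborts at runtime instead of proceeding.

```rust
use std::mem;

fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    // Reinterpret fn(i32) -> i32 as fn(i64) -> i64. The call below is
    // UB; under fine-grained CFI the signature mismatch is detected at
    // the call site and the program aborts.
    let f: fn(i64) -> i64 =
        unsafe { mem::transmute(add_one as fn(i32) -> i32) };
    println!("{}", f(1));
}
```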
Thank you again, `@eddyb`, `@nagisa`, `@pcc`, and `@tmiasko` for all the help!
make vtable pointers entirely opaque
This implements the scheme discussed in https://github.com/rust-lang/unsafe-code-guidelines/issues/338: vtable pointers should be considered entirely opaque and not even readable by Rust code, similar to function pointers.
- We have a new kind of `GlobalAlloc` that symbolically refers to a vtable.
- Miri uses that kind of allocation when generating a vtable.
- The codegen backends, upon encountering such an allocation, call `vtable_allocation` to obtain an actually dataful allocation for this vtable.
- We need new intrinsics to obtain the size and align from a vtable (for some `ptr::metadata` APIs), since direct accesses are UB now.
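A nightly-only sketch of how this surfaces through the `ptr::metadata` APIs (the `DynMetadata` accessors lower to the new intrinsics rather than reading through the vtable pointer, which would now be UB):

```rust
#![feature(ptr_metadata)]

use std::fmt::Display;
use std::ptr;

fn main() {
    let x: &dyn Display = &42u32;
    let meta = ptr::metadata(x as *const dyn Display);
    // size_of/align_of go through the dedicated intrinsics; directly
    // loading from the vtable pointer is no longer allowed.
    println!("size = {}, align = {}", meta.size_of(), meta.align_of());
}
```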
I had to touch quite a bit of code that I am not very familiar with, so some of this might not make much sense...
r? `@oli-obk`
Keep unstable target features for asm feature checking
Inline assembly uses the target features to determine which registers
are available on the current target. However, it needs to be able to
access unstable target features for this.
Fixes #99071
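A minimal sketch of the dependency, assuming x86-64: the `ymm_reg` register class in `asm!` is only legal when the `avx` target feature is enabled, so validating the operand requires consulting the full target feature set.

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx")]
unsafe fn zero_a_ymm_register() {
    // `ymm_reg` is rejected unless the `avx` feature is known to be
    // enabled; that availability check is what needs the complete
    // (including unstable) target-feature list.
    core::arch::asm!("vxorps {0}, {0}, {0}", out(ymm_reg) _);
}
```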
Use less string interning
This removes string interning in a couple of places where interning doesn't buy any performance. I also switched one place to use pre-interned symbols.
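An illustrative sketch with a toy interner (not rustc's actual API): interning buys cheap equality checks, but only pays off when the same string is compared or stored repeatedly; pre-interned symbols move that cost to startup.

```rust
use std::collections::HashMap;

// Toy interner: maps each distinct string to a small integer Symbol,
// so later comparisons are a u32 compare instead of a string compare.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Symbol(u32);

#[derive(Default)]
struct Interner {
    map: HashMap<String, Symbol>,
}

impl Interner {
    fn intern(&mut self, s: &str) -> Symbol {
        let next = Symbol(self.map.len() as u32);
        *self.map.entry(s.to_owned()).or_insert(next)
    }
}

fn main() {
    let mut i = Interner::default();
    let pre_interned = i.intern("inline"); // done once, up front
    // Hot path compares two u32s instead of re-hashing strings:
    assert_eq!(i.intern("inline"), pre_interned);
}
```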