Detects redundant imports that can be eliminated.
For #117772: to facilitate review and modification, the checking code and the code that removes redundant imports are split into two PRs.
compile-time evaluation: detect writes through immutable pointers
This has two motivations:
- it unblocks https://github.com/rust-lang/rust/pull/116745 (and therefore takes a big step towards `const_mut_refs` stabilization), because we can now detect if the memory that we find in `const` can be interned as "immutable"
- it would detect the UB that was uncovered in https://github.com/rust-lang/rust/pull/117905, which was caused by accidental stabilization of `copy` functions in `const` that can only be called with UB
When UB is detected, we emit a future-compat warn-by-default lint. This is not a breaking change, so it is completely in line with [the const-UB RFC](https://rust-lang.github.io/rfcs/3016-const-ub.html), meaning we don't need t-lang FCP here. I made the lint immediately show up for dependencies since it is nearly impossible to even trigger this lint without `const_mut_refs` -- the accidentally stabilized `copy` functions are the only way this can happen, so the crates that popped up in #117905 are the only causes of such UB (in the code that crater covers), and the three cases of UB that we know about have all been fixed in their respective crates already.
The way this is implemented is by making use of the fact that our interpreter is already generic over the notion of provenance. For CTFE we now use the new `CtfeProvenance` type which is conceptually an `AllocId` plus a boolean `immutable` flag (but packed for a more efficient representation). This means we can mark a pointer as immutable when it is created as a shared reference. The flag will be propagated to all pointers derived from this one. We can then check the immutable flag on each write to reject writes through immutable pointers.
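As a rough illustration of the packed representation (a minimal sketch with simplified types; the real `CtfeProvenance` wraps an `AllocId`, and the exact bit layout is an implementation detail):
```rust
/// Sketch: an allocation id plus an "immutable" flag packed into one
/// word (low bit = flag). Not the actual rustc definition.
#[derive(Copy, Clone, PartialEq, Eq, Hash)]
struct CtfeProvenance(u64);

impl CtfeProvenance {
    fn new(alloc_id: u64, immutable: bool) -> Self {
        CtfeProvenance((alloc_id << 1) | immutable as u64)
    }
    fn alloc_id(self) -> u64 {
        self.0 >> 1
    }
    fn immutable(self) -> bool {
        self.0 & 1 != 0
    }
    /// Derived pointers copy the provenance verbatim, so marking a
    /// shared reference immutable taints everything derived from it,
    /// and each write just checks this flag.
    fn as_immutable(self) -> Self {
        CtfeProvenance(self.0 | 1)
    }
}
```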
I just hope perf works out.
Currently, `Handler::fatal` returns `FatalError`. But `Session::fatal`
returns `!`, because it calls `Handler::fatal` and then calls `raise` on
the result. This inconsistency is unfortunate.
This commit changes `Handler::fatal` to do the `raise` itself, changing
its return type to `!`. This is safe because there are only two calls to
`Handler::fatal`, one in `rustc_session` and one in
`rustc_codegen_cranelift`, and they both call `raise` on the result.
`HandlerInner::fatal` still returns `FatalError`, so I renamed it
`fatal_no_raise` to emphasise the return type difference.
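A minimal sketch of the resulting shape (types and signatures simplified; not the actual rustc code):
```rust
struct FatalError;

impl FatalError {
    fn raise(self) -> ! {
        // rustc raises via its panic machinery; exiting stands in here
        std::process::exit(101)
    }
}

struct HandlerInner;

impl HandlerInner {
    // Still hands back the error value; the new name emphasises that
    // the caller is responsible for raising it.
    fn fatal_no_raise(&mut self, msg: &str) -> FatalError {
        eprintln!("fatal error: {msg}");
        FatalError
    }
}

struct Handler {
    inner: HandlerInner,
}

impl Handler {
    // Now raises itself, so the return type becomes `!`, matching
    // `Session::fatal`.
    fn fatal(&mut self, msg: &str) -> ! {
        self.inner.fatal_no_raise(msg).raise()
    }
}
```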
share some track_caller logic between interpret and codegen
Also move the code that implements the track_caller intrinsics out of the core interpreter engine -- it's just a helper that creates a const-allocation, so it doesn't need to be part of the interpreter core.
Bump stdarch submodule and remove special handling for LLVM intrinsics that are no longer needed
Bumps stdarch to pull https://github.com/rust-lang/stdarch/pull/1477, which reimplemented some functions with portable SIMD intrinsics instead of arch specific LLVM intrinsics.
Handling of those LLVM intrinsics is removed from cranelift codegen and miri.
cc `@RalfJung` `@bjorn3`
Prototype using const generic for simd_shuffle IDX array
cc https://github.com/rust-lang/rust/issues/85229
r? `@workingjubilee` on the design
TLDR: there is now a `fn simd_shuffle_generic<T, U, const IDX: &'static [u32]>(x: T, y: T) -> U;` intrinsic that allows replacing
```rust
simd_shuffle(a, b, const { stuff })
```
with
```rust
simd_shuffle_generic::<_, _, {&stuff}>(a, b)
```
which makes the compiler implementations much simpler, if we manage to eliminate `simd_shuffle` at some point.
There are some issues with this today though (can't do math without bubbling it up in the generic arguments). With this change, we can start porting the simple cases and get better data on the others.
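For instance, porting a simple case might look like this (hypothetical snippet; `v` and the vector type `Simd4` are made up for illustration):
```rust
// Hypothetical: reverse the four lanes of `v`. The index array has to
// be spelled out as a literal (or a named constant) because of the
// generic-argument limitation mentioned above.
let reversed: Simd4 = simd_shuffle_generic::<_, _, { &[3, 2, 1, 0] }>(v, v);
```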
rename mir::Constant -> mir::ConstOperand, mir::ConstantKind -> mir::Const
Also, be more consistent with the `to/eval_bits` methods... we had some that take a type and some that take a size, and then sometimes the one that takes a type is called `bits_for_ty`.
Turns out that `ty::Const`/`mir::ConstantKind` carry their type with them, so we don't even need to pass the type to those `eval_bits` functions at all.
However this is not properly consistent yet: in `ty` we have most of the methods on `ty::Const`, but in `mir` we have them on `mir::ConstKind`. And indeed those two types are the ones that correspond to each other. So `mir::ConstantKind` should actually be renamed to `mir::Const`. But what to do with `mir::Constant`? It carries around a span, that's really more like a constant operand that appears as a MIR operand... it's more suited for `syntax.rs` than `consts.rs`, but the bigger question is, which name should it get if we want to align the `mir` and `ty` types? `ConstOperand`? `ConstOp`? `Literal`? It's not a literal but it has a field called `literal` so it would at least be consistently wrong-ish...
``@oli-obk`` any ideas?
move required_consts check to general post-mono-check function
This factors some code that is common between the interpreter and the codegen backends into shared helper functions. Also as a side-effect the interpreter now uses the same `eval` functions as everyone else to get the evaluated MIR constants.
Also this is in preparation for another post-mono check that will be needed for (the current hackfix for) https://github.com/rust-lang/rust/issues/115709: ensuring that all locals are dynamically sized.
I didn't expect this to change diagnostics, but it's just cycle errors that change.
r? `@oli-obk`
Remove `verbose_generic_activity_with_arg`
This removes `verbose_generic_activity_with_arg` and changes users to `generic_activity_with_arg`. This keeps the output of `-Z time` readable while these repeated events are still available with the self profiling mechanism.
Update stdarch submodule and remove special handling in cranelift codegen for some AVX and SSE2 LLVM intrinsics
https://github.com/rust-lang/stdarch/pull/1463 reimplemented some x86 intrinsics to avoid using some x86-specific LLVM intrinsics:
* Store unaligned (`_mm*_storeu_*`) use `<*mut _>::write_unaligned` instead of `llvm.x86.*.storeu.*`.
* Shift by immediate (`_mm*_s{ll,rl,ra}i_epi*`) use `if` (srl, sll) or `min` (sra) to simulate the behaviour when the RHS is out of range; the RHS is constant, so the `if`/`min` will be optimized away (see the sketch below).
This PR updates the stdarch submodule to pull these changes and removes special handling for those LLVM intrinsics from cranelift codegen. I left gcc codegen untouched because there are some autogenerated lists.
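The shift-by-immediate simulation mentioned above works roughly like this (a minimal sketch for 16-bit lanes; not the actual stdarch code):
```rust
// Logical shift right: x86 yields 0 once the count reaches the lane
// width, whereas a plain Rust `>>` with such a count would overflow.
fn srl_epi16_lane(lane: u16, count: u32) -> u16 {
    if count < 16 { lane >> count } else { 0 }
}

// Arithmetic shift right: x86 clamps the count to lane_width - 1,
// smearing the sign bit across the lane for out-of-range counts.
fn sra_epi16_lane(lane: i16, count: u32) -> i16 {
    lane >> count.min(15)
}
```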
As experimentation in 115242 has shown, this looks better than `coldcc`.
And *don't* use a different convention for cold on Windows, because that actually ends up making things worse.
cc tracking issue 97544
Rollup of 6 pull requests
Successful merges:
- #110435 (rustdoc-json: Add test for field ordering.)
- #111891 (feat: `riscv-interrupt-{m,s}` calling conventions)
- #114377 (test_get_dbpath_for_term(): handle non-utf8 paths (fix FIXME))
- #114469 (Detect method not found on arbitrary self type with different mutability)
- #114587 (Convert Const to Allocation in smir)
- #114670 (Don't use `type_of` to determine if item has intrinsic shim)
Failed merges:
- #114599 (Add impl trait declarations to SMIR)
r? `@ghost`
`@rustbot` modify labels: rollup
Similar to prior support added for the mips430, avr, and x86 targets
this change implements the rough equivalent of clang's
[`__attribute__((interrupt))`][clang-attr] for riscv targets, enabling
e.g.
```rust
static mut CNT: usize = 0;

pub extern "riscv-interrupt-m" fn isr_m() {
    unsafe {
        CNT += 1;
    }
}
```
to produce highly effective assembly like:
```asm
pub extern "riscv-interrupt-m" fn isr_m() {
420003a0: 1141 addi sp,sp,-16
unsafe {
CNT += 1;
420003a2: c62a sw a0,12(sp)
420003a4: c42e sw a1,8(sp)
420003a6: 3fc80537 lui a0,0x3fc80
420003aa: 63c52583 lw a1,1596(a0) # 3fc8063c <_ZN12esp_riscv_rt3CNT17hcec3e3a214887d53E.0>
420003ae: 0585 addi a1,a1,1
420003b0: 62b52e23 sw a1,1596(a0)
}
}
420003b4: 4532 lw a0,12(sp)
420003b6: 45a2 lw a1,8(sp)
420003b8: 0141 addi sp,sp,16
420003ba: 30200073 mret
```
(disassembly via `riscv64-unknown-elf-objdump -C -S --disassemble ./esp32c3-hal/target/riscv32imc-unknown-none-elf/release/examples/gpio_interrupt`)
This outcome is superior to hand-coded interrupt routines which, lacking
visibility into any non-assembly body of the interrupt handler, have to
be very conservative and save the [entire CPU state to the stack
frame][full-frame-save]. By instead asking LLVM to only save the
registers that it uses, we defer the decision to the tool with the best
context: it can more accurately account for the cost of spills if it
knows that every additional register used is already at the cost of an
implicit spill.
At the LLVM level, this is apparently [implemented by] marking every
register as "[callee-save]," matching the semantics of an interrupt
handler nicely (it has to leave the CPU state just as it found it after
its `{m|s}ret`).
This approach is not suitable for every interrupt handler, as it makes
no attempt to e.g. save the state in a user-accessible stack frame. For
a full discussion of those challenges and tradeoffs, please refer to
[the interrupt calling conventions RFC][rfc].
Inside rustc, this implementation differs from prior art because LLVM
does not expose the "all-saved" function flavor as a calling convention
directly, instead preferring to use an attribute that allows for
differentiating between "machine-mode" and "supervisor-mode" interrupts.
Finally, some effort has been made to guide those who may not yet be
aware of the differences between machine-mode and supervisor-mode
interrupts as to why no `riscv-interrupt` calling convention is exposed
through rustc, and similarly for why `riscv-interrupt-u` makes no
appearance (as it would complicate future LLVM upgrades).
[clang-attr]: https://clang.llvm.org/docs/AttributeReference.html#interrupt-risc-v
[full-frame-save]: 9281af2ecf/src/lib.rs (L440-L469)
[implemented by]: b7fb2a3fec/llvm/lib/Target/RISCV/RISCVRegisterInfo.cpp (L61-L67)
[callee-save]: 973f1fe7a8/llvm/lib/Target/RISCV/RISCVCallingConv.td (L30-L37)
[rfc]: https://github.com/rust-lang/rfcs/pull/3246
Add a new `compare_bytes` intrinsic instead of calling `memcmp` directly
As discussed in #113435, this lets the backends be the place that can have the "don't call the function if n == 0" logic, if it's needed for the target. (I didn't actually *add* those checks, though, since as I understood it we didn't actually need them on known targets?)
Doing this also let me make it `const` (unstable), which I don't think `extern "C" fn memcmp` can be.
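For reference, the new intrinsic's shape is roughly the following (a sketch; the exact declaration in `core::intrinsics` may differ):
```rust
extern "rust-intrinsic" {
    /// Lexicographically compare `bytes` bytes at the two pointers;
    /// the sign of the result matches `memcmp`. A backend lowering
    /// this can insert an `n == 0` guard if its target needs one.
    fn compare_bytes(left: *const u8, right: *const u8, bytes: usize) -> i32;
}
```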
cc `@RalfJung` `@Amanieu`
Resurrect: rustc_target: Add alignment to indirectly-passed by-value types, correcting the alignment of byval on x86 in the process.
Same as #111551, which I [accidentally closed](https://github.com/rust-lang/rust/pull/111551#issuecomment-1571222612) :/
---
This resurrects PR #103830, which has sat idle for a while.
Beyond #103830, this also:
- fixes byval alignment for types containing vectors on Darwin (see `tests/codegen/align-byval-vector.rs`)
- fixes byval alignment for overaligned types on x86 Windows (see `tests/codegen/align-byval.rs`)
- fixes ABI for types with 128bit requested alignment on ARM64 Linux (see `tests/codegen/aarch64-struct-align-128.rs`)
r? `@nikic`
---
`@pcwalton's` original PR description is reproduced below:
Commit 88e4d2c from five years ago removed
support for alignment on indirectly-passed arguments because of problems with
the `i686-pc-windows-msvc` target. Unfortunately, the `memcpy` optimizations I
recently added to LLVM 16 depend on this to forward `memcpy`s. This commit
attempts to fix the problems with `byval` parameters on that target and now
correctly adds the `align` attribute.
The problem is summarized in [this comment] by `@eddyb.` Briefly, 32-bit x86 has
special alignment rules for `byval` parameters: for the most part, their
alignment is forced to 4. This is not well-documented anywhere but in the Clang
source. I looked at the logic in Clang `TargetInfo.cpp` and tried to replicate
it here. The relevant methods in that file are
`X86_32ABIInfo::getIndirectResult()` and
`X86_32ABIInfo::getTypeStackAlignInBytes()`. The `align` parameter attribute
for `byval` parameters in LLVM must match the platform ABI, or miscompilations
will occur. Note that this doesn't use the approach suggested by eddyb, because
I felt it was overkill to store the alignment in `on_stack` when special
handling is really only needed for 32-bit x86.
As a side effect, this should fix #80127, because it will make the `align`
parameter attribute for `byval` parameters match the platform ABI on LLVM
x86-64.
[this comment]: #80822 (comment)
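A grossly simplified sketch of that 32-bit x86 rule (hypothetical helper; the real logic in Clang's `X86_32ABIInfo` handles more cases):
```rust
// Hypothetical simplification: byval alignment on 32-bit x86 is
// forced to 4 unless the type contains a vector demanding more.
fn byval_align_x86_32(natural_align: u64, contains_aligned_vector: bool) -> u64 {
    if contains_aligned_vector && natural_align > 4 {
        natural_align
    } else {
        4
    }
}
```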
It makes it sound like `ExprKind` and `Rvalue` are supposed to represent all pointer-related casts, when in reality they're just used to share some enum variants. Clarify that these are only coercions, to make it clear why only some pointer-related "casts" are in the enum.
`lookup_debug_loc` calls `SourceMap::lookup_line`, which does a binary
search over the files, and then a binary search over the lines within
the found file. It then calls `SourceFile::line_begin_pos`, which redoes
the binary search over the lines within the found file.
This commit removes the second binary search over the lines, instead
getting the line starting pos directly using the result of the first
binary search over the lines.
(And likewise for `get_span_loc`, in the cranelift backend.)
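In sketch form, the change amounts to this (simplified; `lines` stands in for the real line table, and `pos` is assumed to be within the file):
```rust
// `lines` holds the starting position of each line, sorted ascending.
fn lookup_line_and_start(lines: &[u32], pos: u32) -> (usize, u32) {
    // First (and now only) binary search: which line contains `pos`?
    let line = lines.partition_point(|&start| start <= pos) - 1;
    // Before: `line_begin_pos` internally redid this same search.
    // After: reuse `line` to read the line's start directly.
    let line_start = lines[line];
    (line, line_start)
}
```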