Don't report any errors in `lower_intrinsics`.
Intrinsics should have been type checked earlier.
This is part of moving all mir-opt diagnostics early enough so that they are reliably emitted even in check builds: https://github.com/rust-lang/rust/issues/49292#issuecomment-1692212095
Implement refinement lint for RPITIT
Implements a lint that warns against accidentally refining an RPITIT in an implementation. This is not a hard error, and can be suppressed with `#[allow(refining_impl_trait)]`, since this behavior may be desirable -- the lint just serves as an acknowledgement from the impl author that they understand that the types they write in the implementation are an API guarantee.
This compares bounds syntactically, not semantically -- semantic implication is more difficult and essentially relies on adding the ability to keep the RPITIT hidden in the trait system so that things can be proven about the type that shows up in the impl without its own bounds leaking through, either via a new reveal mode or something else. This was experimentally implemented in #111931.
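As a rough illustration of the kind of refinement the lint flags (the trait and impl below are made up for this description, not taken from the PR's tests):
```rust
// Made-up trait and impl, purely to illustrate the refinement the lint is about.
trait Spawner {
    // The trait only promises `impl Sized` to callers.
    fn spawn_task(&self) -> impl Sized;
}

struct ThreadSpawner;

impl Spawner for ThreadSpawner {
    // The impl adds a `Send` bound that the trait does not mention. That extra
    // bound becomes an API guarantee of this impl, so the lint asks the author
    // to acknowledge it, e.g. with `#[allow(refining_impl_trait)]`.
    #[allow(refining_impl_trait)]
    fn spawn_task(&self) -> impl Sized + Send {
        42_u32
    }
}

fn main() {}
```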
Somewhat opinionated choices:
1. Putting the lint behind `refining_impl_trait` rather than a blanket `refine` lint. This could be changed, but I like keeping the lint specialized to RPITITs so the explanation can be tailored to it.
2. This PR does not include the `#[refine]` attribute or the feature gate, since it's kind of orthogonal and can be added in a separate PR.
r? `@oli-obk`
Bump: Include RISC-V intrinsics for stdarch
This bumps the version of the `stdarch` submodule to the current master. Notably, it now includes intrinsics for the RISC-V Scalar Cryptographic and Bit Manipulation extensions.
r? `@Amanieu`
Sync rustc_codegen_cranelift
Not much changed this time. Mostly doing this sync to make it easier to run the entire test suite on the in-tree version.
r? `@ghost`
`@rustbot` label +A-codegen +A-cranelift +T-compiler
Use a specialized varint + bitpacking scheme for DepGraph encoding
The previous scheme here uses leb128 to encode the edge tables that represent the incr comp dependency graph. The problem with that scheme is that leb128 has overhead for larger values and generally relies on the distribution of encoded values being heavily skewed towards smaller values. That is definitely not the case for dep node indices: since they are handed out sequentially and the whole range is covered, the distribution is actually biased in the opposite direction, and most dep node indices are large.
This PR implements a different varint encoding scheme. Instead of applying varint encoding to individual dep node indices (which is extremely branchy) we now apply it per node.
While a node is being built, it now stores its edges in a `SmallVec`, with a bit of extra logic to track the max value of each edge. Then we varint-encode the whole batch. This is a gamble: we save space by only claiming 2 bits per node instead of ~3 bits per edge, which is a nice saving, but it has to balance out against the overhead of a single large index in a node with many edges forcing unnecessary bytes into each of that node's edge indices.
Then, to keep the runtime overhead of this encoding scheme down, we deserialize our indices by loading 4 bytes for each and then masking off the bytes that aren't ours. This takes much less code and far fewer branches than leb128, but it relies on having some readable bytes past the end of each edge list, so we explicitly add such padding to the in-memory data during decoding. We also do this decoding lazily, turning a dense on-disk encoding into a peak memory reduction.
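A minimal sketch of this load-and-mask decoding, under assumed names and layout (not the actual rustc implementation):
```rust
// Minimal sketch of the load-and-mask decoding; names, layout and widths are
// assumptions for illustration, not the actual rustc implementation.
fn decode_index(bytes: &[u8], offset: usize, byte_width: usize) -> u32 {
    // Load 4 bytes unconditionally; the encoder guarantees readable padding past
    // the end of each edge list, so this never reads past the buffer.
    let raw = u32::from_le_bytes(bytes[offset..offset + 4].try_into().unwrap());
    // Mask off the bytes that belong to the next index.
    let mask = if byte_width == 4 { u32::MAX } else { (1u32 << (byte_width * 8)) - 1 };
    raw & mask
}

fn main() {
    // One edge encoded with byte_width = 2 (0x1234), followed by padding bytes.
    let data = [0x34, 0x12, 0xAA, 0xBB, 0x00, 0x00];
    assert_eq!(decode_index(&data, 0, 2), 0x1234);
}
```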
On top of this we apply a bit-packing scheme: since https://github.com/rust-lang/rust/pull/115391 left us with unused bits on `DepKind` (currently there are 7!), we use 2 of them to store the byte width of the edges in each node, then use the remaining bits to store the length of the edge list, if it fits.
r? `@nnethercote`
Lint on invalid usage of `UnsafeCell::raw_get` in reference casting
This PR proposes to take the `UnsafeCell::raw_get` method call into account for non-`Freeze` types in the `invalid_reference_casting` lint.
The goal is to catch this kind of invalid reference casting:
```rust
fn as_mut<T>(x: &T) -> &mut T {
    unsafe { &mut *std::cell::UnsafeCell::raw_get(x as *const _ as *const _) }
    //~^ ERROR casting `&T` to `&mut T` is undefined behavior
}
```
r? `@est31`
Update stdarch submodule and remove special handling in cranelift codegen for some AVX and SSE2 LLVM intrinsics
https://github.com/rust-lang/stdarch/pull/1463 reimplemented some x86 intrinsics to avoid using some x86-specific LLVM intrinsics:
* Store unaligned (`_mm*_storeu_*`) now uses `<*mut _>::write_unaligned` instead of `llvm.x86.*.storeu.*`.
* Shift by immediate (`_mm*_s{ll,rl,ra}i_epi*`) now uses an `if` (srl, sll) or `min` (sra) to simulate the behaviour when the RHS is out of range; since the RHS is constant, the `if`/`min` is optimized away (sketched below).
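A rough single-lane sketch of that fallback logic, with made-up helper names (the real stdarch code works on whole SIMD vectors with const-generic shift amounts):
```rust
// Rough single-lane sketch of the out-of-range fallback; helper names are
// made up, and the real stdarch code operates on whole SIMD vectors.
fn srli_lane(x: u32, imm: u32) -> u32 {
    // Logical shift: the hardware yields 0 when the shift amount is out of
    // range, so an `if` reproduces that.
    if imm > 31 { 0 } else { x >> imm }
}

fn srai_lane(x: i32, imm: u32) -> i32 {
    // Arithmetic shift: an out-of-range amount behaves like shifting by 31,
    // so clamping with `min` reproduces that.
    x >> imm.min(31)
}

fn main() {
    assert_eq!(srli_lane(0x8000_0000, 35), 0);
    assert_eq!(srai_lane(-8, 40), -1);
}
```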
This PR updates the stdarch submodule to pull in these changes and removes the special handling for those LLVM intrinsics from the cranelift codegen. I left the gcc codegen untouched because it has some autogenerated lists.
Fix log formatting in bootstrap
`format!()` was missing, so the log was just showing `{target}` verbatim.
(I also applied a small clippy suggestion in `builder.info()`)
fix #115348
It looks like:
- In `rustc_mir_build::build`, the body of a function is not built when `tcx.check_match(def)` fails due to `non-exhaustive patterns`.
- In `rustc_mir_transform::check_unsafety`, the `UnsafetyChecker` collects all `used_unsafe_blocks` in the MIR of a function, while the `UnusedUnsafeVisitor` visits all `UnsafeBlock`s in the HIR, collects the `unused_unsafes` that are not contained in `used_unsafe_blocks`, and reports them as `unnecessary_unsafe`.
- So, because no MIR is built for the function, `used_unsafe_blocks` stays empty and the unsafe block in the issue's example code gets reported as `unnecessary_unsafe`.
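A hypothetical reproduction of this interaction (not the exact code from the issue; it intentionally fails to compile) could look like the following:
```rust
// Hypothetical reproduction, not the exact issue code: the non-exhaustive
// `match` makes `tcx.check_match` fail, so no MIR is built for `caller`,
// `used_unsafe_blocks` stays empty, and the `unsafe` block used to be reported
// as unnecessary even though `dangerous()` requires it.
unsafe fn dangerous() {}

fn caller(x: u32) {
    unsafe {
        dangerous();
        match x {
            0 => {}
            // missing arms -> `non-exhaustive patterns` error
        }
    }
}

fn main() {}
```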
Replace `rustc_data_structures` dependency with `rustc_index` in `rustc_parse_format`
`rustc_data_structures` is only used for the `static_assert_size` macro, which is actually defined in `rustc_index` and merely re-exported. `rustc_index` is a lot more lightweight than `rustc_data_structures`, which makes this crate a lot more reusable for rust-analyzer.
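For context, a standalone approximation of the idea behind that macro (the type and sizes below are made up for illustration; the real macro lives in `rustc_index` and is re-exported by `rustc_data_structures`):
```rust
// Standalone approximation of the `static_assert_size!` idea: the build fails
// if the type's size drifts from the asserted value.
macro_rules! static_assert_size {
    ($ty:ty, $size:expr) => {
        const _: [(); $size] = [(); ::std::mem::size_of::<$ty>()];
    };
}

struct Header {
    name: &'static str, // 16 bytes on a 64-bit target (pointer + length)
    flags: u64,         // 8 bytes
}

// Sizes assume a 64-bit target, which is how rustc usually gates these asserts.
static_assert_size!(Header, 24);

fn main() {}
```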
Add explanatory note to 'expected item' error
Fixes #113110
It changes the diagnostic from this:
```
error: expected item, found `5`
--> ../test.rs:1:1
|
1 | 5
| ^ expected item
```
to this:
```
error: expected item, found `5`
--> ../test.rs:1:1
|
1 | 5
| ^ expected item
|
= note: items are things that can appear at the root of a module
= note: for a full list see https://doc.rust-lang.org/reference/items.html
```
Represent MIR composite debuginfo as projections instead of aggregates
Composite debuginfo for MIR is currently represented as
```
debug name => Type { projection1 => place1, projection2 => place2 };
```
i.e. a single `VarDebugInfo` object with that name, whose value is a `VarDebugInfoContents::Composite`.
This PR proposes to reverse the representation to be
```
debug name.projection1 => place1;
debug name.projection2 => place2;
```
i.e. multiple `VarDebugInfo` objects, each with its own projection.
This simplifies the handling of composite debuginfo by the compiler by avoiding weird nesting.
Based on https://github.com/rust-lang/rust/pull/115139
Add `FreezeLock` type and use it to store `Definitions`
This adds a `FreezeLock` type which allows mutation through a lock until the value is frozen, after which it can be accessed lock-free. It's used to store `Definitions` in `Untracked` instead of an `RwLock`. Unlike the current scheme of leaking read guards, this doesn't deadlock if `Definitions` is written to after no further mutation is expected.
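A simplified sketch of the idea (not the rustc implementation; names and details are illustrative): writes go through a lock while unfrozen, and once frozen, reads skip the lock entirely.
```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;

// Simplified sketch of the freeze-then-read-lock-free idea, not the rustc type.
struct FreezeLockSketch<T> {
    frozen: AtomicBool,
    lock: Mutex<()>,
    value: UnsafeCell<T>,
}

// Safety: readers only get `&T` after `freeze`, and writers are serialized by `lock`.
unsafe impl<T: Send + Sync> Sync for FreezeLockSketch<T> {}

impl<T> FreezeLockSketch<T> {
    fn new(value: T) -> Self {
        Self {
            frozen: AtomicBool::new(false),
            lock: Mutex::new(()),
            value: UnsafeCell::new(value),
        }
    }

    /// Mutate under the lock; only allowed before freezing.
    fn write(&self, f: impl FnOnce(&mut T)) {
        let _guard = self.lock.lock().unwrap();
        assert!(!self.frozen.load(Ordering::Acquire), "cannot mutate after freezing");
        // Safety: the lock serializes writers, and no lock-free readers exist yet.
        f(unsafe { &mut *self.value.get() });
    }

    /// Freeze the value; after this, reads skip the lock entirely.
    fn freeze(&self) {
        let _guard = self.lock.lock().unwrap();
        self.frozen.store(true, Ordering::Release);
    }

    /// Lock-free access, only valid once frozen.
    fn read(&self) -> &T {
        assert!(self.frozen.load(Ordering::Acquire), "read before freezing");
        // Safety: once frozen, no further mutation can happen.
        unsafe { &*self.value.get() }
    }
}

fn main() {
    let defs = FreezeLockSketch::new(Vec::new());
    defs.write(|v| v.push("def_id_0"));
    defs.freeze();
    assert_eq!(defs.read().len(), 1);
}
```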
Implement SMIR generic parameter instantiation
Also demonstrates its use with a test.
This required a few smaller changes that may conflict with `@ericmarkmartin`'s work, but it should be easy to resolve any conflicts on my end if their stuff lands first.
It uses `once` chained with `(0..self.num_calls).map(...)` followed by `.take(self.num_calls)`. I found this hard to read. It's simpler to just use `repeat_with`.
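A hypothetical before/after of that kind of refactor (the real code builds something other than placeholder strings):
```rust
use std::iter;

// Hypothetical illustration of the refactor; the actual iterator in the
// commit builds something other than placeholder strings.
fn before(num_calls: usize) -> Vec<String> {
    iter::once(String::from("call"))
        .chain((0..num_calls).map(|_| String::from("call")))
        .take(num_calls)
        .collect()
}

fn after(num_calls: usize) -> Vec<String> {
    iter::repeat_with(|| String::from("call"))
        .take(num_calls)
        .collect()
}

fn main() {
    assert_eq!(before(3), after(3));
}
```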