Move `mir::Field` → `abi::FieldIdx`
The first PR for https://github.com/rust-lang/compiler-team/issues/606
This is just the move-and-rename, because it's plenty big-and-bitrotty already. Future PRs will start using `FieldIdx` more broadly, and concomitantly removing `FieldIdx::new`s.
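For context, compiler index types like this are declared through the `rustc_index::newtype_index!` macro. A minimal sketch of what the moved type could look like in `rustc_abi` (the exact attributes and doc-comment here are assumptions, not copied from the diff):

```rust
rustc_index::newtype_index! {
    /// The *source-order* index of a field in a variant.
    #[derive(HashStable_Generic)]
    pub struct FieldIdx {}
}
```

Call sites that previously wrote `mir::Field::new(i)` become `FieldIdx::new(i)` for now; later PRs can then retire the raw `new` calls.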
Use Rayon's TLV directly
This accesses Rayon's `TLV` thread local directly, avoiding the wrapper functions. This makes rustc work with https://github.com/rust-lang/rustc-rayon/pull/10.
r? `@cuviper`
Since a struct's data always lives in `VariantIdx(0)`, there are a bunch of files where the only reason `VariantIdx` or `vec::Idx` was imported at all was to get the first variant.
So this uses a constant for that instead, and adds some doc-comments to `VariantIdx` while I'm there, since it doesn't have any today.
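A minimal sketch of the shape this takes, assuming the constant lives next to `VariantIdx` in `rustc_abi` (the surrounding details are from memory, not this diff):

```rust
rustc_index::newtype_index! {
    /// The *source-order* index of a variant in a type.
    pub struct VariantIdx {}
}

/// Equivalent to `VariantIdx::new(0)`: the single variant of a struct,
/// or the first variant of an enum.
pub const FIRST_VARIANT: VariantIdx = VariantIdx::from_u32(0);
```

Call sites can then say `adt.variant(FIRST_VARIANT)` instead of importing `Idx` just to write `VariantIdx::new(0)`.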
They have been causing a lot of trouble. For example, current MIR building thinks that it is fine to dynamically index into them, and there are different code paths depending on whether the `repr(simd)` struct uses individual fields or a single array as its interior. There is also trouble with moving the inner array out of a `repr(simd)` type that is an array wrapper.
If performance becomes a concern, I will implement this in a more principled way.
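For illustration, these are the two interior shapes of `repr(simd)` struct mentioned above (type names made up; both forms were accepted on nightly at the time):

```rust
#![feature(repr_simd)]

// Lane-per-field interior.
#[repr(simd)]
struct F32x4(f32, f32, f32, f32);

// Single-array interior: an "array wrapper" whose inner array
// is the thing that's problematic to move out of.
#[repr(simd)]
struct F32x4Arr([f32; 4]);

fn main() {}
```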
Updates `interpret`, `codegen_ssa`, and `codegen_cranelift` to consume the new cast instead of the intrinsic.
Includes `CastTransmute` for custom MIR building, to be able to test the extra UB.
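A hedged sketch of what such a custom-MIR test could look like, using the `mir!` macro from `core::intrinsics::mir` (the dialect/phase arguments and the function itself are illustrative):

```rust
#![feature(custom_mir, core_intrinsics)]
use core::intrinsics::mir::*;

// Emits the transmute *cast* directly in built MIR, so the test hits
// the new cast kind rather than going through the intrinsic.
#[custom_mir(dialect = "runtime", phase = "initial")]
pub fn bits_of(x: f32) -> u32 {
    mir! {
        {
            RET = CastTransmute(x);
            Return()
        }
    }
}
```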
Implement checked Shl/Shr at MIR building.
This does not require any special handling by codegen backends,
as the overflow behaviour is entirely determined by the rhs (shift amount).
This allows MIR ConstProp to remove the overflow check for constant shifts.
~~There is an existing difference in behaviour between cg_llvm and cg_clif (cc `@bjorn3`). I took cg_llvm's as the reference: overflow if `rhs < 0 || rhs >= number_of_bits_in_lhs_ty`.~~
EDIT: `cg_llvm` and `cg_clif` implement the overflow check differently. This PR uses `cg_llvm`'s implementation, which is based on a `BitAnd`, instead of `cg_clif`'s, which is based on an unsigned comparison.
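To make the two strategies concrete, here is a sketch in surface Rust of what each check computes for a 32-bit left shift (illustrative functions, not the actual MIR):

```rust
// cg_llvm-flavoured check, as adopted by this PR: any set bit of `rhs`
// outside the low log2(BITS) bits means the shift amount is out of range.
fn checked_shl_bitand(lhs: u32, rhs: u32) -> (u32, bool) {
    let overflow = (rhs & !(u32::BITS - 1)) != 0;
    // The shift itself uses the masked amount, so it is always in range.
    (lhs << (rhs & (u32::BITS - 1)), overflow)
}

// cg_clif-flavoured check: an unsigned comparison against the bit width.
fn checked_shl_cmp(lhs: u32, rhs: u32) -> (u32, bool) {
    let overflow = rhs >= u32::BITS;
    (lhs << (rhs & (u32::BITS - 1)), overflow)
}
```

For an unsigned `rhs` the two are equivalent; the `BitAnd` plus a zero test is simply another way of expressing `rhs >= 32`.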