Auto merge of #118324 - RalfJung:ctfe-read-only-pointers, r=saethlin

compile-time evaluation: detect writes through immutable pointers

This has two motivations:
- it unblocks https://github.com/rust-lang/rust/pull/116745 (and therefore takes a big step towards `const_mut_refs` stabilization), because we can now detect whether the memory that we find in a `const` can be interned as "immutable"
- it would detect the UB that was uncovered in https://github.com/rust-lang/rust/pull/117905, which was caused by the accidental stabilization of `copy` functions in `const` that can only be called with UB

When UB is detected, we emit a future-compat warn-by-default lint. This is not a breaking change, so it is fully in line with [the const-UB RFC](https://rust-lang.github.io/rfcs/3016-const-ub.html), meaning we don't need a t-lang FCP here. I made the lint show up immediately for dependencies since it is nearly impossible to even trigger it without `const_mut_refs` -- the accidentally stabilized `copy` functions are the only way this can happen, so the crates that popped up in #117905 are the only causes of such UB (in the code that crater covers). The three cases of UB that we know about have all been fixed in their respective crates already.
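
For illustration, here is a hypothetical snippet (not taken from this PR's test suite) of the kind of const code the new check is about; the constant name is made up, and on current nightly this needs the unstable `const_mut_refs` feature to get past the const checks at all:

```rust
// Hypothetical nightly-only sketch (assumes `#![feature(const_mut_refs)]`):
// during const evaluation, a pointer derived from a shared reference carries
// the "immutable" flag, so the write below is UB and triggers the new lint.
#![feature(const_mut_refs)]

const MUTATE_THROUGH_SHARED_REF: i32 = unsafe {
    let x = 0;
    let shared: &i32 = &x;                      // shared reference: provenance is marked immutable
    let ptr = shared as *const i32 as *mut i32; // the flag propagates through the casts
    *ptr = 1;                                   // write through an "immutable" pointer
    x
};

fn main() {
    println!("{}", MUTATE_THROUGH_SHARED_REF);
}
```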

The way this is implemented is by making use of the fact that our interpreter is already generic over the notion of provenance. For CTFE we now use the new `CtfeProvenance` type which is conceptually an `AllocId` plus a boolean `immutable` flag (but packed for a more efficient representation). This means we can mark a pointer as immutable when it is created as a shared reference. The flag will be propagated to all pointers derived from this one. We can then check the immutable flag on each write to reject writes through immutable pointers.
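
As a rough illustration of the packed representation described above, here is a minimal sketch; `CtfeProvenance`, `AllocId`, and `alloc_id()` are names from the actual change, while the exact bit layout and the helpers `IMMUTABLE_BIT` and `as_immutable` are made up for the example:

```rust
// Minimal sketch of the idea (illustrative; rustc's real `CtfeProvenance`
// packs the flag differently): an `AllocId` plus an "immutable" bit in one word.
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
struct AllocId(u64); // stand-in for rustc's interned allocation id

#[derive(Copy, Clone, PartialEq, Eq, Debug)]
struct CtfeProvenance(u64); // low bit = "derived from a shared reference"

const IMMUTABLE_BIT: u64 = 1;

impl CtfeProvenance {
    fn new(alloc_id: AllocId) -> Self {
        // Assumes alloc ids fit in 63 bits so the low bit is free.
        CtfeProvenance(alloc_id.0 << 1)
    }
    fn alloc_id(self) -> AllocId {
        AllocId(self.0 >> 1)
    }
    fn immutable(self) -> bool {
        self.0 & IMMUTABLE_BIT != 0
    }
    // Set when a pointer is created from a shared reference; the flag then
    // propagates to every pointer derived from it.
    fn as_immutable(self) -> Self {
        CtfeProvenance(self.0 | IMMUTABLE_BIT)
    }
}

fn main() {
    let prov = CtfeProvenance::new(AllocId(42));
    let frozen = prov.as_immutable();
    assert_eq!(frozen.alloc_id(), AllocId(42));
    assert!(frozen.immutable() && !prov.immutable());
    // On each write, the interpreter checks this flag and rejects writes
    // through immutable pointers.
}
```

Packing the flag into the same word keeps pointers the same size as before, which is what the "more efficient representation" above is aiming for.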

I just hope perf works out.
bors 2023-12-07 18:11:01 +00:00
commit 17f0dd5898
2 changed files with 5 additions and 3 deletions


@@ -199,7 +199,8 @@ fn scalar_to_backend(&self, cv: Scalar, layout: abi::Scalar, ty: Type<'gcc>) ->
                 }
             }
             Scalar::Ptr(ptr, _size) => {
-                let (alloc_id, offset) = ptr.into_parts();
+                let (prov, offset) = ptr.into_parts(); // we know the `offset` is relative
+                let alloc_id = prov.alloc_id();
                 let base_addr =
                     match self.tcx.global_alloc(alloc_id) {
                         GlobalAlloc::Memory(alloc) => {


@@ -285,7 +285,8 @@ pub fn const_alloc_to_gcc<'gcc, 'tcx>(cx: &CodegenCx<'gcc, 'tcx>, alloc: ConstAl
     let pointer_size = dl.pointer_size.bytes() as usize;
     let mut next_offset = 0;
-    for &(offset, alloc_id) in alloc.provenance().ptrs().iter() {
+    for &(offset, prov) in alloc.provenance().ptrs().iter() {
+        let alloc_id = prov.alloc_id();
         let offset = offset.bytes();
         assert_eq!(offset as usize as u64, offset);
         let offset = offset as usize;
@@ -313,7 +314,7 @@ pub fn const_alloc_to_gcc<'gcc, 'tcx>(cx: &CodegenCx<'gcc, 'tcx>, alloc: ConstAl
         llvals.push(cx.scalar_to_backend(
             InterpScalar::from_pointer(
-                interpret::Pointer::new(alloc_id, Size::from_bytes(ptr_offset)),
+                interpret::Pointer::new(prov, Size::from_bytes(ptr_offset)),
                 &cx.tcx,
             ),
             abi::Scalar::Initialized { value: Primitive::Pointer(address_space), valid_range: WrappingRange::full(dl.pointer_size) },