Std module docs improvements
My primary goal is to create a cleaner separation between primitive types and primitive type helper modules (fixes #92777). I also changed a few header lines in other top-level std modules (seen at https://doc.rust-lang.org/std/) for consistency.
Some conventions used/established:
* "The \`Box\<T>` type for heap allocation." - if a module mainly provides a single type, name it and summarize its purpose in the module header
* "Utilities for the _ primitive type." - this wording is used for the header of helper modules
* Documentation for primitive types themselves is removed from helper modules
* Provided-by-core functionality of primitive types is documented on the primitive type itself instead of in the helper module (such as the "Iteration" section in the slice docs)
I wonder if some content in `std::ptr` should be in `pointer` but I did not address this.
Replace most uses of `pointer::offset` with `add` and `sub`
As the PR title says, it replaces `pointer::offset` in the compiler and standard library with `pointer::add` and `pointer::sub`. This generally makes the code cleaner and easier to grasp, and removes (or, well, hides) integer casts.
This is generally trivially correct: `.offset(-constant)` is just `.sub(constant)`, `.offset(usized as isize)` is just `.add(usized)`, etc. However, in some cases we need to be careful with the signs of things.
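For illustration, a hedged sketch of the mechanical rewrite (with hypothetical `ptr` and `len` names):
```rust
unsafe fn demo(ptr: *const u8, len: usize) -> (*const u8, *const u8) {
    // before: ptr.offset(-1)
    let a = ptr.sub(1);
    // before: ptr.offset(len as isize)
    let b = ptr.add(len);
    (a, b)
}
```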
r? `@scottmcm`
_split off from #100746_
Make some docs nicer wrt pointer offsets
This PR replaces `pointer::offset` with `pointer::add` and similarly `.cast().wrapping_add().cast()` with `.wrapping_byte_add()` **in docs**.
r? `@scottmcm`
_split off from #100746_
Clamp Function for f32 and f64
I thought the clamp function could use a little improvement for readability purposes. The function now returns early in order to skip the extra bound checks.
If there was a reason for binding `self` to `x` or if this code is incorrect, please correct me :)
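A minimal sketch of the early-return shape described above (written as a free function here; the real method is defined on `f32`/`f64` and keeps the existing NaN `assert!`):
```rust
fn clamp(x: f32, min: f32, max: f32) -> f32 {
    assert!(min <= max, "min > max, or either was NaN");
    if x < min {
        return min; // skip the max-bound check entirely
    }
    if x > max {
        return max;
    }
    x
}
```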
This commit adds the following functions, all of which have the signature
`pointer, usize -> pointer`:
- `<*mut T>::mask`
- `<*const T>::mask`
- `intrinsics::ptr_mask`
These functions are equivalent to `.map_addr(|a| a & mask)` but they
utilize the `llvm.ptrmask` LLVM intrinsic.
*masks your pointers*
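In other words, a hedged sketch (assuming the feature gate is named `ptr_mask`):
```rust
#![feature(ptr_mask)]

// Round a pointer down to a 16-byte boundary.
fn align_down_16(ptr: *const u8) -> *const u8 {
    let mask = !0b1111usize;
    // Behaves like `ptr.map_addr(|a| a & mask)`, but lowers to the
    // `llvm.ptrmask` intrinsic.
    ptr.mask(mask)
}
```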
Fix trailing space showing up in example
The current text is rendered as: U+005B ..= U+0060 ``[ \ ] ^ _ ` ``, or (**note the final space!**)
This patch changes that to render as: U+005B ..= U+0060 `` [ \ ] ^ _ ` ``, or (**note no final space!**)
The reason is that CommonMark has a solution for starting or ending inline code with a backtick/grave accent: padding both sides with a space makes that padding disappear.
Expose `Utf8Lossy` as `Utf8Chunks`
This PR changes the feature for `Utf8Lossy` from `str_internals` to `utf8_lossy` and improves the API. This is done to eventually expose the API as stable.
Proposal: rust-lang/libs-team#54
Tracking Issue: #99543
Refactor iteration logic in the `Flatten` and `FlatMap` iterators
The `Flatten` and `FlatMap` iterators both delegate to `FlattenCompat`:
```rust
struct FlattenCompat<I, U> {
    iter: Fuse<I>,
    frontiter: Option<U>,
    backiter: Option<U>,
}
```
Every individual iterator method that `FlattenCompat` implements needs to carefully manage this state, checking whether the `frontiter` and `backiter` are present, and storing the current iterator appropriately if iteration is aborted. This has led to methods such as `next`, `advance_by`, and `try_fold` all having similar code for managing the iterator's state.
I have extracted this common logic of iterating the inner iterators with the option to exit early into an `iter_try_fold` method:
```rust
impl<I, U> FlattenCompat<I, U>
where
    I: Iterator<Item: IntoIterator<IntoIter = U>>,
{
    fn iter_try_fold<Acc, Fold, R>(&mut self, acc: Acc, fold: Fold) -> R
    where
        Fold: FnMut(Acc, &mut U) -> R,
        R: Try<Output = Acc>,
    { ... }
}
```
It passes each of the inner iterators to the given function as long as it keeps succeeding. It takes care of managing `FlattenCompat`'s state, so that the actual `Iterator` methods don't need to. The resulting code that makes use of this abstraction is much more straightforward:
```rust
fn next(&mut self) -> Option<U::Item> {
    #[inline]
    fn next<U: Iterator>((): (), iter: &mut U) -> ControlFlow<U::Item> {
        match iter.next() {
            None => ControlFlow::CONTINUE,
            Some(x) => ControlFlow::Break(x),
        }
    }

    self.iter_try_fold((), next).break_value()
}
```
Note that despite being implemented in terms of `iter_try_fold`, `next` is still able to benefit from `U`'s `next` method. It therefore does not take the performance hit that implementing `next` directly in terms of `Self::try_fold` causes (in some benchmarks).
This PR also adds `iter_try_rfold` which captures the shared logic of `try_rfold` and `advance_back_by`, as well as `iter_fold` and `iter_rfold` for folding without early exits (used by `fold`, `rfold`, `count`, and `last`).
Benchmark results:
```
                                         before             after
bench_flat_map_sum                      423,255 ns/iter    414,338 ns/iter
bench_flat_map_ref_sum                1,942,139 ns/iter  2,216,643 ns/iter
bench_flat_map_chain_sum              1,616,840 ns/iter  1,246,445 ns/iter
bench_flat_map_chain_ref_sum          4,348,110 ns/iter  3,574,775 ns/iter
bench_flat_map_chain_option_sum         780,037 ns/iter    780,679 ns/iter
bench_flat_map_chain_option_ref_sum   2,056,458 ns/iter    834,932 ns/iter
```
I added the last two benchmarks specifically to demonstrate an extreme case where `FlatMap::next` can benefit from custom internal iteration of the outer iterator, so take them with a grain of salt. We should probably do a perf run to see if the changes to `next` are worth it in practice.
I noticed in the MIR for <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=67097e0494363ee27421a4e3bdfaf513> that most things were inlined:
```
scope 5 (inlined <Result<i32, u32> as Try>::branch)
```
```
scope 8 (inlined <Result<i32, u32> as Try>::from_output)
```
But the do-nothing `from` call was still there:
```
_17 = <u32 as From<u32>>::from(move _18) -> bb9;
```
So let's give this a try and see what perf has to say.
Don't derive `PartialEq::ne`.
Currently we skip deriving `PartialEq::ne` for C-like (fieldless) enums
and empty structs, thus relying on the default `ne`. This behaviour is
unnecessarily conservative, because the `PartialEq` docs say this:
> Implementations must ensure that eq and ne are consistent with each other:
>
> `a != b` if and only if `!(a == b)` (ensured by the default
> implementation).
This means that the default implementation (`!(a == b)`) is always good
enough. So this commit changes things such that `ne` is never derived.
The motivation for this change is that not deriving `ne` reduces compile
times and binary sizes.
Observable behaviour may change if a user has defined a type `A` with an
inconsistent `PartialEq` and then defines a type `B` that contains an
`A` and also derives `PartialEq`. Such code is already buggy and
preserving bug-for-bug compatibility isn't necessary.
Two side-effects of the change:
- There is only one error message produced for types where `PartialEq`
cannot be derived, instead of two.
- For coverage reports, some warnings about generated `ne` methods not
being executed have disappeared.
Both side-effects seem fine, and possibly preferable.
Simple Clamp Function
I thought this was more robust and easier to read. I also allowed this function to return early in order to skip the extra bound check (I'm sure the difference is negligible). I'm not sure if there was a reason for binding `self` to `x`; if so, please correct me.
Simple Clamp Function for f64
I thought this was more robust and easier to read. I also allowed this function to return early in order to skip the extra bound check (I'm sure the difference is negligible). I'm not sure if there was a reason for binding `self` to `x`; if so, please correct me.
Floating point clamp test
f32 clamp using mut self
f64 clamp using mut self
Update library/core/src/num/f32.rs
Update f64.rs
Update x86_64-floating-point-clamp.rs
Update src/test/assembly/x86_64-floating-point-clamp.rs
Update x86_64-floating-point-clamp.rs
Co-Authored-By: scottmcm <scottmcm@users.noreply.github.com>
Update the minimum external LLVM to 13
With this change, we'll have stable support for LLVM 13 through 15 (pending release).
For reference, the previous increase to LLVM 12 was #90175.
r? `@nagisa`
Add `Iterator::array_chunks` (take N+1)
A revival of https://github.com/rust-lang/rust/pull/92393.
r? `@Mark-Simulacrum`
cc `@rossmacarthur` `@scottmcm` `@the8472`
I've tried to address most of the review comments on the previous attempt. The only thing I didn't address is the `try_fold` implementation; I've left the "custom" one for now, as I'm not sure what exactly it should use.
make raw_eq precondition more restrictive
Specifically, don't allow comparing pointers that way. Comparing pointers is subtle because you have to talk about what happens to the provenance.
This matches what [Miri already implements](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=9eb1dfb8a61b5a2d4a7cee43df2717af), and all existing users are fine with this.
If raw_eq on pointers is ever desired, we can adjust the intrinsic spec and Miri implementation as needed, but for now that seems just unnecessary. Also, this is a const intrinsic, and in const, comparing pointers this way is *not possible* -- so if we allow the intrinsic to compare pointers in general, we need to impose an extra restriction saying that in const-context, pointers are *not* okay.
Reoptimize layout array
This way it's one check instead of two, so hopefully (cc #99117) it'll be simpler for rustc perf too 🤞
Quick demonstration:
```rust
use std::alloc::Layout;

pub fn demo(n: usize) -> Option<Layout> {
    Layout::array::<i32>(n).ok()
}
```
Nightly: <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=e97bf33508aa03f38968101cdeb5322d>
```nasm
mov rax, rdi
mov ecx, 4
mul rcx
seto cl
movabs rdx, 9223372036854775805
xor esi, esi
cmp rax, rdx
setb sil
shl rsi, 2
xor edx, edx
test cl, cl
cmove rdx, rsi
ret
```
This PR (note no `mul`, in addition to being much shorter):
```nasm
xor edx, edx
lea rax, [4*rcx]
shr rcx, 61
sete dl
shl rdx, 2
ret
```
This is built atop `@CAD97`'s #99136; the new changes are cb8aba66ef6a0e17f08a0574e4820653e31b45a0.
I added a bunch more tests for `Layout::from_size_align` and `Layout::array` too.
Inline CStr::from_bytes_with_nul_unchecked::rt_impl
Currently `CStr::from_bytes_with_nul_unchecked::rt_impl` is not being inlined. The following function:
```rust
use std::ffi::CStr;

pub unsafe fn from_bytes_with_nul_unchecked(bytes: &[u8]) {
    CStr::from_bytes_with_nul_unchecked(bytes);
}
```
outputs the following assembly on current nightly:
```asm
example::from_bytes_with_nul_unchecked:
jmp qword ptr [rip + _ZN4core3ffi5c_str4CStr29from_bytes_with_nul_unchecked7rt_impl17h026f29f3d6a41333E@GOTPCREL]
```
Meanwhile on beta this provides the following assembly:
```asm
example::from_bytes_with_nul_unchecked:
ret
```
This pull request adds an `#[inline]` annotation to `rt_impl` to fix a code generation regression for `CStr::from_bytes_with_nul_unchecked`.
docs: remove repetition in `is_numeric` function docs
In https://github.com/rust-lang/rust/pull/99628 we introduced new docs for the `is_numeric` function, and this is a follow-up PR that removes some unnecessary repetition that was likely introduced during a rebase.
`@rustbot` r? `@joshtriplett`
Add back Send and Sync impls on ChunksMut iterators
Fixes https://github.com/rust-lang/rust/issues/100014
These were accidentally removed in #94247 because the representation was changed from `&mut [T]` to `*mut T`, which has `!Send + !Sync`.
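A sketch of the restored impls; the struct below is a simplified stand-in for the iterator's actual representation:
```rust
use std::marker::PhantomData;

#[allow(dead_code)]
struct ChunksMut<'a, T> {
    ptr: *mut T,
    len: usize,
    _marker: PhantomData<&'a mut T>,
}

// Raw pointers are `!Send + !Sync`, so the impls must now be written by
// hand. The bounds mirror what `&mut [T]` used to derive: the iterator
// yields `&mut [T]`, so `Send` requires `T: Send` and `Sync` requires
// `T: Sync`.
unsafe impl<T: Send> Send for ChunksMut<'_, T> {}
unsafe impl<T: Sync> Sync for ChunksMut<'_, T> {}
```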
do not claim that transmute is like memcpy
Saying transmute is like memcpy is not a well-formed statement, since memcpy is by-ref whereas transmute is by-val. The by-val nature of transmute inherently means that padding is lost along the way. (This is not specific to transmute, this is how all by-value operations work.) So adjust the docs to clarify this aspect.
Cc `@workingjubilee`
mem::uninitialized: mitigate many incorrect uses of this function
Alternative to https://github.com/rust-lang/rust/pull/98966: fill memory with `0x01` rather than leaving it uninit. This is definitely bitwise valid for all `bool` and nonnull types, and also those `Option<&T>` that we started putting `noundef` on. However, it is still invalid for `char` and some enums, and on references the `dereferenceable` attribute is still violated, so the generated LLVM IR still has UB -- but in fewer cases, and `dereferenceable` is hopefully less likely to cause problems than clearly incorrect range annotations.
This can make using `mem::uninitialized` a lot slower, but that function has been deprecated for years and we keep telling everyone to move to `MaybeUninit` because it is basically impossible to use `mem::uninitialized` correctly. For the cases where that hasn't helped (and all the old code out there that nobody will ever update), we can at least mitigate the effect of using this API. Note that this is *not* in any way a stable guarantee -- it is still UB to call `mem::uninitialized::<bool>()`, and Miri will call it out as such.
This is somewhat similar to https://github.com/rust-lang/rust/pull/87032, which proposed to make `uninitialized` return a buffer filled with 0x00. However
- That PR also proposed to reduce the situations in which we panic, which I don't think we should do at this time.
- The 0x01 bit pattern means that nonnull requirements are satisfied, which (due to references) is the most common validity invariant.
`@5225225` I hope I am using `cfg(sanitize)` the right way; I was not sure for which ones to test here.
Cc https://github.com/rust-lang/rust/issues/66151
Fixes https://github.com/rust-lang/rust/issues/87675
This initial implementation handles transmutations between types with specified layouts, except when references are involved.
Co-authored-by: Igor null <m1el.2027@gmail.com>
Fix slice::ChunksMut aliasing
Fixes https://github.com/rust-lang/rust/issues/94231, details in that issue.
cc `@RalfJung`
This isn't done just yet; all the safety comments are placeholders. But otherwise, it seems to work.
I don't really like this approach though. There's a lot of unsafe code where there wasn't before, but as far as I can tell the only other way to uphold the aliasing requirement imposed by `__iterator_get_unchecked` is to use raw slices, which I think require the same amount of unsafe code. All that would do is tie the `len` and `ptr` fields together.
Oh I just looked and I'm pretty sure that `ChunksExactMut`, `RChunksMut`, and `RChunksExactMut` also need to be patched. Even more reason to put up a draft.
interpret, ptr_offset_from: refactor and test too-far-apart check
We didn't have any tests for the "too far apart" message, and indeed that check mostly relied on the in-bounds check and was otherwise probably not entirely correct... so I rewrote that check, and it now runs before the in-bounds check so we can test it separately.
Constify a few `(Partial)Ord` impls
Only a few `impl`s are constified for now, as #92257 has not landed in the bootstrap compiler yet and quite a few impls would need that fix.
This unblocks #92228, which unblocks marking iterator methods as `default_method_body_is_const`.
miri: prune some atomic operation and raw pointer details from stacktrace
Since Miri removes `track_caller` frames from the stacktrace, adding that attribute can help make backtraces more readable (similar to how it makes panic locations better). I made them only show up with `cfg(miri)` to make sure the extra arguments induced by `track_caller` do not cause any runtime performance trouble.
This is also testing the waters for whether the libs team is okay with having these attributes in their code, or whether you'd prefer if we find some other way to do this. If you are fine with this, we will probably want to add it to a lot more functions (all the other atomic operations, to start).
Before:
```
error: Undefined Behavior: Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
--> /home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:2594:23
|
2594 | SeqCst => intrinsics::atomic_load_seqcst(dst),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
|
= help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
= help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
= note: inside `std::sync::atomic::atomic_load::<usize>` at /home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:2594:23
= note: inside `std::sync::atomic::AtomicUsize::load` at /home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:1719:26
note: inside closure at ../miri/tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
--> ../miri/tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
|
22 | (&*c.0).load(Ordering::SeqCst)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
After:
```
error: Undefined Behavior: Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
--> tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
|
22 | (&*c.0).load(Ordering::SeqCst)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
|
= help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
= help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
= note: inside closure at tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
```
Add `[f32]::sort_floats` and `[f64]::sort_floats`
It's inconvenient to sort a slice or Vec of floats, compared to sorting integers. To simplify numeric code, add a convenience method to `[f32]` and `[f64]` to sort them using `sort_unstable_by` with `total_cmp`.
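Usage sketch (assuming the feature gate is named `sort_floats`):
```rust
#![feature(sort_floats)]

fn main() {
    let mut v = [3.0_f32, f32::NAN, 1.0, 2.0];
    // Before: spell out the comparator every time.
    v.sort_unstable_by(f32::total_cmp);
    // After: the convenience method added here.
    v.sort_floats();
}
```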
Rename `<*{mut,const} T>::as_{const,mut}` to `cast_`
This renames the methods to use the `cast_` prefix instead of `as_` to
make it more readable and avoid confusion with `<*mut T>::as_mut()`
which is `unsafe` and returns a reference.
Sorry, didn't notice ACP process exists, opened https://github.com/rust-lang/libs-team/issues/51
See #92675
make vtable pointers entirely opaque
This implements the scheme discussed in https://github.com/rust-lang/unsafe-code-guidelines/issues/338: vtable pointers should be considered entirely opaque and not even readable by Rust code, similar to function pointers.
- We have a new kind of `GlobalAlloc` that symbolically refers to a vtable.
- Miri uses that kind of allocation when generating a vtable.
- The codegen backends, upon encountering such an allocation, call `vtable_allocation` to obtain an actually dataful allocation for this vtable.
- We need new intrinsics to obtain the size and align from a vtable (for some `ptr::metadata` APIs), since direct accesses are UB now.
I had to touch quite a bit of code that I am not very familiar with, so some of this might not make much sense...
r? `@oli-obk`
Changed wording in sections on "Reflexivity":
replaced "that is there is" with "i.e. there would be" and removed comma
before "with"
Reason: "there is" somewhat contradicted the "would be" hypothetical.
A slightly redundant wording has now been chosen for better clarity.
The comma seemed to be superfluous.
Improve the function pointer docs
This is #97842 but for function pointers instead of tuples. The concept is basically the same.
* Reduce duplicate impls; show `fn (T₁, T₂, …, Tₙ)` and include a sentence saying that there exist up to twelve of them.
* Show `Copy` and `Clone`.
* Show auto traits like `Send` and `Sync`, and blanket impls like `Any`.
https://notriddle.com/notriddle-rustdoc-test/std/primitive.fn.html
core::any: replace some generic types with impl Trait
This gives a cleaner API, since callers only specify the concrete type, which is usually all they want to specify.
r? `@yaahc`
- Explicitly mention that `AsRef` and `AsMut` do not auto-dereference
  for all dereferenceable types in general (but only when the inner type
  is a shared and/or mutable reference)
- Give advice to not use `AsRef` or `AsMut` for the sole purpose of
dereferencing
- Suggest providing a transitive `AsRef` or `AsMut` implementation for
types which implement `Deref`
- Add new section "Reflexivity" in documentation comments for `AsRef`
and `AsMut`
- Provide better example for `AsMut`
- Added heading "Relation to `Borrow`" in `AsRef`'s docs to improve
structure
Issue #45742 and a corresponding FIXME in libcore suggest that
`AsRef` and `AsMut` should provide a blanket implementation over
`Deref`. As that is difficult to realize at the moment, this commit
updates the documentation to better describe the status-quo and to give
advice on how to use `AsRef` and `AsMut`.
Fix `Skip::next` for non-fused inner iterators
`iter.skip(n).next()` will currently call `nth` and `next` in succession on `iter`, without checking whether `nth` exhausts the iterator. Using `?` to propagate a `None` value returned by `nth` avoids this.
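A sketch of the fixed shape, with hypothetical `iter`/`n` fields standing in for the real adapter's state:
```rust
struct Skip<I> {
    iter: I,
    n: usize,
}

impl<I: Iterator> Iterator for Skip<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        if self.n > 0 {
            let n = std::mem::take(&mut self.n);
            // `?` bails out if `nth` already exhausted the inner
            // iterator, instead of calling `next` on it again.
            self.iter.nth(n - 1)?;
        }
        self.iter.next()
    }
}
```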
Use split_once in FromStr docs
Current implementation:
```rust
fn from_str(s: &str) -> Result<Self, Self::Err> {
    let coords: Vec<&str> = s.trim_matches(|p| p == '(' || p == ')')
                             .split(',')
                             .collect();

    let x_fromstr = coords[0].parse::<i32>()?;
    let y_fromstr = coords[1].parse::<i32>()?;

    Ok(Point { x: x_fromstr, y: y_fromstr })
}
```
Creating the vector is not necessary, `split_once` does the job better.
Alternatively we could also remove `trim_matches` with `strip_prefix` and `strip_suffix`:
```rust
let (x, y) = s
    .strip_prefix('(')
    .and_then(|s| s.strip_suffix(')'))
    .and_then(|s| s.split_once(','))
    .unwrap();
```
The question is how much 'correctness' is too much and distracts from the example. In a real implementation you would also not unwrap (or, originally, access the vector without bounds checks), but implementing a custom error type with a `From<ParseIntError>` impl and the `Error` trait adds a lot of code to the example which is not relevant to the `FromStr` trait.
Add assertion that `transmute_copy`'s U is not larger than T
This is called out as a safety requirement in the docs, but because this check can be done at compile time and constant-folded (just like the `align_of` branch is removed), we can just panic here.
I've looked at the asm (using `cargo-asm`) for both a correct and an incorrect call, and the panic is either completely removed or made unconditional, without needing build-std.
I don't expect this to cause much breakage in the wild. I scanned through https://miri.saethlin.dev/ub for issues that would look like this (error: Undefined Behavior: memory access failed: alloc1768 has size 1, so pointer to 8 bytes starting at offset 0 is out-of-bounds), but couldn't find any.
That doesn't rule out it happening in crates whose tests fail earlier for some other reason, but it indicates that doing this is rare, if it happens at all. A crater run for this would need to be build-and-test, since this is a runtime thing.
Also added a few more transmute_copy tests.
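A sketch of the shape of the check (not the exact libcore code); both sizes are constants after monomorphization, so the branch either folds away or becomes an unconditional panic:
```rust
use std::mem::{align_of, size_of};
use std::ptr;

pub unsafe fn transmute_copy<T, U>(src: &T) -> U {
    // The new assertion: constant-folded per instantiation.
    assert!(
        size_of::<U>() <= size_of::<T>(),
        "cannot transmute_copy if U is larger than T"
    );
    if align_of::<U>() > align_of::<T>() {
        ptr::read_unaligned(src as *const T as *const U)
    } else {
        ptr::read(src as *const T as *const U)
    }
}
```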
Rearrange slice::split_mut to remove bounds check
Closes https://github.com/rust-lang/rust/issues/86313
Turns out that all we need to do here is reorder the bounds checks to convince LLVM that all the bounds checks can be removed. It seems like LLVM just fails to propagate the original length information past the first bounds check and into the second one. With this implementation it doesn't need to, each check can be proven inbounds based on the one immediately previous.
I've gradually convinced myself that this implementation is unambiguously better based on the above logic, but maybe this is still deserving of a codegen test?
Also the mentioned borrowck limitation no longer seems to exist.
Add a special case for align_offset /w stride != 1
This generalizes the previous `stride == 1` special case to apply to any
situation where the requested alignment is divisible by the stride. This
in turn allows the test case from #98809 to produce ideal assembly, along
the lines of:
leaq 15(%rdi), %rax
andq $-16, %rax
This also produces pretty high quality code for situations where the
alignment of the input pointer isn’t known:
pub unsafe fn ptr_u32(slice: *const u32) -> *const u32 {
    slice.offset(slice.align_offset(16) as isize)
}
// =>
movl %edi, %eax
andl $3, %eax
leaq 15(%rdi), %rcx
andq $-16, %rcx
subq %rdi, %rcx
shrq $2, %rcx
negq %rax
sbbq %rax, %rax
orq %rcx, %rax
leaq (%rdi,%rax,4), %rax
Here LLVM is smart enough to replace the `usize::MAX` special case with
a branch-less bitwise-OR approach, where the mask is constructed using
the neg and sbb instructions. This appears to work across various
architectures I’ve tried.
This change ends up introducing more branches and code in situations
where there is less knowledge of the arguments. For example when the
requested alignment is entirely unknown. This use-case was never really
a focus of this function, so I’m not particularly worried, especially
since llvm-mca is saying that the new code is still appreciably faster,
despite all the new branching.
Fixes #98809.
Sadly, this does not help with #72356.
Stabilize `core::ffi::CStr`, `alloc::ffi::CString`, and friends
Stabilize the `core_c_str` and `alloc_c_string` feature gates.
Change `std::ffi` to re-export these types rather than creating type
aliases, since they now have matching stability.
Stabilize `core::ffi::c_*` and re-export in `std::ffi`
This only stabilizes the base types, not the non-zero variants, since
those have their own separate tracking issue and have not gone through
FCP to stabilize.
Rollup of 5 pull requests
Successful merges:
- #99045 (improve print styles)
- #99086 (Fix display of search result crate filter dropdown)
- #99100 (Fix binary name in help message for test binaries)
- #99103 (Avoid some `&str` to `String` conversions)
- #99109 (fill new tracking issue for `feature(strict_provenance_atomic_ptr)`)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Enforce that layout size fits in isize in Layout
As it turns out, enforcing this _in APIs that already enforce `usize` overflow_ is fairly trivial. `Layout::from_size_align_unchecked` continues to "allow" sizes which (when rounded up) would overflow `isize`, but these are now declared as library UB for `Layout`, meaning that consumers of `Layout` no longer have to check this before making an allocation.
(Note that this is "immediate library UB;" IOW it is valid for a future release to make this immediate "language UB," and there is an extant patch to do so, to allow Miri to catch this misuse.)
See also #95252, [Zulip discussion](https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Layout.20Isn't.20Enforcing.20The.20isize.3A.3AMAX.20Rule).
Fixes https://github.com/rust-lang/rust/issues/95334
Some relevant quotes:
`@eddyb`, https://github.com/rust-lang/rust/pull/95252#issuecomment-1078513769
> [B]ecause of the non-trivial presence of both of these among code published on e.g. crates.io:
>
> 1. **`Layout` "producers" / `GlobalAlloc` "users"**: smart pointers (including `alloc::rc` copies with small tweaks), collections, etc.
> 2. **`Layout` "consumers" / `GlobalAlloc` "providers"**: perhaps fewer of these, but anything built on top of OS APIs like `mmap` will expose `> isize::MAX` allocations (on 32-bit hosts) if they lack extra checks
>
> IMO the only responsible option is to enforce the `isize::MAX` limit in `Layout`, which:
>
> * makes `Layout` _sound_ in terms of only ever allowing allocations where `(alloc_base_ptr: *mut u8).offset(size)` is never UB
> * frees both "producers" and "consumers" of `Layout` from manually reimplementing the checks
> * manual checks can be risky, e.g. if the final size passed to the allocator isn't the one being checked
> * this applies retroactively, fixing the overall soundness of existing code with zero transition period or _any_ changes required from users (as long as going through `Layout` is mandatory, making a "choke point")
>
>
> Feel free to quote this comment onto any relevant issue, I might not be able to keep track of developments.
`@Gankra`, https://github.com/rust-lang/rust/pull/95252#issuecomment-1078556371
> As someone who spent way too much time optimizing libcollections checks for this stuff and tried to splatter docs about it everywhere on the belief that it was a reasonable thing for people to manually take care of: I concede the point, it is not reasonable. I am wholy spiritually defeated by the fact that _liballoc_ of all places is getting this stuff wrong. This isn't throwing shade at the folks who implemented these Rc features, but rather a statement of how impractical it is to expect anyone out in the wider ecosystem to enforce them if _some of the most audited rust code in the library that defines the very notion of allocating memory_ can't even reliably do it.
>
> We need the nuclear option of Layout enforcing this rule. Code that breaks this rule is _deeply_ broken and any "regressions" from changing Layout's contract is a _correctness_ fix. Anyone who disagrees and is sufficiently motivated can go around our backs but the standard library should 100% refuse to enable them.
cc also `@RalfJung` `@rust-lang/wg-allocators`. Even though this technically supersedes #95252, those potential failure points should almost certainly still get nicer panics than just "unwrap failed" (which they would get by this PR).
It might additionally be worth recommending to users of the `Layout` API that they should ideally use `.and_then`/`?` to complete the entire layout calculation and then `panic!` from a single location at the end of the `Layout` manipulation. This reduces the overhead of the checks, and of the optimizer having to preserve the exact location of each `panic`, when conceptually they are all just one failure: allocation too big.
Probably deserves a T-lang and/or T-libs-api FCP (this technically solidifies the [objects must be no larger than `isize::MAX`](https://rust-lang.github.io/unsafe-code-guidelines/layout/scalars.html#isize-and-usize) rule further, and the UCG document says this hasn't been RFCd) and a crater run. Ideally, no code exists that will start failing with this addition; if it does, it was _likely_ (but not certainly) causing UB.
Changes the raw_vec allocation path, thus deserves a perf run as well.
I suggest hiding whitespace-only changes in the diff view.
rustdoc: Add more semantic information to impl IDs
Take over of #92745.
I fixed the last remaining issue for the links in the sidebar (mentioned by `@jsha`) and fixed the few links broken in the std/core docs.
cc `@camelid`
r? `@notriddle`
Allow arithmetic and certain bitwise ops on AtomicPtr
This is mainly to support migrating from `AtomicUsize`, for the strict provenance experiment.
This is a pretty dubious set of APIs, but it should be sufficient to allow code that's using `AtomicUsize` to manipulate a tagged pointer atomically. It's under a new feature gate, `#![feature(strict_provenance_atomic_ptr)]`, but I'm not sure if it needs its own tracking issue. I'm happy to make one, but it's not clear that it's needed.
I'm unsure if it needs changes in the various non-LLVM backends. Because we just cast things to integers anyway (and were already doing so), I doubt it.
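A sketch of the tagged-pointer use case, assuming the gated methods added here (`fetch_or`, `fetch_and`) under the new feature:
```rust
#![feature(strict_provenance_atomic_ptr)]
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let mut data = 0i32; // 4-byte aligned, so the low bits are free
    let p = AtomicPtr::new(&mut data);
    // Set and clear a tag bit atomically, without a usize round-trip
    // that would lose provenance.
    p.fetch_or(0b1, Ordering::Relaxed);
    p.fetch_and(!0b1, Ordering::Relaxed);
}
```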
API change proposal: https://github.com/rust-lang/libs-team/issues/60
Fixes #95492
ptr::copy and ptr::swap are doing untyped copies
The consensus in https://github.com/rust-lang/rust/issues/63159 seemed to be that these operations should be "untyped", i.e., they should treat the data as raw bytes, should work when these bytes violate the validity invariant of `T`, and should exactly preserve the initialization state of the bytes that are being copied. This is already somewhat implied by the description of "copying/swapping size*N bytes" (rather than "N instances of `T`").
The implementations mostly already work that way (well, for LLVM's intrinsics the documentation is not precise enough to say what exactly happens to poison, but if this ever gets clarified to something that would *not* perfectly preserve poison, then I strongly assume there will be some way to make a copy that *does* perfectly preserve poison). However, I had to adjust `swap_nonoverlapping`; after `@scottmcm`'s [recent changes](https://github.com/rust-lang/rust/pull/94212), that one (sometimes) made a typed copy. (Note that `mem::swap`, which works on mutable references, is unchanged. It is documented as "swapping the values at two mutable locations", which to me strongly indicates that it is indeed typed. It is also safe and can rely on `&mut T` pointing to a valid `T` as part of its safety invariant.)
On top of adding a test (that will be run by Miri), this PR then also adjusts the documentation to indeed stably promise the untyped semantics. I assume this means the PR has to go through t-libs (and maybe t-lang?) FCP.
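As a hedged illustration of what the untyped guarantee means in practice (not the exact test added in this PR):
```rust
use std::mem::MaybeUninit;
use std::ptr;

fn main() {
    let src: MaybeUninit<u8> = MaybeUninit::uninit();
    let mut dst: MaybeUninit<u8> = MaybeUninit::new(0);
    // An untyped copy: the uninitialized byte is copied as-is, even
    // though an uninitialized byte is not a valid `u8` value. Under the
    // semantics promised here, this is defined behavior.
    unsafe { ptr::copy(src.as_ptr(), dst.as_mut_ptr(), 1) };
    // `dst` is now uninitialized again; reading it as a `u8` would be UB.
}
```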
Fixes https://github.com/rust-lang/rust/issues/63159
[core] add `Exclusive` to sync
(discussed here: https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Adding.20.60SyncWrapper.60.20to.20std)
`Exclusive` is a wrapper that exclusively allows mutable access to the inner value if you have exclusive access to the wrapper. It acts like a compile-time mutex and holds an unconditional `Sync` implementation.
## Justification for inclusion into std
- This wrapper unblocks actual problems:
- The example that I hit was a vector of `futures::future::BoxFuture`'s causing a central struct in a script to be non-`Sync`. To work around it, you either write really difficult code, or wrap the futures in a needless mutex.
- Easy to maintain: this struct is as simple as a wrapper can get, and its `Sync` implementation has very clear reasoning
- Fills a gap: `&/&mut` are to `RwLock` as `Exclusive` is to `Mutex`
## Public API
```rust
// core::sync
#[derive(Default)]
struct Exclusive<T: ?Sized> { ... }
unsafe impl<T: ?Sized> Sync for Exclusive<T> {}

impl<T> Exclusive<T> {
    pub const fn new(t: T) -> Self;
    pub const fn into_inner(self) -> T;
}

impl<T: ?Sized> Exclusive<T> {
    pub const fn get_mut(&mut self) -> &mut T;
    pub const fn get_pin_mut(self: Pin<&mut Self>) -> Pin<&mut T>;
    pub const fn from_mut(r: &mut T) -> &mut Exclusive<T>;
    pub const fn from_pin_mut(p: Pin<&mut T>) -> Pin<&mut Exclusive<T>>;
}

impl<T: Future> Future for Exclusive<T> { ... }
impl<T> From<T> for Exclusive<T> { ... }
impl<T: ?Sized> Debug for Exclusive<T> { ... }
```
## Naming
This is a big bikeshed, but I felt that `Exclusive` captured its general purpose quite well.
## Stability and location
As this is so simple, it can be in `core`. I feel that it can be stabilized quite soon after it is merged, if the libs team feels it's reasonable to add. Also, I don't really know how unstable features work in std/core's codebases, so I might need help fixing them.
## Tips for review
The docs are probably the thing that most needs reviewing! I tried my best, but I'm sure people have more experience than me writing docs for `core`.
### Implementation:
The API is mostly pulled from https://docs.rs/sync_wrapper/latest/sync_wrapper/struct.SyncWrapper.html (which is Apache-2.0 licensed), and the implementation is trivial:
- an unsafe justification for pinning
- an unsafe justification for the `Sync` impl (mostly reasoned about by `@danielhenrymantilla` here: https://github.com/Actyx/sync_wrapper/pull/2)
- and forwarding impls, starting with derivable ones and `Future`
Simplify memory ordering intrinsics
This changes the names of the atomic intrinsics to always fully include their memory ordering arguments.
```diff
- atomic_cxchg
+ atomic_cxchg_seqcst_seqcst
- atomic_cxchg_acqrel
+ atomic_cxchg_acqrel_release
- atomic_cxchg_acqrel_failrelaxed
+ atomic_cxchg_acqrel_relaxed
// And so on.
```
- `seqcst` is no longer implied
- The failure ordering on cxchg is no longer implied in some cases, but is now always explicitly part of the name.
- `release` is no longer shortened to just `rel`. That was especially confusing, since `relaxed` also starts with `rel`.
- `acquire` is no longer shortened to just `acq`, such that the names now all match the `std::sync::atomic::Ordering` variants exactly.
- This now allows for more combinations on the compare exchange operations, such as `atomic_cxchg_acquire_release`, which is necessary for #68464.
- This PR only exposes the new possibilities through unstable intrinsics, but not yet through the stable API. That's for [a separate PR](https://github.com/rust-lang/rust/pull/98383) that requires an FCP.
Suffixes for operations with a single memory order:
| Order | Before | After |
|---------|--------------|------------|
| Relaxed | `_relaxed` | `_relaxed` |
| Acquire | `_acq` | `_acquire` |
| Release | `_rel` | `_release` |
| AcqRel | `_acqrel` | `_acqrel` |
| SeqCst | (none) | `_seqcst` |
Suffixes for compare-and-exchange operations with two memory orderings:
| Success | Failure | Before | After |
|---------|---------|--------------------------|--------------------|
| Relaxed | Relaxed | `_relaxed` | `_relaxed_relaxed` |
| Relaxed | Acquire | ❌ | `_relaxed_acquire` |
| Relaxed | SeqCst | ❌ | `_relaxed_seqcst` |
| Acquire | Relaxed | `_acq_failrelaxed` | `_acquire_relaxed` |
| Acquire | Acquire | `_acq` | `_acquire_acquire` |
| Acquire | SeqCst | ❌ | `_acquire_seqcst` |
| Release | Relaxed | `_rel` | `_release_relaxed` |
| Release | Acquire | ❌ | `_release_acquire` |
| Release | SeqCst | ❌ | `_release_seqcst` |
| AcqRel | Relaxed | `_acqrel_failrelaxed` | `_acqrel_relaxed` |
| AcqRel | Acquire | `_acqrel` | `_acqrel_acquire` |
| AcqRel | SeqCst | ❌ | `_acqrel_seqcst` |
| SeqCst | Relaxed | `_failrelaxed` | `_seqcst_relaxed` |
| SeqCst | Acquire | `_failacq` | `_seqcst_acquire` |
| SeqCst | SeqCst | (none) | `_seqcst_seqcst` |
Refactor iter adapters with less macros
Just some code cleanup. Introduced a util `and_then_or_clear` for each of the chain, flatten and fuse iter adapter impls. This reduces the code nicely for flatten, but admittedly the other modules are more of a lateral move, replacing macros with a function. But I think consistency across the modules and avoiding macros when possible is good.
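The helper is roughly this shape (a sketch; the real signature may differ):
```rust
// Run `f` on the inner value if one is present; once `f` signals
// exhaustion by returning `None`, clear the `Option` so later calls
// don't poll a spent iterator.
fn and_then_or_clear<T, U>(
    opt: &mut Option<T>,
    f: impl FnOnce(&mut T) -> Option<U>,
) -> Option<U> {
    let x = f(opt.as_mut()?);
    if x.is_none() {
        *opt = None;
    }
    x
}
```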
libcore tests: avoid int2ptr casts
We don't need any of these pointers to actually be dereferenceable, so using `ptr::invalid` should be fine. And then we can run Miri with strict provenance enforcement on the tests.
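For example, under the then-unstable `strict_provenance` APIs (a sketch; `ptr::invalid` and `addr` are the gated functions assumed here):
```rust
#![feature(strict_provenance)]
use std::ptr;

fn main() {
    // An address-only pointer that carries no provenance: fine for tests
    // that never dereference it, and accepted by Miri's strict mode.
    let p: *const u8 = ptr::invalid(0x1000);
    assert_eq!(p.addr(), 0x1000);
}
```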
This commit adds new methods that combine sequences of existing
formatting methods.
- `Formatter::debug_{tuple,struct}_field[12345]_finish`, equivalent to a
`Formatter::debug_{tuple,struct}` + N x `Debug{Tuple,Struct}::field` +
`Debug{Tuple,Struct}::finish` call sequence.
- `Formatter::debug_{tuple,struct}_fields_finish` is similar, but can
handle any number of fields by using arrays.
These new methods are all marked as `doc(hidden)` and unstable. They are
intended for the compiler's own use.
Special-casing up to 5 fields gives significantly better performance
results than always using arrays (as was tried in #95637).
The commit also changes the `Debug` deriving code to use these new methods. For
example, where the old `Debug` code for a struct with two fields would be like
this:
```
fn fmt(&self, f: &mut ::core::fmt::Formatter) -> ::core::fmt::Result {
    match *self {
        Self {
            f1: ref __self_0_0,
            f2: ref __self_0_1,
        } => {
            let debug_trait_builder = &mut ::core::fmt::Formatter::debug_struct(f, "S2");
            let _ = ::core::fmt::DebugStruct::field(debug_trait_builder, "f1", &&(*__self_0_0));
            let _ = ::core::fmt::DebugStruct::field(debug_trait_builder, "f2", &&(*__self_0_1));
            ::core::fmt::DebugStruct::finish(debug_trait_builder)
        }
    }
}
```
the new code is like this:
```
fn fmt(&self, f: &mut ::core::fmt::Formatter) -> ::core::fmt::Result {
    match *self {
        Self {
            f1: ref __self_0_0,
            f2: ref __self_0_1,
        } => ::core::fmt::Formatter::debug_struct_field2_finish(
            f,
            "S2",
            "f1",
            &&(*__self_0_0),
            "f2",
            &&(*__self_0_1),
        ),
    }
}
```
This shrinks the code produced for `Debug` instances
considerably, reducing compile times and binary sizes.
Co-authored-by: Scott McMurray <scottmcm@users.noreply.github.com>
clarify how Rust atomics correspond to C++ atomics
`@cbeuw` noted in https://github.com/rust-lang/miri/pull/1963 that the correspondence between C++ atomics and Rust atomics is not quite as obvious as one might think, since in Rust I can use `get_mut` to treat previously non-atomic data as atomic. However, I think using C++20 `atomic_ref`, we can establish a suitable relation between the two -- or do you see problems with that, `@cbeuw`? (I recall you said there was some issue, but it was deep inside that PR and Github makes it impossible to find...)
Cc `@thomcc`; not sure whom else to ping for atomic memory model things.
Rollup of 4 pull requests
Successful merges:
- #98235 (Drop magic value 3 from code)
- #98267 (Don't omit comma when suggesting wildcard arm after macro expr)
- #98276 (Mention formatting macros when encountering `ArgumentV1` method in const)
- #98296 (Add a link to the unstable book page on Generator doc comment)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Add a link to the unstable book page on Generator doc comment
This makes it easier to jump into the Generator section on the unstable book.
Signed-off-by: Yuki Okushi <jtitor@2k36.org>
Mention formatting macros when encountering `ArgumentV1` method in const
Also open to just closing this if it's overkill. There are a lot of other distracting error messages around, so maybe it's not worth fixing just this one.
Fixes #93665
Fix the generator example for `pin!()`
The previous generator example is not actually self-referential, since the reference is created after the yield.
CC #93178 (tracking issue)
Add `core::mem::copy` to complement `core::mem::drop`.
This is useful for combinators. I didn't add `clone` since you can already use `Clone::clone` in its place; `copy` has no such corresponding function.
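The whole addition is tiny; a sketch of it plus the combinator use it enables (the free `copy` below is a local stand-in for `core::mem::copy`):
```rust
// Local stand-in for the new `core::mem::copy`.
fn copy<T: Copy>(x: &T) -> T {
    *x
}

fn main() {
    let v = [1, 2, 3];
    // Pass the function directly instead of writing `|&x| x`.
    let doubled: Vec<i32> = v.iter().map(copy).map(|x| x * 2).collect();
    assert_eq!(doubled, [2, 4, 6]);
}
```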
Stabilize checked slice->str conversion functions
This PR stabilizes the following APIs as `const` functions in Rust 1.63:
```rust
// core::str
pub const fn from_utf8(v: &[u8]) -> Result<&str, Utf8Error>;
impl Utf8Error {
    pub const fn valid_up_to(&self) -> usize;
    pub const fn error_len(&self) -> Option<usize>;
}
```
Note that the `from_utf8_mut` function is not stabilized as unique references (`&mut _`) are [unstable in const context].
FCP: https://github.com/rust-lang/rust/issues/91006#issuecomment-1134593095
[unstable in const context]: https://github.com/rust-lang/rust/issues/57349
once cell renamings
This PR does the renamings proposed in https://github.com/rust-lang/rust/issues/74465#issuecomment-1153703128
- Move/rename `lazy::{OnceCell, Lazy}` to `cell::{OnceCell, LazyCell}`
- Move/rename `lazy::{SyncOnceCell, SyncLazy}` to `sync::{OnceLock, LazyLock}`
(I used `Lazy...` instead of `...Lazy` as it seems to be more consistent, easier to pronounce, etc)
`@rustbot` label +T-libs-api -T-libs
Make `std::mem::needs_drop` accept `?Sized`
This change attempts to make `needs_drop` work with types like `[u8]` and `str`.
This enables code in types like `Arc<T>` that was not possible before, such as https://github.com/rust-lang/rust/pull/97676.
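After the change, calls like these compile (a small sketch):
```rust
use std::mem::needs_drop;

fn main() {
    // Unsized types are now accepted thanks to the `?Sized` bound.
    assert!(!needs_drop::<str>());
    assert!(!needs_drop::<[u8]>());
    assert!(needs_drop::<[String]>());
}
```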
Inline `const_eval_select`
To avoid a circular link-time dependency between core and compiler
builtins when building with `-Zshare-generics`.
r? `@Amanieu`
In cases where the nth element is not unique within the slice, it is not
correct to say that the values in the returned triplet include ones for
"all elements" less/greater than that at the given index: indeed one (or
more) such values would then also contain values equal to that at the
given index.
The text proposed here clarifies exactly what is returned, but in so
doing it is also documenting an implementation detail that previously
wasn't detailed: namely that the returned slices are slices into the
reordered slice. I don't think this can be contentious, because the
lifetimes of those returned slices are bound to that of the original
(now reordered) slice—so there really isn't any other reasonable
implementation that could have this behaviour; but nevertheless it's
probably best if @rust-lang/libs-api give it a nod?
Fixes #97982
r? m-ou-se
@rustbot label +A-docs C-bug +T-libs-api
line 1352, change `self` to `*self`, other to `*other`
The current code does not cause a bug, but it is difficult to understand. It ends up calling `<&f32>::partial_cmp`, and the performance will be lower than with the changed code. I'm not sure why the current code doesn't use `(*self)` and `(*other)`; if you have some idea, please let me know.
update docs for `std::future::IntoFuture`
Ref https://github.com/rust-lang/rust/issues/67644.
This updates the docs for `IntoFuture` providing a bit more guidance on how to use it. Thanks!
Add the Provider api to core::any
This is an implementation of [RFC 3192](https://github.com/rust-lang/rfcs/pull/3192) ~~(which is yet to be merged, thus why this is a draft PR)~~. It adds an API for type-driven requests and provision of data from trait objects. A primary use case is for the `Error` trait, though that is not implemented in this PR. The only major difference to the RFC is that the functionality is added to the `any` module, rather than being in a sibling `provide_any` module (as discussed in the RFC thread).
~~Still todo: improve documentation on items, including adding examples.~~
cc `@yaahc`
This commit adds a new unstable attribute, `#[doc(tuple_varadic)]`, that
shows a 1-tuple as `(T, ...)` instead of just `(T,)`, and links to a section
in the tuple primitive docs that talks about these.
Suggest using `iter()` or `into_iter()` for `Vec`
We cannot do that for `&Vec` because `#[rustc_on_unimplemented]` is limited (it does not clean generic instantiation for references, only for ADTs).
`@rustbot` label +A-diagnostics
Use repr(C) when depending on struct layout in ptr tests
The test depends on the layout of this struct `Pair`, so it should use `repr(C)` instead of the default `repr(Rust)`.
Improve the safety docs for `CStr`
Namely, the two functions `from_ptr` and `from_bytes_with_nul_unchecked`.
Before, these functions didn't state the requirements clearly enough,
and I was not immediately able to find them, unlike for other functions.
This doesn't change the content of the docs, but simply rewords them for
clarity.
note: I'm not entirely sure about the '`ptr` must be valid for reads of `u8`.', there might be room for improvement for this (and maybe for the other docs as well 😄)
use strict provenance APIs
The stdlib was adjusted to avoid bare int2ptr casts, but recently some casts of that sort have sneaked back in. Let's fix that. :)
implement ptr.addr() via transmute
As per the discussion in https://github.com/rust-lang/unsafe-code-guidelines/issues/286, the semantics for ptr-to-int transmutes that we are going with for now is to make them strip provenance without exposing it. That's exactly what `ptr.addr()` does! So we can implement `ptr.addr()` via `transmute`. This also means that once https://github.com/rust-lang/rust/pull/97684 lands, Miri can distinguish `ptr.addr()` from `ptr.expose_addr()`, and the following code will correctly be called out as having UB (if permissive provenance mode is enabled, which will become the default once the [implementation is complete](https://github.com/rust-lang/miri/issues/2133)):
```rust
fn main() {
    let x: i32 = 3;
    let x_ptr = &x as *const i32;
    let x_usize: usize = x_ptr.addr();
    // Cast back an address that did *not* get exposed.
    let ptr = std::ptr::from_exposed_addr::<i32>(x_usize);
    assert_eq!(unsafe { *ptr }, 3); //~ ERROR Undefined Behavior: dereferencing pointer failed
}
```
This completes the Miri implementation of the new distinctions introduced by strict provenance. :)
Cc `@Gankra` -- for now I left in your `FIXME(strict_provenance_magic)` saying these should be intrinsics, but I do not necessarily agree that they should be. Or if we have an intrinsic, I think it should behave exactly like the `transmute` does, which makes one wonder why the intrinsic should be needed.
Stabilize `{slice,array}::from_ref`
This PR stabilizes the following APIs as `const` functions in Rust `1.63`:
```rust
// core::array
pub const fn from_ref<T>(s: &T) -> &[T; 1];
// core::slice
pub const fn from_ref<T>(s: &T) -> &[T];
```
Note that the `mut` versions are not stabilized as unique references (`&mut _`) are [unstable in const context].
FCP: https://github.com/rust-lang/rust/issues/90206#issuecomment-1134586665
r? rust-lang/libs-api `@rustbot` label +T-libs-api -T-libs
[unstable in const context]: https://github.com/rust-lang/rust/issues/57349
Additional `*mut [T]` methods
Split out from #94247
This adds the following methods to raw slices that already exist on regular slices
* `*mut [T]::is_empty`
* `*mut [T]::split_at_mut`
* `*mut [T]::split_at_mut_unchecked`
These methods reduce the amount of unsafe code needed to migrate `ChunksMut` and related iterators
to raw slices (#94247)
r? `@m-ou-se`
Corrected EBNF grammar for from_str
Hello! This is my first time contributing to an open-source project. I'm excited to have the chance to contribute to the rust community 🥳
I noticed an issue with the documentation for `from_str` in `f32` and `f64`. It states that "All strings that adhere to the following [EBNF](https://www.w3.org/TR/REC-xml/#sec-notation) grammar when lowercased will result in an `Ok` being returned." I believe this is incorrect for the string `"."`, which is valid for the given EBNF grammar, but does not result in an `Ok` being returned ([playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=09f891aa87963a56d3b0d715d8cbc2b4)). I have simplified the grammar in a way which fixes that, but is otherwise identical.
Previously, the `Number` part of the EBNF grammar had an option for `'.' Digit*`, which would include the string `"."`. This is not valid, and does not return an Ok as stated. The corrected version removes this, and still allows for the `'.' Digit+` case with the already existing `Digit* '.' Digit+` case.
Add unicode fast path to `is_printable`
Before, it would enter the full expensive check even for normal ascii characters. Now, it skips the check for the ascii characters in `32..127`. This range was checked manually from the current behavior.
I ran the `tracing` test suite in miri, and it was really slow. I looked at a profile, and miri spent most of the time in `core::char::methods::escape_debug_ext`, where half of that was dominated by `core::unicode::printable::is_printable`. So I optimized it here.
The tracing profile:
![The tracing profile](https://user-images.githubusercontent.com/48135649/170883650-23876e7b-3fd1-4e8b-9001-47672e06d914.svg)
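A sketch of the fast path (the `is_printable_slow` helper is a hypothetical stand-in for the generated table lookup):
```rust
fn is_printable(c: char) -> bool {
    let cp = c as u32;
    if (32..127).contains(&cp) {
        return true; // printable ASCII: skip the table walk entirely
    }
    // Everything else (including control characters) still goes through
    // the full check.
    is_printable_slow(cp)
}

// Hypothetical stand-in for the generated-table lookup.
fn is_printable_slow(_cp: u32) -> bool {
    unimplemented!("table-driven check")
}
```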
Use Box::new() instead of box syntax in library tests
The tests inside `library/*` have no reason to use `box` syntax, as it has zero performance relevance there. Therefore, we can safely remove its uses (instead of having to use alternatives like the one in #97293).