Constify `Default` impls for arrays and tuples.
Allows creating arrays and tuples in a const context using the `~const Default` implementation of the inner type.
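A minimal sketch of what this enables (the feature gate names here are assumptions and the exact spelling may differ):
```
#![feature(const_default_impls, const_trait_impl)] // gate names assumed

// With `~const Default` impls for arrays and tuples, `default()` can be
// called in a const context whenever the inner types' impls are `~const`.
const ARRAY: [u8; 4] = Default::default();
const TUPLE: (u8, bool) = Default::default();
```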
Constify `slice::split_at_mut(_unchecked)`
Tracking Issue: [Tracking Issue for const_slice_split_at_mut](https://github.com/rust-lang/rust/issues/101804)
Feature gate: `#![feature(const_slice_split_at_mut)]`
Using these still requires `const_mut_refs`, but this feature removes the need to manually re-implement these functions in a user crate.
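For illustration, a sketch of the kind of `const fn` this permits (nightly only, with both gates enabled):
```
#![feature(const_slice_split_at_mut, const_mut_refs)]

// Sketch: splitting a mutable slice inside a `const fn`.
const fn halves(s: &mut [u8]) -> (&mut [u8], &mut [u8]) {
    let mid = s.len() / 2;
    s.split_at_mut(mid)
}
```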
Clarify `[T]::select_nth_unstable*` return values
In cases where the nth element is not unique within the slice, it is not
correct to say that the returned triplet contains "all elements"
less/greater than the one at the given index: one (or both) of the
returned slices would then also contain elements equal to the one at the
given index.
The text proposed here clarifies exactly what is returned, but in so
doing it is also documenting an implementation detail that previously
wasn't detailed: namely that the returned slices are slices into the
reordered slice. I don't think this can be contentious, because the
lifetimes of those returned slices are bound to that of the original
(now reordered) slice—so there really isn't any other reasonable
implementation that could have this behaviour; but nevertheless it's
probably best if `@rust-lang/libs-api` give it a nod?
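For reference, a small example of the behaviour being documented (illustrative, not taken from the PR):
```
// The returned slices borrow from the (now reordered) original slice.
let mut v = [5, 1, 3, 3, 4];
let (lesser, median, greater) = v.select_nth_unstable(2);
// `median` is the element that would sit at index 2 in sorted order; when
// it is not unique, `lesser`/`greater` may also hold elements equal to it.
assert_eq!(*median, 3);
assert_eq!(lesser.len(), 2);
assert_eq!(greater.len(), 2);
```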
Fixes #97982
r? `@m-ou-se`
`@rustbot` label +A-docs +C-bug +T-libs-api -T-libs
Make ZST checks in core/alloc more readable
There's a bunch of these checks because of special handling for ZSTs in various unsafe implementations of stuff.
This lets them be `T::IS_ZST` instead of `mem::size_of::<T>() == 0` every time, making them both more readable and more terse.
*Not* proposed for stabilization. Would be `pub(crate)` except `alloc` wants to use it too.
(And while it doesn't matter now, if we ever get something like #85836 making it a const can help codegen be simpler.)
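Roughly, the helper looks like this (a sketch; the exact trait name and shape may differ):
```
// Internal helper so any sized type exposes an `IS_ZST` constant.
pub(crate) trait SizedTypeProperties: Sized {
    const IS_ZST: bool = core::mem::size_of::<Self>() == 0;
}
impl<T> SizedTypeProperties for T {}

// Call sites become `if T::IS_ZST { ... }`
// instead of `if mem::size_of::<T>() == 0 { ... }`.
```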
Add const_closure, Constify Try trait
Adds a struct for creating const `FnMut` closures (for now just copy-pasted from my [const_closure](https://crates.io/crates/const_closure) crate).
I'm not sure if this way is how it should be done.
The `ConstFnClosure` and `ConstFnOnceClosure` structs can probably also be entirely removed.
This is then used to constify the try trait.
Not sure if I should add const_closure in its own PR and maybe make it public behind a perma-unstable feature gate.
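For context, the core idea is roughly this (a simplified sketch, not the exact code being added):
```
// A "closure" as a plain struct pairing captured state with a function
// that receives that state explicitly; the unstable const-trait machinery
// then wires this up so it can be called like a `~const FnMut`.
pub struct ConstFnMutClosure<Data, F> {
    /// Captured state of the closure.
    pub data: Data,
    /// Function applied to the captured state on each call.
    pub func: F,
}
```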
cc `@fee1-dead` `@rust-lang/wg-const-eval`
Refactor some `std` code that works with pointer offsets
This PR replaces `pointer::offset` in standard library with `pointer::add` and `pointer::sub`, [re]moving some casts and using `.addr()` while we are at it.
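An illustrative before/after of the kind of change involved (not an excerpt from the actual diff):
```
// Before: signed `offset` forces a cast on an unsigned index.
unsafe fn nth_before(ptr: *const u8, index: usize) -> *const u8 {
    unsafe { ptr.offset(index as isize) }
}

// After: `add` takes `usize` directly, so the cast disappears.
unsafe fn nth_after(ptr: *const u8, index: usize) -> *const u8 {
    unsafe { ptr.add(index) }
}
```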
This is a more complicated refactor than all other sibling PRs, so take a closer look when reviewing, please 😃 (though I've checked this multiple times and it looks fine).
r? `@scottmcm`
_split off from #100746, continuation of #100822_
Add `#[inline]` to trivial functions on `core::sync::Exclusive`
When optimizing for size, functions like these sometimes don't get inlined even though they're generic. This is bad because they're no-ops.
The only dodgy one is `poll`, I guess, since it forwards to the inner poll, but it's not like we're using `#[inline(always)]` here.
Use internal iteration in `Iterator` comparison methods
Updates the `Iterator` methods `cmp_by`, `partial_cmp_by`, and `eq_by` to use internal iteration on `self`. I've also extracted their shared logic into a private helper function `iter_compare`, which will either short-circuit once the comparison result is known or return the comparison of the lengths of the iterators.
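The helper's shape is roughly this (a hedged sketch, not the actual private function):
```
use std::cmp::Ordering;

// Drive `a` with internal iteration (`try_fold`), pulling from `b` on
// demand and short-circuiting as soon as the result is decided.
fn iter_compare<A, B, F>(mut a: A, mut b: B, mut cmp: F) -> Ordering
where
    A: Iterator,
    B: Iterator<Item = A::Item>,
    F: FnMut(&A::Item, &A::Item) -> Ordering,
{
    let res = a.try_fold((), |(), x| match b.next() {
        None => Err(Ordering::Greater), // `a` is longer
        Some(y) => match cmp(&x, &y) {
            Ordering::Equal => Ok(()),
            other => Err(other), // first non-equal pair decides
        },
    });
    match res {
        Err(ord) => ord,
        // `a` exhausted: the lengths decide.
        Ok(()) => match b.next() {
            Some(_) => Ordering::Less,
            None => Ordering::Equal,
        },
    }
}
```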
This change also indirectly benefits calls to `cmp`, `partial_cmp`, `eq`, `lt`, `le`, `gt`, and `ge`.
Unsurprising benchmark results: iterators that benefit from internal iteration (like `Chain`) see a speedup, while other iterators are unaffected.
```
name                           before ns/iter  after ns/iter  diff ns/iter  diff %   speedup
iter::bench_chain_partial_cmp         208,301         54,978      -153,323  -73.61%   x 3.79
iter::bench_partial_cmp                55,527         55,702           175    0.32%   x 1.00
iter::bench_lt                         55,502         55,322          -180   -0.32%   x 1.00
```
Add examples to `bool::then` and `bool::then_some`
Added examples to `bool::then` and `bool::then_some` to show the distinction between the eager evaluation of `bool::then_some` and the lazy evaluation of `bool::then`.
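In the same spirit, a small illustration of the difference (not necessarily the exact examples added to the docs):
```
let mut evaluated = 0;

// `then_some` is eagerly evaluated: its argument is computed regardless.
let _ = false.then_some({ evaluated += 1; "eager" });
assert_eq!(evaluated, 1);

// `then` is lazily evaluated: the closure only runs when `self` is true.
let _ = false.then(|| { evaluated += 1; "lazy" });
assert_eq!(evaluated, 1); // unchanged: the closure never ran
```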
Extend const_convert with const {FromResidual, Try} for ControlFlow.
Very small change, so I just used the existing `const_convert` feature flag. #88674
Newly const API:
```
impl<B, C> const ops::Try for ControlFlow<B, C>;
impl<B, C> const ops::FromResidual for ControlFlow<B, C>;
```
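A hedged sketch of what this enables (nightly only; gate names assumed):
```
#![feature(const_trait_impl, const_convert)] // gate names assumed

use std::ops::ControlFlow;

// With `const Try`/`FromResidual`, the `?` operator works on
// `ControlFlow` inside a `const fn`.
const fn bump(flow: ControlFlow<u32, u32>) -> ControlFlow<u32, u32> {
    let v = flow?; // desugars through `Try::branch` / `FromResidual`
    ControlFlow::Continue(v + 1)
}
```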
`@usbalbin` I hope it is ok that I added to your feature.
Optimize `array::IntoIter`
`.into_iter()` on arrays was slower than it needed to be (especially compared to the slice iterator) since it uses `Range<usize>`, which needs to handle degenerate ranges like `10..4`.
This PR adds an internal `IndexRange` type that's like `Range<usize>` but with a safety invariant that means it doesn't need to worry about those cases -- it only handles `start <= end` -- and thus can give LLVM more information to optimize better.
I added one simple demonstration of the improvement as a codegen test.
(`vec::IntoIter` uses pointers instead of indexes, so doesn't have this problem, but that only works because its elements are boxed. `array::IntoIter` can't use pointers because that would keep it from being movable.)
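A hedged sketch of the invariant that makes this work (not the actual std type):
```
// Like `Range<usize>`, but constructed so `start <= end` always holds.
struct IndexRange {
    start: usize,
    end: usize, // safety invariant: start <= end
}

impl IndexRange {
    fn next(&mut self) -> Option<usize> {
        if self.start < self.end {
            let i = self.start;
            // Thanks to the invariant this can never pass `end`, which
            // gives LLVM strictly more to work with than `Range<usize>`.
            self.start = i + 1;
            Some(i)
        } else {
            None
        }
    }
}
```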
Make `from_waker`, `waker` and `from_raw` unstably `const`
Make `Context::from_waker`, `Context::waker`, and `Waker::from_raw` `const`.
Also added a small test.
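A hedged sketch of the kind of thing this permits (the gate name is assumed; the actual test may differ):
```
#![feature(const_waker)] // gate name assumed

use std::ptr;
use std::task::{Context, RawWaker, RawWakerVTable, Waker};

// A no-op raw waker; `RawWaker::new` and `RawWakerVTable::new` were
// already `const`.
const NOOP_RAW: RawWaker = RawWaker::new(ptr::null(), &NOOP_VTABLE);
const NOOP_VTABLE: RawWakerVTable =
    RawWakerVTable::new(|_| NOOP_RAW, |_| {}, |_| {}, |_| {});

// `Waker::from_raw` in a constant context is what this change allows.
static NOOP_WAKER: Waker = unsafe { Waker::from_raw(NOOP_RAW) };

fn main() {
    // `Context::from_waker` and `Context::waker` are likewise `const` now.
    let cx = Context::from_waker(&NOOP_WAKER);
    let _ = cx.waker();
}
```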
Tone down explanation on RefCell::get_mut
The language around `RefCell::get_mut` is remarkably sketchy and especially to the novice seems to quite strongly discourage using the method ("be cautious", "Also, please be aware", "special circumstances", "usually not what you want"). It was added six years ago in #40634 due to confusion about when to use `get_mut` and `borrow_mut`.
While its signature limits the use-cases for `get_mut`, there is no chance for a safety footgun, and readers can be made aware of `borrow_mut` more softly. I've also just sent a [PR](https://github.com/rust-lang/rust-clippy/issues/9044) to lint situations where `get_mut` could be used to improve ergonomics and performance.
So this PR tones down the language around `get_mut` and also brings it more in line with [`std::sync::Mutex::get_mut()`](https://doc.rust-lang.org/stable/std/sync/struct.Mutex.html#method.get_mut).
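For reference, the distinction in code (illustrative only):
```
use std::cell::RefCell;

// With exclusive access (`&mut`), `get_mut` borrows statically:
// no runtime bookkeeping, and it can never panic.
fn clear(cell: &mut RefCell<Vec<i32>>) {
    cell.get_mut().clear();
}

// With only shared access, `borrow_mut` is checked at runtime and
// panics if the value is already borrowed.
fn push(cell: &RefCell<Vec<i32>>, x: i32) {
    cell.borrow_mut().push(x);
}
```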
array docs - advertise how to get array from slice
On my first Rust project, I spent more time than I care to admit figuring out how to efficiently get an array from a slice. Update the array documentation to explain this a bit more clearly.
(As a side note, it's a bit unfortunate that get-array-from-slice is only available via trait since that means it can't be used from const functions yet.)
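The pattern in question, for the record (the `TryFrom`/`TryInto` impls between slices and arrays):
```
use std::convert::TryInto; // in the prelude since Rust 2021

let slice: &[i32] = &[1, 2, 3, 4, 5];

// Owned array from a sub-slice (elements are copied):
let head: [i32; 3] = slice[..3].try_into().expect("wrong length");

// Borrowed array reference, no copying:
let head_ref: &[i32; 3] = slice[..3].try_into().expect("wrong length");

assert_eq!(head, [1, 2, 3]);
assert_eq!(head_ref, &[1, 2, 3]);
```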