Implement iterator specialization traits on more adapters
This adds
* `TrustedLen` to `Skip` and `StepBy`
* `TrustedRandomAccess` to `Skip`
* `InPlaceIterable` and `SourceIter` to `Copied` and `Cloned`
The first two might improve performance in the compiler itself, since `skip` is used in several places. Situations that exercise the last point are probably rare, since they require an owning iterator that yields references somewhere in its pipeline.
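For the `Copied`/`Cloned` point, a hedged sketch of the pipeline shape that could benefit; whether a given pipeline actually collects in place still depends on layout compatibility between the source and target items:
```rust
fn roundtrip(refs: Vec<&u64>) -> Vec<u64> {
    // `copied()` turns the `&u64` items into `u64`s; with the new `SourceIter`
    // and `InPlaceIterable` impls this step no longer disqualifies the pipeline
    // from the in-place collect specialization (layout permitting).
    refs.into_iter().copied().collect()
}

fn main() {
    let data = [1u64, 2, 3];
    let refs: Vec<&u64> = data.iter().collect();
    assert_eq!(roundtrip(refs), vec![1, 2, 3]);
}
```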
Improvements for `Skip`:
```
# old
test iter::bench_skip_trusted_random_access ... bench: 8,335 ns/iter (+/- 90)
# new
test iter::bench_skip_trusted_random_access ... bench: 2,753 ns/iter (+/- 27)
```
Add lint against ambiguous wide pointer comparisons
This PR is the resolution of https://github.com/rust-lang/rust/issues/106447 decided in https://github.com/rust-lang/rust/issues/117717 by T-lang.
## `ambiguous_wide_pointer_comparisons`
*warn-by-default*
The `ambiguous_wide_pointer_comparisons` lint checks comparisons where the operands are wide pointers (`*const`/`*mut` to a `?Sized` type).
### Example
```rust
trait T {}
struct A; struct B;
impl T for A {} impl T for B {}

let ab = (A, B);
let a = &ab.0 as *const dyn T;
let b = &ab.1 as *const dyn T;
let _ = a == b; // warns: the comparison also compares vtable metadata
```
### Explanation
The comparison includes the pointer metadata (e.g. the vtable pointer or slice length), which may not be expected.
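Where only the addresses are meant to be compared, a hedged sketch of the unambiguous alternative the lint points towards (casting away the metadata with a thin-pointer cast; `core::ptr::addr_eq` is another option):
```rust
trait T {}

// Compare only the data/address part of two wide pointers, ignoring metadata.
fn same_address(a: *const dyn T, b: *const dyn T) -> bool {
    // Casting to a thin pointer explicitly discards the vtable metadata.
    // Alternatively: `std::ptr::addr_eq(a, b)`.
    a as *const () == b as *const ()
}
```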
-------
This PR also drops `clippy::vtable_address_comparisons` which is superseded by this one.
~~One thing: is the current naming right? `invalid` seems a bit too much.~~
Fixes https://github.com/rust-lang/rust/issues/117717
Detect redundant imports that can be eliminated.
For #117772: in order to facilitate review and modification, the checking code and the code that removes redundant imports are split into two PRs.
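A hedged illustration of the kind of import the check is meant to flag; the exact diagnostics are defined by the PR itself:
```rust
// `Option` is already available through the prelude, so this import is
// redundant and can be removed.
use std::option::Option;

fn main() {
    let _x: Option<u8> = None;
}
```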
Add support for making lib features internal
We have the notion of an "internal" lang feature: a feature that is never intended to be stabilized, and using which can cause ICEs and other issues without that being considered a bug.
This extends that idea to lib features as well. It is an alternative to https://github.com/rust-lang/rust/pull/115623: instead of using an attribute to declare lib features internal, we simply do this based on the name. Everything ending in `_internals` or `_internal` is considered internal.
Then we rename `core_intrinsics` to `core_intrinsics_internal`, which fixes https://github.com/rust-lang/rust/issues/115597.
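A minimal sketch of the naming rule, written as a hypothetical helper rather than the actual compiler code:
```rust
// Hypothetical helper mirroring the rule described above: a lib feature is
// treated as internal purely based on its name.
fn is_internal_lib_feature(name: &str) -> bool {
    name.ends_with("_internal") || name.ends_with("_internals")
}

fn main() {
    assert!(is_internal_lib_feature("core_intrinsics_internal"));
    assert!(!is_internal_lib_feature("once_cell"));
}
```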
Expand in-place iteration specialization to Flatten, FlatMap and ArrayChunks
This enables the following cases to collect in-place:
```rust
use std::num::NonZeroUsize;

let v = vec![[0u8; 4]; 1024];
let v: Vec<_> = v.into_iter().flatten().collect();

let v: Vec<Option<NonZeroUsize>> = vec![NonZeroUsize::new(0); 1024];
let v: Vec<_> = v.into_iter().flatten().collect();

let v = vec![0u8; 4096];
let v: Vec<_> = v.into_iter().array_chunks::<4>().collect();
```
The niche-filled `Option` flattening in particular should be useful in real code.
Implement `From<{&,&mut} [T; N]>` for `Vec<T>` where `T: Clone`
Currently, if `T` implements `Clone`, we can create a `Vec<T>` from an `&[T]` or an `&mut [T]`; can we also support creating a `Vec<T>` from an `&[T; N]` or an `&mut [T; N]`? Also, do I need to add `#[inline]` to the implementations?
ACP: rust-lang/libs-team#220. [Accepted]
Closes #100880.
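A quick usage sketch, assuming the impls land as described:
```rust
fn main() {
    let arr = [1, 2, 3];
    // New: build a `Vec` from a borrowed array, cloning the elements.
    let v1: Vec<i32> = Vec::from(&arr);

    let mut arr2 = [4, 5, 6];
    let v2: Vec<i32> = Vec::from(&mut arr2);

    assert_eq!(v1, [1, 2, 3]);
    assert_eq!(v2, [4, 5, 6]);
}
```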
Don't panic in ceil_char_boundary
Implementing the alternative mentioned in this comment: https://github.com/rust-lang/rust/issues/93743#issuecomment-1579935853
Since `floor_char_boundary` will always work (rounding down to the length of the string is possible), it feels best for `ceil_char_boundary` to not panic either. However, the semantics of "rounding up" past the length of the string aren't very great, which is why the method originally panicked in these cases.
Taking into account how people are using this method, it feels best to simply return the end of the string in these cases, so that the result is still a valid char boundary.
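A sketch of the intended behavior; both methods are still unstable behind the `round_char_boundary` feature, so this is illustrative:
```rust
#![feature(round_char_boundary)]

fn main() {
    let s = "héllo"; // "é" is 2 bytes, so byte index 2 falls inside it
    assert_eq!(s.floor_char_boundary(2), 1);
    assert_eq!(s.ceil_char_boundary(2), 3);

    // With this change, an index past the end rounds to `s.len()` instead of
    // panicking.
    assert_eq!(s.ceil_char_boundary(100), s.len());
}
```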
Eliminate ZST allocations in `Box` and `Vec`
This PR fixes 2 issues with `Box` and `RawVec` related to ZST allocations. Specifically, the `Allocator` trait requires that:
- If you allocate a zero-sized layout then you must later deallocate it, otherwise the allocator may leak memory.
- You cannot pass a ZST pointer to the allocator that you haven't previously allocated.
These restrictions exist because an allocator implementation is allowed to allocate non-zero amounts of memory for a zero-sized allocation. For example, `malloc` in libc does this.
Currently, ZSTs are handled differently in `Box` and `Vec`:
- `Vec` never allocates when `T` is a ZST or if the vector capacity is 0.
- `Box` just blindly passes everything on to the allocator, including ZSTs.
This causes problems due to the free conversions between `Box<[T]>` and `Vec<T>`, specifically that ZST allocations could get leaked or a dangling pointer could be passed to `deallocate`.
This PR fixes this by changing `Box` to not allocate for zero-sized values and slices. It also fixes a bug in `RawVec::shrink` where shrinking to a size of zero did not actually free the backing memory.
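A minimal sketch of the usual approach, as a hypothetical helper rather than the actual `Box`/`RawVec` code: never call the allocator for zero-sized layouts and hand out a well-aligned dangling pointer instead (the matching deallocation path then has to skip zero-sized layouts too):
```rust
use std::alloc::{alloc, handle_alloc_error, Layout};
use std::ptr::NonNull;

// Hypothetical helper: zero-sized layouts never reach the allocator.
fn allocate_or_dangle(layout: Layout) -> NonNull<u8> {
    if layout.size() == 0 {
        // SAFETY: an alignment is never zero, so this pointer is non-null.
        return unsafe { NonNull::new_unchecked(layout.align() as *mut u8) };
    }
    // SAFETY: the layout has a non-zero size.
    NonNull::new(unsafe { alloc(layout) }).unwrap_or_else(|| handle_alloc_error(layout))
}
```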
Spelling library
Split per https://github.com/rust-lang/rust/pull/110392
I can squash once people are happy w/ the changes. It's really uncommon for large sets of changes to be perfectly acceptable w/o at least some changes.
I probably won't have time to respond until tomorrow or the next day
Partial stabilization of `once_cell`
This PR aims to stabilize a portion of the `once_cell` feature:
- `core::cell::OnceCell`
- `std::cell::OnceCell` (re-export of the above)
- `std::sync::OnceLock`
This will leave `LazyCell` and `LazyLock` unstabilized, which have been moved to the `lazy_cell` feature flag.
Tracking issue: https://github.com/rust-lang/rust/issues/74465 (does not fully close, but it may make sense to move to a new issue)
Future steps for separate PRs:
- ~~Add `#[inline]` to many methods~~ #105651
- Update cranelift usage of the `once_cell` crate
- Update rust-analyzer usage of the `once_cell` crate
- Update error messages discussing once_cell
## To be stabilized API summary
```rust
// core::cell (in core/cell/once.rs)
pub struct OnceCell<T> { .. }
impl<T> OnceCell<T> {
pub const fn new() -> OnceCell<T>;
pub fn get(&self) -> Option<&T>;
pub fn get_mut(&mut self) -> Option<&mut T>;
pub fn set(&self, value: T) -> Result<(), T>;
pub fn get_or_init<F>(&self, f: F) -> &T where F: FnOnce() -> T;
pub fn into_inner(self) -> Option<T>;
pub fn take(&mut self) -> Option<T>;
}
impl<T: Clone> Clone for OnceCell<T>;
impl<T: Debug> Debug for OnceCell<T>;
impl<T> Default for OnceCell<T>;
impl<T> From<T> for OnceCell<T>;
impl<T: PartialEq> PartialEq for OnceCell<T>;
impl<T: Eq> Eq for OnceCell<T>;
```
```rust
// std::sync (in std/sync/once_lock.rs)
impl<T> OnceLock<T> {
pub const fn new() -> OnceLock<T>;
pub fn get(&self) -> Option<&T>;
pub fn get_mut(&mut self) -> Option<&mut T>;
pub fn set(&self, value: T) -> Result<(), T>;
pub fn get_or_init<F>(&self, f: F) -> &T where F: FnOnce() -> T;
pub fn into_inner(self) -> Option<T>;
pub fn take(&mut self) -> Option<T>;
}
impl<T: Clone> Clone for OnceLock<T>;
impl<T: Debug> Debug for OnceLock<T>;
impl<T> Default for OnceLock<T>;
impl<#[may_dangle] T> Drop for OnceLock<T>;
impl<T> From<T> for OnceLock<T>;
impl<T: PartialEq> PartialEq for OnceLock<T>;
impl<T: Eq> Eq for OnceLock<T>;
impl<T: RefUnwindSafe + UnwindSafe> RefUnwindSafe for OnceLock<T>;
unsafe impl<T: Send> Send for OnceLock<T>;
unsafe impl<T: Sync + Send> Sync for OnceLock<T>;
impl<T: UnwindSafe> UnwindSafe for OnceLock<T>;
```
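A short usage sketch of the surface being stabilized (nothing here beyond the APIs listed above):
```rust
use std::cell::OnceCell;
use std::sync::OnceLock;

fn main() {
    // Single-threaded lazy initialization.
    let cell: OnceCell<String> = OnceCell::new();
    let value = cell.get_or_init(|| "computed once".to_string());
    assert_eq!(value, "computed once");
    assert!(cell.set("again".to_string()).is_err()); // already initialized

    // Thread-safe counterpart, usable in a `static`.
    static CONFIG: OnceLock<String> = OnceLock::new();
    let config = CONFIG.get_or_init(|| "loaded once".to_string());
    assert_eq!(config, "loaded once");
}
```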
No longer planned as part of this PR, and moved to the `once_cell_try` feature gate:
```rust
impl<T> OnceCell<T> {
pub fn get_or_try_init<F, E>(&self, f: F) -> Result<&T, E> where F: FnOnce() -> Result<T, E>;
}
impl<T> OnceLock<T> {
pub fn get_or_try_init<F, E>(&self, f: F) -> Result<&T, E> where F: FnOnce() -> Result<T, E>;
}
```
I am new to this process so would appreciate mentorship wherever needed.
Remove ~const from alloc
There is currently an effort underway to temporarily stop using `~const Trait` so that the logic underlying const traits can be refactored with relative ease. This means it has to be removed from the standard library as well.
I have taken the initial step of removing these impls from alloc, as removing them from core is a much more tangled task. In addition, these implementations form a more-or-less logically connected group, so reverting their de-constification as a group later should also be sensible.
r? `@fee1-dead`
Change advance(_back)_by to return the remainder instead of the number of processed elements
When `advance_by` can't advance the iterator by the requested number of elements, it now returns the number of steps it fell short by instead of the number of steps it completed.
This simplifies adapters like chain, flatten or cycle because the remainder doesn't have to be calculated as the difference between requested steps and completed steps anymore.
Additionally switching from `Result<(), usize>` to `Result<(), NonZeroUsize>` reduces the size of the result and makes converting from/to a usize representing the number of remaining steps cheap.
A fully successful advance is now signalled by `Ok(())` (a remainder of zero); the error value represents the remaining number
of steps that couldn't be advanced, as opposed to the number of steps that were advanced during a partial `advance_by`.
This simplifies adapters a bit, replacing some `match`/`if` with arithmetic. Whether this is beneficial overall depends
on whether `advance_by` is mostly used as a building-block for other iterator methods and adapters or whether
we also see uses by users where `Result` might be more useful.
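A hedged sketch of why the remainder convention helps a chain-style adapter; the helpers are made up and only mimic the contract of the unstable `advance_by`:
```rust
use std::num::NonZeroUsize;

// Naive stand-in for `Iterator::advance_by` under the new convention:
// `Ok(())` on full success, `Err(remaining)` on a partial advance.
fn advance_by_naive<I: Iterator>(iter: &mut I, n: usize) -> Result<(), NonZeroUsize> {
    for i in 0..n {
        if iter.next().is_none() {
            // We advanced `i` steps, so `n - i` (nonzero, since i < n) remain.
            return Err(NonZeroUsize::new(n - i).unwrap());
        }
    }
    Ok(())
}

// A chain-like adapter can forward the remainder directly, with no
// `requested - completed` bookkeeping.
fn advance_chain<A: Iterator, B: Iterator>(
    a: &mut A,
    b: &mut B,
    n: usize,
) -> Result<(), NonZeroUsize> {
    let remaining = match advance_by_naive(a, n) {
        Ok(()) => return Ok(()),
        Err(rem) => rem.get(),
    };
    advance_by_naive(b, remaining)
}

fn main() {
    let mut a = [1, 2, 3].into_iter();
    let mut b = [4, 5, 6].into_iter();
    // Skip 5 elements across the chain: 3 from `a`, then 2 from `b`.
    assert_eq!(advance_chain(&mut a, &mut b, 5), Ok(()));
    assert_eq!(b.next(), Some(6));
}
```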
Stabilize `nonnull_slice_from_raw_parts`
FCP is done: https://github.com/rust-lang/rust/issues/71941#issuecomment-1100910416
Note that this doesn't const-stabilize `NonNull::slice_from_raw_parts`, as `slice_from_raw_parts_mut` isn't const-stabilized yet. Given #67456 and #57349, that's not likely to be available soon; in the meantime, stabilizing only the non-const method makes some sense, I think.
Closes #71941
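A quick usage sketch of the now-stable (non-const) constructor:
```rust
use std::ptr::NonNull;

fn main() {
    let mut buf = [1u8, 2, 3, 4];
    let data = NonNull::new(buf.as_mut_ptr()).unwrap();
    // Build a wide `NonNull<[u8]>` from a thin element pointer and a length.
    let slice_ptr: NonNull<[u8]> = NonNull::slice_from_raw_parts(data, buf.len());
    // SAFETY: `slice_ptr` points to `buf`, which is live and initialized.
    assert_eq!(unsafe { slice_ptr.as_ref() }, &[1, 2, 3, 4]);
}
```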