Documentation for the following methods:

- `with_capacity`
- `with_capacity_in`
- `with_capacity_and_hasher`
- `reserve`
- `reserve_exact`
- `try_reserve`
- `try_reserve_exact`

was inconsistent, and often not entirely correct, where these methods exist on the following types:

- `Vec`
- `VecDeque`
- `String`
- `OsString`
- `PathBuf`
- `BinaryHeap`
- `HashSet`
- `HashMap`
- `BufWriter`
- `LineWriter`

The docs were incorrect because the allocator is allowed to allocate more than the requested capacity in all such cases, and will frequently "allocate" much more in the case of zero-sized types. (I also checked `BufReader`, but there the docs appear to be accurate, since it actually allocates the exact capacity.)
Some effort was made to make the documentation more consistent between types as well.
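To illustrate the behavior the updated docs now describe, here is a minimal sketch (at least the requested capacity for ordinary types, effectively unlimited capacity for zero-sized types):

```rust
fn main() {
    // `with_capacity` guarantees *at least* the requested capacity;
    // the allocator is free to hand back more.
    let v: Vec<u8> = Vec::with_capacity(10);
    assert!(v.capacity() >= 10);

    // For zero-sized types no real allocation happens, so the reported
    // capacity is `usize::MAX` regardless of what was requested.
    let z: Vec<()> = Vec::with_capacity(10);
    assert_eq!(z.capacity(), usize::MAX);
}
```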
Fix with_capacity* methods for Vec
Fix *reserve* methods for Vec
Fix docs for *reserve* methods of VecDeque
Fix docs for String::with_capacity
Fix docs for *reserve* methods of String
Fix docs for OsString::with_capacity
Fix docs for *reserve* methods on OsString
Fix docs for with_capacity* methods on HashSet
Fix docs for *reserve* methods of HashSet
Fix docs for with_capacity* methods of HashMap
Fix docs for *reserve* methods on HashMap
Fix expect messages about OOM in doctests
Fix docs for BinaryHeap::with_capacity
Fix docs for *reserve* methods of BinaryHeap
Fix typos
Fix docs for with_capacity on BufWriter and LineWriter
Fix inconsistent use of `hasher` between `HashMap` and `HashSet`
Fix warning in doc test
Add test for capacity of vec with ZST
Fix doc test error
This updates the standard library's documentation to use the new syntax. The
documentation is worth updating, as it should be more idiomatic
(particularly for features like this, which are nice for users to get acquainted
with). The rest of the codebase is likely more hassle than benefit to update: it would
hurt `git blame`, and updates can generally be made by folks touching the code if
(and when) the new format makes things more readable.
A few places in the compiler and library code are updated (mostly because they had
already been done when this commit was first authored).
The `advance_by(n)` docs state that in the error case `Err(k)`, `k` is always less than `n`.
They also state that `advance_by(0)` may return `Err(0)` to indicate an exhausted iterator.
These statements are inconsistent.
Since only one implementation (`Skip`) actually made use of that, I changed it to return `Ok(())` in that case too.
While adding some tests I also found a bug in `Take::advance_back_by`.
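A sketch of the resulting contract, written against the unstable `iter_advance_by` feature with the `Result<(), usize>` signature `advance_by` had at the time (the API has since been revised, so this is illustrative only):

```rust
#![feature(iter_advance_by)]

fn main() {
    let mut it = 0..3;

    // Only 3 of the requested 5 steps are possible, so the error reports
    // how many steps were actually taken; that count is always < n.
    assert_eq!(it.advance_by(5), Err(3));

    // With the iterator exhausted, `advance_by(0)` now returns `Ok(())`
    // instead of `Err(0)`.
    assert_eq!(it.advance_by(0), Ok(()));
}
```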
This also checks the contents and not only the capacity in case IntoIter's clone implementation is changed to add capacity at the end. Extra capacity at the beginning would be needed to make InPlaceIterable work.
Co-authored-by: Giacomo Stevanato <giaco.stevanato@gmail.com>
The unsoundness is not in Peekable per se; rather, it is due to the
interaction between Peekable being able to hold an extra item
and vec::IntoIter's clone implementation shortening the allocation.
An alternative solution would be to change IntoIter's clone implementation
to keep enough spare capacity available.
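A minimal illustration of the clone behaviour at issue (illustrative only; the clone is backed by a fresh allocation sized just for the remaining elements):

```rust
fn main() {
    let mut it = vec![1, 2, 3, 4].into_iter();
    it.next(); // partially consume the iterator

    // `clone` copies only the remaining elements into a new, shorter
    // buffer; this is the "shortened allocation" described above. The
    // added test checks the clone's contents, not just its capacity.
    let clone = it.clone();
    assert_eq!(clone.as_slice(), &[2, 3, 4]);
}
```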
Discovered this kind of issue in an unrelated library.
The author copied the tests from here and AFAIK, there are no tests for this particular case.
Signed-off-by: Hanif Bin Ariffin <hanif.ariffin.4326@gmail.com>
Optimize Vec::retain
Use `copy_nonoverlapping` instead of `swap` to reduce memory writes, like what we've done in #44355 and `String::retain`.
#48065 already tried to do this optimization but it is reverted in #67300 due to bad codegen of `DrainFilter::drop`.
This PR re-implements the drop-then-move approach. I did a [benchmark](https://gist.github.com/oxalica/3360eec9376f22533fcecff02798b698) on small-no-drop, small-need-drop, and large-no-drop elements with different predicate functions. It turns out that the new implementation is >20% faster on average for almost all cases. Only 2 of the 24 cases are slower, by 3% and 5%. See the link above for more detail.
I think the regression in the may-panic cases is due to the drop guard preventing some optimizations. If it were permitted to leak elements when the predicate function or the element's `drop` panics, the new implementation should be almost always faster than the current one.
I'm not sure if we should leak on panic, since there is indeed an issue (#52267) that complained about it before.
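For illustration, a much-simplified sketch of the drop-then-move idea (this is not the actual std implementation; in particular it omits the drop guard, so a panicking predicate would leak elements here):

```rust
use std::ptr;

fn retain_sketch<T, F: FnMut(&T) -> bool>(v: &mut Vec<T>, mut f: F) {
    let len = v.len();
    let ptr = v.as_mut_ptr();
    let mut kept = 0;
    unsafe {
        // Treat the vector as empty while elements are shuffled, so that
        // dropped or moved-out slots are never observable through `v`.
        v.set_len(0);
        for i in 0..len {
            let item = ptr.add(i);
            if f(&*item) {
                // Move the retained element left with a single
                // non-overlapping copy instead of a swap.
                if i != kept {
                    ptr::copy_nonoverlapping(item, ptr.add(kept), 1);
                }
                kept += 1;
            } else {
                // Drop rejected elements in place.
                ptr::drop_in_place(item);
            }
        }
        v.set_len(kept);
    }
}

fn main() {
    let mut v = vec![1, 2, 3, 4, 5, 6];
    retain_sketch(&mut v, |&x| x % 2 == 0);
    assert_eq!(v, [2, 4, 6]);
}
```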
Implement <https://github.com/rust-lang/rfcs/pull/2714>, with the following changes from the RFC:
- Rename the method: `append_from_within` => `extend_from_within`
- Loosen the `Copy` bound to `Clone`
- Specialize in the case of `Copy`
This commit also adds a private `Vec::split_at_spare` method and uses it to implement
`Vec::spare_capacity_mut` and `Vec::extend_from_within`. This method returns two
slices: the initialized elements (same as `&mut vec[..]`) and the uninitialized but
allocated space (same as `vec.spare_capacity_mut()`).
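A brief usage sketch of the two public APIs involved (both were unstable at the time; the calls shown match the forms that were eventually stabilized):

```rust
fn main() {
    let mut v = vec![0, 1, 2, 3, 4];

    // Clone a sub-range of the vector onto its own end.
    v.extend_from_within(1..3);
    assert_eq!(v, [0, 1, 2, 3, 4, 1, 2]);

    // `spare_capacity_mut` exposes the allocated-but-uninitialized tail
    // as `&mut [MaybeUninit<T>]`.
    v.reserve(8);
    assert!(v.spare_capacity_mut().len() >= 8);
}
```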
Add PartialEq impls for Vec <-> slice
This is a follow-up to #71660 and rust-lang/rfcs#2917 to add two more missing vec/slice PartialEq impls:
```
impl<A, B> PartialEq<[B]> for Vec<A> where A: PartialEq<B> { .. }
impl<A, B> PartialEq<Vec<B>> for [A] where A: PartialEq<B> { .. }
```
Since this is insta-stable, it should go through the `@rust-lang/libs` FCP process. Note that I used version 1.47.0 for the `stable` attribute because I assume this will not merge before the 1.46.0 branch is cut next week.
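A quick usage sketch of what the two new impls allow (direct comparison in both directions, without converting first):

```rust
fn main() {
    let v = vec![1, 2, 3];
    let s: &[i32] = &[1, 2, 3];

    // `Vec<A> == [B]` and `[A] == Vec<B>` now both work directly.
    assert!(v == *s);
    assert!(*s == v);
}
```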
Fix liballoc test suite for Miri
Mostly, fix the regression introduced by https://github.com/rust-lang/rust/pull/75207 that caused slices (i.e., references) to be created to invalid memory, or to memory with aliasing pointers that we want to keep valid. @dylni this changes the signature of `check_range` to require only the length, not a full reference to the slice, which is indeed all the information this function needs.
Also reduce the size of a test introduced in https://github.com/rust-lang/rust/pull/70793 to make it not take 3 minutes in Miri.
This makes https://github.com/RalfJung/miri-test-libstd work again.
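A hedged sketch of the idea behind that signature change: resolving and bounds-checking a range only needs the length, so no reference to (possibly invalid) memory has to be created. The helper name below is illustrative, not the actual private std function:

```rust
use std::ops::{Bound, Range, RangeBounds};

// Resolve an arbitrary range against a collection's length, without ever
// touching the collection itself.
fn resolve_range<R: RangeBounds<usize>>(range: R, len: usize) -> Range<usize> {
    let start = match range.start_bound() {
        Bound::Included(&s) => s,
        Bound::Excluded(&s) => s + 1,
        Bound::Unbounded => 0,
    };
    let end = match range.end_bound() {
        Bound::Included(&e) => e + 1,
        Bound::Excluded(&e) => e,
        Bound::Unbounded => len,
    };
    assert!(start <= end && end <= len, "range out of bounds");
    start..end
}

fn main() {
    assert_eq!(resolve_range(1..3, 5), 1..3);
    assert_eq!(resolve_range(.., 5), 0..5);
}
```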
Detect overflow in proc_macro_server subspan
* Detect overflow in proc_macro_server subspan
* Add tests for overflow in Vec::drain
* Add tests for overflow in String / VecDeque operations using ranges
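An illustrative example of the overflow class those tests cover (not the exact test code): an inclusive upper bound of `usize::MAX` must panic rather than wrap when converted to an exclusive bound:

```rust
use std::panic::catch_unwind;

fn main() {
    let result = catch_unwind(|| {
        let mut v = vec![1, 2, 3];
        // `..=usize::MAX` would overflow when turned into an exclusive
        // end bound, so this must panic instead of draining out of bounds.
        v.drain(..=usize::MAX);
    });
    assert!(result.is_err());
}
```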
Optimize behavior of vec.split_off(0) (take all)
Optimization improvement to `split_off()` so that its performance meets the
intuitively expected behavior when `at == 0`, avoiding the current behavior
of copying the entire vector.
The change honors the documented behavior that the method leaves the
original vector's "previous capacity unchanged".
This improvement better supports the pattern for building and flushing a
buffer of elements, such as the following:
```rust
let mut vec = Vec::new();
loop {
    vec.push(something);
    if condition_is_met {
        process(vec.split_off(0));
    }
}
```
`Option` wrapping is the first alternative I thought of, but is much
less obvious and more verbose:
```rust
let mut capacity = 1;
let mut vec: Option<Vec<Stuff>> = None;
loop {
    vec.get_or_insert_with(|| Vec::with_capacity(capacity)).push(something);
    if condition_is_met {
        let taken = vec.take().unwrap();
        capacity = taken.capacity();
        process(taken);
    }
}
```
Directly using `mem::replace()` (instead of calling `split_off()`) could work,
but `mem::replace()` is a more advanced tool for Rust developers, and in
this case I believe developers would expect the standard library to be
sufficient for the purpose described here.
The benefit of this approach is that it does not change the
existing API contract, but improves the performance of `split_off(0)` for
`Vec`, `String` (which delegates `split_off()` to `Vec`), and any other
existing use cases.
This change adds tests to validate the behavior of `split_off()` with
regard to capacity, as originally documented, and to confirm that the behavior
still holds when `at == 0`.
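As an illustration of what those tests check (a sketch, not the exact test code added):

```rust
fn main() {
    let mut v = Vec::with_capacity(16);
    v.extend([1, 2, 3]);
    let cap = v.capacity();

    // `split_off(0)` takes every element...
    let taken = v.split_off(0);
    assert_eq!(taken, [1, 2, 3]);

    // ...while the original vector is left empty with its previous
    // capacity unchanged, even with the optimized implementation.
    assert!(v.is_empty());
    assert_eq!(v.capacity(), cap);
}
```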
The change is an implementation detail, and does not require a
documentation change, but documenting the new behavior as part of its
API contract may benefit future users.
(Let me know if I should make that documentation update.)
Note, for future consideration:
I think it would be helpful to introduce an additional method to `Vec`
(if not also to `String`):
```rust
pub fn take_all(&mut self) -> Self {
    self.split_off(0)
}
```
This would make it clearer how `Vec` supports the pattern, and make
it easier to find, since the behavior is similar to other `take()`
methods in the Rust standard library.
r? `@wesleywiser`
FYI: `@tmandry`