collections: Make BinaryHeap panic safe in sift_up / sift_down
Use a struct called Hole that keeps track of an invalid location
in the vector and fills the hole on drop.
I include a run-pass test that the current BinaryHeap fails, and the new
one passes.
NOTE: The BinaryHeap will still be inconsistent after a comparison fails. It will
not have the heap property. What we fix is just that elements will be valid
values.
This is actually a performance win -- the new code does not bother to write `zeroed()`
values into the holes; it just leaves them as they were.
Net result is something like a 5% decrease in runtime for `BinaryHeap::from_vec`. This
can be further improved by using unchecked indexing (I confirmed it makes a difference,
not a surprise with the non-sequential access going on), but let's leave that for another PR.
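For illustration, here is a minimal sketch of the `Hole` idea described above (field and method names are assumptions, not the exact std implementation): the hole takes ownership of one element, sifting moves other elements into the hole's current position, and `Drop` writes the owned element back, so every slot holds a valid value even if a comparison panics mid-sift.

```rust
use std::ptr;

// Sketch only: a hole at `pos` whose previous occupant is owned by `element`.
struct Hole<'a, T: 'a> {
    data: &'a mut [T],
    element: Option<T>,
    pos: usize,
}

impl<'a, T> Hole<'a, T> {
    // Safety: `pos` must be in bounds; that slot is logically invalid until drop.
    unsafe fn new(data: &'a mut [T], pos: usize) -> Hole<'a, T> {
        let element = ptr::read(&data[pos]);
        Hole { data, element: Some(element), pos }
    }

    // Move the value at `index` into the hole; the hole moves to `index`.
    unsafe fn move_to(&mut self, index: usize) {
        let value = ptr::read(&self.data[index]);
        ptr::write(&mut self.data[self.pos], value);
        self.pos = index;
    }
}

impl<'a, T> Drop for Hole<'a, T> {
    // Runs even when a comparison panics: refill the hole with the owned element.
    fn drop(&mut self) {
        let value = self.element.take().unwrap();
        let pos = self.pos;
        unsafe {
            ptr::write(&mut self.data[pos], value);
        }
    }
}
```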
Safety first 😉
Fixes #25842
The `debug_builders` feature is up for 1.1 stabilization in #24028. This commit stabilizes the API as-is with no changes.
Some nits that @alexcrichton mentioned that may be worth discussing now if anyone cares:
* Should `debug_tuple_struct` and `DebugTupleStruct` be used instead of `debug_tuple` and `DebugTuple`? It's more typing, but it is technically the more correct name.
* `DebugStruct` and `DebugTuple` have `field` methods while `DebugSet`, `DebugMap` and `DebugList` have `entry` methods. Should we switch those to something else for consistency?
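For reference, a small example of the builder API being stabilized here, using `Formatter::debug_struct` (the `Point` type is just for illustration):

```rust
use std::fmt;

struct Point { x: i32, y: i32 }

impl fmt::Debug for Point {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // `debug_struct`, `field`, and `finish` are part of the API stabilized here.
        f.debug_struct("Point")
            .field("x", &self.x)
            .field("y", &self.y)
            .finish()
    }
}

fn main() {
    println!("{:?}", Point { x: 1, y: 2 }); // prints: Point { x: 1, y: 2 }
}
```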
cc @alexcrichton @aturon
collections: Reorder slice methods to improve API docs
We have an evolutionary history whose traces are still visible in the
slice docs today.
Some heuristics:
* Group method and method_mut together
* Group method and method_by together
* Group by use case; here we have roughly:
  * Basic interrogators (`len`)
  * Mutation (`swap`)
  * Iterators (`iter`)
  * Segmentation (`split`)
  * Searching (`contains`)
  * Permutations (`permutations`)
  * Misc (`clone_from_slice`)
Use stable code in doc examples (libcollections)
The main task is to change from `String::from_str` to `String::from` in examples for `String`
(the latter constructor is stable). While I'm at it, also remove redundant feature flags,
fix some other instances of unstable code in examples for stable methods, and remove
some use of `usize` in examples too.
Prefer String::from over from_str; String::from_str is unstable while
String::from is stable. Promote the latter by using it in examples.
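For example, doc examples now use the stable constructor:

```rust
fn main() {
    // Stable constructor, now used throughout the examples:
    let s = String::from("hello");

    // Unstable at the time, so doc examples avoid it:
    // let s = String::from_str("hello");

    assert_eq!(s, "hello");
}
```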
Simply migrating an unstable function to the closest stable alternative.
Some modest running-time improvements to `std::collections::BitSet` on bit-sets of varying set-membership densities. This is work originally from [here](https://github.com/rayglover/alt_collections). (Benchmarks copied below)
```
std::collections::BitSet / alt_collections::BitSet
copy_dense ... 3.08x
copy_sparse ... 4.22x
count_dense ... 11.01x
count_sparse ... 8.11x
from_bytes ... 1.47x
intersect_dense ... 6.54x
intersect_sparse ... 4.37x
union_dense ... 5.53x
union_sparse ... 5.60x
```
The exception is `from_bytes`, which I've left unaltered since the optimization is rather obscure.
Compiling with the cpu feature `popcnt` gave a further ~10% improvement on my machine, but this wasn't factored in to the benchmarks above.
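As an illustration of the kind of change involved (a sketch, not the actual patch), counting set bits a whole storage word at a time with `count_ones` is where a `popcnt`-capable target pays off:

```rust
// Sketch: count set bits one word at a time instead of bit by bit.
// With the `popcnt` CPU feature, each `count_ones` call becomes a single instruction.
fn count_ones(words: &[u32]) -> usize {
    words.iter().map(|w| w.count_ones() as usize).sum()
}

fn main() {
    let words = [0b1011u32, 0xFFFF_0000, 0];
    assert_eq!(count_ones(&words), 3 + 16);
}
```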
Similar improvements could be made to `BitVec`, although that would probably require more substantial changes.
Criticism welcome!
Using regular pointer arithmetic to iterate collections of zero-sized types
doesn't work, because we'd get the same pointer all the time. Our
current solution is to convert the pointer to an integer, add an offset
and then convert back, but this inhibits certain optimizations.
What we should do instead is convert the pointer to an `i8` pointer, and then use an
LLVM GEP instruction without the inbounds flag to perform the pointer arithmetic.
This allows us to generate pointers that point outside allocated objects without
causing UB (as long as you don't dereference them), and it wraps around using two's
complement, i.e. it behaves exactly like the wrapping_* operations we're currently
using, with the added benefit of LLVM being able to better optimize the
resulting IR.
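A rough sketch of the idea from the surface language (the real change lives in the slice iterator internals and emits the GEP directly; `wrapping_offset` is the closest user-facing analogue, and the names below are just for illustration):

```rust
// Sketch: step a pointer to a zero-sized type by treating it as a byte pointer
// and using wrapping (non-`inbounds`) arithmetic, so the pointer value changes
// per element without ever being dereferenced.
fn bump_zst_ptr<T>(ptr: *const T) -> *const T {
    (ptr as *const i8).wrapping_offset(1) as *const T
}

fn main() {
    let unit = ();
    let start = &unit as *const ();
    let next = bump_zst_ptr(start);
    // Even though `()` has size zero, the two pointers now compare unequal,
    // which is what lets an iterator make progress.
    assert_ne!(start, next);
}
```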
The functions `BitSet::{iter, union, symmetric_difference}` each had docs claiming that `u32`s were output, when their actual output ends up being `usize`s.
r? @steveklabnik
We don't need to copy any elements if `at` is past the last element
in the map. The last element is at index `self.v.len() - 1`, so we
should not copy if `at` is greater than **or equal to** `self.v.len()`.
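A tiny illustration of the boundary, with a plain `Vec` standing in for the map's backing storage:

```rust
fn main() {
    let v = vec![10, 20, 30];
    let at = 3; // == v.len(), i.e. just past the last element at index len - 1

    // Nothing lies at or after `at`, so there is nothing to copy out.
    assert!(at >= v.len());
    assert_eq!(&v[at..], &[] as &[i32]);
}
```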
r? @Gankro
Several Minor API / Reference Documentation Fixes
- Fix a few small errors in the reference.
- Fix paper cuts in the API docs.
Fixes #24882
Fixes #25233
Fixes #25250
I was profiling my code again and this time AsRef<str> for String
was eating up a considerable chunk of my runtime; adding the inline
annotation made the program run almost twice as fast!
While I was at it, I also added the annotation to other implementations
of AsRef, as well as to AsMut.
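A hedged sketch of what the change looks like, on a stand-in type (the actual PR annotates the std impls for `String` itself, not user code):

```rust
// Stand-in type purely for illustration.
struct Name(String);

impl AsRef<str> for Name {
    #[inline] // allow the trivial conversion to be inlined across crates
    fn as_ref(&self) -> &str {
        &self.0
    }
}

fn main() {
    let n = Name(String::from("rust"));
    assert_eq!(n.as_ref(), "rust");
}
```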