The recent bug found in `LinkedList` reminded me of some general cleanup
that's been waiting for some time.
- Use a loop from the front in Drop; it works just as well and without an unsafe block (see the sketch after this list)
- Change Rawlink methods to use `unsafe` in an idiomatic way. This does mean that
we need an unsafe block for each dereference of a raw link. Even so, the extent
of the unsafe-critical code is of course larger than those blocks, since safety
depends on the integrity of the whole data structure. This is a general problem we are aware of.
- Some cleanup, just to reduce the amount of Rawlink handling.
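
For reference, the front-to-back Drop loop looks roughly like this sketch; it uses a simplified singly-linked list rather than the actual doubly-linked libcollections type, and `Node`/`List` are illustrative names:

```rust
struct Node<T> {
    value: T,
    next: Option<Box<Node<T>>>,
}

struct List<T> {
    head: Option<Box<Node<T>>>,
}

impl<T> Drop for List<T> {
    fn drop(&mut self) {
        // Pop nodes off the front one at a time. Taking `next` out of each
        // node before the node is freed avoids deep recursive drops and
        // needs no unsafe block.
        let mut cur = self.head.take();
        while let Some(mut node) = cur {
            cur = node.next.take();
            // `node` and its `value` are dropped here.
        }
    }
}
```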
This wasn't marked `#[inline]`, so it wasn't being inlined cross-crate. It's
actually a no-op function, since it's just a wrapper around `mem::transmute`.
Marking it `#[inline]` means that programs calling it can see that it's a
no-op and act accordingly during optimisation.
collections: Make BinaryHeap panic safe in sift_up / sift_down
Use a struct called Hole that keeps track of an invalid location
in the vector and fills the hole on drop.
I include a run-pass test that the current BinaryHeap fails, and the new
one passes.
NOTE: The BinaryHeap will still be inconsistent after a comparison panics: it will
not necessarily have the heap property. What we fix is just that all elements will
be valid values.
This is actually a performance win -- the new code does not bother to write `zeroed()`
values into the holes; it just leaves them as they were.
Net result is something like a 5% decrease in runtime for `BinaryHeap::from_vec`. This
can be further improved by using unchecked indexing (I confirmed it makes a difference,
not a surprise with the non-sequential access going on), but let's leave that for another PR.
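
Roughly, the Hole idea looks like the following simplified sketch (the real code has a few more helpers and uses unchecked indexing internally; the names here just follow the description above):

```rust
use std::ptr;

/// A hole at `pos` in `data`: the slot is logically empty, and its old
/// value is parked in `elt` until the hole is filled again.
struct Hole<'a, T: 'a> {
    data: &'a mut [T],
    elt: Option<T>,
    pos: usize,
}

impl<'a, T> Hole<'a, T> {
    /// Creates a hole at `pos`. `pos` must be in bounds.
    unsafe fn new(data: &'a mut [T], pos: usize) -> Hole<'a, T> {
        // Move the element out bytewise; the slot keeps stale bytes.
        let elt = unsafe { ptr::read(&data[pos]) };
        Hole { data, elt: Some(elt), pos }
    }

    /// Moves the value at `index` into the hole and moves the hole to `index`.
    unsafe fn move_to(&mut self, index: usize) {
        let base = self.data.as_mut_ptr();
        unsafe { ptr::copy_nonoverlapping(base.add(index), base.add(self.pos), 1) };
        self.pos = index;
    }
}

impl<'a, T> Drop for Hole<'a, T> {
    fn drop(&mut self) {
        // Fill the hole back in, even while unwinding from a panicking
        // comparison, so every slot holds a valid value afterwards (the
        // heap property may still be violated, as noted above).
        if let Some(elt) = self.elt.take() {
            unsafe { ptr::write(self.data.as_mut_ptr().add(self.pos), elt) };
        }
    }
}

/// Sift the element at `pos` towards the root of a max-heap.
fn sift_up<T: Ord>(data: &mut [T], pos: usize) {
    unsafe {
        let mut hole = Hole::new(data, pos);
        while hole.pos > 0 {
            let parent = (hole.pos - 1) / 2;
            // A user-provided `Ord` impl may panic here; `Hole::drop` then
            // restores the element parked in `elt`.
            if hole.elt.as_ref().unwrap() <= &hole.data[parent] {
                break;
            }
            hole.move_to(parent);
        }
    } // hole dropped here, filling its slot
}
```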
Safety first 😉

Fixes #25842
The `debug_builders` feature is up for 1.1 stabilization in #24028. This commit stabilizes the API as-is with no changes.
Some nits that @alexcrichton mentioned that may be worth discussing now if anyone cares:
* Should `debug_tuple_struct` and `DebugTupleStruct` be used instead of `debug_tuple` and `DebugTuple`? It's more typing, but technically a more correct name.
* `DebugStruct` and `DebugTuple` have `field` methods while `DebugSet`, `DebugMap` and `DebugList` have `entry` methods. Should we switch those to something else for consistency?
cc @alexcrichton @aturon
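
For reference, a quick illustration of the builders being stabilized (`Point` and `Pair` are just made-up example types):

```rust
use std::fmt;

struct Point { x: i32, y: i32 }
struct Pair(i32, &'static str);

impl fmt::Debug for Point {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // `DebugStruct` uses `field(name, value)` ...
        f.debug_struct("Point")
            .field("x", &self.x)
            .field("y", &self.y)
            .finish()
    }
}

impl fmt::Debug for Pair {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // ... while `DebugTuple` (one of the names up for bikeshedding above)
        // takes unnamed fields.
        f.debug_tuple("Pair").field(&self.0).field(&self.1).finish()
    }
}

fn main() {
    println!("{:?}", Point { x: 1, y: 2 }); // Point { x: 1, y: 2 }
    println!("{:?}", Pair(1, "one"));       // Pair(1, "one")
}
```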
collections: Reorder slice methods to improve API docs
We have an evolutionary history whose traces are still visible in the
slice docs today.
Some heuristics:
* Group `method` and `method_mut` together
* Group `method` and `method_by` together
* Group by use case; roughly we have:
  - Basic interrogators (`len`)
  - Mutation (`swap`)
  - Iterators (`iter`)
  - Segmentation (`split`)
  - Searching (`contains`)
  - Permutations (`permutations`)
  - Misc (`clone_from_slice`)
Use stable code in doc examples (libcollections)
The main task is to change `String::from_str` to `String::from` in examples for `String`
(the latter constructor is stable). While I'm at it, I also remove redundant feature flags,
fix some other instances of unstable code in examples for stable methods, and remove some
use of `usize` in examples.
Prefer `String::from` over `from_str`; `String::from_str` is unstable while
`String::from` is stable. Promote the latter by using it in examples.
This is simply a migration from an unstable function to the closest stable alternative.
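
For illustration, a typical change to an example looks like this (the `from_str` in the comment is the unstable inherent `String::from_str`, not the `FromStr` trait):

```rust
fn main() {
    // Before (unstable): let s = String::from_str("hello");
    // After (stable):
    let s = String::from("hello");
    assert_eq!(s, "hello");
}
```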
Some modest running-time improvements to `std::collections::BitSet` on bit-sets of varying set-membership densities. This is work originally from [here](https://github.com/rayglover/alt_collections). (Benchmarks copied below)
```
std::collections::BitSet / alt_collections::BitSet
copy_dense ... 3.08x
copy_sparse ... 4.22x
count_dense ... 11.01x
count_sparse ... 8.11x
from_bytes ... 1.47x
intersect_dense ... 6.54x
intersect_sparse ... 4.37x
union_dense ... 5.53x
union_sparse ... 5.60x
```
The exception is `from_bytes`, which I've left unaltered since the optimization is rather obscure.
Compiling with the cpu feature `popcnt` gave a further ~10% improvement on my machine, but this wasn't factored into the benchmarks above.
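
(Not code from this PR -- just a hedged illustration of why `popcnt` helps: counting set bits block by block compiles down to one population count per word when that feature is enabled.)

```rust
fn count_set_bits(blocks: &[u32]) -> usize {
    // With `-C target-feature=+popcnt`, `count_ones` compiles to a single
    // `popcnt` instruction per block.
    blocks.iter().map(|b| b.count_ones() as usize).sum()
}

fn main() {
    assert_eq!(count_set_bits(&[0b1011, 0, u32::MAX]), 3 + 0 + 32);
}
```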
Similar improvements could be made to `BitVec`, although that would probably require more substantial changes.
criticism welcome!
Using regular pointer arithmetic to iterate collections of zero-sized types
doesn't work, because we'd get the same pointer all the time. Our
current solution is to convert the pointer to an integer, add an offset
and then convert back, but this inhibits certain optimizations.
What we should do instead is cast the pointer to an `i8` pointer, and then use an
LLVM GEP instruction without the `inbounds` flag to perform the pointer arithmetic.
This allows us to generate pointers that point outside allocated objects without
causing UB (as long as you don't dereference them), and it wraps around using
two's complement, i.e. it behaves exactly like the `wrapping_*` operations we're
currently using, with the added benefit of LLVM being able to better optimize the
resulting IR.
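
In today's terms the change amounts to something like the following sketch; `step_zst` is a hypothetical helper and the pointer methods used are the current stable ones, not necessarily what the iterator code itself uses:

```rust
use std::mem;

/// Advance a pointer over `n` zero-sized elements.
fn step_zst<T>(ptr: *const T, n: usize) -> *const T {
    debug_assert!(mem::size_of::<T>() == 0);
    // Old approach: round-trip through an integer. Correct, but the casts
    // hide the pointer's provenance from LLVM.
    //     (ptr as usize).wrapping_add(n) as *const T
    //
    // New approach: a byte-wise wrapping offset, which lowers to a GEP
    // without the `inbounds` flag. The result may point outside any
    // allocated object and wraps with two's complement semantics, which is
    // fine as long as it is never dereferenced.
    (ptr as *const u8).wrapping_add(n) as *const T
}

fn main() {
    let unit = ();
    let start: *const () = &unit;
    // Each step yields a distinct pointer value, so "start != end" style
    // iteration keeps working even though `()` has size zero.
    assert_ne!(step_zst(start, 1), start);
}
```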