The first time I read the docs for `insert()`, I thought it was saying
it didn't update existing *values*, and I was confused. Reword the docs
to make it clear that `insert()` does update values.
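A sketch of the behaviour the reworded docs should spell out (using `HashMap` here as an illustration; the same wording applies to the other map types with an `insert()` method):

```rust
use std::collections::HashMap;

fn main() {
    let mut map = HashMap::new();

    // First insert: the key is new, so `insert` returns None.
    assert_eq!(map.insert("a", 1), None);

    // Insert with the same key again: the *value* is updated,
    // and the previous value is returned.
    assert_eq!(map.insert("a", 2), Some(1));
    assert_eq!(map["a"], 2);
}
```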
collections: Use slice parts in PartialEq for VecDeque
This improves == for VecDeque by using the slice representation.
This will also improve further if codegen for slice comparison improves.
Benchmark run of 1000 u64 elements, comparing for equality (all equal).
CPU time to compare the `VecDeque`s is reduced to less than 50% of what
it was before.
```
test test_eq_u64 ... bench: 1,885 ns/iter (+/- 163) = 4244 MB/s
test test_eq_new_u64 ... bench: 802 ns/iter (+/- 100) = 9975 MB/s
```
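The patch itself isn't reproduced here, but a simplified sketch of the idea behind the `VecDeque` `==` change looks like this (the element-wise fallback is my simplification; the actual change re-splits the slices so the slice comparison path is always taken):

```rust
use std::collections::VecDeque;

// Compare two deques via their contiguous slice parts when the internal
// split points line up, so the optimized slice PartialEq (and its better
// codegen for types like u64) can be used.
fn deque_eq<T: PartialEq>(a: &VecDeque<T>, b: &VecDeque<T>) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let (a_front, a_back) = a.as_slices();
    let (b_front, b_back) = b.as_slices();
    if a_front.len() == b_front.len() {
        // Split points match: two slice comparisons.
        a_front == b_front && a_back == b_back
    } else {
        // Split points differ: fall back to element-wise comparison
        // (the real patch re-splits the slices instead).
        a.iter().eq(b.iter())
    }
}
```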
When I last did a pass through the string documentation, I focused on
consistency across similar functions. Unfortunately, I missed some
details. This example was _too_ consistent: it wasn't actually accurate!
This commit fixes the docs to both be more accurate and to explain why
the return type is a `Cow<'a, str>`.
First reported here:
https://www.reddit.com/r/rust/comments/44q9ms/stringfrom_utf8_lossy_doesnt_return_a_string/
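For reference, the behaviour the docs should describe, covering both the borrowed and the owned case of the `Cow`:

```rust
use std::borrow::Cow;

fn main() {
    // Valid UTF-8: no new allocation is needed, so a borrowed &str
    // is returned.
    match String::from_utf8_lossy(b"hello") {
        Cow::Borrowed(s) => assert_eq!(s, "hello"),
        Cow::Owned(_) => unreachable!("valid input should not allocate"),
    }

    // Invalid UTF-8: the bad byte is replaced with U+FFFD, which requires
    // building a new String, so an owned value is returned.
    match String::from_utf8_lossy(b"hell\xFFo") {
        Cow::Owned(s) => assert_eq!(s, "hell\u{FFFD}o"),
        Cow::Borrowed(_) => unreachable!("invalid input must allocate"),
    }
}
```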
Hash VecDeque in its slice parts
Use .as_slices() for a more efficient code path in VecDeque's Hash impl.
This still hashes the elements in the same order.
Before/after timing of VecDeque hashing 1024 elements of u8 and
u64 shows that the `VecDeque` can now match `Vec`
(test_hashing_vec_of_u64 is the Vec run).
```
before
test test_hashing_u64 ... bench: 14,031 ns/iter (+/- 236) = 583 MB/s
test test_hashing_u8 ... bench: 7,887 ns/iter (+/- 65) = 129 MB/s
test test_hashing_vec_of_u64 ... bench: 6,578 ns/iter (+/- 76) = 1245 MB/s
after
running 5 tests
test test_hashing_u64 ... bench: 6,495 ns/iter (+/- 52) = 1261 MB/s
test test_hashing_u8 ... bench: 851 ns/iter (+/- 16) = 1203 MB/s
test test_hashing_vec_of_u64 ... bench: 6,499 ns/iter (+/- 59) = 1260 MB/s
```
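A minimal sketch of the approach behind the hashing change, written as a free function here since `Hash` for `VecDeque` can only be implemented inside libstd (whether the length is hashed as a prefix is left out):

```rust
use std::collections::VecDeque;
use std::hash::{Hash, Hasher};

// Hash the front slice and then the back slice, visiting the elements in
// the same front-to-back order as iterating the deque, while letting
// Hash::hash_slice take its fast path (e.g. a single write() for u8).
fn hash_deque<T: Hash, H: Hasher>(deque: &VecDeque<T>, state: &mut H) {
    let (front, back) = deque.as_slices();
    Hash::hash_slice(front, state);
    Hash::hash_slice(back, state);
}
```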
This commit removes the `-D warnings` flag that the makefiles passed to all
crates, replacing it with a crate attribute. We want these attributes always
applied for all our standard builds, and a crate attribute is more amenable to
Cargo-based builds as well.
Note that all `deny(warnings)` attributes are gated with a `cfg(stage0)`
attribute for now, to match the same semantics we have today.
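A hypothetical sketch of the pattern, a crate-level lint attribute applied conditionally via `cfg_attr` (the exact gating used per crate is not shown here):

```rust
// Instead of passing `-D warnings` on the command line, deny warnings via a
// crate-level attribute. The `stage0` cfg is set by the build system; gating
// on it here is illustrative only.
#![cfg_attr(stage0, deny(warnings))]

fn main() {}
```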