This is a standard "clean out libstd" commit which removes all functionality
deprecated in 1.5 and before, as it has now all been deprecated for at least
one entire cycle.
This commit is the standard API stabilization commit for the 1.6 release cycle.
The issues and APIs below have all been through their cycle-long FCP, and the
libs team decisions are listed below. A short usage sketch of a few of the
newly stable APIs follows the list.
Stabilized APIs
* `Read::read_exact`
* `ErrorKind::UnexpectedEof` (renamed from `UnexpectedEOF`)
* libcore -- this was a bit of a nuanced stabilization: the crate itself is now
marked as `#[stable]`, and the methods appearing via traits for primitives like
`char` and `str` are now also marked as stable. Note that the extension traits
themselves remain unstable, as they're imported via the prelude. The
`try!` macro was also moved from the standard library into libcore to have the
same interface. Otherwise the functions all copy their stability from the
standard library now.
* The `#![no_std]` attribute
* `fs::DirBuilder`
* `fs::DirBuilder::new`
* `fs::DirBuilder::recursive`
* `fs::DirBuilder::create`
* `os::unix::fs::DirBuilderExt`
* `os::unix::fs::DirBuilderExt::mode`
* `vec::Drain`
* `vec::Vec::drain`
* `string::Drain`
* `string::String::drain`
* `vec_deque::Drain`
* `vec_deque::VecDeque::drain`
* `collections::hash_map::Drain`
* `collections::hash_map::HashMap::drain`
* `collections::hash_set::Drain`
* `collections::hash_set::HashSet::drain`
* `collections::binary_heap::Drain`
* `collections::binary_heap::BinaryHeap::drain`
* `Vec::extend_from_slice` (renamed from `push_all`)
* `Mutex::get_mut`
* `Mutex::into_inner`
* `RwLock::get_mut`
* `RwLock::into_inner`
* `Iterator::min_by_key` (renamed from `min_by`)
* `Iterator::max_by_key` (renamed from `max_by`)
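As a quick illustration, here is a minimal sketch exercising a few of the newly
stable APIs, written with the then-current `try!` macro (the `?` operator did
not exist yet); the `read4` helper is hypothetical and only for demonstration:

```rust
use std::io::{self, ErrorKind, Read};

// Hypothetical helper for demonstration: read exactly four bytes.
fn read4<R: Read>(r: &mut R) -> io::Result<[u8; 4]> {
    let mut buf = [0u8; 4];
    // Newly stable: fills the whole buffer or returns an error.
    try!(r.read_exact(&mut buf));
    Ok(buf)
}

fn main() {
    // A short read surfaces as the renamed `UnexpectedEof` variant.
    let mut short = io::Cursor::new(vec![1u8, 2]);
    assert_eq!(read4(&mut short).unwrap_err().kind(),
               ErrorKind::UnexpectedEof);

    // Newly stable `Vec::drain` and `extend_from_slice`.
    let mut v = vec![1, 2, 3, 4, 5];
    let middle: Vec<i32> = v.drain(1..4).collect();
    assert_eq!(middle, [2, 3, 4]);
    v.extend_from_slice(&[6, 7]); // the stable rename of `push_all`
    assert_eq!(v, [1, 5, 6, 7]);
}
```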
Deprecated APIs (a short migration sketch follows this list)
* `ErrorKind::UnexpectedEOF` (renamed to `UnexpectedEof`)
* `OsString::from_bytes`
* `OsStr::to_cstring`
* `OsStr::to_bytes`
* `fs::walk_dir` and `fs::WalkDir`
* `path::Components::peek`
* `slice::bytes::MutableByteVector`
* `slice::bytes::copy_memory`
* `Vec::push_all` (renamed to `extend_from_slice`)
* `Duration::span`
* `IpAddr`
* `SocketAddr::ip`
* `Read::tee`
* `io::Tee`
* `Write::broadcast`
* `io::Broadcast`
* `Iterator::min_by` (renamed to `min_by_key`)
* `Iterator::max_by` (renamed to `max_by_key`)
* `net::lookup_addr`
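For the renamed methods, migration is mechanical, since the old
`min_by`/`max_by` already took key functions; a minimal before/after sketch:

```rust
fn main() {
    let mut v = vec![1u8, 2];
    // Before (deprecated): v.push_all(&[3, 4]);
    v.extend_from_slice(&[3, 4]);
    assert_eq!(v, [1, 2, 3, 4]);

    let words = ["alpha", "be", "gamma"];
    // Before (deprecated): words.iter().min_by(|w| w.len())
    // The old `min_by` already took a key function, so only the name changes.
    let shortest = words.iter().min_by_key(|w| w.len());
    assert_eq!(shortest, Some(&"be"));
}
```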
New APIs (still unstable)
* `<[T]>::sort_by_key` (added to mirror `min_by_key`)
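A hedged sketch of the new method (feature-gated on nightly at the time of this
commit; it compiles as-is on later stable releases):

```rust
fn main() {
    let mut people = vec![("carol", 29), ("alice", 35), ("bob", 22)];
    // Sort by a derived key, mirroring `Iterator::min_by_key`.
    people.sort_by_key(|&(_, age)| age);
    assert_eq!(people, [("bob", 22), ("carol", 29), ("alice", 35)]);
}
```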
Closes #27585
Closes #27704
Closes #27707
Closes #27710
Closes #27711
Closes #27727
Closes #27740
Closes #27744
Closes #27799
cc #27801 (doesn't close as `Chars` is still unstable)
Closes#28968
BinaryHeap: Simplify sift down
Sift down was doing all too much work: it can stop directly when the
current element obeys the heap property in relation to its children.
In the old code, sift down didn't compare the element to sift down at
all, so it was maximally sifted down and relied on the sift up call to
put it in the correct location.
This should speed up heapify and .pop().
Also rename Hole::removed() to Hole::element()
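To make the change concrete, here is a simplified, standalone sketch of the
early-exit sift down on a slice-backed max-heap; it is only an illustration
under that assumption, not the actual libstd code, which works through the
`Hole` abstraction to avoid per-step swaps:

```rust
// Early-exit sift down: stop as soon as the element is >= both
// children, instead of sifting it all the way to a leaf and relying
// on a follow-up sift up to put it back in place.
fn sift_down<T: Ord>(heap: &mut [T], mut pos: usize) {
    let len = heap.len();
    loop {
        let left = 2 * pos + 1;
        if left >= len {
            break; // no children: heap property holds trivially
        }
        // Pick the larger child.
        let mut child = left;
        let right = left + 1;
        if right < len && heap[right] > heap[left] {
            child = right;
        }
        // Stop directly once the element obeys the heap property.
        if heap[pos] >= heap[child] {
            break;
        }
        heap.swap(pos, child);
        pos = child;
    }
}

fn main() {
    let mut h = [1, 9, 8, 3, 4, 7, 6];
    sift_down(&mut h, 0);
    assert_eq!(h[0], 9);
}
```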
sort: Fast path for already sorted data
When merging two sorted blocks `left` and `right`, if the last element in
`left` is <= the first in `right`, the blocks are already in sorted order.
Add this as an additional fast path by simply copying the whole left
block into the output and advancing the left pointer. The right block is
then treated the same way by the logic already present in the merge
loop.
This can reduce the runtime of `.sort()` to less than 50% of the previous
runtime if the data was already perfectly sorted. Sorted data with a few
swaps is also sorted more quickly than before. The overhead of one
comparison per merge seems to be negligible.
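A self-contained sketch of the fast path, assuming a simple out-of-place merge
over `Clone` slices (the real libstd merge works in place with raw pointers
and copies only the left block, letting the existing loop tail handle the
right one; this version copies both for brevity):

```rust
// Merge step with the "already sorted" fast path: if the last element
// of `left` is <= the first of `right`, the concatenation is already
// sorted, so copy the blocks wholesale instead of merging element by
// element.
fn merge<T: Ord + Clone>(left: &[T], right: &[T], out: &mut Vec<T>) {
    if let (Some(l_last), Some(r_first)) = (left.last(), right.first()) {
        if l_last <= r_first {
            out.extend_from_slice(left);
            out.extend_from_slice(right);
            return;
        }
    }
    // General path: standard stable two-pointer merge.
    let (mut i, mut j) = (0, 0);
    while i < left.len() && j < right.len() {
        if left[i] <= right[j] {
            out.push(left[i].clone());
            i += 1;
        } else {
            out.push(right[j].clone());
            j += 1;
        }
    }
    out.extend_from_slice(&left[i..]);
    out.extend_from_slice(&right[j..]);
}

fn main() {
    let mut out = Vec::new();
    merge(&[1, 2, 3], &[4, 5], &mut out); // fast path: 3 <= 4
    assert_eq!(out, [1, 2, 3, 4, 5]);

    out.clear();
    merge(&[1, 4], &[2, 3], &mut out); // general path
    assert_eq!(out, [1, 2, 3, 4]);
}
```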
Since commit 46068c9da, a call to `reserve()` on an empty vec allocates
exactly the requested capacity, so unrolling the first iteration may help
only with branch prediction.
vec: Remove old comment in Vec::drop
This comment was left over from an earlier revision of the PR, one that
was never merged. There is no ZST special-casing in `Vec::drop`.
I think this should fix the test failures in debug mode from #29492.
The assertion was written incorrectly, and I don't like the way the new assertion is written, but I _think_ it does the right thing now.