// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// This is pretty much entirely stolen from TreeSet, since BTreeMap has an identical interface
// to TreeMap
use core::prelude::*;
use core::borrow::BorrowFrom;
use core::cmp::Ordering::{self, Less, Greater, Equal};
use core::default::Default;
use core::fmt::Debug;
use core::fmt;
use core::iter::{Peekable, Map, FromIterator, IntoIterator};
use core::ops::{BitOr, BitAnd, BitXor, Sub};
use btree_map::{BTreeMap, Keys};
use Bound;
// FIXME(conventions): implement bounded iterators
/// A set based on a B-Tree.
///
/// See BTreeMap's documentation for a detailed discussion of this collection's performance
/// benefits and drawbacks.
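// NOTE: illustrative doc example sketched in below; the `books` set and its
// contents are hypothetical and not taken from the upstream documentation.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// // BTreeSet keeps its elements sorted, so iteration yields them in ascending order.
/// let mut books = BTreeSet::new();
/// books.insert("A Dance With Dragons");
/// books.insert("To Kill a Mockingbird");
///
/// for book in books.iter() {
///     println!("{}", book);
/// }
/// assert_eq!(books.len(), 2);
/// ```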
#[derive(Clone, Hash, PartialEq, Eq, Ord, PartialOrd)]
#[stable(feature = "rust1", since = "1.0.0")]
pub struct BTreeSet<T> {
map: BTreeMap<T, ()>,
}
/// An iterator over a BTreeSet's items.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Iter<'a, T: 'a> {
iter: Keys<'a, T, ()>
}
/// An owning iterator over a BTreeSet's items.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct IntoIter<T> {
iter: Map<::btree_map::IntoIter<T, ()>, fn((T, ())) -> T>
}
/// An iterator over a sub-range of BTreeSet's items.
pub struct Range<'a, T: 'a> {
iter: Map<::btree_map::Range<'a, T, ()>, fn((&'a T, &'a ())) -> &'a T>
}
/// A lazy iterator producing elements in the set difference (in-order).
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Difference<'a, T:'a> {
a: Peekable<Iter<'a, T>>,
b: Peekable<Iter<'a, T>>,
}
/// A lazy iterator producing elements in the set symmetric difference (in-order).
#[stable(feature = "rust1", since = "1.0.0")]
pub struct SymmetricDifference<'a, T:'a> {
a: Peekable<Iter<'a, T>>,
b: Peekable<Iter<'a, T>>,
}
/// A lazy iterator producing elements in the set intersection (in-order).
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Intersection<'a, T:'a> {
a: Peekable<Iter<'a, T>>,
b: Peekable<Iter<'a, T>>,
}
/// A lazy iterator producing elements in the set union (in-order).
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Union<'a, T:'a> {
a: Peekable<Iter<'a, T>>,
b: Peekable<Iter<'a, T>>,
}
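// NOTE (sketch, not taken from this file): the `Iterator` impls for the four
// set-operation iterators above presumably merge their two `Peekable` inputs by
// comparing the current heads; e.g. for `Intersection`, advance `a` while its
// head is `Less` than `b`'s, advance `b` while it is `Greater`, and yield the
// element once the heads compare `Equal`.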
impl<T: Ord> BTreeSet<T> {
/// Makes a new BTreeSet with a reasonable choice of B.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut set: BTreeSet<i32> = BTreeSet::new();
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn new() -> BTreeSet<T> {
BTreeSet { map: BTreeMap::new() }
}
/// Makes a new BTreeSet with the given B.
///
/// B cannot be less than 2.
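// NOTE: illustrative doc example sketched in below; the choice of B = 6 is
// arbitrary and not taken from the upstream documentation.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut set: BTreeSet<i32> = BTreeSet::with_b(6);
/// set.insert(1);
/// assert!(set.contains(&1));
/// ```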
#[unstable(feature = "collections",
reason = "probably want this to be on the type, eventually")]
pub fn with_b(b: usize) -> BTreeSet<T> {
BTreeSet { map: BTreeMap::with_b(b) }
}
}
impl<T> BTreeSet<T> {
/// Gets an iterator over the BTreeSet's contents.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let set: BTreeSet<usize> = [1, 2, 3, 4].iter().map(|&x| x).collect();
///
/// for x in set.iter() {
/// println!("{}", x);
/// }
///
/// let v: Vec<usize> = set.iter().map(|&x| x).collect();
/// assert_eq!(v, vec![1, 2, 3, 4]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn iter(&self) -> Iter<T> {
Iter { iter: self.map.keys() }
}
/// Gets an iterator for moving out the BTreeSet's contents.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let set: BTreeSet<usize> = [1, 2, 3, 4].iter().map(|&x| x).collect();
///
/// let v: Vec<usize> = set.into_iter().collect();
/// assert_eq!(v, vec![1, 2, 3, 4]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn into_iter(self) -> IntoIter<T> {
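// The local `first` item below is coerced to a plain `fn` pointer so that the
// `Map` adaptor has a nameable type, matching the field declared on `IntoIter`.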
fn first<A, B>((a, _): (A, B)) -> A { a }
let first: fn((T, ())) -> T = first; // coerce to fn pointer
IntoIter { iter: self.map.into_iter().map(first) }
}
}
impl<T: Ord> BTreeSet<T> {
/// Constructs a double-ended iterator over a sub-range of elements in the set, starting
/// at min, and ending at max. If min is `Unbounded`, then it will be treated as "negative
/// infinity", and if max is `Unbounded`, then it will be treated as "positive infinity".
/// Thus range(Unbounded, Unbounded) will yield the whole collection.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
/// use std::collections::Bound::{Included, Unbounded};
///
/// let mut set = BTreeSet::new();
/// set.insert(3);
/// set.insert(5);
/// set.insert(8);
/// for &elem in set.range(Included(&4), Included(&8)) {
/// println!("{}", elem);
/// }
/// assert_eq!(Some(&5), set.range(Included(&4), Unbounded).next());
/// ```
#[unstable(feature = "collections",
reason = "matches collection reform specification, waiting for dust to settle")]
pub fn range<'a>(&'a self, min: Bound<&T>, max: Bound<&T>) -> Range<'a, T> {
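// Same fn-pointer coercion as in `into_iter`: it keeps the `Map` adaptor's
// type nameable so that `Range` can store it.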
fn first<A, B>((a, _): (A, B)) -> A { a }
let first: fn((&'a T, &'a ())) -> &'a T = first; // coerce to fn pointer
Range { iter: self.map.range(min, max).map(first) }
}
}
impl<T: Ord> BTreeSet<T> {
/// Visits the values representing the difference, in ascending order.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut a = BTreeSet::new();
/// a.insert(1);
/// a.insert(2);
///
/// let mut b = BTreeSet::new();
/// b.insert(2);
/// b.insert(3);
///
/// let diff: Vec<usize> = a.difference(&b).cloned().collect();
/// assert_eq!(diff, vec![1]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn difference<'a>(&'a self, other: &'a BTreeSet<T>) -> Difference<'a, T> {
Difference{a: self.iter().peekable(), b: other.iter().peekable()}
}
/// Visits the values representing the symmetric difference, in ascending order.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut a = BTreeSet::new();
/// a.insert(1);
/// a.insert(2);
///
/// let mut b = BTreeSet::new();
/// b.insert(2);
/// b.insert(3);
///
/// let sym_diff: Vec<usize> = a.symmetric_difference(&b).cloned().collect();
/// assert_eq!(sym_diff, vec![1, 3]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn symmetric_difference<'a>(&'a self, other: &'a BTreeSet<T>)
-> SymmetricDifference<'a, T> {
SymmetricDifference{a: self.iter().peekable(), b: other.iter().peekable()}
}
/// Visits the values representing the intersection, in ascending order.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut a = BTreeSet::new();
/// a.insert(1);
/// a.insert(2);
///
/// let mut b = BTreeSet::new();
/// b.insert(2);
/// b.insert(3);
///
/// let intersection: Vec<usize> = a.intersection(&b).cloned().collect();
/// assert_eq!(intersection, vec![2]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn intersection<'a>(&'a self, other: &'a BTreeSet<T>)
-> Intersection<'a, T> {
Intersection{a: self.iter().peekable(), b: other.iter().peekable()}
}
/// Visits the values representing the union, in ascending order.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut a = BTreeSet::new();
/// a.insert(1);
///
/// let mut b = BTreeSet::new();
/// b.insert(2);
///
/// let union: Vec<usize> = a.union(&b).cloned().collect();
/// assert_eq!(union, vec![1, 2]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn union<'a>(&'a self, other: &'a BTreeSet<T>) -> Union<'a, T> {
Union{a: self.iter().peekable(), b: other.iter().peekable()}
}
/// Return the number of elements in the set
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut v = BTreeSet::new();
/// assert_eq!(v.len(), 0);
/// v.insert(1);
/// assert_eq!(v.len(), 1);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn len(&self) -> usize { self.map.len() }
/// Returns true if the set contains no elements
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut v = BTreeSet::new();
/// assert!(v.is_empty());
/// v.insert(1);
/// assert!(!v.is_empty());
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_empty(&self) -> bool { self.len() == 0 }
/// Clears the set, removing all values.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut v = BTreeSet::new();
/// v.insert(1);
/// v.clear();
/// assert!(v.is_empty());
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn clear(&mut self) {
self.map.clear()
}
/// Returns `true` if the set contains a value.
///
/// The value may be any borrowed form of the set's value type,
/// but the ordering on the borrowed form *must* match the
/// ordering on the value type.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let set: BTreeSet<i32> = [1, 2, 3].iter().map(|&x| x).collect();
/// assert_eq!(set.contains(&1), true);
/// assert_eq!(set.contains(&4), false);
/// ```
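// NOTE: additional example sketched in below (not in the upstream docs) to
// illustrate the borrowed-form lookup described above, using owned `String`
// keys queried by `&str`.
///
/// ```
/// use std::collections::BTreeSet;
///
/// let set: BTreeSet<String> = ["alpha".to_string(), "beta".to_string()]
///     .iter().cloned().collect();
/// assert_eq!(set.contains("alpha"), true);
/// assert_eq!(set.contains("gamma"), false);
/// ```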
#[stable(feature = "rust1", since = "1.0.0")]
pub fn contains<Q: ?Sized>(&self, value: &Q) -> bool where Q: BorrowFrom<T> + Ord {
self.map.contains_key(value)
}
/// Returns `true` if the set has no elements in common with `other`.
/// This is equivalent to checking for an empty intersection.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let a: BTreeSet<int> = [1, 2, 3].iter().map(|&x| x).collect();
/// let mut b: BTreeSet<int> = BTreeSet::new();
///
/// assert_eq!(a.is_disjoint(&b), true);
/// b.insert(4);
/// assert_eq!(a.is_disjoint(&b), true);
/// b.insert(1);
/// assert_eq!(a.is_disjoint(&b), false);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_disjoint(&self, other: &BTreeSet<T>) -> bool {
self.intersection(other).next().is_none()
}
/// Returns `true` if the set is a subset of another.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let sup: BTreeSet<int> = [1, 2, 3].iter().map(|&x| x).collect();
/// let mut set: BTreeSet<int> = BTreeSet::new();
///
/// assert_eq!(set.is_subset(&sup), true);
/// set.insert(2);
/// assert_eq!(set.is_subset(&sup), true);
/// set.insert(4);
/// assert_eq!(set.is_subset(&sup), false);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_subset(&self, other: &BTreeSet<T>) -> bool {
// Stolen from TreeMap
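        // Walk both sorted iterators in lockstep: advance `other` until it
        // either matches the current element of `self` (then advance `self`
        // too) or overshoots it (then `self` is not a subset). If `other`
        // runs out while `self` still has an unmatched element, `self` is
        // not a subset either.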
let mut x = self.iter();
let mut y = other.iter();
let mut a = x.next();
let mut b = y.next();
while a.is_some() {
if b.is_none() {
return false;
}
let a1 = a.unwrap();
let b1 = b.unwrap();
match b1.cmp(a1) {
Less => (),
Greater => return false,
Equal => a = x.next(),
}
b = y.next();
}
true
}
/// Returns `true` if the set is a superset of another.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let sub: BTreeSet<int> = [1, 2].iter().map(|&x| x).collect();
/// let mut set: BTreeSet<int> = BTreeSet::new();
///
/// assert_eq!(set.is_superset(&sub), false);
///
/// set.insert(0);
/// set.insert(1);
/// assert_eq!(set.is_superset(&sub), false);
///
/// set.insert(2);
/// assert_eq!(set.is_superset(&sub), true);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_superset(&self, other: &BTreeSet<T>) -> bool {
other.is_subset(self)
}
/// Adds a value to the set. Returns `true` if the value was not already
/// present in the set.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut set = BTreeSet::new();
///
/// assert_eq!(set.insert(2), true);
/// assert_eq!(set.insert(2), false);
/// assert_eq!(set.len(), 1);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn insert(&mut self, value: T) -> bool {
self.map.insert(value, ()).is_none()
}
/// Removes a value from the set. Returns `true` if the value was
/// present in the set.
///
/// The value may be any borrowed form of the set's value type,
/// but the ordering on the borrowed form *must* match the
/// ordering on the value type.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut set = BTreeSet::new();
///
/// set.insert(2);
/// assert_eq!(set.remove(&2), true);
/// assert_eq!(set.remove(&2), false);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn remove<Q: ?Sized>(&mut self, value: &Q) -> bool where Q: BorrowFrom<T> + Ord {
self.map.remove(value).is_some()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Ord> FromIterator<T> for BTreeSet<T> {
fn from_iter<Iter: Iterator<Item=T>>(iter: Iter) -> BTreeSet<T> {
let mut set = BTreeSet::new();
set.extend(iter);
set
}
}
impl<T> IntoIterator for BTreeSet<T> {
type Iter = IntoIter<T>;
fn into_iter(self) -> IntoIter<T> {
self.into_iter()
}
}
impl<'a, T> IntoIterator for &'a BTreeSet<T> {
type Iter = Iter<'a, T>;
fn into_iter(self) -> Iter<'a, T> {
self.iter()
}
}
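// For illustration: both impls are thin adaptors over the set's own
// `iter`/`into_iter` constructors (assumed to be defined earlier in this
// module). Inherent methods take precedence during method resolution, so the
// `self.into_iter()` call above dispatches to the inherent method and is not
// recursive. Callers can then iterate either by borrowing or by consuming:
//
//     let set: BTreeSet<int> = [1, 2, 3].iter().map(|&x| x).collect();
//     for x in (&set).into_iter() { /* x: &int, set is only borrowed */ }
//     for x in set.into_iter() { /* x: int, set is consumed */ }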
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Ord> Extend<T> for BTreeSet<T> {
#[inline]
fn extend<Iter: Iterator<Item=T>>(&mut self, iter: Iter) {
for elem in iter {
self.insert(elem);
}
}
}
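// For illustration: `extend` is what the `FromIterator` impl above builds on,
// and it can also be called directly to merge any iterator into an existing
// set (in this era it takes the iterator itself, not an `IntoIterator`):
//
//     let mut set = BTreeSet::new();
//     set.extend(vec![3, 1, 2].into_iter());
//     assert_eq!(set.len(), 3);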
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Ord> Default for BTreeSet<T> {
#[stable(feature = "rust1", since = "1.0.0")]
fn default() -> BTreeSet<T> {
BTreeSet::new()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, 'b, T: Ord + Clone> Sub<&'b BTreeSet<T>> for &'a BTreeSet<T> {
type Output = BTreeSet<T>;
/// Returns the difference of `self` and `rhs` as a new `BTreeSet<T>`.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let a: BTreeSet<int> = vec![1, 2, 3].into_iter().collect();
/// let b: BTreeSet<int> = vec![3, 4, 5].into_iter().collect();
///
/// let result: BTreeSet<int> = &a - &b;
/// let result_vec: Vec<int> = result.into_iter().collect();
/// assert_eq!(result_vec, vec![1, 2]);
/// ```
fn sub(self, rhs: &BTreeSet<T>) -> BTreeSet<T> {
self.difference(rhs).cloned().collect()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, 'b, T: Ord + Clone> BitXor<&'b BTreeSet<T>> for &'a BTreeSet<T> {
type Output = BTreeSet<T>;
/// Returns the symmetric difference of `self` and `rhs` as a new `BTreeSet<T>`.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let a: BTreeSet<int> = vec![1, 2, 3].into_iter().collect();
/// let b: BTreeSet<int> = vec![2, 3, 4].into_iter().collect();
///
/// let result: BTreeSet<int> = &a ^ &b;
/// let result_vec: Vec<int> = result.into_iter().collect();
/// assert_eq!(result_vec, vec![1, 4]);
/// ```
fn bitxor(self, rhs: &BTreeSet<T>) -> BTreeSet<T> {
self.symmetric_difference(rhs).cloned().collect()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, 'b, T: Ord + Clone> BitAnd<&'b BTreeSet<T>> for &'a BTreeSet<T> {
type Output = BTreeSet<T>;
/// Returns the intersection of `self` and `rhs` as a new `BTreeSet<T>`.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let a: BTreeSet<int> = vec![1, 2, 3].into_iter().collect();
/// let b: BTreeSet<int> = vec![2, 3, 4].into_iter().collect();
///
/// let result: BTreeSet<int> = &a & &b;
/// let result_vec: Vec<int> = result.into_iter().collect();
/// assert_eq!(result_vec, vec![2, 3]);
/// ```
fn bitand(self, rhs: &BTreeSet<T>) -> BTreeSet<T> {
self.intersection(rhs).cloned().collect()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, 'b, T: Ord + Clone> BitOr<&'b BTreeSet<T>> for &'a BTreeSet<T> {
type Output = BTreeSet<T>;
/// Returns the union of `self` and `rhs` as a new `BTreeSet<T>`.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let a: BTreeSet<int> = vec![1, 2, 3].into_iter().collect();
/// let b: BTreeSet<int> = vec![3, 4, 5].into_iter().collect();
///
/// let result: BTreeSet<int> = &a | &b;
/// let result_vec: Vec<int> = result.into_iter().collect();
/// assert_eq!(result_vec, vec![1, 2, 3, 4, 5]);
/// ```
fn bitor(self, rhs: &BTreeSet<T>) -> BTreeSet<T> {
self.union(rhs).cloned().collect()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Debug> Debug for BTreeSet<T> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
try!(write!(f, "BTreeSet {{"));
for (i, x) in self.iter().enumerate() {
if i != 0 { try!(write!(f, ", ")); }
try!(write!(f, "{:?}", *x));
}
write!(f, "}}")
}
}
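// For illustration: with the formatting above, a set holding 1, 2 and 3
// renders as `BTreeSet {1, 2, 3}`:
//
//     let set: BTreeSet<int> = [1, 2, 3].iter().map(|&x| x).collect();
//     println!("{:?}", set);   // prints: BTreeSet {1, 2, 3}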
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> Iterator for Iter<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> { self.iter.next() }
fn size_hint(&self) -> (usize, Option<usize>) { self.iter.size_hint() }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> DoubleEndedIterator for Iter<'a, T> {
fn next_back(&mut self) -> Option<&'a T> { self.iter.next_back() }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> ExactSizeIterator for Iter<'a, T> {}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> Iterator for IntoIter<T> {
type Item = T;
fn next(&mut self) -> Option<T> { self.iter.next() }
fn size_hint(&self) -> (usize, Option<usize>) { self.iter.size_hint() }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> DoubleEndedIterator for IntoIter<T> {
fn next_back(&mut self) -> Option<T> { self.iter.next_back() }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ExactSizeIterator for IntoIter<T> {}
impl<'a, T> Iterator for Range<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> { self.iter.next() }
}
impl<'a, T> DoubleEndedIterator for Range<'a, T> {
fn next_back(&mut self) -> Option<&'a T> { self.iter.next_back() }
}
/// Compares `x` and `y`, but returns `short` if `x` is `None` and `long` if `y` is `None`.
fn cmp_opt<T: Ord>(x: Option<&T>, y: Option<&T>,
short: Ordering, long: Ordering) -> Ordering {
match (x, y) {
(None , _ ) => short,
(_ , None ) => long,
(Some(x1), Some(y1)) => x1.cmp(y1),
}
}
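// The `short`/`long` arguments let each set-operation iterator below decide
// what an exhausted side means. `Difference` passes `(Less, Less)`, so once
// `b` runs out the remaining `a` values are drained (and a drained `a` ends
// iteration), while `SymmetricDifference` and `Union` pass `(Greater, Less)`,
// so whichever side still has values keeps yielding them.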
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T: Ord> Iterator for Difference<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> {
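        // Emits values of `a` not present in `b`: values of `a` below the
        // current `b` value are yielded, equal values are skipped on both
        // sides, and `b` is advanced past values only it contains.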
loop {
match cmp_opt(self.a.peek(), self.b.peek(), Less, Less) {
Less => return self.a.next(),
Equal => { self.a.next(); self.b.next(); }
Greater => { self.b.next(); }
}
}
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T: Ord> Iterator for SymmetricDifference<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> {
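        // Emits values present in exactly one of the two sets; on a tie both
        // sides advance and the shared value is dropped.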
loop {
match cmp_opt(self.a.peek(), self.b.peek(), Greater, Less) {
Less => return self.a.next(),
Equal => { self.a.next(); self.b.next(); }
Greater => return self.b.next(),
}
}
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T: Ord> Iterator for Intersection<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> {
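        // Emits only values present in both sets, and stops as soon as either
        // side is exhausted.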
loop {
let o_cmp = match (self.a.peek(), self.b.peek()) {
(None , _ ) => None,
(_ , None ) => None,
(Some(a1), Some(b1)) => Some(a1.cmp(b1)),
};
match o_cmp {
None => return None,
Some(Less) => { self.a.next(); }
Some(Equal) => { self.b.next(); return self.a.next() }
Some(Greater) => { self.b.next(); }
}
}
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T: Ord> Iterator for Union<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> {
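        // Emits values from either set in ascending order; on a tie the copy
        // from `a` is yielded and the duplicate in `b` is skipped.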
loop {
match cmp_opt(self.a.peek(), self.b.peek(), Greater, Less) {
Less => return self.a.next(),
Equal => { self.b.next(); return self.a.next() }
Greater => return self.b.next(),
}
}
}
}
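// For illustration: because these adaptors are lazy, callers can stop early
// without materializing the whole result -- exactly what `is_disjoint` does by
// asking `intersection` for only its first element:
//
//     let a: BTreeSet<int> = [1, 2, 3].iter().map(|&x| x).collect();
//     let b: BTreeSet<int> = [4, 5, 6].iter().map(|&x| x).collect();
//     assert!(a.intersection(&b).next().is_none());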
#[cfg(test)]
mod test {
use prelude::*;
use super::BTreeSet;
use std::hash::{self, SipHasher};
#[test]
fn test_clone_eq() {
let mut m = BTreeSet::new();
m.insert(1);
m.insert(2);
assert!(m.clone() == m);
}
#[test]
fn test_hash() {
let mut x = BTreeSet::new();
let mut y = BTreeSet::new();
x.insert(1);
x.insert(2);
x.insert(3);
y.insert(3);
y.insert(2);
y.insert(1);
assert!(hash::hash::<_, SipHasher>(&x) == hash::hash::<_, SipHasher>(&y));
}
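// `Counter` is a hand-rolled callable used by the `check` helper below: each
// call asserts that the element it is handed matches the next entry in
// `expected`, bumps the index, and returns `true` so the driving `all` keeps
// iterating.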
struct Counter<'a, 'b> {
i: &'a mut usize,
expected: &'b [i32],
}
impl<'a, 'b, 'c> FnMut<(&'c i32,)> for Counter<'a, 'b> {
type Output = bool;
extern "rust-call" fn call_mut(&mut self, (&x,): (&'c i32,)) -> bool {
assert_eq!(x, self.expected[*self.i]);
*self.i += 1;
true
}
}
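// Shared harness for the set-operation tests: build two `BTreeSet`s from the
// slices `a` and `b`, hand them to `f` together with a `Counter` over
// `expected`, and finally assert that every expected element was visited.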
fn check<F>(a: &[i32], b: &[i32], expected: &[i32], f: F) where
// FIXME Replace Counter with `Box<FnMut(_) -> _>`
F: FnOnce(&BTreeSet<i32>, &BTreeSet<i32>, Counter) -> bool,
{
let mut set_a = BTreeSet::new();
let mut set_b = BTreeSet::new();
for x in a { assert!(set_a.insert(*x)) }
for y in b { assert!(set_b.insert(*y)) }
let mut i = 0;
f(&set_a, &set_b, Counter { i: &mut i, expected: expected });
assert_eq!(i, expected.len());
}
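// The set-operation iterators on `BTreeSet` (`intersection`, `difference`,
// `symmetric_difference`, `union`) yield their elements in ascending order,
// which is why each `expected` slice below is written sorted.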
#[test]
fn test_intersection() {
fn check_intersection(a: &[i32], b: &[i32], expected: &[i32]) {
check(a, b, expected, |x, y, f| x.intersection(y).all(f))
}
check_intersection(&[], &[], &[]);
check_intersection(&[1, 2, 3], &[], &[]);
check_intersection(&[], &[1, 2, 3], &[]);
check_intersection(&[2], &[1, 2, 3], &[2]);
check_intersection(&[1, 2, 3], &[2], &[2]);
check_intersection(&[11, 1, 3, 77, 103, 5, -5],
&[2, 11, 77, -9, -42, 5, 3],
&[3, 5, 11, 77]);
}
#[test]
fn test_difference() {
fn check_difference(a: &[i32], b: &[i32], expected: &[i32]) {
check(a, b, expected, |x, y, f| x.difference(y).all(f))
}
check_difference(&[], &[], &[]);
check_difference(&[1, 12], &[], &[1, 12]);
check_difference(&[], &[1, 2, 3, 9], &[]);
check_difference(&[1, 3, 5, 9, 11],
&[3, 9],
&[1, 5, 11]);
check_difference(&[-5, 11, 22, 33, 40, 42],
&[-12, -5, 14, 23, 34, 38, 39, 50],
&[11, 22, 33, 40, 42]);
}
#[test]
fn test_symmetric_difference() {
fn check_symmetric_difference(a: &[i32], b: &[i32], expected: &[i32]) {
check(a, b, expected, |x, y, f| x.symmetric_difference(y).all(f))
}
check_symmetric_difference(&[], &[], &[]);
check_symmetric_difference(&[1, 2, 3], &[2], &[1, 3]);
check_symmetric_difference(&[2], &[1, 2, 3], &[1, 3]);
check_symmetric_difference(&[1, 3, 5, 9, 11],
&[-2, 3, 9, 14, 22],
&[-2, 1, 5, 11, 14, 22]);
}
#[test]
fn test_union() {
fn check_union(a: &[i32], b: &[i32], expected: &[i32]) {
check(a, b, expected, |x, y, f| x.union(y).all(f))
}
check_union(&[], &[], &[]);
check_union(&[1, 2, 3], &[2], &[1, 2, 3]);
check_union(&[2], &[1, 2, 3], &[1, 2, 3]);
check_union(&[1, 3, 5, 9, 11, 16, 19, 24],
&[-2, 1, 5, 9, 13, 19],
&[-2, 1, 3, 5, 9, 11, 13, 16, 19, 24]);
}
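// `iter()` on a `BTreeSet` walks the elements in ascending order, so zipping
// the two sets pairs 5 with "bar" and 11 with "foo" ("bar" < "foo"), and the
// zip ends once the shorter string set is exhausted.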
#[test]
fn test_zip() {
let mut x = BTreeSet::new();
x.insert(5);
x.insert(12);
x.insert(11);
let mut y = BTreeSet::new();
y.insert("foo");
y.insert("bar");
let x = x;
let y = y;
let mut z = x.iter().zip(y.iter());
// FIXME: #5801: this needs a type hint to compile...
let result: Option<(&usize, & &'static str)> = z.next();
assert_eq!(result.unwrap(), (&5, &("bar")));
let result: Option<(&usize, & &'static str)> = z.next();
assert_eq!(result.unwrap(), (&11, &("foo")));
let result: Option<(&usize, & &'static str)> = z.next();
assert!(result.is_none());
}
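// `collect()` goes through `FromIterator` for `BTreeSet`, inserting the items
// one by one; the loop afterwards just checks that every source element made
// it into the set.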
#[test]
fn test_from_iter() {
let xs = [1, 2, 3, 4, 5, 6, 7, 8, 9];
let set: BTreeSet<i32> = xs.iter().map(|&x| x).collect();
for x in &xs {
assert!(set.contains(x));
}
}
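// `{:?}` goes through the `Debug` implementation, which here prints the type
// name followed by the elements in ascending order; the exact strings are
// pinned by the assertions below.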
#[test]
fn test_show() {
let mut set: BTreeSet<i32> = BTreeSet::new();
let empty: BTreeSet<i32> = BTreeSet::new();
set.insert(1);
set.insert(2);
let set_str = format!("{:?}", set);
assert_eq!(set_str, "BTreeSet {1, 2}");
assert_eq!(format!("{:?}", empty), "BTreeSet {}");
}
}