// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// This is pretty much entirely stolen from TreeSet, since BTreeMap has an identical interface
// to TreeMap
use core::prelude::*;
use core::cmp::Ordering::{self, Less, Greater, Equal};
use core::fmt::Debug;
use core::fmt;
use core::iter::{Peekable, Map, FromIterator};
use core::ops::{BitOr, BitAnd, BitXor, Sub};
use borrow::Borrow;
use btree_map::{BTreeMap, Keys};
use Bound;
// FIXME(conventions): implement bounded iterators
/// A set based on a B-Tree.
///
/// See BTreeMap's documentation for a detailed discussion of this collection's performance
/// benefits and drawbacks.
///
/// It is a logic error for an item to be modified in such a way that the item's ordering relative
/// to any other item, as determined by the `Ord` trait, changes while it is in the set. This is
/// normally only possible through `Cell`, `RefCell`, global state, I/O, or unsafe code.
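///
/// # Examples
///
/// A small usage sketch: values are kept in ascending order regardless of insertion order.
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut set = BTreeSet::new();
/// set.insert(3);
/// set.insert(1);
/// set.insert(2);
///
/// assert!(set.contains(&2));
/// let v: Vec<_> = set.iter().cloned().collect();
/// assert_eq!(v, [1, 2, 3]);
/// ```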
#[derive(Clone, Hash, PartialEq, Eq, Ord, PartialOrd)]
#[stable(feature = "rust1", since = "1.0.0")]
pub struct BTreeSet<T>{
map: BTreeMap<T, ()>,
}
/// An iterator over a BTreeSet's items.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Iter<'a, T: 'a> {
iter: Keys<'a, T, ()>
}
/// An owning iterator over a BTreeSet's items.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct IntoIter<T> {
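    // A plain `fn` pointer is used (rather than a closure) so the mapping function's
    // type can be written out here; closure types are anonymous and could not appear
    // in a public struct field like this.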
iter: Map<::btree_map::IntoIter<T, ()>, fn((T, ())) -> T>
}
/// An iterator over a sub-range of a BTreeSet's items.
pub struct Range<'a, T: 'a> {
iter: Map<::btree_map::Range<'a, T, ()>, fn((&'a T, &'a ())) -> &'a T>
}
/// A lazy iterator producing elements in the set difference (in-order).
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Difference<'a, T:'a> {
a: Peekable<Iter<'a, T>>,
b: Peekable<Iter<'a, T>>,
}
/// A lazy iterator producing elements in the set symmetric difference (in-order).
#[stable(feature = "rust1", since = "1.0.0")]
pub struct SymmetricDifference<'a, T:'a> {
a: Peekable<Iter<'a, T>>,
b: Peekable<Iter<'a, T>>,
}
/// A lazy iterator producing elements in the set intersection (in-order).
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Intersection<'a, T:'a> {
a: Peekable<Iter<'a, T>>,
b: Peekable<Iter<'a, T>>,
}
/// A lazy iterator producing elements in the set union (in-order).
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Union<'a, T:'a> {
a: Peekable<Iter<'a, T>>,
b: Peekable<Iter<'a, T>>,
}
impl<T: Ord> BTreeSet<T> {
/// Makes a new BTreeSet with a reasonable choice of B.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut set: BTreeSet<i32> = BTreeSet::new();
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn new() -> BTreeSet<T> {
BTreeSet { map: BTreeMap::new() }
}
/// Makes a new BTreeSet with the given B.
///
/// B cannot be less than 2.
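///
/// # Examples
///
/// A minimal sketch; the choice of B here (6) is arbitrary, any value of at least 2 is accepted.
///
/// ```
/// # #![feature(collections)]
/// use std::collections::BTreeSet;
///
/// let mut set: BTreeSet<i32> = BTreeSet::with_b(6);
/// set.insert(1);
/// assert_eq!(set.len(), 1);
/// ```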
#[unstable(feature = "collections",
reason = "probably want this to be on the type, eventually")]
pub fn with_b(b: usize) -> BTreeSet<T> {
BTreeSet { map: BTreeMap::with_b(b) }
}
}
impl<T> BTreeSet<T> {
/// Gets an iterator over the BTreeSet's contents.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let set: BTreeSet<usize> = [1, 2, 3, 4].iter().cloned().collect();
///
/// for x in set.iter() {
/// println!("{}", x);
/// }
///
/// let v: Vec<_> = set.iter().cloned().collect();
/// assert_eq!(v, [1, 2, 3, 4]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn iter(&self) -> Iter<T> {
Iter { iter: self.map.keys() }
}
}
impl<T: Ord> BTreeSet<T> {
/// Constructs a double-ended iterator over a sub-range of elements in the set, starting
/// at min, and ending at max. If min is `Unbounded`, then it will be treated as "negative
/// infinity", and if max is `Unbounded`, then it will be treated as "positive infinity".
/// Thus range(Unbounded, Unbounded) will yield the whole collection.
///
/// # Examples
///
/// ```
/// # #![feature(collections)]
/// use std::collections::BTreeSet;
/// use std::collections::Bound::{Included, Unbounded};
///
/// let mut set = BTreeSet::new();
/// set.insert(3);
/// set.insert(5);
/// set.insert(8);
/// for &elem in set.range(Included(&4), Included(&8)) {
/// println!("{}", elem);
/// }
/// assert_eq!(Some(&5), set.range(Included(&4), Unbounded).next());
/// ```
#[unstable(feature = "collections",
reason = "matches collection reform specification, waiting for dust to settle")]
pub fn range<'a>(&'a self, min: Bound<&T>, max: Bound<&T>) -> Range<'a, T> {
fn first<A, B>((a, _): (A, B)) -> A { a }
let first: fn((&'a T, &'a ())) -> &'a T = first; // coerce to fn pointer
Range { iter: self.map.range(min, max).map(first) }
}
}
impl<T: Ord> BTreeSet<T> {
/// Visits the values representing the difference, in ascending order.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut a = BTreeSet::new();
/// a.insert(1);
/// a.insert(2);
///
/// let mut b = BTreeSet::new();
/// b.insert(2);
/// b.insert(3);
///
/// let diff: Vec<_> = a.difference(&b).cloned().collect();
/// assert_eq!(diff, [1]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn difference<'a>(&'a self, other: &'a BTreeSet<T>) -> Difference<'a, T> {
Difference{a: self.iter().peekable(), b: other.iter().peekable()}
}
/// Visits the values representing the symmetric difference, in ascending order.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut a = BTreeSet::new();
/// a.insert(1);
/// a.insert(2);
///
/// let mut b = BTreeSet::new();
/// b.insert(2);
/// b.insert(3);
///
/// let sym_diff: Vec<_> = a.symmetric_difference(&b).cloned().collect();
/// assert_eq!(sym_diff, [1, 3]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn symmetric_difference<'a>(&'a self, other: &'a BTreeSet<T>)
-> SymmetricDifference<'a, T> {
SymmetricDifference{a: self.iter().peekable(), b: other.iter().peekable()}
}
/// Visits the values representing the intersection, in ascending order.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut a = BTreeSet::new();
/// a.insert(1);
/// a.insert(2);
///
/// let mut b = BTreeSet::new();
/// b.insert(2);
/// b.insert(3);
///
/// let intersection: Vec<_> = a.intersection(&b).cloned().collect();
/// assert_eq!(intersection, [2]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn intersection<'a>(&'a self, other: &'a BTreeSet<T>)
-> Intersection<'a, T> {
Intersection{a: self.iter().peekable(), b: other.iter().peekable()}
}
/// Visits the values representing the union, in ascending order.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut a = BTreeSet::new();
/// a.insert(1);
///
/// let mut b = BTreeSet::new();
/// b.insert(2);
///
/// let union: Vec<_> = a.union(&b).cloned().collect();
/// assert_eq!(union, [1, 2]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn union<'a>(&'a self, other: &'a BTreeSet<T>) -> Union<'a, T> {
Union{a: self.iter().peekable(), b: other.iter().peekable()}
}
/// Returns the number of elements in the set.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut v = BTreeSet::new();
/// assert_eq!(v.len(), 0);
/// v.insert(1);
/// assert_eq!(v.len(), 1);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn len(&self) -> usize { self.map.len() }
/// Returns true if the set contains no elements.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut v = BTreeSet::new();
/// assert!(v.is_empty());
/// v.insert(1);
/// assert!(!v.is_empty());
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_empty(&self) -> bool { self.len() == 0 }
/// Clears the set, removing all values.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut v = BTreeSet::new();
/// v.insert(1);
/// v.clear();
/// assert!(v.is_empty());
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn clear(&mut self) {
self.map.clear()
}
/// Returns `true` if the set contains a value.
///
/// The value may be any borrowed form of the set's value type,
/// but the ordering on the borrowed form *must* match the
/// ordering on the value type.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let set: BTreeSet<_> = [1, 2, 3].iter().cloned().collect();
/// assert_eq!(set.contains(&1), true);
/// assert_eq!(set.contains(&4), false);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn contains<Q: ?Sized>(&self, value: &Q) -> bool where T: Borrow<Q>, Q: Ord {
self.map.contains_key(value)
}
/// Returns `true` if the set has no elements in common with `other`.
/// This is equivalent to checking for an empty intersection.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let a: BTreeSet<_> = [1, 2, 3].iter().cloned().collect();
/// let mut b = BTreeSet::new();
///
/// assert_eq!(a.is_disjoint(&b), true);
/// b.insert(4);
/// assert_eq!(a.is_disjoint(&b), true);
/// b.insert(1);
/// assert_eq!(a.is_disjoint(&b), false);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_disjoint(&self, other: &BTreeSet<T>) -> bool {
self.intersection(other).next().is_none()
}
/// Returns `true` if the set is a subset of another.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let sup: BTreeSet<_> = [1, 2, 3].iter().cloned().collect();
/// let mut set = BTreeSet::new();
///
/// assert_eq!(set.is_subset(&sup), true);
/// set.insert(2);
/// assert_eq!(set.is_subset(&sup), true);
/// set.insert(4);
/// assert_eq!(set.is_subset(&sup), false);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_subset(&self, other: &BTreeSet<T>) -> bool {
// Stolen from TreeMap
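        // Both iterators yield their elements in ascending order, so walk them in
        // lockstep: for each element of `self`, advance `other` until it reaches or
        // passes that element. If `other` passes it (Greater) or runs out first,
        // `self` contains a value that `other` lacks, so it is not a subset.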
let mut x = self.iter();
let mut y = other.iter();
let mut a = x.next();
let mut b = y.next();
while a.is_some() {
if b.is_none() {
return false;
}
let a1 = a.unwrap();
let b1 = b.unwrap();
match b1.cmp(a1) {
Less => (),
Greater => return false,
Equal => a = x.next(),
}
b = y.next();
}
true
}
    /// Returns `true` if the set is a superset of another, i.e. `self` contains at least
    /// all the values in `other`.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let sub: BTreeSet<_> = [1, 2].iter().cloned().collect();
/// let mut set = BTreeSet::new();
///
/// assert_eq!(set.is_superset(&sub), false);
///
/// set.insert(0);
/// set.insert(1);
/// assert_eq!(set.is_superset(&sub), false);
///
/// set.insert(2);
/// assert_eq!(set.is_superset(&sub), true);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_superset(&self, other: &BTreeSet<T>) -> bool {
other.is_subset(self)
}
/// Adds a value to the set. Returns `true` if the value was not already
/// present in the set.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut set = BTreeSet::new();
///
/// assert_eq!(set.insert(2), true);
/// assert_eq!(set.insert(2), false);
/// assert_eq!(set.len(), 1);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn insert(&mut self, value: T) -> bool {
self.map.insert(value, ()).is_none()
}
/// Removes a value from the set. Returns `true` if the value was
/// present in the set.
///
/// The value may be any borrowed form of the set's value type,
/// but the ordering on the borrowed form *must* match the
/// ordering on the value type.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let mut set = BTreeSet::new();
///
/// set.insert(2);
/// assert_eq!(set.remove(&2), true);
/// assert_eq!(set.remove(&2), false);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn remove<Q: ?Sized>(&mut self, value: &Q) -> bool where T: Borrow<Q>, Q: Ord {
self.map.remove(value).is_some()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Ord> FromIterator<T> for BTreeSet<T> {
fn from_iter<I: IntoIterator<Item=T>>(iter: I) -> BTreeSet<T> {
let mut set = BTreeSet::new();
set.extend(iter);
set
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> IntoIterator for BTreeSet<T> {
type Item = T;
type IntoIter = IntoIter<T>;
    /// Gets an iterator for moving out the BTreeSet's contents.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let set: BTreeSet<usize> = [1, 2, 3, 4].iter().cloned().collect();
///
/// let v: Vec<_> = set.into_iter().collect();
/// assert_eq!(v, [1, 2, 3, 4]);
/// ```
fn into_iter(self) -> IntoIter<T> {
fn first<A, B>((a, _): (A, B)) -> A { a }
let first: fn((T, ())) -> T = first; // coerce to fn pointer
IntoIter { iter: self.map.into_iter().map(first) }
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> IntoIterator for &'a BTreeSet<T> {
type Item = &'a T;
type IntoIter = Iter<'a, T>;
fn into_iter(self) -> Iter<'a, T> {
self.iter()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Ord> Extend<T> for BTreeSet<T> {
#[inline]
fn extend<Iter: IntoIterator<Item=T>>(&mut self, iter: Iter) {
for elem in iter {
self.insert(elem);
}
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Ord> Default for BTreeSet<T> {
#[stable(feature = "rust1", since = "1.0.0")]
fn default() -> BTreeSet<T> {
BTreeSet::new()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, 'b, T: Ord + Clone> Sub<&'b BTreeSet<T>> for &'a BTreeSet<T> {
type Output = BTreeSet<T>;
/// Returns the difference of `self` and `rhs` as a new `BTreeSet<T>`.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let a: BTreeSet<_> = vec![1, 2, 3].into_iter().collect();
/// let b: BTreeSet<_> = vec![3, 4, 5].into_iter().collect();
///
/// let result = &a - &b;
/// let result_vec: Vec<_> = result.into_iter().collect();
/// assert_eq!(result_vec, [1, 2]);
/// ```
fn sub(self, rhs: &BTreeSet<T>) -> BTreeSet<T> {
self.difference(rhs).cloned().collect()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, 'b, T: Ord + Clone> BitXor<&'b BTreeSet<T>> for &'a BTreeSet<T> {
type Output = BTreeSet<T>;
/// Returns the symmetric difference of `self` and `rhs` as a new `BTreeSet<T>`.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let a: BTreeSet<_> = vec![1, 2, 3].into_iter().collect();
/// let b: BTreeSet<_> = vec![2, 3, 4].into_iter().collect();
///
/// let result = &a ^ &b;
/// let result_vec: Vec<_> = result.into_iter().collect();
/// assert_eq!(result_vec, [1, 4]);
/// ```
fn bitxor(self, rhs: &BTreeSet<T>) -> BTreeSet<T> {
self.symmetric_difference(rhs).cloned().collect()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, 'b, T: Ord + Clone> BitAnd<&'b BTreeSet<T>> for &'a BTreeSet<T> {
type Output = BTreeSet<T>;
/// Returns the intersection of `self` and `rhs` as a new `BTreeSet<T>`.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let a: BTreeSet<_> = vec![1, 2, 3].into_iter().collect();
/// let b: BTreeSet<_> = vec![2, 3, 4].into_iter().collect();
///
/// let result = &a & &b;
/// let result_vec: Vec<_> = result.into_iter().collect();
/// assert_eq!(result_vec, [2, 3]);
/// ```
fn bitand(self, rhs: &BTreeSet<T>) -> BTreeSet<T> {
self.intersection(rhs).cloned().collect()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, 'b, T: Ord + Clone> BitOr<&'b BTreeSet<T>> for &'a BTreeSet<T> {
type Output = BTreeSet<T>;
/// Returns the union of `self` and `rhs` as a new `BTreeSet<T>`.
///
/// # Examples
///
/// ```
/// use std::collections::BTreeSet;
///
/// let a: BTreeSet<_> = vec![1, 2, 3].into_iter().collect();
/// let b: BTreeSet<_> = vec![3, 4, 5].into_iter().collect();
///
/// let result = &a | &b;
/// let result_vec: Vec<_> = result.into_iter().collect();
/// assert_eq!(result_vec, [1, 2, 3, 4, 5]);
/// ```
fn bitor(self, rhs: &BTreeSet<T>) -> BTreeSet<T> {
self.union(rhs).cloned().collect()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Debug> Debug for BTreeSet<T> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_set().entries(self.iter()).finish()
}
}
impl<'a, T> Clone for Iter<'a, T> {
fn clone(&self) -> Iter<'a, T> { Iter { iter: self.iter.clone() } }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> Iterator for Iter<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> { self.iter.next() }
fn size_hint(&self) -> (usize, Option<usize>) { self.iter.size_hint() }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> DoubleEndedIterator for Iter<'a, T> {
fn next_back(&mut self) -> Option<&'a T> { self.iter.next_back() }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> ExactSizeIterator for Iter<'a, T> {}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> Iterator for IntoIter<T> {
type Item = T;
fn next(&mut self) -> Option<T> { self.iter.next() }
fn size_hint(&self) -> (usize, Option<usize>) { self.iter.size_hint() }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> DoubleEndedIterator for IntoIter<T> {
fn next_back(&mut self) -> Option<T> { self.iter.next_back() }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ExactSizeIterator for IntoIter<T> {}
impl<'a, T> Clone for Range<'a, T> {
fn clone(&self) -> Range<'a, T> { Range { iter: self.iter.clone() } }
}
impl<'a, T> Iterator for Range<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> { self.iter.next() }
}
impl<'a, T> DoubleEndedIterator for Range<'a, T> {
fn next_back(&mut self) -> Option<&'a T> { self.iter.next_back() }
}
/// Compares `x` and `y`, but returns `short` if `x` is `None` and `long` if `y` is `None`.
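/// This lets each set-operation iterator choose how an exhausted side should compare:
/// `Difference` passes `Less` for both, while `SymmetricDifference` and `Union` pass
/// `Greater` and `Less`.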
fn cmp_opt<T: Ord>(x: Option<&T>, y: Option<&T>,
short: Ordering, long: Ordering) -> Ordering {
match (x, y) {
(None , _ ) => short,
(_ , None ) => long,
(Some(x1), Some(y1)) => x1.cmp(y1),
}
}
impl<'a, T> Clone for Difference<'a, T> {
fn clone(&self) -> Difference<'a, T> {
Difference { a: self.a.clone(), b: self.b.clone() }
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T: Ord> Iterator for Difference<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> {
loop {
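            // Both sides yield elements in ascending order. An element of `a` is yielded
            // once it is known not to be in `b`: either it is smaller than `b`'s next
            // element, or `b` is exhausted. Both exhausted cases map to Less, and once `a`
            // itself is exhausted `a.next()` returns None, ending the iteration.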
match cmp_opt(self.a.peek(), self.b.peek(), Less, Less) {
Less => return self.a.next(),
Equal => { self.a.next(); self.b.next(); }
Greater => { self.b.next(); }
}
}
}
}
impl<'a, T> Clone for SymmetricDifference<'a, T> {
fn clone(&self) -> SymmetricDifference<'a, T> {
SymmetricDifference { a: self.a.clone(), b: self.b.clone() }
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T: Ord> Iterator for SymmetricDifference<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> {
loop {
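            // Yield whichever side currently holds the smaller element; elements that
            // compare Equal are in both sets and are skipped. An exhausted `a` maps to
            // Greater so the rest of `b` is drained, and an exhausted `b` maps to Less
            // so the rest of `a` is drained.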
match cmp_opt(self.a.peek(), self.b.peek(), Greater, Less) {
Less => return self.a.next(),
Equal => { self.a.next(); self.b.next(); }
Greater => return self.b.next(),
}
}
}
}
impl<'a, T> Clone for Intersection<'a, T> {
fn clone(&self) -> Intersection<'a, T> {
Intersection { a: self.a.clone(), b: self.b.clone() }
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T: Ord> Iterator for Intersection<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> {
loop {
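            // Once either side is exhausted no further common elements are possible, so
            // the comparison is None and iteration stops. Otherwise advance the side with
            // the smaller element until both peeked elements are equal.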
let o_cmp = match (self.a.peek(), self.b.peek()) {
(None , _ ) => None,
(_ , None ) => None,
(Some(a1), Some(b1)) => Some(a1.cmp(b1)),
};
match o_cmp {
None => return None,
Some(Less) => { self.a.next(); }
Some(Equal) => { self.b.next(); return self.a.next() }
Some(Greater) => { self.b.next(); }
}
}
}
}
impl<'a, T> Clone for Union<'a, T> {
fn clone(&self) -> Union<'a, T> {
Union { a: self.a.clone(), b: self.b.clone() }
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T: Ord> Iterator for Union<'a, T> {
type Item = &'a T;
fn next(&mut self) -> Option<&'a T> {
loop {
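            // Yield the smaller of the two peeked elements. On a tie, advance `b` and
            // yield from `a`, so shared elements appear only once. An exhausted side maps
            // to Greater or Less, which simply drains the other iterator.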
match cmp_opt(self.a.peek(), self.b.peek(), Greater, Less) {
Less => return self.a.next(),
Equal => { self.b.next(); return self.a.next() }
Greater => return self.b.next(),
}
}
}
}