// rust/src/libcollections/bit.rs
// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// FIXME(Gankro): Bitv and BitvSet are very tightly coupled. Ideally (for maintenance),
// they should be in separate files/modules, with BitvSet only using Bitv's public API.
//! Collections implemented with bit vectors.
//!
//! # Examples
//!
//! This is a simple example of the [Sieve of Eratosthenes][sieve]
//! which calculates prime numbers up to a given limit.
//!
//! [sieve]: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
//!
//! ```
//! use std::collections::{BitvSet, Bitv};
//! use std::num::Float;
//! use std::iter;
//!
//! let max_prime = 10000;
//!
//! // Store the primes as a BitvSet
//! let primes = {
//!     // Assume all numbers are prime to begin, and then we
//!     // cross off non-primes progressively
//!     let mut bv = Bitv::with_capacity(max_prime, true);
//!
//!     // Neither 0 nor 1 are prime
//!     bv.set(0, false);
//!     bv.set(1, false);
//!
//!     for i in iter::range_inclusive(2, (max_prime as f64).sqrt() as uint) {
//!         // if i is a prime
//!         if bv[i] {
//!             // Mark all multiples of i as non-prime (any multiples below i * i
//!             // will have been marked as non-prime previously)
//!             for j in iter::range_step(i * i, max_prime, i) { bv.set(j, false) }
//!         }
//!     }
//!     BitvSet::from_bitv(bv)
//! };
//!
//! // Simple primality tests below our max bound
//! let print_primes = 20;
//! print!("The primes below {} are: ", print_primes);
//! for x in range(0, print_primes) {
//!     if primes.contains(&x) {
//!         print!("{} ", x);
//!     }
//! }
//! println!("");
//!
//! // We can manipulate the internal Bitv
//! let num_primes = primes.get_ref().iter().filter(|x| *x).count();
//! println!("There are {} primes below {}", num_primes, max_prime);
//! ```
use core::prelude::*;
use core::cmp;
use core::default::Default;
use core::fmt;
use core::iter::{Chain, Enumerate, Repeat, Skip, Take, repeat};
use core::iter;
use core::num::Int;
use core::slice;
use core::u32;
use std::hash;
use vec::Vec;
// FIXME(conventions): look, we just need to refactor this whole thing. Inside and out.
type MatchWords<'a> = Chain<MaskWords<'a>, Skip<Take<Enumerate<Repeat<u32>>>>>;
// Takes two `Bitv`s and returns iterators over their words, where the shorter
// one has been padded with 0s
fn match_words <'a,'b>(a: &'a Bitv, b: &'b Bitv) -> (MatchWords<'a>, MatchWords<'b>) {
    let a_len = a.storage.len();
    let b_len = b.storage.len();

    // have to uselessly pretend to pad the longer one for type matching
    if a_len < b_len {
        (a.mask_words(0).chain(repeat(0u32).enumerate().take(b_len).skip(a_len)),
         b.mask_words(0).chain(repeat(0u32).enumerate().take(0).skip(0)))
    } else {
        (a.mask_words(0).chain(repeat(0u32).enumerate().take(0).skip(0)),
         b.mask_words(0).chain(repeat(0u32).enumerate().take(a_len).skip(b_len)))
    }
}
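
// `Index` on `Bitv` must return a reference, so these statics provide the
// `&'static bool` values for the two possible results.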
static TRUE: bool = true;
static FALSE: bool = false;
/// The bitvector type.
///
/// # Examples
///
/// ```rust
/// use collections::Bitv;
///
/// let mut bv = Bitv::with_capacity(10, false);
///
/// // insert all primes less than 10
/// bv.set(2, true);
/// bv.set(3, true);
/// bv.set(5, true);
/// bv.set(7, true);
/// println!("{}", bv.to_string());
/// println!("total bits set to true: {}", bv.iter().filter(|x| *x).count());
///
/// // flip all values in bitvector, producing non-primes less than 10
/// bv.negate();
/// println!("{}", bv.to_string());
/// println!("total bits set to true: {}", bv.iter().filter(|x| *x).count());
///
/// // reset bitvector to empty
/// bv.clear();
/// println!("{}", bv.to_string());
/// println!("total bits set to true: {}", bv.iter().filter(|x| *x).count());
/// ```
pub struct Bitv {
    /// Internal representation of the bit vector
    storage: Vec<u32>,
    /// The number of valid bits in the internal representation
    nbits: uint
}
impl Index<uint,bool> for Bitv {
    #[inline]
    fn index<'a>(&'a self, i: &uint) -> &'a bool {
        if self.get(*i) {
            &TRUE
        } else {
            &FALSE
        }
    }
}
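
/// An iterator over the words of a `Bitv`'s storage, yielding `(offset, word)`
/// pairs and masking off the unused high bits of the final word.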
struct MaskWords<'a> {
    iter: slice::Items<'a, u32>,
    next_word: Option<&'a u32>,
    last_word_mask: u32,
    offset: uint
}
impl<'a> Iterator<(uint, u32)> for MaskWords<'a> {
    /// Returns (offset, word)
    #[inline]
    fn next(&mut self) -> Option<(uint, u32)> {
        let ret = self.next_word;
        match ret {
            Some(&w) => {
                self.next_word = self.iter.next();
                self.offset += 1;
                // The last word may need to be masked
                if self.next_word.is_none() {
                    Some((self.offset - 1, w & self.last_word_mask))
                } else {
                    Some((self.offset - 1, w))
                }
            },
            None => None
        }
    }
}
impl Bitv {
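    /// Combines the words of `self` and `other` with `op`, writing the result
    /// back into `self` and returning `true` if any word changed. `other`'s
    /// last word is masked; both bit vectors must have storage of equal length.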
    #[inline]
    fn process<F>(&mut self, other: &Bitv, mut op: F) -> bool where F: FnMut(u32, u32) -> u32 {
        let len = other.storage.len();
        assert_eq!(self.storage.len(), len);
        let mut changed = false;
        // Notice: `a` is *not* masked here, which is fine as long as
        // `op` is a bitwise operation, since any bits that should've
        // been masked were fine to change anyway. `b` is masked to
        // make sure its unmasked bits do not cause damage.
        for (a, (_, b)) in self.storage.iter_mut()
                               .zip(other.mask_words(0)) {
            let w = op(*a, b);
            if *a != w {
                changed = true;
                *a = w;
            }
        }
        changed
    }
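
    /// Returns a `MaskWords` iterator over the storage words starting at word
    /// index `start` (clamped to the storage length), with the unused bits of
    /// the last word masked off.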
    #[inline]
    fn mask_words<'a>(&'a self, mut start: uint) -> MaskWords<'a> {
        if start > self.storage.len() {
            start = self.storage.len();
        }
        let mut iter = self.storage[start..].iter();
        MaskWords {
            next_word: iter.next(),
            iter: iter,
            last_word_mask: {
                let rem = self.nbits % u32::BITS;
                if rem > 0 {
                    (1 << rem) - 1
                } else { !0 }
            },
            offset: start
        }
    }
    /// Creates an empty `Bitv`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::collections::Bitv;
    /// let mut bv = Bitv::new();
    /// ```
    #[unstable = "matches collection reform specification, waiting for dust to settle"]
    pub fn new() -> Bitv {
        Bitv { storage: Vec::new(), nbits: 0 }
    }
    /// Creates a `Bitv` that holds `nbits` elements, setting each element
    /// to `init`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::collections::Bitv;
    ///
    /// let mut bv = Bitv::with_capacity(10u, false);
    /// assert_eq!(bv.len(), 10u);
    /// for x in bv.iter() {
    ///     assert_eq!(x, false);
    /// }
    /// ```
    pub fn with_capacity(nbits: uint, init: bool) -> Bitv {
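        // One u32 word is needed per u32::BITS bits, rounding up.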
        let mut bitv = Bitv {
            storage: Vec::from_elem((nbits + u32::BITS - 1) / u32::BITS,
                                    if init { !0u32 } else { 0u32 }),
            nbits: nbits
        };

        // Zero out any unused bits in the highest word if necessary
        let used_bits = bitv.nbits % u32::BITS;
        if init && used_bits != 0 {
            let largest_used_word = (bitv.nbits + u32::BITS - 1) / u32::BITS - 1;
            bitv.storage[largest_used_word] &= (1 << used_bits) - 1;
        }

        bitv
    }
    /// Retrieves the value at index `i`.
    ///
    /// # Panics
    ///
    /// Panics if `i` is out of bounds.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::collections::bitv;
    ///
    /// let bv = bitv::from_bytes(&[0b01100000]);
    /// assert_eq!(bv.get(0), false);
    /// assert_eq!(bv.get(1), true);
    ///
    /// // Can also use array indexing
    /// assert_eq!(bv[1], true);
    /// ```
    #[inline]
    pub fn get(&self, i: uint) -> bool {
        assert!(i < self.nbits);
        let w = i / u32::BITS;
        let b = i % u32::BITS;
        let x = self.storage[w] & (1 << b);
        x != 0
    }
    /// Sets the value of a bit at an index `i`.
    ///
    /// # Panics
    ///
    /// Panics if `i` is out of bounds.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::collections::Bitv;
    ///
    /// let mut bv = Bitv::with_capacity(5, false);
    /// bv.set(3, true);
    /// assert_eq!(bv[3], true);
    /// ```
    #[inline]
    pub fn set(&mut self, i: uint, x: bool) {
        assert!(i < self.nbits);
        let w = i / u32::BITS;
        let b = i % u32::BITS;
        let flag = 1 << b;
        let val = if x { self.storage[w] | flag }
                  else { self.storage[w] & !flag };
        self.storage[w] = val;
    }
    /// Sets all bits to 1.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::collections::bitv;
    ///
    /// let before = 0b01100000;
    /// let after = 0b11111111;
    ///
    /// let mut bv = bitv::from_bytes(&[before]);
    /// bv.set_all();
    /// assert_eq!(bv, bitv::from_bytes(&[after]));
    /// ```
    #[inline]
    pub fn set_all(&mut self) {
        for w in self.storage.iter_mut() { *w = !0u32; }
    }
    /// Flips all bits.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::collections::bitv;
    ///
    /// let before = 0b01100000;
    /// let after = 0b10011111;
    ///
    /// let mut bv = bitv::from_bytes(&[before]);
    /// bv.negate();
    /// assert_eq!(bv, bitv::from_bytes(&[after]));
    /// ```
    #[inline]
    pub fn negate(&mut self) {
        for w in self.storage.iter_mut() { *w = !*w; }
    }
    /// Calculates the union of two bitvectors. This acts like the bitwise `or`
    /// function.
    ///
    /// Sets `self` to the union of `self` and `other`. Both bitvectors must be
    /// the same length. Returns `true` if `self` changed.
    ///
    /// # Panics
    ///
    /// Panics if the bitvectors are of different lengths.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::collections::bitv;
    ///
    /// let a = 0b01100100;
    /// let b = 0b01011010;
    /// let res = 0b01111110;
    ///
    /// let mut a = bitv::from_bytes(&[a]);
    /// let b = bitv::from_bytes(&[b]);
    ///
    /// assert!(a.union(&b));
    /// assert_eq!(a, bitv::from_bytes(&[res]));
    /// ```
    #[inline]
    pub fn union(&mut self, other: &Bitv) -> bool {
        self.process(other, |w1, w2| w1 | w2)
    }
    /// Calculates the intersection of two bitvectors. This acts like the
    /// bitwise `and` function.
    ///
    /// Sets `self` to the intersection of `self` and `other`. Both bitvectors
    /// must be the same length. Returns `true` if `self` changed.
    ///
    /// # Panics
    ///
    /// Panics if the bitvectors are of different lengths.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::collections::bitv;
    ///
    /// let a = 0b01100100;
    /// let b = 0b01011010;
    /// let res = 0b01000000;
    ///
    /// let mut a = bitv::from_bytes(&[a]);
    /// let b = bitv::from_bytes(&[b]);
    ///
    /// assert!(a.intersect(&b));
    /// assert_eq!(a, bitv::from_bytes(&[res]));
    /// ```
    #[inline]
    pub fn intersect(&mut self, other: &Bitv) -> bool {
        self.process(other, |w1, w2| w1 & w2)
    }
    /// Calculates the difference between two bitvectors.
    ///
    /// Sets each element of `self` to the value of that element minus the
    /// element of `other` at the same index. Both bitvectors must be the same
    /// length. Returns `true` if `self` changed.
    ///
    /// # Panics
    ///
    /// Panics if the bitvectors are of different lengths.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::collections::bitv;
    ///
    /// let a = 0b01100100;
    /// let b = 0b01011010;
    /// let a_b = 0b00100100; // a - b
    /// let b_a = 0b00011010; // b - a
    ///
    /// let mut bva = bitv::from_bytes(&[a]);
    /// let bvb = bitv::from_bytes(&[b]);
    ///
    /// assert!(bva.difference(&bvb));
    /// assert_eq!(bva, bitv::from_bytes(&[a_b]));
    ///
    /// let bva = bitv::from_bytes(&[a]);
    /// let mut bvb = bitv::from_bytes(&[b]);
    ///
    /// assert!(bvb.difference(&bva));
    /// assert_eq!(bvb, bitv::from_bytes(&[b_a]));
    /// ```
    #[inline]
    pub fn difference(&mut self, other: &Bitv) -> bool {
        self.process(other, |w1, w2| w1 & !w2)
    }
/// Returns `true` if all bits are 1.
///
/// # Examples
///
/// ```
/// use std::collections::Bitv;
///
/// let mut bv = Bitv::with_capacity(5, true);
/// assert_eq!(bv.all(), true);
///
/// bv.set(1, false);
/// assert_eq!(bv.all(), false);
/// ```
#[inline]
pub fn all(&self) -> bool {
let mut last_word = !0u32;
// Check that every word but the last is all-ones...
self.mask_words(0).all(|(_, elem)|
{ let tmp = last_word; last_word = elem; tmp == !0u32 }) &&
// ...and that the last word is ones as far as it needs to be
(last_word == ((1 << self.nbits % u32::BITS) - 1) || last_word == !0u32)
}
/// Returns an iterator over the elements of the vector in order.
///
/// # Examples
///
/// ```
/// use std::collections::bitv;
///
/// let bv = bitv::from_bytes(&[0b01110100, 0b10010010]);
/// assert_eq!(bv.iter().filter(|x| *x).count(), 7);
/// ```
#[inline]
pub fn iter<'a>(&'a self) -> Bits<'a> {
Bits {bitv: self, next_idx: 0, end_idx: self.nbits}
}
/// Returns `true` if all bits are 0.
///
/// # Examples
///
/// ```
/// use std::collections::Bitv;
///
/// let mut bv = Bitv::with_capacity(10, false);
/// assert_eq!(bv.none(), true);
///
/// bv.set(3, true);
/// assert_eq!(bv.none(), false);
/// ```
pub fn none(&self) -> bool {
self.mask_words(0).all(|(_, w)| w == 0)
}
/// Returns `true` if any bit is 1.
///
/// # Examples
///
/// ```
/// use std::collections::Bitv;
///
/// let mut bv = Bitv::with_capacity(10, false);
/// assert_eq!(bv.any(), false);
///
/// bv.set(3, true);
/// assert_eq!(bv.any(), true);
/// ```
#[inline]
pub fn any(&self) -> bool {
!self.none()
}
/// Organises the bits into bytes, such that the first bit in the
/// `Bitv` becomes the high-order bit of the first byte. If the
/// size of the `Bitv` is not a multiple of eight then trailing bits
/// will be filled-in with `false`.
///
/// # Examples
///
/// ```
/// use std::collections::Bitv;
///
/// let mut bv = Bitv::with_capacity(3, true);
/// bv.set(1, false);
///
/// assert_eq!(bv.to_bytes(), vec!(0b10100000));
///
/// let mut bv = Bitv::with_capacity(9, false);
/// bv.set(2, true);
/// bv.set(8, true);
///
/// assert_eq!(bv.to_bytes(), vec!(0b00100000, 0b10000000));
/// ```
pub fn to_bytes(&self) -> Vec<u8> {
fn bit (bitv: &Bitv, byte: uint, bit: uint) -> u8 {
let offset = byte * 8 + bit;
if offset >= bitv.nbits {
0
} else {
bitv.get(offset) as u8 << (7 - bit)
}
}
let len = self.nbits/8 +
if self.nbits % 8 == 0 { 0 } else { 1 };
Vec::from_fn(len, |i|
bit(self, i, 0) |
bit(self, i, 1) |
bit(self, i, 2) |
bit(self, i, 3) |
bit(self, i, 4) |
bit(self, i, 5) |
bit(self, i, 6) |
bit(self, i, 7)
)
}
/// Transforms `self` into a `Vec<bool>` by turning each bit into a `bool`.
///
/// # Examples
///
/// ```
/// use std::collections::bitv;
///
/// let bv = bitv::from_bytes(&[0b10100000]);
/// assert_eq!(bv.to_bools(), vec!(true, false, true, false,
/// false, false, false, false));
/// ```
pub fn to_bools(&self) -> Vec<bool> {
Vec::from_fn(self.nbits, |i| self.get(i))
}
/// Compares a `Bitv` to a slice of `bool`s.
/// Both the `Bitv` and slice must have the same length.
///
/// # Panics
///
/// Panics if the `Bitv` and slice are of different length.
///
/// # Examples
///
/// ```
/// use std::collections::bitv;
///
/// let bv = bitv::from_bytes(&[0b10100000]);
///
/// assert!(bv.eq_vec(&[true, false, true, false,
/// false, false, false, false]));
/// ```
pub fn eq_vec(&self, v: &[bool]) -> bool {
assert_eq!(self.nbits, v.len());
let mut i = 0;
while i < self.nbits {
if self.get(i) != v[i] { return false; }
i = i + 1;
}
true
}
/// Shortens a `Bitv`, dropping excess elements.
///
/// If `len` is greater than the vector's current length, this has no
/// effect.
///
/// # Examples
///
/// ```
/// use std::collections::bitv;
///
/// let mut bv = bitv::from_bytes(&[0b01001011]);
/// bv.truncate(2);
/// assert!(bv.eq_vec(&[false, true]));
/// ```
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn truncate(&mut self, len: uint) {
if len < self.len() {
self.nbits = len;
let word_len = (len + u32::BITS - 1) / u32::BITS;
self.storage.truncate(word_len);
if len % u32::BITS > 0 {
let mask = (1 << len % u32::BITS) - 1;
self.storage[word_len - 1] &= mask;
}
}
}
/// Grows the vector to be able to store `size` bits without resizing.
///
/// # Examples
///
/// ```
/// use std::collections::Bitv;
///
/// let mut bv = Bitv::with_capacity(3, false);
/// bv.reserve(10);
/// assert_eq!(bv.len(), 3);
/// assert!(bv.capacity() >= 10);
/// ```
pub fn reserve(&mut self, size: uint) {
let old_size = self.storage.len();
let new_size = (size + u32::BITS - 1) / u32::BITS;
if old_size < new_size {
self.storage.grow(new_size - old_size, 0);
}
}
/// Returns the capacity in bits for this bit vector. Inserting any
/// element less than this amount will not trigger a resizing.
///
/// # Examples
///
/// ```
/// use std::collections::Bitv;
///
/// let mut bv = Bitv::new();
/// bv.reserve(10);
/// assert!(bv.capacity() >= 10);
/// ```
#[inline]
pub fn capacity(&self) -> uint {
self.storage.len() * u32::BITS
}
/// Grows the `Bitv` in-place, adding `n` copies of `value` to the `Bitv`.
///
/// # Examples
///
/// ```
/// use std::collections::bitv;
///
/// let mut bv = bitv::from_bytes(&[0b01001011]);
/// bv.grow(2, true);
/// assert_eq!(bv.len(), 10);
/// assert_eq!(bv.to_bytes(), vec!(0b01001011, 0b11000000));
/// ```
pub fn grow(&mut self, n: uint, value: bool) {
let new_nbits = self.nbits + n;
let new_nwords = (new_nbits + u32::BITS - 1) / u32::BITS;
let full_value = if value { !0 } else { 0 };
// Correct the old tail word
let old_last_word = (self.nbits + u32::BITS - 1) / u32::BITS - 1;
if self.nbits % u32::BITS > 0 {
let overhang = self.nbits % u32::BITS; // # of already-used bits
let mask = !((1 << overhang) - 1); // e.g. 5 unused bits => 111110....0
if value {
self.storage[old_last_word] |= mask;
} else {
self.storage[old_last_word] &= !mask;
}
}
// Fill in words after the old tail word
let stop_idx = cmp::min(self.storage.len(), new_nwords);
for idx in range(old_last_word + 1, stop_idx) {
self.storage[idx] = full_value;
}
// Allocate new words, if needed
if new_nwords > self.storage.len() {
let to_add = new_nwords - self.storage.len();
self.storage.grow(to_add, full_value);
// Zero out any unused bits in the new tail word
if value {
let tail_word = new_nwords - 1;
let used_bits = new_nbits % u32::BITS;
self.storage[tail_word] &= (1 << used_bits) - 1;
}
}
// Adjust internal bit count
self.nbits = new_nbits;
}
/// Shortens by one element and returns the removed element.
///
/// # Panics
///
/// Panics if the `Bitv` is empty.
///
/// # Examples
///
/// ```
/// use std::collections::bitv;
///
/// let mut bv = bitv::from_bytes(&[0b01001001]);
/// assert_eq!(bv.pop(), true);
/// assert_eq!(bv.pop(), false);
/// assert_eq!(bv.len(), 6);
/// assert_eq!(bv.to_bytes(), vec!(0b01001000));
/// ```
pub fn pop(&mut self) -> bool {
let ret = self.get(self.nbits - 1);
// If we are freeing up a whole word, make sure it is zeroed out
if self.nbits % u32::BITS == 1 {
self.storage[self.nbits / u32::BITS] = 0;
}
self.nbits -= 1;
ret
}
/// Pushes a `bool` onto the end.
///
/// # Examples
///
/// ```
/// use std::collections::Bitv;
///
/// let mut bv = Bitv::new();
/// bv.push(true);
/// bv.push(false);
/// assert!(bv.eq_vec(&[true, false]));
/// ```
pub fn push(&mut self, elem: bool) {
let insert_pos = self.nbits;
self.nbits += 1;
if self.storage.len() * u32::BITS < self.nbits {
self.storage.push(0);
}
self.set(insert_pos, elem);
}
/// Returns the total number of bits in this vector.
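///
/// # Examples
///
/// A short usage sketch in the same style as the other examples in this module:
///
/// ```
/// use std::collections::Bitv;
///
/// let mut bv = Bitv::new();
/// bv.push(true);
/// bv.push(false);
/// assert_eq!(bv.len(), 2);
/// ```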
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn len(&self) -> uint { self.nbits }
/// Returns `true` if there are no bits in this vector.
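///
/// # Examples
///
/// A brief illustrative sketch, mirroring the surrounding examples:
///
/// ```
/// use std::collections::Bitv;
///
/// let mut bv = Bitv::new();
/// assert!(bv.is_empty());
///
/// bv.push(true);
/// assert!(!bv.is_empty());
/// ```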
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn is_empty(&self) -> bool { self.len() == 0 }
/// Clears all bits in this vector.
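///
/// # Examples
///
/// A minimal sketch of typical use, following the conventions of the examples above:
///
/// ```
/// use std::collections::bitv;
///
/// let mut bv = bitv::from_bytes(&[0b11111111]);
/// bv.clear();
/// assert!(bv.none());
/// ```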
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn clear(&mut self) {
for w in self.storage.iter_mut() { *w = 0u32; }
}
}
/// Transforms a byte-vector into a `Bitv`. Each byte becomes eight bits,
/// with the most significant bits of each byte coming first. Each
/// bit becomes `true` if equal to 1 or `false` if equal to 0.
///
/// # Examples
///
/// ```
/// use std::collections::bitv;
///
/// let bv = bitv::from_bytes(&[0b10100000, 0b00010010]);
/// assert!(bv.eq_vec(&[true, false, true, false,
/// false, false, false, false,
/// false, false, false, true,
/// false, false, true, false]));
/// ```
pub fn from_bytes(bytes: &[u8]) -> Bitv {
from_fn(bytes.len() * 8, |i| {
let b = bytes[i / 8] as u32;
let offset = i % 8;
b >> (7 - offset) & 1 == 1
})
}
/// Creates a `Bitv` of the specified length where the value at each
/// index is `f(index)`.
///
/// # Examples
///
/// ```
/// use std::collections::bitv::from_fn;
///
/// let bv = from_fn(5, |i| { i % 2 == 0 });
/// assert!(bv.eq_vec(&[true, false, true, false, true]));
/// ```
pub fn from_fn<F>(len: uint, mut f: F) -> Bitv where F: FnMut(uint) -> bool {
let mut bitv = Bitv::with_capacity(len, false);
for i in range(0u, len) {
bitv.set(i, f(i));
}
bitv
}
#[stable]
impl Default for Bitv {
#[inline]
#[stable]
fn default() -> Bitv { Bitv::new() }
}
impl FromIterator<bool> for Bitv {
fn from_iter<I:Iterator<bool>>(iterator: I) -> Bitv {
let mut ret = Bitv::new();
ret.extend(iterator);
ret
}
}
impl Extend<bool> for Bitv {
#[inline]
fn extend<I: Iterator<bool>>(&mut self, mut iterator: I) {
let (min, _) = iterator.size_hint();
let nbits = self.nbits;
self.reserve(nbits + min);
for element in iterator {
self.push(element)
}
}
}
impl Clone for Bitv {
#[inline]
fn clone(&self) -> Bitv {
Bitv { storage: self.storage.clone(), nbits: self.nbits }
}
#[inline]
fn clone_from(&mut self, source: &Bitv) {
self.nbits = source.nbits;
self.storage.clone_from(&source.storage);
}
}
impl PartialOrd for Bitv {
#[inline]
fn partial_cmp(&self, other: &Bitv) -> Option<Ordering> {
iter::order::partial_cmp(self.iter(), other.iter())
}
}
impl Ord for Bitv {
#[inline]
fn cmp(&self, other: &Bitv) -> Ordering {
iter::order::cmp(self.iter(), other.iter())
}
}
impl fmt::Show for Bitv {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
for bit in self.iter() {
try!(write!(fmt, "{}", if bit { 1u } else { 0u }));
}
Ok(())
}
}
impl<S: hash::Writer> hash::Hash<S> for Bitv {
fn hash(&self, state: &mut S) {
self.nbits.hash(state);
for (_, elem) in self.mask_words(0) {
elem.hash(state);
}
}
}
impl cmp::PartialEq for Bitv {
#[inline]
fn eq(&self, other: &Bitv) -> bool {
if self.nbits != other.nbits {
return false;
}
self.mask_words(0).zip(other.mask_words(0)).all(|((_, w1), (_, w2))| w1 == w2)
}
}
impl cmp::Eq for Bitv {}
/// An iterator for `Bitv`.
pub struct Bits<'a> {
bitv: &'a Bitv,
next_idx: uint,
end_idx: uint,
}
impl<'a> Iterator<bool> for Bits<'a> {
#[inline]
fn next(&mut self) -> Option<bool> {
if self.next_idx != self.end_idx {
let idx = self.next_idx;
self.next_idx += 1;
Some(self.bitv.get(idx))
} else {
None
}
}
fn size_hint(&self) -> (uint, Option<uint>) {
let rem = self.end_idx - self.next_idx;
(rem, Some(rem))
}
}
impl<'a> DoubleEndedIterator<bool> for Bits<'a> {
#[inline]
fn next_back(&mut self) -> Option<bool> {
if self.next_idx != self.end_idx {
self.end_idx -= 1;
Some(self.bitv.get(self.end_idx))
} else {
None
}
}
}
impl<'a> ExactSizeIterator<bool> for Bits<'a> {}
impl<'a> RandomAccessIterator<bool> for Bits<'a> {
#[inline]
fn indexable(&self) -> uint {
self.end_idx - self.next_idx
}
#[inline]
fn idx(&mut self, index: uint) -> Option<bool> {
if index >= self.indexable() {
None
} else {
Some(self.bitv.get(index))
}
}
}
/// An implementation of a set using a bit vector as an underlying
/// representation for holding unsigned numerical elements.
///
/// Note that the amount of storage necessary is proportional to the largest
/// element stored in the set (viewed as a `uint`), not to the number of
/// elements in the set.
///
/// # Examples
///
/// ```
/// use std::collections::{BitvSet, Bitv};
/// use std::collections::bitv;
///
/// // It's a regular set
/// let mut s = BitvSet::new();
/// s.insert(0);
/// s.insert(3);
/// s.insert(7);
///
/// s.remove(&7);
///
/// if !s.contains(&7) {
/// println!("There is no 7");
/// }
///
/// // Can initialize from a `Bitv`
/// let other = BitvSet::from_bitv(bitv::from_bytes(&[0b11010000]));
///
/// s.union_with(&other);
///
/// // Print 0, 1, 3 in some order
/// for x in s.iter() {
/// println!("{}", x);
/// }
///
/// // Can convert back to a `Bitv`
/// let bv: Bitv = s.into_bitv();
/// assert!(bv.get(3));
/// ```
#[deriving(Clone)]
pub struct BitvSet(Bitv);
impl Default for BitvSet {
#[inline]
fn default() -> BitvSet { BitvSet::new() }
}
impl FromIterator<bool> for BitvSet {
fn from_iter<I:Iterator<bool>>(iterator: I) -> BitvSet {
let mut ret = BitvSet::new();
ret.extend(iterator);
ret
}
}
impl Extend<bool> for BitvSet {
#[inline]
fn extend<I: Iterator<bool>>(&mut self, iterator: I) {
let &BitvSet(ref mut self_bitv) = self;
self_bitv.extend(iterator);
}
}
impl PartialOrd for BitvSet {
#[inline]
fn partial_cmp(&self, other: &BitvSet) -> Option<Ordering> {
let (a_iter, b_iter) = match_words(self.get_ref(), other.get_ref());
iter::order::partial_cmp(a_iter, b_iter)
}
}
impl Ord for BitvSet {
#[inline]
fn cmp(&self, other: &BitvSet) -> Ordering {
let (a_iter, b_iter) = match_words(self.get_ref(), other.get_ref());
iter::order::cmp(a_iter, b_iter)
}
}
impl cmp::PartialEq for BitvSet {
#[inline]
fn eq(&self, other: &BitvSet) -> bool {
let (a_iter, b_iter) = match_words(self.get_ref(), other.get_ref());
iter::order::eq(a_iter, b_iter)
}
}
impl cmp::Eq for BitvSet {}
impl BitvSet {
/// Creates a new bit vector set with initially no contents.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
/// let mut s = BitvSet::new();
/// ```
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn new() -> BitvSet {
BitvSet(Bitv::new())
}
/// Creates a new bit vector set with initially no contents, able to
/// hold `nbits` elements without resizing.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
/// let mut s = BitvSet::with_capacity(100);
/// assert!(s.capacity() >= 100);
/// ```
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn with_capacity(nbits: uint) -> BitvSet {
let bitv = Bitv::with_capacity(nbits, false);
BitvSet::from_bitv(bitv)
}
/// Creates a new bit vector set from the given bit vector.
///
/// # Examples
///
/// ```
/// use std::collections::{bitv, BitvSet};
///
/// let bv = bitv::from_bytes(&[0b01100000]);
/// let s = BitvSet::from_bitv(bv);
///
/// // Print 1, 2 in arbitrary order
/// for x in s.iter() {
/// println!("{}", x);
/// }
/// ```
#[inline]
pub fn from_bitv(mut bitv: Bitv) -> BitvSet {
// Mark every bit as valid
bitv.nbits = bitv.capacity();
BitvSet(bitv)
}
/// Returns the capacity in bits for this bit vector. Inserting any
/// element less than this amount will not trigger a resizing.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut s = BitvSet::with_capacity(100);
/// assert!(s.capacity() >= 100);
/// ```
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn capacity(&self) -> uint {
let &BitvSet(ref bitv) = self;
bitv.capacity()
}
/// Grows the underlying vector to be able to store `size` bits.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut s = BitvSet::new();
/// s.reserve(10);
/// assert!(s.capacity() >= 10);
/// ```
pub fn reserve(&mut self, size: uint) {
let &BitvSet(ref mut bitv) = self;
bitv.reserve(size);
if bitv.nbits < size {
bitv.nbits = bitv.capacity();
}
}
/// Consumes this set to return the underlying bit vector.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut s = BitvSet::new();
/// s.insert(0);
/// s.insert(3);
///
/// let bv = s.into_bitv();
/// assert!(bv.get(0));
/// assert!(bv.get(3));
/// ```
#[inline]
pub fn into_bitv(self) -> Bitv {
let BitvSet(bitv) = self;
bitv
}
/// Returns a reference to the underlying bit vector.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut s = BitvSet::new();
/// s.insert(0);
///
/// let bv = s.get_ref();
/// assert_eq!(bv[0], true);
/// ```
#[inline]
pub fn get_ref<'a>(&'a self) -> &'a Bitv {
let &BitvSet(ref bitv) = self;
bitv
}
#[inline]
fn other_op<F>(&mut self, other: &BitvSet, mut f: F) where F: FnMut(u32, u32) -> u32 {
// Expand the vector if necessary
self.reserve(other.capacity());
// Unwrap Bitvs
let &BitvSet(ref mut self_bitv) = self;
let &BitvSet(ref other_bitv) = other;
// virtually pad other with 0's for equal lengths
let mut other_words = {
let (_, result) = match_words(self_bitv, other_bitv);
result
};
// Apply values found in other
for (i, w) in other_words {
let old = self_bitv.storage[i];
let new = f(old, w);
self_bitv.storage[i] = new;
}
}
/// Truncates the underlying vector to the least length required.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut s = BitvSet::new();
/// s.insert(32183231);
/// s.remove(&32183231);
///
/// // Internal storage will probably be bigger than necessary
/// println!("old capacity: {}", s.capacity());
///
/// // Now should be smaller
/// s.shrink_to_fit();
/// println!("new capacity: {}", s.capacity());
/// ```
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn shrink_to_fit(&mut self) {
let &BitvSet(ref mut bitv) = self;
// Obtain original length
let old_len = bitv.storage.len();
// Obtain coarse trailing zero length
let n = bitv.storage.iter().rev().take_while(|&&n| n == 0).count();
// Truncate
let trunc_len = cmp::max(old_len - n, 1);
bitv.storage.truncate(trunc_len);
bitv.nbits = trunc_len * u32::BITS;
}
/// Iterator over each uint stored in the `BitvSet`.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
/// use std::collections::bitv;
///
/// let s = BitvSet::from_bitv(bitv::from_bytes(&[0b01001010]));
///
/// // Print 1, 4, 6 in arbitrary order
/// for x in s.iter() {
/// println!("{}", x);
/// }
/// ```
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn iter<'a>(&'a self) -> BitPositions<'a> {
BitPositions {set: self, next_idx: 0u}
}
/// Iterator over each uint stored in `self` union `other`.
/// See [union_with](#method.union_with) for an efficient in-place version.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
/// use std::collections::bitv;
///
/// let a = BitvSet::from_bitv(bitv::from_bytes(&[0b01101000]));
/// let b = BitvSet::from_bitv(bitv::from_bytes(&[0b10100000]));
///
/// // Print 0, 1, 2, 4 in arbitrary order
/// for x in a.union(&b) {
/// println!("{}", x);
/// }
/// ```
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn union<'a>(&'a self, other: &'a BitvSet) -> TwoBitPositions<'a> {
fn or(w1: u32, w2: u32) -> u32 { w1 | w2 }
TwoBitPositions {
set: self,
other: other,
merge: or,
current_word: 0u32,
next_idx: 0u
}
}
/// Iterator over each uint stored in `self` intersect `other`.
/// See [intersect_with](#method.intersect_with) for an efficient in-place version.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
/// use std::collections::bitv;
///
/// let a = BitvSet::from_bitv(bitv::from_bytes(&[0b01101000]));
/// let b = BitvSet::from_bitv(bitv::from_bytes(&[0b10100000]));
///
/// // Print 2
/// for x in a.intersection(&b) {
/// println!("{}", x);
/// }
/// ```
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn intersection<'a>(&'a self, other: &'a BitvSet) -> Take<TwoBitPositions<'a>> {
fn bitand(w1: u32, w2: u32) -> u32 { w1 & w2 }
let min = cmp::min(self.capacity(), other.capacity());
TwoBitPositions {
set: self,
other: other,
merge: bitand,
current_word: 0u32,
next_idx: 0
}.take(min)
}
/// Iterator over each uint stored in the `self` setminus `other`.
/// See [difference_with](#method.difference_with) for an efficient in-place version.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
/// use std::collections::bitv;
///
/// let a = BitvSet::from_bitv(bitv::from_bytes(&[0b01101000]));
/// let b = BitvSet::from_bitv(bitv::from_bytes(&[0b10100000]));
///
/// // Print 1, 4 in arbitrary order
/// for x in a.difference(&b) {
/// println!("{}", x);
/// }
///
/// // Note that difference is not symmetric,
/// // and `b - a` means something else.
/// // This prints 0
/// for x in b.difference(&a) {
/// println!("{}", x);
/// }
/// ```
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn difference<'a>(&'a self, other: &'a BitvSet) -> TwoBitPositions<'a> {
fn diff(w1: u32, w2: u32) -> u32 { w1 & !w2 }
TwoBitPositions {
set: self,
other: other,
merge: diff,
current_word: 0u32,
next_idx: 0
}
}
/// Iterator over each uint stored in the symmetric difference of `self` and `other`.
/// See [symmetric_difference_with](#method.symmetric_difference_with) for
/// an efficient in-place version.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
/// use std::collections::bitv;
///
/// let a = BitvSet::from_bitv(bitv::from_bytes(&[0b01101000]));
/// let b = BitvSet::from_bitv(bitv::from_bytes(&[0b10100000]));
///
/// // Print 0, 1, 4 in arbitrary order
/// for x in a.symmetric_difference(&b) {
/// println!("{}", x);
/// }
/// ```
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn symmetric_difference<'a>(&'a self, other: &'a BitvSet) -> TwoBitPositions<'a> {
fn bitxor(w1: u32, w2: u32) -> u32 { w1 ^ w2 }
TwoBitPositions {
set: self,
other: other,
merge: bitxor,
current_word: 0u32,
next_idx: 0
}
}
/// Unions in-place with the specified other bit vector.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
/// use std::collections::bitv;
///
/// let a = 0b01101000;
/// let b = 0b10100000;
/// let res = 0b11101000;
///
/// let mut a = BitvSet::from_bitv(bitv::from_bytes(&[a]));
/// let b = BitvSet::from_bitv(bitv::from_bytes(&[b]));
/// let res = BitvSet::from_bitv(bitv::from_bytes(&[res]));
///
/// a.union_with(&b);
/// assert_eq!(a, res);
/// ```
#[inline]
pub fn union_with(&mut self, other: &BitvSet) {
self.other_op(other, |w1, w2| w1 | w2);
}
/// Intersects in-place with the specified other bit vector.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
/// use std::collections::bitv;
///
/// let a = 0b01101000;
/// let b = 0b10100000;
/// let res = 0b00100000;
///
/// let mut a = BitvSet::from_bitv(bitv::from_bytes(&[a]));
/// let b = BitvSet::from_bitv(bitv::from_bytes(&[b]));
/// let res = BitvSet::from_bitv(bitv::from_bytes(&[res]));
///
/// a.intersect_with(&b);
/// assert_eq!(a, res);
/// ```
#[inline]
pub fn intersect_with(&mut self, other: &BitvSet) {
self.other_op(other, |w1, w2| w1 & w2);
}
/// Makes this bit vector the difference with the specified other bit vector
/// in-place.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
/// use std::collections::bitv;
///
/// let a = 0b01101000;
/// let b = 0b10100000;
/// let a_b = 0b01001000; // a - b
/// let b_a = 0b10000000; // b - a
///
/// let mut bva = BitvSet::from_bitv(bitv::from_bytes(&[a]));
/// let bvb = BitvSet::from_bitv(bitv::from_bytes(&[b]));
/// let bva_b = BitvSet::from_bitv(bitv::from_bytes(&[a_b]));
/// let bvb_a = BitvSet::from_bitv(bitv::from_bytes(&[b_a]));
///
/// bva.difference_with(&bvb);
/// assert_eq!(bva, bva_b);
///
/// let bva = BitvSet::from_bitv(bitv::from_bytes(&[a]));
/// let mut bvb = BitvSet::from_bitv(bitv::from_bytes(&[b]));
///
/// bvb.difference_with(&bva);
/// assert_eq!(bvb, bvb_a);
/// ```
#[inline]
pub fn difference_with(&mut self, other: &BitvSet) {
self.other_op(other, |w1, w2| w1 & !w2);
}
/// Makes this bit vector the symmetric difference with the specified other
/// bit vector in-place.
///
/// # Examples
///
/// ```
/// use std::collections::BitvSet;
/// use std::collections::bitv;
///
/// let a = 0b01101000;
/// let b = 0b10100000;
/// let res = 0b11001000;
///
/// let mut a = BitvSet::from_bitv(bitv::from_bytes(&[a]));
/// let b = BitvSet::from_bitv(bitv::from_bytes(&[b]));
/// let res = BitvSet::from_bitv(bitv::from_bytes(&[res]));
///
/// a.symmetric_difference_with(&b);
/// assert_eq!(a, res);
/// ```
#[inline]
pub fn symmetric_difference_with(&mut self, other: &BitvSet) {
self.other_op(other, |w1, w2| w1 ^ w2);
}
/// Returns the number of set bits in this set.
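///
/// # Examples
///
/// A short usage sketch in the style of the other examples in this module:
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut s = BitvSet::new();
/// assert_eq!(s.len(), 0);
///
/// s.insert(0);
/// s.insert(100);
/// assert_eq!(s.len(), 2);
/// ```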
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn len(&self) -> uint {
let &BitvSet(ref bitv) = self;
bitv.storage.iter().fold(0, |acc, &n| acc + n.count_ones())
}
/// Returns `true` if there are no bits set in this set.
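///
/// # Examples
///
/// A brief illustrative sketch, mirroring the surrounding examples:
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut s = BitvSet::new();
/// assert!(s.is_empty());
///
/// s.insert(7);
/// assert!(!s.is_empty());
/// ```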
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn is_empty(&self) -> bool {
let &BitvSet(ref bitv) = self;
bitv.storage.iter().all(|&n| n == 0)
}
/// Clears all bits in this set.
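///
/// # Examples
///
/// A minimal sketch of typical use, in the same style as the examples above:
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut s = BitvSet::new();
/// s.insert(3);
/// s.clear();
/// assert!(s.is_empty());
/// ```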
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn clear(&mut self) {
let &BitvSet(ref mut bitv) = self;
bitv.clear();
}
/// Returns `true` if this set contains the specified integer.
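///
/// # Examples
///
/// A short usage sketch, following the conventions of the other examples here:
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut s = BitvSet::new();
/// s.insert(3);
/// assert!(s.contains(&3));
/// assert!(!s.contains(&4));
/// ```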
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn contains(&self, value: &uint) -> bool {
let &BitvSet(ref bitv) = self;
*value < bitv.nbits && bitv.get(*value)
}
/// Returns `true` if the set has no elements in common with `other`.
/// This is equivalent to checking for an empty intersection.
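///
/// # Examples
///
/// A brief sketch of the intended use, modelled on the other examples in this module:
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut a = BitvSet::new();
/// let mut b = BitvSet::new();
/// a.insert(1);
/// b.insert(2);
/// assert!(a.is_disjoint(&b));
///
/// b.insert(1);
/// assert!(!a.is_disjoint(&b));
/// ```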
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn is_disjoint(&self, other: &BitvSet) -> bool {
self.intersection(other).next().is_none()
}
/// Returns `true` if the set is a subset of another.
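///
/// # Examples
///
/// A short illustrative sketch in the style of the module's other examples:
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut a = BitvSet::new();
/// let mut b = BitvSet::new();
/// a.insert(2);
/// b.insert(2);
/// b.insert(5);
/// assert!(a.is_subset(&b));
///
/// a.insert(7);
/// assert!(!a.is_subset(&b));
/// ```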
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn is_subset(&self, other: &BitvSet) -> bool {
let &BitvSet(ref self_bitv) = self;
let &BitvSet(ref other_bitv) = other;
// Check that `self` intersect `other` is self
self_bitv.mask_words(0).zip(other_bitv.mask_words(0))
.all(|((_, w1), (_, w2))| w1 & w2 == w1) &&
// Check that `self` setminus `other` is empty
self_bitv.mask_words(other_bitv.storage.len()).all(|(_, w)| w == 0)
}
/// Returns `true` if the set is a superset of another.
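///
/// # Examples
///
/// A brief sketch, mirroring the other examples in this module:
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut a = BitvSet::new();
/// let mut b = BitvSet::new();
/// b.insert(1);
/// b.insert(4);
/// a.insert(1);
/// assert!(b.is_superset(&a));
/// assert!(!a.is_superset(&b));
/// ```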
#[inline]
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn is_superset(&self, other: &BitvSet) -> bool {
other.is_subset(self)
}
/// Adds a value to the set. Returns `true` if the value was not already
/// present in the set.
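///
/// # Examples
///
/// A short usage sketch, following the conventions of the examples above:
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut s = BitvSet::new();
/// assert!(s.insert(2));
/// assert!(!s.insert(2));
/// assert!(s.contains(&2));
/// ```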
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn insert(&mut self, value: uint) -> bool {
if self.contains(&value) {
return false;
}
// Ensure we have enough space to hold the new element
if value >= self.capacity() {
let new_cap = cmp::max(value + 1, self.capacity() * 2);
self.reserve(new_cap);
}
let &BitvSet(ref mut bitv) = self;
bitv.set(value, true);
return true;
}
/// Removes a value from the set. Returns `true` if the value was
/// present in the set.
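///
/// # Examples
///
/// A brief illustrative sketch in the same style as the rest of this module's examples:
///
/// ```
/// use std::collections::BitvSet;
///
/// let mut s = BitvSet::new();
/// s.insert(2);
/// assert!(s.remove(&2));
/// assert!(!s.remove(&2));
/// assert!(!s.contains(&2));
/// ```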
#[unstable = "matches collection reform specification, waiting for dust to settle"]
pub fn remove(&mut self, value: &uint) -> bool {
if !self.contains(value) {
return false;
}
let &BitvSet(ref mut bitv) = self;
bitv.set(*value, false);
return true;
}
}
impl fmt::Show for BitvSet {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
try!(write!(fmt, "{{"));
let mut first = true;
for n in self.iter() {
if !first {
try!(write!(fmt, ", "));
}
try!(write!(fmt, "{}", n));
first = false;
}
write!(fmt, "}}")
}
}
impl<S: hash::Writer> hash::Hash<S> for BitvSet {
fn hash(&self, state: &mut S) {
for pos in self.iter() {
pos.hash(state);
}
}
}
/// An iterator for `BitvSet`.
pub struct BitPositions<'a> {
set: &'a BitvSet,
next_idx: uint
}
/// An iterator combining two `BitvSet` iterators.
pub struct TwoBitPositions<'a> {
set: &'a BitvSet,
other: &'a BitvSet,
merge: fn(u32, u32) -> u32,
current_word: u32,
next_idx: uint
}
impl<'a> Iterator<uint> for BitPositions<'a> {
fn next(&mut self) -> Option<uint> {
while self.next_idx < self.set.capacity() {
let idx = self.next_idx;
self.next_idx += 1;
if self.set.contains(&idx) {
return Some(idx);
}
}
return None;
}
#[inline]
fn size_hint(&self) -> (uint, Option<uint>) {
(0, Some(self.set.capacity() - self.next_idx))
}
}
impl<'a> Iterator<uint> for TwoBitPositions<'a> {
fn next(&mut self) -> Option<uint> {
while self.next_idx < self.set.capacity() ||
self.next_idx < self.other.capacity() {
let bit_idx = self.next_idx % u32::BITS;
if bit_idx == 0 {
let &BitvSet(ref s_bitv) = self.set;
let &BitvSet(ref o_bitv) = self.other;
// Merging the two words is a bit of an awkward dance since
// one Bitv might be longer than the other
let word_idx = self.next_idx / u32::BITS;
let w1 = if word_idx < s_bitv.storage.len() {
s_bitv.storage[word_idx]
} else { 0 };
let w2 = if word_idx < o_bitv.storage.len() {
o_bitv.storage[word_idx]
} else { 0 };
self.current_word = (self.merge)(w1, w2);
}
self.next_idx += 1;
if self.current_word & (1 << bit_idx) != 0 {
return Some(self.next_idx - 1);
}
}
return None;
}
#[inline]
fn size_hint(&self) -> (uint, Option<uint>) {
let cap = cmp::max(self.set.capacity(), self.other.capacity());
(0, Some(cap - self.next_idx))
}
}
#[cfg(test)]
mod tests {
use std::prelude::*;
use std::iter::range_step;
use std::rand;
use std::rand::Rng;
use std::u32;
use test::{Bencher, black_box};
use super::{Bitv, BitvSet, from_fn, from_bytes};
use bitv;
use vec::Vec;
static BENCH_BITS : uint = 1 << 14;
#[test]
fn test_to_str() {
let zerolen = Bitv::new();
assert_eq!(zerolen.to_string(), "");
let eightbits = Bitv::with_capacity(8u, false);
assert_eq!(eightbits.to_string(), "00000000")
}
#[test]
fn test_0_elements() {
let act = Bitv::new();
let exp = Vec::from_elem(0u, false);
assert!(act.eq_vec(exp.as_slice()));
}
#[test]
fn test_1_element() {
let mut act = Bitv::with_capacity(1u, false);
assert!(act.eq_vec(&[false]));
act = Bitv::with_capacity(1u, true);
assert!(act.eq_vec(&[true]));
}
#[test]
fn test_2_elements() {
let mut b = bitv::Bitv::with_capacity(2, false);
b.set(0, true);
b.set(1, false);
assert_eq!(b.to_string(), "10");
}
#[test]
fn test_10_elements() {
let mut act;
// all 0
act = Bitv::with_capacity(10u, false);
assert!((act.eq_vec(
&[false, false, false, false, false, false, false, false, false, false])));
// all 1
act = Bitv::with_capacity(10u, true);
assert!((act.eq_vec(&[true, true, true, true, true, true, true, true, true, true])));
// mixed
act = Bitv::with_capacity(10u, false);
act.set(0u, true);
act.set(1u, true);
act.set(2u, true);
act.set(3u, true);
act.set(4u, true);
assert!((act.eq_vec(&[true, true, true, true, true, false, false, false, false, false])));
// mixed
act = Bitv::with_capacity(10u, false);
act.set(5u, true);
act.set(6u, true);
act.set(7u, true);
act.set(8u, true);
act.set(9u, true);
assert!((act.eq_vec(&[false, false, false, false, false, true, true, true, true, true])));
// mixed
act = Bitv::with_capacity(10u, false);
act.set(0u, true);
act.set(3u, true);
act.set(6u, true);
act.set(9u, true);
assert!((act.eq_vec(&[true, false, false, true, false, false, true, false, false, true])));
}
#[test]
fn test_31_elements() {
let mut act;
// all 0
act = Bitv::with_capacity(31u, false);
assert!(act.eq_vec(
&[false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false]));
// all 1
act = Bitv::with_capacity(31u, true);
assert!(act.eq_vec(
&[true, true, true, true, true, true, true, true, true, true, true, true, true,
true, true, true, true, true, true, true, true, true, true, true, true, true,
true, true, true, true, true]));
// mixed
act = Bitv::with_capacity(31u, false);
act.set(0u, true);
act.set(1u, true);
act.set(2u, true);
act.set(3u, true);
act.set(4u, true);
act.set(5u, true);
act.set(6u, true);
act.set(7u, true);
assert!(act.eq_vec(
&[true, true, true, true, true, true, true, true, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false]));
// mixed
act = Bitv::with_capacity(31u, false);
act.set(16u, true);
act.set(17u, true);
act.set(18u, true);
act.set(19u, true);
act.set(20u, true);
act.set(21u, true);
act.set(22u, true);
act.set(23u, true);
assert!(act.eq_vec(
&[false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, true, true, true, true, true, true, true, true,
false, false, false, false, false, false, false]));
// mixed
act = Bitv::with_capacity(31u, false);
act.set(24u, true);
act.set(25u, true);
act.set(26u, true);
act.set(27u, true);
act.set(28u, true);
act.set(29u, true);
act.set(30u, true);
assert!(act.eq_vec(
&[false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false,
false, false, true, true, true, true, true, true, true]));
// mixed
act = Bitv::with_capacity(31u, false);
act.set(3u, true);
act.set(17u, true);
act.set(30u, true);
assert!(act.eq_vec(
&[false, false, false, true, false, false, false, false, false, false, false, false,
false, false, false, false, false, true, false, false, false, false, false, false,
false, false, false, false, false, false, true]));
}
#[test]
fn test_32_elements() {
let mut act;
// all 0
act = Bitv::with_capacity(32u, false);
assert!(act.eq_vec(
&[false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false]));
// all 1
act = Bitv::with_capacity(32u, true);
assert!(act.eq_vec(
&[true, true, true, true, true, true, true, true, true, true, true, true, true,
true, true, true, true, true, true, true, true, true, true, true, true, true,
true, true, true, true, true, true]));
// mixed
act = Bitv::with_capacity(32u, false);
act.set(0u, true);
act.set(1u, true);
act.set(2u, true);
act.set(3u, true);
act.set(4u, true);
act.set(5u, true);
act.set(6u, true);
act.set(7u, true);
assert!(act.eq_vec(
&[true, true, true, true, true, true, true, true, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false]));
// mixed
act = Bitv::with_capacity(32u, false);
act.set(16u, true);
act.set(17u, true);
act.set(18u, true);
act.set(19u, true);
act.set(20u, true);
act.set(21u, true);
act.set(22u, true);
act.set(23u, true);
assert!(act.eq_vec(
&[false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, true, true, true, true, true, true, true, true,
false, false, false, false, false, false, false, false]));
// mixed
act = Bitv::with_capacity(32u, false);
act.set(24u, true);
act.set(25u, true);
act.set(26u, true);
act.set(27u, true);
act.set(28u, true);
act.set(29u, true);
act.set(30u, true);
act.set(31u, true);
assert!(act.eq_vec(
&[false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false,
false, false, true, true, true, true, true, true, true, true]));
// mixed
act = Bitv::with_capacity(32u, false);
act.set(3u, true);
act.set(17u, true);
act.set(30u, true);
act.set(31u, true);
assert!(act.eq_vec(
&[false, false, false, true, false, false, false, false, false, false, false, false,
false, false, false, false, false, true, false, false, false, false, false, false,
false, false, false, false, false, false, true, true]));
}
#[test]
fn test_33_elements() {
let mut act;
// all 0
act = Bitv::with_capacity(33u, false);
assert!(act.eq_vec(
&[false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false]));
// all 1
act = Bitv::with_capacity(33u, true);
assert!(act.eq_vec(
&[true, true, true, true, true, true, true, true, true, true, true, true, true,
true, true, true, true, true, true, true, true, true, true, true, true, true,
true, true, true, true, true, true, true]));
// mixed
act = Bitv::with_capacity(33u, false);
act.set(0u, true);
act.set(1u, true);
act.set(2u, true);
act.set(3u, true);
act.set(4u, true);
act.set(5u, true);
act.set(6u, true);
act.set(7u, true);
assert!(act.eq_vec(
&[true, true, true, true, true, true, true, true, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false]));
// mixed
act = Bitv::with_capacity(33u, false);
act.set(16u, true);
act.set(17u, true);
act.set(18u, true);
act.set(19u, true);
act.set(20u, true);
act.set(21u, true);
act.set(22u, true);
act.set(23u, true);
assert!(act.eq_vec(
&[false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, true, true, true, true, true, true, true, true,
false, false, false, false, false, false, false, false, false]));
// mixed
act = Bitv::with_capacity(33u, false);
act.set(24u, true);
act.set(25u, true);
act.set(26u, true);
act.set(27u, true);
act.set(28u, true);
act.set(29u, true);
act.set(30u, true);
act.set(31u, true);
assert!(act.eq_vec(
&[false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false,
false, false, true, true, true, true, true, true, true, true, false]));
// mixed
act = Bitv::with_capacity(33u, false);
act.set(3u, true);
act.set(17u, true);
act.set(30u, true);
act.set(31u, true);
act.set(32u, true);
assert!(act.eq_vec(
&[false, false, false, true, false, false, false, false, false, false, false, false,
false, false, false, false, false, true, false, false, false, false, false, false,
false, false, false, false, false, false, true, true, true]));
}
#[test]
fn test_equal_differing_sizes() {
let v0 = Bitv::with_capacity(10u, false);
let v1 = Bitv::with_capacity(11u, false);
assert!(v0 != v1);
}
#[test]
fn test_equal_greatly_differing_sizes() {
let v0 = Bitv::with_capacity(10u, false);
let v1 = Bitv::with_capacity(110u, false);
assert!(v0 != v1);
}
#[test]
fn test_equal_sneaky_small() {
let mut a = bitv::Bitv::with_capacity(1, false);
a.set(0, true);
let mut b = bitv::Bitv::with_capacity(1, true);
b.set(0, true);
assert_eq!(a, b);
}
#[test]
fn test_equal_sneaky_big() {
let mut a = bitv::Bitv::with_capacity(100, false);
for i in range(0u, 100) {
a.set(i, true);
}
let mut b = bitv::Bitv::with_capacity(100, true);
for i in range(0u, 100) {
b.set(i, true);
}
assert_eq!(a, b);
}
#[test]
fn test_from_bytes() {
let bitv = from_bytes(&[0b10110110, 0b00000000, 0b11111111]);
let str = format!("{}{}{}", "10110110", "00000000", "11111111");
assert_eq!(bitv.to_string(), str);
}
#[test]
fn test_to_bytes() {
let mut bv = Bitv::with_capacity(3, true);
bv.set(1, false);
assert_eq!(bv.to_bytes(), vec!(0b10100000));
let mut bv = Bitv::with_capacity(9, false);
bv.set(2, true);
bv.set(8, true);
assert_eq!(bv.to_bytes(), vec!(0b00100000, 0b10000000));
}
#[test]
fn test_from_bools() {
let bools = vec![true, false, true, true];
let bitv: Bitv = bools.iter().map(|n| *n).collect();
assert_eq!(bitv.to_string(), "1011");
}
#[test]
fn test_bitv_set_from_bools() {
let bools = vec![true, false, true, true];
let a: BitvSet = bools.iter().map(|n| *n).collect();
let mut b = BitvSet::new();
b.insert(0);
b.insert(2);
b.insert(3);
assert_eq!(a, b);
}
#[test]
fn test_to_bools() {
let bools = vec!(false, false, true, false, false, true, true, false);
assert_eq!(from_bytes(&[0b00100110]).iter().collect::<Vec<bool>>(), bools);
}
#[test]
fn test_bitv_iterator() {
let bools = vec![true, false, true, true];
let bitv: Bitv = bools.iter().map(|n| *n).collect();
assert_eq!(bitv.iter().collect::<Vec<bool>>(), bools);
let long = Vec::from_fn(10000, |i| i % 2 == 0);
let bitv: Bitv = long.iter().map(|n| *n).collect();
assert_eq!(bitv.iter().collect::<Vec<bool>>(), long)
}
#[test]
fn test_bitv_set_iterator() {
let bools = [true, false, true, true];
let bitv: BitvSet = bools.iter().map(|n| *n).collect();
let idxs: Vec<uint> = bitv.iter().collect();
assert_eq!(idxs, vec!(0, 2, 3));
let long: BitvSet = range(0u, 10000).map(|n| n % 2 == 0).collect();
let real = range_step(0, 10000, 2).collect::<Vec<uint>>();
let idxs: Vec<uint> = long.iter().collect();
assert_eq!(idxs, real);
}
#[test]
fn test_bitv_set_frombitv_init() {
let bools = [true, false];
let lengths = [10, 64, 100];
for &b in bools.iter() {
for &l in lengths.iter() {
let bitset = BitvSet::from_bitv(Bitv::with_capacity(l, b));
assert_eq!(bitset.contains(&1u), b);
assert_eq!(bitset.contains(&(l-1u)), b);
assert!(!bitset.contains(&l))
}
}
}
#[test]
fn test_small_difference() {
let mut b1 = Bitv::with_capacity(3, false);
let mut b2 = Bitv::with_capacity(3, false);
b1.set(0, true);
b1.set(1, true);
b2.set(1, true);
b2.set(2, true);
assert!(b1.difference(&b2));
assert!(b1.get(0));
assert!(!b1.get(1));
assert!(!b1.get(2));
}
#[test]
fn test_big_difference() {
let mut b1 = Bitv::with_capacity(100, false);
let mut b2 = Bitv::with_capacity(100, false);
b1.set(0, true);
b1.set(40, true);
b2.set(40, true);
b2.set(80, true);
assert!(b1.difference(&b2));
assert!(b1.get(0));
assert!(!b1.get(40));
assert!(!b1.get(80));
}
#[test]
fn test_small_clear() {
let mut b = Bitv::with_capacity(14, true);
b.clear();
assert!(b.none());
}
#[test]
fn test_big_clear() {
let mut b = Bitv::with_capacity(140, true);
b.clear();
assert!(b.none());
}
#[test]
fn test_bitv_masking() {
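        // Storage bits past the logical length must not leak into the set, and inserting
        // past the end grows the set without marking any of the bits in between.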
let b = Bitv::with_capacity(140, true);
let mut bs = BitvSet::from_bitv(b);
assert!(bs.contains(&139));
assert!(!bs.contains(&140));
assert!(bs.insert(150));
assert!(!bs.contains(&140));
assert!(!bs.contains(&149));
assert!(bs.contains(&150));
assert!(!bs.contains(&151));
}
#[test]
fn test_bitv_set_basic() {
// calculate nbits with u32::BITS granularity
fn calc_nbits(bits: uint) -> uint {
u32::BITS * ((bits + u32::BITS - 1) / u32::BITS)
}
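        // Capacity always rounds up to a whole number of u32 blocks.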
let mut b = BitvSet::new();
assert_eq!(b.capacity(), calc_nbits(0));
assert!(b.insert(3));
assert_eq!(b.capacity(), calc_nbits(3));
assert!(!b.insert(3));
assert!(b.contains(&3));
assert!(b.insert(4));
assert!(!b.insert(4));
assert!(b.contains(&3));
assert!(b.insert(400));
assert_eq!(b.capacity(), calc_nbits(400));
assert!(!b.insert(400));
assert!(b.contains(&400));
assert_eq!(b.len(), 3);
}
#[test]
fn test_bitv_set_intersection() {
let mut a = BitvSet::new();
let mut b = BitvSet::new();
assert!(a.insert(11));
assert!(a.insert(1));
assert!(a.insert(3));
assert!(a.insert(77));
assert!(a.insert(103));
assert!(a.insert(5));
assert!(b.insert(2));
assert!(b.insert(11));
assert!(b.insert(77));
assert!(b.insert(5));
assert!(b.insert(3));
let expected = [3, 5, 11, 77];
let actual = a.intersection(&b).collect::<Vec<uint>>();
assert_eq!(actual, expected);
}
#[test]
fn test_bitv_set_difference() {
let mut a = BitvSet::new();
let mut b = BitvSet::new();
assert!(a.insert(1));
assert!(a.insert(3));
assert!(a.insert(5));
assert!(a.insert(200));
assert!(a.insert(500));
assert!(b.insert(3));
assert!(b.insert(200));
let expected = [1, 5, 500];
let actual = a.difference(&b).collect::<Vec<uint>>();
assert_eq!(actual, expected);
}
#[test]
fn test_bitv_set_symmetric_difference() {
let mut a = BitvSet::new();
let mut b = BitvSet::new();
assert!(a.insert(1));
assert!(a.insert(3));
assert!(a.insert(5));
assert!(a.insert(9));
assert!(a.insert(11));
assert!(b.insert(3));
assert!(b.insert(9));
assert!(b.insert(14));
assert!(b.insert(220));
let expected = [1, 5, 11, 14, 220];
let actual = a.symmetric_difference(&b).collect::<Vec<uint>>();
assert_eq!(actual, expected);
}
#[test]
fn test_bitv_set_union() {
let mut a = BitvSet::new();
let mut b = BitvSet::new();
assert!(a.insert(1));
assert!(a.insert(3));
assert!(a.insert(5));
assert!(a.insert(9));
assert!(a.insert(11));
assert!(a.insert(160));
assert!(a.insert(19));
assert!(a.insert(24));
assert!(a.insert(200));
assert!(b.insert(1));
assert!(b.insert(5));
assert!(b.insert(9));
assert!(b.insert(13));
assert!(b.insert(19));
let expected = [1, 3, 5, 9, 11, 13, 19, 24, 160, 200];
let actual = a.union(&b).collect::<Vec<uint>>();
assert_eq!(actual, expected);
}
#[test]
fn test_bitv_set_subset() {
let mut set1 = BitvSet::new();
let mut set2 = BitvSet::new();
assert!(set1.is_subset(&set2)); // {} {}
set2.insert(100);
assert!(set1.is_subset(&set2)); // {} { 1 }
set2.insert(200);
assert!(set1.is_subset(&set2)); // {} { 1, 2 }
set1.insert(200);
assert!(set1.is_subset(&set2)); // { 2 } { 1, 2 }
set1.insert(300);
assert!(!set1.is_subset(&set2)); // { 2, 3 } { 1, 2 }
set2.insert(300);
assert!(set1.is_subset(&set2)); // { 2, 3 } { 1, 2, 3 }
set2.insert(400);
assert!(set1.is_subset(&set2)); // { 2, 3 } { 1, 2, 3, 4 }
set2.remove(&100);
assert!(set1.is_subset(&set2)); // { 2, 3 } { 2, 3, 4 }
set2.remove(&300);
assert!(!set1.is_subset(&set2)); // { 2, 3 } { 2, 4 }
set1.remove(&300);
assert!(set1.is_subset(&set2)); // { 2 } { 2, 4 }
}
#[test]
fn test_bitv_set_is_disjoint() {
let a = BitvSet::from_bitv(from_bytes(&[0b10100010]));
let b = BitvSet::from_bitv(from_bytes(&[0b01000000]));
let c = BitvSet::new();
let d = BitvSet::from_bitv(from_bytes(&[0b00110000]));
assert!(!a.is_disjoint(&d));
assert!(!d.is_disjoint(&a));
assert!(a.is_disjoint(&b));
assert!(a.is_disjoint(&c));
assert!(b.is_disjoint(&a));
assert!(b.is_disjoint(&c));
assert!(c.is_disjoint(&a));
assert!(c.is_disjoint(&b));
}
#[test]
fn test_bitv_set_union_with() {
        // a should grow to include larger elements
let mut a = BitvSet::new();
a.insert(0);
let mut b = BitvSet::new();
b.insert(5);
let expected = BitvSet::from_bitv(from_bytes(&[0b10000100]));
a.union_with(&b);
assert_eq!(a, expected);
// Standard
let mut a = BitvSet::from_bitv(from_bytes(&[0b10100010]));
let mut b = BitvSet::from_bitv(from_bytes(&[0b01100010]));
let c = a.clone();
a.union_with(&b);
b.union_with(&c);
assert_eq!(a.len(), 4);
assert_eq!(b.len(), 4);
}
#[test]
fn test_bitv_set_intersect_with() {
// Explicitly 0'ed bits
let mut a = BitvSet::from_bitv(from_bytes(&[0b10100010]));
let mut b = BitvSet::from_bitv(from_bytes(&[0b00000000]));
let c = a.clone();
a.intersect_with(&b);
b.intersect_with(&c);
assert!(a.is_empty());
assert!(b.is_empty());
// Uninitialized bits should behave like 0's
let mut a = BitvSet::from_bitv(from_bytes(&[0b10100010]));
let mut b = BitvSet::new();
let c = a.clone();
a.intersect_with(&b);
b.intersect_with(&c);
assert!(a.is_empty());
assert!(b.is_empty());
// Standard
let mut a = BitvSet::from_bitv(from_bytes(&[0b10100010]));
let mut b = BitvSet::from_bitv(from_bytes(&[0b01100010]));
let c = a.clone();
a.intersect_with(&b);
b.intersect_with(&c);
assert_eq!(a.len(), 2);
assert_eq!(b.len(), 2);
}
#[test]
fn test_bitv_set_difference_with() {
// Explicitly 0'ed bits
let mut a = BitvSet::from_bitv(from_bytes(&[0b00000000]));
let b = BitvSet::from_bitv(from_bytes(&[0b10100010]));
a.difference_with(&b);
assert!(a.is_empty());
// Uninitialized bits should behave like 0's
let mut a = BitvSet::new();
let b = BitvSet::from_bitv(from_bytes(&[0b11111111]));
a.difference_with(&b);
assert!(a.is_empty());
// Standard
let mut a = BitvSet::from_bitv(from_bytes(&[0b10100010]));
let mut b = BitvSet::from_bitv(from_bytes(&[0b01100010]));
let c = a.clone();
a.difference_with(&b);
b.difference_with(&c);
assert_eq!(a.len(), 1);
assert_eq!(b.len(), 1);
}
#[test]
fn test_bitv_set_symmetric_difference_with() {
        // a should grow to include larger elements
let mut a = BitvSet::new();
a.insert(0);
a.insert(1);
let mut b = BitvSet::new();
b.insert(1);
b.insert(5);
let expected = BitvSet::from_bitv(from_bytes(&[0b10000100]));
a.symmetric_difference_with(&b);
assert_eq!(a, expected);
let mut a = BitvSet::from_bitv(from_bytes(&[0b10100010]));
let b = BitvSet::new();
let c = a.clone();
a.symmetric_difference_with(&b);
assert_eq!(a, c);
// Standard
let mut a = BitvSet::from_bitv(from_bytes(&[0b11100010]));
let mut b = BitvSet::from_bitv(from_bytes(&[0b01101010]));
let c = a.clone();
a.symmetric_difference_with(&b);
b.symmetric_difference_with(&c);
assert_eq!(a.len(), 2);
assert_eq!(b.len(), 2);
}
#[test]
fn test_bitv_set_eq() {
let a = BitvSet::from_bitv(from_bytes(&[0b10100010]));
let b = BitvSet::from_bitv(from_bytes(&[0b00000000]));
let c = BitvSet::new();
assert!(a == a);
assert!(a != b);
assert!(a != c);
assert!(b == b);
assert!(b == c);
assert!(c == c);
}
#[test]
fn test_bitv_set_cmp() {
let a = BitvSet::from_bitv(from_bytes(&[0b10100010]));
let b = BitvSet::from_bitv(from_bytes(&[0b00000000]));
let c = BitvSet::new();
assert_eq!(a.cmp(&b), Greater);
assert_eq!(a.cmp(&c), Greater);
assert_eq!(b.cmp(&a), Less);
assert_eq!(b.cmp(&c), Equal);
assert_eq!(c.cmp(&a), Less);
assert_eq!(c.cmp(&b), Equal);
}
#[test]
fn test_bitv_remove() {
let mut a = BitvSet::new();
assert!(a.insert(1));
assert!(a.remove(&1));
assert!(a.insert(100));
assert!(a.remove(&100));
assert!(a.insert(1000));
assert!(a.remove(&1000));
a.shrink_to_fit();
assert_eq!(a.capacity(), u32::BITS);
}
#[test]
fn test_bitv_lt() {
let mut a = Bitv::with_capacity(5u, false);
let mut b = Bitv::with_capacity(5u, false);
assert!(!(a < b) && !(b < a));
b.set(2, true);
assert!(a < b);
a.set(3, true);
assert!(a < b);
a.set(2, true);
assert!(!(a < b) && b < a);
b.set(0, true);
assert!(a < b);
}
#[test]
fn test_ord() {
let mut a = Bitv::with_capacity(5u, false);
let mut b = Bitv::with_capacity(5u, false);
assert!(a <= b && a >= b);
a.set(1, true);
assert!(a > b && a >= b);
assert!(b < a && b <= a);
b.set(1, true);
b.set(2, true);
assert!(b > a && b >= a);
assert!(a < b && a <= b);
}
#[test]
fn test_bitv_clone() {
let mut a = BitvSet::new();
assert!(a.insert(1));
assert!(a.insert(100));
assert!(a.insert(1000));
let mut b = a.clone();
assert!(a == b);
assert!(b.remove(&1));
assert!(a.contains(&1));
assert!(a.remove(&1000));
assert!(b.contains(&1000));
}
#[test]
fn test_small_bitv_tests() {
let v = from_bytes(&[0]);
assert!(!v.all());
assert!(!v.any());
assert!(v.none());
let v = from_bytes(&[0b00010100]);
assert!(!v.all());
assert!(v.any());
assert!(!v.none());
let v = from_bytes(&[0xFF]);
assert!(v.all());
assert!(v.any());
assert!(!v.none());
}
#[test]
fn test_big_bitv_tests() {
let v = from_bytes(&[ // 88 bits
0, 0, 0, 0,
0, 0, 0, 0,
0, 0, 0]);
assert!(!v.all());
assert!(!v.any());
assert!(v.none());
let v = from_bytes(&[ // 88 bits
0, 0, 0b00010100, 0,
0, 0, 0, 0b00110100,
0, 0, 0]);
assert!(!v.all());
assert!(v.any());
assert!(!v.none());
let v = from_bytes(&[ // 88 bits
0xFF, 0xFF, 0xFF, 0xFF,
0xFF, 0xFF, 0xFF, 0xFF,
0xFF, 0xFF, 0xFF]);
assert!(v.all());
assert!(v.any());
assert!(!v.none());
}
#[test]
fn test_bitv_push_pop() {
let mut s = Bitv::with_capacity(5 * u32::BITS - 2, false);
assert_eq!(s.len(), 5 * u32::BITS - 2);
assert_eq!(s.get(5 * u32::BITS - 3), false);
s.push(true);
s.push(true);
assert_eq!(s.get(5 * u32::BITS - 2), true);
assert_eq!(s.get(5 * u32::BITS - 1), true);
// Here the internal vector will need to be extended
s.push(false);
assert_eq!(s.get(5 * u32::BITS), false);
s.push(false);
assert_eq!(s.get(5 * u32::BITS + 1), false);
assert_eq!(s.len(), 5 * u32::BITS + 2);
// Pop it all off
assert_eq!(s.pop(), false);
assert_eq!(s.pop(), false);
assert_eq!(s.pop(), true);
assert_eq!(s.pop(), true);
assert_eq!(s.len(), 5 * u32::BITS - 2);
}
#[test]
fn test_bitv_truncate() {
let mut s = Bitv::with_capacity(5 * u32::BITS, true);
assert_eq!(s, Bitv::with_capacity(5 * u32::BITS, true));
assert_eq!(s.len(), 5 * u32::BITS);
s.truncate(4 * u32::BITS);
assert_eq!(s, Bitv::with_capacity(4 * u32::BITS, true));
assert_eq!(s.len(), 4 * u32::BITS);
        // Truncating to a size > s.len() should be a no-op
s.truncate(5 * u32::BITS);
assert_eq!(s, Bitv::with_capacity(4 * u32::BITS, true));
assert_eq!(s.len(), 4 * u32::BITS);
s.truncate(3 * u32::BITS - 10);
assert_eq!(s, Bitv::with_capacity(3 * u32::BITS - 10, true));
assert_eq!(s.len(), 3 * u32::BITS - 10);
s.truncate(0);
assert_eq!(s, Bitv::with_capacity(0, true));
assert_eq!(s.len(), 0);
}
#[test]
fn test_bitv_reserve() {
let mut s = Bitv::with_capacity(5 * u32::BITS, true);
// Check capacity
assert_eq!(s.capacity(), 5 * u32::BITS);
s.reserve(2 * u32::BITS);
assert_eq!(s.capacity(), 5 * u32::BITS);
s.reserve(7 * u32::BITS);
assert_eq!(s.capacity(), 7 * u32::BITS);
s.reserve(7 * u32::BITS);
assert_eq!(s.capacity(), 7 * u32::BITS);
s.reserve(7 * u32::BITS + 1);
assert_eq!(s.capacity(), 8 * u32::BITS);
// Check that length hasn't changed
assert_eq!(s.len(), 5 * u32::BITS);
s.push(true);
s.push(false);
s.push(true);
assert_eq!(s.get(5 * u32::BITS - 1), true);
assert_eq!(s.get(5 * u32::BITS - 0), true);
assert_eq!(s.get(5 * u32::BITS + 1), false);
assert_eq!(s.get(5 * u32::BITS + 2), true);
}
#[test]
fn test_bitv_grow() {
let mut bitv = from_bytes(&[0b10110110, 0b00000000, 0b10101010]);
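        // grow(n, bit) appends n additional bits, all initialized to `bit`.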
bitv.grow(32, true);
assert_eq!(bitv, from_bytes(&[0b10110110, 0b00000000, 0b10101010,
0xFF, 0xFF, 0xFF, 0xFF]));
bitv.grow(64, false);
assert_eq!(bitv, from_bytes(&[0b10110110, 0b00000000, 0b10101010,
0xFF, 0xFF, 0xFF, 0xFF, 0, 0, 0, 0, 0, 0, 0, 0]));
bitv.grow(16, true);
assert_eq!(bitv, from_bytes(&[0b10110110, 0b00000000, 0b10101010,
0xFF, 0xFF, 0xFF, 0xFF, 0, 0, 0, 0, 0, 0, 0, 0, 0xFF, 0xFF]));
}
#[test]
fn test_bitv_extend() {
let mut bitv = from_bytes(&[0b10110110, 0b00000000, 0b11111111]);
let ext = from_bytes(&[0b01001001, 0b10010010, 0b10111101]);
bitv.extend(ext.iter());
assert_eq!(bitv, from_bytes(&[0b10110110, 0b00000000, 0b11111111,
0b01001001, 0b10010010, 0b10111101]));
}
#[test]
fn test_bitv_set_show() {
let mut s = BitvSet::new();
s.insert(1);
s.insert(10);
s.insert(50);
s.insert(2);
assert_eq!("{1, 2, 10, 50}", s.to_string());
}
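    // The benchmarks share a deterministically seeded RNG so runs are repeatable.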
fn rng() -> rand::IsaacRng {
let seed: &[_] = &[1, 2, 3, 4, 5, 6, 7, 8, 9, 0];
rand::SeedableRng::from_seed(seed)
}
#[bench]
fn bench_uint_small(b: &mut Bencher) {
let mut r = rng();
let mut bitv = 0 as uint;
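        // black_box keeps the accumulated value observable so the optimizer
        // cannot discard the loop body.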
b.iter(|| {
for _ in range(0u, 100) {
bitv |= 1 << ((r.next_u32() as uint) % u32::BITS);
}
black_box(&bitv)
});
}
#[bench]
fn bench_bitv_set_big_fixed(b: &mut Bencher) {
let mut r = rng();
let mut bitv = Bitv::with_capacity(BENCH_BITS, false);
b.iter(|| {
for _ in range(0u, 100) {
bitv.set((r.next_u32() as uint) % BENCH_BITS, true);
}
black_box(&bitv)
});
}
#[bench]
fn bench_bitv_set_big_variable(b: &mut Bencher) {
let mut r = rng();
let mut bitv = Bitv::with_capacity(BENCH_BITS, false);
b.iter(|| {
for _ in range(0u, 100) {
bitv.set((r.next_u32() as uint) % BENCH_BITS, r.gen());
}
black_box(&bitv);
});
}
#[bench]
fn bench_bitv_set_small(b: &mut Bencher) {
let mut r = rng();
let mut bitv = Bitv::with_capacity(u32::BITS, false);
b.iter(|| {
for _ in range(0u, 100) {
bitv.set((r.next_u32() as uint) % u32::BITS, true);
}
black_box(&bitv);
});
}
#[bench]
fn bench_bitvset_small(b: &mut Bencher) {
let mut r = rng();
let mut bitv = BitvSet::new();
b.iter(|| {
for _ in range(0u, 100) {
bitv.insert((r.next_u32() as uint) % u32::BITS);
}
black_box(&bitv);
});
}
#[bench]
fn bench_bitvset_big(b: &mut Bencher) {
let mut r = rng();
let mut bitv = BitvSet::new();
b.iter(|| {
for _ in range(0u, 100) {
bitv.insert((r.next_u32() as uint) % BENCH_BITS);
}
black_box(&bitv);
});
}
#[bench]
fn bench_bitv_big_union(b: &mut Bencher) {
let mut b1 = Bitv::with_capacity(BENCH_BITS, false);
let b2 = Bitv::with_capacity(BENCH_BITS, false);
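        // As with `difference` in the tests above, `union` merges b2 into b1 in place;
        // the benchmark measures that merge over BENCH_BITS bits.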
b.iter(|| {
b1.union(&b2)
})
}
#[bench]
fn bench_bitv_small_iter(b: &mut Bencher) {
let bitv = Bitv::with_capacity(u32::BITS, false);
b.iter(|| {
let mut sum = 0u;
for _ in range(0u, 10) {
for pres in bitv.iter() {
sum += pres as uint;
}
}
sum
})
}
#[bench]
fn bench_bitv_big_iter(b: &mut Bencher) {
let bitv = Bitv::with_capacity(BENCH_BITS, false);
b.iter(|| {
let mut sum = 0u;
for pres in bitv.iter() {
sum += pres as uint;
}
sum
})
}
#[bench]
fn bench_bitvset_iter(b: &mut Bencher) {
let bitv = BitvSet::from_bitv(from_fn(BENCH_BITS,
|idx| {idx % 3 == 0}));
b.iter(|| {
let mut sum = 0u;
for idx in bitv.iter() {
sum += idx as uint;
}
sum
})
}
}