// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! A growable list type with heap-allocated contents, written `Vec<T>` but pronounced 'vector.'
//!
//! Vectors have `O(1)` indexing, push (to the end) and pop (from the end).
//!
//! # Examples
//!
//! Explicitly creating a `Vec<T>` with `new()`:
//!
//! ```
//! let xs: Vec<i32> = Vec::new();
//! ```
//!
//! Using the `vec!` macro:
//!
//! ```
//! let ys: Vec<i32> = vec![];
//!
//! let zs = vec![1i32, 2, 3, 4, 5];
//! ```
//!
//! Push:
//!
//! ```
//! let mut xs = vec![1i32, 2];
//!
//! xs.push(3);
//! ```
//!
//! And pop:
//!
//! ```
//! let mut xs = vec![1i32, 2];
//!
//! let two = xs.pop();
//! ```
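//!
//! Indexing (constant time, as noted above):
//!
//! ```
//! let xs = vec![1i32, 2, 3];
//!
//! let one = xs[0];
//! ```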
#![stable(feature = "rust1", since = "1.0.0")]
use core::prelude::*;
use alloc::boxed::Box;
use alloc::heap::{EMPTY, allocate, reallocate, deallocate};
use core::borrow::{Cow, IntoCow};
use core::cmp::max;
use core::cmp::{Ordering};
use core::default::Default;
use core::fmt;
use core::hash::{self, Hash};
use core::iter::{repeat, FromIterator, IntoIterator};
use core::marker::{self, ContravariantLifetime, InvariantType};
use core::mem;
use core::nonzero::NonZero;
use core::num::{Int, UnsignedInt};
use core::ops::{Index, IndexMut, Deref, Add};
use core::ops;
use core::ptr;
use core::raw::Slice as RawSlice;
use core::slice;
use core::usize;
/// A growable list type, written `Vec<T>` but pronounced 'vector.'
///
/// # Examples
///
/// ```
/// let mut vec = Vec::new();
/// vec.push(1);
/// vec.push(2);
///
/// assert_eq!(vec.len(), 2);
/// assert_eq!(vec[0], 1);
///
/// assert_eq!(vec.pop(), Some(2));
/// assert_eq!(vec.len(), 1);
///
/// vec[0] = 7;
/// assert_eq!(vec[0], 7);
///
/// vec.push_all(&[1, 2, 3]);
///
/// for x in vec.iter() {
/// println!("{}", x);
/// }
/// assert_eq!(vec, vec![7, 1, 2, 3]);
/// ```
///
/// The `vec!` macro is provided to make initialization more convenient:
///
/// ```
/// let mut vec = vec![1, 2, 3];
/// vec.push(4);
/// assert_eq!(vec, vec![1, 2, 3, 4]);
/// ```
///
/// Use a `Vec<T>` as an efficient stack:
///
/// ```
/// let mut stack = Vec::new();
///
/// stack.push(1);
/// stack.push(2);
/// stack.push(3);
///
/// loop {
/// let top = match stack.pop() {
/// None => break, // empty
/// Some(x) => x,
/// };
/// // Prints 3, 2, 1
/// println!("{}", top);
/// }
/// ```
///
/// # Capacity and reallocation
///
/// The capacity of a vector is the amount of space allocated for any future elements that will be
/// added onto the vector. This is not to be confused with the *length* of a vector, which
/// specifies the number of actual elements within the vector. If a vector's length exceeds its
/// capacity, its capacity will automatically be increased, but its elements will have to be
/// reallocated.
///
/// For example, a vector with capacity 10 and length 0 would be an empty vector with space for 10
/// more elements. Pushing 10 or fewer elements onto the vector will not change its capacity or
/// cause reallocation to occur. However, if the vector's length is increased to 11, it will have
/// to reallocate, which can be slow. For this reason, it is recommended to use
/// `Vec::with_capacity` whenever possible to specify how big the vector is expected to get.
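///
/// A short illustration of length versus capacity (the exact growth
/// strategy beyond this is an implementation detail):
///
/// ```
/// let mut vec = Vec::with_capacity(10);
/// vec.push(1);
///
/// // One element stored, but room for ten without reallocating.
/// assert_eq!(vec.len(), 1);
/// assert_eq!(vec.capacity(), 10);
/// ```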
#[unsafe_no_drop_flag]
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Vec<T> {
// Pointer to the allocation; never null (see `Vec::new`).
ptr: NonZero<*mut T>,
// Number of elements that are initialized.
len: usize,
// Number of elements the allocation has room for.
cap: usize,
// Marks that the vector logically owns `T`s.
_own: marker::PhantomData<T>,
}
unsafe impl<T: Send> Send for Vec<T> { }
unsafe impl<T: Sync> Sync for Vec<T> { }
////////////////////////////////////////////////////////////////////////////////
// Inherent methods
////////////////////////////////////////////////////////////////////////////////
impl<T> Vec<T> {
/// Constructs a new, empty `Vec<T>`.
///
/// The vector will not allocate until elements are pushed onto it.
///
/// # Examples
///
/// ```
/// let mut vec: Vec<i32> = Vec::new();
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn new() -> Vec<T> {
// We want ptr to never be NULL, so instead we set it to some arbitrary
// non-null value. This is fine since we never call deallocate on the ptr
// if cap is 0. A NULL slice pointer would break the null pointer
// optimization for enums.
unsafe { Vec::from_raw_parts(EMPTY as *mut T, 0, 0) }
}
/// Constructs a new, empty `Vec<T>` with the specified capacity.
///
/// The vector will be able to hold exactly `capacity` elements without reallocating. If
/// `capacity` is 0, the vector will not allocate.
///
/// It is important to note that this function does not specify the *length* of the returned
/// vector, but only the *capacity*. (For an explanation of the difference between length and
/// capacity, see the main `Vec<T>` docs above, 'Capacity and reallocation'.)
///
/// # Examples
///
/// ```
/// let mut vec: Vec<_> = Vec::with_capacity(10);
///
/// // The vector contains no items, even though it has capacity for more
/// assert_eq!(vec.len(), 0);
///
/// // These are all done without reallocating...
/// for i in 0..10 {
/// vec.push(i);
/// }
///
/// // ...but this may make the vector reallocate
/// vec.push(11);
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn with_capacity(capacity: usize) -> Vec<T> {
if mem::size_of::<T>() == 0 {
unsafe { Vec::from_raw_parts(EMPTY as *mut T, 0, usize::MAX) }
} else if capacity == 0 {
Vec::new()
} else {
let size = capacity.checked_mul(mem::size_of::<T>())
.expect("capacity overflow");
let ptr = unsafe { allocate(size, mem::min_align_of::<T>()) };
if ptr.is_null() { ::alloc::oom() }
unsafe { Vec::from_raw_parts(ptr as *mut T, 0, capacity) }
}
}
/// Creates a `Vec<T>` directly from the raw components of another vector.
///
/// This is highly unsafe, due to the number of invariants that aren't checked.
///
/// # Examples
///
/// ```
/// use std::ptr;
/// use std::mem;
///
/// fn main() {
/// let mut v = vec![1, 2, 3];
///
/// // Pull out the various important pieces of information about `v`
/// let p = v.as_mut_ptr();
/// let len = v.len();
/// let cap = v.capacity();
///
/// unsafe {
/// // Cast `v` into the void: no destructor run, so we are in
/// // complete control of the allocation to which `p` points.
/// mem::forget(v);
///
/// // Overwrite memory with 4, 5, 6
/// for i in 0..len as isize {
/// ptr::write(p.offset(i), 4 + i);
/// }
///
/// // Put everything back together into a Vec
/// let rebuilt = Vec::from_raw_parts(p, len, cap);
/// assert_eq!(rebuilt, vec![4, 5, 6]);
/// }
/// }
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub unsafe fn from_raw_parts(ptr: *mut T, length: usize,
capacity: usize) -> Vec<T> {
Vec {
ptr: NonZero::new(ptr),
len: length,
cap: capacity,
_own: marker::PhantomData,
}
}
/// Creates a vector by copying the elements from a raw pointer.
///
/// This function will copy `elts` contiguous elements starting at `ptr` into a new allocation
/// owned by the returned `Vec<T>`. The elements of the buffer are copied into the vector
/// without cloning, as if `ptr::read()` were called on them.
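///
/// # Examples
///
/// A minimal sketch: copying out of a slice through its raw pointer. The
/// pointer is valid for `elts` reads here because it comes from a live
/// slice; with `i32` elements the lack of cloning is unobservable.
///
/// ```
/// let xs = [1, 2, 3];
///
/// let copied = unsafe { Vec::from_raw_buf(xs.as_ptr(), xs.len()) };
/// assert_eq!(copied, vec![1, 2, 3]);
/// ```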
#[inline]
#[unstable(feature = "collections",
reason = "may be better expressed via composition")]
pub unsafe fn from_raw_buf(ptr: *const T, elts: usize) -> Vec<T> {
let mut dst = Vec::with_capacity(elts);
dst.set_len(elts);
ptr::copy_nonoverlapping_memory(dst.as_mut_ptr(), ptr, elts);
dst
}
/// Returns the number of elements the vector can hold without
/// reallocating.
///
/// # Examples
///
/// ```
/// let vec: Vec<i32> = Vec::with_capacity(10);
/// assert_eq!(vec.capacity(), 10);
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn capacity(&self) -> usize {
self.cap
}
/// Reserves capacity for at least `additional` more elements to be inserted in the given
/// `Vec<T>`. The collection may reserve more space to avoid frequent reallocations.
///
/// # Panics
///
/// Panics if the new capacity overflows `usize`.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1];
/// vec.reserve(10);
/// assert!(vec.capacity() >= 11);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn reserve(&mut self, additional: usize) {
if self.cap - self.len < additional {
let err_msg = "Vec::reserve: `usize` overflow";
let new_cap = self.len.checked_add(additional).expect(err_msg)
.checked_next_power_of_two().expect(err_msg);
self.grow_capacity(new_cap);
}
}
/// Reserves the minimum capacity for exactly `additional` more elements to
/// be inserted in the given `Vec<T>`. Does nothing if the capacity is already
/// sufficient.
///
/// Note that the allocator may give the collection more space than it
/// requests. Therefore capacity can not be relied upon to be precisely
/// minimal. Prefer `reserve` if future insertions are expected.
///
/// # Panics
///
/// Panics if the new capacity overflows `usize`.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1];
/// vec.reserve_exact(10);
/// assert!(vec.capacity() >= 11);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn reserve_exact(&mut self, additional: usize) {
if self.cap - self.len < additional {
match self.len.checked_add(additional) {
None => panic!("Vec::reserve: `usize` overflow"),
Some(new_cap) => self.grow_capacity(new_cap)
}
}
}
/// Shrinks the capacity of the vector as much as possible.
///
/// It will drop down as close as possible to the length but the allocator
/// may still inform the vector that there is space for a few more elements.
///
/// # Examples
///
/// ```
/// let mut vec = Vec::with_capacity(10);
/// vec.push_all(&[1, 2, 3]);
/// assert_eq!(vec.capacity(), 10);
/// vec.shrink_to_fit();
/// assert!(vec.capacity() >= 3);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn shrink_to_fit(&mut self) {
if mem::size_of::<T>() == 0 { return }
if self.len == 0 {
if self.cap != 0 {
unsafe {
dealloc(*self.ptr, self.cap)
}
self.cap = 0;
}
} else if self.cap != self.len {
unsafe {
// Overflow check is unnecessary as the vector is already at
// least this large.
let ptr = reallocate(*self.ptr as *mut u8,
self.cap * mem::size_of::<T>(),
self.len * mem::size_of::<T>(),
mem::min_align_of::<T>()) as *mut T;
if ptr.is_null() { ::alloc::oom() }
self.ptr = NonZero::new(ptr);
}
self.cap = self.len;
}
}
/// Converts the vector into `Box<[T]>`.
///
/// Note that this will drop any excess capacity. Calling this and
/// converting back to a vector with `into_vec()` is equivalent to calling
/// `shrink_to_fit()`.
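///
/// # Examples
///
/// A minimal sketch of the round trip described above:
///
/// ```
/// let v = vec![1, 2, 3];
///
/// let b = v.into_boxed_slice();
/// assert_eq!(b.len(), 3);
///
/// let v2 = b.into_vec();
/// assert_eq!(v2, vec![1, 2, 3]);
/// ```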
#[unstable(feature = "collections")]
pub fn into_boxed_slice(mut self) -> Box<[T]> {
self.shrink_to_fit();
unsafe {
let xs: Box<[T]> = mem::transmute(&mut *self);
mem::forget(self);
xs
}
}
/// Shortens the vector, keeping the first `len` elements and dropping
/// the rest.
///
/// If `len` is greater than the vector's current length, this has no
/// effect.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3, 4];
/// vec.truncate(2);
/// assert_eq!(vec, vec![1, 2]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn truncate(&mut self, len: usize) {
unsafe {
// drop any extra elements
while len < self.len {
// decrement len before the read(), so a panic on Drop doesn't
// re-drop the just-failed value.
self.len -= 1;
ptr::read(self.get_unchecked(self.len));
}
}
}
/// Returns a mutable slice of the elements of `self`.
///
/// # Examples
///
/// ```
/// fn foo(slice: &mut [i32]) {}
///
/// let mut vec = vec![1, 2];
/// foo(vec.as_mut_slice());
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn as_mut_slice(&mut self) -> &mut [T] {
unsafe {
mem::transmute(RawSlice {
data: *self.ptr,
len: self.len,
})
}
}
/// Creates a consuming iterator, that is, one that moves each value out of
/// the vector (from start to end). The vector cannot be used after calling
/// this.
///
/// # Examples
///
/// ```
/// let v = vec!["a".to_string(), "b".to_string()];
/// for s in v.into_iter() {
/// // s has type String, not &String
/// println!("{}", s);
/// }
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn into_iter(self) -> IntoIter<T> {
unsafe {
let ptr = *self.ptr;
let cap = self.cap;
let begin = ptr as *const T;
let end = if mem::size_of::<T>() == 0 {
(ptr as usize + self.len()) as *const T
} else {
ptr.offset(self.len() as isize) as *const T
};
mem::forget(self);
IntoIter { allocation: ptr, cap: cap, ptr: begin, end: end }
}
}
/// Sets the length of a vector.
///
/// This will explicitly set the size of the vector, without actually
/// modifying its buffers, so it is up to the caller to ensure that the
/// vector is actually the specified size.
///
/// # Examples
///
/// ```
/// let mut v = vec![1, 2, 3, 4];
/// unsafe {
/// v.set_len(1);
/// }
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub unsafe fn set_len(&mut self, len: usize) {
self.len = len;
}
/// Removes an element from anywhere in the vector and returns it, replacing
/// it with the last element.
///
/// This does not preserve ordering, but is O(1).
///
/// # Panics
///
/// Panics if `index` is out of bounds.
///
/// # Examples
///
/// ```
/// let mut v = vec!["foo", "bar", "baz", "qux"];
///
/// assert_eq!(v.swap_remove(1), "bar");
/// assert_eq!(v, vec!["foo", "qux", "baz"]);
///
/// assert_eq!(v.swap_remove(0), "foo");
/// assert_eq!(v, vec!["baz", "qux"]);
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn swap_remove(&mut self, index: usize) -> T {
let length = self.len();
self.swap(index, length - 1);
self.pop().unwrap()
}
/// Inserts an element at position `index` within the vector, shifting all
/// elements after position `index` one position to the right.
///
/// # Panics
///
/// Panics if `index` is not between `0` and the vector's length (both
/// bounds inclusive).
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3];
/// vec.insert(1, 4);
/// assert_eq!(vec, vec![1, 4, 2, 3]);
/// vec.insert(4, 5);
/// assert_eq!(vec, vec![1, 4, 2, 3, 5]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn insert(&mut self, index: usize, element: T) {
let len = self.len();
assert!(index <= len);
// space for the new element
self.reserve(1);
unsafe { // infallible
// The spot to put the new value
{
let p = self.as_mut_ptr().offset(index as isize);
// Shift everything over to make space. (Duplicating the
// `index`th element into two consecutive places.)
ptr::copy_memory(p.offset(1), &*p, len - index);
// Write it in, overwriting the first copy of the `index`th
// element.
ptr::write(&mut *p, element);
}
self.set_len(len + 1);
}
}
/// Removes and returns the element at position `index` within the vector,
/// shifting all elements after position `index` one position to the left.
///
/// # Panics
///
/// Panics if `index` is out of bounds.
///
/// # Examples
///
/// ```
/// let mut v = vec![1, 2, 3];
/// assert_eq!(v.remove(1), 2);
/// assert_eq!(v, vec![1, 3]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn remove(&mut self, index: usize) -> T {
let len = self.len();
assert!(index < len);
unsafe { // infallible
let ret;
{
// the place we are taking from.
let ptr = self.as_mut_ptr().offset(index as isize);
// copy it out, unsafely having a copy of the value on
// the stack and in the vector at the same time.
ret = ptr::read(ptr);
// Shift everything down to fill in that spot.
ptr::copy_memory(ptr, &*ptr.offset(1), len - index - 1);
}
self.set_len(len - 1);
ret
}
}
/// Retains only the elements specified by the predicate.
///
/// In other words, removes all elements `e` such that `f(&e)` returns `false`.
/// This method operates in place and preserves the order of the retained
/// elements.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3, 4];
/// vec.retain(|&x| x%2 == 0);
/// assert_eq!(vec, vec![2, 4]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn retain<F>(&mut self, mut f: F) where F: FnMut(&T) -> bool {
let len = self.len();
let mut del = 0;
{
let v = &mut **self;
for i in 0..len {
if !f(&v[i]) {
del += 1;
} else if del > 0 {
v.swap(i-del, i);
}
}
}
if del > 0 {
self.truncate(len - del);
}
}
/// Appends an element to the back of a collection.
///
/// # Panics
///
/// Panics if the number of elements in the vector overflows a `usize`.
///
/// # Examples
///
/// ```rust
/// let mut vec = vec!(1, 2);
/// vec.push(3);
/// assert_eq!(vec, vec!(1, 2, 3));
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn push(&mut self, value: T) {
if mem::size_of::<T>() == 0 {
// zero-size types consume no memory, so we can't rely on the
// address space running out
self.len = self.len.checked_add(1).expect("length overflow");
unsafe { mem::forget(value); }
return
}
if self.len == self.cap {
// Grow by doubling so that repeated pushes are amortized O(1);
// an empty vector jumps straight to a small non-zero capacity.
let old_size = self.cap * mem::size_of::<T>();
let size = max(old_size, 2 * mem::size_of::<T>()) * 2;
if old_size > size { panic!("capacity overflow") }
unsafe {
let ptr = alloc_or_realloc(*self.ptr, old_size, size);
if ptr.is_null() { ::alloc::oom() }
self.ptr = NonZero::new(ptr);
}
self.cap = max(self.cap, 2) * 2;
}
unsafe {
let end = (*self.ptr).offset(self.len as isize);
ptr::write(&mut *end, value);
self.len += 1;
}
}
/// Removes the last element from a vector and returns it, or `None` if it is empty.
///
/// # Examples
///
/// ```rust
/// let mut vec = vec![1, 2, 3];
/// assert_eq!(vec.pop(), Some(3));
/// assert_eq!(vec, vec![1, 2]);
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn pop(&mut self) -> Option<T> {
if self.len == 0 {
None
} else {
unsafe {
self.len -= 1;
Some(ptr::read(self.get_unchecked(self.len())))
}
}
}
/// Moves all the elements of `other` into `self`, leaving `other` empty.
///
/// # Panics
///
/// Panics if the number of elements in the vector overflows a `usize`.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3];
/// let mut vec2 = vec![4, 5, 6];
/// vec.append(&mut vec2);
/// assert_eq!(vec, vec![1, 2, 3, 4, 5, 6]);
/// assert_eq!(vec2, vec![]);
/// ```
#[inline]
#[unstable(feature = "collections",
reason = "new API, waiting for dust to settle")]
pub fn append(&mut self, other: &mut Self) {
if mem::size_of::<T>() == 0 {
// zero-size types consume no memory, so we can't rely on the
// address space running out
self.len = self.len.checked_add(other.len()).expect("length overflow");
unsafe { other.set_len(0) }
return;
}
self.reserve(other.len());
let len = self.len();
unsafe {
ptr::copy_nonoverlapping_memory(
self.get_unchecked_mut(len),
other.as_ptr(),
other.len());
}
self.len += other.len();
unsafe { other.set_len(0); }
}
/// Creates a draining iterator that clears the `Vec` and iterates over
/// the removed items from start to end.
///
/// # Examples
///
/// ```
/// let mut v = vec!["a".to_string(), "b".to_string()];
/// for s in v.drain() {
/// // s has type String, not &String
/// println!("{}", s);
/// }
/// assert!(v.is_empty());
/// ```
#[inline]
#[unstable(feature = "collections",
reason = "matches collection reform specification, waiting for dust to settle")]
pub fn drain(&mut self) -> Drain<T> {
unsafe {
let begin = *self.ptr as *const T;
let end = if mem::size_of::<T>() == 0 {
(*self.ptr as usize + self.len()) as *const T
} else {
(*self.ptr).offset(self.len() as isize) as *const T
};
self.set_len(0);
Drain {
ptr: begin,
end: end,
marker: ContravariantLifetime,
}
}
}
/// Clears the vector, removing all values.
///
/// # Examples
///
/// ```
/// let mut v = vec![1, 2, 3];
///
/// v.clear();
///
/// assert!(v.is_empty());
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn clear(&mut self) {
self.truncate(0)
}
/// Returns the number of elements in the vector.
///
/// # Examples
///
/// ```
/// let a = vec![1, 2, 3];
/// assert_eq!(a.len(), 3);
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn len(&self) -> usize { self.len }
/// Returns `true` if the vector contains no elements.
///
/// # Examples
///
/// ```
/// let mut v = Vec::new();
/// assert!(v.is_empty());
///
/// v.push(1);
/// assert!(!v.is_empty());
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_empty(&self) -> bool { self.len() == 0 }
/// Converts a `Vec<T>` to a `Vec<U>` where `T` and `U` have the same
/// size and, if they are not zero-sized, the same minimal alignment.
///
/// # Panics
///
/// Panics if `T` and `U` have differing sizes or are not zero-sized and
/// have differing minimal alignments.
///
/// # Examples
///
/// ```
/// let v = vec![0, 1, 2];
/// let w = v.map_in_place(|i| i + 3);
/// assert_eq!(w.as_slice(), [3, 4, 5].as_slice());
///
/// #[derive(PartialEq, Debug)]
/// struct Newtype(u8);
/// let bytes = vec![0x11, 0x22];
/// let newtyped_bytes = bytes.map_in_place(|x| Newtype(x));
/// assert_eq!(newtyped_bytes.as_slice(), [Newtype(0x11), Newtype(0x22)].as_slice());
/// ```
#[unstable(feature = "collections",
reason = "API may change to provide stronger guarantees")]
pub fn map_in_place<U, F>(self, mut f: F) -> Vec<U> where F: FnMut(T) -> U {
// FIXME: Assert statically that the types `T` and `U` have the same
// size.
assert!(mem::size_of::<T>() == mem::size_of::<U>());
let mut vec = self;
if mem::size_of::<T>() != 0 {
// FIXME: Assert statically that the types `T` and `U` have the
// same minimal alignment in case they are not zero-sized.
// These asserts are necessary because the `min_align_of` of the
// types are passed to the allocator by `Vec`.
assert!(mem::min_align_of::<T>() == mem::min_align_of::<U>());
// This `as isize` cast is safe, because the size of the elements of the
// vector is not 0, and:
//
// 1) If the size of the elements in the vector is 1, the `isize` may
// overflow, but it has the correct bit pattern so that the
// `.offset()` function will work.
//
// Example:
// Address space 0x0-0xF.
// `u8` array at: 0x1.
// Size of `u8` array: 0x8.
// Calculated `offset`: -0x8.
// After `array.offset(offset)`: 0x9.
// (0x1 + 0x8 = 0x1 - 0x8)
//
// 2) If the size of the elements in the vector is >1, the `usize` ->
// `isize` conversion can't overflow.
let offset = vec.len() as isize;
let start = vec.as_mut_ptr();
let mut pv = PartialVecNonZeroSized {
vec: vec,
start_t: start,
// This points inside the vector, as the vector has length
// `offset`.
end_t: unsafe { start.offset(offset) },
start_u: start as *mut U,
end_u: start as *mut U,
};
// start_t
// start_u
// |
// +-+-+-+-+-+-+
// |T|T|T|...|T|
// +-+-+-+-+-+-+
// | |
// end_u end_t
while pv.end_u as *mut T != pv.end_t {
unsafe {
// start_u start_t
// | |
// +-+-+-+-+-+-+-+-+-+
// |U|...|U|T|T|...|T|
// +-+-+-+-+-+-+-+-+-+
// | |
// end_u end_t
let t = ptr::read(pv.start_t);
// start_u start_t
// | |
// +-+-+-+-+-+-+-+-+-+
// |U|...|U|X|T|...|T|
// +-+-+-+-+-+-+-+-+-+
// | |
// end_u end_t
// We must not panic here, because the cell we just read from is
// still counted as a `T` although it no longer holds a valid `T`.
pv.start_t = pv.start_t.offset(1);
// start_u start_t
// | |
// +-+-+-+-+-+-+-+-+-+
// |U|...|U|X|T|...|T|
// +-+-+-+-+-+-+-+-+-+
// | |
// end_u end_t
// We may panic again.
// The function given by the user might panic.
let u = f(t);
ptr::write(pv.end_u, u);
// start_u start_t
// | |
// +-+-+-+-+-+-+-+-+-+
// |U|...|U|U|T|...|T|
// +-+-+-+-+-+-+-+-+-+
// | |
// end_u end_t
// We should not panic here, because that would leak the `U`
// pointed to by `end_u`.
pv.end_u = pv.end_u.offset(1);
// start_u start_t
// | |
// +-+-+-+-+-+-+-+-+-+
// |U|...|U|U|T|...|T|
// +-+-+-+-+-+-+-+-+-+
// | |
// end_u end_t
// We may panic again.
}
}
// start_u start_t
// | |
// +-+-+-+-+-+-+
// |U|...|U|U|U|
// +-+-+-+-+-+-+
// |
// end_t
// end_u
// Extract `vec` and prevent the destructor of
// `PartialVecNonZeroSized` from running. Note that none of the
// following function calls can panic, thus no resources can be
// leaked (the `vec` member of `PartialVec` is the only one which
// holds an allocation, and it is returned from this function).
unsafe {
let vec_len = pv.vec.len();
let vec_cap = pv.vec.capacity();
let vec_ptr = pv.vec.as_mut_ptr() as *mut U;
mem::forget(pv);
Vec::from_raw_parts(vec_ptr, vec_len, vec_cap)
}
} else {
// Put the `Vec` into the `PartialVecZeroSized` structure and
// prevent the destructor of the `Vec` from running. Since the
// `Vec` contained zero-sized objects, it did not allocate, so we
// are not leaking memory here.
let mut pv = PartialVecZeroSized::<T,U> {
num_t: vec.len(),
num_u: 0,
marker_t: InvariantType,
marker_u: InvariantType,
};
unsafe { mem::forget(vec); }
while pv.num_t != 0 {
unsafe {
// Create a `T` out of thin air and decrement `num_t`. We must
// not panic between these steps, as otherwise a destructor
// would run for a `T` which doesn't exist.
let t = mem::uninitialized();
pv.num_t -= 1;
// The function given by the user might panic.
let u = f(t);
// Forget the `U` and increment `num_u`. This increment
// cannot overflow the `usize` as we only do this for a
// number of times that fits into a `usize` (and start with
// `0`). Again, we should not panic between these steps.
mem::forget(u);
pv.num_u += 1;
}
}
// Create a `Vec` from our `PartialVecZeroSized` and make sure the
// destructor of the latter will not run. None of this can panic.
let mut result = Vec::new();
unsafe {
result.set_len(pv.num_u);
mem::forget(pv);
}
result
}
}
/// Splits the collection into two at the given index.
///
/// Returns a newly allocated `Self`. `self` contains elements `[0, at)`,
/// and the returned `Self` contains elements `[at, len)`.
///
/// Note that the capacity of `self` does not change.
///
/// # Panics
///
/// Panics if `at > len`.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3];
/// let vec2 = vec.split_off(1);
/// assert_eq!(vec, vec![1]);
/// assert_eq!(vec2, vec![2, 3]);
/// ```
#[inline]
#[unstable(feature = "collections",
reason = "new API, waiting for dust to settle")]
pub fn split_off(&mut self, at: usize) -> Self {
assert!(at <= self.len(), "`at` out of bounds");
let other_len = self.len - at;
let mut other = Vec::with_capacity(other_len);
// Unsafely `set_len` and copy items to `other`.
unsafe {
self.set_len(at);
other.set_len(other_len);
ptr::copy_nonoverlapping_memory(
other.as_mut_ptr(),
self.as_ptr().offset(at as isize),
other.len());
}
other
}
}
impl<T: Clone> Vec<T> {
/// Resizes the `Vec` in-place so that `len()` is equal to `new_len`.
///
/// Calls either `extend()` or `truncate()` depending on whether `new_len`
/// is larger than the current value of `len()` or not.
///
/// # Examples
///
/// ```
/// let mut vec = vec!["hello"];
/// vec.resize(3, "world");
/// assert_eq!(vec, vec!["hello", "world", "world"]);
///
/// let mut vec = vec![1, 2, 3, 4];
/// vec.resize(2, 0);
/// assert_eq!(vec, vec![1, 2]);
/// ```
#[unstable(feature = "collections",
reason = "matches collection reform specification; waiting for dust to settle")]
pub fn resize(&mut self, new_len: usize, value: T) {
let len = self.len();
if new_len > len {
self.extend(repeat(value).take(new_len - len));
} else {
self.truncate(new_len);
}
}
/// Appends all elements in a slice to the `Vec`.
///
/// Iterates over the slice `other`, clones each element, and then appends
/// it to this `Vec`. The `other` slice is traversed in-order.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1];
/// vec.push_all(&[2, 3, 4]);
/// assert_eq!(vec, vec![1, 2, 3, 4]);
/// ```
#[inline]
#[unstable(feature = "collections",
reason = "likely to be replaced by a more optimized extend")]
pub fn push_all(&mut self, other: &[T]) {
self.reserve(other.len());
for i in 0..other.len() {
let len = self.len();
// Unsafe code so this can be optimised to a memcpy (or something similarly
// fast) when T is Copy. LLVM is easily confused, so any extra operations
// during the loop can prevent this optimisation.
unsafe {
ptr::write(
self.get_unchecked_mut(len),
other.get_unchecked(i).clone());
self.set_len(len + 1);
}
}
}
}
impl<T: PartialEq> Vec<T> {
/// Removes consecutive repeated elements in the vector.
///
/// If the vector is sorted, this removes all duplicates.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 2, 3, 2];
///
/// vec.dedup();
///
/// assert_eq!(vec, vec![1, 2, 3, 2]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn dedup(&mut self) {
unsafe {
// Although we have a mutable reference to `self`, we cannot make
// *arbitrary* changes. The `PartialEq` comparisons could panic, so we
// must ensure that the vector is in a valid state at all times.
//
// The way that we handle this is by using swaps; we iterate
// over all the elements, swapping as we go so that at the end
// the elements we wish to keep are in the front, and those we
// wish to reject are at the back. We can then truncate the
// vector. This operation is still O(n).
//
// Example: We start in this state, where `r` represents "next
// read" and `w` represents "next write".
//
// r
// +---+---+---+---+---+---+
// | 0 | 1 | 1 | 2 | 3 | 3 |
// +---+---+---+---+---+---+
// w
//
// Comparing self[r] against self[w-1], this is not a duplicate, so
// we swap self[r] and self[w] (no effect as r==w) and then increment both
// r and w, leaving us with:
//
// r
// +---+---+---+---+---+---+
// | 0 | 1 | 1 | 2 | 3 | 3 |
// +---+---+---+---+---+---+
// w
//
// Comparing self[r] against self[w-1], this value is a duplicate,
// so we increment `r` but leave everything else unchanged:
//
// r
// +---+---+---+---+---+---+
// | 0 | 1 | 1 | 2 | 3 | 3 |
// +---+---+---+---+---+---+
// w
//
// Comparing self[r] against self[w-1], this is not a duplicate,
// so swap self[r] and self[w] and advance r and w:
//
// r
// +---+---+---+---+---+---+
// | 0 | 1 | 2 | 1 | 3 | 3 |
// +---+---+---+---+---+---+
// w
//
// Not a duplicate, repeat:
//
// r
// +---+---+---+---+---+---+
// | 0 | 1 | 2 | 3 | 1 | 3 |
// +---+---+---+---+---+---+
// w
//
// Duplicate, advance r. End of vec. Truncate to w.
let ln = self.len();
if ln < 1 { return; }
// Avoid bounds checks by using unsafe pointers.
let p = self.as_mut_ptr();
let mut r = 1;
let mut w = 1;
while r < ln {
let p_r = p.offset(r as isize);
let p_wm1 = p.offset((w - 1) as isize);
if *p_r != *p_wm1 {
if r != w {
let p_w = p_wm1.offset(1);
mem::swap(&mut *p_r, &mut *p_w);
}
w += 1;
}
r += 1;
}
self.truncate(w);
}
}
}
////////////////////////////////////////////////////////////////////////////////
// Internal methods and functions
////////////////////////////////////////////////////////////////////////////////
impl<T> Vec<T> {
/// Reserves capacity for exactly `capacity` elements in the given vector.
///
/// If the capacity for `self` is already equal to or greater than the
/// requested capacity, then no action is taken.
fn grow_capacity(&mut self, capacity: usize) {
if mem::size_of::<T>() == 0 { return }
if capacity > self.cap {
let size = capacity.checked_mul(mem::size_of::<T>())
.expect("capacity overflow");
unsafe {
let ptr = alloc_or_realloc(*self.ptr, self.cap * mem::size_of::<T>(), size);
if ptr.is_null() { ::alloc::oom() }
self.ptr = NonZero::new(ptr);
}
self.cap = capacity;
}
}
}
// FIXME: #13996: need a way to mark the return value as `noalias`
#[inline(never)]
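// Allocates a fresh block when `old_size` is zero and reallocates the
// existing block otherwise, using `T`'s minimal alignment in both cases.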
unsafe fn alloc_or_realloc<T>(ptr: *mut T, old_size: usize, size: usize) -> *mut T {
if old_size == 0 {
allocate(size, mem::min_align_of::<T>()) as *mut T
} else {
reallocate(ptr as *mut u8, old_size, size, mem::min_align_of::<T>()) as *mut T
}
}
#[inline]
unsafe fn dealloc<T>(ptr: *mut T, len: usize) {
if mem::size_of::<T>() != 0 {
deallocate(ptr as *mut u8,
len * mem::size_of::<T>(),
mem::min_align_of::<T>())
}
}
#[doc(hidden)]
#[stable(feature = "rust1", since = "1.0.0")]
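/// Creates a vector of `n` elements by cloning `elem`; this is the helper
/// behind the `vec![elem; n]` form of the `vec!` macro.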
pub fn from_elem<T: Clone>(elem: T, n: usize) -> Vec<T> {
unsafe {
let mut v = Vec::with_capacity(n);
let mut ptr = v.as_mut_ptr();
// Write all elements except the last one
for i in 1..n {
ptr::write(ptr, Clone::clone(&elem));
ptr = ptr.offset(1);
v.set_len(i); // Increment the length in every step in case Clone::clone() panics
}
if n > 0 {
// We can write the last element directly without cloning needlessly
ptr::write(ptr, elem);
v.set_len(n);
}
v
}
}
////////////////////////////////////////////////////////////////////////////////
// Common trait implementations for Vec
////////////////////////////////////////////////////////////////////////////////
#[unstable(feature = "collections")]
impl<T:Clone> Clone for Vec<T> {
fn clone(&self) -> Vec<T> { ::slice::SliceExt::to_vec(&**self) }
fn clone_from(&mut self, other: &Vec<T>) {
// drop anything in self that will not be overwritten
if self.len() > other.len() {
self.truncate(other.len())
}
// reuse the contained values' allocations/resources.
for (place, thing) in self.iter_mut().zip(other.iter()) {
place.clone_from(thing)
}
// self.len <= other.len due to the truncate above, so the
// slice here is always in-bounds.
let slice = &other[self.len()..];
self.push_all(slice);
}
}
impl<S: hash::Writer + hash::Hasher, T: Hash<S>> Hash<S> for Vec<T> {
#[inline]
fn hash(&self, state: &mut S) {
Hash::hash(&**self, state)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> Index<usize> for Vec<T> {
type Output = T;
#[inline]
fn index(&self, index: &usize) -> &T {
// NB built-in indexing via `&[T]`
&(**self)[*index]
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> IndexMut<usize> for Vec<T> {
#[inline]
fn index_mut(&mut self, index: &usize) -> &mut T {
// NB built-in indexing via `&mut [T]`
&mut (**self)[*index]
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ops::Index<ops::Range<usize>> for Vec<T> {
type Output = [T];
#[inline]
fn index(&self, index: &ops::Range<usize>) -> &[T] {
Index::index(&**self, index)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ops::Index<ops::RangeTo<usize>> for Vec<T> {
type Output = [T];
#[inline]
fn index(&self, index: &ops::RangeTo<usize>) -> &[T] {
Index::index(&**self, index)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ops::Index<ops::RangeFrom<usize>> for Vec<T> {
type Output = [T];
#[inline]
fn index(&self, index: &ops::RangeFrom<usize>) -> &[T] {
Index::index(&**self, index)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ops::Index<ops::RangeFull> for Vec<T> {
type Output = [T];
#[inline]
fn index(&self, _index: &ops::RangeFull) -> &[T] {
self.as_slice()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ops::IndexMut<ops::Range<usize>> for Vec<T> {
#[inline]
fn index_mut(&mut self, index: &ops::Range<usize>) -> &mut [T] {
IndexMut::index_mut(&mut **self, index)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ops::IndexMut<ops::RangeTo<usize>> for Vec<T> {
#[inline]
fn index_mut(&mut self, index: &ops::RangeTo<usize>) -> &mut [T] {
IndexMut::index_mut(&mut **self, index)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ops::IndexMut<ops::RangeFrom<usize>> for Vec<T> {
#[inline]
fn index_mut(&mut self, index: &ops::RangeFrom<usize>) -> &mut [T] {
IndexMut::index_mut(&mut **self, index)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ops::IndexMut<ops::RangeFull> for Vec<T> {
#[inline]
fn index_mut(&mut self, _index: &ops::RangeFull) -> &mut [T] {
self.as_mut_slice()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ops::Deref for Vec<T> {
type Target = [T];
fn deref(&self) -> &[T] { self.as_slice() }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ops::DerefMut for Vec<T> {
fn deref_mut(&mut self) -> &mut [T] { self.as_mut_slice() }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> FromIterator<T> for Vec<T> {
#[inline]
fn from_iter<I:Iterator<Item=T>>(mut iterator: I) -> Vec<T> {
let (lower, _) = iterator.size_hint();
let mut vector = Vec::with_capacity(lower);
// This function should be the moral equivalent of:
//
// for item in iterator {
// vector.push(item);
// }
//
// This equivalent crucially runs the iterator precisely once. Below, the
// iterator could in principle run twice (once without bounds checks and
// once with). To achieve the "moral equivalent", we use the `if`
// statement below to break out early.
//
// If the first loop has terminated, then we have one of two conditions.
//
// 1. The underlying iterator returned `None`. In this case we are
// guaranteed that fewer than `vector.capacity()` elements have been
// returned, so we break out early.
// 2. The underlying iterator yielded `vector.capacity()` elements and
// has not yielded `None` yet. In this case we run the iterator to
// its end below.
for element in iterator.by_ref().take(vector.capacity()) {
let len = vector.len();
unsafe {
ptr::write(vector.get_unchecked_mut(len), element);
vector.set_len(len + 1);
}
}
if vector.len() == vector.capacity() {
for element in iterator {
vector.push(element);
}
}
vector
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> IntoIterator for Vec<T> {
type Item = T;
type IntoIter = IntoIter<T>;
fn into_iter(self) -> IntoIter<T> {
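// This delegates to the inherent `Vec::into_iter` method; inherent
// methods take precedence over trait methods in resolution, so this
// is not a self-recursive call.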
self.into_iter()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> IntoIterator for &'a Vec<T> {
type Item = &'a T;
type IntoIter = slice::Iter<'a, T>;
fn into_iter(self) -> slice::Iter<'a, T> {
self.iter()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> IntoIterator for &'a mut Vec<T> {
type Item = &'a mut T;
type IntoIter = slice::IterMut<'a, T>;
fn into_iter(mut self) -> slice::IterMut<'a, T> {
self.iter_mut()
}
}
#[unstable(feature = "collections", reason = "waiting on Extend stability")]
impl<T> Extend<T> for Vec<T> {
#[inline]
fn extend<I: Iterator<Item=T>>(&mut self, iterator: I) {
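// Reserve the iterator's lower size-hint bound up front so the common
// case needs at most one reallocation before the pushes below.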
let (lower, _) = iterator.size_hint();
self.reserve(lower);
for element in iterator {
self.push(element)
}
}
}
impl<A, B> PartialEq<Vec<B>> for Vec<A> where A: PartialEq<B> {
#[inline]
fn eq(&self, other: &Vec<B>) -> bool { PartialEq::eq(&**self, &**other) }
#[inline]
fn ne(&self, other: &Vec<B>) -> bool { PartialEq::ne(&**self, &**other) }
}
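// Generates the symmetric pair of `PartialEq` impls between `Vec<A>` and a
// slice type, so that comparisons work in both directions.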
macro_rules! impl_eq {
($lhs:ty, $rhs:ty) => {
impl<'b, A, B> PartialEq<$rhs> for $lhs where A: PartialEq<B> {
#[inline]
fn eq(&self, other: &$rhs) -> bool { PartialEq::eq(&**self, &**other) }
#[inline]
fn ne(&self, other: &$rhs) -> bool { PartialEq::ne(&**self, &**other) }
}
impl<'b, A, B> PartialEq<$lhs> for $rhs where B: PartialEq<A> {
#[inline]
fn eq(&self, other: &$lhs) -> bool { PartialEq::eq(&**self, &**other) }
#[inline]
fn ne(&self, other: &$lhs) -> bool { PartialEq::ne(&**self, &**other) }
}
}
}
impl_eq! { Vec<A>, &'b [B] }
impl_eq! { Vec<A>, &'b mut [B] }
impl<'a, A, B> PartialEq<Vec<B>> for CowVec<'a, A> where A: PartialEq<B> + Clone {
#[inline]
fn eq(&self, other: &Vec<B>) -> bool { PartialEq::eq(&**self, &**other) }
#[inline]
fn ne(&self, other: &Vec<B>) -> bool { PartialEq::ne(&**self, &**other) }
}
impl<'a, A, B> PartialEq<CowVec<'a, A>> for Vec<B> where A: Clone, B: PartialEq<A> {
#[inline]
fn eq(&self, other: &CowVec<'a, A>) -> bool { PartialEq::eq(&**self, &**other) }
#[inline]
fn ne(&self, other: &CowVec<'a, A>) -> bool { PartialEq::ne(&**self, &**other) }
}
macro_rules! impl_eq_for_cowvec {
($rhs:ty) => {
impl<'a, 'b, A, B> PartialEq<$rhs> for CowVec<'a, A> where A: PartialEq<B> + Clone {
#[inline]
fn eq(&self, other: &$rhs) -> bool { PartialEq::eq(&**self, &**other) }
#[inline]
fn ne(&self, other: &$rhs) -> bool { PartialEq::ne(&**self, &**other) }
}
impl<'a, 'b, A, B> PartialEq<CowVec<'a, A>> for $rhs where A: Clone, B: PartialEq<A> {
#[inline]
fn eq(&self, other: &CowVec<'a, A>) -> bool { PartialEq::eq(&**self, &**other) }
#[inline]
fn ne(&self, other: &CowVec<'a, A>) -> bool { PartialEq::ne(&**self, &**other) }
}
}
}
impl_eq_for_cowvec! { &'b [B] }
impl_eq_for_cowvec! { &'b mut [B] }
#[unstable(feature = "collections",
reason = "waiting on PartialOrd stability")]
impl<T: PartialOrd> PartialOrd for Vec<T> {
#[inline]
fn partial_cmp(&self, other: &Vec<T>) -> Option<Ordering> {
PartialOrd::partial_cmp(&**self, &**other)
}
}
#[unstable(feature = "collections", reason = "waiting on Eq stability")]
impl<T: Eq> Eq for Vec<T> {}
#[unstable(feature = "collections", reason = "waiting on Ord stability")]
impl<T: Ord> Ord for Vec<T> {
#[inline]
fn cmp(&self, other: &Vec<T>) -> Ordering {
Ord::cmp(&**self, &**other)
}
}
impl<T> AsSlice<T> for Vec<T> {
/// Returns a slice into `self`.
///
/// # Examples
///
/// ```
/// fn foo(slice: &[i32]) {}
///
/// let vec = vec![1, 2];
/// foo(vec.as_slice());
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
fn as_slice(&self) -> &[T] {
unsafe {
mem::transmute(RawSlice {
data: *self.ptr,
len: self.len
})
}
}
}
#[unstable(feature = "collections",
reason = "recent addition, needs more experience")]
impl<'a, T: Clone> Add<&'a [T]> for Vec<T> {
type Output = Vec<T>;
#[inline]
fn add(mut self, rhs: &[T]) -> Vec<T> {
self.push_all(rhs);
self
}
}
#[unsafe_destructor]
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> Drop for Vec<T> {
fn drop(&mut self) {
// This is (and should always remain) a no-op if the fields are
// zeroed (when moving out, because of #[unsafe_no_drop_flag]).
if self.cap != 0 {
unsafe {
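// `ptr::read` lifts each element out of the buffer so that its
// destructor runs; the buffer itself is deallocated afterwards.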
for x in &*self {
ptr::read(x);
}
dealloc(*self.ptr, self.cap)
}
}
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> Default for Vec<T> {
#[stable(feature = "rust1", since = "1.0.0")]
fn default() -> Vec<T> {
Vec::new()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: fmt::Debug> fmt::Debug for Vec<T> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fmt::Debug::fmt(&**self, f)
}
}
////////////////////////////////////////////////////////////////////////////////
// Clone-on-write
////////////////////////////////////////////////////////////////////////////////
#[unstable(feature = "collections",
reason = "unclear how valuable this alias is")]
/// A clone-on-write vector
pub type CowVec<'a, T> = Cow<'a, Vec<T>, [T]>;
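//
// A minimal usage sketch (relying on the `IntoCow` impls below): borrowing a
// slice defers any allocation until ownership is actually required.
//
//     let owned = vec![1, 2, 3].into_cow();   // Cow::Owned
//     let slice: &[i32] = &[1, 2, 3];
//     let borrowed = slice.into_cow();        // Cow::Borrowed
//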
#[unstable(feature = "collections")]
impl<'a, T> FromIterator<T> for CowVec<'a, T> where T: Clone {
fn from_iter<I: Iterator<Item=T>>(it: I) -> CowVec<'a, T> {
Cow::Owned(FromIterator::from_iter(it))
}
}
impl<'a, T: 'a> IntoCow<'a, Vec<T>, [T]> for Vec<T> where T: Clone {
fn into_cow(self) -> CowVec<'a, T> {
Cow::Owned(self)
}
}
impl<'a, T> IntoCow<'a, Vec<T>, [T]> for &'a [T] where T: Clone {
fn into_cow(self) -> CowVec<'a, T> {
Cow::Borrowed(self)
}
}
////////////////////////////////////////////////////////////////////////////////
// Iterators
////////////////////////////////////////////////////////////////////////////////
/// An iterator that moves out of a vector.
#[stable(feature = "rust1", since = "1.0.0")]
pub struct IntoIter<T> {
allocation: *mut T, // the block of memory allocated for the vector
cap: usize, // the capacity of the vector
ptr: *const T,
end: *const T
}
unsafe impl<T: Send> Send for IntoIter<T> { }
unsafe impl<T: Sync> Sync for IntoIter<T> { }
impl<T> IntoIter<T> {
#[inline]
/// Drops all items that have not yet been moved and returns the now-empty
/// vector, retaining its original allocation.
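///
/// # Examples
///
/// A sketch of recovering the vector's allocation after partially
/// iterating:
///
/// ```
/// let mut into_iter = vec![1, 2, 3].into_iter();
/// into_iter.next();
/// let vec = into_iter.into_inner();
/// assert!(vec.is_empty());
/// assert!(vec.capacity() >= 3);
/// ```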
#[unstable(feature = "collections")]
pub fn into_inner(mut self) -> Vec<T> {
unsafe {
for _x in self.by_ref() { }
let IntoIter { allocation, cap, ptr: _ptr, end: _end } = self;
mem::forget(self);
Vec::from_raw_parts(allocation, 0, cap)
}
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> Iterator for IntoIter<T> {
type Item = T;
#[inline]
fn next(&mut self) -> Option<T> {
unsafe {
if self.ptr == self.end {
None
} else {
if mem::size_of::<T>() == 0 {
// purposefully don't use 'ptr.offset' because for
// vectors with 0-size elements this would return the
// same pointer.
self.ptr = mem::transmute(self.ptr as usize + 1);
// Use a non-null pointer value
Some(ptr::read(EMPTY as *mut T))
} else {
let old = self.ptr;
self.ptr = self.ptr.offset(1);
Some(ptr::read(old))
}
}
}
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
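// The pointer difference is in bytes, so divide by the element size,
// treating zero-sized elements as one byte each (their count is
// encoded by bumping `ptr` one byte per element in `next`).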
let diff = (self.end as usize) - (self.ptr as usize);
let size = mem::size_of::<T>();
let exact = diff / (if size == 0 {1} else {size});
(exact, Some(exact))
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> DoubleEndedIterator for IntoIter<T> {
#[inline]
fn next_back(&mut self) -> Option<T> {
unsafe {
if self.end == self.ptr {
None
} else {
if mem::size_of::<T>() == 0 {
// See above for why 'ptr.offset' isn't used
self.end = mem::transmute(self.end as usize - 1);
// Use a non-null pointer value
Some(ptr::read(EMPTY as *mut T))
} else {
self.end = self.end.offset(-1);
Some(ptr::read(mem::transmute(self.end)))
}
}
}
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> ExactSizeIterator for IntoIter<T> {}
#[unsafe_destructor]
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> Drop for IntoIter<T> {
fn drop(&mut self) {
// destroy the remaining elements
if self.cap != 0 {
for _x in self.by_ref() {}
unsafe {
dealloc(self.allocation, self.cap);
}
}
}
}
/// An iterator that drains a vector.
#[unsafe_no_drop_flag]
#[unstable(feature = "collections",
reason = "recently added as part of collections reform 2")]
pub struct Drain<'a, T> {
ptr: *const T,
end: *const T,
marker: ContravariantLifetime<'a>,
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> Iterator for Drain<'a, T> {
type Item = T;
#[inline]
fn next(&mut self) -> Option<T> {
unsafe {
if self.ptr == self.end {
None
} else {
if mem::size_of::<T>() == 0 {
// purposefully don't use 'ptr.offset' because for
// vectors with 0-size elements this would return the
// same pointer.
self.ptr = mem::transmute(self.ptr as usize + 1);
// Use a non-null pointer value
Some(ptr::read(EMPTY as *mut T))
} else {
let old = self.ptr;
self.ptr = self.ptr.offset(1);
Some(ptr::read(old))
}
}
}
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
let diff = (self.end as usize) - (self.ptr as usize);
let size = mem::size_of::<T>();
let exact = diff / (if size == 0 {1} else {size});
(exact, Some(exact))
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> DoubleEndedIterator for Drain<'a, T> {
#[inline]
fn next_back(&mut self) -> Option<T> {
unsafe {
if self.end == self.ptr {
None
} else {
if mem::size_of::<T>() == 0 {
// See above for why 'ptr.offset' isn't used
self.end = mem::transmute(self.end as usize - 1);
// Use a non-null pointer value
Some(ptr::read(EMPTY as *mut T))
} else {
self.end = self.end.offset(-1);
Some(ptr::read(self.end))
}
}
}
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> ExactSizeIterator for Drain<'a, T> {}
#[unsafe_destructor]
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> Drop for Drain<'a, T> {
fn drop(&mut self) {
// self.ptr == self.end == null if drop has already been called,
// so we can use #[unsafe_no_drop_flag].
// destroy the remaining elements
for _x in self.by_ref() {}
}
}
////////////////////////////////////////////////////////////////////////////////
// Conversion from &[T] to &Vec<T>
////////////////////////////////////////////////////////////////////////////////
/// Wrapper type providing a `&Vec<T>` reference via `Deref`.
#[unstable(feature = "collections")]
pub struct DerefVec<'a, T> {
x: Vec<T>,
l: ContravariantLifetime<'a>
}
#[unstable(feature = "collections")]
impl<'a, T> Deref for DerefVec<'a, T> {
type Target = Vec<T>;
fn deref<'b>(&'b self) -> &'b Vec<T> {
&self.x
}
}
// Prevent the inner `Vec<T>` from attempting to deallocate memory.
#[unsafe_destructor]
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T> Drop for DerefVec<'a, T> {
fn drop(&mut self) {
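// With `len` and `cap` zeroed, the inner `Vec`'s destructor is a
// no-op, so the borrowed slice's memory is never freed here.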
self.x.len = 0;
self.x.cap = 0;
}
}
/// Converts a slice to a wrapper type providing a `&Vec<T>` reference.
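///
/// # Examples
///
/// A sketch of viewing a borrowed slice through `Vec`'s API; the `use` path
/// assumes direct access to the `collections` crate:
///
/// ```
/// use collections::vec::as_vec;
///
/// let xs = [1u8, 2, 3];
/// assert_eq!(as_vec(&xs).len(), 3);
/// ```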
#[unstable(feature = "collections")]
pub fn as_vec<'a, T>(x: &'a [T]) -> DerefVec<'a, T> {
unsafe {
DerefVec {
x: Vec::from_raw_parts(x.as_ptr() as *mut T, x.len(), x.len()),
l: ContravariantLifetime::<'a>
}
}
}
////////////////////////////////////////////////////////////////////////////////
// Partial vec, used for map_in_place
////////////////////////////////////////////////////////////////////////////////
/// An owned, partially type-converted vector of elements with non-zero size.
///
/// `T` and `U` must have the same, non-zero size. They must also have the same
/// alignment.
///
/// When the destructor of this struct runs, all `U`s from `start_u` (incl.) to
/// `end_u` (excl.) and all `T`s from `start_t` (incl.) to `end_t` (excl.) are
/// destructed. Additionally the underlying storage of `vec` will be freed.
struct PartialVecNonZeroSized<T,U> {
vec: Vec<T>,
start_u: *mut U,
end_u: *mut U,
start_t: *mut T,
end_t: *mut T,
}
/// An owned, partially type-converted vector of zero-sized elements.
///
/// When the destructor of this struct runs, all `num_t` `T`s and `num_u` `U`s
/// are destructed.
struct PartialVecZeroSized<T,U> {
num_t: usize,
num_u: usize,
marker_t: InvariantType<T>,
marker_u: InvariantType<U>,
}
#[unsafe_destructor]
impl<T,U> Drop for PartialVecNonZeroSized<T,U> {
fn drop(&mut self) {
unsafe {
// `vec` hasn't been modified until now. Because it still has its
// original length, dropping it would run destructors of `T`s which
// might no longer be there. So first set `vec`'s length to `0`;
// this must happen before anything else to remain memory-safe, as
// the destructors of `U` or `T` might cause unwinding in which
// `vec`'s destructor would be executed.
self.vec.set_len(0);
// We have instances of `U`s and `T`s in `vec`. Destruct them.
while self.start_u != self.end_u {
let _ = ptr::read(self.start_u); // Run a `U` destructor.
self.start_u = self.start_u.offset(1);
}
while self.start_t != self.end_t {
let _ = ptr::read(self.start_t); // Run a `T` destructor.
self.start_t = self.start_t.offset(1);
}
// After this destructor ran, the destructor of `vec` will run,
// deallocating the underlying memory.
}
}
}
#[unsafe_destructor]
impl<T,U> Drop for PartialVecZeroSized<T,U> {
fn drop(&mut self) {
unsafe {
// Destruct the instances of `T` and `U` this struct owns.
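// (This is sound only because the elements are zero-sized: the
// "values" carry no data, so `mem::uninitialized()` merely gives
// each destructor something to run on.)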
while self.num_t != 0 {
let _: T = mem::uninitialized(); // Run a `T` destructor.
self.num_t -= 1;
}
while self.num_u != 0 {
let _: U = mem::uninitialized(); // Run a `U` destructor.
self.num_u -= 1;
}
}
}
}
#[cfg(test)]
mod tests {
use prelude::*;
use core::mem::size_of;
use core::iter::repeat;
use test::Bencher;
use super::as_vec;
struct DropCounter<'a> {
count: &'a mut u32
}
#[unsafe_destructor]
impl<'a> Drop for DropCounter<'a> {
fn drop(&mut self) {
*self.count += 1;
}
}
#[test]
fn test_as_vec() {
let xs = [1u8, 2u8, 3u8];
assert_eq!(&**as_vec(&xs), xs);
}
#[test]
fn test_as_vec_dtor() {
let (mut count_x, mut count_y) = (0, 0);
{
let xs = &[DropCounter { count: &mut count_x }, DropCounter { count: &mut count_y }];
assert_eq!(as_vec(xs).len(), 2);
}
assert_eq!(count_x, 1);
assert_eq!(count_y, 1);
}
#[test]
fn test_small_vec_struct() {
assert!(size_of::<Vec<u8>>() == size_of::<usize>() * 3);
}
#[test]
fn test_double_drop() {
struct TwoVec<T> {
x: Vec<T>,
y: Vec<T>
}
let (mut count_x, mut count_y) = (0, 0);
{
let mut tv = TwoVec {
x: Vec::new(),
y: Vec::new()
};
tv.x.push(DropCounter {count: &mut count_x});
tv.y.push(DropCounter {count: &mut count_y});
// If Vec had a drop flag, here is where it would be zeroed.
// Instead, it should rely on its internal state to prevent
// doing anything significant when dropped multiple times.
drop(tv.x);
// Here tv goes out of scope, tv.y should be dropped, but not tv.x.
}
assert_eq!(count_x, 1);
assert_eq!(count_y, 1);
}
#[test]
fn test_reserve() {
let mut v = Vec::new();
assert_eq!(v.capacity(), 0);
v.reserve(2);
assert!(v.capacity() >= 2);
for i in 0..16 {
v.push(i);
}
assert!(v.capacity() >= 16);
v.reserve(16);
assert!(v.capacity() >= 32);
v.push(16);
v.reserve(16);
assert!(v.capacity() >= 33)
}
#[test]
fn test_extend() {
let mut v = Vec::new();
let mut w = Vec::new();
v.extend(0..3);
for i in 0..3 { w.push(i) }
assert_eq!(v, w);
v.extend(3..10);
for i in 3..10 { w.push(i) }
assert_eq!(v, w);
}
#[test]
fn test_slice_from_mut() {
let mut values = vec![1, 2, 3, 4, 5];
{
let slice = &mut values[2 ..];
assert!(slice == [3, 4, 5]);
for p in slice {
*p += 2;
}
}
assert!(values == [1, 2, 5, 6, 7]);
}
#[test]
fn test_slice_to_mut() {
let mut values = vec![1, 2, 3, 4, 5];
{
let slice = &mut values[.. 2];
assert!(slice == [1, 2]);
for p in slice {
*p += 1;
}
}
assert!(values == [2, 3, 3, 4, 5]);
}
#[test]
fn test_split_at_mut() {
let mut values = vec![1, 2, 3, 4, 5];
{
let (left, right) = values.split_at_mut(2);
{
let left: &[_] = left;
assert!(&left[..left.len()] == &[1, 2][]);
}
for p in left {
*p += 1;
}
{
let right: &[_] = right;
assert!(&right[..right.len()] == &[3, 4, 5][]);
}
for p in right {
*p += 2;
}
}
assert!(values == vec![2, 3, 5, 6, 7]);
}
#[test]
fn test_clone() {
let v: Vec<i32> = vec![];
let w = vec!(1, 2, 3);
assert_eq!(v, v.clone());
let z = w.clone();
assert_eq!(w, z);
// they should be disjoint in memory.
assert!(w.as_ptr() != z.as_ptr())
}
#[test]
fn test_clone_from() {
let mut v = vec!();
let three = vec!(box 1, box 2, box 3);
let two = vec!(box 4, box 5);
// zero, long
v.clone_from(&three);
assert_eq!(v, three);
// equal
v.clone_from(&three);
assert_eq!(v, three);
// long, short
v.clone_from(&two);
assert_eq!(v, two);
// short, long
v.clone_from(&three);
assert_eq!(v, three)
}
#[test]
fn test_retain() {
let mut vec = vec![1, 2, 3, 4];
vec.retain(|&x| x % 2 == 0);
assert!(vec == vec![2, 4]);
}
#[test]
fn zero_sized_values() {
let mut v = Vec::new();
assert_eq!(v.len(), 0);
v.push(());
assert_eq!(v.len(), 1);
v.push(());
assert_eq!(v.len(), 2);
assert_eq!(v.pop(), Some(()));
assert_eq!(v.pop(), Some(()));
assert_eq!(v.pop(), None);
assert_eq!(v.iter().count(), 0);
v.push(());
assert_eq!(v.iter().count(), 1);
v.push(());
assert_eq!(v.iter().count(), 2);
for &() in &v {}
assert_eq!(v.iter_mut().count(), 2);
v.push(());
assert_eq!(v.iter_mut().count(), 3);
v.push(());
assert_eq!(v.iter_mut().count(), 4);
for &mut () in &mut v {}
unsafe { v.set_len(0); }
assert_eq!(v.iter_mut().count(), 0);
}
#[test]
fn test_partition() {
assert_eq!(vec![].into_iter().partition(|x: &i32| *x < 3), (vec![], vec![]));
assert_eq!(vec![1, 2, 3].into_iter().partition(|x| *x < 4), (vec![1, 2, 3], vec![]));
assert_eq!(vec![1, 2, 3].into_iter().partition(|x| *x < 2), (vec![1], vec![2, 3]));
assert_eq!(vec![1, 2, 3].into_iter().partition(|x| *x < 0), (vec![], vec![1, 2, 3]));
}
#[test]
fn test_zip_unzip() {
let z1 = vec![(1, 4), (2, 5), (3, 6)];
let (left, right): (Vec<_>, Vec<_>) = z1.iter().cloned().unzip();
assert_eq!((1, 4), (left[0], right[0]));
assert_eq!((2, 5), (left[1], right[1]));
assert_eq!((3, 6), (left[2], right[2]));
}
#[test]
fn test_unsafe_ptrs() {
unsafe {
// Test on-stack copy-from-buf.
let a = [1, 2, 3];
let ptr = a.as_ptr();
let b = Vec::from_raw_buf(ptr, 3);
assert_eq!(b, vec![1, 2, 3]);
// Test on-heap copy-from-buf.
let c = vec![1, 2, 3, 4, 5];
let ptr = c.as_ptr();
let d = Vec::from_raw_buf(ptr, 5);
assert_eq!(d, vec![1, 2, 3, 4, 5]);
}
}
#[test]
fn test_vec_truncate_drop() {
static mut drops: u32 = 0;
struct Elem(i32);
impl Drop for Elem {
fn drop(&mut self) {
unsafe { drops += 1; }
}
}
let mut v = vec![Elem(1), Elem(2), Elem(3), Elem(4), Elem(5)];
assert_eq!(unsafe { drops }, 0);
v.truncate(3);
assert_eq!(unsafe { drops }, 2);
v.truncate(0);
assert_eq!(unsafe { drops }, 5);
}
#[test]
#[should_fail]
fn test_vec_truncate_fail() {
struct BadElem(i32);
impl Drop for BadElem {
fn drop(&mut self) {
let BadElem(ref mut x) = *self;
if *x == 0xbadbeef {
panic!("BadElem panic: 0xbadbeef")
}
}
}
let mut v = vec![BadElem(1), BadElem(2), BadElem(0xbadbeef), BadElem(4)];
v.truncate(0);
}
#[test]
fn test_index() {
let vec = vec![1, 2, 3];
assert!(vec[1] == 2);
}
#[test]
#[should_fail]
fn test_index_out_of_bounds() {
let vec = vec![1, 2, 3];
let _ = vec[3];
}
#[test]
#[should_fail]
fn test_slice_out_of_bounds_1() {
let x = vec![1, 2, 3, 4, 5];
&x[-1..];
}
#[test]
#[should_fail]
fn test_slice_out_of_bounds_2() {
let x = vec![1, 2, 3, 4, 5];
&x[..6];
}
#[test]
#[should_fail]
fn test_slice_out_of_bounds_3() {
let x = vec![1, 2, 3, 4, 5];
&x[-1..4];
}
#[test]
#[should_fail]
fn test_slice_out_of_bounds_4() {
let x = vec![1, 2, 3, 4, 5];
&x[1..6];
}
#[test]
#[should_fail]
fn test_slice_out_of_bounds_5() {
let x = vec![1, 2, 3, 4, 5];
&x[3..2];
}
#[test]
#[should_fail]
fn test_swap_remove_empty() {
let mut vec = Vec::<i32>::new();
vec.swap_remove(0);
}
#[test]
fn test_move_iter_unwrap() {
let mut vec = Vec::with_capacity(7);
vec.push(1);
vec.push(2);
let ptr = vec.as_ptr();
vec = vec.into_iter().into_inner();
assert_eq!(vec.as_ptr(), ptr);
assert_eq!(vec.capacity(), 7);
assert_eq!(vec.len(), 0);
}
#[test]
#[should_fail]
fn test_map_in_place_incompatible_types_fail() {
let v = vec![0, 1, 2];
v.map_in_place(|_| ());
}
#[test]
fn test_map_in_place() {
let v = vec![0, 1, 2];
assert_eq!(v.map_in_place(|i: u32| i as i32 - 1), [-1, 0, 1]);
}
#[test]
fn test_map_in_place_zero_sized() {
let v = vec![(), ()];
#[derive(PartialEq, Debug)]
struct ZeroSized;
assert_eq!(v.map_in_place(|_| ZeroSized), [ZeroSized, ZeroSized]);
}
#[test]
fn test_map_in_place_zero_drop_count() {
use std::sync::atomic::{AtomicUsize, Ordering, ATOMIC_USIZE_INIT};
#[derive(Clone, PartialEq, Debug)]
struct Nothing;
impl Drop for Nothing { fn drop(&mut self) { } }
#[derive(Clone, PartialEq, Debug)]
2014-11-28 13:20:37 +01:00
struct ZeroSized;
impl Drop for ZeroSized {
fn drop(&mut self) {
DROP_COUNTER.fetch_add(1, Ordering::Relaxed);
2014-11-28 13:20:37 +01:00
}
}
2015-02-04 21:17:19 -05:00
const NUM_ELEMENTS: usize = 2;
static DROP_COUNTER: AtomicUsize = ATOMIC_USIZE_INIT;
2014-11-28 13:20:37 +01:00
let v = repeat(Nothing).take(NUM_ELEMENTS).collect::<Vec<_>>();
2014-11-28 13:20:37 +01:00
DROP_COUNTER.store(0, Ordering::Relaxed);
2014-11-28 13:20:37 +01:00
let v = v.map_in_place(|_| ZeroSized);
assert_eq!(DROP_COUNTER.load(Ordering::Relaxed), 0);
2014-11-28 13:20:37 +01:00
drop(v);
assert_eq!(DROP_COUNTER.load(Ordering::Relaxed), NUM_ELEMENTS);
2014-11-28 13:20:37 +01:00
}
#[test]
fn test_move_items() {
    let vec = vec![1, 2, 3];
    let mut vec2 = vec![];
    for i in vec {
        vec2.push(i);
    }
    assert!(vec2 == vec![1, 2, 3]);
}
#[test]
fn test_move_items_reverse() {
    let vec = vec![1, 2, 3];
    let mut vec2 = vec![];
    for i in vec.into_iter().rev() {
        vec2.push(i);
    }
    assert!(vec2 == vec![3, 2, 1]);
}
#[test]
fn test_move_items_zero_sized() {
    let vec = vec![(), (), ()];
    let mut vec2 = vec![];
    for i in vec {
        vec2.push(i);
    }
    assert!(vec2 == vec![(), (), ()]);
}
#[test]
fn test_drain_items() {
    let mut vec = vec![1, 2, 3];
    let mut vec2 = vec![];
    for i in vec.drain() {
        vec2.push(i);
    }
    assert_eq!(vec, []);
    assert_eq!(vec2, [1, 2, 3]);
}
#[test]
fn test_drain_items_reverse() {
    let mut vec = vec![1, 2, 3];
    let mut vec2 = vec![];
    for i in vec.drain().rev() {
        vec2.push(i);
    }
    assert_eq!(vec, []);
    assert_eq!(vec2, [3, 2, 1]);
}
#[test]
fn test_drain_items_zero_sized() {
    let mut vec = vec![(), (), ()];
    let mut vec2 = vec![];
    for i in vec.drain() {
        vec2.push(i);
    }
    assert_eq!(vec, []);
    assert_eq!(vec2, [(), (), ()]);
}
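// Capacity sketch (an added check, assuming drain() behaves like clear()
// and keeps the existing allocation around for reuse):
#[test]
fn test_drain_keeps_capacity() {
    let mut vec = Vec::with_capacity(10);
    vec.push(1);
    for _ in vec.drain() {}
    assert_eq!(vec.len(), 0);
    assert_eq!(vec.capacity(), 10);
}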
#[test]
fn test_into_boxed_slice() {
    let xs = vec![1, 2, 3];
    let ys = xs.into_boxed_slice();
    assert_eq!(ys, [1, 2, 3]);
}
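// Shrink sketch (an added check, assuming into_boxed_slice drops any excess
// capacity so the box holds exactly len elements):
#[test]
fn test_into_boxed_slice_from_spare_capacity() {
    let mut xs = Vec::with_capacity(10);
    xs.push(1);
    let ys = xs.into_boxed_slice();
    assert_eq!(ys.len(), 1);
    assert_eq!(ys[0], 1);
}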
#[test]
fn test_append() {
let mut vec = vec![1, 2, 3];
let mut vec2 = vec![4, 5, 6];
vec.append(&mut vec2);
assert_eq!(vec, vec![1, 2, 3, 4, 5, 6]);
assert_eq!(vec2, vec![]);
}
#[test]
fn test_split_off() {
let mut vec = vec![1, 2, 3, 4, 5, 6];
let vec2 = vec.split_off(4);
assert_eq!(vec, vec![1, 2, 3, 4]);
assert_eq!(vec2, vec![5, 6]);
}
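// Boundary sketch (an added check, assuming split_off(len) is allowed and
// returns an empty tail, leaving the original contents in place):
#[test]
fn test_split_off_at_end() {
    let mut vec = vec![1, 2, 3];
    let vec2 = vec.split_off(3);
    assert_eq!(vec, vec![1, 2, 3]);
    assert_eq!(vec2, vec![]);
}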
#[bench]
fn bench_new(b: &mut Bencher) {
    b.iter(|| {
        let v: Vec<u32> = Vec::new();
        assert_eq!(v.len(), 0);
        assert_eq!(v.capacity(), 0);
    })
}
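// Each do_bench_* helper below takes the input length as a parameter so a
// small family of #[bench] wrappers can sample lengths 0, 10, 100, and
// 1000; setting b.bytes to the element count makes the harness report
// per-element throughput alongside the raw timings.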
fn do_bench_with_capacity(b: &mut Bencher, src_len: usize) {
    b.bytes = src_len as u64;

    b.iter(|| {
        let v: Vec<u32> = Vec::with_capacity(src_len);
        assert_eq!(v.len(), 0);
        assert_eq!(v.capacity(), src_len);
    })
}
#[bench]
fn bench_with_capacity_0000(b: &mut Bencher) {
    do_bench_with_capacity(b, 0)
}

#[bench]
fn bench_with_capacity_0010(b: &mut Bencher) {
    do_bench_with_capacity(b, 10)
}

#[bench]
fn bench_with_capacity_0100(b: &mut Bencher) {
    do_bench_with_capacity(b, 100)
}

#[bench]
fn bench_with_capacity_1000(b: &mut Bencher) {
    do_bench_with_capacity(b, 1000)
}
fn do_bench_from_fn(b: &mut Bencher, src_len: usize) {
    b.bytes = src_len as u64;

    b.iter(|| {
        let dst = (0..src_len).collect::<Vec<_>>();
        assert_eq!(dst.len(), src_len);
        assert!(dst.iter().enumerate().all(|(i, x)| i == *x));
    })
}
#[bench]
fn bench_from_fn_0000(b: &mut Bencher) {
    do_bench_from_fn(b, 0)
}

#[bench]
fn bench_from_fn_0010(b: &mut Bencher) {
    do_bench_from_fn(b, 10)
}

#[bench]
fn bench_from_fn_0100(b: &mut Bencher) {
    do_bench_from_fn(b, 100)
}

#[bench]
fn bench_from_fn_1000(b: &mut Bencher) {
    do_bench_from_fn(b, 1000)
}
fn do_bench_from_elem(b: &mut Bencher, src_len: usize) {
    b.bytes = src_len as u64;

    b.iter(|| {
        let dst: Vec<usize> = repeat(5).take(src_len).collect();
        assert_eq!(dst.len(), src_len);
        assert!(dst.iter().all(|x| *x == 5));
    })
}
#[bench]
fn bench_from_elem_0000(b: &mut Bencher) {
    do_bench_from_elem(b, 0)
}

#[bench]
fn bench_from_elem_0010(b: &mut Bencher) {
    do_bench_from_elem(b, 10)
}

#[bench]
fn bench_from_elem_0100(b: &mut Bencher) {
    do_bench_from_elem(b, 100)
}

#[bench]
fn bench_from_elem_1000(b: &mut Bencher) {
    do_bench_from_elem(b, 1000)
}
fn do_bench_from_slice(b: &mut Bencher, src_len: usize) {
    let src: Vec<_> = FromIterator::from_iter(0..src_len);

    b.bytes = src_len as u64;

    b.iter(|| {
        let dst = src.clone()[].to_vec();
        assert_eq!(dst.len(), src_len);
        assert!(dst.iter().enumerate().all(|(i, x)| i == *x));
    });
}
#[bench]
fn bench_from_slice_0000(b: &mut Bencher) {
    do_bench_from_slice(b, 0)
}

#[bench]
fn bench_from_slice_0010(b: &mut Bencher) {
    do_bench_from_slice(b, 10)
}

#[bench]
fn bench_from_slice_0100(b: &mut Bencher) {
    do_bench_from_slice(b, 100)
}

#[bench]
fn bench_from_slice_1000(b: &mut Bencher) {
    do_bench_from_slice(b, 1000)
}
fn do_bench_from_iter(b: &mut Bencher, src_len: usize) {
    let src: Vec<_> = FromIterator::from_iter(0..src_len);

    b.bytes = src_len as u64;

    b.iter(|| {
        let dst: Vec<_> = FromIterator::from_iter(src.clone().into_iter());
        assert_eq!(dst.len(), src_len);
        assert!(dst.iter().enumerate().all(|(i, x)| i == *x));
    });
}
#[bench]
fn bench_from_iter_0000(b: &mut Bencher) {
    do_bench_from_iter(b, 0)
}

#[bench]
fn bench_from_iter_0010(b: &mut Bencher) {
    do_bench_from_iter(b, 10)
}

#[bench]
fn bench_from_iter_0100(b: &mut Bencher) {
    do_bench_from_iter(b, 100)
}

#[bench]
fn bench_from_iter_1000(b: &mut Bencher) {
    do_bench_from_iter(b, 1000)
}
fn do_bench_extend(b: &mut Bencher, dst_len: usize, src_len: usize) {
    let dst: Vec<_> = FromIterator::from_iter(0..dst_len);
    let src: Vec<_> = FromIterator::from_iter(dst_len..dst_len + src_len);

    b.bytes = src_len as u64;

    b.iter(|| {
        let mut dst = dst.clone();
        dst.extend(src.clone().into_iter());
        assert_eq!(dst.len(), dst_len + src_len);
        assert!(dst.iter().enumerate().all(|(i, x)| i == *x));
    });
}
#[bench]
fn bench_extend_0000_0000(b: &mut Bencher) {
    do_bench_extend(b, 0, 0)
}

#[bench]
fn bench_extend_0000_0010(b: &mut Bencher) {
    do_bench_extend(b, 0, 10)
}

#[bench]
fn bench_extend_0000_0100(b: &mut Bencher) {
    do_bench_extend(b, 0, 100)
}

#[bench]
fn bench_extend_0000_1000(b: &mut Bencher) {
    do_bench_extend(b, 0, 1000)
}

#[bench]
fn bench_extend_0010_0010(b: &mut Bencher) {
    do_bench_extend(b, 10, 10)
}

#[bench]
fn bench_extend_0100_0100(b: &mut Bencher) {
    do_bench_extend(b, 100, 100)
}

#[bench]
fn bench_extend_1000_1000(b: &mut Bencher) {
    do_bench_extend(b, 1000, 1000)
}
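// The push_all benches below mirror the extend benches above, but append by
// cloning out of a slice rather than by draining an iterator.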
fn do_bench_push_all(b: &mut Bencher, dst_len: usize, src_len: usize) {
    let dst: Vec<_> = FromIterator::from_iter(0..dst_len);
    let src: Vec<_> = FromIterator::from_iter(dst_len..dst_len + src_len);

    b.bytes = src_len as u64;

    b.iter(|| {
        let mut dst = dst.clone();
        dst.push_all(&src);
        assert_eq!(dst.len(), dst_len + src_len);
        assert!(dst.iter().enumerate().all(|(i, x)| i == *x));
    });
}
#[bench]
fn bench_push_all_0000_0000(b: &mut Bencher) {
do_bench_push_all(b, 0, 0)
std: micro-optimize Vec constructors and add benchmarks Generally speaking, inlining doesn't really help out with constructing vectors, except for when we construct a zero-sized vector. This patch allows llvm to optimize this case away in a lot of cases, which shaves off 4-8ns. It's not much, but it might help in some inner loop somewhere. before: running 12 tests test bench_extend_0 ... bench: 123 ns/iter (+/- 6) test bench_extend_5 ... bench: 323 ns/iter (+/- 11) test bench_from_fn_0 ... bench: 7 ns/iter (+/- 0) test bench_from_fn_5 ... bench: 49 ns/iter (+/- 6) test bench_from_iter_0 ... bench: 11 ns/iter (+/- 0) test bench_from_iter_5 ... bench: 176 ns/iter (+/- 11) test bench_from_slice_0 ... bench: 8 ns/iter (+/- 1) test bench_from_slice_5 ... bench: 73 ns/iter (+/- 5) test bench_new ... bench: 0 ns/iter (+/- 0) test bench_with_capacity_0 ... bench: 6 ns/iter (+/- 1) test bench_with_capacity_100 ... bench: 41 ns/iter (+/- 3) test bench_with_capacity_5 ... bench: 40 ns/iter (+/- 2) after: test bench_extend_0 ... bench: 123 ns/iter (+/- 7) test bench_extend_5 ... bench: 339 ns/iter (+/- 27) test bench_from_fn_0 ... bench: 7 ns/iter (+/- 0) test bench_from_fn_5 ... bench: 54 ns/iter (+/- 4) test bench_from_iter_0 ... bench: 11 ns/iter (+/- 1) test bench_from_iter_5 ... bench: 182 ns/iter (+/- 16) test bench_from_slice_0 ... bench: 4 ns/iter (+/- 0) test bench_from_slice_5 ... bench: 62 ns/iter (+/- 3) test bench_new ... bench: 0 ns/iter (+/- 0) test bench_with_capacity_0 ... bench: 0 ns/iter (+/- 0) test bench_with_capacity_100 ... bench: 41 ns/iter (+/- 1) test bench_with_capacity_5 ... bench: 41 ns/iter (+/- 3)
2014-06-02 21:56:29 -07:00
}
#[bench]
2014-07-05 23:07:28 -07:00
fn bench_push_all_0000_0010(b: &mut Bencher) {
do_bench_push_all(b, 0, 10)
}
#[bench]
fn bench_push_all_0000_0100(b: &mut Bencher) {
do_bench_push_all(b, 0, 100)
}
#[bench]
fn bench_push_all_0000_1000(b: &mut Bencher) {
do_bench_push_all(b, 0, 1000)
}
#[bench]
fn bench_push_all_0010_0010(b: &mut Bencher) {
do_bench_push_all(b, 10, 10)
}
#[bench]
fn bench_push_all_0100_0100(b: &mut Bencher) {
do_bench_push_all(b, 100, 100)
}
#[bench]
fn bench_push_all_1000_1000(b: &mut Bencher) {
do_bench_push_all(b, 1000, 1000)
}
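// Moving counterpart of `do_bench_push_all` above: rather than copying out of
// a slice, `dst` consumes an owned iterator over a fresh clone of `src`, so
// the measured time also includes the per-iteration `src.clone()`. The
// numeric suffixes on the `#[bench]` wrappers encode `dst_len` and `src_len`,
// zero-padded to four digits so the names sort and align in the harness output.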
fn do_bench_push_all_move(b: &mut Bencher, dst_len: usize, src_len: usize) {
let dst: Vec<_> = FromIterator::from_iter(0..dst_len);
let src: Vec<_> = FromIterator::from_iter(dst_len..dst_len + src_len);
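// Setting `Bencher::bytes` makes the libtest harness report throughput
// (MB/s) for each run in addition to ns/iter.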
b.bytes = src_len as u64;
b.iter(|| {
let mut dst = dst.clone();
dst.extend(src.clone().into_iter());
assert_eq!(dst.len(), dst_len + src_len);
assert!(dst.iter().enumerate().all(|(i, x)| i == *x));
});
}
#[bench]
fn bench_push_all_move_0000_0000(b: &mut Bencher) {
do_bench_push_all_move(b, 0, 0)
}
#[bench]
fn bench_push_all_move_0000_0010(b: &mut Bencher) {
do_bench_push_all_move(b, 0, 10)
}
#[bench]
fn bench_push_all_move_0000_0100(b: &mut Bencher) {
do_bench_push_all_move(b, 0, 100)
}
#[bench]
fn bench_push_all_move_0000_1000(b: &mut Bencher) {
do_bench_push_all_move(b, 0, 1000)
}
#[bench]
fn bench_push_all_move_0010_0010(b: &mut Bencher) {
do_bench_push_all_move(b, 10, 10)
}
#[bench]
fn bench_push_all_move_0100_0100(b: &mut Bencher) {
do_bench_push_all_move(b, 100, 100)
}
#[bench]
fn bench_push_all_move_1000_1000(b: &mut Bencher) {
do_bench_push_all_move(b, 1000, 1000)
}
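// Measures a plain `Vec::clone` of `src_len` elements; the assertions both
// sanity-check the cloned contents and keep the optimizer from discarding
// the freshly cloned vector.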
fn do_bench_clone(b: &mut Bencher, src_len: usize) {
let src: Vec<usize> = FromIterator::from_iter(0..src_len);
b.bytes = src_len as u64;
b.iter(|| {
let dst = src.clone();
assert_eq!(dst.len(), src_len);
assert!(dst.iter().enumerate().all(|(i, x)| i == *x));
});
}
#[bench]
fn bench_clone_0000(b: &mut Bencher) {
do_bench_clone(b, 0)
}
#[bench]
fn bench_clone_0010(b: &mut Bencher) {
do_bench_clone(b, 10)
}
#[bench]
fn bench_clone_0100(b: &mut Bencher) {
do_bench_clone(b, 100)
}
#[bench]
fn bench_clone_1000(b: &mut Bencher) {
do_bench_clone(b, 1000)
}
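// Benchmarks `clone_from`, which can reuse `dst`'s existing allocation
// instead of allocating afresh. `times` repeats the call inside a single
// measured iteration; `dst` starts with `dst_len` elements so the growing
// (dst < src), shrinking (dst > src), and equal-size cases are all covered
// by the wrappers below.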
fn do_bench_clone_from(b: &mut Bencher, times: usize, dst_len: usize, src_len: usize) {
let dst: Vec<_> = FromIterator::from_iter(0..dst_len);
let src: Vec<_> = FromIterator::from_iter(dst_len..dst_len + src_len);
b.bytes = (times * src_len) as u64;
b.iter(|| {
let mut dst = dst.clone();
for _ in 0..times {
dst.clone_from(&src);
assert_eq!(dst.len(), src_len);
assert!(dst.iter().enumerate().all(|(i, x)| dst_len + i == *x));
}
});
}
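// Wrapper names follow `bench_clone_from_TT_DDDD_SSSS`, encoding `times`,
// `dst_len`, and `src_len` in that order.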
#[bench]
fn bench_clone_from_01_0000_0000(b: &mut Bencher) {
do_bench_clone_from(b, 1, 0, 0)
}
#[bench]
fn bench_clone_from_01_0000_0010(b: &mut Bencher) {
do_bench_clone_from(b, 1, 0, 10)
}
#[bench]
fn bench_clone_from_01_0000_0100(b: &mut Bencher) {
do_bench_clone_from(b, 1, 0, 100)
}
#[bench]
fn bench_clone_from_01_0000_1000(b: &mut Bencher) {
do_bench_clone_from(b, 1, 0, 1000)
}
#[bench]
fn bench_clone_from_01_0010_0010(b: &mut Bencher) {
do_bench_clone_from(b, 1, 10, 10)
}
#[bench]
fn bench_clone_from_01_0100_0100(b: &mut Bencher) {
do_bench_clone_from(b, 1, 100, 100)
}
#[bench]
fn bench_clone_from_01_1000_1000(b: &mut Bencher) {
do_bench_clone_from(b, 1, 1000, 1000)
}
#[bench]
fn bench_clone_from_01_0010_0100(b: &mut Bencher) {
do_bench_clone_from(b, 1, 10, 100)
}
#[bench]
fn bench_clone_from_01_0100_1000(b: &mut Bencher) {
do_bench_clone_from(b, 1, 100, 1000)
}
#[bench]
fn bench_clone_from_01_0010_0000(b: &mut Bencher) {
do_bench_clone_from(b, 1, 10, 0)
}
#[bench]
fn bench_clone_from_01_0100_0010(b: &mut Bencher) {
do_bench_clone_from(b, 1, 100, 10)
}
#[bench]
fn bench_clone_from_01_1000_0100(b: &mut Bencher) {
do_bench_clone_from(b, 1, 1000, 100)
}
#[bench]
fn bench_clone_from_10_0000_0000(b: &mut Bencher) {
do_bench_clone_from(b, 10, 0, 0)
}
#[bench]
fn bench_clone_from_10_0000_0010(b: &mut Bencher) {
do_bench_clone_from(b, 10, 0, 10)
}
#[bench]
fn bench_clone_from_10_0000_0100(b: &mut Bencher) {
do_bench_clone_from(b, 10, 0, 100)
}
#[bench]
fn bench_clone_from_10_0000_1000(b: &mut Bencher) {
do_bench_clone_from(b, 10, 0, 1000)
}
#[bench]
fn bench_clone_from_10_0010_0010(b: &mut Bencher) {
do_bench_clone_from(b, 10, 10, 10)
}
#[bench]
fn bench_clone_from_10_0100_0100(b: &mut Bencher) {
do_bench_clone_from(b, 10, 100, 100)
}
#[bench]
fn bench_clone_from_10_1000_1000(b: &mut Bencher) {
do_bench_clone_from(b, 10, 1000, 1000)
}
#[bench]
fn bench_clone_from_10_0010_0100(b: &mut Bencher) {
do_bench_clone_from(b, 10, 10, 100)
}
#[bench]
fn bench_clone_from_10_0100_1000(b: &mut Bencher) {
do_bench_clone_from(b, 10, 100, 1000)
}
#[bench]
fn bench_clone_from_10_0010_0000(b: &mut Bencher) {
do_bench_clone_from(b, 10, 10, 0)
}
#[bench]
fn bench_clone_from_10_0100_0010(b: &mut Bencher) {
do_bench_clone_from(b, 10, 100, 10)
}
#[bench]
fn bench_clone_from_10_1000_0100(b: &mut Bencher) {
do_bench_clone_from(b, 10, 1000, 100)
}
}
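// NB: `Bencher` comes from the unstable `test` crate, so these benchmarks
// only build and run under a nightly bench harness.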