This is currently unsound since `bool` is represented as `i8`. It will
become sound when `bool` is stored as `i8` but always used as `i1`.
However, the current behaviour will always be identical to `x & 1 != 0`,
so there's no need for it. It's also surprising, since `x != 0` is the
expected behaviour.
Closes #7311
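For illustration, a minimal sketch of why the two lowerings disagree once the stored byte is anything other than 0 or 1 (a plain `u8` stands in for the byte here, since actually fabricating such a `bool` would be bogus):

```
fn main() {
    // A "bool" byte whose bit pattern is 2 -- something an i8 representation
    // does not rule out by itself.
    let byte: u8 = 2;
    assert_eq!((byte & 1) != 0, false); // the "low bit" lowering
    assert_eq!(byte != 0, true);        // the expected `x != 0` behaviour
}
```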
Current access methods are nestable and unsafe. This patch renames
current methods implementation - prepends unsafe_ - and implements 2 new
methods that are both safe and un-nestable.
Fixes #7473
The message of the first commit explains (edited for changed trait name):
The trait `ExactSize` is introduced to solve a few small niggles:
* We can't reverse (`.invert()`) an enumeration iterator
* For a vector, position from the front is `v.iter().position(f)`, but from the back we only have `v.rposition(f)`.
* We can't reverse `Zip` even if both iterators are from vectors
`ExactSize` is an empty trait that is intended to indicate that an
iterator, for example `VecIterator`, knows its exact finite size and
reports it correctly using `.size_hint()`. Only adaptors that preserve
this at all times can expose this trait further. (Here, "finite" means
fitting in uint.)
---
It may seem complicated just to solve these small "niggles"
(it's really the reversible enumerate case that's the most interesting),
but only a few core iterators need to implement this trait.
While we gain more capabilities generically for some iterators,
it becomes a tad more complicated to figure out if a type has
the right trait impls for it.
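As a rough sketch in today's terms (where the analogous trait is `ExactSizeIterator`, not this patch's `ExactSize`), knowing the exact length is precisely what makes reversing an enumeration meaningful:

```
fn main() {
    let v = [10, 20, 30];
    // Enumerate can only be reversed because the underlying iterator
    // reports its exact length; the indices stay correct from the back.
    let rev: Vec<(usize, &i32)> = v.iter().enumerate().rev().collect();
    assert_eq!(rev, vec![(2, &30), (1, &20), (0, &10)]);
}
```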
In this commit I:
- removed unneeded heap allocations
- added extra whitespace to crowded expressions
- and removed unneeded syntax
Also, CC @bblum although this change is fairly unobjectionable.
This is a generalization of the vector .rposition() method, to all
double-ended iterators that have the ExactSizeHint trait.
This resolves the slight asymmetry around `position` and `rposition`
* position from front is `vec.iter().position()`
* position from the back was `vec.rposition()`, and is now `vec.iter().rposition()`
Additionally, other indexed sequences (only `extra::ringbuf`, I think) will
have the same method available once they implement ExactSizeHint.
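A hedged sketch of the idea in current iterator terms (today's `DoubleEndedIterator`/`ExactSizeIterator` bounds stand in for the `ExactSizeHint` trait named above): walk from the back and turn the known length into a front index.

```
fn rposition<I, P>(mut iter: I, mut pred: P) -> Option<usize>
where
    I: DoubleEndedIterator + ExactSizeIterator,
    P: FnMut(I::Item) -> bool,
{
    // The exact length is what makes the reported index meaningful.
    let mut i = iter.len();
    while let Some(x) = iter.next_back() {
        i -= 1;
        if pred(x) {
            return Some(i);
        }
    }
    None
}

fn main() {
    let v = [1, 2, 3, 2];
    assert_eq!(rposition(v.iter(), |&x| x == 2), Some(3));
}
```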
Fix a bug where `s.slice_chars(a, b)` did not accept `a == s.len()`.
Fix a bug in `!=` defined for DList.
Also simplify NormalizationIterator to use the CharIterator directly instead of mimicking the iteration itself.
This removes the stacking of type parameters that occurs when invoking
trait methods, and fixes all places in the standard library that were
relying on it. It is somewhat awkward in places; I think we'll probably
want something like the `Foo::<for T>::new()` syntax.
`UnsafeAtomicRcBox` → `UnsafeArc` (#7674), and `AtomicRcBoxData` → `ArcData` to reflect this.
Also, the inner pointer of `UnsafeArc` is now `*mut ArcData`, which avoids some transmutes to `~`: i.e. less chance of mistakes.
As of now, rekillable is an unsafe function; instead, it should behave
just like unkillable by encapsulating its unsafe code within an unsafe
block.
This patch does that and removes unsafe blocks that were encapsulating
rekillable calls throughout rust's libs.
Fixes #8232
to_str, to_pretty_str, to_writer, and to_pretty_writer were at the top
level of extra::json, this moves them into an impl for Json to match
with what's been done for the rest of libextra and libstd (or at least for vec and str).
Also meant changing some tests.
Closes #8676.
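The shape of the change, sketched on a stand-in `Json` enum rather than the real `extra::json` type: module-level functions become methods in an `impl` block.

```
enum Json {
    Null,
    Boolean(bool),
}

impl Json {
    // Previously a free function at the top of the module: `to_str(json)`.
    fn to_str(&self) -> String {
        match *self {
            Json::Null => "null".to_string(),
            Json::Boolean(b) => b.to_string(),
        }
    }
}

fn main() {
    assert_eq!(Json::Null.to_str(), "null");
    assert_eq!(Json::Boolean(true).to_str(), "true");
}
```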
This makes it relatively easy for us to split testsuite load between machines in buildbot. I've added buildbot-side support for setting up builders with -a.b suffixes (eg. linux-64-opt-vg-0.5, linux-64-opt-vg-1.5, linux-64-opt-vg-2.5, linux-64-opt-vg-3.5, linux-64-opt-vg-4.5 causes the valgrind-supervised testsuite to split 5 ways across hosts).
This resolves issue #908.
Notable changes:
- On Windows, the LLVM integrated assembler emits bad stack unwind tables when segmented stacks are enabled. However, the unwind info directives in the assembly output are correct, so we generate assembly first and then run it through an external assembler, just as is already done for Android builds.
- The linker is now invoked via the "g++" command instead of "gcc": g++ passes the appropriate magic parameters to the linker, which ensures correct registration of stack unwind tables in dynamic libraries.
@thestinger and I talked about this in IRC. There are a couple of use
cases for a persistent map, but they aren't common enough to justify
inclusion in libextra and vary enough that they would require multiple
implementations anyways.
In any case, fun_treemap in its current state is basically useless.
I need `Clone` for `Tm` for my latest work on [rust-http](https://github.com/chris-morgan/rust-http) (static typing for headers, and headers like `Date` are a time), so here it is.
@huonw recommended deriving DeepClone while I was at it.
I also had to implement `DeepClone` for `~str` to get a derived implementation of `DeepClone` for `Tm`; I did `@str` while I was at it, for consistency.
An MD5 implementation was originally included in #8097, but, since there are a couple different implementations of that digest algorithm (@alco mentioned his implementation on the mailing list just before I opened that PR), it was suggested that I remove it from that PR and open up a new PR to discuss the different implementations and the best way forward. If anyone wants to discuss a different implementation, feel free to present it here and discuss and compare it to this one. I'll just discuss my implementation and I'll leave it to others to present details of theirs.
This implementation relies on the FixedBuffer struct from cryptoutil.rs for managing the input buffer, just like the Sha1 and Sha2 digest implementations do. I tried manually unrolling the loops in the compression function, but I got slightly worse performance when I did that.
Outside of the #[test]s, I also tested the implementation by generating 1,000 inputs of up to 10MB in size and checking the MD5 digest calculated by this code against the MD5 digest calculated by Java's implementation.
On my computer, I'm getting the following performance:
```
test md5::bench::md5_10 ... bench: 52 ns/iter (+/- 1) = 192 MB/s
test md5::bench::md5_1k ... bench: 2819 ns/iter (+/- 44) = 363 MB/s
test md5::bench::md5_64k ... bench: 178566 ns/iter (+/- 4927) = 367 MB/s
```
Addresses part of #7104
This module adds the ability to generate UUIDs (on all Rust-supported platforms).
I reviewed the existing UUID support in libraries for a range of languages: Go, D, C#, Java, and C++ (Boost). The features were all very similar, and this patch essentially covers the union. The implementation is quite straightforward, and uses the underlying rng support, which is assumed to be sufficiently strong for this purpose.
This patch is not complete, however I have put this up for review to gather feedback before finalising. It has tests for most features and documentation for most functions.
Outstanding issues:
* Only generates V4 (Random) UUIDs. Do we want to support the SHA-1 hash based flavour as well?
* Is it worth having the field-based struct public as well as the byte array?
* Formatting the string with '-' between groups is not done yet.
* Parsing the full string is not done, as there appears to be no regexp support yet. Should I write a simple manual parser for now?
* D has a generator as well. This would be easy to add. However, given the simple interface for creating a new one, and the presence of the macro, is this useful?
* Is it worth having a separate UUID trait and a specific implementation? Or should it just have a struct+impl with the same name? Currently it feels weird to have the trait (which can't be named UUID, so as not to conflict) as a separate thing.
* Should the macro be visible at the top level scope?
As this is a first attempt, some code may not be idiomatic. Please comment below...
Thanks for all feedback!
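For reference, a hedged sketch of what V4 (random) generation amounts to, independent of this module's actual API: 16 random bytes with the RFC 4122 version and variant bits patched in. The external `rand` crate (0.8-style API) is assumed here purely for illustration.

```
use rand::RngCore; // assumption: the external `rand` crate, 0.8-style API

fn new_v4_bytes<R: RngCore>(rng: &mut R) -> [u8; 16] {
    let mut bytes = [0u8; 16];
    rng.fill_bytes(&mut bytes);
    bytes[6] = (bytes[6] & 0x0f) | 0x40; // version 4 in the high nibble of byte 6
    bytes[8] = (bytes[8] & 0x3f) | 0x80; // RFC 4122 variant bits in byte 8
    bytes
}

fn main() {
    let bytes = new_v4_bytes(&mut rand::thread_rng());
    assert_eq!(bytes[6] >> 4, 4);    // version field
    assert_eq!(bytes[8] >> 6, 0b10); // variant field
}
```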
The shift_add_check_overflow and shift_add_check_overflow_tuple functions are
rewritten to be more efficient and to make use of the CheckedAdd intrinsic
instead of manually checking for integer overflow.
* The invocation of leading_zeros() is removed and replaced with a simple
integer comparison. The leading_zeros() method lowers to an LLVM ctlz
instruction, which may not be efficient on all architectures; integer
comparisons, however, are efficient on just about any architecture.
* The methods lose the ability for the caller to specify a particular shift
value - that functionality wasn't being used and removing it allows for the
code to be simplified.
* Finally, the methods are renamed to add_bytes_to_bits and
add_bytes_to_bits_tuple to reflect their very specific purposes.
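A rough sketch of the intent behind `add_bytes_to_bits` (illustrative signature, not the exact cryptoutil one): convert the byte count to bits and add it to the running total, failing loudly on overflow instead of silently wrapping.

```
fn add_bytes_to_bits(total_bits: u64, bytes: u64) -> u64 {
    // Both steps use checked arithmetic so an overlong message fails
    // immediately rather than producing a wrapped length.
    let new_bits = bytes.checked_mul(8).expect("message length overflow");
    total_bits.checked_add(new_bits).expect("message length overflow")
}

fn main() {
    assert_eq!(add_bytes_to_bits(8, 2), 24);
}
```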
- generate random UUIDs
- convert to and from strings and bytes
- parse common string formats
- implements Zero, Clone, FromStr, ToStr, Eq, TotalEq and Rand
- unit tests and documentation
- parsing error codes and strings
- incorporate feedback from PR review
If the iterator type parameter is on the trait itself, then it is extremely
annoying to use the trait as a generic bound on a function. For example, to
pass an Extendable<int> to a function that fills it either from a Range or
from a Map<VecIterator>, one needs to write something
like:
fn foo<E: Extendable<int, Range<int>> +
Extendable<int, Map<&'self int, int, VecIterator<int>>>
(e: &mut E, ...) { ... }
since using a generic, i.e. `foo<E: Extendable<int, I>, I: Iterator<int>>`
means that `foo` takes 2 type parameters, and the caller has to specify them
(which doesn't work anyway, as they'll mismatch with the iterators used in
`foo` itself).
This patch changes it to:
fn foo<E: Extendable<int>>(e: &mut E, ...) { ... }
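A sketch of the resulting shape in current syntax (the trait and method names here are illustrative, not the libstd ones): with the iterator type on the method, the container bound needs only one type parameter and the caller never names the iterator type at all.

```
trait Extendable<A> {
    fn extend_with<I: Iterator<Item = A>>(&mut self, iter: I);
}

impl<A> Extendable<A> for Vec<A> {
    fn extend_with<I: Iterator<Item = A>>(&mut self, iter: I) {
        for x in iter {
            self.push(x);
        }
    }
}

// The caller can feed the container from any iterator without naming its type.
fn foo<E: Extendable<i32>>(e: &mut E) {
    e.extend_with(0..3);
    e.extend_with([7, 8].iter().map(|&x| x * 10));
}

fn main() {
    let mut v = Vec::new();
    foo(&mut v);
    assert_eq!(v, vec![0, 1, 2, 70, 80]);
}
```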
.with_c_str() is a replacement for the old .as_c_str(), to avoid
unnecessary boilerplate.
Replace all usages of .to_c_str().with_ref() with .with_c_str().
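The old `std::c_str` API is gone, but the shape of the change can be sketched with today's `std::ffi::CString`: a single scoped call that builds the C string and hands the raw pointer to a closure, instead of `.to_c_str().with_ref(...)` boilerplate. The helper below is illustrative, not the original method.

```
use std::ffi::CString;
use std::os::raw::c_char;

// Illustrative helper: scoped access to a NUL-terminated buffer.
fn with_c_str<T, F: FnOnce(*const c_char) -> T>(s: &str, f: F) -> T {
    let c = CString::new(s).expect("interior NUL byte");
    f(c.as_ptr())
}

fn main() {
    let len = with_c_str("hello", |p| {
        // Count bytes up to the trailing NUL via the raw pointer.
        let mut n = 0;
        unsafe {
            while *p.offset(n) != 0 {
                n += 1;
            }
        }
        n
    });
    assert_eq!(len, 5);
}
```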
Use Eq + Ord for lexicographical ordering of sequences.
For each of <, <=, >= or > as R, use::
[x, ..xs] R [y, ..ys] = if x != y { x R y } else { xs R ys }
Previous code using `a < b` and then `!(b < a)` for short-circuiting
fails on cases such as [1.0, 2.0] < [0.0/0.0, 3.0], where the first
element was effectively considered equal.
Containers like &[T] also implemented only one comparison operator, `<`,
and derived the other comparison results from it. This isn't correct for
Ord either.
Implement functions in `std::iterator::order::{lt,le,gt,ge,equal,cmp}` that all
iterable containers can use for lexical order.
We also revisit tuple ordering, which has the same problem and the same solution
(but a differing implementation).
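A hedged sketch of the `lt` rule quoted above, over plain iterators (modern iterator syntax; not the actual `std::iterator::order` code). The point is that each unequal pair is compared directly, rather than deriving the answer from a single `<` operator:

```
fn lt<A, I, J>(mut a: I, mut b: J) -> bool
where
    A: PartialOrd,
    I: Iterator<Item = A>,
    J: Iterator<Item = A>,
{
    loop {
        match (a.next(), b.next()) {
            (None, None) => return false,   // equal length, all elements equal
            (None, Some(_)) => return true, // shorter prefix sorts first
            (Some(_), None) => return false,
            (Some(x), Some(y)) => {
                if x != y {
                    return x < y;           // first unequal pair decides
                }
            }
        }
    }
}

fn main() {
    let nan = f64::NAN;
    // Matches the example above: 1.0 != NaN, and 1.0 < NaN is false.
    assert!(!lt([1.0, 2.0].iter(), [nan, 3.0].iter()));
}
```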
I'm a bit disappointed that I couldn't figure out how to factor out more of the code implementing `extra::sync` but I feel this is an okay start. Also I added some documentation explaining that `WaitQueue` isn't thread safe, and needs an exclusive lock.
@bblum
This PR fixes #7235 and #3371, removing trailing nulls from `str` types. Instead, it replaces the creation of C strings with a new type, `std::c_str::CString`, which wraps a malloced byte array and respects the following invariants (sketched after the list):
* No interior nulls
* Ends with a trailing null
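A minimal sketch of those two invariants (illustrative code, not the `std::c_str::CString` implementation): reject interior nulls and append exactly one trailing null.

```
fn to_c_bytes(s: &str) -> Result<Vec<u8>, usize> {
    // An interior NUL would silently truncate the string on the C side.
    if let Some(pos) = s.bytes().position(|b| b == 0) {
        return Err(pos);
    }
    let mut buf = s.as_bytes().to_vec();
    buf.push(0); // single trailing NUL
    Ok(buf)
}

fn main() {
    assert_eq!(to_c_bytes("hi"), Ok(vec![b'h', b'i', 0]));
    assert_eq!(to_c_bytes("h\0i"), Err(1));
}
```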
Basically, generic containers should not use the default methods, since the
element type may not guarantee a total order. str can use them,
since u8's Ord guarantees a total order. Floating point numbers are also
broken with the default methods because of NaN. Thanks to @thestinger.
Timespec also guarantees a total order, AIUI. I'm unsure whether
extra::semver::Identifier does, so I left it alone. Proof needed.
Signed-off-by: OGINO Masanori <masanori.ogino@gmail.com>
This is a fairly large rollup, but I've tested everything locally, and none of
it should be platform-specific.
r=alexcrichton (bdfdbdd)
r=brson (d803c18)
r=alexcrichton (a5041d0)
r=bstrie (317412a)
r=alexcrichton (135c85e)
r=thestinger (8805baa)
r=pcwalton (0661178)
r=cmr (9397fe0)
r=cmr (caa4135)
r=cmr (6a21d93)
r=cmr (4dc3379)
r=cmr (0aa5154)
r=cmr (18be261)
r=thestinger (f10be03)
Write external iterators for Difference, Sym. Difference, Intersection
and Union set operations.
These iterators are generic insofar as they could work on any ordered
sequence iterators, even though they are specialized to
TreeSetIterator in this case.
Looking at the `check` function in the treeset tests, rustc seems
unwilling to compile a function resembling::
fn check<'a, T: Iterator<&'a int>>(... )
so the tests for these iterators are still running the legacy loop
protocol.
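A generic sketch (modern iterators, illustrative names; not the TreeSet code itself) of the merge-style walk such an external Intersection iterator performs over two sorted streams. Union, Difference and SymmetricDifference differ only in which arms emit an element.

```
use std::cmp::Ordering::{Equal, Greater, Less};

fn intersection<T, I, J>(a: I, b: J) -> Vec<T>
where
    T: Ord,
    I: Iterator<Item = T>,
    J: Iterator<Item = T>,
{
    let (mut a, mut b) = (a.peekable(), b.peekable());
    let mut out = Vec::new();
    while let (Some(x), Some(y)) = (a.peek(), b.peek()) {
        match x.cmp(y) {
            Less => { a.next(); }    // only in `a`: skip
            Greater => { b.next(); } // only in `b`: skip
            Equal => {               // in both: emit once
                out.push(a.next().unwrap());
                b.next();
            }
        }
    }
    out
}

fn main() {
    assert_eq!(
        intersection([1, 3, 5, 7].into_iter(), [3, 4, 5].into_iter()),
        vec![3, 5]
    );
}
```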
According to #7887, we've decided to use the syntax of `fn map<U>(f: &fn(&T) -> U) -> U`, which passes a reference to the closure, and to `fn map_move<U>(f: &fn(T) -> U) -> U` which moves the value into the closure. This PR adds these `.map_move()` functions to `Option` and `Result`.
In addition, it has these other minor features:
* Replaces a couple of uses of `option.get()`, `result.get()`, and `result.get_err()` with `option.unwrap()`, `result.unwrap()`, and `result.unwrap_err()`. (See #8268 and #8288 for a more thorough adaptation of this functionality.)
* Removes `option.take_map()` and `option.take_map_default()`. These two functions can be easily written as `.take().map_move(...)`.
* Adds a better error message to `result.unwrap()` and `result.unwrap_err()`.
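In today's terms the distinction looks like this (a sketch only; modern `Option` has just the by-value `map`, which corresponds to `map_move` here):

```
fn main() {
    let name: Option<String> = Some("rust".to_string());

    // "map" in the sense above: the closure sees a borrow, the Option survives.
    let len = name.as_ref().map(|s| s.len());

    // "map_move" in the sense above: the String moves into the closure,
    // consuming the Option.
    let shouted = name.map(|s| s.to_uppercase());

    assert_eq!(len, Some(4));
    assert_eq!(shouted, Some("RUST".to_string()));
}
```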
This module provided adaptors for the old internal iterator protocol,
but they proved to be quite unreadable and are not generic enough to
handle borrowed pointers well.
Since Rust no longer defines an internal iteration protocol, I don't
think there's going to be any reuse via these adaptors.
Encoding should really only be done between [u8] and str. The extra
convenience implementations don't really have a place, especially since
they're so trivial.
Also improved error messages in FromBase64.
The overhead of str::push_char is high enough to cripple the performance
of these two functions. I've switched them to build the output in a
~[u8] and then convert to a string later. Since we know exactly the
bytes going into the vector, we can use the unsafe version to avoid the
is_utf8 check.
I could have riced it further with vec::raw::get, but it only added
~10MB/s so I didn't think it was worth it. ToHex is still ~30% slower
than FromHex, which is puzzling.
Before:
```
test base64::test::from_base64 ... bench: 1000 ns/iter (+/- 349) = 204 MB/s
test base64::test::to_base64 ... bench: 2390 ns/iter (+/- 1130) = 63 MB/s
...
test hex::tests::bench_from_hex ... bench: 884 ns/iter (+/- 220) = 341 MB/s
test hex::tests::bench_to_hex ... bench: 2453 ns/iter (+/- 919) = 61 MB/s
```
After:
```
test base64::test::from_base64 ... bench: 1271 ns/iter (+/- 600) = 160 MB/s
test base64::test::to_base64 ... bench: 759 ns/iter (+/- 286) = 198 MB/s
...
test hex::tests::bench_from_hex ... bench: 875 ns/iter (+/- 377) = 345 MB/s
test hex::tests::bench_to_hex ... bench: 593 ns/iter (+/- 240) = 254 MB/s
```
FromHex ignores whitespace and parses either upper or lower case hex
digits. ToHex outputs lower case hex digits with no whitespace. Unlike
ToBase64, ToHex doesn't allow you to configure the output format. I
don't feel that it's super useful in this case.
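A sketch of the technique described above (illustrative code, not the libextra implementation): push known-ASCII bytes into a byte vector and convert to a string once at the end. The checked conversion is shown here; the commit uses the unchecked one because every pushed byte is already known to be valid UTF-8.

```
const HEX_CHARS: &[u8; 16] = b"0123456789abcdef";

fn to_hex(data: &[u8]) -> String {
    let mut out = Vec::with_capacity(data.len() * 2);
    for &byte in data {
        out.push(HEX_CHARS[(byte >> 4) as usize]);
        out.push(HEX_CHARS[(byte & 0x0f) as usize]);
    }
    // Every byte pushed above is ASCII, so this check can never fail.
    String::from_utf8(out).unwrap()
}

fn main() {
    assert_eq!(to_hex(&[0x0f, 0xa0]), "0fa0");
}
```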
- Made naming schemes consistent between Option, Result and Either
- Changed Option's Add implementation to work like the Maybe monad (return None if any of the inputs is None)
- Removed duplicate Option::get and renamed all related functions to use the term `unwrap` instead
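A sketch of that Add behaviour, shown on a local wrapper type (a std type can't be re-implemented from outside, so `Maybe` here is purely illustrative): any None input makes the whole sum None.

```
use std::ops::Add;

#[derive(Debug, PartialEq)]
struct Maybe(Option<i32>);

impl Add for Maybe {
    type Output = Maybe;
    fn add(self, rhs: Maybe) -> Maybe {
        // None is "missing", and anything added to missing is missing.
        Maybe(match (self.0, rhs.0) {
            (Some(a), Some(b)) => Some(a + b),
            _ => None,
        })
    }
}

fn main() {
    assert_eq!(Maybe(Some(1)) + Maybe(Some(2)), Maybe(Some(3)));
    assert_eq!(Maybe(Some(1)) + Maybe(None), Maybe(None));
}
```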
It seems that relatively new code uses `Foo::new()` instead of `Foo()`, so I wrote a patch to migrate some structs to the former style.
Is this the right direction? If there are guidelines against the new()-style, could you add them to the [style guide](https://github.com/omasanori/rust/wiki/Note-style-guide)?
The compiler-generated dtor for DList recurses deeply to drop Nodes.
For big lists this can overflow the stack.
This is a problem for the new scheduler, where split stacks are not implemented.
Thanks @blake2-ppc
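The usual shape of such a fix, sketched on a simplified singly linked list (not the actual DList code): an explicit Drop that detaches nodes in a loop, so dropping a long list never recurses.

```
struct Node {
    _value: i32,
    next: Option<Box<Node>>,
}

struct List {
    head: Option<Box<Node>>,
}

impl Drop for List {
    fn drop(&mut self) {
        // Take each node's `next` before the node itself is dropped,
        // so there is no chain of nested drops to overflow the stack.
        let mut cur = self.head.take();
        while let Some(mut node) = cur {
            cur = node.next.take();
        }
    }
}

fn main() {
    let mut list = List { head: None };
    for i in 0..100_000 {
        list.head = Some(Box::new(Node { _value: i, next: list.head.take() }));
    }
    // `list` is dropped here via the iterative Drop, without deep recursion.
}
```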
With this patch, `RUSTFLAGS='--cfg unicode' make check` passed successfully.
* Why doesn't `#[link_name="icuuc"]` make libextra link against libicuuc.so?
* In `extra::unicode::tests`, `use unicode; unicode::is_foo('a')` failed but `use unicode::*; is_foo('a')` succeeded. Is that right?
RWArc had a clone() method, but it was part of impl RWArc instead of
an implementation of Clone.
Stick with the explicit implementation instead of deriving Clone so we
can have a docstring.
Fixes #8052.
Added functions to cryptoutil.rs that perform an addition after shifting
the 2nd parameter by a specified constant. These functions fail!() if integer
overflow would result. Updated the Sha2 implementation to use these functions.
Create a helper function in cryptoutil.rs which feeds 1,000,000 'a's into
a Digest with varying input sizes and then checks the result. This is
essentially the same as one of Sha1's existing tests, so that test was
re-implemented using this method. New tests were added using this method for
Sha512 and Sha256.
The Sha2 compression functions were re-written to execute the message
scheduling calculations in the same loop as the rest of the compression
function. The compiler is able to generate much better code. Additionally, the
innermost part of the compression functions was turned into macros to
reduce code duplication and to make the functions more concise.
There are 2 main pieces of functionality in cryptoutil.rs:
* A set of unsafe functions for efficiently reading and writing u32 and u64
values. All of these functions are fairly easy to audit to confirm that
they do what they are supposed to.
* A FixedBuffer struct. This struct keeps track of input data until there
is enough of it to execute a function that expects a fixed
block of data (see the sketch after this list).
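A simplified sketch of the FixedBuffer idea (illustrative only, fixed at a 64-byte block; not the cryptoutil code): accumulate input and hand each complete block to a compression callback.

```
struct FixedBuffer64 {
    buf: [u8; 64],
    len: usize,
}

impl FixedBuffer64 {
    fn new() -> FixedBuffer64 {
        FixedBuffer64 { buf: [0; 64], len: 0 }
    }

    // Feed arbitrary-length input; `compress` is called once per full block.
    fn input<F: FnMut(&[u8; 64])>(&mut self, mut data: &[u8], mut compress: F) {
        while !data.is_empty() {
            let take = (64 - self.len).min(data.len());
            self.buf[self.len..self.len + take].copy_from_slice(&data[..take]);
            self.len += take;
            data = &data[take..];
            if self.len == 64 {
                compress(&self.buf);
                self.len = 0;
            }
        }
    }
}

fn main() {
    let mut blocks = 0;
    let mut buf = FixedBuffer64::new();
    buf.input(&[0u8; 150], |_block| blocks += 1);
    assert_eq!(blocks, 2); // 150 bytes = 2 full blocks + 22 buffered bytes
}
```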
The Sha2 module was rewritten to take advantage of the new functions in
cryptoutil as well as FixedBuffer. The result is that the duplicate code
for maintaining a buffer of input data is removed from the Sha512 and
Sha256 implementations. Additionally, the FixedBuffer code is much more
efficient than the previous code was.
The result_X() methods just calculate an output of a fixed size. They don't
really have much to do with running the actual hash algorithm until the very
last step - the output. It makes much more sense to put all this logic into
the Digest impls for each specific variation on the hash function.
The code was arranged so that the core Sha2 code came first, and then
all of the various implementations of Digest followed along later. The
problem is that the Sha512 compression function code is far away from
the Sha512 Digest implementation, so, if you are trying to read over
the code, you need to scroll all around the file for no good reason. The
code was rearranged so that all of the Sha512 code is in one place and
all of the Sha256 code is in another and so that all impls for a struct
are near the definition of that struct.
Change the former repetition::
for 5.times { }
to::
do 5.times { }
.times() cannot be broken with `break` or `return` anymore; for those
cases, use a numerical range loop instead.
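For reference, the numerical range loop form (modern syntax), which does support `break` and `return`:

```
fn main() {
    let mut n = 0;
    for i in 0..5 {
        if i == 3 {
            break; // something `.times()` can no longer do
        }
        n += 1;
    }
    assert_eq!(n, 3);
}
```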
.intersection(), .union() etc methods in trait std::container::Set use
internal iters. Remove these methods from the trait.
I reported issue #8154 for the reinstatement of iterator-based set algebra
methods to the Set trait.
For bitv and treemap, which lack Iterator implementations of the set
operations, preserve them as methods directly on the types themselves.
For HashSet, these methods are replaced by the present .union_iter()
etc.
This removes a bunch of options from the task builder interface that are irrelevant to the new scheduler and were generally unused anyway. It also bumps the stack size of new scheduler tasks so that there's enough room to run rustc and changes the interface to `Thread` to not implicitly join threads on destruction, but instead require an explicit, and mandatory, call to `join`.
Main logic is in `Implement select() for new runtime pipes.`. The guts of the `PortOne::try_recv()` implementation are now split up across several functions: `optimistic_check`, `block_on`, and `recv_ready`.
There is one weird FIXME I left open here, in the "implement select" commit -- an assertion I couldn't get to work in the receive path, on an invariant that for some reason doesn't hold with `SharedPort`. Still investigating this.
Renamed bytes_iter to byte_iter to match other iterators
Refactored str Iterators to use DoubleEnded Iterators and typedefs instead of wrapper structs
Reordered the Iterator section
Whitespace fixup
Moved clunky `each_split_within` function to the one place in the tree where it's actually needed
Replaced all block doccomments in str with line doccomments
Drop the "Iterator" suffix for the the structs in std::iterator.
Filter, Zip, Chain, etc. are shorter type names for when iterator
pipelines need their types written out in full in return value types,
making them easier to read and write. The iterator module already provides
enough of a namespace.
Good evening,
This is a superset of @MaikKlein's #7969 commit, which I've fixed up to compile. I had a couple of commits I wanted to do on top of @MaikKlein's work that I didn't want to bitrot.
To be more specific:
`UPPERCASETYPE` was changed to `UppercaseType`
`type_new` was changed to `Type::new`
`type_function(value)` was changed to `value.method()`
With the recent fixes to method resolution, we can now remove the
dummy type parameters used as crutches in the iterator module.
For example, the zip adaptor type is just ZipIterator<T, U> now.
This is a cleanup pull request that does:
* removes `os::as_c_charp`
* moves `str::as_buf` and `str::as_c_str` into `StrSlice`
* converts some functions from `StrSlice::as_buf` to `StrSlice::as_c_str`
* renames `StrSlice::as_buf` to `StrSlice::as_imm_buf` (and adds `StrSlice::as_mut_buf` to match `vec.rs`).
* renames `UniqueStr::as_bytes_with_null_consume` to `UniqueStr::to_bytes`
* and other misc cleanups and minor optimizations
Factor out internal methods to pop/push list nodes so that .merge() and the new .rotate_to_front() and .rotate_to_back() methods can be implemented without allocating nodes.
With that, some cleanup of DList's use of Option, and a missing Encodable implementation is added.
SmallIntSet is equivalent to BitvSet but with 64 times the memory
overhead. There's no reason for it to exist.
SmallIntSet's overhead should really only be 8 times, but for some
reason, `sys::size_of::<Option<()>>() == 8`, not 1.
Switched Bitv and BitvSet to external iterators. They still use some internal iterators internally (ha).
Derived clone for all Bitv types.
Removed indirection in BitvVariant. It previously held a unique pointer to the appropriate Bitv struct, even though those structs are the size of a pointer themselves. BitvVariant is the same size (16 bytes) as it was previously.
.peek_next() needs to check the element counter just like .next()
and .next_back() do.
Also clarify the .insert_next() doc w.r.t. double-ended iteration.