to_str, to_pretty_str, to_writer, and to_pretty_writer were at the top
level of extra::json; this moves them into an impl for Json to match
what's been done for the rest of libextra and libstd (or at least for vec and str).
This also meant changing some tests.
Closes #8676.
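For illustration, a minimal sketch of what a call site looks like after the move, assuming the moved methods keep the old free functions' signatures (the exact method names on Json are those being moved; the `render` helper is hypothetical):

```
// Hedged sketch (2013-era syntax); exact signatures may differ slightly.
use extra::json::Json;

// previously: extra::json::to_pretty_str(value)
fn render(value: &Json) -> ~str {
    value.to_pretty_str()
}
```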
This makes it relatively easy for us to split testsuite load between machines in buildbot. I've added buildbot-side support for setting up builders with -a.b suffixes (e.g. linux-64-opt-vg-0.5, linux-64-opt-vg-1.5, linux-64-opt-vg-2.5, linux-64-opt-vg-3.5, and linux-64-opt-vg-4.5 cause the valgrind-supervised testsuite to split 5 ways across hosts).
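Purely as an illustration of how an `a.b` suffix divides the run (this is not the buildbot or test-runner code; the helper is hypothetical):

```
// Hypothetical sketch: a builder with suffix "a.b" keeps only the tests whose
// index modulo b equals a, so b builders together cover the suite exactly once.
fn in_shard(test_index: uint, shard: uint, total_shards: uint) -> bool {
    test_index % total_shards == shard
}
```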
This resolves issue #908.
Notable changes:
- On Windows, the LLVM integrated assembler emits bad stack unwind tables when segmented stacks are enabled. However, the unwind info directives in the assembly output are correct, so we generate assembly first and then run it through an external assembler, just as is already done for Android builds.
- The linker is invoked via the "g++" command instead of "gcc": g++ passes the appropriate magic parameters to the linker, which ensure correct registration of stack unwind tables in dynamic libraries.
@thestinger and I talked about this in IRC. There are a couple of use
cases for a persistent map, but they aren't common enough to justify
inclusion in libextra and vary enough that they would require multiple
implementations anyway.
In any case, fun_treemap in its current state is basically useless.
I need `Clone` for `Tm` for my latest work on [rust-http](https://github.com/chris-morgan/rust-http) (static typing for headers, where headers like `Date` carry a time), so here it is.
@huonw recommended deriving DeepClone while I was at it.
I also had to implement `DeepClone` for `~str` to get a derived implementation of `DeepClone` for `Tm`; I did `@str` while I was at it, for consistency.
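For illustration, the deriving usage on a hypothetical struct (Tm itself has many more fields); the `~str` field is exactly what needed the new `DeepClone` impl:

```
// Hypothetical example; not the real Tm definition.
#[deriving(Clone, DeepClone)]
struct Timestamp {
    seconds: i64,
    zone: ~str,   // deriving DeepClone here relies on DeepClone for ~str
}
```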
An MD5 implementation was originally included in #8097, but since there are a couple of different implementations of that digest algorithm (@alco mentioned his implementation on the mailing list just before I opened that PR), it was suggested that I remove it from that PR and open a new one to discuss the different implementations and the best way forward. If anyone wants to discuss a different implementation, feel free to present it here and compare it to this one. I'll just describe my implementation and leave it to others to present the details of theirs.
This implementation relies on the FixedBuffer struct from cryptoutil.rs for managing the input buffer, just like the Sha1 and Sha2 digest implementations do. I tried manually unrolling the loops in the compression function, but I got slightly worse performance when I did that.
Outside of the #[test]s, I also tested the implementation by generating 1,000 inputs of up to 10MB in size and checking the MD5 digest calculated by this code against the MD5 digest calculated by Java's implementation.
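A hedged usage sketch, assuming this Md5 type is driven through the same Digest interface (input_str/result_str helpers) as the existing Sha1 and Sha2 implementations; the `md5_hex` helper and the use statements it would need are assumptions:

```
// Sketch only; module paths and helper names are assumptions, mirroring the
// Sha1/Sha2 test style.
fn md5_hex(input: &str) -> ~str {
    let mut md5 = Md5::new();
    md5.input_str(input);
    md5.result_str()   // e.g. md5_hex("abc") == ~"900150983cd24fb0d6963f7d28e17f72"
}
```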
On my computer, I'm getting the following performance:
```
test md5::bench::md5_10 ... bench: 52 ns/iter (+/- 1) = 192 MB/s
test md5::bench::md5_1k ... bench: 2819 ns/iter (+/- 44) = 363 MB/s
test md5::bench::md5_64k ... bench: 178566 ns/iter (+/- 4927) = 367 MB/s
```
Addresses part of #7104
This module adds the ability to generate UUIDs (on all Rust-supported platforms).
I reviewed the existing UUID support in libraries for a range of languages: Go, D, C#, Java, and C++ (Boost). The features were all very similar, and this patch essentially covers their union. The implementation is quite straightforward and uses the underlying RNG support, which is assumed to be sufficiently strong for this purpose.
This patch is not complete; however, I have put it up for review to gather feedback before finalising. It has tests for most features and documentation for most functions.
Outstanding issues:
* Only generates V4 (Random) UUIDs. Do we want to support the SHA-1 hash-based flavour as well?
* Is it worth having the field-based struct public as well as the byte array? (A sketch of both layouts follows below.)
* Formatting the string with '-' between groups is not done yet.
* Parsing the full string is not done, as there appears to be no regexp support yet. Should I write a simple manual parser for now?
* D has a generator as well. This would be easy to add. However, given the simple interface for creating a new one, and the presence of the macro, is this useful?
* Is it worth having a separate UUID trait and a specific implementation? Or should it just be a struct+impl with the same name? Currently it feels weird to have the trait (which can't be named UUID, to avoid a conflict) as a separate thing.
* Should the macro be visible at the top level scope?
As this is a first attempt, some code may not be idiomatic. Please comment below...
Thanks for all feedback!
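To make the field-based-struct question above concrete, here is a purely illustrative sketch of the two layouts being weighed (type and field names are hypothetical, not the proposed public API; the field form follows RFC 4122):

```
// Hypothetical sketches only.
struct UuidBytes {
    bytes: [u8, ..16],              // opaque 16-byte representation
}

struct UuidFields {
    time_low: u32,                  // RFC 4122 field layout
    time_mid: u16,
    time_hi_and_version: u16,
    clock_seq_hi_and_reserved: u8,
    clock_seq_low: u8,
    node: [u8, ..6],
}
```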
The shift_add_check_overflow and shift_add_check_overflow_tuple functions are
rewritten to be more efficient and to make use of the CheckedAdd intrinsic
instead of manually checking for integer overflow. (A sketch of the new
approach follows the list below.)
* The invocation of leading_zeros() is removed and replaced with a simple
integer comparison. The leading_zeros() method results in a ctlz LLVM
intrinsic, which may not be efficient on all architectures; integer
comparisons, however, are efficient on just about any architecture.
* The methods lose the ability for the caller to specify a particular shift
value - that functionality wasn't being used, and removing it allows the
code to be simplified.
* Finally, the methods are renamed to add_bytes_to_bits and
add_bytes_to_bits_tuple to reflect their very specific purposes.
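As noted above, a minimal sketch of the new approach (not the exact libextra code; it assumes the CheckedAdd trait's `checked_add(&self, &Self) -> Option<Self>`):

```
// Sketch: convert a byte count to a bit count and add it to an existing bit
// count, failing on overflow instead of silently wrapping.
fn add_bytes_to_bits(bits: u64, bytes: u64) -> u64 {
    // bytes * 8 would overflow iff any of the top 3 bits of `bytes` are set;
    // a plain comparison replaces the old leading_zeros() check.
    if bytes > 0x1fffffffffffffff {
        fail!("numeric overflow occurred");
    }
    match bits.checked_add(&(bytes * 8)) {
        Some(sum) => sum,
        None => fail!("numeric overflow occurred")
    }
}
```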
- generate random UUIDs
- convert to and from strings and bytes
- parse common string formats
- implements Zero, Clone, FromStr, ToStr, Eq, TotalEq and Rand
- unit tests and documentation
- parsing error codes and strings
- incorporate feedback from PR review
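A hedged usage sketch of the listed features (the constructor name and module path are assumptions; the rest follows from the listed traits):

```
// Hypothetical usage, not necessarily the final API.
fn demo() {
    let id = Uuid::new_v4();   // random (version 4) UUID; constructor name assumed
    let copy = id.clone();     // Clone
    assert!(id == copy);       // Eq
    let _text = id.to_str();   // ToStr; FromStr covers the reverse direction
}
```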
If they are on the trait then it is extremely annoying to use them as
generic parameters to a function, e.g. with the iterator parameter on the
trait itself, if one were to pass an Extendable<int> to a function that
filled it either from a Range or a Map<VecIterator>, one would need to
write something like:
```
fn foo<E: Extendable<int, Range<int>> +
          Extendable<int, Map<&'self int, int, VecIterator<int>>>>
      (e: &mut E, ...) { ... }
```
since using a generic, i.e. `foo<E: Extendable<int, I>, I: Iterator<int>>`
means that `foo` takes 2 type parameters, and the caller has to specify them
(which doesn't work anyway, as they'll mismatch with the iterators used in
`foo` itself).
This patch changes it to:
```
fn foo<E: Extendable<int>>(e: &mut E, ...) { ... }
```
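To spell out the shape of the change, a hedged sketch of the two trait forms (the trait names here are illustrative placeholders, not the real libstd definitions):

```
// Before (illustrative): the iterator type is a parameter of the trait itself,
// so every concrete iterator needs its own bound at use sites.
trait ExtendableOld<A, I: Iterator<A>> {
    fn extend(&mut self, iter: &mut I);
}

// After (illustrative): the iterator type is a parameter of the method only,
// so `E: Extendable<int>` covers Range, Map, and any other Iterator<int>.
trait ExtendableNew<A> {
    fn extend<I: Iterator<A>>(&mut self, iter: &mut I);
}
```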
.with_c_str() is a replacement for the old .as_c_str(), to avoid
unnecessary boilerplate.
Replace all usages of .to_c_str().with_ref() with .with_c_str().
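For example, a sketch of the call-site change (the `first_byte` helper and its closure body are purely illustrative):

```
use std::c_str::ToCStr;
use std::libc::c_char;

fn first_byte(s: &str) -> c_char {
    // before: s.to_c_str().with_ref(|p| unsafe { *p })
    // after:
    s.with_c_str(|p| unsafe { *p })
}
```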