This is a follow-up to #7510. @catamorphism requested a test, so I have created one; in doing so I noticed some inconsistency in the error messages resulting from referencing nonexistent traits, so I changed the messages to be more consistent.
Change the signature of Iterator.size_hint() to always have a lower bound.
Implement .size_hint() on all remaining iterators (if it differs from the default).
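A minimal sketch of what the changed signature looks like at an implementation site, written in modern syntax with a hypothetical `Countdown` iterator (not code from this patch): the lower bound is a plain integer rather than optional, and only the upper bound remains an `Option`.

```rust
struct Countdown {
    remaining: usize,
}

impl Iterator for Countdown {
    type Item = usize;

    fn next(&mut self) -> Option<usize> {
        if self.remaining == 0 {
            None
        } else {
            self.remaining -= 1;
            Some(self.remaining)
        }
    }

    // The exact length is known here, so both bounds are the same value;
    // the lower bound is always present, only the upper bound is optional.
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.remaining, Some(self.remaining))
    }
}
```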
It's broken/unmaintained and needs to be rewritten to avoid managed
pointers and needless copies. A full rewrite is necessary and the API
will need to be redone, so it's not worth keeping this around.
Closes #2236, #2744
Fix an assertion in grow when using add_front.
Speeds up grow (and the functions that use it) by a lot. We retain the vector instead of creating it anew for each grow.
The struct field hi is removed since it is redundant, and all iterators are updated to use a representation closer to the Deque itself; this should make them work even if deque sizes are not powers of two.
Deque::with_capacity is also implemented, but .reserve() is not yet done.
They simply byte-swap an integer to a specific endian, like the hton* functions in C.
These intrinsics are synthesized, so maybe they should be in another file. But since they are just a single line of code each, are based on the bswap intrinsics, and aren't really intended for public consumption, I thought they would fit in the intrinsics file.
The next step in working on this could be to expose a trait / generic function for byteswapping.
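A rough sketch of what such a trait could look like; the trait and method names here are illustrative only, and the bodies lean on the standard `to_be`/`to_le` methods, which behave like C's `htonl`/`ntohl`: byte-swap on little-endian hosts, no-op on big-endian ones.

```rust
// Hypothetical trait for the follow-up mentioned above.
trait ByteOrder {
    fn to_big_endian(self) -> Self;
    fn to_little_endian(self) -> Self;
}

impl ByteOrder for u32 {
    fn to_big_endian(self) -> u32 {
        // Swaps bytes on little-endian targets, identity on big-endian ones.
        self.to_be()
    }
    fn to_little_endian(self) -> u32 {
        self.to_le()
    }
}
```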
Adding iterators for extra::smallintmap
Working on mutability error
Ran into ICE
More mutability problems
Working through mutability issue
Working on getting tests passing
SmallIntMap tests passing
Added SmallIntSet iterators, and the tests are passing
Stripped trailing spaces
Removed extra use directive
Since ' ' is by far one of the most common characters, it is worthwhile
to put it first, and short-circuit the rest of the function.
On the same JSON benchmark as the json_perf improvement (reading example.json
10 times from https://code.google.com/p/rapidjson/wiki/Performance):
Before: 0.16s
After: 0.11s
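A minimal sketch of the short-circuit (the function name is illustrative, not the json.rs code): `||` stops at the first match, so the overwhelmingly common ' ' is tested before the rarer whitespace characters.

```rust
fn is_json_whitespace(c: char) -> bool {
    // ' ' first: most whitespace in typical JSON is plain spaces.
    c == ' ' || c == '\n' || c == '\t' || c == '\r'
}
```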
The deque is split at the marker lo, or logical index 0. Move the
shortest part (split by lo) when growing. This way add_front is just as
fast as add_back, on average.
The previous implementation of reverse iterators used modulus (%) of
negative indices, which did work but was fragile due to dependency on
the divisor.
The deque is determined by the vec self.elts (via self.elts.len()), self.nelts, and self.lo;
self.hi is calculated from these.
self.hi is just the raw index of element number `self.nelts`.
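A sketch of the arithmetic described above (names are assumptions, not the actual deque code): hi is no longer stored, it is just the raw buffer index of logical element `nelts`, derived from lo and the buffer length.

```rust
fn raw_index(lo: usize, buf_len: usize, i: usize) -> usize {
    if lo + i < buf_len {
        lo + i
    } else {
        // Wrap around the end of the buffer.
        lo + i - buf_len
    }
}
// self.hi would then be raw_index(self.lo, self.elts.len(), self.nelts).
```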
Fix some issues with the deque being very slow: keep the same vec around
instead of constructing a new one, and move as few elements as possible so
that the self.lo point is not moved after grow.
[o o o o o|o o o]
hi...^ ^.... lo
grows to
[. . . . .|o o o o o o o o|. . .]
^.. lo ^.. hi
If the deque is append-only, it will result in moving no elements on
grow. If the deque is prepend-only, all will be moved each time.
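A rough sketch of the strategy in the diagram above (field names and the element bounds are assumptions, not the actual extra::deque code): double the buffer, then copy the wrapped-around prefix [0, lo) to just past the old buffer, so lo never moves and an append-only deque copies nothing on grow.

```rust
fn grow<T: Copy + Default>(buf: &mut Vec<T>, lo: usize) {
    // The deque is full: elements occupy [lo, old_len) followed by [0, lo).
    let old_len = buf.len();
    buf.resize(old_len * 2, T::default());
    // Move the wrapped prefix so the elements become contiguous at lo;
    // when lo == 0 (append-only use) this copies nothing.
    buf.copy_within(0..lo, old_len);
}
```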
The bench tests added show big improvements:
Timed using `rust build -O --test extra.rs && ./extra --bench deque`
Old version:
test deque::tests::bench_add_back ... bench: 4976 ns/iter (+/- 9)
test deque::tests::bench_add_front ... bench: 4108 ns/iter (+/- 18)
test deque::tests::bench_grow ... bench: 416964 ns/iter (+/- 4197)
test deque::tests::bench_new ... bench: 408 ns/iter (+/- 12)
With this commit:
test deque::tests::bench_add_back ... bench: 12 ns/iter (+/- 0)
test deque::tests::bench_add_front ... bench: 16 ns/iter (+/- 0)
test deque::tests::bench_grow ... bench: 1515 ns/iter (+/- 30)
test deque::tests::bench_new ... bench: 419 ns/iter (+/- 3)
Avoids the overhead of read_char for every character.
Benchmark reading example.json 10 times from
https://code.google.com/p/rapidjson/wiki/Performance
Before: 2.55s
After: 0.16s
Regression testing is already done by isrustfastyet.
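An illustrative sketch only (modern I/O types, not the actual json.rs change): slurping the input once and walking the resulting chars avoids paying a reader call per character.

```rust
use std::io::Read;

fn read_all_chars<R: Read>(mut rdr: R) -> std::io::Result<Vec<char>> {
    let mut buf = String::new();
    rdr.read_to_string(&mut buf)?; // one bulk read instead of per-char calls
    Ok(buf.chars().collect())
}
```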
Add a function to safely retrieve the first element of a ~[T], as
Option<T>. Implement shift() using shift_opt().
Add tests for both .shift() and .shift_opt()
Add a function to safely retrieve the last element of a ~[T], as
Option<T>. Implement pop() using pop_opt(); it benches the same as the
old implementation when tested with optimization level 2.
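A sketch of the pattern in modern syntax (the 2013 names are kept, the bodies are illustrative, and today's `Vec::pop` already is the Option-returning form): the Option-returning function does the work and the failing one unwraps it.

```rust
fn shift_opt<T>(v: &mut Vec<T>) -> Option<T> {
    // First element as an Option, shifting the rest down.
    if v.is_empty() { None } else { Some(v.remove(0)) }
}

fn shift<T>(v: &mut Vec<T>) -> T {
    shift_opt(v).expect("cannot shift an empty vector")
}

fn pop_opt<T>(v: &mut Vec<T>) -> Option<T> {
    // Last element as an Option.
    v.pop()
}

fn pop<T>(v: &mut Vec<T>) -> T {
    pop_opt(v).expect("cannot pop an empty vector")
}
```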
Where * = tcp, ip, url.
Formerly, extra::net::* were aliases of extra::net_*, but were the
recommended path to use. Thus, the documentation talked of the `net_*`
modules while everything else was written expecting `net::*`.
This moves the actual modules so that `extra::net::*` is the actual
location of the modules.
This will naturally break any code which used `extra::net_*` directly.
They should be altered to use `extra::net::*` (which has been the
documented way of doing things for some time).
This ensures that there is one, and only one, obvious way of doing
things.
Add the x64 Windows target to platform.mk and reorder some headers to fix the build on mingw64. There are still some issues with building LLVM, but this gets us one step closer.
The Base64 package previously had extremely basic functionality. It only
supported the standard encoding character set, didn't support line breaks
and always padded output. This commit makes it significantly more
powerful.
The FromBase64 impl now supports all of the standard variants of Base64.
It ignores newlines, interprets '-' and '_' as well as '+' and '/', and
doesn't require padding. It isn't incredibly pedantic and will
successfully parse strings that are not strictly valid, but I don't
think the extra complexity required to make it accept _only_ valid
strings is worth it.
The ToBase64 trait has been modified such that to_base64 now takes a
base64::Config struct which contains the output format configuration.
This currently includes the selection of character set (standard or
url safe), whether or not to pad and an optional line break width. The
package comes with three static Config structs for the RFC 4648
standard, RFC 4648 url safe and RFC 2045 MIME formats.
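A sketch of the configuration shape described above; the field and constant names here are assumptions based on this description, not a copy of the extra::base64 source.

```rust
pub enum CharacterSet { Standard, UrlSafe }

pub struct Config {
    pub char_set: CharacterSet,
    pub pad: bool,
    pub line_length: Option<usize>,
}

// RFC 4648 standard encoding: padded, no line breaks.
pub const STANDARD: Config = Config {
    char_set: CharacterSet::Standard,
    pad: true,
    line_length: None,
};

// RFC 4648 url-safe alphabet ('-' and '_').
pub const URL_SAFE: Config = Config {
    char_set: CharacterSet::UrlSafe,
    pad: false,
    line_length: None,
};

// RFC 2045 MIME: padded output broken into 76-character lines.
pub const MIME: Config = Config {
    char_set: CharacterSet::Standard,
    pad: true,
    line_length: Some(76),
};

// Call sites would then look like: data.to_base64(STANDARD)
```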
The other option for configuring ToBase64 output would be to have one
method that takes the configuration flags, plus other traits with default
impls for the common cases, but I think that's a little messier.
FromBase64 still kills the task if you pass it invalid input, which isn't
particularly appropriate for a function into which you'll be passing
unvalidated input. Would it be worth changing its signature to return a
Result?
This fixes a segfault when configuring rust to use local-rust-root. The
libraries were renamed in the 0.6-0.7 transition, and the script was not copying
them all. I also removed the line referencing libcore (now libstd).