An iterator that allows mutating the list is very useful, but it needs care
to avoid being unsound. ListIteration exposes only insert_before (used for
insert_ordered) and peek_next so far.
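As a rough illustration of how insert_ordered can be written against just those two operations, here is a toy sketch (a hypothetical Vec-backed cursor, not the actual DList code; the `advance` method is invented for the sketch):
~~~
// Hypothetical stand-ins for illustration only: a cursor trait with
// peek_next/insert_before plus an advance step, backed by a Vec.
trait ListIteration<T> {
    /// Look at the next element without advancing.
    fn peek_next(&mut self) -> Option<&T>;
    /// Insert `elt` just before the element `peek_next` would return.
    fn insert_before(&mut self, elt: T);
    /// Step past the next element.
    fn advance(&mut self);
}

struct VecCursor<'a, T> {
    vec: &'a mut Vec<T>,
    idx: usize,
}

impl<'a, T> ListIteration<T> for VecCursor<'a, T> {
    fn peek_next(&mut self) -> Option<&T> {
        self.vec.get(self.idx)
    }
    fn insert_before(&mut self, elt: T) {
        self.vec.insert(self.idx, elt);
        self.idx += 1; // stay positioned after the inserted element
    }
    fn advance(&mut self) {
        if self.idx < self.vec.len() {
            self.idx += 1;
        }
    }
}

/// Insert `elt` before the first element that is not less than it.
fn insert_ordered<T: Ord>(it: &mut impl ListIteration<T>, elt: T) {
    while matches!(it.peek_next(), Some(next) if *next < elt) {
        it.advance();
    }
    it.insert_before(elt);
}

fn main() {
    let mut v = vec![1, 3, 5];
    insert_ordered(&mut VecCursor { vec: &mut v, idx: 0 }, 4);
    assert_eq!(v, vec![1, 3, 4, 5]);
}
~~~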
This is an owned sendable linked list which allows insertion and
deletion at both ends, with fast traversal through iteration, and fast
append/prepend.
It is intended to replace the previous managed DList with exposed list
nodes. It does not match the old DList feature for feature, but it could
grow more methods if needed.
This is much faster for strings, and will eventually be faster for files
too, once there is a buffered reader of some sort.
Reading example.json 100 times previously took around 1.18s.
After:
- reading from string 0.68s
- reading from file 1.08s (extra time is all in io::Reader)
r? @graydon, @nikomatsakis, @pcwalton, or @catamorphism
Sorry this is so huge, but it's been accumulating for about a month. There's lots of stuff here, mostly oriented toward enabling multithreaded scheduling and improving compatibility between the old and new runtimes. Adds task pinning so that we can create the 'platform thread' in servo.
[Here](e1555f9b56/src/libstd/rt/mod.rs (L201)) is the current runtime setup code.
About half of this has already been reviewed.
The free-standing functions in f32, f64, i8, i16, i32, i64, u8, u16,
u32, u64, float, int, and uint are replaced with generic functions in
num instead.
This means that instead of having to know the concrete type everywhere, as in
~~~
f64::sin(x)
~~~
you can simply write code that uses the type-generic versions in num instead; this works for all types that implement the corresponding trait in num:
~~~
num::sin(x)
~~~
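Under the hood this is the usual trait-dispatch pattern. A minimal sketch of that pattern (the trait name `Sin` and these impls are illustrative, not the actual traits in num):
~~~
mod num {
    // Hypothetical trait standing in for the traits in num.
    pub trait Sin {
        fn sin(self) -> Self;
    }

    impl Sin for f32 {
        fn sin(self) -> Self { f32::sin(self) }
    }

    impl Sin for f64 {
        fn sin(self) -> Self { f64::sin(self) }
    }

    // Works for any type implementing the trait, so callers no longer
    // need to spell out the concrete type.
    pub fn sin<T: Sin>(x: T) -> T {
        x.sin()
    }
}

fn main() {
    println!("{} {}", num::sin(0.5_f32), num::sin(0.5_f64));
}
~~~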
Note 1: If you were previously using any of those functions, just replace them
with the corresponding function with the same name in num.
Note 2: If you were using a function that corresponds to an operator, use the
operator instead.
Note 3: This is just https://github.com/mozilla/rust/pull/7090 reopened against master.
Correct treatment of irrefutable patterns. The old code was wrong in many, many ways. `ref` bindings didn't work, it sometimes copied when it should have moved, the borrow checker didn't even look at such patterns at all, we weren't consistent about preventing values with destructors from being pulled apart, etc.
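For reference, an irrefutable pattern with a `ref` binding looks like this in today's Rust (a small illustrative example, not code from this patch):
~~~
struct Pair { name: String, count: usize }

fn main() {
    let p = Pair { name: "total".to_string(), count: 3 };
    {
        // Irrefutable pattern in a `let`: `ref` borrows `name` in place
        // instead of moving it, while `count` (a Copy type) is copied.
        let Pair { ref name, count } = p;
        println!("{} = {}", name, count);
    }
    // Because `name` was only borrowed above, `p.name` can still be moved here.
    let owned: String = p.name;
    println!("{}", owned);
}
~~~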
Fixes #3224.
Fixes #3225.
Fixes #3255.
Fixes #6225.
Fixes #6386.
r? @catamorphism
Avoids the overhead of read_char for every character.
Benchmark reading example.json 10 times from
https://code.google.com/p/rapidjson/wiki/Performance
Before: 2.55s
After: 0.16s
Regression testing is already done by isrustfastyet.
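The general idea, sketched in modern Rust (illustrative only, not the json parser's actual code): read through a buffer instead of asking the reader for one character at a time.
~~~
use std::io::{BufReader, Read};

// Per-byte reads pay a call (and potentially syscall) per character,
// which is the read_char-style pattern being avoided.
fn count_colons_per_byte<R: Read>(mut r: R) -> std::io::Result<usize> {
    let mut buf = [0u8; 1];
    let mut n = 0;
    while r.read(&mut buf)? == 1 {
        if buf[0] == b':' {
            n += 1;
        }
    }
    Ok(n)
}

// Buffering fills a large block per underlying read; the inner loop then
// runs entirely in memory.
fn count_colons_buffered<R: Read>(r: R) -> std::io::Result<usize> {
    let mut text = String::new();
    BufReader::new(r).read_to_string(&mut text)?;
    Ok(text.bytes().filter(|&b| b == b':').count())
}

fn main() -> std::io::Result<()> {
    let json = br#"{"a": 1, "b": [2, 3]}"#;
    assert_eq!(count_colons_per_byte(&json[..])?, 2);
    assert_eq!(count_colons_buffered(&json[..])?, 2);
    Ok(())
}
~~~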
It's broken/unmaintained and needs to be rewritten to avoid managed
pointers and needless copies. A full rewrite is necessary and the API
will need to be redone so it's not worth keeping this around (#7628).
Closes #2236, #2744
Where * = tcp, ip, url.
Formerly, extra::net::* were aliases of extra::net_*, but were the
recommended path to use. Thus, the documentation talked of the `net_*`
modules while everything else was written expecting `net::*`.
This moves the modules themselves so that `extra::net::*` is their actual
location.
This will naturally break any code which used `extra::net_*` directly.
They should be altered to use `extra::net::*` (which has been the
documented way of doing things for some time).
This ensures that there is one, and only one, obvious way of doing
things.
Change the signature of Iterator.size_hint() to always have a lower bound.
Implement .size_hint() on all remaining iterators (if it differs from the default).
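For illustration, here is what the new shape looks like with today's standard library signature, where the lower bound is always present (the `Countdown` iterator is made up for the example):
~~~
struct Countdown {
    remaining: usize,
}

impl Iterator for Countdown {
    type Item = usize;

    fn next(&mut self) -> Option<usize> {
        if self.remaining == 0 {
            None
        } else {
            self.remaining -= 1;
            Some(self.remaining)
        }
    }

    // The exact length is known, so the lower and upper bounds agree;
    // an iterator of unknown length keeps the default (0, None).
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.remaining, Some(self.remaining))
    }
}

fn main() {
    let it = Countdown { remaining: 3 };
    assert_eq!(it.size_hint(), (3, Some(3)));
    assert_eq!(it.collect::<Vec<_>>(), vec![2, 1, 0]);
}
~~~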
Adding iterators for extra::smallintmap
Working on mutability error
Ran into ICE
More mutability problems
Working through mutability issue
Working on getting tests passing
SmallIntMap tests passing
Added SmallIntSet iterators, and the tests are passing
Stripped trailing spaces
Removed extra use directive
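For context, a minimal sketch of the kind of iterator being added (a toy stand-in, not the actual extra::smallintmap code): the map stores values in a vector indexed by the key, and iteration yields the occupied (key, value) slots.
~~~
struct SmallIntMap<V> {
    slots: Vec<Option<V>>, // value for key k lives at slots[k]
}

impl<V> SmallIntMap<V> {
    // Yield (key, &value) for every occupied slot, skipping the holes.
    fn iter(&self) -> impl Iterator<Item = (usize, &V)> {
        self.slots
            .iter()
            .enumerate()
            .filter_map(|(key, slot)| slot.as_ref().map(|v| (key, v)))
    }
}

fn main() {
    let map = SmallIntMap { slots: vec![Some("a"), None, Some("c")] };
    let pairs: Vec<_> = map.iter().collect();
    assert_eq!(pairs, vec![(0, &"a"), (2, &"c")]);
}
~~~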
The deque is split at the marker lo, which is logical index 0. Move the
shorter part (as split by lo) when growing; this way add_front is, on
average, just as fast as add_back.
The previous implementation of reverse iterators used modulus (%) of
negative indices, which did work but was fragile due to dependency on
the divisor.
The deque is determined by self.elts.len(), self.nelts, and self.lo;
self.hi is calculated from these: it is just the raw index of element
number `self.nelts`.
Fix some issues with the deque being very slow: keep the same vec around
instead of constructing a new one, and move as few elements as possible so
that the self.lo point is not moved after grow.
[o o o o o|o o o]
hi...^ ^.... lo
grows to
[. . . . .|o o o o o o o o|. . .]
^.. lo ^.. hi
If the deque is append-only, it will result in moving no elements on
grow. If the deque is prepend-only, all will be moved each time.
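A minimal sketch of the grow strategy described above (illustrative, not the actual extra::deque code):
~~~
// The buffer is split at `lo`; on grow, capacity is doubled and only the
// shorter of the two parts is moved, so an append-only deque moves nothing
// and `lo` stays put whenever possible.
struct RingBuf<T> {
    elts: Vec<Option<T>>, // raw storage; its length is the capacity
    nelts: usize,         // number of live elements
    lo: usize,            // raw index of logical element 0
}

impl<T> RingBuf<T> {
    fn grow(&mut self) {
        assert_eq!(self.nelts, self.elts.len(), "grow only when full");
        let old_cap = self.elts.len();
        let new_cap = old_cap.max(1) * 2;
        self.elts.resize_with(new_cap, || None);

        // When full, the elements wrap: [lo, old_cap) is the front part
        // (logical elements 0..), [0, lo) is the back part.
        let front_len = old_cap - self.lo;
        let back_len = self.lo;

        if back_len <= front_len {
            // Move the back part up to follow the front part; `lo` is unchanged.
            for i in 0..back_len {
                self.elts.swap(old_cap + i, i);
            }
        } else {
            // Move the (shorter) front part to the end of the new storage
            // and point `lo` at it; the back part stays where it is.
            self.lo = new_cap - front_len;
            for i in 0..front_len {
                self.elts.swap(self.lo + i, old_cap - front_len + i);
            }
        }
    }
}

fn main() {
    // Capacity 4, full, logical order 1,2,3,4 with lo = 3:
    // raw layout is [2, 3, 4, 1], so front = [1] and back = [2, 3, 4].
    let mut rb = RingBuf {
        elts: vec![Some(2), Some(3), Some(4), Some(1)],
        nelts: 4,
        lo: 3,
    };
    rb.grow();
    // Only the single front element moved; `lo` now points at it.
    assert_eq!(rb.lo, 7);
    assert_eq!(rb.elts[7], Some(1));
    assert_eq!(&rb.elts[0..3], &[Some(2), Some(3), Some(4)]);
}
~~~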
The bench tests added show big improvements:
Timed using `rust build -O --test extra.rs && ./extra --bench deque`
Old version:
test deque::tests::bench_add_back ... bench: 4976 ns/iter (+/- 9)
test deque::tests::bench_add_front ... bench: 4108 ns/iter (+/- 18)
test deque::tests::bench_grow ... bench: 416964 ns/iter (+/- 4197)
test deque::tests::bench_new ... bench: 408 ns/iter (+/- 12)
With this commit:
test deque::tests::bench_add_back ... bench: 12 ns/iter (+/- 0)
test deque::tests::bench_add_front ... bench: 16 ns/iter (+/- 0)
test deque::tests::bench_grow ... bench: 1515 ns/iter (+/- 30)
test deque::tests::bench_new ... bench: 419 ns/iter (+/- 3)
There's now an enum to pick the character set instead of a url_safe
bool.
from_base64 now returns a Result<~[u8], ~str>, giving an Err instead of
killing the task when it is called on invalid input.
Fixed documentation examples.
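For illustration, the error-handling shape at the level of a single digit (a hypothetical helper, not the exact extra::base64 API):
~~~
// Decoding reports bad input through a Result instead of aborting.
fn decode_base64_digit(c: u8) -> Result<u8, String> {
    match c {
        b'A'..=b'Z' => Ok(c - b'A'),
        b'a'..=b'z' => Ok(c - b'a' + 26),
        b'0'..=b'9' => Ok(c - b'0' + 52),
        b'+' | b'-' => Ok(62), // standard or url-safe alphabet
        b'/' | b'_' => Ok(63),
        _ => Err(format!("invalid base64 character: {:?}", c as char)),
    }
}

fn main() {
    assert_eq!(decode_base64_digit(b'B'), Ok(1));
    assert!(decode_base64_digit(b'!').is_err());
}
~~~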
The Base64 package previously had extremely basic functionality. It only
supported the standard encoding character set, didn't support line breaks
and always padded output. This commit makes it significantly more
powerful.
The FromBase64 impl now supports all of the standard variants of Base64.
It ignores newlines, interprets '-' and '_' as well as '+' and '/', and
doesn't require padding. It isn't incredibly pedantic and will
successfully parse strings that are not strictly valid, but I don't
think the extra complexity required to make it accept _only_ valid
strings is worth it.
The ToBase64 trait has been modified such that to_base64 now takes a
base64::Config struct which contains the output format configuration.
This currently includes the selection of character set (standard or
url safe), whether or not to pad and an optional line break width. The
package comes with three static Config structs for the RFC 4648
standard, RFC 4648 url safe and RFC 2045 MIME formats.
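A minimal sketch of what such a Config and its ready-made constants can look like (names and field values are illustrative, not the exact extra::base64 API):
~~~
#[derive(Clone, Copy, Debug)]
enum CharacterSet {
    Standard, // '+' and '/'
    UrlSafe,  // '-' and '_'
}

#[derive(Clone, Copy)]
struct Config {
    char_set: CharacterSet,
    pad: bool,
    line_length: Option<usize>, // wrap output with "\r\n" every N characters
}

// RFC 4648 standard: standard alphabet, padded, no line breaks.
const STANDARD: Config = Config { char_set: CharacterSet::Standard, pad: true, line_length: None };
// RFC 4648 url-safe: '-'/'_' alphabet; padding commonly omitted.
const URL_SAFE: Config = Config { char_set: CharacterSet::UrlSafe, pad: false, line_length: None };
// RFC 2045 MIME: standard alphabet, padded, lines wrapped at 76 characters.
const MIME: Config = Config { char_set: CharacterSet::Standard, pad: true, line_length: Some(76) };

fn main() {
    // A hypothetical `to_base64` taking a Config would be called as, e.g.,
    // `bytes.to_base64(URL_SAFE)`; here we just show the three constants.
    for (name, c) in [("standard", STANDARD), ("url-safe", URL_SAFE), ("mime", MIME)] {
        println!("{}: {:?} pad={} line_length={:?}", name, c.char_set, c.pad, c.line_length);
    }
}
~~~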
The other option for configuring ToBase64 output would be to have one
method that takes the configuration flags, plus other traits with default
impls for the common cases, but I think that's a little messier.