Replace syntax::opt_vec with syntax::owned_slice
The `owned_slice::OwnedSlice` is `(*T, uint)` (i.e. a direct equivalent to `~[T]` under DST).
This shaves two words off the old OptVec type, and also makes it easy to substitute other implementations by removing all the mutation methods (and everything else that is very rarely or never used).
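A sketch of that representation (field names here are illustrative, not necessarily the ones used in syntax::owned_slice):
```
// One pointer word plus one length word, with no capacity field and
// no mutation API; the allocation is owned and freed on drop.
pub struct OwnedSlice<T> {
    data: *T,
    len: uint,
}
```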
* `Ord` inherits from `Eq`
* `TotalOrd` inherits from `TotalEq`
* `TotalOrd` inherits from `Ord`
* `TotalEq` inherits from `Eq`
This is a partial implementation of #12517.
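In skeleton form, the hierarchy described by the bullets looks like this (method lists elided; the real traits live in std::cmp):
```
trait Eq { /* == and != */ }
trait Ord: Eq { /* <, <=, >, >= */ }
trait TotalEq: Eq { /* total equality */ }
trait TotalOrd: TotalEq + Ord { /* total ordering via cmp() */ }
```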
Formatting via reflection has been a little questionable for some time now, and
it's unfortunate that one of the standard macros will silently use reflection
when you weren't expecting it. Reflection also adds small bits of code bloat to
libraries and is not always necessary. In light of this information, this
commit switches assert_eq!() to using {} in the error message instead of
{:?}.
In updating existing code, there were a few error cases that I encountered:
* It's impossible to define Show for [T, ..N]. I think DST will alleviate this
because we can define Show for [T].
* A few types here and there just needed a #[deriving(Show)]
* Type parameters needed a Show bound; in those cases I often switched to `assert!(a == b)` instead
* `Path` doesn't implement `Show`, so assert_eq!() cannot be used on two paths.
I don't think this is much of a regression though because {:?} on paths looks
awful (it's a byte array).
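As a minimal illustration of the new requirement (Point is a made-up example type):
```
// Both arguments to assert_eq!() must now implement Show (the {}
// trait) in addition to Eq.
#[deriving(Eq, Show)]
struct Point { x: int, y: int }

fn main() {
    let a = Point { x: 1, y: 2 };
    let b = Point { x: 1, y: 2 };
    assert_eq!(a, b); // a failure message would be formatted with {}
}
```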
Concretely speaking, this shaved 10K off a 656K binary. Not a lot, but sometimes
significant for smaller binaries.
This weeds out a bunch of warnings when building stdtest on windows, and it also
adds a check! macro to the io::fs tests to help diagnose the errors that are
cropping up on windows platforms.
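A hedged sketch of what such a macro can look like (the actual check! in the io::fs tests may differ):
```
// Unwrap an I/O result, reporting the failing expression on error so
// that test failures point at the exact call that went wrong.
macro_rules! check(
    ($e:expr) => (
        match $e {
            Ok(t) => t,
            Err(e) => fail!("{} failed with: {}", stringify!($e), e)
        }
    )
)
```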
cc #12516
With the stability attributes we can put public-but-unstable modules next to others, so this moves `intrinsics` and `raw` out of the `unstable` module (and marks both as `#[experimental]`).
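As a sketch of the attribute usage:
```
// A public-but-unstable module can now sit alongside stable modules,
// guarded by a stability attribute rather than living under
// std::unstable.
#[experimental]
pub mod raw {
    // ...
}
```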
These two containers are indeed collections, so their place is in
libcollections, not in libstd. There will always be a hash map as part of the
standard distribution of Rust, but moving it out of the standard library makes
libstd that much more portable to more platforms and environments.
This conveniently also removes the stuttering of 'std::hashmap::HashMap',
although 'collections::HashMap' is only one character shorter.
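The import change, as a usage sketch:
```
// Previously: use std::hashmap::HashMap;
extern crate collections;

use collections::HashMap;

fn main() {
    let mut map = HashMap::new();
    map.insert(~"key", 1);
}
```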
This "bubble up an error" macro was originally named if_ok! in order to get it
landed, but after the fact it was discovered that this name is not exactly
desirable.
The name `if_ok!` isn't immediately clear that is has much to do with error
handling, and it doesn't look fantastic in all contexts (if if_ok!(...) {}). In
general, the agreed opinion about `if_ok!` is that is came in as subpar.
The name `try!` is more invocative of error handling, it's shorter by 2 letters,
and it looks fitting in almost all circumstances. One concern about the word
`try!` is that it's too invocative of exceptions, but the belief is that this
will be overcome with documentation and examples.
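In essence, the macro is (a sketch; the real definition lives in libstd):
```
// Evaluate an expression yielding a Result; on Err, return early from
// the enclosing function with that error.
macro_rules! try(
    ($e:expr) => (
        match $e {
            Ok(val) => val,
            Err(err) => return Err(err)
        }
    )
)
```
A call like `let val = try!(reader.read_byte());` then replaces the old `if_ok!` form.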
Closes #12037
It makes the unsafe assumptions that any impl of RawPtr is for actual pointers
and that they can be copied by memcpy. Removing it is easy, so I don't
think it's solving a real problem.
This pull request:
1) Changes the initial insertion sort to be in-place, and defers allocation of the working set until a merge is needed.
2) Increases the maximum run length that insertion sort is used for from 8 to 32 elements. This increases the size of vectors that will not allocate, and reduces the number of merge passes by two. It seemed to be the sweet spot in the benchmarks that I ran.
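For reference, a minimal sketch of an in-place insertion sort over a short run (not the exact library code):
```
// Sort v in place with no allocation; quadratic in the worst case,
// but cheap for the short runs (up to 32 elements) described above.
fn insertion_sort<T>(v: &mut [T], less: |&T, &T| -> bool) {
    for i in range(1, v.len()) {
        let mut j = i;
        while j > 0 && less(&v[j], &v[j - 1]) {
            v.swap(j, j - 1);
            j -= 1;
        }
    }
}
```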
Here are the results of some benchmarks. Note that they are sorting u64s, so types that are more expensive to compare or copy may have different behaviors.
Before changes:
```
test vec::bench::sort_random_large bench: 719753 ns/iter (+/- 130173) = 111 MB/s
test vec::bench::sort_random_medium bench: 4726 ns/iter (+/- 742) = 169 MB/s
test vec::bench::sort_random_small bench: 344 ns/iter (+/- 76) = 116 MB/s
test vec::bench::sort_sorted bench: 437244 ns/iter (+/- 70043) = 182 MB/s
```
Deferred allocation (8 element insertion sort):
```
test vec::bench::sort_random_large bench: 702630 ns/iter (+/- 88158) = 113 MB/s
test vec::bench::sort_random_medium bench: 4529 ns/iter (+/- 497) = 176 MB/s
test vec::bench::sort_random_small bench: 185 ns/iter (+/- 49) = 216 MB/s
test vec::bench::sort_sorted bench: 425853 ns/iter (+/- 60907) = 187 MB/s
```
Deferred allocation (16 element insertion sort):
```
test vec::bench::sort_random_large bench: 692783 ns/iter (+/- 165837) = 115 MB/s
test vec::bench::sort_random_medium bench: 4434 ns/iter (+/- 722) = 180 MB/s
test vec::bench::sort_random_small bench: 187 ns/iter (+/- 38) = 213 MB/s
test vec::bench::sort_sorted bench: 393783 ns/iter (+/- 85548) = 203 MB/s
```
Deferred allocation (32 element insertion sort):
```
test vec::bench::sort_random_large bench: 682556 ns/iter (+/- 131008) = 117 MB/s
test vec::bench::sort_random_medium bench: 4370 ns/iter (+/- 1369) = 183 MB/s
test vec::bench::sort_random_small bench: 179 ns/iter (+/- 32) = 223 MB/s
test vec::bench::sort_sorted bench: 358353 ns/iter (+/- 65423) = 223 MB/s
```
Deferred allocation (64 element insertion sort):
```
test vec::bench::sort_random_large bench: 712040 ns/iter (+/- 132454) = 112 MB/s
test vec::bench::sort_random_medium bench: 4425 ns/iter (+/- 784) = 180 MB/s
test vec::bench::sort_random_small bench: 179 ns/iter (+/- 81) = 223 MB/s
test vec::bench::sort_sorted bench: 317812 ns/iter (+/- 62675) = 251 MB/s
```
This is the best I could manage with the basic merge sort while keeping the invariant that the original vector must contain each element exactly once when the comparison function is called. If one is not married to a stable sort, an in-place n*log(n) sorting algorithm may have better performance in some cases.
for #12011
cc @huonw
Added a separate in-place insertion sort for short vectors.
Increased the threshold for insertion sort from 8 to 32 elements
for small types and 16 for larger types. Added benchmarks
for sorting larger types.
This is part of the overall strategy I would like to take when approaching
issue #11165. The only two I/O objects that reasonably want to be "split" are
the network stream objects. Everything else can be "split" by just creating
another version.
The initial idea I had was to literally split the object into a reader and a
writer half, but that would just introduce lots of clutter with extra interfaces
that were a little unnecessary, or it would return a ~Reader and a ~Writer which
means you couldn't access things like the remote peer name or local socket name.
The solution I found to be nicer was to just clone the stream itself. The clone
is just a clone of the handle, nothing fancy going on at the kernel level.
Conceptually I found this very easy to wrap my head around (everything else
supports clone()), and it solved the "split" problem at the same time.
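The resulting usage, as a sketch (exact signatures in io::net::tcp may differ slightly):
```
use std::io::net::tcp::TcpStream;

// "Splitting" a stream is just cloning the handle; both halves are
// full TcpStreams, so peer/socket names remain available on each.
fn split(stream: TcpStream) -> (TcpStream, TcpStream) {
    (stream.clone(), stream)
}
```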
The cloning support is pretty specific per platform/lib combination:
* native/win32 - uses some specific WSA apis to clone the SOCKET handle
* native/unix - uses dup() to get another file descriptor
* green/all - This is where things get interesting. When we support full clones
of a handle, this implies that we're allowing simultaneous writes
and reads to happen. It turns out that libuv doesn't support two
simultaneous reads or writes of the same object. It does support
*one* read and *one* write at the same time, however. Some extra
infrastructure was added to just block concurrent writers/readers
until the previous read/write operation was completed.
I've added tests to the tcp/unix modules to make sure that this functionality is
supported everywhere.
This removes `@[]` from the parser, as well as as much of the handling of it (and `@str`) in the compiler as I could find.
I've just rebased @pcwalton's (already reviewed) `@str` removal (and fixed the problems in a separate commit); the only new work is the trailing commits with my authorship.
Closes #11967
Renamed the invert() function in iter.rs to flip().
Also renamed the Invert<T> type to Flip<T>.
Some related code comments changed. Documentation that I could find has
been updated, and all the instances I could locate where the
function/type were called have been updated as well.
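Usage after the rename, as a sketch:
```
fn main() {
    let v = [1, 2, 3];
    // Iterate in reverse order via the renamed adaptor.
    for x in v.iter().flip() {
        println!("{}", *x);
    }
}
```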
Major changes:
- Define temporary scopes in a syntax-based way that basically defaults
to the innermost statement or conditional block, except in a `let`
initializer, where we default to the innermost block. Rules are
documented in the code, but not in the manual (yet).
See new test run-pass/cleanup-value-scopes.rs for examples.
- Refactors Datum to better define cleanup roles.
- Refactor cleanup scopes to not be tied to basic blocks, permitting
us to have a very large number of scopes (one per AST node).
- Introduce nascent documentation in trans/doc.rs covering datums and
cleanup in a more comprehensive way.
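To illustrate the `let` rule with made-up helpers (in the spirit of run-pass/cleanup-value-scopes.rs):
```
struct Thing { n: int }

fn make() -> Thing { Thing { n: 0 } }
fn use_ref(t: &Thing) { let _ = t.n; }

fn demo() {
    // In a `let` initializer, the temporary returned by make() gets
    // the innermost block as its scope, so `r` stays valid below.
    let r = &make();
    use_ref(r);

    // In an ordinary statement, the temporary lives only for the
    // innermost statement.
    use_ref(&make());
}
```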
r? @pcwalton
Unique pointers and vectors currently contain a reference counting
header when containing a managed pointer.
This `{ ref_count, type_desc, prev, next }` header is not necessary and
not a sensible foundation for tracing. It adds needless complexity to
library code and is responsible for breakage in places where the branch
has been left out.
The `borrow_offset` field can now be removed from `TyDesc` along with
the associated handling in the compiler.
Closes #9510. Closes #11533.
The `print!` and `println!` macros are now the preferred method of printing, and so there is no reason to export the `stdio` functions in the prelude. The functions have also been replaced by their macro counterparts in the tutorial and other documentation so that newcomers don't get confused about what they should be using.
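The preferred style is simply:
```
fn main() {
    // Use the macros rather than the previously-preluded stdio
    // functions.
    print!("{} + {} = ", 1, 2);
    println!("{}", 1 + 2);
}
```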
This commit makes the short titles of the modules provided by libstd uniform,
in order to make their roles more explicit when glancing at the index.
Signed-off-by: Luca Bruno <lucab@debian.org>
This uses quite a bit of unsafe code for speed and failure safety, and allocates `2*n` elements of temporary storage.
[Performance](https://gist.github.com/huonw/5547f2478380288a28c2):
| n | new | priority_queue | quick3 |
|-------:|---------:|---------------:|---------:|
| 5 | 200 | 155 | 106 |
| 100 | 6490 | 8750 | 5810 |
| 10000 | 1300000 | 1790000 | 1060000 |
| 100000 | 16700000 | 23600000 | 12700000 |
| sorted | 520000 | 1380000 | 53900000 |
| trend | 1310000 | 1690000 | 1100000 |
(The times are in nanoseconds, having subtracted the set-up time (i.e. the `just_generate` bench target).)
I imagine that there is still significant room for improvement, particularly because both priority_queue and quick3 are doing a static call via `Ord` or `TotalOrd` for the comparisons, while this is using a (boxed) closure.
Also, unlike `quick_sort3`, this code does not `clone`; and unlike both of the others, it is stable.
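For orientation, a simplified Clone-based schematic of the merge step; the real code avoids `clone` and uses unsafe pointer copies for the speed and failure safety mentioned above:
```
use std::vec;

// Merge two sorted slices through a temporary buffer; ties go to the
// left-hand slice, which is what keeps the sort stable.
fn merge<T: Clone>(xs: &[T], ys: &[T], le: |&T, &T| -> bool) -> ~[T] {
    let mut out = vec::with_capacity(xs.len() + ys.len());
    let (mut i, mut j) = (0, 0);
    while i < xs.len() && j < ys.len() {
        if le(&xs[i], &ys[j]) {
            out.push(xs[i].clone());
            i += 1;
        } else {
            out.push(ys[j].clone());
            j += 1;
        }
    }
    out.push_all(xs.slice_from(i));
    out.push_all(ys.slice_from(j));
    out
}
```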
Update the next() method to just return self.v in the case that we've reached
the last element that the iterator will yield. This produces behavior
equivalent to before, but without the cost of updating the field.
Update the size_hint() method to return a better hint now that #9629 is fixed.
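As a hypothetical illustration of the pattern (this is not the actual iterator touched by the commit):
```
struct Suffixes<'a, T> { v: &'a [T], done: bool }

fn suffixes<'a, T>(v: &'a [T]) -> Suffixes<'a, T> {
    Suffixes { v: v, done: v.is_empty() }
}

impl<'a, T> Iterator<&'a [T]> for Suffixes<'a, T> {
    fn next(&mut self) -> Option<&'a [T]> {
        if self.done {
            None
        } else if self.v.len() == 1 {
            // Final yield: return self.v directly rather than also
            // paying to write an empty slice back into the field.
            self.done = true;
            Some(self.v)
        } else {
            let r = self.v;
            self.v = self.v.slice_from(1);
            Some(r)
        }
    }

    // The field always holds exactly the remaining tail, so the hint
    // can be exact.
    fn size_hint(&self) -> (uint, Option<uint>) {
        let n = if self.done { 0 } else { self.v.len() };
        (n, Some(n))
    }
}
```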
very small runs.
This uses a lot of unsafe code for speed; otherwise we would have to
sort by sorting lists of indices and then do a pile of swaps to put
everything in the correct place.
Fixes #9819.
Also, add `.remove_opt` and replace `.shift` with `.remove(0)`. The
code size reduction seems to compensate for not having the optimised
special cases.
This makes the included benchmark more than 3 times faster.
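A usage sketch of the new calls (assuming `.remove` returns the element and `.remove_opt` returns an Option):
```
fn main() {
    let mut v = ~[1, 2, 3];
    assert_eq!(v.remove(0), 1);         // replaces v.shift()
    assert_eq!(v.remove_opt(10), None); // out of bounds gives None
    v.insert(0, 0);                     // replaces v.unshift(0)
    assert_eq!(v, ~[0, 2, 3]);
}
```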
This makes the included benchmark more than 3 times faster. Also,
`.unshift(x)` is now the faster `.insert(0, x)`, which can reuse the
allocation if necessary.
Removing the aliasing `&mut []` and `&[]` from `shift_opt` also simplifies it.
The above also allows the use of `copy_nonoverlapping_memory` in `[].copy_memory` (I did an audit of each use of `.copy_memory` and `std::vec::bytes::copy_memory`, and I believe none of them are called with arguments that can ever alias). This change means that `unsafe` code using `copy_memory` **must** respect the aliasing rules of `&mut []`.
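A sketch of the distinction (assuming the slice `as_ptr`/`as_mut_ptr` accessors):
```
use std::ptr;

// copy_nonoverlapping_memory has memcpy semantics: undefined behaviour
// if the regions overlap, which is why the slices handed to
// copy_memory must now obey &mut [] aliasing rules.
unsafe fn copy_into<T>(dst: &mut [T], src: &[T]) {
    assert!(dst.len() >= src.len());
    ptr::copy_nonoverlapping_memory(dst.as_mut_ptr(), src.as_ptr(), src.len());
}
```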