Declare a `type SendStr = MaybeOwned<'static>` to ease readability of
types that needed the old SendStr behavior.
Implement all the traits for MaybeOwned that SendStr used to implement.
- Convert the formatting traits to `&self` rather than `_: &Self`
- Rejig `syntax::ext::{format,deriving}` a little in preparation
- Implement `#[deriving(Show)]`
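To make the alias concrete, here is a rough sketch in modern Rust, using `std::borrow::Cow` (today's equivalent of `MaybeOwned`); the `greet` function and the strings are invented for illustration, not the original library code:
```
// Hypothetical sketch: Cow<'static, str> plays the role that
// MaybeOwned<'static> played at the time.
use std::borrow::Cow;

type SendStr = Cow<'static, str>;

// One signature accepts both borrowed 'static strings and owned ones.
fn greet(name: SendStr) -> String {
    format!("hello, {}", name)
}

fn main() {
    println!("{}", greet(Cow::Borrowed("world")));
    println!("{}", greet(Cow::Owned(String::from("rustaceans"))));
}
```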
This also drops support for the managed pointer POISON_ON_FREE feature,
as it's not worth adding back support for it. After a snapshot, the
leftovers can be removed.
This pull request:
1) Changes the initial insertion sort to be in-place, and defers allocation of the working set until a merge is needed.
2) Increases the maximum run length handled by insertion sort from 8 to 32 elements. This increases the size of vectors that will not allocate and reduces the number of merge passes by two. It seemed to be the sweet spot in the benchmarks that I ran.
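As a rough sketch of the in-place insertion sort used for short runs (the threshold constant and function names here are hypothetical, not the actual library code):
```
// Hypothetical threshold mirroring the 32-element cutoff described above.
const INSERTION_THRESHOLD: usize = 32;

// In-place insertion sort: shift each element left until its prefix is sorted.
fn insertion_sort<T: Ord>(v: &mut [T]) {
    for i in 1..v.len() {
        let mut j = i;
        while j > 0 && v[j - 1] > v[j] {
            v.swap(j - 1, j);
            j -= 1;
        }
    }
}

fn sort<T: Ord>(v: &mut [T]) {
    // Short vectors are sorted in place and never allocate a working buffer.
    if v.len() <= INSERTION_THRESHOLD {
        insertion_sort(v);
        return;
    }
    // ...otherwise fall back to the allocating merge sort (elided here)...
}

fn main() {
    let mut v = [5i32, 2, 4, 1, 3];
    sort(&mut v);
    assert_eq!(v, [1, 2, 3, 4, 5]);
}
```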
Here are the results of some benchmarks. Note that they are sorting u64s, so types that are more expensive to compare or copy may have different behaviors.
Before changes:
```
test vec::bench::sort_random_large bench: 719753 ns/iter (+/- 130173) = 111 MB/s
test vec::bench::sort_random_medium bench: 4726 ns/iter (+/- 742) = 169 MB/s
test vec::bench::sort_random_small bench: 344 ns/iter (+/- 76) = 116 MB/s
test vec::bench::sort_sorted bench: 437244 ns/iter (+/- 70043) = 182 MB/s
```
Deferred allocation (8 element insertion sort):
```
test vec::bench::sort_random_large bench: 702630 ns/iter (+/- 88158) = 113 MB/s
test vec::bench::sort_random_medium bench: 4529 ns/iter (+/- 497) = 176 MB/s
test vec::bench::sort_random_small bench: 185 ns/iter (+/- 49) = 216 MB/s
test vec::bench::sort_sorted bench: 425853 ns/iter (+/- 60907) = 187 MB/s
```
Deferred allocation (16 element insertion sort):
```
test vec::bench::sort_random_large bench: 692783 ns/iter (+/- 165837) = 115 MB/s
test vec::bench::sort_random_medium bench: 4434 ns/iter (+/- 722) = 180 MB/s
test vec::bench::sort_random_small bench: 187 ns/iter (+/- 38) = 213 MB/s
test vec::bench::sort_sorted bench: 393783 ns/iter (+/- 85548) = 203 MB/s
```
Deferred allocation (32 element insertion sort):
```
test vec::bench::sort_random_large bench: 682556 ns/iter (+/- 131008) = 117 MB/s
test vec::bench::sort_random_medium bench: 4370 ns/iter (+/- 1369) = 183 MB/s
test vec::bench::sort_random_small bench: 179 ns/iter (+/- 32) = 223 MB/s
test vec::bench::sort_sorted bench: 358353 ns/iter (+/- 65423) = 223 MB/s
```
Deferred allocation (64 element insertion sort):
```
test vec::bench::sort_random_large bench: 712040 ns/iter (+/- 132454) = 112 MB/s
test vec::bench::sort_random_medium bench: 4425 ns/iter (+/- 784) = 180 MB/s
test vec::bench::sort_random_small bench: 179 ns/iter (+/- 81) = 223 MB/s
test vec::bench::sort_sorted bench: 317812 ns/iter (+/- 62675) = 251 MB/s
```
This is the best I could manage with the basic merge sort while keeping the invariant that the original vector must contain each element exactly once when the comparison function is called. If one is not married to a stable sort, an in-place n*log(n) sorting algorithm may have better performance in some cases.
for #12011
cc @huonw
Added a separate in-place insertion sort for short vectors.
Increased the threshold for insertion sort from 8 to 32 elements
for small types and to 16 for larger types. Added benchmarks
for sorting larger types.
`from_utf8_lossy()` takes a byte vector and produces a `~str`, converting
any invalid UTF-8 sequence into the U+FFFD REPLACEMENT CHARACTER.
The replacement follows the guidelines in §5.22 Best Practice for U+FFFD
Substitution from the Unicode Standard (Version 6.2)[1], which also
matches the WHATWG rules for utf-8 decoding[2].
[1]: http://www.unicode.org/versions/Unicode6.2.0/ch05.pdf
[2]: http://encoding.spec.whatwg.org/#utf-8
Closes #9516.
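For a rough feel of the behavior, here is a sketch using today's `String::from_utf8_lossy` (the original returned a `~str`); the byte string is an invented example containing a truncated four-byte sequence:
```
fn main() {
    // "hello " followed by a truncated 4-byte UTF-8 sequence, then "world".
    let bytes = b"hello \xF0\x90\x80world";
    let s = String::from_utf8_lossy(bytes);
    // Per the §5.22 best practice, the maximal valid subpart (F0 90 80)
    // collapses into a single U+FFFD.
    assert_eq!(s, "hello \u{FFFD}world");
}
```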
This has been a long time coming. Conditions in rust were initially envisioned
as a good alternative to the error-code return pattern. The idea is that all
errors are fatal-by-default, and you can opt in to handling an error by
registering an error handler.
While sounding nice, conditions ended up having some unforeseen shortcomings:
* Actually handling an error has some very awkward syntax:
```
let mut result = None;
let mut answer = None;
io::io_error::cond.trap(|e| { result = Some(e) }).inside(|| {
    answer = Some(some_io_operation());
});
match result {
    Some(err) => { /* hit an I/O error */ }
    None => {
        let answer = answer.unwrap();
        /* deal with the result of I/O */
    }
}
```
This pattern can certainly use functions like io::result, but at its core
actually handling conditions is fairly difficult.
* The "zero value" of a function is often confusing. One of the main ideas
behind using conditions was to change the signature of I/O functions. Instead
of read_be_u32() returning a result, it returned a u32. Errors were signaled
via a condition, and if you caught the condition you understood that the "zero
value" returned is actually a garbage value. These zero values are often
difficult to understand, however.
One case of this is the read_bytes() function. The function takes an integer
count of bytes to read, and returns an array of that size. The
array may actually be shorter, however, if an error occurred.
Another case is fs::stat(). The theoretical "zero value" is a blank stat
struct, but it's a little awkward to create and return a zeroed-out stat
struct on a call to stat().
In general, the return values of functions that can raise errors are much more
natural when using a Result as opposed to an always-usable zero value.
* Conditions impose a necessary runtime requirement on *all* I/O. In theory I/O
is as simple as calling read() and write(), but using conditions imposed the
restriction that a rust local task was required if you wanted to catch errors
with I/O. While certainly a surmountable difficulty, this was always a bit of
a thorn in the side of conditions.
* Functions raising conditions are not always clear about the fact that they raise
conditions. This suffers from a problem similar to exceptions, where you don't
actually know whether a function raises a condition or not. The documentation
likely explains, but if someone retroactively adds a condition to a function
there's nothing forcing upstream users to acknowledge a new point of task
failure.
* Libraries using I/O are not guaranteed to correctly raise conditions when an
error occurs. In developing various I/O libraries, it's much easier to just
return `None` from a read rather than raising an error. The silent contract of
"don't raise on EOF" was a little difficult to understand and threw a wrench
into the answer of the question "when do I raise a condition?"
Many of these difficulties can be overcome through documentation, examples, and
general practice. In the end, all of these difficulties taken together were
too overwhelming, and improving various aspects didn't end up helping that
much.
A result-based I/O error handling strategy also has shortcomings, but the
cognitive burden is much smaller. The tooling necessary to make this strategy as
usable as conditions were is much smaller than the tooling necessary for
conditions.
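As a hedged sketch of the Result-based style being argued for, in modern syntax (`io::Result` rather than the `IoResult` of that era, and `?` rather than a macro); the file name and function are invented:
```
use std::fs::File;
use std::io::{self, Read};

// Errors propagate as ordinary values; no task-local handler is needed.
fn read_config() -> io::Result<String> {
    let mut buf = String::new();
    File::open("config.txt")?.read_to_string(&mut buf)?;
    Ok(buf)
}

fn main() {
    match read_config() {
        Ok(text) => println!("read {} bytes", text.len()),
        Err(e) => println!("I/O error: {}", e),
    }
}
```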
Perhaps conditions may manifest themselves as a future entity, but for now
we're going to remove them from the standard library.
Closes #9795. Closes #8968.
A weak pointer stored inside the data it points to will have its destructor
run when the last strong pointer to that data disappears, so we need to make
sure that the Weak and Rc destructors don't duplicate work (i.e. freeing).
By making the Rcs collectively hold one weak reference, we ensure that no
Weak destructor will free the pointer while still ensuring that Weak
pointers can't be upgraded to strong ones as the destructors run.
This approach of starting weak at 1 is what libstdc++ does.
Fixes #12046.
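A minimal sketch of the self-referential case this guards against, written with today's Rc/Weak (the Node type is invented for illustration):
```
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A node that holds a weak pointer to itself.
struct Node {
    me: RefCell<Weak<Node>>,
}

fn main() {
    let node = Rc::new(Node { me: RefCell::new(Weak::new()) });
    *node.me.borrow_mut() = Rc::downgrade(&node);

    assert_eq!(Rc::strong_count(&node), 1);
    assert_eq!(Rc::weak_count(&node), 1);

    // Dropping the last Rc must free the value exactly once, even though
    // a Weak pointing at the value dies during the teardown.
    drop(node);
}
```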
I have a hunch this just deadlocked the windows bots. Since UDP is a lossy
protocol, I don't think we can guarantee that the server receives both
packets, so just listen for one of them.
This is part of the overall strategy I would like to take when approaching
issue #11165. The only two I/O objects that reasonably want to be "split" are
the network stream objects. Everything else can be "split" by just creating
another version.
The initial idea I had was to literally split the object into a reader and a
writer half, but that would just introduce lots of clutter with extra interfaces
that were a little unnecessary, or it would return a ~Reader and a ~Writer, which
means you couldn't access things like the remote peer name or local socket name.
The solution I found to be nicer was to just clone the stream itself. The clone
is just a clone of the handle, nothing fancy going on at the kernel level.
Conceptually I found this very easy to wrap my head around (everything else
supports clone()), and it solved the "split" problem at the same time.
The cloning support is pretty specific per platform/lib combination:
* native/win32 - uses some specific WSA APIs to clone the SOCKET handle
* native/unix - uses dup() to get another file descriptor
* green/all - This is where things get interesting. When we support full clones
of a handle, this implies that we're allowing simultaneous writes
and reads to happen. It turns out that libuv doesn't support two
simultaneous reads or writes of the same object. It does support
*one* read and *one* write at the same time, however. Some extra
infrastructure was added to just block concurrent writers/readers
until the previous read/write operation was completed.
I've added tests to the tcp/unix modules to make sure that this functionality is
supported everywhere.
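As a sketch of the "split by cloning" pattern in today's API surface (`TcpStream::try_clone`; the original exposed a plain `clone()`, and the address here is made up):
```
use std::io::{Read, Write};
use std::net::TcpStream;
use std::thread;

fn main() -> std::io::Result<()> {
    // Assumed address for illustration only.
    let mut stream = TcpStream::connect("127.0.0.1:8080")?;
    // The clone shares the same kernel handle; nothing fancy underneath.
    let mut reader = stream.try_clone()?;

    // One half writes while the other reads, from separate threads.
    let writer = thread::spawn(move || {
        let _ = stream.write_all(b"ping");
    });

    let mut buf = [0u8; 4];
    let _ = reader.read_exact(&mut buf);
    writer.join().unwrap();
    Ok(())
}
```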
This patch adds a new arc type that allows for the creation of copy-on-write data structures. The idea is that it is safe to mutate any data structure as long as there is only one reference to it. If there are multiple references, the data structure must be cloned before mutation is possible.
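A minimal sketch of the copy-on-write idea, using today's `Arc::make_mut` (which may differ from the type this patch added):
```
use std::sync::Arc;

fn main() {
    let mut a = Arc::new(vec![1, 2, 3]);
    let b = Arc::clone(&a);

    // Two references exist, so mutation clones the vector first.
    Arc::make_mut(&mut a).push(4);

    assert_eq!(*a, vec![1, 2, 3, 4]);
    assert_eq!(*b, vec![1, 2, 3]); // the other reference is untouched
}
```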
This allows for easier static initialization of a pthread mutex, although the
windows mutexes still sadly suffer.
Note that this commit removes the clone() method from a mutex because it no
longer makes sense for pthreads mutexes. This also removes the Once type for
now, but it'll get added back shortly.
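To illustrate the static-initialization property in modern terms (a sketch, not the code from this change): a Mutex can live in a `static` with no runtime setup, mirroring the pthread static-initializer path described above.
```
use std::sync::Mutex;

// Initialized at compile time; no lazy runtime setup required.
static COUNTER: Mutex<u32> = Mutex::new(0);

fn main() {
    *COUNTER.lock().unwrap() += 1;
    assert_eq!(*COUNTER.lock().unwrap(), 1);
}
```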
* All I/O now returns IoResult<T> = Result<T, IoError>
* All formatting traits now return fmt::Result = IoResult<()>
* The if_ok!() macro was added to libstd
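A sketch of the early-return style these changes enable; in modern Rust the `?` operator plays the role `if_ok!()` played in libstd at the time (the writer function here is invented):
```
use std::io::{self, Write};

// Each write either succeeds or returns the error to the caller early.
fn write_header<W: Write>(w: &mut W) -> io::Result<()> {
    w.write_all(b"HTTP/1.1 200 OK\r\n")?;
    w.write_all(b"\r\n")?;
    Ok(())
}

fn main() -> io::Result<()> {
    let mut out = Vec::new();
    write_header(&mut out)?;
    assert!(out.starts_with(b"HTTP/1.1"));
    Ok(())
}
```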
- renames `Default` to `Show`
- introduces some hidden `std::fmt::secret_...` functions, designed to work around the lack of UFCS (with UFCS they can be replaced by referencing the trait methods directly), because I'm going to convert the traits to have methods rather than static functions, since `#[deriving]` works much better with true methods.
I'm blocked on a snapshot after this. (I could probably do a large number of `#[cfg]`s, but I can work on other things in the meantime.)
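For a rough picture of the resulting trait shape, here is a modern sketch (`#[derive(Debug)]` and `Display` are today's spellings of `#[deriving(Show)]` and `Show`); the Point type is invented:
```
use std::fmt;

#[derive(Debug)]
struct Point {
    x: i32,
    y: i32,
}

// The formatting method takes &self, as the change above describes.
impl fmt::Display for Point {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "({}, {})", self.x, self.y)
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    println!("{:?} renders as {}", p, p);
}
```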