with_end_to_cap is enormously expensive now that it's initializing
memory since it involves a 64k allocation + memset on every call. This is
most noticeable when calling read_to_end on very small readers, where the
new version is **4 orders of magnitude** faster.
BufReader also depended on with_end_to_cap so I've rewritten it in its
original form.
As a bonus, converted the buffered IO struct Debug impls to use the
debug builders.
I first came across this in sfackler/rust-postgres#106 where a user reported a 10x performance regression. A call to read_to_end turned out to be the culprit: 9cd413d42c.
The new version differs from the old in a couple of ways. The buffer size used is now adaptive. It starts at 32 bytes and doubles each time EOF hasn't been reached, up to a limit of 64k. In addition, the buffer is only truncated when EOF or an error has been reached, rather than after every call to read as was the case for the old implementation.
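Roughly, the new strategy looks like this (an illustrative sketch, not the actual `std::io` code):

```rust
use std::io::{self, Read};

// Illustrative sketch only: the chunk size starts at 32 bytes and
// doubles after every read that doesn't hit EOF, up to a 64k ceiling,
// and the vector is only truncated once EOF or an error is reached.
fn read_to_end_sketch<R: Read>(r: &mut R, buf: &mut Vec<u8>) -> io::Result<usize> {
    let start_len = buf.len();
    let mut len = start_len; // bytes actually read so far
    let mut cap = 32;        // current chunk size
    loop {
        if buf.len() < len + cap {
            buf.resize(len + cap, 0);
        }
        match r.read(&mut buf[len..]) {
            Ok(0) => {
                buf.truncate(len);
                return Ok(len - start_len);
            }
            Ok(n) => {
                len += n;
                if cap < 64 * 1024 {
                    cap *= 2;
                }
            }
            Err(e) => {
                buf.truncate(len);
                return Err(e);
            }
        }
    }
}
```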
I wrote up a benchmark to compare the old version and new version: https://gist.github.com/sfackler/e979711b0ee2f2063462
It tests a couple of different cases: a high bandwidth reader, a low bandwidth reader, and a low bandwidth reader that won't return more than 10k per call to `read`. The high bandwidth reader should be analogous to use cases when reading from e.g. a `BufReader` or `Vec`, and the low bandwidth readers should be analogous to reading from something like a `TcpStream`.
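The capped reader can be pictured as something like the following (a hypothetical sketch with made-up names, not the code from the gist):

```rust
use std::cmp;
use std::io::{self, Read};

// Hypothetical stand-in for the benchmark's capped reader: it never
// hands back more than `max_per_read` bytes from a single call to
// `read`, mimicking a TcpStream with only ~10k buffered at a time.
struct CappedReader<R> {
    inner: R,
    max_per_read: usize,
}

impl<R: Read> Read for CappedReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = cmp::min(buf.len(), self.max_per_read);
        self.inner.read(&mut buf[..n])
    }
}
```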
Of special note, reads from a high bandwidth reader containing 4 bytes are now *4,495 times faster*.
```
~/foo ❯ cargo bench
Compiling foo v0.0.1 (file:///home/sfackler/foo)
Running target/release/foo-7498d7dd7faecf5c
running 13 tests
test test_new ... ignored
test new_delay_4 ... bench: 230768 ns/iter (+/- 14812)
test new_delay_4_cap ... bench: 231421 ns/iter (+/- 7211)
test new_delay_5m ... bench: 14495370 ns/iter (+/- 4008648)
test new_delay_5m_cap ... bench: 73127954 ns/iter (+/- 59908587)
test new_nodelay_4 ... bench: 83 ns/iter (+/- 2)
test new_nodelay_5m ... bench: 12527237 ns/iter (+/- 335243)
test std_delay_4 ... bench: 373095 ns/iter (+/- 12613)
test std_delay_4_cap ... bench: 374190 ns/iter (+/- 19611)
test std_delay_5m ... bench: 17356012 ns/iter (+/- 15906588)
test std_delay_5m_cap ... bench: 883555035 ns/iter (+/- 205559857)
test std_nodelay_4 ... bench: 144937 ns/iter (+/- 2448)
test std_nodelay_5m ... bench: 16095893 ns/iter (+/- 3315116)
test result: ok. 0 passed; 0 failed; 1 ignored; 12 measured
```
r? @alexcrichton
Avoid `old_io` and `os`, which are deprecated. Since there is no more `MemoryMap`, I used byte parsing instead to generate the second potential error.
Hello! I noticed that the email address in AUTHORS.txt doesn't point to my home email. Could somebody merge this patch to correct it please?
Thanks very much!
This isn't really possible to test in an automatic way, since the only traits
you can negative impl are `Send` and `Sync`, and the implementors page for
those only exists in libstd.
Closes #21310
This saves a bunch of time and will make distributions smaller, as well as
avoiding filling the implementors page with internal garbage. Turn it back on
with `--enable-compiler-docs` if you want compiler docs during development.
Crates behind the facade are only documented on nightly/dev builds (where they
can be used).
[breaking-change]
Closes #23772
Closes #21297
This was originally used to set up the guessing game, but that no longer
exists. This version uses `old_io`, and updating it involves talking
about `&mut` and such, which we haven't covered yet. So, for now, let's
just remove it.
Fixes #23760
with_end_to_cap is enormously expensive now that it's initializing
memory since it involves a 64k allocation + memset on every call. This is
most noticeable when calling read_to_end on very small readers, where the
new version is **4 orders of magnitude** faster.
BufReader also depended on with_end_to_cap so I've rewritten it in its
original form.
As a bonus, converted the buffered IO struct Debug impls to use the
debug builders.
Fixes #23815
The collections debug helpers no longer prefix output with the
collection name, in line with the current conventions for Debug
implementations. Implementations that want to preserve the current
behavior can simply add a `try!(write!(fmt, "TypeName "));` at the
beginning of the `fmt` method.
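For example, a hypothetical collection could keep the old prefixed output like this (illustrative only, not library code):

```rust
use std::fmt;

// Made-up collection type for illustration.
struct MyStack<T> {
    items: Vec<T>,
}

impl<T: fmt::Debug> fmt::Debug for MyStack<T> {
    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
        // Write the type-name prefix manually to preserve the old
        // output, then delegate to the debug builder as usual.
        try!(write!(fmt, "MyStack "));
        fmt.debug_list().entries(self.items.iter()).finish()
    }
}
```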
[breaking-change]
Port of pcwalton's removal of the `#[unsafe_destructor]` check.
Earlier commits impose rules on lifetimes that make generic destructors safe; thus we no longer need the `#[unsafe_destructor]` attribute nor its associated check.
----
So remove the check for the unsafe_destructor attribute.
And remove outdated compile-fail tests from when lifetime-parametric dtors were disallowed/unsafe.
In addition, when one uses the attribute without the associated feature, report that the attribute is deprecated.
However, I do not think this is a breaking-change, because the attribute and feature are still currently accepted by the compiler. (After the next snapshot that has this commit, we can remove the feature itself and the attribute as well.)
----
I consider this to:
Fix #22196
(technically there is still the post snapshot work of removing the last remnants of the feature and the attribute, but the ticket can still be closed in my opinion).
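To illustrate the change (example made up, not code from the PR), a generic type with a destructor like the following previously required the attribute and feature gate, and now compiles without them:

```rust
// Before this change, a Drop impl on a generic type needed
// #[unsafe_destructor] plus the corresponding feature gate; with the
// new lifetime rules it is accepted as-is.
struct Guard<T> {
    value: T,
}

impl<T> Drop for Guard<T> {
    fn drop(&mut self) {
        // cleanup for `self.value` would go here
    }
}

fn main() {
    let _guard = Guard { value: vec![1, 2, 3] };
    // `_guard` is dropped here, running the generic destructor
}
```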
Earlier commits impose rules on lifetimes that make generic
destructors safe; thus we no longer need the `#[unsafe_destructor]`
attribute nor its associated check.
----
So remove the check for the unsafe_destructor attribute.
And remove outdated compile-fail tests from when lifetime-parametric
dtors were disallowed/unsafe.
In addition, when one uses the attribute without the associated
feature, report that the attribute is deprecated.
However, I do not think this is a breaking-change, because the
attribute and feature are still currently accepted by the compiler.
(After the next snapshot that has this commit, we can remove the
feature itself and the attribute as well.)
----
I consider this to:
Fix #22196
(technically there is still the post snapshot work of removing the
last remnants of the feature and the attribute, but the ticket can
still be closed in my opinion).
curl's progress meter would otherwise interfere with sudo's password prompt.
In addition, add the -f flag to make sure 4xx status codes are treated as errors.
r? @brson
The Send bound is an unnecessary restriction, and though provided as a convenience, can't be removed by downstream code.
The removal of this bound is a [breaking-change] since it removes an implicit Send bound on all `E: Error` and all `Error` trait objects.
To migrate, consider if your code actually requires the Send bound and, if so, add it explicitly.
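For example (a hypothetical snippet, not from the PR), code that genuinely moves errors between threads can write the bound out:

```rust
use std::error::Error;
use std::thread;

// Hypothetical migration example: errors that cross threads now need
// an explicit `Send` bound instead of getting it from `Error` itself.
fn log_in_background(err: Box<Error + Send>) {
    thread::spawn(move || {
        println!("worker error: {}", err);
    });
}
```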
Fixes #23774
r? @aturon
Previously a panic was generated for recursive prints due to a double-borrow of
a `RefCell`. This was solved by the second borrow's output being directed
towards the global stdout instead of the per-thread stdout (still experimental
functionality).
After this functionality was altered, however, recursive prints still deadlocked
due to the overridden `write_fmt` method which locked itself first and then
wrote all the data. This was fixed by removing the override of the `write_fmt`
method. This means that unlocked usage of `write!` on a `Stdout`/`Stderr` may be
slower due to acquiring more locks, but it's easy to make more performant with a
call to `.lock()`.
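For example (an illustrative snippet), code that writes many small pieces can take the lock once up front:

```rust
use std::io::Write;

fn main() {
    let stdout = std::io::stdout();
    // Take the lock once instead of re-acquiring it for every write.
    let mut out = stdout.lock();
    for i in 0..10 {
        writeln!(out, "line {}", i).unwrap();
    }
}
```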
Closes #23781