Add tables of small powers of ten used in the fast path. The tables are redundant: we could also use the big, more accurate table and round the value to the correct type (in fact, we did just that before this commit). However, the rounding is extra work and slows down the fast path.
Because only very small exponents enter the fast path, the tables, and thus the space overhead, are negligible. Speed-wise, this is a clear win on a [benchmark] comparing the fast path to a naive, hand-optimized, inaccurate algorithm. Specifically, this change narrows the gap from a roughly 5x difference to a roughly 3.4x difference.
[benchmark]: https://gist.github.com/Veedrac/dbb0c07994bc7882098e
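For illustration, this is roughly the shape of such a fast path. The names, table size, and range checks below are made up for the example; the real tables in core are somewhat larger:
```
// A minimal sketch, not the actual library code. f64 represents small
// powers of ten exactly, so the single multiplication or division below
// is correctly rounded.
const POW10_F64: [f64; 9] = [1e0, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8];

/// Try to compute `mantissa * 10^exp10` in one exactly-rounded step;
/// return None when the inputs are out of range for the fast path.
fn fast_path(mantissa: u64, exp10: i32) -> Option<f64> {
    // The mantissa must fit in f64's 53 significand bits so that the
    // integer-to-float conversion is exact.
    if mantissa >= (1u64 << 53) {
        return None;
    }
    match exp10 {
        0..=8 => Some(mantissa as f64 * POW10_F64[exp10 as usize]),
        -8..=-1 => Some(mantissa as f64 / POW10_F64[(-exp10) as usize]),
        _ => None,
    }
}

fn main() {
    // "1234e-2": mantissa 1234, decimal exponent -2.
    assert_eq!(fast_path(1234, -2), Some(12.34));
}
```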
This speeds up the ASCII case (and long stretches of ASCII in otherwise
mixed UTF-8 data) when checking UTF-8 validity.
Benchmark results suggest that on purely ASCII input, we can improve
throughput (megabytes verified / second) by a factor of 13 to 14!
On XML and mostly English-language input (an en.wikipedia XML dump),
throughput increases by a factor of 7.
On mostly non-ASCII input, performance increases slightly or is the
same.
The UTF-8 validation is rewritten to use indexed access; since every
access is preceded by a length check (mandatory for validation anyway),
the bounds checks are statically elided by LLVM, and this formulation is
in fact the best for performance. A previous version had losses due to
slice-to-iterator conversions.
Large credit goes to Björn Steinbrink, who improved this patch immensely
by writing this second version.
Benchmark results on x86-64 (Sandy Bridge) compiled with -C opt-level=3.
Old code is `regular`, this PR is called `fast`.
Datasets:
- `ascii` is plain ASCII (2.5 kB)
- `cyr` is Cyrillic script with ASCII spaces (5 kB)
- `dewik10` is 10 MB of a de.wikipedia XML dump
- `enwik8` is 100 MB of an en.wikipedia XML dump
- `jawik10` is 10 MB of a ja.wikipedia XML dump
```
test from_utf8_ascii_fast ... bench: 140 ns/iter (+/- 4) = 18221 MB/s
test from_utf8_ascii_regular ... bench: 1,932 ns/iter (+/- 19) = 1320 MB/s
test from_utf8_cyr_fast ... bench: 10,025 ns/iter (+/- 245) = 511 MB/s
test from_utf8_cyr_regular ... bench: 12,250 ns/iter (+/- 437) = 418 MB/s
test from_utf8_dewik10_fast ... bench: 6,017,909 ns/iter (+/- 105,755) = 1740 MB/s
test from_utf8_dewik10_regular ... bench: 11,669,493 ns/iter (+/- 264,045) = 891 MB/s
test from_utf8_enwik8_fast ... bench: 14,085,692 ns/iter (+/- 1,643,316) = 7000 MB/s
test from_utf8_enwik8_regular ... bench: 93,657,410 ns/iter (+/- 5,353,353) = 1000 MB/s
test from_utf8_jawik10_fast ... bench: 29,154,073 ns/iter (+/- 4,659,534) = 340 MB/s
test from_utf8_jawik10_regular ... bench: 29,112,917 ns/iter (+/- 2,475,123) = 340 MB/s
```
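As a rough sketch of the indexed-access formulation (simplified; the real validator in core also rejects overlong encodings and surrogates, and skips ASCII runs in word-sized chunks rather than byte by byte):
```
fn validate(bytes: &[u8]) -> bool {
    let len = bytes.len();
    let mut i = 0;
    while i < len {
        if bytes[i] < 128 {
            // ASCII fast path: consume the whole ASCII run in a tight loop.
            // Every access is guarded by `i < len`, so LLVM can elide the
            // bounds check on `bytes[i]`.
            while i < len && bytes[i] < 128 {
                i += 1;
            }
        } else {
            // Multi-byte sequence: determine the width from the first byte.
            let width = match bytes[i] {
                0xC2..=0xDF => 2,
                0xE0..=0xEF => 3,
                0xF0..=0xF4 => 4,
                _ => return false,
            };
            if i + width > len {
                return false;
            }
            // Continuation bytes must have the form 0b10xx_xxxx.
            for j in 1..width {
                if bytes[i + j] & 0xC0 != 0x80 {
                    return false;
                }
            }
            i += width;
        }
    }
    true
}

fn main() {
    assert!(validate("hello, wörld".as_bytes()));
    assert!(!validate(&[0xE2, 0x28, 0xA1])); // bad continuation byte
}
```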
Co-authored-by: Björn Steinbrink <bsteinbr@gmail.com>
Commit 8d90d3f368 removed `BufStream`, the only consumer of
`InternalBufWriter`. As implied by the name, this type is private, so it
is now dead code.
In particular, bring back the `zero` flag for `lvalue_scratch_datum`,
which controls whether the allocas created immediately at function
start are uninitialized at that point or have their embedded
drop-flags initialized to "dropped".
Then make `to_lvalue_datum_in_scope` pass "dropped" as the `zero` flag.
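Conceptually, the drop flag plays the role that `Option` plays in this hand-written analogue (illustrative only; trans emits this bookkeeping in generated code):
```
struct Scratch<T> {
    // None plays the role of the "dropped" drop-flag state.
    slot: Option<T>,
}

impl<T> Scratch<T> {
    // zero = "dropped": the slot starts out marked as holding no live
    // value, so dropping it before anything is stored runs no destructor.
    fn new_zeroed() -> Scratch<T> {
        Scratch { slot: None }
    }

    fn store(&mut self, value: T) {
        self.slot = Some(value);
    }

    fn is_live(&self) -> bool {
        self.slot.is_some()
    }
}

fn main() {
    let mut s: Scratch<String> = Scratch::new_zeroed();
    if std::env::args().count() > 1 {
        s.store("hello".to_string());
    }
    // Dropping `s` is safe on both paths: the flag tells the destructor
    // whether there is a live value to drop.
    println!("live: {}", s.is_live());
}
```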
I also re-enabled the use of `#[thread_local]` on AArch64. It was originally disabled in the PR that introduced AArch64 (#19790), but the reasons for this were not explained. `#[thread_local]` seems to work fine in my tests on AArch64, so I don't think this should be an issue.
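For reference, a minimal nightly-only sketch of the kind of test I ran (not code from this PR):
```
// Requires a nightly compiler with the unstable `thread_local` feature.
#![feature(thread_local)]

#[thread_local]
static mut COUNTER: u32 = 0;

fn main() {
    // Each thread gets its own COUNTER, so these writes never race.
    unsafe {
        COUNTER += 1;
        let v = COUNTER;
        assert_eq!(v, 1);
    }

    std::thread::spawn(|| unsafe {
        // A fresh thread sees a fresh, zero-initialized copy.
        let v = COUNTER;
        assert_eq!(v, 0);
    })
    .join()
    .unwrap();
}
```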
cc @alexcrichton @akiss77
The `siginfo_si_addr()` function is used once, and the returned value is
cast to `usize`, so make the function return a `usize` directly.
This also simplifies the OpenBSD case, where the return type would
otherwise be a `*mut libc::c_char` instead of a `*mut libc::c_void`.
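The shape of the change, sketched with a stand-in type (illustrative only; the real code reads the field out of `libc::siginfo_t`):
```
// Illustrative only: a stand-in for the platform-specific siginfo type.
struct SigInfo {
    si_addr: *mut u8,
}

// Returning usize means callers no longer care whether the platform
// declares the field as *mut c_void (most platforms) or *mut c_char
// (OpenBSD).
fn siginfo_si_addr(info: &SigInfo) -> usize {
    info.si_addr as usize
}

fn main() {
    let info = SigInfo { si_addr: 0x1000 as *mut u8 };
    assert_eq!(siginfo_si_addr(&info), 0x1000);
}
```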
This commit migrates all of the methods on `num::wrapping::OverflowingOps` onto
inherent methods of the integer types. This also fills some gaps in the
saturating and checked departments, such as:
* `saturating_mul`
* `checked_{neg,rem,shl,shr}`
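A quick illustration of what these inherent methods do:
```
fn main() {
    // saturating_mul clamps at the numeric bounds instead of wrapping.
    assert_eq!(100i8.saturating_mul(2), 127);

    // The checked_* variants return None instead of panicking or wrapping.
    assert_eq!(i32::MIN.checked_neg(), None); // negating i32::MIN overflows
    assert_eq!(10i32.checked_rem(0), None);   // remainder by zero
    assert_eq!(1u32.checked_shl(40), None);   // shift amount >= bit width
}
```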
This is done in preparation for stabilization,
cc #27755
Get rid of the nasty `unit_ty` temporary variable, which was created just because it might be handy to have one around; in reality it isn't really that useful at all.
r? @nikomatsakis
Fixes https://github.com/rust-lang/rust/issues/30637
This mixes additional information into the hash that is
passed to `-C extra-filename`. It can be used to further distinguish
the standard libraries if they must be installed next to each
other.
Closes #29559
Frankly, I'm not sure if this solves a real problem. It's meant to help with side-by-side and overlapping installations where there are two sets of libs in /usr, but there are other potential issues there as well, including that some of our artifacts don't use this extra-filename munging, and it's not something our installers can support at all.
cc @jauhien Do you still think this helps the Gentoo case?
Previously it was returning a clone, mostly for two reasons:
* Cloning an `Lvalue` is very cheap most of the time (i.e. when the `Lvalue` is not a `Projection`);
* Some users want `&mut Lvalue` and others want `&Lvalue`. Returning a value makes either one easy to
obtain when pattern matching (i.e. `Some(ref dest)` or `Some(ref mut dest)`).
However, I'm now convinced this is an invalid approach. Namely, users who want a mutable
reference may modify the `Lvalue` in place, but the changes won't be reflected in the final MIR,
since the `Lvalue` they modified is merely a clone.
Instead, we now have two accessors, `destination` and `destination_mut`, which return a reference to
the destination in the desired mode.
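A minimal sketch of the pitfall and the fix, using illustrative stand-in types rather than the real MIR ones:
```
#[derive(Clone, Debug, PartialEq)]
struct Lvalue(u32);

struct Call {
    destination: Option<Lvalue>,
}

impl Call {
    // Old style: returns a clone, so in-place edits are silently lost.
    fn destination_cloned(&self) -> Option<Lvalue> {
        self.destination.clone()
    }

    // New style: hand out real references, immutable or mutable.
    fn destination(&self) -> Option<&Lvalue> {
        self.destination.as_ref()
    }

    fn destination_mut(&mut self) -> Option<&mut Lvalue> {
        self.destination.as_mut()
    }
}

fn main() {
    let mut call = Call { destination: Some(Lvalue(0)) };

    // Mutating the clone does not change the Call at all...
    if let Some(mut dest) = call.destination_cloned() {
        dest.0 = 1; // mutates only the clone
        assert_eq!(dest, Lvalue(1));
    }
    assert_eq!(call.destination(), Some(&Lvalue(0)));

    // ...while mutating through destination_mut() is reflected.
    if let Some(dest) = call.destination_mut() {
        dest.0 = 1;
    }
    assert_eq!(call.destination(), Some(&Lvalue(1)));
}
```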
r? @nikomatsakis
These should probably be submitted upstream. They're inevitably going to
complicate merges, and since they're non-functional changes, carrying
them ourselves just isn't worth our time.
BinaryHeap: Use full sift down in .pop()
.sift_down can either choose to compare the element on the way down (and
place it during descent), or to sift down an element fully, then sift
back up to place it.
A previous PR changed .sift_down() to the former behavior, which is much
faster for relatively small heaps and for elements that are cheap to
compare.
A benchmarking run suggested that BinaryHeap::pop() suffers
disproportionately from this, and that it should use the second strategy
instead. This makes sense: since .pop() brings the last element of the
heapified vector into index 0, it is very likely that this element will
end up at the bottom again.
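A simplified sketch of the second strategy on a max-heap stored in a `Vec` (illustrative; not the actual std implementation):
```
// "Sift down fully, then sift back up": descend without comparing against
// the moved element, then fix up its position from below.
fn sift_down_to_bottom(heap: &mut [i32], mut pos: usize) {
    let end = heap.len();
    let start = pos;

    // Phase 1: walk the element all the way down along the path of larger
    // children, without comparing it against them.
    let mut child = 2 * pos + 1;
    while child < end {
        if child + 1 < end && heap[child] < heap[child + 1] {
            child += 1;
        }
        heap.swap(pos, child);
        pos = child;
        child = 2 * pos + 1;
    }

    // Phase 2: sift the element back up to its proper place. Since it came
    // from the bottom of the heap, this usually terminates almost at once.
    while pos > start {
        let parent = (pos - 1) / 2;
        if heap[pos] <= heap[parent] {
            break;
        }
        heap.swap(pos, parent);
        pos = parent;
    }
}

fn pop(heap: &mut Vec<i32>) -> Option<i32> {
    let last = heap.pop()?;
    if heap.is_empty() {
        return Some(last);
    }
    // Move the former last element to the root, then restore the heap.
    let top = std::mem::replace(&mut heap[0], last);
    sift_down_to_bottom(heap, 0);
    Some(top)
}

fn main() {
    let mut heap = vec![9, 5, 8, 1, 2, 7]; // already a valid max-heap
    assert_eq!(pop(&mut heap), Some(9));
    assert_eq!(pop(&mut heap), Some(8));
}
```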
Closes #29969
Previous PR #29811
This is an alternative to https://github.com/rust-lang/rust/pull/29954 for fixing #29857 that seems to me to be more in line with the general strategy around `TyError`. It also includes the fix for #30589 -- in fact, just the minimal change of making `ty_is_local` tolerate `TyError` avoids the ICE, but you get a lot of duplicate error reports, so in the case where the impl's trait reference already includes `TyError`, we just ignore the impl altogether.
cc @arielb1 @sanxiyn
Fixes #29857.
Fixes #30589.
Downgrade unit struct match via `S(..)` errors to warnings
The error signalling was introduced in #29383
It was noted as a warning-cycle-less regression in #30379.
Fixes #30379.
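For context, a minimal example of the pattern in question (`S(..)` is the tuple-struct form that was erroneously accepted on unit structs):
```
struct S;

fn main() {
    let s = S;
    // A unit struct is matched with a bare path pattern; `S(..)` is the
    // tuple-struct form that used to be accepted and is being phased out.
    match s {
        S => {}
    }
}
```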