Commit Graph

32 Commits

The 8472
963d6f757c add a benchmark for slice_iter.copied().array_chunks() 2022-10-17 23:40:21 +02:00
Tim Vermeulen
db2b4a3a7e Use internal iteration in Iterator::{cmp_by, partial_cmp_by, eq_by} 2022-08-21 12:23:10 +02:00
Eric Holk
c18f22058b Rename integer log* methods to ilog*
This reflects the consensus from the libs team as reported at
https://github.com/rust-lang/rust/issues/70887#issuecomment-1209513261

Co-authored-by: Yosh Wuyts <github@yosh.is>
2022-08-09 10:20:49 -07:00
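A quick usage sketch of the renamed methods on a toolchain where they are available (the values below are purely illustrative):

```rust
fn main() {
    // ilog2 / ilog10 / ilog are the post-rename names of the integer log methods.
    assert_eq!(8u32.ilog2(), 3);
    assert_eq!(100u32.ilog10(), 2);
    assert_eq!(81u32.ilog(3), 4);
}
```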
Nilstrieb
3358a41acb Add unicode fast path to is_printable
Before, it would enter the full, expensive check even for normal ASCII
characters. Now, it skips the check for ASCII characters in
`32..127`. This range was verified manually against the current behavior.
2022-05-31 10:51:35 +02:00
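A minimal sketch of this kind of ASCII fast path (illustrative only, not the actual `is_printable` code):

```rust
// Printable ASCII needs no table lookup, other ASCII is never printable,
// and everything else falls through to the full, expensive Unicode check.
fn printable_fast_path(c: char) -> Option<bool> {
    match c as u32 {
        32..=126 => Some(true),      // ' ' through '~'
        0..=31 | 127 => Some(false), // ASCII control characters
        _ => None,                   // defer to the full check
    }
}
```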
bors
12d3f107c1 Auto merge of #96626 - thomcc:rand-bump, r=m-ou-se
Avoid using `rand::thread_rng` in the stdlib benchmarks.

This is kind of an anti-pattern because it introduces extra nondeterminism for no real reason. In thread_rng's case this comes both from the random seed and from the reseeding operations it performs, which occasionally make syscalls (adding further nondeterminism). The impact would be pretty small in most cases, but it's a good practice to avoid (particularly because avoiding it was not hard).

Several of our benchmarks already did the right thing here, so the change was pretty easy and mostly just a matter of applying it more universally. That said, the stdlib benchmarks aren't particularly stable (nor is our benchmark framework particularly great), so arguably this doesn't matter much in practice.

~~Anyway, this also bumps the `rand` dev-dependency to 0.8, since it had fallen somewhat out of date.~~ Nevermind, too much of a headache.
2022-05-05 05:08:44 +00:00
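A hedged sketch of the replacement pattern: seed an explicit RNG instead of calling `thread_rng`, so benchmark inputs are reproducible (the seed and RNG choice below are illustrative, not necessarily what the benchmarks use):

```rust
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};

fn main() {
    // A fixed seed makes the generated benchmark data deterministic across runs,
    // unlike rand::thread_rng(), which seeds and reseeds itself from the OS.
    let mut rng = StdRng::seed_from_u64(42);
    let data: Vec<u32> = (0..1_000).map(|_| rng.gen()).collect();
    println!("{}", data.len());
}
```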
The 8472
e3db41bf97 add benchmark 2022-05-02 20:54:46 +02:00
Thom Chiovoloni
0812759840
Avoid use of rand::thread_rng in stdlib benchmarks 2022-05-02 00:08:21 -07:00
Eduardo Sánchez Muñoz
93ae6f80e3 Make some usize-typed masks definition agnostic to the size of usize
Some masks were defined as
```rust
const NONASCII_MASK: usize = 0x80808080_80808080u64 as usize;
```
where it was assumed that `usize` is never wider than 64 bits, which is currently true.

To make these constants valid on a hypothetical 128-bit target, they have been redefined in a `usize`-width-agnostic way:
```rust
const NONASCII_MASK: usize = usize::from_ne_bytes([0x80; size_of::<usize>()]);
```

There are already some cases where Rust anticipates the possibility of supporting 128-bit targets, such as not implementing `From<usize>` for `u64`.
2022-04-15 17:04:59 +02:00
T-O-R-U-S
72a25d05bf Use implicit capture syntax in format_args
This updates the standard library's documentation to use the new syntax. The
documentation is worthwhile to update, as it should be more idiomatic
(particularly for features like this, which are nice for users to get acquainted
with). The general codebase is likely more hassle than benefit to update: it would
hurt git blame, and updates can generally be done by folks updating the code if
(and when) that makes things more readable with the new format.

A few places in the compiler and library code are updated (mostly just due to
already having been done when this commit was first authored).
2022-03-10 10:23:40 -05:00
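For reference, the syntax the updated documentation now uses: format strings can capture identifiers from the enclosing scope directly.

```rust
fn main() {
    let name = "world";
    println!("Hello, {}!", name); // explicit argument (old style)
    println!("Hello, {name}!");   // implicit capture (new style used in the docs)
}
```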
Scott McMurray
8ca47d7ae4 Stop manually SIMDing in swap_nonoverlapping
Like I previously did for `reverse`, this leaves it to LLVM to pick how to vectorize it, since it can know better the chunk size to use, compared to the "32 bytes always" approach we currently have.

It does still need logic to type-erase where appropriate, though, as while LLVM is now smart enough to vectorize over slices of things like `[u8; 4]`, it fails to do so over slices of `[u8; 3]`.

As a bonus, this also means one no longer gets the spurious `memcpy`(s?) at the end of swapping a slice of `__m256`s: <https://rust.godbolt.org/z/joofr4v8Y>
2022-02-21 00:54:02 -08:00
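An illustrative, safe sketch of the general idea (not the actual `swap_nonoverlapping` code): swap element by element and let LLVM pick the vector width, rather than hand-rolling fixed 32-byte blocks.

```rust
// The autovectorizer is free to choose a chunk size suited to the target.
fn swap_slices<T>(a: &mut [T], b: &mut [T]) {
    assert_eq!(a.len(), b.len());
    for (x, y) in a.iter_mut().zip(b.iter_mut()) {
        core::mem::swap(x, y);
    }
}
```

The safe slice API for this already exists as `slice::swap_with_slice`; the commit concerns the raw-pointer `ptr::swap_nonoverlapping` implementation.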
Thom Chiovoloni
ebbccaf6bf
Respond to review feedback, and improve implementation somewhat 2022-02-05 11:15:18 -08:00
Thom Chiovoloni
ed01324835
Fix zh::SMALL string in core::str benchmarks 2022-02-05 11:15:17 -08:00
Thom Chiovoloni
628b217326
Optimize core::str::Chars::count 2022-02-05 11:15:17 -08:00
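The commit above carries no description; as a hedged sketch, one common way to speed up counting chars in valid UTF-8 is to count non-continuation bytes:

```rust
// Every char in valid UTF-8 starts with exactly one non-continuation byte
// (continuation bytes have the form 0b10xxxxxx), so counting those bytes
// counts the chars.
fn char_count(s: &str) -> usize {
    s.as_bytes().iter().filter(|&&b| (b as i8) >= -0x40).count()
}
```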
bors
ffdf18d144 Auto merge of #88788 - falk-hueffner:speedup-int-log10-branchless, r=joshtriplett
Speedup int log10 branchless

This is achieved with a branchless bit-twiddling implementation of the case x < 100_000, which is then used as a building block.

Benchmark on an Intel i7-8700K (Coffee Lake):

```
name                                   old ns/iter  new ns/iter  diff ns/iter   diff %  speedup
num::int_log::u8_log10_predictable     165          169                     4    2.42%   x 0.98
num::int_log::u8_log10_random          438          423                   -15   -3.42%   x 1.04
num::int_log::u8_log10_random_small    438          423                   -15   -3.42%   x 1.04
num::int_log::u16_log10_predictable    633          417                  -216  -34.12%   x 1.52
num::int_log::u16_log10_random         908          471                  -437  -48.13%   x 1.93
num::int_log::u16_log10_random_small   945          471                  -474  -50.16%   x 2.01
num::int_log::u32_log10_predictable    1,496        1,340                -156  -10.43%   x 1.12
num::int_log::u32_log10_random         1,076        873                  -203  -18.87%   x 1.23
num::int_log::u32_log10_random_small   1,145        874                  -271  -23.67%   x 1.31
num::int_log::u64_log10_predictable    4,005        3,171                -834  -20.82%   x 1.26
num::int_log::u64_log10_random         1,247        1,021                -226  -18.12%   x 1.22
num::int_log::u64_log10_random_small   1,265        921                  -344  -27.19%   x 1.37
num::int_log::u128_log10_predictable   39,667       39,579                -88   -0.22%   x 1.00
num::int_log::u128_log10_random        6,456        6,696                 240    3.72%   x 0.96
num::int_log::u128_log10_random_small  4,108        3,903                -205   -4.99%   x 1.05
```

Benchmark on an M1 Mac Mini:

```
name                                   old ns/iter  new ns/iter  diff ns/iter   diff %  speedup
num::int_log::u8_log10_predictable     143          130                   -13   -9.09%   x 1.10
num::int_log::u8_log10_random          375          325                   -50  -13.33%   x 1.15
num::int_log::u8_log10_random_small    376          325                   -51  -13.56%   x 1.16
num::int_log::u16_log10_predictable    500          322                  -178  -35.60%   x 1.55
num::int_log::u16_log10_random         794          405                  -389  -48.99%   x 1.96
num::int_log::u16_log10_random_small   1,035        405                  -630  -60.87%   x 2.56
num::int_log::u32_log10_predictable    1,144        894                  -250  -21.85%   x 1.28
num::int_log::u32_log10_random         832          786                   -46   -5.53%   x 1.06
num::int_log::u32_log10_random_small   832          787                   -45   -5.41%   x 1.06
num::int_log::u64_log10_predictable    2,681        2,057                -624  -23.27%   x 1.30
num::int_log::u64_log10_random         1,015        806                  -209  -20.59%   x 1.26
num::int_log::u64_log10_random_small   1,004        795                  -209  -20.82%   x 1.26
num::int_log::u128_log10_predictable   56,825       56,526               -299   -0.53%   x 1.01
num::int_log::u128_log10_random        9,056        8,861                -195   -2.15%   x 1.02
num::int_log::u128_log10_random_small  1,528        1,527                  -1   -0.07%   x 1.00
```

The 128-bit case remains ridiculously slow because LLVM fails to optimize division by a constant 128-bit value into multiplications. This could be worked around, but it seems preferable to fix this in LLVM.

From u32 up, table lookup (as suggested [here](https://github.com/rust-lang/rust/issues/70887#issuecomment-881099813)) is still faster, but requires a hardware `leading_zeros` to be viable, and might clog up the cache.
2021-10-12 03:18:54 +00:00
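For the `x < 100_000` building block mentioned above, a much simpler (and hedged) branchless sketch is to sum comparisons against powers of ten; the PR itself uses a bit-twiddling variant of the same idea.

```rust
// Each comparison lowers to a flag/select rather than a conditional jump;
// valid for 1 <= x < 100_000.
fn log10_lt_100000(x: u32) -> u32 {
    debug_assert!(x >= 1 && x < 100_000);
    (x >= 10) as u32 + (x >= 100) as u32 + (x >= 1_000) as u32 + (x >= 10_000) as u32
}
```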
The8472
4c44f061d8 benchmark for str.chars().count() 2021-09-11 00:25:41 +02:00
Falk Hüffner
57c623570a Cosmetic fixes. 2021-09-09 20:06:46 +02:00
Falk Hüffner
0c26a3bc0c Add benchmark for integer log10. 2021-09-06 12:19:24 +02:00
Smitty
bdfcb88e8b Use HTTPS links where possible 2021-06-23 16:26:46 -04:00
Ralf Jung
23d54ad96f move core::hint::black_box under its own feature gate 2021-04-25 11:08:12 +02:00
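For context, a minimal usage sketch of `black_box` in benchmark-style code (shown here via `std::hint::black_box`, which is stable on current toolchains; at the time of this commit it sat behind the feature gate mentioned above):

```rust
use std::hint::black_box;

fn main() {
    // black_box hides its argument from the optimizer, so the computation
    // below cannot be constant-folded or removed as dead code.
    let x = black_box(1234u64);
    black_box(x.wrapping_mul(3));
}
```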
The8472
d7fdd90655 add bench 2021-03-18 00:25:20 +01:00
bors
0ce0fedb67 Auto merge of #81358 - mcastorina:to-upper-lower-speed, r=joshtriplett
Add a check for ASCII characters in to_upper and to_lower

This extra check has better performance. See discussion here:
https://internals.rust-lang.org/t/to-upper-speed/13896

Thanks to `@gilescope` for helping discover and test this.
2021-03-17 11:17:18 +00:00
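A minimal sketch of the fast-path shape described above (illustrative, not the actual std code):

```rust
// ASCII case conversion is a cheap bit operation; only non-ASCII characters
// need the full Unicode case-mapping tables. Simplified here to a single
// char, whereas the real to_uppercase yields an iterator.
fn to_upper_first(c: char) -> char {
    if c.is_ascii() {
        c.to_ascii_uppercase()
    } else {
        c.to_uppercase().next().unwrap_or(c)
    }
}
```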
bors
caca2121ff Auto merge of #74024 - Folyd:master, r=m-ou-se
Improve slice.binary_search_by()'s best-case performance to O(1)

This PR aimed to improve the [slice.binary_search_by()](https://doc.rust-lang.org/std/primitive.slice.html#method.binary_search_by)'s best-case performance to O(1).

# Noticed

I don't know why the docs of `binary_search_by` say `"If there are multiple matches, then any one of the matches could be returned."`, because the implementation doesn't do that. Actually, it returns the **last one** if multiple matches are found.

Then we got two options:

## If returning the last one is the correct or desired result

Then I can rectify the docs and revert my changes.

## If the docs describe the correct or desired result

Then my changes can be merged after fully reviewed.

However, if my PR gets merged, another issue arises: this could be a **breaking change**, since if multiple matches are found, the returned index is no longer always the last one; it could be any one of them.

For example:
```rust
let mut s = vec![0, 1, 1, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55];
let num = 1;
let idx = s.binary_search(&num).unwrap();
s.insert(idx, 2);

// Old implementation: always returns the last match (index 4 here)
assert_eq!(s, [0, 1, 1, 1, 2, 1, 2, 3, 5, 8, 13, 21, 34, 55]);

// New implementation: may return any match (index 3 here)
assert_eq!(s, [0, 1, 1, 2, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]);
```

# Benchmarking

**Old implementations**
```sh
$ ./x.py bench --stage 1 library/libcore
test slice::binary_search_l1           ... bench:          59 ns/iter (+/- 4)
test slice::binary_search_l1_with_dups ... bench:          59 ns/iter (+/- 3)
test slice::binary_search_l2           ... bench:          76 ns/iter (+/- 5)
test slice::binary_search_l2_with_dups ... bench:          77 ns/iter (+/- 17)
test slice::binary_search_l3           ... bench:         183 ns/iter (+/- 23)
test slice::binary_search_l3_with_dups ... bench:         185 ns/iter (+/- 19)
```

**New implementations (1)**

Implemented by this PR.
```rust
if cmp == Equal {
    return Ok(mid);
} else if cmp == Less {
    base = mid
}
```
```sh
$ ./x.py bench --stage 1 library/libcore
test slice::binary_search_l1           ... bench:          58 ns/iter (+/- 2)
test slice::binary_search_l1_with_dups ... bench:          37 ns/iter (+/- 4)
test slice::binary_search_l2           ... bench:          76 ns/iter (+/- 3)
test slice::binary_search_l2_with_dups ... bench:          57 ns/iter (+/- 6)
test slice::binary_search_l3           ... bench:         200 ns/iter (+/- 30)
test slice::binary_search_l3_with_dups ... bench:         157 ns/iter (+/- 6)

$ ./x.py bench --stage 1 library/libcore
test slice::binary_search_l1           ... bench:          59 ns/iter (+/- 8)
test slice::binary_search_l1_with_dups ... bench:          37 ns/iter (+/- 2)
test slice::binary_search_l2           ... bench:          77 ns/iter (+/- 2)
test slice::binary_search_l2_with_dups ... bench:          57 ns/iter (+/- 2)
test slice::binary_search_l3           ... bench:         198 ns/iter (+/- 21)
test slice::binary_search_l3_with_dups ... bench:         158 ns/iter (+/- 11)

```

**New implementations (2)**

Suggested by `@nbdd0121` in [comment](https://github.com/rust-lang/rust/pull/74024#issuecomment-665430239).
```rust
base = if cmp == Greater { base } else { mid };
if cmp == Equal { break }
```

```sh
$ ./x.py bench --stage 1 library/libcore
test slice::binary_search_l1           ... bench:          59 ns/iter (+/- 7)
test slice::binary_search_l1_with_dups ... bench:          37 ns/iter (+/- 5)
test slice::binary_search_l2           ... bench:          75 ns/iter (+/- 3)
test slice::binary_search_l2_with_dups ... bench:          56 ns/iter (+/- 3)
test slice::binary_search_l3           ... bench:         195 ns/iter (+/- 15)
test slice::binary_search_l3_with_dups ... bench:         151 ns/iter (+/- 7)

$ ./x.py bench --stage 1 library/libcore
test slice::binary_search_l1           ... bench:          57 ns/iter (+/- 2)
test slice::binary_search_l1_with_dups ... bench:          38 ns/iter (+/- 2)
test slice::binary_search_l2           ... bench:          77 ns/iter (+/- 11)
test slice::binary_search_l2_with_dups ... bench:          57 ns/iter (+/- 4)
test slice::binary_search_l3           ... bench:         194 ns/iter (+/- 15)
test slice::binary_search_l3_with_dups ... bench:         151 ns/iter (+/- 18)

```

I ran some benchmarks against the two implementations. The new implementation shows a large improvement in the duplicate cases, while in the `binary_search_l3` case it is slightly slower than the old one.
2021-03-05 20:12:13 +00:00
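A self-contained sketch of the early-return idea from this PR (an assumed shape, not the exact merged std code): returning as soon as the probe compares `Equal` makes the best case O(1), at the cost of returning an arbitrary match when duplicates are present.

```rust
use std::cmp::Ordering;

fn binary_search_early_return<T: Ord>(s: &[T], x: &T) -> Result<usize, usize> {
    let mut left = 0;
    let mut right = s.len();
    while left < right {
        let mid = left + (right - left) / 2;
        match s[mid].cmp(x) {
            Ordering::Less => left = mid + 1,
            Ordering::Greater => right = mid,
            Ordering::Equal => return Ok(mid), // early exit: O(1) best case
        }
    }
    Err(left)
}
```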
Miccah Castorina
229fdf839a Add two more benchmarks for strictly ASCII and non ASCII cases 2021-02-26 11:42:59 -06:00
Miccah Castorina
8acb566507 Add to_lowercase and to_uppercase char benchmarks 2021-02-26 11:41:37 -06:00
Giles Cope
cadcf5ed99
Unify way to flip 6th bit. (Same assembly generated) 2021-02-08 12:21:36 +00:00
Giles Cope
f165f49d22
Slight perf improvement on char::to_ascii_lowercase 2021-02-06 19:56:43 +00:00
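A sketch of the bit trick the two commits above refer to: ASCII upper- and lowercase letters differ only in the 6th bit (0x20), so lowercasing can be a branch-free OR of that mask.

```rust
const ASCII_CASE_MASK: u8 = 0b0010_0000; // the "6th bit"

// Illustrative only: set the case bit exactly when the byte is an ASCII
// uppercase letter, leaving every other byte unchanged.
fn ascii_lowercase(b: u8) -> u8 {
    b | (b.is_ascii_uppercase() as u8 * ASCII_CASE_MASK)
}
```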
Folyd
18e44a1be4 Improve slice.binary_search_by()'s best-case performance to O(1) 2021-01-30 14:11:47 +08:00
kadmin
e5094a2851 Add more benchmarks 2021-01-08 09:50:35 +00:00
est31
a687420d17 Remove redundant 'static from library crates 2020-10-18 17:25:51 +02:00
kadmin
3f1d2aadd1 Use more efficient scheme for display u128/i128
Add zero padding

Add benchmarks for fmt u128

This tests both the case with the maximum amount of work (all characters used)
and the case with the least amount of work (1 character used).
2020-09-28 20:38:38 +00:00
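A hedged sketch of the two benchmark cases described above (the benchmark names here are illustrative, not the actual ones added):

```rust
#![feature(test)]
extern crate test;
use test::{black_box, Bencher};

// Maximum work: all 39 decimal digits of u128::MAX are produced.
#[bench]
fn fmt_u128_max_work(b: &mut Bencher) {
    b.iter(|| black_box(u128::MAX).to_string());
}

// Minimum work: a single digit.
#[bench]
fn fmt_u128_min_work(b: &mut Bencher) {
    b.iter(|| black_box(1u128).to_string());
}
```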
Ralf Jung
56129d39c0 flt2dec: properly handle uninitialized memory 2020-09-02 12:41:38 +02:00
mark
2c31b45ae8 mv std libs to library/ 2020-07-27 19:51:13 -05:00