Add proc_macro::Span::{before, after}.
This adds `proc_macro::Span::before()` and `proc_macro::Span::after()`, which return a zero-width span at the start or end of the span.
These are equivalent to rustc's `Span::shrink_to_lo()` and `Span::shrink_to_hi()`, but with less cryptic names. They are useful when generating diagnostics like "missing \<thing\> after \<thing\>".
E.g.
```rust
syn::Error::new(ident.span().after(), "missing `:` after field name").into_compile_error()
```
I originally added this bound in an attempt to make the specialization sound for owning iterators, but it was never correct here; the correct and already-implemented solution is to place it on the `IntoIter` implementation.
This is achieved with a branchless bit-twiddling implementation of the case `x < 100_000`, and using this as a building block for the larger widths.
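For illustration, here is a minimal sketch of one branchless way to handle the `x < 100_000` case. This is the comparison-summing variant of the trick, not the exact bit-twiddling constants the actual implementation uses: each comparison against a power of ten compiles to a flag-setting instruction rather than a branch, and the boolean results are summed.

```rust
/// Sketch: branchless log10 for nonzero x < 100_000.
/// Each `(x >= 10^k) as u32` materializes as 0 or 1 without a branch,
/// so the sum counts how many powers of ten fit below x.
fn log10_lt_100_000(x: u32) -> u32 {
    debug_assert!(x != 0 && x < 100_000);
    (x >= 10) as u32 + (x >= 100) as u32 + (x >= 1_000) as u32 + (x >= 10_000) as u32
}
```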
Benchmark on an Intel i7-8700K (Coffee Lake):
```
name                                   old ns/iter  new ns/iter  diff ns/iter  diff %  speedup
num::int_log::u8_log10_predictable             165          169             4   2.42%   x 0.98
num::int_log::u8_log10_random                  438          423           -15  -3.42%   x 1.04
num::int_log::u8_log10_random_small            438          423           -15  -3.42%   x 1.04
num::int_log::u16_log10_predictable            633          417          -216 -34.12%   x 1.52
num::int_log::u16_log10_random                 908          471          -437 -48.13%   x 1.93
num::int_log::u16_log10_random_small           945          471          -474 -50.16%   x 2.01
num::int_log::u32_log10_predictable          1,496        1,340          -156 -10.43%   x 1.12
num::int_log::u32_log10_random               1,076          873          -203 -18.87%   x 1.23
num::int_log::u32_log10_random_small         1,145          874          -271 -23.67%   x 1.31
num::int_log::u64_log10_predictable          4,005        3,171          -834 -20.82%   x 1.26
num::int_log::u64_log10_random               1,247        1,021          -226 -18.12%   x 1.22
num::int_log::u64_log10_random_small         1,265          921          -344 -27.19%   x 1.37
num::int_log::u128_log10_predictable        39,667       39,579           -88  -0.22%   x 1.00
num::int_log::u128_log10_random              6,456        6,696           240   3.72%   x 0.96
num::int_log::u128_log10_random_small        4,108        3,903          -205  -4.99%   x 1.05
```
Benchmark on an M1 Mac Mini:
```
name                                   old ns/iter  new ns/iter  diff ns/iter  diff %  speedup
num::int_log::u8_log10_predictable             143          130           -13  -9.09%   x 1.10
num::int_log::u8_log10_random                  375          325           -50 -13.33%   x 1.15
num::int_log::u8_log10_random_small            376          325           -51 -13.56%   x 1.16
num::int_log::u16_log10_predictable            500          322          -178 -35.60%   x 1.55
num::int_log::u16_log10_random                 794          405          -389 -48.99%   x 1.96
num::int_log::u16_log10_random_small         1,035          405          -630 -60.87%   x 2.56
num::int_log::u32_log10_predictable          1,144          894          -250 -21.85%   x 1.28
num::int_log::u32_log10_random                 832          786           -46  -5.53%   x 1.06
num::int_log::u32_log10_random_small           832          787           -45  -5.41%   x 1.06
num::int_log::u64_log10_predictable          2,681        2,057          -624 -23.27%   x 1.30
num::int_log::u64_log10_random               1,015          806          -209 -20.59%   x 1.26
num::int_log::u64_log10_random_small         1,004          795          -209 -20.82%   x 1.26
num::int_log::u128_log10_predictable        56,825       56,526          -299  -0.53%   x 1.01
num::int_log::u128_log10_random              9,056        8,861          -195  -2.15%   x 1.02
num::int_log::u128_log10_random_small        1,528        1,527            -1  -0.07%   x 1.00
```
The 128-bit case remains ridiculously slow because LLVM fails to optimize division by a constant 128-bit value into multiplications. This could be worked around, but it seems preferable to fix this in LLVM.
From u32 up, a table lookup (as suggested in
https://github.com/rust-lang/rust/issues/70887#issuecomment-881099813) is still
faster, but it requires a hardware leading-zeros instruction to be viable and might clog up the cache.
Correct “copies” to “moves” in `<Option<T> as From<T>>::from` doc, and other copyediting
The `impl<T> From<T> for Option<T>` has no `Copy` or `Clone` bound, so its operation is guaranteed to be a move. The call site might copy, but the function itself cannot.
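A small illustration of the guarantee (ordinary `Option::from` usage, not code from the PR):

```rust
fn main() {
    let s = String::from("hello"); // String is not Copy
    let opt = Option::from(s);     // `s` is moved into the Option, not copied
    // println!("{s}");            // error[E0382]: borrow of moved value: `s`
    assert_eq!(opt, Some(String::from("hello")));
}
```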
Since that would have been a rather small PR, I also reviewed the other documentation in the file and made other improvements (in separate commits): adding periods and commas, linking `Deref::Target`, and clarifying what "a container" is in `FromIterator`.
More symbolic doc aliases
A bunch of small changes, mostly adding `#[doc(alias = "…")]` entries for symbolic `"…"`.
Also a small change to the documentation of the `const` keyword.
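For illustration, a hypothetical item showing what such a symbolic alias looks like; rustdoc's search will then surface the item when the symbol itself is searched for:

```rust
/// Adds two numbers.
///
/// Searching for "+" in rustdoc will now find this function.
#[doc(alias = "+")]
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}
```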
Suggest deriving traits if possible
This only applies to built-in derives, as I don't think there is a clean way to get the available derives in typeck.
Closes #85851
* Clarify rounding.
* Avoid "wrapping" wording.
* Omit the incorrect claim that 0 is only returned in error cases.
* Typo fix for `one_less_than_next_power_of_two`.
BTreeMap/BTreeSet::from_iter: use bulk building to improve the performance
Bulk building is a common technique for increasing the performance of building a fresh btree map: instead of inserting items one by one, we sort all the items beforehand and then create the `BTreeMap` in bulk.
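A minimal sketch of the preparatory step (a hypothetical helper, not the std internals): sort the items with a stable sort so equal keys keep their input order, which preserves the "last duplicate wins" semantics of one-by-one insertion; the real implementation then fills leaf nodes sequentially from the sorted run instead of calling `insert` per item.

```rust
use std::collections::BTreeMap;

/// Collect and stably sort key-value pairs by key. A stable sort keeps
/// input order among equal keys, so the last occurrence of each key ends
/// up last -- the one that must "win" to match one-by-one insertion.
fn sorted_pairs<K: Ord, V>(iter: impl IntoIterator<Item = (K, V)>) -> Vec<(K, V)> {
    let mut items: Vec<(K, V)> = iter.into_iter().collect();
    items.sort_by(|a, b| a.0.cmp(&b.0));
    items
}

fn main() {
    let pairs = sorted_pairs(vec![(2, "b"), (1, "a"), (2, "c")]);
    // Building from the sorted run: later duplicates overwrite earlier ones.
    let map: BTreeMap<_, _> = pairs.into_iter().collect();
    assert_eq!(map[&2], "c");
}
```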
Benchmark
```
./x.py bench library/alloc --test-args btree::map::from_iter
```
* Before
```
test btree::map::from_iter_rand_100 ... bench: 3,694 ns/iter (+/- 840)
test btree::map::from_iter_rand_10_000 ... bench: 1,033,446 ns/iter (+/- 192,950)
test btree::map::from_iter_seq_100 ... bench: 5,689 ns/iter (+/- 1,259)
test btree::map::from_iter_seq_10_000 ... bench: 861,033 ns/iter (+/- 118,815)
```
* After
```
test btree::map::from_iter_rand_100 ... bench: 3,033 ns/iter (+/- 707)
test btree::map::from_iter_rand_10_000 ... bench: 775,958 ns/iter (+/- 105,152)
test btree::map::from_iter_seq_100 ... bench: 2,969 ns/iter (+/- 336)
test btree::map::from_iter_seq_10_000 ... bench: 258,292 ns/iter (+/- 29,364)
```
Document when to use Windows' `symlink_dir` vs. `symlink_file`
It was previously unclear why there are two functions and when each should be used; a short usage sketch follows below.
Fixes: #88635
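As a usage sketch (the paths are illustrative): Windows distinguishes directory symlinks from file symlinks, so the right function depends on the kind of the target.

```rust
#[cfg(windows)]
fn main() -> std::io::Result<()> {
    use std::os::windows::fs::{symlink_dir, symlink_file};

    // Use symlink_dir when the target is a directory...
    symlink_dir(r"C:\target_dir", r"C:\link_to_dir")?;
    // ...and symlink_file when the target is a file.
    symlink_file(r"C:\target.txt", r"C:\link_to_file")?;
    Ok(())
}

#[cfg(not(windows))]
fn main() {}
```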
Add links in docs for some primitive types
This pull request adds additional links in existing documentation of some of the primitive types.
Where items are linked only once, I have used the `[link](destination)` format. For items in `std`, I have linked directly to the HTML, since although the primitives are in `core`, they are not displayed on `core` documentation. I was unsure of what length I should keep lines of documentation to, so I tried to keep them within reason.
Additionally, I have avoided excessively linking to keywords like `self` when they are not relevant to the documentation. I can add these links if it would be an improvement.
I hope this can improve Rust. Please let me know if there's anything I did wrong!
While issues have been seen on arm64 platforms, the Arm architecture requires
that the counter monotonically increases and that it provides a uniform
view of system time (e.g. it must not be possible for a core to receive a
message from another core with a timestamp and observe time going backwards;
see ARM DDI 0487G.b D11.1.2). While there have been a few 64-bit SoCs with
bugs (#49281, #56940) that cause time to not monotonically increase, these have
been fixed in the Linux kernel, and we shouldn't penalize all Arm SoCs for those
who refuse to update their kernels:
SUN50I_ERRATUM_UNKNOWN1 - Allwinner A64 / Pine A64 - fixed in 5.1
FSL_ERRATUM_A008585 - Freescale LS2080A/LS1043A - fixed in 4.10
HISILICON_ERRATUM_161010101 - Hisilicon 1610 - fixed in 4.11
ARM64_ERRATUM_858921 - Cortex A73 - fixed in 4.12
Commit 255a3f3e183 ("std: Force `Instant::now()` to be monotonic") added a mutex to work around
this problem, and a small test program using glommio shows the majority of time being spent
acquiring and releasing this mutex. Commit 3914a7b0da8 tries to improve this, but actually
makes it worse on big systems: 128-bit atomics require an ldxp/stxp pair (and a
successful loop), which is as expensive as a lock, and because of how
load/store-exclusives scale on large Arm systems it is both unfair to threads
and tends to go backwards in performance.
aarch64 prior to v8.4 (FEAT_LSE2) doesn't have an instruction that guarantees
untorn 128-bit reads except for successfully completing a 128-bit load/store-exclusive
pair (ldxp/stxp) or compare-and-swap (casp). The requirement to
complete a 128-bit read+write atomic is actually more expensive and more unfair
than the previous implementation of monotonize(), which used a mutex on aarch64,
especially at large core counts. For aarch64, switch to the 64-bit atomic
implementation, which is about 13x faster for a benchmark that involves many
calls to `Instant::now()`.
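A minimal sketch of the 64-bit approach (simplified; the actual std code packs the timestamp relative to a base value and handles wraparound): a single 64-bit atomic max replaces both the mutex and the 128-bit CAS loop.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Last value handed out, as nanoseconds in some process-local epoch.
static LAST_NOW: AtomicU64 = AtomicU64::new(0);

/// Clamp a raw clock reading so callers never observe time going backwards.
fn monotonize(raw_nanos: u64) -> u64 {
    // fetch_max returns the previous maximum; the monotonic result is the
    // larger of the raw reading and everything returned before it.
    let prev = LAST_NOW.fetch_max(raw_nanos, Ordering::Relaxed);
    raw_nanos.max(prev)
}
```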
Also linkify `Some` in the `From<T> for Option<T>` docs while we're touching the line anyway.
remove redundant / misplaced sentence from docs
Removes a sentence that seems to have drifted down into the examples section. Luckily, someone already added an explanation of what happens with packed structs back into the initial section of the doc entry, so this wayward sentence can likely just be deleted.
Add an example for deriving PartialOrd on enums
For some reason, I always forget which variants are smaller and which
are larger when you derive PartialOrd on an enum. And the wording in the
current docs is not entirely clear to me.
So, I often end up making a small enum, deriving PartialOrd on it, and
then writing a `#[test]` with an assert that the top one is smaller than
the bottom one (or the other way around) to figure out which way the
deriving goes.
So then I figured, it would be great if the standard library docs just
had that example, so if I keep forgetting, at least I can figure it out
quickly by looking at std's docs.
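Something along these lines (a made-up enum, not necessarily the exact example the PR adds): with a derived `PartialOrd`, variants declared earlier compare as smaller than variants declared later.

```rust
#[derive(Debug, PartialEq, PartialOrd)]
enum Size {
    Small, // declared first: compares as the smallest
    Large, // declared later: compares as larger
}

fn main() {
    // Derived PartialOrd orders enum variants by declaration order.
    assert!(Size::Small < Size::Large);
}
```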