HirId has a more stable representation than NodeId: one part of a HirId is
an ID that is local to the item it belongs to, so modifications to one item
don't influence that part of the IDs within other items. The other part is a
DefIndex, for which there already is a way of stable hashing and persistence.
This commit introduces the HirId type and generates a HirId for
every NodeId during HIR lowering, but the resulting values are
not yet used anywhere, except in consistency checks.
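For illustration, a HirId-shaped type pairing these two parts could look like the sketch below (field and type names here are illustrative, not necessarily the compiler's actual definitions):
```rust
// Illustrative sketch only -- not the actual rustc definitions.
// A HirId names a HIR node by pairing the owning item's DefIndex (which
// already has stable hashing/persistence) with an index that is local to
// that item, so edits inside one item cannot shift IDs inside another.
#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct DefIndex(u32);

#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct ItemLocalId(u32);

#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct HirId {
    owner: DefIndex,
    local_id: ItemLocalId,
}

fn main() {
    // The 3rd node inside the item with DefIndex 42.
    let id = HirId { owner: DefIndex(42), local_id: ItemLocalId(3) };
    println!("{:?}", id);
}
```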
This reverts commit dc0bb3f283, reversing
changes made to e879aa43ef.
This is a temporary step intended to fix regressions. A more
comprehensive fix for type inference and dead-code is in the works.
I suspect that MinGW's make, rather than anything else, is the cause of #40546,
though that's pure suspicion without any facts to back it up. In any case
we'll eventually be moving the MSVC build over to Ninja in order to leverage
sccache regardless, so this commit simply jumpstarts that process by downloading
Ninja for use by MinGW anyway.
I'm not sure if this closes #40546 for real, but it's my current best shot at
closing it out, so...
Closes #40546
The code example in the trait documentation of ExactSizeIterator
has an incorrect implementation of the len method that does not return
the number of times the example iterator 'Counter' will iterate. This
may confuse readers of the docs as the example code will compile but
doesn't uphold the trait's contract.
This is easily fixed by modifying the implementation of len and changing
the assert statement to actually assert the correct behaviour. I also
slightly modified a code comment to better reflect what the method
returns.
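For reference, the corrected example is along these lines (condensed here; it assumes a `Counter` that yields the values 1 through 5):
```rust
// An iterator that counts from 1 up to 5.
struct Counter {
    count: usize,
}

impl Counter {
    fn new() -> Counter {
        Counter { count: 0 }
    }
}

impl Iterator for Counter {
    type Item = usize;

    fn next(&mut self) -> Option<usize> {
        if self.count < 5 {
            self.count += 1;
            Some(self.count)
        } else {
            None
        }
    }
}

impl ExactSizeIterator for Counter {
    // The number of items this iterator will still yield.
    fn len(&self) -> usize {
        5 - self.count
    }
}

fn main() {
    let mut counter = Counter::new();
    assert_eq!(counter.len(), 5);
    counter.next();
    assert_eq!(counter.len(), 4);
}
```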
When debugging why builds are taking so long it's often useful to get the
timestamp of all log messages, since we're not always timing every tiny step of
the build. I wrote a [utility] which prepends a timestamp, relative to the start
of the process, to each line of output; it is now downloaded to the builders and
we wrap the entire build invocation in it.
[utility]: https://github.com/alexcrichton/stamp-rs
Closes #40577
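For context, the core idea of such a tool is tiny; a minimal sketch (reading the wrapped command's output from a pipe on stdin, not the actual stamp-rs code) looks like this:
```rust
// Minimal sketch of the idea: prefix every incoming line with the time
// elapsed since this process started. Not the actual stamp-rs source.
use std::io::{self, BufRead, Write};
use std::time::Instant;

fn main() {
    let start = Instant::now();
    let stdin = io::stdin();
    let stdout = io::stdout();
    let mut out = stdout.lock();

    for line in stdin.lock().lines() {
        let line = line.expect("failed to read a line");
        let elapsed = start.elapsed();
        // e.g. "[   12.345] checking rustc ..."
        writeln!(out, "[{:>5}.{:03}] {}",
                 elapsed.as_secs(),
                 elapsed.subsec_nanos() / 1_000_000,
                 line).expect("failed to write a line");
    }
}
```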
This commit alters the translation layer to unconditionally emit the `uwtable`
LLVM attribute on Windows regardless of the `no_landing_pads` setting.
Previously I believe we omitted this attribute as an optimization when the
`-Cpanic=abort` flag was passed, but this unfortunately caused problems for
Gecko.
It [was discovered] that there was trouble unwinding through Rust functions when
foreign exceptions were raised, for example by illegal instructions or other
mechanisms used in practice to abort a process. In testing, the major difference
between a working binary and a non-working one appeared to be precisely this
`uwtable` attribute. Unfortunately, this PR has not been thoroughly tested by
compiling Gecko with `-C panic=abort` *and* this change to confirm that it
works, so it is still based largely on suspicion.
[was discovered]: https://bugzilla.mozilla.org/show_bug.cgi?id=1302078
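Conceptually the change is just a relaxed condition, roughly like the hypothetical sketch below (`should_emit_uwtable` and its parameters are made up for illustration; the real logic lives in the translation layer):
```rust
// Hypothetical illustration of the changed decision; not rustc's real API.
fn should_emit_uwtable(target_is_windows: bool, no_landing_pads: bool) -> bool {
    // Before: the attribute was skipped whenever landing pads were disabled
    // (e.g. under `-C panic=abort`).
    // After: Windows targets always get `uwtable`, so the unwinder can walk
    // through Rust frames even when a foreign exception is in flight.
    target_is_windows || !no_landing_pads
}

fn main() {
    // `-C panic=abort` on Windows now still emits the attribute.
    assert!(should_emit_uwtable(true, true));
    // Elsewhere the old behavior is kept.
    assert!(!should_emit_uwtable(false, true));
    assert!(should_emit_uwtable(false, false));
}
```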
Implement feature sort_unstable
Tracking issue for the feature: #40585
This is essentially an integration of [pdqsort](https://github.com/stjepang/pdqsort) into libcore.
There are plenty of unsafe blocks to review. The heart of pdqsort is `fn partition_in_blocks`, which is probably the most challenging function to understand. It requires some patience, but let me know if you find it too difficult - comments can always be improved.
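For reviewers who want a mental model first: stripped of the branchless block machinery, the partition step is an ordinary Hoare-style partition. A deliberately simplified, safe sketch (not the actual `partition_in_blocks` code) is:
```rust
// Much-simplified illustration of what a partition step does; the real
// `partition_in_blocks` does the same job branchlessly, collecting swap
// candidates into fixed-size blocks of offsets with unsafe pointer code.
fn partition_simple<T: Ord>(v: &mut [T], pivot: usize) -> usize {
    v.swap(0, pivot);
    let (pivot_slot, rest) = v.split_at_mut(1);
    let pivot_val = &pivot_slot[0];

    // Hoare-style scan: grow a `< pivot` region from the left and a
    // `>= pivot` region from the right, swapping misplaced pairs.
    let mut l = 0;
    let mut r = rest.len();
    loop {
        while l < r && rest[l] < *pivot_val {
            l += 1;
        }
        while l < r && rest[r - 1] >= *pivot_val {
            r -= 1;
        }
        if l >= r {
            break;
        }
        rest.swap(l, r - 1);
        l += 1;
        r -= 1;
    }

    // Put the pivot into its final position and return that index.
    v.swap(0, l);
    l
}

fn main() {
    let mut v = vec![5, 2, 9, 1, 5, 6, 3];
    let mid = partition_simple(&mut v, 0);
    assert!(v[..mid].iter().all(|x| *x < v[mid]));
    assert!(v[mid..].iter().all(|x| *x >= v[mid]));
    println!("{:?}, pivot at index {}", v, mid);
}
```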
#### Changes
* Added `sort_unstable` feature (a short usage sketch follows this list).
* Tweaked insertion sort constants for stable sort. Sorting integers is now up to 5% slower, but sorting big elements is much faster (in particular, `sort_large_big_random` is 35% faster). The old constants were highly optimized for sorting integers, so overall the configuration is more balanced now. A minor regression in the case of integers is forgivable, as we recently had performance improvements (#39538) that completely make up for it.
* Removed some uninteresting sort benchmarks.
* Added a new sort benchmark for string sorting.
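For orientation, usage on nightly looks roughly like this (a sketch based on the feature name above; the exact set of methods and the gate may still change before stabilization):
```rust
// Sketch of the new API surface under the feature gate; names are taken
// from this PR and may change before stabilization.
#![feature(sort_unstable)]

fn main() {
    let mut v = [-5i32, 4, 1, -3, 2];

    // Unstable sort: may reorder equal elements, but avoids the allocation
    // that the stable `sort` needs.
    v.sort_unstable();
    assert_eq!(v, [-5, -3, 1, 2, 4]);

    // Comparator and key variants mirror the stable API.
    v.sort_unstable_by(|a, b| b.cmp(a));
    assert_eq!(v, [4, 2, 1, -3, -5]);

    v.sort_unstable_by_key(|x| x.abs());
    assert_eq!(v, [1, 2, -3, 4, -5]);
}
```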
#### Benchmarks
The following table compares stable and unstable sorting:
```
name                                stable ns/iter        unstable ns/iter      diff ns/iter    diff %
slice::sort_large_ascending         7,240 (11049 MB/s)    7,380 (10840 MB/s)             140     1.93%
slice::sort_large_big_random        1,454,138 (880 MB/s)  910,269 (1406 MB/s)       -543,869   -37.40%
slice::sort_large_descending        13,450 (5947 MB/s)    10,895 (7342 MB/s)          -2,555   -19.00%
slice::sort_large_mostly_ascending  204,041 (392 MB/s)    88,639 (902 MB/s)         -115,402   -56.56%
slice::sort_large_mostly_descending 217,109 (368 MB/s)    99,009 (808 MB/s)         -118,100   -54.40%
slice::sort_large_random            477,257 (167 MB/s)    346,028 (231 MB/s)        -131,229   -27.50%
slice::sort_large_random_expensive  21,670,537 (3 MB/s)   22,710,238 (3 MB/s)      1,039,701     4.80%
slice::sort_large_strings           6,284,499 (38 MB/s)   6,410,896 (37 MB/s)        126,397     2.01%
slice::sort_medium_random           3,515 (227 MB/s)      3,327 (240 MB/s)              -188    -5.35%
slice::sort_small_ascending         42 (1904 MB/s)        41 (1951 MB/s)                  -1    -2.38%
slice::sort_small_big_random        503 (2544 MB/s)       514 (2490 MB/s)                 11     2.19%
slice::sort_small_descending        72 (1111 MB/s)        69 (1159 MB/s)                  -3    -4.17%
slice::sort_small_random            369 (216 MB/s)        367 (217 MB/s)                  -2    -0.54%
```
Interesting cases:
* Expensive comparison function and string sorting - it's a really close race, but timsort performs slightly fewer comparisons. This is a natural consequence of bottom-up merging versus top-down partitioning.
* `large_descending` - unstable sort is faster, but both sorts should have equivalent performance. Both just check whether the slice is descending and if so, they reverse it. I blame LLVM for the discrepancy.
r? @alexcrichton