`mut_range_bound` check for immediate break after mutation
Closes #7532
`mut_range_bound` now ignores a mutation of a range bound that is immediately followed by a `break`. It still warns when the `break` is not always reachable.
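A minimal sketch of the two cases (the function and variable names are mine, not taken from the lint's test suite):
```rust
fn demo(xs: &[i32]) {
    let mut bound = xs.len();

    // No longer linted: the bound is mutated and the loop exits
    // immediately, so the stale bound can never influence iteration.
    for i in 0..bound {
        if xs[i] == 0 {
            bound = i;
            break;
        }
    }

    // Still linted: the `break` after the mutation is conditional,
    // i.e. not always reachable.
    for i in 0..bound {
        bound = i;
        if xs[i] == 0 {
            break;
        }
    }
    let _ = bound;
}
```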
changelog: [`mut_range_bound`] ignore range bound mutations before immediate break
Suggest deriving traits if possible
This only applies to built-in derives, as I don't think there is a clean way to get the available custom derives in typeck.
Closes #85851
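A hedged illustration of the kind of code this targets (the exact diagnostic wording may differ):
```rust
fn duplicate<T: Clone>(value: T) -> (T, T) {
    (value.clone(), value)
}

struct Point {
    x: i32,
    y: i32,
}

fn main() {
    // error[E0277]: the trait bound `Point: Clone` is not satisfied
    // help (added by this change, since `Clone` is a built-in derive):
    //     consider annotating `Point` with `#[derive(Clone)]`
    let _pair = duplicate(Point { x: 1, y: 2 });
}
```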
Remove `hir::GenericBound::Unsized`
Rather than "moving" the `?Sized` bounds to the param bounds, just check where clauses in `astconv` as well. I also did some related cleanup here, but that's not strictly necessary. I'm also going to do a perf run here.
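For reference, the two spellings of a relaxed bound that now take the same path (a minimal illustration, not code from the PR):
```rust
// `?Sized` on the parameter itself:
fn on_param<T: ?Sized>(_x: &T) {}

// ... and in a where clause, which `astconv` now checks directly
// instead of "moving" it onto the parameter's bounds:
fn in_where_clause<T>(_x: &T)
where
    T: ?Sized,
{
}
```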
r? `@estebank`
These were deleted in https://reviews.llvm.org/D108614, and in C++ I definitely see the argument for their removal. I didn't try to propagate the changes up into higher layers of rustc in this change because my initial goal was to get rustc working against LLVM HEAD promptly, but I'm happy to follow up with some refactoring to make the API on the Rust side match the LLVM API more directly (though the way the enum works in Rust makes the API less scary, IMO).
r? @nagisa cc @nikic
Fix handling of +whole-archive native link modifier.
This PR fixes a bug in `add_upstream_native_libraries` that led to the `+whole-archive` modifier being ignored when linking in native libs.
~~Note that the PR does not address the situation when `+whole-archive` is combined with `+bundle`.~~
`@wesleywiser`'s commit adds validation code that turns combining `+whole-archive` with `+bundle` into an error.
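For context, a sketch of the modifier syntax involved (per RFC 2951; `foo` is a placeholder library name, and the syntax was still feature-gated when this PR landed):
```rust
// Ask the linker to pull in every object from `libfoo.a`, not just the
// ones that resolve currently-undefined symbols.
#[link(name = "foo", kind = "static", modifiers = "+whole-archive")]
extern "C" {}

// Command-line equivalent:
//     rustc -l static:+whole-archive=foo main.rs
//
// Per the validation mentioned above, combining the modifiers as
// `modifiers = "+whole-archive,+bundle"` is now rejected with an error.
fn main() {}
```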
Fixes https://github.com/rust-lang/rust/issues/88085.
r? `@petrochenkov`
cc `@wesleywiser` `@gcoakes`
BTreeMap/BTreeSet::from_iter: use bulk building to improve the performance
Bulk building is a common technique for improving the performance of constructing a fresh B-tree. Instead of inserting items one by one, we sort all the items beforehand and then create the `BTreeMap` in bulk, as sketched below.
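A simplified sketch of the idea (not the actual standard-library code, which hands the sorted run to an internal bulk builder rather than `collect`):
```rust
use std::collections::BTreeMap;
use std::mem;

fn bulk_build<K: Ord, V>(iter: impl IntoIterator<Item = (K, V)>) -> BTreeMap<K, V> {
    let mut items: Vec<(K, V)> = iter.into_iter().collect();
    // A stable sort keeps equal keys in input order, so for duplicate
    // keys the last-inserted value can be kept, matching `insert` semantics.
    items.sort_by(|a, b| a.0.cmp(&b.0));
    items.dedup_by(|later, kept| {
        if later.0 == kept.0 {
            // Keep the later value for duplicate keys.
            mem::swap(&mut later.1, &mut kept.1);
            true
        } else {
            false
        }
    });
    // The real implementation feeds this sorted run to a builder that
    // fills leaf nodes left to right instead of re-inserting each item.
    items.into_iter().collect()
}
```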
Benchmark
```
./x.py bench library/alloc --test-args btree::map::from_iter
```
* Before
```
test btree::map::from_iter_rand_100 ... bench: 3,694 ns/iter (+/- 840)
test btree::map::from_iter_rand_10_000 ... bench: 1,033,446 ns/iter (+/- 192,950)
test btree::map::from_iter_seq_100 ... bench: 5,689 ns/iter (+/- 1,259)
test btree::map::from_iter_seq_10_000 ... bench: 861,033 ns/iter (+/- 118,815)
```
* After
```
test btree::map::from_iter_rand_100 ... bench: 3,033 ns/iter (+/- 707)
test btree::map::from_iter_rand_10_000 ... bench: 775,958 ns/iter (+/- 105,152)
test btree::map::from_iter_seq_100 ... bench: 2,969 ns/iter (+/- 336)
test btree::map::from_iter_seq_10_000 ... bench: 258,292 ns/iter (+/- 29,364)
```
Mmap the incremental data instead of reading it.
Instead of reading the full incremental state using `fs::read_file`, we mmap it using a private, read-only, file-backed map. This allows the system to reclaim any memory we are not using, while ensuring we are not polluted by outside modifications to the file.
Suggested in https://github.com/rust-lang/rust/pull/83036#issuecomment-800458082 by `@bjorn3`
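A minimal sketch of the approach using the `memmap2` crate (an assumption for illustration; rustc wraps the mapping in its own type):
```rust
use std::fs::File;
use std::io;
use std::path::Path;

use memmap2::{Mmap, MmapOptions};

fn map_incremental_state(path: &Path) -> io::Result<Mmap> {
    let file = File::open(path)?;
    // File-backed and read-only: clean pages can be reclaimed by the OS
    // under memory pressure instead of sitting in our heap. The mapping
    // is private (copy-on-write), so writes to the file by other
    // processes are not reflected in our view.
    unsafe { MmapOptions::new().map_copy_read_only(&file) }
}
```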