Slimmer syntax
High-level summary of changes:
- The `syntax::node_count` pass is moved into `rustc_ast_passes`. This works towards #65031 by making `syntax` compile faster.
- The `syntax::{GLOBALS, with_globals, ..}` business is consolidated into `syntax::attr` for cleaner code and possible future improvements.
- The pretty printer loses its dependency on `ParseSess`, opting to use `SourceMap` & friends directly instead.
- Some drive-by cleanup of `syntax::attr::HasAttr` happens.
- Builtin attribute logic (`syntax::attr::builtin`) + `syntax::attr::allow_internal_unstable` is moved into a new `rustc_attr` crate. More logic from `syntax::attr` should be moved into that crate over time. This also means that `syntax` loses all mentions of `ParseSess`, which enables the next point.
- The pretty printer `syntax::print` is moved into a new crate `rustc_ast_pretty`.
- `rustc_session::node_id` is moved back as `syntax::node_id`. As a result, `syntax` gets to drop dependencies on `rustc_session` (and implicitly `rustc_target`), `rustc_error_codes`, and `rustc_errors`. Moreover `rustc_hir` gets to drop its dependency on `rustc_session` as well. At this point, these crates are mostly "pure data crates", which is approaching a desirable end state.
- We should consider renaming `syntax` to `rustc_ast` now.
Add support for Control Flow Guard on Windows.
LLVM now supports Windows Control Flow Guard (CFG): d157a9bc8b
This patch adds support for rustc to emit the required LLVM module flags to enable CFG metadata (cfguard=1) or metadata and checks (cfguard=2). The LLVM module flags are ignored on unsupported targets and operating systems.
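As a rough sketch of that mapping (illustrative only: the enum and the printing harness below are assumptions made for the example, not rustc's actual option handling or codegen plumbing):
```rust
// Illustrative sketch: map a Control Flow Guard setting to the value of the
// `cfguard` LLVM module flag described above. The enum is invented for the
// example; rustc's real option handling and FFI plumbing are not shown.
#[derive(Clone, Copy)]
enum ControlFlowGuard {
    Disabled,
    NoChecks, // emit CFG metadata only
    Checks,   // emit CFG metadata and checks
}

fn cfguard_module_flag(setting: ControlFlowGuard) -> Option<(&'static str, u32)> {
    match setting {
        ControlFlowGuard::Disabled => None,
        ControlFlowGuard::NoChecks => Some(("cfguard", 1)),
        ControlFlowGuard::Checks => Some(("cfguard", 2)),
    }
}

fn main() {
    // On supported targets the backend attaches this flag to the LLVM module;
    // unsupported targets and operating systems simply ignore it.
    for setting in [
        ControlFlowGuard::Disabled,
        ControlFlowGuard::NoChecks,
        ControlFlowGuard::Checks,
    ] {
        match cfguard_module_flag(setting) {
            Some((name, value)) => println!("module flag: {} = {}", name, value),
            None => println!("no module flag emitted"),
        }
    }
}
```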
Remove incorrect debug assertions from catch_unwind
The debug assertions in the implementation of `catch_unwind` used to verify consistency of the panic count by checking that the count is zero just before leaving the function. This incorrectly assumed that no panic was in progress when `catch_unwind` was entered.
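One way this can happen (a constructed example, not taken from the issue): `catch_unwind` is called from a destructor that runs while another panic is already unwinding.
```rust
use std::panic::catch_unwind;

struct CatchOnDrop;

impl Drop for CatchOnDrop {
    fn drop(&mut self) {
        // This runs while the outer panic is still unwinding, so the panic count
        // is already non-zero when `catch_unwind` is entered; the removed debug
        // assertion wrongly expected it to be zero again on exit.
        let _ = catch_unwind(|| println!("nested catch_unwind during unwinding"));
    }
}

fn main() {
    let _ = catch_unwind(|| {
        let _guard = CatchOnDrop;
        panic!("outer panic");
    });
}
```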
Fixes #68696.
Bundle and document 6 BTreeMap navigation algorithms
- Expose a function to step through trees without necessarily extracting the KV pair, which helps future operations like drain/retain (as demonstrated in [this drain_filter implementation](https://github.com/ssomers/rust/compare/btree_navigation_v3...ssomers:btree_drain_filter?expand=1))
- ~~Also aligns the implementation of the 2 x 3 iterators already using such navigation:~~
- ~~Delay the moment the K,V references are plucked from the tree, for the 4 iterators on immutable and owned maps, just for symmetry. The same had to be done to the two iterators for mutable maps in #58431.~~
- ~~Always explicitly use ptr::read to derive two handles from one handle. While the existing implementations for immutable maps (i.e. Range::next_unchecked and Range::next_back_unchecked) rely on implicit copying. There's no change in unsafe tags because these two functions were already (erroneously? prophetically?) tagged unsafe. I don't know whether they should be tagged unsafe. I guess they should be for mutable and owned maps, because you can change the map through one handle and leave the other handle invalid.~~
- Preserve the way two handles are (temporarily) derived from one handle: implementations for immutable maps (i.e. Range::next_unchecked and Range::next_back_unchecked) rely on copying (implicitly before, explicitly now) and the others do `ptr::read`.
- ~~I think the functions that support iterators on immutable trees (i.e. `Range::next_unchecked` and `Range::next_back_unchecked`) are erroneously tagged unsafe since you can already create multiple instances of such ranges, thus obtain multiple handles into the same tree. I did not change that but removed unsafe from the functions underneath.~~
Tested with Miri in liballoc/tests/btree, except the tests that `should_panic`.
`cargo benchcmp` of the best of 3 samples of all btree benchmarks before and after this PR:
```
name old1.txt ns/iter new2.txt ns/iter diff ns/iter diff % speedup
btree::map::find_rand_100 17 17 0 0.00% x 1.00
btree::map::find_rand_10_000 57 55 -2 -3.51% x 1.04
btree::map::find_seq_100 17 17 0 0.00% x 1.00
btree::map::find_seq_10_000 42 39 -3 -7.14% x 1.08
btree::map::first_and_last_0 14 14 0 0.00% x 1.00
btree::map::first_and_last_100 36 37 1 2.78% x 0.97
btree::map::first_and_last_10k 52 52 0 0.00% x 1.00
btree::map::insert_rand_100 34 34 0 0.00% x 1.00
btree::map::insert_rand_10_000 34 34 0 0.00% x 1.00
btree::map::insert_seq_100 46 46 0 0.00% x 1.00
btree::map::insert_seq_10_000 90 89 -1 -1.11% x 1.01
btree::map::iter_1000 2,811 2,771 -40 -1.42% x 1.01
btree::map::iter_100000 360,635 355,995 -4,640 -1.29% x 1.01
btree::map::iter_20 39 42 3 7.69% x 0.93
btree::map::iter_mut_1000 2,662 2,864 202 7.59% x 0.93
btree::map::iter_mut_100000 336,825 346,550 9,725 2.89% x 0.97
btree::map::iter_mut_20 40 43 3 7.50% x 0.93
btree::set::build_and_clear 4,184 3,994 -190 -4.54% x 1.05
btree::set::build_and_drop 4,151 3,976 -175 -4.22% x 1.04
btree::set::build_and_into_iter 4,196 4,155 -41 -0.98% x 1.01
btree::set::build_and_pop_all 5,176 5,241 65 1.26% x 0.99
btree::set::build_and_remove_all 6,868 6,903 35 0.51% x 0.99
btree::set::difference_random_100_vs_100 721 691 -30 -4.16% x 1.04
btree::set::difference_random_100_vs_10k 2,420 2,411 -9 -0.37% x 1.00
btree::set::difference_random_10k_vs_100 54,726 52,215 -2,511 -4.59% x 1.05
btree::set::difference_random_10k_vs_10k 164,384 170,256 5,872 3.57% x 0.97
btree::set::difference_staggered_100_vs_100 739 709 -30 -4.06% x 1.04
btree::set::difference_staggered_100_vs_10k 2,320 2,265 -55 -2.37% x 1.02
btree::set::difference_staggered_10k_vs_10k 68,020 70,246 2,226 3.27% x 0.97
btree::set::intersection_100_neg_vs_100_pos 23 24 1 4.35% x 0.96
btree::set::intersection_100_neg_vs_10k_pos 28 29 1 3.57% x 0.97
btree::set::intersection_100_pos_vs_100_neg 24 24 0 0.00% x 1.00
btree::set::intersection_100_pos_vs_10k_neg 28 28 0 0.00% x 1.00
btree::set::intersection_10k_neg_vs_100_pos 27 27 0 0.00% x 1.00
btree::set::intersection_10k_neg_vs_10k_pos 30 29 -1 -3.33% x 1.03
btree::set::intersection_10k_pos_vs_100_neg 27 28 1 3.70% x 0.96
btree::set::intersection_10k_pos_vs_10k_neg 29 29 0 0.00% x 1.00
btree::set::intersection_random_100_vs_100 592 572 -20 -3.38% x 1.03
btree::set::intersection_random_100_vs_10k 2,271 2,269 -2 -0.09% x 1.00
btree::set::intersection_random_10k_vs_100 2,301 2,333 32 1.39% x 0.99
btree::set::intersection_random_10k_vs_10k 147,879 150,148 2,269 1.53% x 0.98
btree::set::intersection_staggered_100_vs_100 622 632 10 1.61% x 0.98
btree::set::intersection_staggered_100_vs_10k 2,101 2,032 -69 -3.28% x 1.03
btree::set::intersection_staggered_10k_vs_10k 60,341 61,834 1,493 2.47% x 0.98
btree::set::is_subset_100_vs_100 417 426 9 2.16% x 0.98
btree::set::is_subset_100_vs_10k 1,281 1,324 43 3.36% x 0.97
btree::set::is_subset_10k_vs_100 2 2 0 0.00% x 1.00
btree::set::is_subset_10k_vs_10k 41,054 41,612 558 1.36% x 0.99
```
r? cuviper
Address inconsistency in using "is" with "declared here"
"is" was generally used for NLL diagnostics, but not other diagnostics. Using "is" makes the diagnostics sound more natural and readable, so it seems sensible to commit to them throughout.
r? @Centril
Shrink `Nonterminal`
These commits shrink `Nonterminal` from 240 bytes to 40 bytes. When building `serde_derive` they reduce the number of `memcpy` calls from 9.6M to 7.4M, and it's a tiny win on a few other benchmarks.
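The description doesn't spell out the mechanism, so here is only a generic illustration of why shrinking an enum pays off: boxing a large variant keeps the enum itself small, so moving it copies far fewer bytes (the types below are made up for the sketch; they are not `Nonterminal`).
```rust
#![allow(dead_code)] // the types exist only to be measured
use std::mem::size_of;

// A large payload forces the enum to reserve space for it in every value.
struct BigPayload {
    data: [u64; 29], // 232 bytes
}

enum Inline {
    Small(u8),
    Big(BigPayload), // stored inline: the whole enum is ~240 bytes
}

enum Boxed {
    Small(u8),
    Big(Box<BigPayload>), // only a pointer is stored inline
}

fn main() {
    // On a 64-bit target this prints roughly 240 and 16: every move of `Inline`
    // is a ~240-byte memcpy, while a move of `Boxed` copies only 16 bytes.
    println!("inline: {} bytes", size_of::<Inline>());
    println!("boxed:  {} bytes", size_of::<Boxed>());
}
```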
r? @petrochenkov
clarify "incorrect issue" error
Changes the message to be more precise, shrinks the span and adds a label specifying why the `issue` field is incorrect.
Change opt-level from 2 back to 3
In Cargo.toml, the opt-level for `release` and `bench` was overridden to be 2. This was to work around a problem with LLVM 7. However, Rust no longer uses LLVM 7, so this is hopefully no longer needed?
I tried a little bit to replicate the original problem, and could not. I think running this through CI is the best way to smoke test this :) Even if things break dramatically, the comment should be updated to reflect that things are still broken with LLVM 9.
I'm just getting started playing with the compiler, so apologies if I've missed an obvious problem here.
Fixes #52378
(possibly relevant is the [current update to LLVM 10](https://github.com/rust-lang/rust/pull/67759))
Add `Iterator::map_while`
The `Iterator` trait has [`filter_map`], a `*_map` version of [`filter`], but there is no `*_map` version of [`take_while`], which could also be useful.
### Use cases
In my code, I've found that I need to iterate through an iterator of `Option`s, stopping on the first `None`. So I've written code like this:
```rust
let arr = [Some(4), Some(10), None, Some(3)];
let mut iter = arr.iter()
.take_while(|x| x.is_some())
.map(|x| x.unwrap());
assert_eq!(iter.next(), Some(4));
assert_eq!(iter.next(), Some(10));
assert_eq!(iter.next(), None);
assert_eq!(iter.next(), None);
```
This code
1) isn't clean, and
2) in theory can generate suboptimal code (I'm actually **not** sure, but I think that `unwrap` would generate additional branches with `panic!`).
The same code, but with `map_while` (in the original PR message it was named "take_while_map"):
```rust
let arr = [Some(4), Some(10), None, Some(3)];
// `copied()` yields `Option<i32>` by value, so `identity` has exactly the
// `FnMut(Item) -> Option<B>` shape that `map_while` expects.
let mut iter = arr.iter().copied().map_while(std::convert::identity);
```
Also, `map_while` can be useful when converting something (as in [examples]).
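For instance, parsing values until the first failure (a small illustration along those lines, not taken from the linked examples):
```rust
fn main() {
    // Convert strings to numbers, stopping at the first one that fails to parse.
    let words = ["3", "14", "15", "nine", "26"];
    let nums: Vec<u32> = words.iter().map_while(|w| w.parse().ok()).collect();
    assert_eq!(nums, [3, 14, 15]);
}
```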
[`filter`]: https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.filter
[`filter_map`]: https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.filter_map
[`take_while`]: https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.take_while
[examples]: https://github.com/rust-lang/rust/compare/master...WaffleLapkin:iter_take_while_map?expand=1#diff-7e57917f962fe6ffdfba51e4955ad6acR1042
In Cargo.toml, the opt-level for `release` and `bench` was overridden to 2 to work around a problem with LLVM 7; Rust no longer uses LLVM 7, so this is no longer needed.
Changing the opt-level back to 3 creates a small compile-time regression in MIR constant evaluation, so an `#[inline(always)]` has been added to the `step` function used in const eval. It also causes a binary size increase in wasm-stringify-ints-small, so the limit there has been bumped.