Clean up `ast::Attribute`, `ast::CrateConfig`, and string interning
This PR
- removes `ast::Attribute_` (changing `Attribute` from `Spanned<Attribute_>` to a struct),
- moves a `MetaItem`'s name from the `MetaItemKind` variants to a field of `MetaItem`,
- avoids needlessly wrapping `ast::MetaItem` with `P`,
- moves string interning into `syntax::symbol` (`ast::Name` is a reexport of `symbol::Symbol` for now),
- replaces `InternedString` with `Symbol` in the AST, HIR, and various other places, and
- refactors `ast::CrateConfig` from a `Vec` to a `HashSet`.
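Roughly, the reshaped types look like this (a sketch with stub types standing in for the real `syntax` definitions, not the exact rustc code):
```rust
// Sketch only: shapes approximated from the bullet points above.
struct Span;
struct AttrId(usize);
enum AttrStyle { Outer, Inner }
struct Lit;
enum NestedMetaItem { MetaItem(MetaItem), Literal(Lit) }
type Name = String; // really an interned symbol::Symbol

struct Attribute {        // was Spanned<Attribute_>; now a plain struct
    id: AttrId,
    style: AttrStyle,
    value: MetaItem,      // stored directly, no P<MetaItem> wrapper
    is_sugared_doc: bool,
    span: Span,
}

struct MetaItem {
    name: Name,           // lifted out of the MetaItemKind variants
    node: MetaItemKind,
    span: Span,
}

enum MetaItemKind {       // variants no longer carry the name
    Word,
    List(Vec<NestedMetaItem>),
    NameValue(Lit),
}

fn main() {}
```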
r? @eddyb
To resume stack unwinding, the LLVM `resume` instruction must be used,
and the function containing it must have an exception-handling
personality set.
LLVM 4.0 adds an IR validation check, introduced in
[r277360](https://reviews.llvm.org/rL277360), to ensure a personality is
always set in these cases.
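For context, a minimal example (my own, not from the PR) of Rust code whose translation needs a `resume`, and hence a personality: if the call unwinds, the local's destructor must run and unwinding must then continue.
```rust
struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        println!("cleanup ran during unwinding");
    }
}

fn may_panic(fail: bool) {
    if fail {
        panic!("unwinding starts here");
    }
}

fn main() {
    let _guard = Guard; // needs a landing pad: dropped even on unwind
    may_panic(true);    // after the drop, codegen emits `resume`
}
```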
Associated type normalization is inhibited by higher-ranked regions.
Therefore, every time we erase them, we must re-normalize.
I had been meaning to introduce this change for some time, but we used
to erase regions in generic contexts, which broke this terribly (because
you can't always normalize in a generic context). That seems to be gone
now.
Ensure this by having an `erase_late_bound_regions_and_normalize`
function.
Fixes #37109 (the missing call was in mir::block).
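A self-contained illustration of the interaction (my own example, not from the compiler): the projection below sits under a `for<'a>` binder, so it cannot be normalized until the binder is gone.
```rust
trait Project<'a> {
    type Out;
}

impl<'a> Project<'a> for () {
    type Out = &'a u32;
}

// `<() as Project<'a>>::Out` is a projection under a higher-ranked
// binder; it only normalizes to `&'a u32` once `'a` is instantiated
// (or, inside the compiler, once the binder is skipped and the
// regions erased).
fn takes(f: for<'a> fn(<() as Project<'a>>::Out)) {
    let x = 1;
    f(&x);
}

fn print_it(x: &u32) {
    println!("{}", x);
}

fn main() {
    takes(print_it);
}
```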
Use `unreachable` instead of a `panic` for inexhaustive matches and
correct the comment. I think we trust our match-generation algorithm
enough to generate these blocks, and not generating an `unreachable`
means that LLVM won't optimize `match void() {}` to an `unreachable`.
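For instance (my own minimal example), the empty match below compiles to a block that can never be entered, so emitting `unreachable` there lets LLVM discard it:
```rust
enum Void {}

fn void() -> Void {
    loop {}
}

fn diverges() -> ! {
    // No arms are needed: this block can never be entered, so the
    // backend can emit `unreachable` instead of a call to panic.
    match void() {}
}

fn main() {
    let _f: fn() -> ! = diverges; // not called; `void()` never returns
}
```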
[MIR] Make scopes debuginfo-specific (visibility scopes).
Fixes #32949 by having MIR (visibility) scopes mimic the lexical structure.
Unlike #33235, this PR also removes all scopes without variable bindings.
Printing of scopes also changed, e.g. for:
```rust
fn foo(x: i32, y: i32) { let a = 0; let b = 0; let c = 0; }
```
Before my changes:
```rust
fn foo(arg0: i32, arg1: i32) -> () {
    let var0: i32; // "x" in scope 1 at <anon>:1:8: 1:9
    let var1: i32; // "y" in scope 1 at <anon>:1:16: 1:17
    let var2: i32; // "a" in scope 3 at <anon>:1:30: 1:31
    let var3: i32; // "b" in scope 6 at <anon>:1:41: 1:42
    let var4: i32; // "c" in scope 9 at <anon>:1:52: 1:53
    ...
    scope tree:
    0 1 2 3 {
        4 5
        6 {
            7 8
            9 10 11
        }
    }
}
```
After my changes:
```rust
fn foo(arg0: i32, arg1: i32) -> () {
    scope 1 {
        let var0: i32; // "x" in scope 1 at <anon>:1:8: 1:9
        let var1: i32; // "y" in scope 1 at <anon>:1:16: 1:17
        scope 2 {
            let var2: i32; // "a" in scope 2 at <anon>:1:30: 1:31
            scope 3 {
                let var3: i32; // "b" in scope 3 at <anon>:1:41: 1:42
                scope 4 {
                    let var4: i32; // "c" in scope 4 at <anon>:1:52: 1:53
                }
            }
        }
    }
    ...
}
```
MSVC requires unwinding code to be split into a tree of *funclets*, where each
funclet can only branch to itself or to its parent.
Luckily, the code we generate matches this pattern. Recover that structure in
an analysis pass and translate according to it.
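As a rough illustration (my own example; the funclet-nesting claim is my reading of the constraint above): with two locals that both need cleanup, unwinding runs a chain of funclets, each branching only to its parent.
```rust
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn might_unwind(fail: bool) {
    if fail {
        panic!("unwind");
    }
}

fn main() {
    let _a = Noisy("a");
    let _b = Noisy("b");
    // If this unwinds on MSVC, the funclet dropping `_b` branches only
    // to its parent funclet, which drops `_a`, before unwinding resumes.
    might_unwind(false);
}
```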
This introduces a `DropAndReplace` terminator as a fix for #30380. That
terminator is supposed to be translated away by desugaring during drop
elaboration, which is not implemented in this commit, so this breaks
`-Z orbit` temporarily.
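A minimal self-contained sketch of the new terminator's shape; the stub types and field names only approximate rustc's real MIR definitions.
```rust
struct Lvalue;
struct Operand;
struct BasicBlock(usize);

enum TerminatorKind {
    DropAndReplace {
        location: Lvalue,           // place whose old value gets dropped
        value: Operand,             // new value stored afterwards
        target: BasicBlock,         // continue here on success
        unwind: Option<BasicBlock>, // cleanup block if the drop unwinds
    },
}

fn main() {
    // Drop elaboration is meant to desugar this into a conditional
    // `Drop` of the old value followed by an assignment of the new one.
    let _t = TerminatorKind::DropAndReplace {
        location: Lvalue,
        value: Operand,
        target: BasicBlock(1),
        unwind: Some(BasicBlock(2)),
    };
}
```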
Currently, all switches in MIR are exhaustive, meaning that we can have
a lot of arms that all go to the same basic block. The extreme case is
an if-let expression, which results in just 2 possible cases but might
end up with hundreds of arms for large enums.
To improve this situation and give LLVM less code to chew on, we can
detect whether there's a predominant target basic block in a switch
and then promote it to be the default target, not translating the
corresponding arms at all.
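A sketch of that detection under my own naming (nothing here is the actual trans code, and the majority threshold is illustrative):
```rust
use std::collections::HashMap;

// Pick a target as the switch's default if a clear majority of arms
// jump to it; those arms then don't need to be translated at all.
fn predominant_target(targets: &[usize]) -> Option<usize> {
    let mut counts: HashMap<usize, usize> = HashMap::new();
    for &t in targets {
        *counts.entry(t).or_insert(0) += 1;
    }
    // "Predominant" here: more than half of all arms jump to it.
    counts
        .into_iter()
        .find(|&(_, n)| n * 2 > targets.len())
        .map(|(t, _)| t)
}

fn main() {
    // An if-let over a large enum: many arms, almost all to block 7.
    let targets = [3, 7, 7, 7, 7, 7, 7, 7];
    assert_eq!(predominant_target(&targets), Some(7));
}
```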
In combination with #33544, this makes unoptimized MIR trans of
nickel.rs as fast as old trans and greatly improves the times for
optimized builds, which are now only 30-40% slower instead of ~300%.
cc #33111