It's only used in one place, and there we clone and then make a bunch of
modifications. It's clearer if we duplicate more explicitly, and there's
a symmetry now between `sequence()` and `empty_sequence()`.
`parse_tt` needs a way to get from within submatchers back to the
enclosing submatchers. Currently it has two distinct mechanisms for
this:
- `Delimited` submatchers use `MatcherPos::stack` to record stuff about
the parent (and further back ancestors).
- `Sequence` submatchers use `MatcherPosSequence::parent` to point to
the parent matcher position.
Having two mechanisms is really confusing, and it took me a long time to
understand all this.
This commit eliminates `MatcherPos::stack`, and changes `Delimited`
submatchers to use the same mechanism as sequence submatchers. That
mechanism is also changed a bit: instead of storing the entire parent
`MatcherPos`, we now only store the necessary parts from the parent
`MatcherPos`.
Overall this is a small performance win, with the positives outweighing
the negatives, but it's mostly for clarity.
Spellchecking compiler comments
This PR cleans up the rest of the spelling mistakes in the compiler comments. It does not change any spelling in string literals or code.
Currently, we detect an exit from a `Delimited` submatcher when `idx`
exceeds the bounds of the current submatcher *and* there is a `stack`
entry.
This commit changes it to something simpler: just look for a
`CloseDelim` token.
Yet more `parse_tt` improvements
Including lots of comment improvements, and an overhaul of how `matches` work that gives big speedups.
r? `@petrochenkov`
Currently, matches within a sequence are recorded in a new empty
`matches` vector. Then when the sequence finishes the matches are merged
into the `matches` vector of the parent.
This commit changes things so that a sequence matcher position inherits the matches
made so far. This means that additional matches from the sequence don't
need to be merged into the parent. `push_match` becomes more
complicated, and the current sequence depth needs to be tracked. But
it's a sizeable performance win because it avoids one or more
`push_match` calls on every iteration of a sequence.
The commit also removes `match_hi`, which is no longer necessary.
Ignore doc comments in a declarative macro matcher.
Fixes #95267. Reverts to the old behaviour before #95159 introduced a
regression.
r? `@petrochenkov`
Remove `Nonterminal::NtTT`.
It's only needed for macro expansion, not as a general element in the
AST. This commit removes it, adds `NtOrTt` for the parser and macro
expansion cases, and renames the variants in `NamedMatch` to better
match the new type.
r? `@petrochenkov`
Remove `Session::one_time_diagnostic`
This is untracked mutable state, which modified the behaviour of queries.
It was used for 2 things: some full-blown errors, but mostly for lint declaration notes ("the lint level is defined here" notes).
It is replaced by the diagnostic deduplication infra which already exists in the diagnostic emitter.
A new diagnostic level `OnceNote` is introduced specifically for lint notes, to deduplicate subdiagnostics.
As a drive-by, diagnostic emission takes a `&mut` to allow dropping the `SubDiagnostic`s.
Currently it copies a `KleeneOp` and a `Token` out of a
`SequenceRepetition`. It's better to store a reference to the
`SequenceRepetition`, which is now possible due to #95159 having changed
the lifetimes.
The `Lrc` is only relevant within `transcribe()`. There, the `Lrc` is
helpful for the non-`NtTT` cases, because the entire nonterminal is
cloned. But for the `NtTT` cases the inner token tree is cloned (a full
clone) and so the `Lrc` is of no help.
This commit splits the `NtTT` and non-`NtTT` cases, avoiding the useless
`Lrc` in the former case, for the following effect on macro-heavy
crates.
- It reduces the total number of allocations a lot.
- It increases the size of some of the remaining allocations.
- It doesn't affect *peak* memory usage, because the larger allocations
are short-lived.
This overall gives a speed win.
Introduce `TtParser`
These commits make a number of changes to declarative macro expansion, resulting in code that is shorter, simpler, and faster.
Best reviewed one commit at a time.
r? `@petrochenkov`
As its name suggests, `TokenTreeOrTokenTreeSlice` is either a single
`TokenTree` or a slice of them. It has methods `len` and `get_tt` that
let it be treated much like an ordinary slice. The reason it isn't an
ordinary slice is that for `TokenTree::Delimited` the open and close
delimiters are represented implicitly, and when they are needed they are
constructed on the fly with `Delimited::{open,close}_tt`, rather than
being present in memory.
This commit changes `Delimited` so the open and close delimiters are
represented explicitly. As a result, `TokenTreeOrTokenTreeSlice` is no
longer needed and `MatcherPos` and `MatcherTtFrame` can just use an
ordinary slice. `TokenTree::{len,get_tt}` are also removed, because they
were only needed to support `TokenTreeOrTokenTreeSlice`.
The change makes the code shorter and a little bit faster on benchmarks
that use macro expansion heavily, partly because `MatcherPos` is a lot
smaller (less data to `memcpy`) and partly because ordinary slice
operations are faster than `TokenTreeOrTokenTreeSlice::{len,get_tt}`.
By putting them in `TtParser`, we can reuse them for every rule in a
macro. With that done, they can be `Vec` instead of `SmallVec`, which is
a performance win because these vectors are hot and `SmallVec` operations
are a bit slower due to always needing an "inline or heap?" check.
This type was a small performance win for `html5ever`, which uses a
macro with hundreds of very simple rules that don't contain any
metavariables. But this type is complicated (extra lifetimes) and
perf-neutral for macros that do have metavariables.
This commit removes `MatcherPosHandle`, simplifying things a lot. This
increases the allocation rate for `html5ever` and similar cases a bit,
but makes things easier for follow-up changes that will improve
performance more than what we lost here.
It currently has no state, just the three methods `parse_tt`,
`parse_tt_inner`, and `bb_items_ambiguity_error`.
This commit is large but trivial, and mostly consists of changes to the
indentation of those methods. Subsequent commits will do more.
There are a few places where we have to construct it, though, and a few
places that are more invasive to change. To do this, we create a
constructor with a long obvious name.
More robust fallback for `use` suggestion
Our old way to suggest where to add `use`s would first look for pre-existing `use`s in the relevant crate/module, and if there are *no* uses, it would fall back on trying to use another item as the basis for the suggestion.
But this was fragile, as illustrated in issue #87613.
This PR instead identifies the span of the first token after any inner attributes, and uses *that* as the fallback for the `use` suggestion.
Fix #87613
The current structure makes it hard to tell that there are just four
distinct code paths, depending on how many items there are in `bb_items`
and `next_items`. This commit introduces a `match` that clarifies
things.
Ensure stability directives are checked in all cases
Split off #93017
Stability and deprecation were not checked in all cases, for instance if a type error happened.
This PR moves the check earlier in the pipeline to ensure the errors are emitted in all cases.
r? `@lcnr`
Fix invalid lint_node_id being put on a removed stmt
This pull request removes an invalid `assign_id!` being put on a stmt node.
The problem is that this node is removed by a cfg, making it unreachable when a buffered lint is triggered.
The comment in the other match arm already says not to assign an id because the node could have a `#[cfg()]`, so this just respects that comment.
Fixes https://github.com/rust-lang/rust/issues/94523
r? `@petrochenkov`
There are three `Option` fields in `MatcherPos` that are only used in
tandem. This commit combines them, making the code slightly easier to
read. (It also makes clear that the `sep` field arguably should have
been `Option<Option<Token>>`!)
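A rough sketch of the shape of this change (the field names and types below are simplified placeholders rather than the real `MatcherPos` definition):
```rust
#![allow(dead_code)]

// Placeholder stand-ins for the real rustc-internal types.
#[derive(Clone, Copy)]
enum KleeneOp { ZeroOrMore, OneOrMore, ZeroOrOne }
struct Token;

// Before: three Options that are only ever Some (or None) together.
struct MatcherPosBefore {
    seq_op: Option<KleeneOp>,
    sep: Option<Token>,
    seq_depth: Option<usize>,
}

// After: a single Option wrapping the fields that are used in tandem. The
// separator stays optional inside, which is the sense in which the old `sep`
// field was really an `Option<Option<Token>>`.
struct SequenceState {
    op: KleeneOp,
    sep: Option<Token>,
    depth: usize,
}

struct MatcherPosAfter {
    sequence: Option<SequenceState>,
}
```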
To avoid the strange style where comments force `else` onto its own
line.
The commit also removes several else-after-return constructs, which can
be hard to read.
Improve allowness of the unexpected_cfgs lint
This pull request improves the allow behaviour (`#[allow(...)]`) of the `unexpected_cfgs` lint.
Before this PR, only a crate-level `#![allow(unexpected_cfgs)]` worked; now it also works when put around `cfg!` or at an upper level. Making it work ~for the attributes `cfg`, `cfg_attr`, ...~ at the same level is awkward, as the current code is designed to give "Some parent node that is close to this macro call" (cf. https://doc.rust-lang.org/nightly/nightly-rustc/rustc_expand/base/struct.ExpansionData.html), meaning that an allow on the same line as an attribute won't work. I'm not even sure if this would be possible.
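As a minimal sketch of the newly supported placement (assuming the crate is compiled with `--check-cfg` so that the unknown `feature` value actually triggers the lint):
```rust
// Before this change, only a crate-level #![allow(unexpected_cfgs)] had any
// effect; now an allow on an enclosing item also silences the lint for cfg!.
#[allow(unexpected_cfgs)]
fn probe() -> bool {
    cfg!(feature = "nonexistent-feature")
}

fn main() {
    println!("{}", probe());
}
```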
Found while working on https://github.com/rust-lang/rust/pull/94298.
r? `@petrochenkov`
* Recover from invalid `'label: ` before block.
* Make suggestion to enclose statements in a block multipart.
* Point at `match`, `while`, `loop` and `unsafe` keywords when failing
to parse their expression.
* Do not suggest `{ ; }`.
* Do not suggest `|` when very unlikely to be what was wanted (in `let`
statements).
As an example:
```rust
#[test]
#[ignore = "not yet implemented"]
fn test_ignored() {
    ...
}
```
Will now render as:
```
running 2 tests
test tests::test_ignored ... ignored, not yet implemented
test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s
```
Adopt let else in more places
Continuation of #89933, #91018, #91481, #93046, #93590, #94011.
I have extended my clippy lint to also recognize tuple passing and match statements. The diff caused by fixing it is well above a thousand lines. Thus, I split it up into multiple pull requests to make reviewing easier. This is the biggest of these PRs and handles the changes outside of rustdoc, rustc_typeck, rustc_const_eval, and rustc_trait_selection, which were handled in PRs #94139, #94142, #94143, and #94144.
expand: Pick `cfg`s and `cfg_attrs` one by one, like other attributes
This is a rebase of https://github.com/rust-lang/rust/pull/83354, but without any language-changing parts ~(except for https://github.com/rust-lang/rust/pull/84110)~, i.e. the attribute expansion order is the same.
This is a prerequisite for any other changes making cfg attributes closer to regular macro attributes:
- Possibly changing their expansion order (https://github.com/rust-lang/rust/issues/83331)
- Keeping macro backtraces for cfg attributes, or otherwise making them visible after expansion without keeping them in place literally (https://github.com/rust-lang/rust/pull/84110).
Two exceptions to the "one by one" behavior are:
- cfgs eagerly expanded by `derive` and `cfg_eval`, they are still expanded in a batch, that's by design.
- cfgs at the crate root, they are currently expanded not during the main expansion pass, but before that, during `#![feature]` collection. I'll try to disentangle that logic later in a separate PR.
r? `@Aaron1011`
Replace usages of vec![].into_iter with [].into_iter
`[].into_iter` is idiomatic over `vec![].into_iter` because it's simpler and faster (unless the `Vec` is optimized away, in which case they perform the same).
So we should change all the implementation, documentation and tests to use it.
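For illustration (a small sketch; on edition 2021 both loops yield owned values, the array form just avoids building a `Vec` first):
```rust
fn main() {
    // Preferred: iterate an array directly, no heap allocation required.
    for x in [1, 2, 3].into_iter() {
        println!("{}", x);
    }

    // Works the same, but constructs a Vec first (unless it's optimized away).
    for x in vec![1, 2, 3].into_iter() {
        println!("{}", x);
    }
}
```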
I skipped:
* `src/tools` - Those are copied in from upstream
* `src/test/ui` - Hard to tell if `vec![].into_iter` was used intentionally or not here and not much benefit to changing it.
* any case where `vec![].into_iter` was used because we specifically needed a `Vec::IntoIter<T>`
* any case where it looked like we were intentionally using `vec![].into_iter` to test it.
ast: Avoid aborts on fatal errors thrown from mutable AST visitor
Set the node to some dummy value and rethrow the error instead.
When using the old aborting `visit_clobber` in `InvocationCollector::visit_crate`, the following tests abort due to fatal errors:
```
ui\modules\path-invalid-form.rs
ui\modules\path-macro.rs
ui\modules\path-no-file-name.rs
ui\parser\issues\issue-5806.rs
ui\parser\mod_file_with_path_attr.rs
```
Follow up to https://github.com/rust-lang/rust/pull/91313.
Remove `SymbolStr`
This was originally proposed in https://github.com/rust-lang/rust/pull/74554#discussion_r466203544. As well as removing the icky `SymbolStr` type, it allows the removal of a lot of `&` and `*` occurrences.
Best reviewed one commit at a time.
r? `@oli-obk`
By changing `as_str()` to take `&self` instead of `self`, we can just
return `&str`. We're still lying about lifetimes, but it's a smaller lie
than before, where `SymbolStr` contained a (fake) `&'static str`!
This feature is aimed at giving proc macros access to powers similar to
those used by builtin macros such as `format_args!` or `concat!`. These
macros are able to accept macros in place of string literal parameters,
such as the format string, as they perform recursive macro expansion
while being expanded.
This can be especially useful in many cases thanks to helper macros like
`concat!`, `stringify!` and `include_str!` which are often used to
construct string literals at compile-time in user code.
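For instance, here is the existing builtin-macro behaviour that this feature aims to make available to proc macros (an illustrative sketch):
```rust
fn main() {
    // println!/format_args! accept a macro call in place of the format
    // string, because builtin macros expand their arguments recursively.
    println!(concat!("answer", " = ", "{}"), 42);

    // concat! and stringify! are often used to build string literals at
    // compile time.
    const NAME: &str = concat!("fn ", stringify!(main));
    println!("{}", NAME);
}
```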
For now, this method only allows expanding macros which produce
literals, although more expressions will be supported before the method
is stabilized.
The only reason to use `abort_if_errors` is when the program is so broken that either:
1. later passes get confused and ICE
2. any diagnostics from later passes would be noise
This is never the case for lints, because the compiler has to be able to deal with `allow`-ed lints.
So it can continue to lint and compile even if there are lint errors.
rustc_ast: Turn `MutVisitor::token_visiting_enabled` into a constant
It's a visitor property rather than something that needs to be determined at runtime
Adopt let_else across the compiler
This performs a substitution of code following the pattern:
```
let <id> = if let <pat> = ... { identity } else { ... : ! };
```
To simplify it to:
```
let <pat> = ... else { ... : ! };
```
By adopting the `let_else` feature (cc #87335).
The PR also updates the syn crate because the currently used version of the crate doesn't support `let_else` syntax yet.
Note: Generally I'm the person who *removes* usages of unstable features from the compiler, not adds more usages of them, but in this instance I think it hopefully helps the feature get stabilized sooner and in a better state. I have written a [comment](https://github.com/rust-lang/rust/issues/87335#issuecomment-944846205) on the tracking issue about my experience and what I feel could be improved before stabilization of `let_else`.
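As a small, self-contained instance of the transformation (a sketch using a made-up function; the feature gate was still required on nightly at the time of this PR):
```rust
#![feature(let_else)]

fn first_char(s: &str) -> char {
    // Before: an if-let whose success arm just returns the binding.
    // let c = if let Some(c) = s.chars().next() { c } else { return '?' };

    // After: let-else binds the pattern directly; the else block must diverge.
    let Some(c) = s.chars().next() else { return '?' };
    c
}

fn main() {
    assert_eq!(first_char("rust"), 'r');
    assert_eq!(first_char(""), '?');
}
```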
Fix linting when trailing macro expands to a trailing semi
When a macro is used in the trailing expression position of a block
(e.g. `fn foo() { my_macro!() }`), we currently parse it as an
expression, rather than a statement. As a result, we ended up
using the `NodeId` of the containing statement as our `lint_node_id`,
even though we don't normally do this for macro calls.
If such a macro expands to an expression with a `#[cfg]` attribute,
then the trailing statement can get removed entirely. This led to
an ICE, since we were using the `NodeId` of the expression to emit
a lint.
This commit makes us skip updating `lint_node_id` when handling
a macro in trailing expression position. This will cause us to
lint at the closest parent of the macro call.
Encode spans relative to the enclosing item
The aim of this PR is to avoid recomputing queries when code is moved without modification.
MCP at https://github.com/rust-lang/compiler-team/issues/443
This is achieved by:
1. storing the HIR owner LocalDefId information inside the span;
2. encoding and decoding spans relative to the enclosing item in the incremental on-disk cache;
3. marking a dependency to the `source_span(LocalDefId)` query when we translate a span from the short (`Span`) representation to its explicit (`SpanData`) representation.
Since all client code uses `Span`, step 3 ensures that all manipulations
of span byte positions actually create the dependency edge between
the caller and the `source_span(LocalDefId)`.
This query returns the actual absolute span of the parent item.
As a consequence, any source code motion that changes the absolute byte position of a node will either:
- modify the distance to the parent's beginning, so change the relative span's hash;
- dirty `source_span`, and trigger the incremental recomputation of all code that
depends on the span's absolute byte position.
With this scheme, I believe the dependency tracking to be accurate.
For the moment, the spans are marked during lowering.
I'd rather do this during def-collection,
but the AST MutVisitor is not practical enough just yet.
The only difference is that we attach macro-expanded spans
to their expansion point instead of the macro itself.
Add proc_macro::Span::{before, after}.
This adds `proc_macro::Span::before()` and `proc_macro::Span::after()` to get a zero width span at the start or end of the span.
These are equivalent to rustc's `Span::shrink_to_lo()` and `Span::shrink_to_hi()` but with a less cryptic name. They are useful when generating diagnostics like "missing \<thing\> after \<thing\>".
E.g.
```rust
syn::Error::new(ident.span().after(), "missing `:` after field name").into_compile_error()
```
Detect bare blocks with type ascription that were meant to be a `struct` literal
Address part of #34255.
Potential improvement: silence the other knock down errors in `issue-34255-1.rs`.
expand: Treat more macro calls as statement macro calls
This PR implements the suggestion from https://github.com/rust-lang/rust/pull/87981#issuecomment-906641052 and treats fn-like macro calls inside `StmtKind::Item` and `StmtKind::Semi` as statement macro calls, which is consistent with treatment of attribute invocations in the same positions and with token-based macro expansion model in general.
This also allows us to remove a special case in `NodeId` assignment (previously tried in #87779), and to use statement `NodeId`s for linting (`assign_id!`).
r? `@Aaron1011`
Path remapping: Make behavior of diagnostics output dependent on presence of --remap-path-prefix.
This PR fixes a regression (#87745) with `--remap-path-prefix` where the flag stopped causing diagnostic messages to be remapped as well. The regression was introduced in https://github.com/rust-lang/rust/pull/83813 where we erroneously assumed that remapping of diagnostic messages was not desired anymore (because #70642 partially undid that functionality with nobody objecting).
The issue is fixed by making `--remap-path-prefix` remap diagnostic messages again, including for paths that have been remapped in upstream crates (e.g. the standard library). This means that "sysroot-localization" (implemented in #70642) is also disabled if `rustc` is invoked with `--remap-path-prefix`. The assumption is that once someone starts explicitly remapping paths they also don't want paths to their local Rust installation in their build output.
In the future we might want to give more fine-grained control over this behavior via compiler flags (see https://github.com/rust-lang/rfcs/pull/3127 for a related RFC). For now this PR is intended as a regression fix.
This PR is an alternative to https://github.com/rust-lang/rust/pull/88191, which makes diagnostic messages be remapped unconditionally. That approach, however, would effectively revert #70642.
Fixes https://github.com/rust-lang/rust/issues/87745.
cc `@cbeuw`
r? `@ghost`
Instead of updating global state to mark attributes as used,
we now explicitly emit a warning when an attribute is used in
an unsupported position. As a side effect, we are able to emit more
detailed warning messages (instead of just a generic "unused" message).
`Session.check_name` is removed, since its only purpose was to mark
the attribute as used. All of the callers are modified to use
`Attribute.has_name`
Additionally, `AttributeType::AssumedUsed` is removed - an 'assumed
used' attribute is implemented by simply not performing any checks
in `CheckAttrVisitor` for a particular attribute.
We no longer emit unused attribute warnings for the `#[rustc_dummy]`
attribute - it's an internal attribute used for tests, so it doesn't
make sense to treat it as 'unused'.
With this commit, a large source of global untracked state is removed.
Fixes #87877
This change interacts badly with `noop_flat_map_stmt`,
which synthesizes multiple statements with the same `NodeId`.
I'm working on a better fix that will still allow us to
remove this special case. For now, let's revert the change
to fix the ICE.
This reverts commit a4262cc984, reversing
changes made to 8ee962f88e.
Support negative numbers in Literal::from_str
proc_macro::Literal has allowed negative numbers in a single literal token ever since Rust 1.29, using https://doc.rust-lang.org/stable/proc_macro/struct.Literal.html#method.isize_unsuffixed and similar constructors.
```rust
let lit = proc_macro::Literal::isize_unsuffixed(-10);
```
However, the suite of constructors on Literal is not sufficient for all use cases, for example arbitrary precision floats, or custom suffixes in FFI macros.
```rust
let lit = proc_macro::Literal::f64_unsuffixed(0.101001000100001000001000000100000001); // :(
let lit = proc_macro::Literal::i???_suffixed(10ulong); // :(
```
For those, macros construct the literal using from_str instead, which preserves arbitrary precision, custom suffixes, base, and digit grouping.
```rust
let lit = "0.101001000100001000001000000100000001".parse::<Literal>().unwrap();
let lit = "10ulong".parse::<Literal>().unwrap();
let lit = "0b1000_0100_0010_0001".parse::<Literal>().unwrap();
```
However, until this PR it was not possible to construct a literal token that is **both** negative **and** preserving of arbitrary precision etc.
This PR fixes `Literal::from_str` to recognize negative integer and float literals.
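After this change, something like the following works inside a proc-macro crate (a sketch; `proc_macro::Literal` can only be used while a proc macro is running):
```rust
use proc_macro::Literal;

fn negative_literals() -> (Literal, Literal) {
    // A negative arbitrary-precision float, preserved verbatim.
    let f: Literal = "-0.101001000100001000001000000100000001".parse().unwrap();
    // A negative integer with a custom suffix.
    let i: Literal = "-10ulong".parse().unwrap();
    (f, i)
}
```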
Display an extra note for trailing semicolon lint with trailing macro
Currently, we parse macros at the end of a block
(e.g. `fn foo() { my_macro!() }`) as expressions, rather than
statements. This means that a macro invoked in this position
cannot expand to items or semicolon-terminated expressions.
In the future, we might want to start parsing these kinds of macros
as statements. This would make expansion more 'token-based'
(i.e. macro expansion behaves (almost) as if you just textually
replaced the macro invocation with its output). However,
this is a breaking change (see PR #78991), so it will require
further discussion.
Since the current behavior will not be changing any time soon,
we need to address the interaction with the
`SEMICOLON_IN_EXPRESSIONS_FROM_MACROS` lint. Since we are parsing
the result of macro expansion as an expression, we will emit a lint
if there's a trailing semicolon in the macro output. However, this
results in a somewhat confusing message for users, since it visually
looks like there should be no problem with having a semicolon
at the end of a block
(e.g. `fn foo() { my_macro!() }` => `fn foo() { produced_expr; }`)
To help reduce confusion, this commit adds a note explaining
that the macro is being interpreted as an expression. Additionally,
we suggest adding a semicolon after the macro *invocation* - this
will cause us to parse the macro call as a statement. We do *not*
use a structured suggestion for this, since the user may actually
want to remove the semicolon from the macro definition (allowing
the block to evaluate to the expression produced by the macro).
These attributes are currently discarded.
This may change in the future (see #63221), but for now,
placing inert attributes on a macro invocation does nothing,
so we should warn users about it.
Technically, it's possible for there to be an attribute macro
on the same macro invocation (or at a higher scope), which
inspects the inert attribute. For example:
```rust
#[look_for_inline_attr]
#[inline]
my_macro!()
#[look_for_nested_inline]
mod foo { #[inline] my_macro!() }
```
However, this would be a very strange thing to do.
Anyone running into this can manually suppress the warning.
When we need to emit a lint at a macro invocation, we currently use the
`NodeId` of its parent definition (e.g. the enclosing function). This
means that any `#[allow]` / `#[deny]` attributes placed 'closer' to the
macro (e.g. on an enclosing block or statement) will have no effect.
This commit computes a better `lint_node_id` in `InvocationCollector`.
When we visit/flat_map an AST node, we assign it a `NodeId` (earlier
than we normally would), and store that `NodeId` in the current
`ExpansionData`. When we collect a macro invocation, the current
`lint_node_id` gets cloned along with our `ExpansionData`, allowing it
to be used if we need to emit a lint later on.
This improves the handling of `#[allow]` / `#[deny]` for
`SEMICOLON_IN_EXPRESSIONS_FROM_MACROS` and some `asm!`-related lints.
The 'legacy derive helpers' lint retains its current behavior
(I've inlined the now-removed `lint_node_id` function), since
there isn't an `ExpansionData` readily available.
expand: Support helper attributes for built-in derive macros
This is needed for https://github.com/rust-lang/rust/pull/86735 (derive macro `Default` should have a helper attribute `default`).
With this PR we can specify helper attributes for built-in derives using syntax `#[rustc_builtin_macro(MacroName, attributes(attr1, attr2, ...))]` which mirrors equivalent syntax for proc macros `#[proc_macro_derive(MacroName, attributes(attr1, attr2, ...))]`.
Otherwise expansion infra was already ready for this.
The attribute parsing code is shared between proc macro derives and built-in macros (`fn parse_macro_name_and_helper_attrs`).
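The motivating use case from the linked PR would then look roughly like this (a sketch; `#[derive(Default)]` on enums with a `#[default]` helper attribute was gated as `derive_default_enum` at the time):
```rust
#[derive(Default)]
enum Padding {
    #[default]
    None,
    #[allow(dead_code)]
    Spaces(usize),
}

fn main() {
    assert!(matches!(Padding::default(), Padding::None));
}
```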
Rollup of 6 pull requests
Successful merges:
- #87085 (Search result colors)
- #87090 (Make BTreeSet::split_off name elements like other set methods do)
- #87098 (Unignore some pretty printing tests)
- #87099 (Upgrade `cc` crate to 1.0.69)
- #87101 (Suggest a path separator if a stray colon is found in a match arm)
- #87102 (Add GUI test for "go to first" feature)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
- The `Rustc::expn_id` field kept redundant information
- `SyntaxContext` is no longer thrown away before `save_proc_macro_span` because it's thrown away during metadata encoding anyway
Currently, we only point at the span of the macro argument. When the
macro call is itself generated by another macro, this can make it
difficult or impossible to determine which macro is responsible for
producing the error.
Remove unused feature gates
The first commit removes a usage of a feature gate, but I don't expect it to be controversial, as the feature gate was only used to work around a past limitation of Rust (closures never being `Clone`).
The second commit uses `#[allow_internal_unstable]` to avoid leaking the `trusted_step` feature gate usage from inside the index newtype macro. It didn't work for the `min_specialization` feature gate though.
The third commit removes (almost) all feature gates from the compiler that weren't used anyway.
As described in issue #85708, we currently do not properly decode
`SyntaxContext::root()` and `ExpnId::root()` from foreign crates. As a
result, when we decode a span from a foreign crate with
`SyntaxContext::root()`, we end up considering it to have the edition
of the *current* crate, instead of the foreign crate where it was
originally created.
A full fix for this issue will be a fairly significant undertaking.
Fortunately, it's possible to implement a partial fix, which gives us
the correct edition-dependent behavior for `:pat` matchers when the
macro is loaded from another crate. Since we have the edition of the
macro's defining crate available, we can 'recover' from seeing a
`SyntaxContext::root()` and use the edition of the macro's defining
crate.
Any solution to issue #85708 must reproduce the behavior of this
targeted fix - properly preserving a foreign `SyntaxContext::root()`
means (among other things) preserving its edition, which by definition
is the edition of the foreign crate itself. Therefore, this fix moves us
closer to the correct overall solution, and does not expose any new
incorrect behavior to macros.
Fix `--remap-path-prefix` not correctly remapping `rust-src` component paths and unify handling of path mapping with virtualized paths
This PR fixes #73167 ("Binaries end up containing path to the rust-src component despite `--remap-path-prefix`") by preventing real local filesystem paths from reaching compilation output if the path is supposed to be remapped.
`RealFileName::Named` introduced in #72767 is now renamed as `LocalPath`, because this variant wraps a (most likely) valid local filesystem path.
`RealFileName::Devirtualized` is renamed as `Remapped` to be used for paths remapped from a real path via the `--remap-path-prefix` argument, as well as for real paths inferred from a virtualized (during compiler bootstrapping) `/rustc/...` path. The `local_path` field is now an `Option<PathBuf>`, as it will be set to `None` before serialisation, so it never reaches any build output. Attempting to serialise a non-`None` `local_path` will cause an assertion failure.
When a path is remapped, a `RealFileName::Remapped` variant is created. The original path is preserved in `local_path` field and the remapped path is saved in `virtual_name` field. Previously, the `local_path` is directly modified which goes against its purpose of "suitable for reading from the file system on the local host".
`rustc_span::SourceFile`'s fields `unmapped_path` (introduced by #44940) and `name_was_remapped` (introduced by #41508 when the `--remap-path-prefix` feature was originally added) are removed, as these two pieces of information can be inferred from the `name` field: if it's anything other than a `FileName::Real(_)`, or if it is a `FileName::Real(RealFileName::LocalPath(_))`, then clearly `name_was_remapped` would've been false and `unmapped_path` would've been `None`. If it is a `FileName::Real(RealFileName::Remapped{local_path, virtual_name})`, then `name_was_remapped` would've been true and `unmapped_path` would've been `Some(local_path)`.
cc `@eddyb` who implemented `/rustc/...` path devirtualisation
This PR implements span quoting, allowing proc-macros to produce spans
pointing *into their own crate*. This is used by the unstable
`proc_macro::quote!` macro, allowing us to get error messages like this:
```
error[E0412]: cannot find type `MissingType` in this scope
--> $DIR/auxiliary/span-from-proc-macro.rs:37:20
|
LL | pub fn error_from_attribute(_args: TokenStream, _input: TokenStream) -> TokenStream {
| ----------------------------------------------------------------------------------- in this expansion of procedural macro `#[error_from_attribute]`
...
LL | field: MissingType
| ^^^^^^^^^^^ not found in this scope
|
::: $DIR/span-from-proc-macro.rs:8:1
|
LL | #[error_from_attribute]
| ----------------------- in this macro invocation
```
Here, `MissingType` occurs inside the implementation of the proc-macro
`#[error_from_attribute]`. Previously, this would always result in a
span pointing at `#[error_from_attribute]`.
This will make many proc-macro-related error messages much more useful -
when a proc-macro generates code containing an error, users will get an
error message pointing directly at that code (within the macro
definition), instead of always getting a span pointing at the macro
invocation site.
This is implemented as follows:
* When a proc-macro crate is being *compiled*, it causes the `quote!`
macro to get run. This saves all of the spans in the input to `quote!`
into the metadata of *the proc-macro-crate* (which we are currently
compiling). The `quote!` macro then expands to a call to
`proc_macro::Span::recover_proc_macro_span(id)`, where `id` is an
opaque identifier for the span in the crate metadata.
* When the same proc-macro crate is *run* (e.g. it is loaded from disk
and invoked by some consumer crate), the call to
`proc_macro::Span::recover_proc_macro_span` causes us to load the span
from the proc-macro crate's metadata. The proc-macro then produces a
`TokenStream` containing a `Span` pointing into the proc-macro crate
itself.
The recursive nature of 'quote!' can be difficult to understand at
first. The file `src/test/ui/proc-macro/quote-debug.stdout` shows
the output of the `quote!` macro, which should make this easier to
understand.
This PR also supports custom quoting spans in custom quote macros (e.g.
the `quote` crate). All span quoting goes through the
`proc_macro::quote_span` method, which can be called by a custom quote
macro to perform span quoting. An example of this usage is provided in
`src/test/ui/proc-macro/auxiliary/custom-quote.rs`
Custom quoting currently has a few limitations:
In order to quote a span, we need to generate a call to
`proc_macro::Span::recover_proc_macro_span`. However, proc-macros
support renaming the `proc_macro` crate, so we can't simply hardcode
this path. Previously, the `quote_span` method used the path
`crate::Span` - however, this only works when it is called by the
builtin `quote!` macro in the same crate. To support being called from
arbitrary crates, we need access to the name of the `proc_macro` crate
to generate a path. This PR adds an additional argument to `quote_span`
to specify the name of the `proc_macro` crate. However, this feels kind
of hacky, and we may want to change this before stabilizing anything
quote-related.
Additionally, using `quote_span` currently requires enabling the
`proc_macro_internals` feature. The builtin `quote!` macro
has an `#[allow_internal_unstable]` attribute, but this won't work for
custom quote implementations. This will likely require some additional
tricks to apply `allow_internal_unstable` to the span of
`proc_macro::Span::recover_proc_macro_span`.
Unify rustc and rustdoc parsing of `cfg()`
This extracts a new `parse_cfg` function that's used between both.
- Treat `#[doc(cfg(x), cfg(y))]` the same as `#[doc(cfg(x))]
#[doc(cfg(y))]`. Previously it would be completely ignored.
- Treat `#[doc(inline, cfg(x))]` the same as `#[doc(inline)]
#[doc(cfg(x))]`. Previously, the cfg would be ignored.
- Pass the cfg predicate through to rustc_expand to be validated
Technically this is a breaking change, but doc_cfg is still nightly so I don't think it matters.
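For example, after this change the following two items get the same `cfg` annotations in rustdoc output (a sketch; requires the nightly `doc_cfg` feature):
```rust
#![feature(doc_cfg)]

// One doc attribute carrying two cfg predicates...
#[doc(cfg(unix), cfg(feature = "extra"))]
pub struct Combined;

// ...is now treated the same as two separate doc(cfg(...)) attributes.
#[doc(cfg(unix))]
#[doc(cfg(feature = "extra"))]
pub struct Separate;
```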
Fixes https://github.com/rust-lang/rust/issues/84437.
r? `@petrochenkov`
Co-authored-by: Vadim Petrochenkov <vadim.petrochenkov@gmail.com>
Implement RFC 1260 with feature_name `imported_main`.
This is the second extraction part of #84062 plus additional adjustments.
This (mostly) implements RFC 1260.
However there's still one test case failure in the extern crate case. Maybe `LocalDefId` doesn't work here? I'm not sure.
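A minimal sketch of what the `imported_main` feature permits (nightly-only at the time, behind the feature gate):
```rust
#![feature(imported_main)]

mod submodule {
    pub fn main() {
        println!("entry point imported with `use`");
    }
}

// RFC 1260: the crate entry point may now be brought in by a `use`
// declaration instead of being defined at the crate root.
use submodule::main;
```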
cc https://github.com/rust-lang/rust/issues/28937
r? `@petrochenkov`
This PR modifies the macro expansion infrastructure to handle attributes
in a fully token-based manner. As a result:
* Derive macros no longer lose spans when their input is modified
by eager cfg-expansion. This is accomplished by performing eager
cfg-expansion on the token stream that we pass to the derive
proc-macro
* Inner attributes now preserve spans in all cases, including when we
have multiple inner attributes in a row.
This is accomplished through the following changes:
* New structs `AttrAnnotatedTokenStream` and `AttrAnnotatedTokenTree` are introduced.
These are very similar to a normal `TokenTree`, but they also track
the position of attributes and attribute targets within the stream.
They are built when we collect tokens during parsing.
An `AttrAnnotatedTokenStream` is converted to a regular `TokenStream` when
we invoke a macro.
* Token capturing and `LazyTokenStream` are modified to work with
`AttrAnnotatedTokenStream`. A new `ReplaceRange` type is introduced, which
is created during the parsing of a nested AST node to make the 'outer'
AST node aware of the attributes and attribute target stored deeper in the token stream.
* When we need to perform eager cfg-expansion (either due to `#[derive]` or `#[cfg_eval]`),
we tokenize and reparse our target, capturing additional information about the locations of
`#[cfg]` and `#[cfg_attr]` attributes at any depth within the target.
This is a performance optimization, allowing us to perform less work
in the typical case where captured tokens never have eager cfg-expansion run.
expand: Do not ICE when a legacy AST-based macro attribute produces an empty expression
Fixes https://github.com/rust-lang/rust/issues/80251
The reported error is the same as for `let _ = #[cfg(FALSE)] EXPR;`
Found with https://github.com/est31/warnalyzer.
Dubious changes:
- Is anyone else using rustc_apfloat? I feel weird completely deleting
x87 support.
- Maybe some of the dead code in rustc_data_structures, in case someone
wants to use it in the future?
- Don't change rustc_serialize
I plan to scrap most of the json module in the near future (see
https://github.com/rust-lang/compiler-team/issues/418) and fixing the
tests needed more work than I expected.
TODO: check if any of the comments on the deleted code should be kept.
Add function core::iter::zip
This makes it a little easier to `zip` iterators:
```rust
for (x, y) in zip(xs, ys) {}
// vs.
for (x, y) in xs.into_iter().zip(ys) {}
```
You can `zip(&mut xs, &ys)` for the conventional `iter_mut()` and
`iter()`, respectively. This can also support arbitrary nesting, where
it's easier to see the item layout than with arbitrary `zip` chains:
```rust
for ((x, y), z) in zip(zip(xs, ys), zs) {}
for (x, (y, z)) in zip(xs, zip(ys, zs)) {}
// vs.
for ((x, y), z) in xs.into_iter().zip(ys).zip(zs) {}
for (x, (y, z)) in xs.into_iter().zip(ys.into_iter().zip(zs)) {}
```
It may also format more nicely, especially when the first iterator is a
longer chain of methods -- for example:
```rust
iter::zip(
trait_ref.substs.types().skip(1),
impl_trait_ref.substs.types().skip(1),
)
// vs.
trait_ref
.substs
.types()
.skip(1)
.zip(impl_trait_ref.substs.types().skip(1))
```
This replaces the tuple-pair `IntoIterator` in #78204.
There is prior art for the utility of this in [`itertools::zip`].
[`itertools::zip`]: https://docs.rs/itertools/0.10.0/itertools/fn.zip.html
With this PR, we now lint for all cases where we perform some kind of
proc-macro back-compat hack.
The `js-sys` crate had an internal fix made to properly handle
`None`-delimited groups, so we need to manually check the version in the
filename. As a result, we no longer apply the back-compat hack to cases
where the version number is missing from the file path. This should not
affect any users of the `crates.io` crate.
Unlike the other cases of this lint, there's no simple way to detect if
an old version of the relevant crate (`syn`) is in use. The `actix-web`
crate only depends on `pin-project` v1.0.0, so checking the version of
`actix-web` does not guarantee that a new enough version of
`pin-project` (and therefore `syn`) is in use.
Instead, we rely on the fact that virtually all of the regressed crates
are pinned to a pre-1.0 version of `pin-project`. When this is the case,
bumping the `actix-web` dependency will pull in the *latest* version of
`pin-project`, which has an explicit dependency on a newer version of `syn`.
The lint message tells users to update `actix-web`, since that's what
they're most likely to have control over. We could potentially tell them
to run `cargo update -p syn`, but I think it's more straightforward to
suggest an explicit change to the `Cargo.toml`
The `actori-web` fork had its last commit over a year ago, and appears
to just be a renamed fork of `actix-web`. Therefore, I've removed the
`actori-web` check entirely - any crates that actually get broken can
simply update `syn` themselves.
Extend `proc_macro_back_compat` lint to `procedural-masquerade`
We now lint on *any* use of `procedural-masquerade` crate. While this
crate still exists, its main reverse dependency (`cssparser`) no longer
depends on it. Any crates still depending on it should stop doing so, as
it only exists to support very old Rust versions.
If a crate actually needs to support old versions of rustc via
`procedural-masquerade`, then they'll just need to accept the warning
until we remove it entirely (at the same time as the back-compat hack).
The latest version of `procedural-masquerade` does work with the
latest rustc, but trying to check for the version seems like more
trouble than it's worth.
While working on this, I realized that the `proc-macro-hack` check was
never actually doing anything. The corresponding enum variant in
`proc-macro-hack` is named `Value` or `Nested` - it has never been
called `Input`. Due to a strange Crater issue, the Crater run that
tested adding this did *not* end up testing it - some of the crates that
would have failed did not actually have their tests checked, making it
seem as though the `proc-macro-hack` check was working.
The Crater issue is being discussed at
https://rust-lang.zulipchat.com/#narrow/stream/242791-t-infra/topic/Nearly.20identical.20Crater.20runs.20processed.20a.20crate.20differently/near/230406661
Despite the `proc-macro-hack` check not actually doing anything, we
haven't gotten any reports from users about their build being broken.
I went ahead and removed it entirely, since it's clear that no one is
being affected by the `proc-macro-hack` regression in practice.
StructField -> FieldDef ("field definition")
Field -> ExprField ("expression field", not "field expression")
FieldPat -> PatField ("pattern field", not "field pattern")
Also rename visiting and other methods working on them.
Now that future-incompat-report support has landed in nightly Cargo, we
can start to make progress towards removing the various proc-macro
back-compat hacks that have accumulated in the compiler.
This PR introduces a new lint `proc_macro_back_compat`, which results in
a future-incompat-report entry being generated. All proc-macro
back-compat warnings will be grouped under this lint. Note that this
lint will never actually become a hard error - instead, we will remove
the special cases for various macros, which will cause older versions of
those crates to emit some other error.
I've added code to fire this lint for the `time-macros-impl` case. This
is the easiest case out of all of our current back-compat hacks - the
crate was renamed to `time-macros`, so seeing a filename with
`time-macros-impl` guarantees that an older version of the parent `time`
crate is in use.
When Cargo's future-incompat-report feature gets stabilized, affected
users will start to see future-incompat warnings when they build their
crates.
expand: Do not allocate `Lrc` for `allow_internal_unstable` list unless necessary
This allocation is done for any macro defined in the current crate, or used from a different crate.
EDIT: This also removes an `Lrc` increment from each *use* of such macro, which may be more significant.
Noticed when reviewing https://github.com/rust-lang/rust/pull/82367.
This probably doesn't matter, but let's do a perf run.