Update error to reflect that integer literals can have float suffixes
For example, `1` is parsed as an integer literal, but it can be turned
into a float with the suffix `f32`. Now the error calls them "numeric
literals" and notes that you can add a float suffix since they can be
either integers or floats.
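A quick illustration; the suffix alone decides what the literal becomes:
```rust
fn main() {
    let a = 1f32; // a float: the suffix turns the integer-looking literal into an `f32`
    let b = 1u8;  // an integer: `u8`
    // let c = 1bogus; // error: invalid suffix `bogus` for number literal
    assert_eq!(a, 1.0);
    assert_eq!(b, 1);
}
```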
rustc_parse: fix ConstBlock expr span
The span for a ConstBlock expression should presumably run through the end of the block it contains and not stop at the keyword, just as is done with similar block-containing expression kinds, such as `TryBlock`.
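For illustration, this is the expression in question (inline `const` blocks were nightly-only behind `inline_const` when this change landed; they are stable in later toolchains):
```rust
fn main() {
    // The expression's span should cover `const { 1 + 2 }`, not just the
    // `const` keyword, mirroring how `try { ... }` blocks are spanned.
    let x = const { 1 + 2 };
    assert_eq!(x, 3);
}
```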
Properly handle attributes on statements
We now collect tokens for the underlying node wrapped by `StmtKind`
instead of storing tokens directly in `Stmt`.
`LazyTokenStream` now supports capturing a trailing semicolon after it
is initially constructed. This allows us to avoid refactoring statement
parsing to wrap the parsing of the semicolon in `parse_tokens`.
Attributes on item statements
(e.g. `fn foo() { #[bar] struct MyStruct; }`) are now treated as
item attributes, not statement attributes, which is consistent with how
we handle attributes on other kinds of statements. The feature-gating
code is adjusted so that proc-macro attributes are still allowed on item
statements on stable.
Two built-in macros (`#[global_allocator]` and `#[test]`) needed to be
adjusted to support being passed `Annotatable::Stmt`.
When parsing a statement (e.g. inside a function body),
we now consider `struct Foo {};` and `$stmt;` to each consist
of two statements: `struct Foo {}` and `;`, and `$stmt` and `;`.
As a result, an attribute macro invoked as in
`fn foo() { #[attr] struct Bar{}; }` will see `struct Bar{}` as its
input. Additionally, the 'unused semicolon' lint now fires in more
places.
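For illustration, here is the new decomposition; `#[allow(dead_code)]` stands in for a proc-macro attribute, since the `#[attr]` above is not a real macro:
```rust
fn foo() {
    #[allow(dead_code)] // an attribute macro in this position now sees
    struct Bar {}       // exactly `struct Bar {}` as its input
    ;                   // parsed as a separate, empty statement; this is what
                        // can now trip the 'unused semicolon' lint
}
```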
Cache pretty-print/retokenize result to avoid compile time blowup
Fixes #79242
If a `macro_rules!` recursively builds up a nested nonterminal
(passing it to a proc-macro at each step), we will end up repeatedly
pretty-printing/retokenizing the same nonterminals. Unfortunately, the
'probable equality' check we do has a non-trivial cost, which leads to a
blowup in compilation time.
As a workaround, we cache the result of the 'probable equality' check,
which eliminates the compilation time blowup for the linked issue. This
commit only touches a single file (other than adding tests), so it
should be easy to backport.
The proper solution is to remove the pretty-print/retokenize hack
entirely. However, this will almost certainly break a large number of
crates that were relying on hygiene bugs created by using the reparsed
`TokenStream`. As a result, we will definitely not want to backport
such a change.
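A rough sketch of the shape that triggers the blowup; `step!` and `some_proc_macro!` are illustrative names, not taken from the linked issue:
```rust
// Each recursion step re-captures the accumulated expression as a fresh
// `:expr` nonterminal and hands it to a proc macro, so the 'probable
// equality' check used to re-run on the same, growing nonterminal at
// every step.
macro_rules! step {
    ($acc:expr,) => { $acc };
    ($acc:expr, $head:tt $($rest:tt)*) => {
        step!(some_proc_macro!($acc, $head), $($rest)*)
    };
}
```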
Make `_` an expression, to discard values in destructuring assignments
This is the third and final step towards implementing destructuring assignment (RFC: rust-lang/rfcs#2909, tracking issue: #71126). This PR is the third and final part of #71156, which was split up to allow for easier review.
With this PR, an underscore `_` is parsed as an expression but is allowed *only* on the left-hand side of a destructuring assignment. There it simply discards a value, similarly to the wildcard `_` in patterns. For instance,
```rust
(a, _) = (1, 2)
```
will simply assign 1 to `a` and discard the 2. Note that for consistency,
```
_ = foo
```
is also allowed and equivalent to just `foo`.
Thanks to `@varkor` who helped with the implementation, particularly around pre-expansion gating.
r? `@petrochenkov`
rustc_parse: Remove optimization for 0-length streams in `collect_tokens`
The optimization conflates empty token streams with unknown token streams, which is at least suspicious, and doesn't affect performance because 0-length token streams are very rare.
r? `@Aaron1011`
Implement destructuring assignment for structs and slices
This is the second step towards implementing destructuring assignment (RFC: rust-lang/rfcs#2909, tracking issue: #71126). This PR is the second part of #71156, which was split up to allow for easier review.
Note that the first PR (#78748) is not merged yet, so it is included as the first commit in this one. I thought this would allow the review to start earlier because I have some time this weekend to respond to reviews. If `@petrochenkov` prefers to wait until the first PR is merged, I totally understand, of course.
This PR implements destructuring assignment for (tuple) structs and slices. In order to do this, the following *parser change* was necessary: struct expressions are not required to have a base expression, i.e. `Struct { a: 1, .. }` becomes legal (in order to act like a struct pattern).
Unfortunately, this PR slightly regresses the diagnostics implemented in #77283. However, it is only a missing help message in `src/test/ui/issues/issue-77218.rs`. Other instances of this diagnostic are not affected. Since I don't exactly understand how this help message works and how to fix it yet, I was hoping it's OK to regress this temporarily and fix it in a follow-up PR.
Thanks to `@varkor` who helped with the implementation, particularly around the struct rest changes.
r? `@petrochenkov`
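A sketch of what the feature permits once this lands (it compiles on toolchains where destructuring assignment is available):
```rust
struct Point { x: i32, y: i32 }

fn main() {
    let (mut x, mut y) = (0, 0);
    println!("before: {} {}", x, y);
    Point { x, y } = Point { x: 1, y: 2 };   // struct destructuring assignment
    let (mut first, mut last) = (0, 0);
    println!("before: {} {}", first, last);
    [first, .., last] = [10, 20, 30, 40];    // slice destructuring assignment
    assert_eq!((x, y, first, last), (1, 2, 10, 40));
}
```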
Do not collect tokens for doc comments
A doc comment is a single token, and the AST has all the information needed to re-create it precisely.
Doc comments are also responsible for the majority of calls to `collect_tokens` (with `num_calls == 1` and `num_calls == 0`, cc https://github.com/rust-lang/rust/pull/78736).
(I also moved token collection into `fn parse_attribute` to deduplicate code a bit.)
r? `@Aaron1011`
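For context, a doc comment lowers to a `#[doc]` attribute, so the AST alone pins down its tokens exactly:
```rust
/// Adds two numbers.
pub fn add(a: i32, b: i32) -> i32 { a + b }

// The doc comment above is equivalent to this attribute form; either way it
// is a single token whose text the AST records verbatim:
#[doc = " Adds two numbers."]
pub fn add_attr(a: i32, b: i32) -> i32 { a + b }
```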
rustc_ast: Do not panic by default when visiting macro calls
Panicking by default made sense when we didn't have HIR or MIR and everything worked on the AST, but now all AST visitors run early, and the majority of them have to deal with macro calls, often by ignoring them.
The second commit renames `visit_mac` to `visit_mac_call`, the corresponding structures were renamed earlier in https://github.com/rust-lang/rust/pull/69589.
Fixes #78675
We now bail out of `prepend_attrs` if we ended up capturing any inner
attributes (which can happen in several places, due to token capturing
for `macro_rules!` arguments).
Adjust turbofish help message for const generics
Types are no longer special. (This message arguably only makes sense with `min_const_generics` or more, but we'll be there soon.)
r? @lcnr
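A small example of the case the old "type arguments" wording didn't cover:
```rust
fn zeros<const N: usize>() -> [i32; N] {
    [0; N]
}

fn main() {
    // `3` is a const generic argument supplied via turbofish, not a type
    let a = zeros::<3>();
    assert_eq!(a, [0, 0, 0]);
}
```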
Tweak invalid `fn` header and body parsing
* Rely on regular "expected"/"found" parser error for `fn`, fix #77115
* Recover empty `fn` bodies when encountering `}`
* Recover trailing `>` in return types
* Recover from non-type in array type `[<BAD TOKEN>; LEN]`
Suggest that expressions that look like const generic arguments should be enclosed in brackets
I pulled out the changes for const expressions from https://github.com/rust-lang/rust/pull/71592 (without the trait object diagnostic changes) and made some small changes; the implementation is `@estebank`'s.
We're also going to want to make some changes separately to account for trait objects (they result in poor diagnostics, as is evident from one of the test cases here), such as an adaption of https://github.com/rust-lang/rust/pull/72273.
Fixes https://github.com/rust-lang/rust/issues/70753.
r? `@petrochenkov`
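The suggestion in miniature: a non-trivial expression in const-argument position has to be wrapped in braces:
```rust
fn zeros<const N: usize>() -> [i32; N] {
    [0; N]
}

fn main() {
    // `zeros::<2 + 1>()` fails to parse; the diagnostic now suggests braces:
    let a = zeros::<{ 2 + 1 }>();
    assert_eq!(a.len(), 3);
}
```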
Tweak match arm semicolon removal suggestion to account for futures
* Tweak and extend "use `.await`" suggestions
* Suggest removal of semicolon on prior match arm
* Account for `impl Future` when suggesting semicolon removal
* Silence some errors when encountering `await foo()?` as we can't be certain what the intent was
*Thanks to https://twitter.com/a_hoverbear/status/1318960787105353728 for pointing this out!*
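A sketch of the kind of code these suggestions target (the function names are illustrative):
```rust
async fn compute() -> i32 { 1 }

pub async fn caller(flag: bool) -> i32 {
    match flag {
        // writing `{ compute().await; }` here would make the arm evaluate to
        // `()`; the compiler now suggests removing the semicolon and accounts
        // for `compute()` being an `impl Future` when suggesting `.await`
        true => compute().await,
        false => 0,
    }
}
```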
Unconditionally capture tokens for attributes.
This allows us to avoid synthesizing tokens in `prepend_attr`, since we
have the original tokens available.
We still need to synthesize tokens when expanding `cfg_attr`,
but this is an unavoidable consequence of the syntax of `cfg_attr` -
the user does not supply the `#` and `[]` tokens that a `cfg_attr`
expands to.
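Concretely, the `#` and `[]` for the expanded attribute below never appear in the source, so those tokens still have to be synthesized (a built-in derive is used here so the example stands alone):
```rust
// With `--cfg feature="extras"` this expands to `#[derive(Clone)]`:
#[cfg_attr(feature = "extras", derive(Clone))]
pub struct Config {
    pub retries: u32,
}
```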
This is based on PR https://github.com/rust-lang/rust/pull/77250 - this PR exposes a bug in the current `collect_tokens` implementation, which is fixed by the rewrite.
Rewrite `collect_tokens` implementations to use a flattened buffer
Instead of trying to collect tokens at each depth, we 'flatten' the
stream as we go along, pushing open/close delimiters to our buffer
just like regular tokens. Once capturing is complete, we reconstruct a
nested `TokenTree::Delimited` structure, producing a normal
`TokenStream`.
The reconstructed `TokenStream` is not created immediately - instead, it is
produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This
closure stores a clone of the original `TokenCursor`, plus a record of the
number of calls to `next()/next_desugared()`. This is sufficient to reconstruct
the tokenstream seen by the callback without storing any additional state. If
the tokenstream is never used (e.g. when a captured `macro_rules!` argument is
never passed to a proc macro), we never actually create a `TokenStream`.
This implementation has a number of advantages over the previous one:
* It is significantly simpler, with no edge cases around capturing the
start/end of a delimited group.
* It can be easily extended to allow replacing tokens at an arbitrary
'depth' by just using `Vec::splice` at the proper position. This is
important for PR #76130, which requires us to track information about
attributes along with tokens.
* The lazy approach to `TokenStream` construction allows us to easily
parse an AST struct, and then decide after the fact whether we need a
`TokenStream`. This will be useful when we start collecting tokens for
`Attribute` - we can discard the `LazyTokenStream` if the parsed
attribute doesn't need tokens (e.g. is a builtin attribute).
The performance impact seems to be negligible (see
https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a
small slowdown on a few benchmarks, but it only rises above 1% for incremental
builds, where it represents a larger fraction of the much smaller instruction
count. There is a ~1% speedup on a few other incremental benchmarks - my guess is
that the speedups and slowdowns will usually cancel out in practice.
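A heavily simplified sketch of the lazy scheme; `Token`, `TokenCursor`, and `LazyTokenStream` here are stand-ins for the rustc internals, not the real API:
```rust
#[derive(Clone, Debug, PartialEq)]
struct Token(String);

// Stand-in for rustc's token cursor: cloning it is cheap, and a clone plus a
// call count is enough to replay exactly what the parse consumed.
#[derive(Clone)]
struct TokenCursor {
    tokens: Vec<Token>,
    pos: usize,
}

impl TokenCursor {
    fn next(&mut self) -> Option<Token> {
        let tok = self.tokens.get(self.pos).cloned();
        self.pos += 1;
        tok
    }
}

// Stand-in for `LazyTokenStream`: nothing is built until someone asks.
struct LazyTokenStream {
    start: TokenCursor, // clone taken when capturing began
    num_calls: usize,   // how many `next()` calls the parse made
}

impl LazyTokenStream {
    fn create_token_stream(&self) -> Vec<Token> {
        let mut cursor = self.start.clone();
        (0..self.num_calls).filter_map(|_| cursor.next()).collect()
    }
}

fn main() {
    let start = TokenCursor {
        tokens: ["fn", "f", "(", ")"].iter().map(|s| Token(s.to_string())).collect(),
        pos: 0,
    };
    let lazy = LazyTokenStream { start, num_calls: 4 };
    // Only now is the captured stream actually materialized:
    assert_eq!(lazy.create_token_stream().len(), 4);
}
```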
rustc_parse: fix spans on cast and range exprs with attrs
Currently the span for cast and range expressions does not include the span of attributes associated with the lhs, which is causing some issues for us in rustfmt.
```rust
fn foo() -> i64 {
    #[attr]
    1u64 as i64
}

fn bar() -> Range<i32> {
    #[attr]
    1..2
}
```
This corrects the span for cast and range expressions to fully include the span of their child nodes.
Remove unused code
Rustc has a builtin lint for detecting unused code inside a crate, but when an item is marked `pub`, the code, even if unused inside the entire workspace, is never marked as such. Therefore, I've built [warnalyzer](https://github.com/est31/warnalyzer) to detect unused items in a cross-crate setting.
Closes https://github.com/est31/warnalyzer/issues/2
Fixes #74616
Makes progress towards #43081
Unblocks PR #76130
When pretty-printing an AST node, we may insert additional parentheses
to ensure that precedence is properly preserved in the code we output.
However, the proc macro implementation relies on comparing a
pretty-printed AST node to the captured `TokenStream`. Inserting extra
parentheses changes the structure of the reparsed `TokenStream`, making
the comparison fail.
This PR refactors the AST pretty-printing code to allow skipping the
insertion of additional parentheses. Several freestanding methods are
moved to trait methods on `PrintState`, which keep track of an internal
`insert_extra_parens` flag. This flag is normally `true`, but we expose
a public method which allows pretty-printing a nonterminal with
`insert_extra_parens = false`.
To avoid changing the public interface of `rustc_ast_pretty`, the
freestanding `_to_string` methods are changed to delegate to a
newly-created `State`. The main pretty-printing code is moved to a new
`state` module to ensure that it does not accidentally call any of these
public helper functions (instead, the internal functions with the same
name should be used).
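An illustration of the mismatch, with `inspect_tokens!` standing in for a proc macro; the captured tokens and the pretty-printed form differ only by the inserted parentheses:
```rust
// Suppose `$e` captures the tokens `1 + 2`. The AST keeps `$e` as a single
// expression node, so pretty-printing the surrounding expression inserts
// parentheses to preserve precedence:
//
//   captured tokens for `$e`:  1 + 2
//   pretty-printed `$e * 3`:   (1 + 2) * 3
//
// Comparing the printed form against the captured `TokenStream` then fails
// unless the printer can skip inserting those extra parentheses.
macro_rules! scale {
    ($e:expr) => {
        inspect_tokens!($e * 3) // hypothetical proc macro receiving the tokens
    };
}
```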
This approach lives exclusively in the parser, so struct expr bodies
that are syntactically correct on their own but are otherwise incorrect
will still emit confusing errors, like in the following case:
```rust
fn foo() -> Foo {
    bar: Vec::new()
}
```
```
error[E0425]: cannot find value `bar` in this scope
 --> src/file.rs:5:5
  |
5 |     bar: Vec::new()
  |     ^^^ expecting a type here because of type ascription

error[E0214]: parenthesized type parameters may only be used with a `Fn` trait
 --> src/file.rs:5:15
  |
5 |     bar: Vec::new()
  |               ^^^^^ only `Fn` traits may use parentheses

error[E0107]: wrong number of type arguments: expected 1, found 0
 --> src/file.rs:5:10
  |
5 |     bar: Vec::new()
  |          ^^^^^^^^^^ expected 1 type argument
```
If that field had a trailing comma, that would be a parse error and it
would trigger the new, more targeted, error:
```
error: struct literal body without path
 --> file.rs:4:17
  |
4 |   fn foo() -> Foo {
  |  _________________^
5 | |     bar: Vec::new(),
6 | | }
  | |_^
  |
help: you might have forgotten to add the struct literal inside the block
  |
4 | fn foo() -> Foo { Path {
5 |     bar: Vec::new(),
6 | } }
  |
```
Partially address the last part of #34255.
Makes progress towards #43081
In PR #73084, we started recursively expanding nonterminals during the
pretty-print/reparse check, allowing them to be properly compared
against the reparsed tokenstream.
Unfortunately, the recursive logic in that PR only handles the case
where a nonterminal appears inside a `TokenTree::Delimited`. If a
nonterminal appears directly in the expanded tokens of another
nonterminal, the inner nonterminal will not be expanded.
This PR fixes the recursive expansion of nonterminals, ensuring that
they are expanded wherever they occur.
Fixes #76011
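The failing shape, roughly (the macro names here are illustrative): the inner `:expr` nonterminal ends up directly in the outer nonterminal's tokens, with no delimiter in between, so the old logic never reached it:
```rust
macro_rules! forward {
    // `$e` is already an `:expr` nonterminal when it arrives here; capturing
    // it again puts a nonterminal directly inside another nonterminal
    ($e:expr) => { check!($e) }; // `check!` stands in for a proc macro that
                                 // triggers the pretty-print/reparse check
}

macro_rules! outer {
    ($e:expr) => { forward!($e) };
}
```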
This adds support for attaching help diagnostics to the feature gating checks and
then uses it for the `async_closure` gate to add the extra bit of help
information as described in the issue.
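For instance, on stable the `async_closure` gate error now carries the extra help; a rough sketch of the triggering code and its stable alternative:
```rust
fn main() {
    // error[E0658]: async closures are unstable
    // (now with an added help note pointing toward an async block instead)
    // let _f = async || 42;
    let _f = async { 42 }; // the stable alternative the help points toward
}
```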
Use `if let` instead of single-arm `match` expressions
Use `if let` instead of single-arm `match` expressions to compact code and reduce nesting (clippy::single_match).
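The clippy::single_match shape being replaced, for reference:
```rust
pub fn log_value(opt: Option<i32>) {
    // before: a match with a single meaningful arm
    match opt {
        Some(x) => println!("got {}", x),
        _ => {}
    }
    // after: the equivalent, flatter `if let`
    if let Some(x) = opt {
        println!("got {}", x);
    }
}
```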
If the right-hand iterator has exactly one more element than the
left-hand iterator, then both iterators will be fully consumed, but
the extra element will never be compared.
Fixing memory exhaustion when formatting short code suggestion
Details can be found in issue #76597. This PR replaces substractions with `saturating_sub`'s to avoid usize wrapping leading to memory exhaustion when formatting short suggestion messages.
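The failure mode in miniature (variable names are illustrative, not from the rustc source): an unsigned subtraction that underflows yields a huge count, which was then used to size the suggestion buffer:
```rust
fn main() {
    let span_width: usize = 3;
    let margin: usize = 5;
    // `span_width - margin` panics in debug builds and wraps to ~2^64 in
    // release builds; the fix clamps it to zero instead:
    let padding = span_width.saturating_sub(margin);
    assert_eq!(padding, 0);
}
```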
Ignore `|` and `+` tokens during proc-macro pretty-print check
Fixes #76182
This is an alternative to PR #76188
These tokens are not preserved in the AST in certain cases
(e.g. a leading `|` in a pattern or a trailing `+` in a trait bound).
This PR ignores them entirely during the pretty-print/reparse check
to avoid spuriously using the re-parsed tokenstream.
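Both positions, for reference; each is legal to write but leaves no trace in the AST:
```rust
pub fn f(v: Option<i32>) {
    match v {
        // the leading `|` below parses but is not stored in the AST
        | Some(_) | None => {}
    }
}

// the trailing `+` in this bound list is likewise dropped by the AST
pub type Boxed = Box<dyn Send + >;
```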
We currently only attach tokens when parsing a `:stmt` matcher for a
`macro_rules!` macro. Proc-macro attributes on statements are still
unstable, and need additional work.
The proc-macro API currently exposes jointness in `Punct` tokens. That is,
`+` in `+one` is **not** joint.
Our lexer produces jointness info for all tokens, so we need to censor
it *somewhere*.
Previously we did this in the lexer, but it makes more sense to do it
in the proc-macro server.
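A sketch of the API surface involved, using the `proc-macro2` crate (which mirrors the compiler's `proc_macro` types) so it runs outside a proc-macro crate:
```rust
use proc_macro2::{Punct, Spacing};

fn main() {
    // `+=`: the `+` is Joint because another punct follows immediately
    let plus = Punct::new('+', Spacing::Joint);
    // `+one`: the `+` is Alone in the proc-macro API, even though the lexer
    // internally knows it touches the next token; the server censors that
    let lone_plus = Punct::new('+', Spacing::Alone);
    assert_eq!(plus.spacing(), Spacing::Joint);
    assert_eq!(lone_plus.spacing(), Spacing::Alone);
}
```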
Improve recovery on malformed format call
The token following a format expression should be a comma. However, when it is replaced with a similar token (such as a dot), the corresponding error is now emitted, the token is treated as if it were a comma, and parsing continues.
r? @petrochenkov
Restore public visibility on some parsing functions for rustfmt
In #74826 the visibility of several parsing functions was reduced. However, rustfmt is an external consumer of some of these functions as well and needs the visibility to be public, similar to other elements in rustc_parse such as `parse_ident`
db534b3ac2/src/librustc_parse/parser/mod.rs (L433-L436)
Previous implementation used the `Parser::parse_expr` function in order
to extract the format expression. If the first comma following the
format expression was mistakenly replaced with a dot, then the next
format expression was eaten by the function, because it looked like a
syntactically valid expression, which resulted in incorrectly spanned
error messages.
The way the format expression is extracted has changed: we first look at
the first available token in the first argument supplied to the
`format!` macro call. If it is a string literal, then it is promoted to
a format expression immediately; otherwise, we fall back to the original
`parse_expr`-related method.
This ensures that the parser won't consume too many tokens
when a typo is made.
A test has been added to ensure that the issue is properly fixed.
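The typo class this targets, for reference: a `.` where a `,` belongs after the format string:
```rust
fn main() {
    // A `.` typo after the format string used to make the parser swallow the
    // next argument as part of one big expression, yielding a badly spanned
    // error. Now only the leading string literal is taken as the format string:
    // println!("{}". "hello"); // error points at the stray `.`
    println!("{}", "hello");    // the intended call
}
```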