Commit Graph

109 Commits

Author SHA1 Message Date
Dylan DPC
8216b359e5
Rollup merge of #79185 - petrochenkov:derattr2, r=Aaron1011
expand/resolve: Pre-requisites to "Turn `#[derive]` into a regular macro attribute"

Miscellaneous refactorings and error reporting changes extracted from https://github.com/rust-lang/rust/pull/79078.

Unlike https://github.com/rust-lang/rust/pull/79078 this PR doesn't make any observable changes to the language or library.
r? `@Aaron1011`
2020-11-19 23:58:42 +01:00
Vadim Petrochenkov
dfb690eaa9 resolve/expand: Misc cleanup 2020-11-19 19:25:20 +03:00
varkor
efcbf1b00b Permit standalone generic parameters as const generic arguments in macros 2020-11-18 13:16:35 +00:00
Jonas Schievink
f66af28641
Rollup merge of #79016 - fanzier:underscore-expressions, r=petrochenkov
Make `_` an expression, to discard values in destructuring assignments

This is the third and final step towards implementing destructuring assignment (RFC: rust-lang/rfcs#2909, tracking issue: #71126). This PR is the third and final part of #71156, which was split up to allow for easier review.

With this PR, an underscore `_` is parsed as an expression but is allowed *only* on the left-hand side of a destructuring assignment. There it simply discards a value, similarly to the wildcard `_` in patterns. For instance,
```rust
(a, _) = (1, 2)
```
will simply assign 1 to `a` and discard the 2. Note that for consistency,
```
_ = foo
```
is also allowed and equivalent to just `foo`.
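For reference, a compilable sketch of the feature (destructuring assignment was still gated behind `#![feature(destructuring_assignment)]` when this PR landed):

```rust
fn main() {
    let mut a = 0;
    println!("before: a = {}", a);
    // `_` on the left-hand side of `=` discards the corresponding value,
    // just like the wildcard `_` in patterns.
    (a, _) = (1, 2);
    println!("after: a = {}", a);
    // For consistency, a bare `_ = expr` is also accepted; it simply evaluates `expr`.
    _ = String::from("discarded");
}
```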

Thanks to `@varkor` who helped with the implementation, particularly around pre-expansion gating.

r? `@petrochenkov`
2020-11-15 13:39:48 +01:00
Fabian Zaiser
8cf3564310 Add underscore expressions for destructuring assignments
Co-authored-by: varkor <github@varkor.com>
2020-11-14 13:53:12 +00:00
bors
50d3c2a3cb Auto merge of #78736 - petrochenkov:lazyenum, r=Aaron1011
rustc_parse: Remove optimization for 0-length streams in `collect_tokens`

The optimization conflates empty token streams with an unknown token stream, which is at least suspicious, and it doesn't affect performance because 0-length token streams are very rare.

r? `@Aaron1011`
2020-11-14 04:21:56 +00:00
Vadim Petrochenkov
2879ab793e rustc_parse: Remove optimization for 0-length streams in collect_tokens
The optimization conflates empty token streams with an unknown token stream, which is at least suspicious, and it doesn't affect performance because 0-length token streams are very rare.
2020-11-12 22:00:48 +03:00
Mara Bos
755dd14e00
Rollup merge of #78836 - fanzier:struct-and-slice-destructuring, r=petrochenkov
Implement destructuring assignment for structs and slices

This is the second step towards implementing destructuring assignment (RFC: rust-lang/rfcs#2909, tracking issue: #71126). This PR is the second part of #71156, which was split up to allow for easier review.

Note that the first PR (#78748) is not merged yet, so it is included as the first commit in this one. I thought this would allow the review to start earlier because I have some time this weekend to respond to reviews. If `@petrochenkov` prefers to wait until the first PR is merged, I totally understand, of course.

This PR implements destructuring assignment for (tuple) structs and slices. In order to do this, the following *parser change* was necessary: struct expressions are no longer required to have a base expression after `..`, i.e. `Struct { a: 1, .. }` becomes legal (in order to act like a struct pattern).
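For reference, a compilable sketch of both forms (again assuming a compiler where destructuring assignment is available; it was feature-gated at the time):

```rust
#[allow(dead_code)] // `y` is only ever written in this sketch
struct Point { x: i32, y: i32 }

fn main() {
    let (mut x, mut len) = (0, 0);
    println!("start: x = {}, len = {}", x, len);
    // Struct destructuring assignment: `..` discards the remaining fields,
    // which is why `Struct { a: 1, .. }` must now be accepted by the expression parser.
    Point { x, .. } = Point { x: 3, y: 4 };
    // Slice destructuring assignment.
    [len, ..] = [7, 8, 9];
    println!("end: x = {}, len = {}", x, len);
}
```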

Unfortunately, this PR slightly regresses the diagnostics implemented in #77283. However, it is only a missing help message in `src/test/ui/issues/issue-77218.rs`. Other instances of this diagnostic are not affected. Since I don't exactly understand how this help message works and how to fix it yet, I was hoping it's OK to regress this temporarily and fix it in a follow-up PR.

Thanks to `@varkor` who helped with the implementation, particularly around the struct rest changes.

r? `@petrochenkov`
2020-11-12 19:46:09 +01:00
bors
5a6a41e784 Auto merge of #78782 - petrochenkov:nodoctok, r=Aaron1011
Do not collect tokens for doc comments

A doc comment is a single token, and the AST has all the information to re-create it precisely.
Doc comments are also responsible for the majority of calls to `collect_tokens` (with `num_calls == 1` and `num_calls == 0`, cc https://github.com/rust-lang/rust/pull/78736).
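For context, a doc comment is just sugar for the `#[doc = ...]` attribute, which is why the AST alone can re-create its token exactly; a small illustration:

```rust
/// Adds one to its argument.
fn add_one(x: i32) -> i32 { x + 1 }

// The doc comment above is equivalent to spelling the attribute out by hand:
#[doc = " Adds one to its argument."]
fn add_one_spelled_out(x: i32) -> i32 { x + 1 }

fn main() {
    assert_eq!(add_one(1), add_one_spelled_out(1));
}
```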

(I also moved token collection into `fn parse_attribute` to deduplicate code a bit.)

r? `@Aaron1011`
2020-11-12 00:33:55 +00:00
Fabian Zaiser
de84ad95b4 Implement destructuring assignment for structs and slices
Co-authored-by: varkor <github@varkor.com>
2020-11-11 12:10:52 +00:00
Dylan DPC
8ebca242bc
Rollup merge of #78710 - petrochenkov:macvisit, r=davidtwco
rustc_ast: Do not panic by default when visiting macro calls

Panicking by default made sense when we didn't have HIR or MIR and everything worked on AST, but now all AST visitors run early and the majority of them have to deal with macro calls, often by ignoring them.

The second commit renames `visit_mac` to `visit_mac_call`; the corresponding structures were renamed earlier in https://github.com/rust-lang/rust/pull/69589.
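A minimal sketch of the design change in plain Rust (the type and trait here are stand-ins, not the actual `rustc_ast` API): the default method becomes a no-op instead of a panic, and only visitors that care about macro calls override it.

```rust
// Hypothetical stand-in for the AST node; only the default-method pattern matters here.
struct MacCall;

trait Visitor {
    // Previously the default body was effectively `panic!("macro call not expanded")`;
    // now the default simply ignores (or walks) the macro call.
    fn visit_mac_call(&mut self, _mac: &MacCall) {}
}

// A visitor that does care about macro calls overrides the default.
struct MacCounter { count: usize }

impl Visitor for MacCounter {
    fn visit_mac_call(&mut self, _mac: &MacCall) {
        self.count += 1;
    }
}

fn main() {
    let mut counter = MacCounter { count: 0 };
    counter.visit_mac_call(&MacCall);
    assert_eq!(counter.count, 1);
}
```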
2020-11-09 19:06:55 +01:00
Vadim Petrochenkov
12de1e8985 Do not collect tokens for doc comments 2020-11-09 01:47:11 +03:00
Guillaume Gomez
99200f760b Fix even more URLs 2020-11-05 20:11:29 +01:00
Vadim Petrochenkov
90fafc8c8f rustc_ast: visit_mac -> visit_mac_call 2020-11-03 23:39:51 +03:00
Vadim Petrochenkov
3237b3886c rustc_ast: Do not panic by default when visiting macro calls 2020-11-03 20:38:20 +03:00
Aaron Hill
22383b32b8
Use reparsed TokenStream if we captured any inner attributes
Fixes #78675

We now bail out of `prepend_attrs` if we ended up capturing any inner
attributes (which can happen in several places, due to token capturing
for `macro_rules!` arguments).
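For context (illustrative only), inner attributes are the `#![...]` form written inside the thing they apply to, as opposed to the outer `#[...]` form; either kind can end up in a captured token stream:

```rust
#[allow(dead_code)] // outer attribute: attached from the outside
mod example {
    #![allow(unused_imports)] // inner attribute: written inside the item it configures
    use std::collections::HashMap;
}

fn main() {}
```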
2020-11-02 13:22:03 -05:00
Vadim Petrochenkov
d0c63bccc5 parser: Cleanup LazyTokenStream and avoid some clones
by using a named struct instead of a closure.
2020-10-31 01:56:34 +03:00
Joshua Nelson
5339bd1ebe Add back missing comments 2020-10-30 10:13:41 -04:00
Joshua Nelson
57c6ed0c07 Fix even more clippy warnings 2020-10-30 10:13:39 -04:00
Yuki Okushi
8111706c18
Rollup merge of #78523 - estebank:fix-return-type-parse-regression, r=dtolnay
Revert invalid `fn` return type parsing change

Revert one of the changes in #78379.

Fix #78507.
2020-10-30 18:00:53 +09:00
Esteban Küber
f9a26643ec Revert invalid fn return type parsing change
Fix #78507.
2020-10-29 08:26:42 -07:00
Yuki Okushi
a7a0538802
Rollup merge of #78460 - varkor:turbofish-string-generic, r=lcnr
Adjust turbofish help message for const generics

Types are no longer special. (This message arguably only makes sense with `min_const_generics` or more, but we'll be there soon.)
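For context, a small example of the turbofish form the message now describes, where a const argument sits alongside a type argument (names here are illustrative):

```rust
fn repeat<T: Copy, const N: usize>(value: T) -> [T; N] {
    [value; N]
}

fn main() {
    // The turbofish supplies both a type argument and a const argument;
    // the help message no longer treats the type case as special.
    let xs = repeat::<u8, 4>(7);
    assert_eq!(xs, [7u8, 7, 7, 7]);
}
```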

r? @lcnr
2020-10-29 12:08:50 +09:00
varkor
6c73adf324 Adjust turbofish help message for const generics 2020-10-28 10:47:27 +00:00
Dylan DPC
6967005e6e
Rollup merge of #78453 - Storyyeller:patch-1, r=jonas-schievink
Fix typo in comments
2020-10-28 01:21:39 +01:00
Dylan DPC
892ebe9afe
Rollup merge of #78379 - estebank:fn-signature-parse, r=varkor
Tweak invalid `fn` header and body parsing

* Rely on regular "expected"/"found" parser error for `fn`, fix #77115
* Recover empty `fn` bodies when encountering `}`
* Recover trailing `>` in return types
* Recover from non-type in array type `[<BAD TOKEN>; LEN]`
2020-10-28 01:21:24 +01:00
Robert Grosse
710c1f4aca
Fix typo in comments 2020-10-27 14:23:58 -07:00
bors
20b1e05a8d Auto merge of #77502 - varkor:const-generics-suggest-enclosing-braces, r=petrochenkov
Suggest that expressions that look like const generic arguments should be enclosed in brackets

I pulled out the changes for const expressions from https://github.com/rust-lang/rust/pull/71592 (without the trait object diagnostic changes) and made some small changes; the implementation is `@estebank`'s.

We're also going to want to make some changes separately to account for trait objects (they result in poor diagnostics, as is evident from one of the test cases here), such as an adaption of https://github.com/rust-lang/rust/pull/72273.

Fixes https://github.com/rust-lang/rust/issues/70753.

r? `@petrochenkov`
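For illustration (function name hypothetical), this is the shape of code the new suggestion targets: a const expression used as a generic argument has to be enclosed in braces to form an anonymous const:

```rust
fn zeros<const N: usize>() -> [u8; N] {
    [0; N]
}

fn main() {
    // Writing `zeros::<2 + 3>()` does not parse as a const argument; the diagnostic
    // now suggests enclosing the expression in braces, as below.
    let buf = zeros::<{ 2 + 3 }>();
    assert_eq!(buf.len(), 5);
}
```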
2020-10-27 09:25:54 +00:00
varkor
ac1454001c Suggest expressions that look like const generic arguments should be enclosed in brackets
Co-Authored-By: Esteban Kuber <github@kuber.com.ar>
2020-10-26 21:54:45 +00:00
Dylan DPC
083a5cd9a2
Rollup merge of #78214 - estebank:match-semicolon, r=oli-obk
Tweak match arm semicolon removal suggestion to account for futures

* Tweak and extend "use `.await`" suggestions
* Suggest removal of semicolon on prior match arm
* Account for `impl Future` when suggesting semicolon removal
* Silence some errors when encountering `await foo()?` as can't be certain what the intent was

*Thanks to https://twitter.com/a_hoverbear/status/1318960787105353728 for pointing this out!*
2020-10-26 03:09:06 +01:00
Esteban Küber
ff61949860 Tweak invalid fn header and body parsing
* Recover empty `fn` bodies when encountering `}`
* Recover trailing `>` in return types
* Recover from non-type in array type `[<BAD TOKEN>; LEN]`
2020-10-25 18:34:14 -07:00
Esteban Küber
040f568815 Rely on regular "expected"/"found" parser error for fn 2020-10-25 12:13:27 -07:00
bors
ffa2e7ae8f Auto merge of #77255 - Aaron1011:feature/collect-attr-tokens, r=petrochenkov
Unconditionally capture tokens for attributes.

This allows us to avoid synthesizing tokens in `prepend_attr`, since we
have the original tokens available.

We still need to synthesize tokens when expanding `cfg_attr`,
but this is an unavoidable consequence of the syntax of `cfg_attr` -
the user does not supply the `#` and `[]` tokens that a `cfg_attr`
expands to.
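As a small illustration of that last point, only the `cfg_attr` line below exists in the source; the `#` and `[]` of what it expands to have to be synthesized:

```rust
// Written by the user; the expansion's `#` and `[]` tokens never appear in the source.
#[cfg_attr(test, allow(dead_code))]
fn helper() {}

// When the predicate holds, this conceptually becomes:
// #[allow(dead_code)]
// fn helper() {}

fn main() {
    helper();
}
```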

This is based on PR https://github.com/rust-lang/rust/pull/77250, since this PR exposes a bug in the current `collect_tokens` implementation that is fixed by that rewrite.
2020-10-24 19:23:32 +00:00
Esteban Küber
3a0227bc49 Silence unnecessary await foo? knock-down error 2020-10-23 08:06:41 -07:00
Aaron Hill
5c7d8d049c
Only call collect_tokens when we have an attribute to parse 2020-10-22 15:17:40 -04:00
Santiago Pastorino
83abed9df6
Make inline const work for half open ranges 2020-10-22 13:22:12 -03:00
Santiago Pastorino
f8842b9bac
Make inline const work in range patterns 2020-10-22 13:21:18 -03:00
Santiago Pastorino
954b5a81b4
Rename parse_const_expr to parse_const_block 2020-10-22 13:21:18 -03:00
Aaron Hill
920bed1213
Don't create an empty LazyTokenStream 2020-10-22 10:09:08 -04:00
Aaron Hill
b9b2546417
Unconditionally capture tokens for attributes.
This allows us to avoid synthesizing tokens in `prepend_attr`, since we
have the original tokens available.

We still need to synthesize tokens when expanding `cfg_attr`,
but this is an unavoidable consequence of the syntax of `cfg_attr` -
the user does not supply the `#` and `[]` tokens that a `cfg_attr`
expands to.
2020-10-21 18:57:29 -04:00
bors
22e6b9c689 Auto merge of #77250 - Aaron1011:feature/flat-token-collection, r=petrochenkov
Rewrite `collect_tokens` implementations to use a flattened buffer

Instead of trying to collect tokens at each depth, we 'flatten' the
stream as we go along, pushing open/close delimiters to our buffer
just like regular tokens. Once capturing is complete, we reconstruct a
nested `TokenTree::Delimited` structure, producing a normal
`TokenStream`.

The reconstructed `TokenStream` is not created immediately - instead, it is
produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This
closure stores a clone of the original `TokenCursor`, plus a record of the
number of calls to `next()/next_desugared()`. This is sufficient to reconstruct
the tokenstream seen by the callback without storing any additional state. If
the tokenstream is never used (e.g. when a captured `macro_rules!` argument is
never passed to a proc macro), we never actually create a `TokenStream`.
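A minimal analogy of that lazy reconstruction, using a plain cloneable iterator in place of the parser's `TokenCursor` (all names here are hypothetical, not the compiler's types):

```rust
/// Capturing records a clone of the cursor at the starting position plus how many
/// tokens were consumed; the captured stream is only rebuilt on demand.
#[derive(Clone)]
struct LazyCapture<I: Iterator + Clone> {
    start: I,         // cursor cloned at the point capturing began
    num_calls: usize, // number of `next()` calls the parse consumed
}

impl<I: Iterator + Clone> LazyCapture<I> {
    /// Reconstruct the captured tokens only if someone actually asks for them.
    fn materialize(&self) -> Vec<I::Item> {
        self.start.clone().take(self.num_calls).collect()
    }
}

fn main() {
    let tokens = ["fn", "foo", "(", ")", "{", "}"];
    let cursor = tokens.iter().copied();
    // Pretend the parser consumed the first four tokens while parsing a signature.
    let capture = LazyCapture { start: cursor, num_calls: 4 };
    // If the tokens are never needed (e.g. no proc macro sees them), nothing is built.
    assert_eq!(capture.materialize(), ["fn", "foo", "(", ")"]);
}
```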

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the
  start/end of a delimited group.

* It can be easily extended to allow replacing tokens at an arbitrary
  'depth' by just using `Vec::splice` at the proper position. This is
  important for PR #76130, which requires us to track information about
  attributes along with tokens.

* The lazy approach to `TokenStream` construction allows us to easily
  parse an AST struct, and then decide after the fact whether we need a
  `TokenStream`. This will be useful when we start collecting tokens for
  `Attribute` - we can discard the `LazyTokenStream` if the parsed
  attribute doesn't need tokens (e.g. is a builtin attribute).

The performance impact seems to be negligible (see
https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a
small slowdown on a few benchmarks, but it only rises above 1% for incremental
builds, where it represents a larger fraction of the much smaller instruction
count. There is a ~1% speedup on a few other incremental benchmarks - my guess is
that the speedups and slowdowns will usually cancel out in practice.
2020-10-21 15:03:14 +00:00
Yuki Okushi
de24210ebf
Rollup merge of #78118 - spastorino:inline-const-followups, r=petrochenkov
Inline const followups

r? @petrochenkov

Follow ups of #77124
2020-10-21 13:59:44 +09:00
Santiago Pastorino
d641cb82c1
Allow NtBlock to parse on check inline const next token 2020-10-19 18:50:58 -03:00
Aaron Hill
593fdd3d45
Rewrite collect_tokens implementations to use a flattened buffer
Instead of trying to collect tokens at each depth, we 'flatten' the
stream as we go along, pushing open/close delimiters to our buffer
just like regular tokens. Once capturing is complete, we reconstruct a
nested `TokenTree::Delimited` structure, producing a normal
`TokenStream`.

The reconstructed `TokenStream` is not created immediately - instead, it is
produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This
closure stores a clone of the original `TokenCursor`, plus a record of the
number of calls to `next()/next_desugared()`. This is sufficient to reconstruct
the tokenstream seen by the callback without storing any additional state. If
the tokenstream is never used (e.g. when a captured `macro_rules!` argument is
never passed to a proc macro), we never actually create a `TokenStream`.

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the
  start/end of a delimited group.

* It can be easily extended to allow replacing tokens at an arbitrary
  'depth' by just using `Vec::splice` at the proper position. This is
  important for PR #76130, which requires us to track information about
  attributes along with tokens.

* The lazy approach to `TokenStream` construction allows us to easily
  parse an AST struct, and then decide after the fact whether we need a
  `TokenStream`. This will be useful when we start collecting tokens for
  `Attribute` - we can discard the `LazyTokenStream` if the parsed
  attribute doesn't need tokens (e.g. is a builtin attribute).

The performance impact seems to be negligible (see
https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a
small slowdown on a few benchmarks, but it only rises above 1% for incremental
builds, where it represents a larger fraction of the much smaller instruction
count. There is a ~1% speedup on a few other incremental benchmarks - my guess is
that the speedups and slowdowns will usually cancel out in practice.
2020-10-19 13:59:18 -04:00
Aaron Hill
f6aec82d4d
Avoid cloning the contents of a TokenStream in a few places 2020-10-19 12:30:41 -04:00
est31
b87e4f36e7 Remove redundant 'static in the compiler 2020-10-18 17:30:15 +02:00
Santiago Pastorino
59d07c3ae5
Parse inline const patterns 2020-10-16 15:15:34 -03:00
Santiago Pastorino
c3e8d7965c
Parse inline const expressions 2020-10-16 15:15:30 -03:00
Dylan DPC
9d8bf44409
Rollup merge of #77780 - calebcartwright:cast-expr-attr-span, r=oli-obk
rustc_parse: fix spans on cast and range exprs with attrs

Currently the span for cast and range expressions does not include the span of attributes associated with the lhs, which is causing some issues for us in rustfmt.

```rust
fn foo() -> i64 {
    #[attr]
    1u64 as i64
}

fn bar() -> Range<i32> {
    #[attr]
    1..2
}
```

This corrects the span for cast and range expressions to fully include the span of child nodes.
2020-10-16 02:10:22 +02:00
Andy Russell
95daa068f1
fix off-by-one in parameter spans 2020-10-15 09:49:36 -04:00
Yuki Okushi
022d20759b
Rollup merge of #77739 - est31:remove_unused_code, r=petrochenkov,varkor
Remove unused code

Rustc has a builtin lint for detecting unused code inside a crate, but when an item is marked `pub`, the code, even if unused inside the entire workspace, is never marked as such. Therefore, I've built [warnalyzer](https://github.com/est31/warnalyzer) to detect unused items in a cross-crate setting.

Closes https://github.com/est31/warnalyzer/issues/2
2020-10-15 07:32:29 +09:00