Commit Graph

73 Commits

Author SHA1 Message Date
Santiago Pastorino
83abed9df6
Make inline const work for half open ranges 2020-10-22 13:22:12 -03:00
Santiago Pastorino
f8842b9bac
Make inline const work in range patterns 2020-10-22 13:21:18 -03:00
Santiago Pastorino
954b5a81b4
Rename parse_const_expr to parse_const_block 2020-10-22 13:21:18 -03:00
bors
22e6b9c689 Auto merge of #77250 - Aaron1011:feature/flat-token-collection, r=petrochenkov
Rewrite `collect_tokens` implementations to use a flattened buffer

Instead of trying to collect tokens at each depth, we 'flatten' the
stream as we go along, pushing open/close delimiters to our buffer
just like regular tokens. Once capturing is complete, we reconstruct a
nested `TokenTree::Delimited` structure, producing a normal
`TokenStream`.

The reconstructed `TokenStream` is not created immediately - instead, it is
produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This
closure stores a clone of the original `TokenCursor`, plus a record of the
number of calls to `next()/next_desugared()`. This is sufficient to reconstruct
the tokenstream seen by the callback without storing any additional state. If
the tokenstream is never used (e.g. when a captured `macro_rules!` argument is
never passed to a proc macro), we never actually create a `TokenStream`.

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the
  start/end of a delimited group.

* It can be easily extended to allow replacing tokens at an arbitrary
  'depth' by just using `Vec::splice` at the proper position. This is
  important for PR #76130, which requires us to track information about
  attributes along with tokens.

* The lazy approach to `TokenStream` construction allows us to easily
  parse an AST struct, and then decide after the fact whether we need a
  `TokenStream`. This will be useful when we start collecting tokens for
  `Attribute` - we can discard the `LazyTokenStream` if the parsed
  attribute doesn't need tokens (e.g. is a builtin attribute).

The performance impact seems to be negligible (see
https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a
small slowdown on a few benchmarks, but it only rises above 1% for incremental
builds, where it represents a larger fraction of the much smaller instruction
count. There is a ~1% speedup on a few other incremental benchmarks - my guess is
that the speedups and slowdowns will usually cancel out in practice.
2020-10-21 15:03:14 +00:00
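
To make the flattening approach above concrete, here is a minimal, self-contained sketch in plain Rust. The types (`FlatToken`, `TokenTree`, `LazyTokenStream`) are stand-ins invented for illustration rather than rustc's actual definitions; the point is only that open/close delimiters are recorded in a flat buffer alongside ordinary tokens, and the nested tree is rebuilt lazily, on demand.

```rust
// Sketch only: simplified stand-ins for rustc's token types.

#[derive(Debug)]
enum FlatToken {
    Token(String),    // an ordinary token, e.g. an identifier
    OpenDelim(char),  // '(' '[' '{'
    CloseDelim(char), // ')' ']' '}'
}

#[derive(Debug)]
enum TokenTree {
    Token(String),
    Delimited(char, Vec<TokenTree>),
}

/// Rebuild a nested `TokenTree` structure from the flat buffer.
fn make_token_trees(flat: &[FlatToken]) -> Vec<TokenTree> {
    // Stack of (delimiter, children); the bottom frame collects the
    // top-level trees.
    let mut stack: Vec<(char, Vec<TokenTree>)> = vec![(' ', Vec::new())];
    for tok in flat {
        match tok {
            FlatToken::Token(s) => {
                stack.last_mut().unwrap().1.push(TokenTree::Token(s.clone()))
            }
            FlatToken::OpenDelim(d) => stack.push((*d, Vec::new())),
            FlatToken::CloseDelim(_) => {
                let (delim, children) = stack.pop().unwrap();
                stack.last_mut().unwrap().1.push(TokenTree::Delimited(delim, children));
            }
        }
    }
    stack.pop().unwrap().1
}

/// Stand-in for `LazyTokenStream`: holds the flat tokens and only builds
/// the nested stream when (and if) it is actually requested.
struct LazyTokenStream {
    flat: Vec<FlatToken>,
}

impl LazyTokenStream {
    fn create(&self) -> Vec<TokenTree> {
        make_token_trees(&self.flat)
    }
}

fn main() {
    // Flat capture of the tokens for `foo ( bar )`.
    let lazy = LazyTokenStream {
        flat: vec![
            FlatToken::Token("foo".into()),
            FlatToken::OpenDelim('('),
            FlatToken::Token("bar".into()),
            FlatToken::CloseDelim(')'),
        ],
    };
    // The nested tree is only produced here, on demand.
    println!("{:?}", lazy.create());
}
```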
Yuki Okushi
de24210ebf
Rollup merge of #78118 - spastorino:inline-const-followups, r=petrochenkov
Inline const followups

r? @petrochenkov

Follow ups of #77124
2020-10-21 13:59:44 +09:00
Santiago Pastorino
d641cb82c1
Allow NtBlock to parse when checking the next token for inline const 2020-10-19 18:50:58 -03:00
Aaron Hill
593fdd3d45
Rewrite collect_tokens implementations to use a flattened buffer
Instead of trying to collect tokens at each depth, we 'flatten' the
stream as we go along, pushing open/close delimiters to our buffer
just like regular tokens. Once capturing is complete, we reconstruct a
nested `TokenTree::Delimited` structure, producing a normal
`TokenStream`.

The reconstructed `TokenStream` is not created immediately - instead, it is
produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This
closure stores a clone of the original `TokenCursor`, plus a record of the
number of calls to `next()/next_desugared()`. This is sufficient to reconstruct
the tokenstream seen by the callback without storing any additional state. If
the tokenstream is never used (e.g. when a captured `macro_rules!` argument is
never passed to a proc macro), we never actually create a `TokenStream`.

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the
  start/end of a delimited group.

* It can be easily extended to allow replacing tokens at an arbitrary
  'depth' by just using `Vec::splice` at the proper position. This is
  important for PR #76130, which requires us to track information about
  attributes along with tokens.

* The lazy approach to `TokenStream` construction allows us to easily
  parse an AST struct, and then decide after the fact whether we need a
  `TokenStream`. This will be useful when we start collecting tokens for
  `Attribute` - we can discard the `LazyTokenStream` if the parsed
  attribute doesn't need tokens (e.g. is a builtin attribute).

The performance impact seems to be negligible (see
https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a
small slowdown on a few benchmarks, but it only rises above 1% for incremental
builds, where it represents a larger fraction of the much smaller instruction
count. There is a ~1% speedup on a few other incremental benchmarks - my guess is
that the speedups and slowdowns will usually cancel out in practice.
2020-10-19 13:59:18 -04:00
Aaron Hill
f6aec82d4d
Avoid cloning the contents of a TokenStream in a few places 2020-10-19 12:30:41 -04:00
est31
b87e4f36e7 Remove redundant 'static in the compiler 2020-10-18 17:30:15 +02:00
Santiago Pastorino
59d07c3ae5
Parse inline const patterns 2020-10-16 15:15:34 -03:00
Santiago Pastorino
c3e8d7965c
Parse inline const expressions 2020-10-16 15:15:30 -03:00
Dylan DPC
9d8bf44409
Rollup merge of #77780 - calebcartwright:cast-expr-attr-span, r=oli-obk
rustc_parse: fix spans on cast and range exprs with attrs

Currently the span for cast and range expressions does not include the span of attributes associated to the lhs which is causing some issues for us in rustfmt.

```rust
fn foo() -> i64 {
    #[attr]
    1u64 as i64
}

fn bar() -> Range<i32> {
    #[attr]
    1..2
}
```

This corrects the span for cast and range expressions to fully include the span of child nodes
2020-10-16 02:10:22 +02:00
Andy Russell
95daa068f1
fix off-by-one in parameter spans 2020-10-15 09:49:36 -04:00
Yuki Okushi
022d20759b
Rollup merge of #77739 - est31:remove_unused_code, r=petrochenkov,varkor
Remove unused code

Rustc has a builtin lint for detecting unused code inside a crate, but when an item is marked `pub`, the code, even if unused inside the entire workspace, is never marked as such. Therefore, I've built [warnalyzer](https://github.com/est31/warnalyzer) to detect unused items in a cross-crate setting.

Closes https://github.com/est31/warnalyzer/issues/2
2020-10-15 07:32:29 +09:00
est31
215cd36e1c Remove unused code from remaining compiler crates 2020-10-14 04:14:32 +02:00
Caleb Cartwright
4e82da4a48 rustc_parse: correct span on range expr with attrs 2020-10-12 12:24:24 -05:00
Caleb Cartwright
7280f6aa41 rustc_parse: correct span on cast expr with attrs 2020-10-12 11:58:48 -05:00
Aaron Hill
477ce31d37
Remove unused import 2020-10-11 12:09:48 -04:00
Aaron Hill
820953819c
Add relaxed_delim_match parameter 2020-10-11 12:09:48 -04:00
Aaron Hill
ea468f4270
Allow skipping extra paren insertion during AST pretty-printing
Fixes #74616
Makes progress towards #43081
Unblocks PR #76130

When pretty-printing an AST node, we may insert additional parenthesis
to ensure that precedence is properly preserved in code we output.
However, the proc macro implementation relies on comparing a
pretty-printed AST node to the captured `TokenStream`. Inserting extra
parenthesis changes the structure of the reparsed `TokenStream`, making
the comparison fail.

This PR refactors the AST pretty-printing code to allow skipping the
insertion of additional parenthesis. Several freestanding methods are
moved to trait methods on `PrintState`, which keep track of an internal
`insert_extra_parens` flag. This flag is normally `true`, but we expose
a public method which allows pretty-printing a nonterminal with
`insert_extra_parens = false`.

To avoid changing the public interface of `rustc_ast_pretty`, the
freestanding `_to_string` methods are changed to delegate to a
newly-created `State`. The main pretty-printing code is moved to a new
`state` module to ensure that it does not accidentally call any of these
public helper functions (instead, the internal functions with the same
name should be used).
2020-10-11 12:09:48 -04:00
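
A rough sketch of the flag-threading pattern this describes, using invented types rather than the real `rustc_ast_pretty` API: the printer state carries an `insert_extra_parens` flag, and paren insertion consults it, so the nonterminal-comparison path can opt out.

```rust
// Sketch only: not rustc_ast_pretty's actual trait or methods.
trait PrintState {
    fn insert_extra_parens(&self) -> bool;

    // Wrap `expr` in parentheses only when precedence would require it
    // *and* the printer is allowed to add them.
    fn print_expr_maybe_paren(&self, expr: &str, needs_parens: bool) -> String {
        if needs_parens && self.insert_extra_parens() {
            format!("({})", expr)
        } else {
            expr.to_string()
        }
    }
}

struct State {
    insert_extra_parens: bool,
}

impl PrintState for State {
    fn insert_extra_parens(&self) -> bool {
        self.insert_extra_parens
    }
}

fn main() {
    // Normal pretty-printing inserts parens to preserve precedence ...
    let normal = State { insert_extra_parens: true };
    assert_eq!(normal.print_expr_maybe_paren("a + b", true), "(a + b)");

    // ... while the pretty-print/reparse comparison path skips them so the
    // reparsed token stream matches the captured one.
    let nonterminal = State { insert_extra_parens: false };
    assert_eq!(nonterminal.print_expr_maybe_paren("a + b", true), "a + b");
}
```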
Vadim Petrochenkov
dee704930d rustc_parse: More precise spans for tuple.0.0 2020-10-11 02:33:49 +03:00
Esteban Küber
e5f83bcd04 Detect blocks that could be struct expr bodies
This approach lives exclusively in the parser, so struct expr bodies
that are syntactically correct on their own but are otherwise incorrect
will still emit confusing errors, like in the following case:

```rust
fn foo() -> Foo {
    bar: Vec::new()
}
```

```
error[E0425]: cannot find value `bar` in this scope
 --> src/file.rs:5:5
  |
5 |     bar: Vec::new()
  |     ^^^ expecting a type here because of type ascription

error[E0214]: parenthesized type parameters may only be used with a `Fn` trait
 --> src/file.rs:5:15
  |
5 |     bar: Vec::new()
  |               ^^^^^ only `Fn` traits may use parentheses

error[E0107]: wrong number of type arguments: expected 1, found 0
 --> src/file.rs:5:10
  |
5 |     bar: Vec::new()
  |          ^^^^^^^^^^ expected 1 type argument
  ```

If that field had a trailing comma, that would be a parse error and it
would trigger the new, more targeted, error:

```
error: struct literal body without path
 --> file.rs:4:17
  |
4 |   fn foo() -> Foo {
  |  _________________^
5 | |     bar: Vec::new(),
6 | | }
  | |_^
  |
help: you might have forgotten to add the struct literal inside the block
  |
4 | fn foo() -> Foo { Path {
5 |     bar: Vec::new(),
6 | } }
  |
```

Partially address last part of #34255.
2020-10-07 13:40:52 -07:00
bors
a14bf4862d Auto merge of #77595 - petrochenkov:asmident, r=oli-obk
builtin_macros: Fix use of interpolated identifiers in `asm!`

Fixes https://github.com/rust-lang/rust/issues/77584
2020-10-07 11:51:51 +00:00
Vadim Petrochenkov
219c66c55c rustc_parse: Make Parser::unexpected public and use it in built-in macros 2020-10-06 00:23:36 +03:00
Eric Huss
35192ff574 Fix span for unicode escape suggestion. 2020-10-05 11:19:08 -07:00
Jonas Schievink
de8d7aa400
Rollup merge of #77444 - estebank:pat-field-label, r=davidtwco
Fix span for incorrect pattern field and add label

Address #73750.
2020-10-02 20:27:16 +02:00
Esteban Küber
7d5a6203ec Fix span for incorrect pattern field and add label 2020-10-02 00:44:16 -07:00
Aaron Hill
46d8c4bdb7
Fix recursive nonterminal expansion during pretty-print/reparse check
Makes progress towards #43081

In PR #73084, we started recursively expanded nonterminals during the
pretty-print/reparse check, allowing them to be properly compared
against the reparsed tokenstream.

Unfortunately, the recursive logic in that PR only handles the case
where a nonterminal appears inside a `TokenTree::Delimited`. If a
nonterminal appears directly in the expanded tokens of another
nonterminal, the inner nonterminal will not be expanded.

This PR fixes the recursive expansion of nonterminals, ensuring that
they are expanded wherever they occur.
2020-09-28 19:14:42 -04:00
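
A toy model of the recursion being fixed (an invented `Token` type, not rustc's `Nonterminal`): expansion must recurse into a nonterminal's own expansion, not only into delimited groups, so a nonterminal sitting directly inside another nonterminal is also flattened.

```rust
#[derive(Debug, PartialEq)]
enum Token {
    Plain(String),
    // A captured nonterminal whose expansion is another token sequence.
    Nonterminal(Vec<Token>),
    Delimited(Vec<Token>),
}

fn expand(tokens: &[Token]) -> Vec<Token> {
    let mut out = Vec::new();
    for tok in tokens {
        match tok {
            Token::Plain(s) => out.push(Token::Plain(s.clone())),
            // Recurse into the expansion itself, so nonterminals nested
            // directly inside other nonterminals are expanded too.
            Token::Nonterminal(inner) => out.extend(expand(inner)),
            Token::Delimited(inner) => out.push(Token::Delimited(expand(inner))),
        }
    }
    out
}

fn main() {
    let inner = Token::Nonterminal(vec![Token::Plain("x".into())]);
    let outer = Token::Nonterminal(vec![Token::Plain("let".into()), inner]);
    // Both levels of nonterminal are expanded away.
    assert_eq!(
        expand(&[outer]),
        vec![Token::Plain("let".into()), Token::Plain("x".into())]
    );
}
```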
Vadim Petrochenkov
fe3e5aa729 pretty-print-reparse hack: Remove an impossible case
Delimiters cannot appear as isolated tokens in a token stream
2020-09-26 20:27:14 +03:00
Vadim Petrochenkov
275bf626f6 pretty-print-reparse hack: Rename some variables for clarity 2020-09-26 20:27:09 +03:00
Dylan DPC
bcdbe79f0c
Rollup merge of #76994 - yuk1ty:fix-small-typo, r=estebank
fix small typo in docs and comments

Fixed `the the` to `the`, as far as I found.
2020-09-23 14:54:07 +02:00
ecstatic-morse
dcf4d1f2be
Rollup merge of #76888 - matthiaskrgr:clippy_single_match_2, r=Dylan-DPC
use if let instead of single match arm expressions

use if let instead of single match arm expressions to compact code and reduce nesting (clippy::single_match)
2020-09-21 20:40:55 -07:00
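
The rewrite the lint asks for, shown on a small stand-alone example (not code from this repository):

```rust
fn before(opt: Option<i32>) {
    // clippy::single_match flags a `match` with one meaningful arm ...
    match opt {
        Some(n) => println!("got {}", n),
        _ => {}
    }
}

fn after(opt: Option<i32>) {
    // ... and suggests the flatter `if let` form instead.
    if let Some(n) = opt {
        println!("got {}", n);
    }
}

fn main() {
    before(Some(1));
    after(Some(2));
}
```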
Aaron Hill
f5d71a9b3c
Don't use zip to compare iterators during pretty-print hack
If the right-hand iterator has exactly one more element than the
left-hand iterator, then both iterators will be fully consumed, but
the extra element will never be compared.
2020-09-21 15:11:59 -04:00
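
A small demonstration of the pitfall (generic Rust, not the pretty-print hack's actual code): `zip` stops at the shorter iterator, so a trailing extra element on one side never takes part in the comparison, and a pure `zip`-based check reports the streams as equal.

```rust
fn main() {
    let left = vec![1, 2];
    let right = vec![1, 2, 3];

    // Every zipped pair matches, even though the lengths differ.
    let all_equal = left.iter().zip(right.iter()).all(|(a, b)| a == b);
    assert!(all_equal);

    // An explicit length check (or walking both iterators to the end) is
    // needed to notice the extra element.
    assert_ne!(left.len(), right.len());
}
```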
yuk1ty
16047d46a1 fix typo in docs and comments 2020-09-21 12:14:28 +09:00
Matthias Krüger
c690c82ad4 use if let instead of single match arm expressions to compact code and reduce nesting (clippy::single_match) 2020-09-20 11:42:52 +02:00
est31
ebdea01143 Remove redundant #![feature(...)] 's from compiler/ 2020-09-17 07:58:45 +02:00
SNCPlay42
4de9a53d98 improve diagnostics for lifetime after &mut 2020-09-15 10:36:06 -04:00
bors
90b1f5ae59 Auto merge of #76171 - estebank:turbofish-the-revenge, r=davidtwco
Detect turbofish with multiple type params missing leading `::`

Fix #76072.
2020-09-15 10:14:52 +00:00
Esteban Küber
62effcbd5b Detect turbofish with multiple type params missing leading ::
Fix #76072.
2020-09-14 12:06:51 -07:00
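
For context, the shape of code involved (illustrative only): without the leading `::`, `HashMap<String, u32>::new()` is parsed as chained comparison operators and rejected; the turbofish form the diagnostic points toward looks like this:

```rust
use std::collections::HashMap;

fn main() {
    // `HashMap<String, u32>::new()` (no `::` before `<`) does not parse as
    // intended; the `::<...>` turbofish does.
    let map = HashMap::<String, u32>::new();
    assert!(map.is_empty());
}
```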
bors
17d3277064 Auto merge of #76598 - ad-anssi:diagnostic_errors_fix, r=estebank
Fixing memory exhaustion when formatting short code suggestion

Details can be found in issue #76597. This PR replaces subtractions with `saturating_sub` calls to avoid usize wrapping leading to memory exhaustion when formatting short suggestion messages.
2020-09-13 11:08:41 +00:00
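
A minimal illustration of the class of fix described (not the actual rustc_errors code): on `usize`, a subtraction that would go below zero panics in debug builds and wraps to a huge value in release builds, and a wrapped length can then drive an enormous allocation; `saturating_sub` clamps the result at zero instead.

```rust
fn main() {
    let start: usize = 3;
    let end: usize = 5;

    // `start - end` would underflow here; the saturating form yields 0.
    let width = start.saturating_sub(end);
    assert_eq!(width, 0);

    // `checked_sub` is an alternative when the caller wants to handle the
    // "negative" case explicitly.
    assert_eq!(start.checked_sub(end), None);
}
```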
bors
dd33766e4a Auto merge of #76585 - Aaron1011:ignore-vert-plus, r=petrochenkov
Ignore `|` and `+` tokens during proc-macro pretty-print check

Fixes #76182

This is an alternative to PR #76188

These tokens are not preserved in the AST in certain cases
(e.g. a leading `|` in a pattern or a trailing `+` in a trait bound).

This PR ignores them entirely during the pretty-print/reparse check
to avoid spuriously using the re-parsed tokenstream.
2020-09-13 05:16:36 +00:00
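
Examples of where those tokens occur (ordinary Rust, unrelated to this repository's own code): both forms parse, but the extra token is not preserved in the AST, which is why the pretty-printed/reparsed stream can differ from the captured one.

```rust
fn classify(n: i32) -> &'static str {
    match n {
        // A leading `|` before the first alternative parses but is not
        // preserved in the AST.
        | 0 | 1 => "small",
        _ => "other",
    }
}

// A trailing `+` after the last bound is likewise accepted and dropped.
fn boxed(n: i32) -> Box<dyn std::fmt::Debug +> {
    Box::new(n)
}

fn main() {
    println!("{} {:?}", classify(0), boxed(1));
}
```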
Aurélien Deharbe
62068a59ee repairing broken error message and rustfix application for the new test case 2020-09-11 17:31:52 +02:00
Aaron Hill
156ef2bee8
Attach tokens to ast::Stmt
We currently only attach tokens when parsing a `:stmt` matcher for a
`macro_rules!` macro. Proc-macro attributes on statements are still
unstable, and need additional work.
2020-09-10 17:33:06 -04:00
Aaron Hill
c1011165e6
Attach TokenStream to ast::Visibility
A `Visibility` does not have outer attributes, so we only capture tokens
when parsing a `macro_rules!` matcher
2020-09-10 17:33:06 -04:00
Aaron Hill
55082ce413
Attach TokenStream to ast::Path 2020-09-10 17:33:06 -04:00
Aaron Hill
3815e91ccd
Attach tokens to NtMeta (ast::AttrItem)
An `AttrItem` does not have outer attributes, so we only capture tokens
when parsing a `macro_rules!` matcher
2020-09-10 17:33:06 -04:00
Aaron Hill
d5a04a9927
Collect tokens when handling :literal matcher
An `NtLiteral` just wraps an `Expr`, so we don't need to add a new `tokens`
field to an AST struct.
2020-09-10 17:33:06 -04:00
Aaron Hill
1823dea7df
Attach TokenStream to ast::Ty
A `Ty` does not have outer attributes, so we only capture tokens
when parsing a `macro_rules!` matcher
2020-09-10 17:33:05 -04:00
Aaron Hill
de4bd9f0f8
Attach TokenStream to ast::Block
A `Block` does not have outer attributes, so we only capture tokens when
parsing a `macro_rules!` matcher
2020-09-10 17:33:05 -04:00
Aaron Hill
283d4c4d14
Ignore | and + tokens during proc-macro pretty-print check
Fixes #76182

This is an alternative to PR #76188

These tokens are not preserved in the AST in certain cases
(e.g. a leading `|` in a pattern or a trailing `+` in a trait bound).

This PR ignores them entirely during the pretty-print/reparse check
to avoid spuriously using the re-parsed tokenstream.
2020-09-10 16:20:05 -04:00