Parse expression after `else` as a condition if followed by `{`
Fixes #49361.
Two things:
1. This wording needs help. I can never find a natural/intuitive phrasing when I write diagnostics 😅
2. Do we even want to show the "wrap in braces" case? I would assume most of the time the "add an `if`" case is the right one.
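For context, here is a made-up example (mine, not the actual diagnostic output) of the two rewrites the suggestions point at, assuming an input like `if a { .. } else b { .. }`:
```rust
// Both fixed forms compile; the erroneous inputs they come from are shown in
// the comments. Names (`Thing`, `add_an_if`, ...) are invented for illustration.

// From `if a { 0 } else b { 1 }`: the expression after `else` was meant as a
// condition, so the fix is to "add an `if`".
fn add_an_if(a: bool, b: bool) -> i32 {
    if a { 0 } else if b { 1 } else { 2 }
}

// From `if a { Thing { x: 0 } } else Thing { x: 1 }`: the expression after
// `else` (a struct literal here) was meant as the `else` value, so the fix is
// to wrap it in braces.
struct Thing {
    x: i32,
}

fn wrap_in_braces(a: bool) -> Thing {
    if a { Thing { x: 0 } } else { Thing { x: 1 } }
}

fn main() {
    println!("{} {}", add_an_if(true, false), wrap_in_braces(false).x);
}
```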
Remove hacks in `make_token_stream`.
`make_token_stream` has three commented hacks, and a comment at the top
referring to #67062. These hacks have no observable effect, at least as judged
by running the test suite. The hacks were added in #82608, with an explanation
[here](https://github.com/rust-lang/rust/pull/82608#issuecomment-812877329). It
appears that one of the following is true: (a) they never did anything useful,
(b) they do something useful but we have no test coverage for them, or (c)
something has changed in the meantime that means they are no longer necessary.
This commit removes the hacks and the comments, in the hope that (b) is not
true.
r? `@Aaron1011`
Begin fixing all the broken doctests in `compiler/`
Begins to fix #95994.
All of them pass now, but I've marked 24 of them with `ignore HELP (<explanation>)` (asking for help), as I'm unsure how to get them to work or whether we should just leave them as they are.
There are also a few that I marked `ignore` that could maybe be made to work but seem less important.
Each `ignore` has a rough "reason" for ignoring after it in parentheses (a made-up example is shown right after this list), with
- `(pseudo-rust)` meaning "mostly rust-like but contains foreign syntax"
- `(illustrative)` a somewhat catchall for either a fragment of rust that doesn't stand on its own (like a lone type), or abbreviated rust with ellipses and undeclared types that would get too cluttered if made compile-worthy.
- `(not-rust)` stuff that isn't rust but benefits from the syntax highlighting, like MIR.
- `(internal)` uses `rustc_*` code which would be difficult to make work with the testing setup.
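For concreteness, a made-up doctest carrying one of these tags might look like the following (`frobnicate` and `Thing` are invented for the illustration):
````rust
struct Thing {
    count: u32,
}

/// Bumps the counter on the given `Thing`.
///
/// ```ignore (illustrative)
/// let mut thing = Thing { count: 0 };
/// frobnicate(&mut thing);
/// ```
fn frobnicate(thing: &mut Thing) {
    thing.count += 1;
}

fn main() {
    let mut t = Thing { count: 0 };
    frobnicate(&mut t);
    println!("{}", t.count);
}
````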
Those reason notes are a bit inconsistently applied and messy though. If that's important I can go through them again and try a more principled approach. When I run `rg '```ignore \(' .` on the repo, there look to be lots of different conventions other people have used for this sort of thing. I could try unifying them all if that would be helpful.
I'm not sure if there was a better existing way to do this but I wrote my own script to help me run all the doctests and wade through the output. If that would be useful to anyone else, I put it here: https://github.com/Elliot-Roberts/rust_doctest_fixing_tool
Overhaul `MacArgs`
Motivation:
- Clarify some code that I found hard to understand.
- Eliminate one of the three places where `TokenKind::Interpolated` values are created.
r? `@petrochenkov`
The value in `MacArgs::Eq` is currently represented as a `Token`.
Because of `TokenKind::Interpolated`, `Token` can be either a token or
an arbitrary AST fragment. In practice, a `MacArgs::Eq` starts out as a
literal or macro call AST fragment, and then is later lowered to a
literal token. But this is very non-obvious. `Token` is a much more
general type than what is needed.
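For a concrete, compilable instance of this shape (my own example, not one taken from the compiler sources), consider an attribute whose value starts out as a macro-call expression and only becomes a plain string literal after expansion and lowering:
```rust
// The value of this `key = value` attribute is initially parsed as an
// expression (a `concat!` call) and is later lowered to a string literal.
#[doc = concat!("Adds two numbers, ", "wrapping on overflow.")]
pub fn wrapping_add(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}

fn main() {
    println!("{}", wrapping_add(u32::MAX, 1));
}
```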
This commit restricts things, by introducing a new type `MacArgsEqKind`
that is either an AST expression (pre-lowering) or an AST literal
(post-lowering). The downside is that the code is a bit more verbose in a few
places (though also shorter in some others). The benefit is that it makes it
much clearer what the possibilities are. Also, it
removes one use of `TokenKind::Interpolated`, taking us a step closer to
removing that variant, which will let us make `Token` impl `Copy` and
remove many "handle Interpolated" code paths in the parser.
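A minimal, self-contained sketch of the idea, using simplified stand-in types rather than the real `rustc_ast` ones:
```rust
// Stand-in for an AST expression; the real variants hold actual AST nodes.
#[derive(Debug)]
enum Expr {
    Lit(String),
    MacCall(String),
}

// The two states of the value in `MacArgs::Eq` are now explicit, instead of
// both hiding inside a `Token` via `TokenKind::Interpolated`.
#[derive(Debug)]
enum MacArgsEqKind {
    Ast(Box<Expr>), // pre-lowering: an arbitrary expression
    Lit(String),    // post-lowering: always a literal
}

fn main() {
    let before = MacArgsEqKind::Ast(Box::new(Expr::MacCall("concat".into())));
    let after = MacArgsEqKind::Lit("the expanded string".into());
    println!("{:?} -> {:?}", before, after);
}
```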
Things to note:
- Error messages have improved. Messages like this:
```
unexpected token: `"bug" + "found"`
```
now say "unexpected expression", which makes more sense. Although
arbitrary expressions can exist within tokens thanks to
`TokenKind::Interpolated`, that's not obvious to anyone who doesn't
know compiler internals.
- In `parse_mac_args_common`, we no longer need to collect tokens for
the value expression.
This uses an obviously-placeholder syntax. An RFC would still be needed before this could have any chance at stabilization, and it might be removed at any point.
But I'd really like to have it in nightly at least to ensure it works well with try_trait_v2, especially as we refactor the traits.
Change `span_suggestion` (and variants) to take `impl ToString` rather
than `String` for the suggested code, as this simplifies the
requirements on the diagnostic derive.
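A small, self-contained sketch of why this helps at call sites (simplified signature, not the real `Diagnostic` API):
```rust
// With `impl ToString`, callers can pass `&str`, `String`, `format!` results,
// or anything else displayable, without an explicit `.to_string()`.
struct Diagnostic {
    suggestions: Vec<String>,
}

impl Diagnostic {
    // Before: `fn span_suggestion(&mut self, suggestion: String)`.
    fn span_suggestion(&mut self, suggestion: impl ToString) {
        self.suggestions.push(suggestion.to_string());
    }
}

fn main() {
    let mut diag = Diagnostic { suggestions: Vec::new() };
    diag.span_suggestion("remove the trailing comma"); // a plain `&str`
    diag.span_suggestion(format!("use `{}` instead", "foo")); // a `String`
    println!("{:?}", diag.suggestions);
}
```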
Signed-off-by: David Wood <david.wood@huawei.com>
rustc_ast: Harmonize delimiter naming with `proc_macro::Delimiter`
The compiler cannot reuse `proc_macro::Delimiter` directly due to extra impls, but it can at least use the same naming.
After this PR the only difference between these two enums is that `proc_macro::Delimiter::None` is turned into `token::Delimiter::Invisible`.
It's my mistake that the invisible delimiter is called `None` on stable: during the stabilization I audited the naming and wrote the docs, but missed the fact that the `None` naming gives a wrong and confusing impression of what this thing is.
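A rough sketch of the naming relationship, using simplified stand-ins (neither enum here is the real definition):
```rust
// Same variant names as `proc_macro::Delimiter`.
#[derive(Clone, Copy, Debug)]
enum ProcMacroDelimiter {
    Parenthesis,
    Brace,
    Bracket,
    None,
}

// Same variant names as the renamed compiler-internal `token::Delimiter`.
#[derive(Clone, Copy, Debug)]
enum TokenDelimiter {
    Parenthesis,
    Brace,
    Bracket,
    Invisible,
}

fn to_internal(d: ProcMacroDelimiter) -> TokenDelimiter {
    match d {
        ProcMacroDelimiter::Parenthesis => TokenDelimiter::Parenthesis,
        ProcMacroDelimiter::Brace => TokenDelimiter::Brace,
        ProcMacroDelimiter::Bracket => TokenDelimiter::Bracket,
        // The one variant whose name still differs between the two enums.
        ProcMacroDelimiter::None => TokenDelimiter::Invisible,
    }
}

fn main() {
    println!("{:?}", to_internal(ProcMacroDelimiter::None));
}
```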
cc https://github.com/rust-lang/rust/pull/96421
r? `@nnethercote`
Less `NoDelim`
Currently there are several places where `NoDelim` (which really means "implicit delimiter" or "invisible delimiter") is used to mean "no delimiter". The name `NoDelim` is a bit misleading, and may be a cause of this misuse.
This PR changes these places, e.g. by changing a `DelimToken` to `Option<DelimToken>` and then using `None` to mean "no delimiter" (a small sketch follows the list below). As a result, the *only* place where `NoDelim` values are now produced is within:
- `Delimiter::to_internal()`, when converting from `Delimiter::None`.
- `FlattenNonterminals::process_token()`, when converting `TokenKind::Interpolated`.
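A small sketch of the pattern with simplified types (not the real `rustc_ast` definitions):
```rust
#[derive(Clone, Copy, Debug)]
enum DelimToken {
    Paren,
    Bracket,
    Brace,
    NoDelim, // now reserved for genuinely invisible/implicit delimiters
}

// Before: a bare `DelimToken` field, with `NoDelim` doing double duty as
// "no delimiter at all". After: the absence of a delimiter is an explicit `None`.
struct ArgsShape {
    delim: Option<DelimToken>,
}

fn main() {
    let with_parens = ArgsShape { delim: Some(DelimToken::Paren) };
    let no_delimiter = ArgsShape { delim: None };
    println!("{:?} vs {:?}", with_parens.delim, no_delimiter.delim);
}
```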
r? `@petrochenkov`
This lets us clone just the parts within a `TokenTree` that need
cloning, rather than the entire thing. This is a surprisingly large
performance win, up to 4% on `async-std-1.10.0`.
This makes `CloseDelim` handling more like `OpenDelim` handling, which
produces `OpenDelim` and pushes the stack at the same time. It requires
some adjustment to `parse_token_tree` now that we don't remain within
the frame after getting the `CloseDelim`.
A Google search of the error message fails to return any relevant
results, suggesting this has never occurred in practice. And removing it
reduces instruction counts by up to 2% on some benchmarks.
The loop is there to handle a `NoDelim` open/close token. This commit
changes `TokenCursor::inlined_next` so it never returns such a token.
This is a performance win because the conditional test in `bump()` is
removed.
If the parser needs changing in the future to handle `NoDelim` tokens,
then `inlined_next()` can easily be changed to return them.
The `DelimToken` here is `NoDelim`, which means the returned delim
tokens will just be ignored by `Parser::bump()`. This commit changes
things so the delim tokens won't be returned.