Convert all the crates that have had their diagnostic migration
completed (except `save_analysis`, because it will be deleted soon, and
`apfloat`, because of the licensing problem).
Document some of the AST nodes
Someone was confused about some of this on Zulip, so I added some docs.
We probably should make sure every last field/variant in the AST/HIR is documented at some point
`@bors` rollup
Remove the `..` from the body; only a few invocations used it, and it's
inconsistent with Rust syntax.
Use `;` instead of `,` between consts. As the Rust syntax gods intended.
This removes the `custom` format functionality as its only user was
trivially migrated to using a normal format.
If a new use case for a custom formatting impl pops up, you can add it
back.
Remove `token::Lit` from `ast::MetaItemLit`.
Currently `ast::MetaItemLit` represents the literal kind twice. This PR removes that redundancy. Best reviewed one commit at a time.
r? `@petrochenkov`
`LitKind::synthesize_token_lit` has a single call site, in the HIR pretty
printer, where the resulting token lit is immediately converted to a string.
This commit replaces it with a `Display` impl for `LitKind`, which the HIR
pretty printer can use directly.
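A minimal standalone sketch of the idea, with a drastically simplified `LitKind` (the real enum in `rustc_ast` has many more variants and uses interned symbols):

```
use std::fmt;

// Simplified stand-in for ast::LitKind; the real enum also covers chars,
// byte strings, floats, etc. and stores interned symbols.
#[allow(dead_code)]
enum LitKind {
    Bool(bool),
    Int(u128),
    Str(String),
}

// With a Display impl, a caller that only wants the literal's source-like
// text (such as a pretty printer) no longer needs to synthesize a
// token::Lit first.
impl fmt::Display for LitKind {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            LitKind::Bool(b) => write!(f, "{b}"),
            LitKind::Int(i) => write!(f, "{i}"),
            LitKind::Str(s) => write!(f, "{s:?}"),
        }
    }
}

fn main() {
    // The pretty printer can now format the LitKind directly.
    println!("{}", LitKind::Str("hello".to_owned()));
}
```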
There are better ways to create the meta items.
- In the rustdoc tests, the commit adds `dummy_meta_item_name_value`,
which matches the existing `dummy_meta_item_word` function and
`dummy_meta_item_list` macro.
- In `types.rs` the commit clones the existing meta item and then
modifies the clone.
Remove useless borrows and derefs
They are nothing more than noise.
<sub>These are not all of them, but my clippy started crashing (stack overflow), so rip :(</sub>
`token::Lit` contains a `kind` field that indicates what kind of literal
it is. `ast::MetaItemLit` currently wraps a `token::Lit` but also has
its own `kind` field. This means that `ast::MetaItemLit` encodes the
literal kind in two different ways.
This commit changes `ast::MetaItemLit` so it no longer wraps
`token::Lit`. It now contains the `symbol` and `suffix` fields from
`token::Lit`, but not the `kind` field, eliminating the redundancy.
This is required to distinguish between cooked and raw byte string
literals in an `ast::LitKind`, without referring to an adjacent
`token::Lit`. It's a prerequisite for the next commit.
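A rough before/after sketch of the shape change, using placeholder types (the real fields use interned `Symbol`s, real `Span`s, and `token::LitKind`; the names here are illustrative only):

```
#![allow(dead_code)]

// Placeholder types so this sketch compiles on its own.
type Symbol = String;
struct Span;
struct TokenLitKind; // token-level literal kind (Str, Int, ...)
enum LitKind { Bool(bool), Int(u128), Str(String) } // semantic kind

// Before: MetaItemLit wrapped a whole token::Lit, so the literal kind was
// effectively stored twice (once as the token-level kind, once as LitKind).
struct TokenLit { kind: TokenLitKind, symbol: Symbol, suffix: Option<Symbol> }
struct MetaItemLitBefore { token_lit: TokenLit, kind: LitKind, span: Span }

// After: only `symbol` and `suffix` are kept from the token; the kind is
// represented once, by the semantic LitKind.
struct MetaItemLitAfter { symbol: Symbol, suffix: Option<Symbol>, kind: LitKind, span: Span }

fn main() {}
```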
Lower them into a single item with multiple resolutions instead.
This also allows removing the additional `NodeId`s and `DefId`s related to those extra items.
There is code for converting `Attribute` (syntactic) to `MetaItem`
(semantic). There is also code for the reverse direction. The reverse
direction isn't really necessary; it's currently only used when
generating attributes, e.g. in `derive` code.
This commit adds some new functions for creating `Attribute`s directly,
without involving `MetaItem`s: `mk_attr_word`, `mk_attr_name_value_str`,
`mk_attr_nested_word`, and
`ExtCtxt::attr_{word,name_value_str,nested_word}`.
These new methods replace the old functions for creating `Attribute`s:
`mk_attr_inner`, `mk_attr_outer`, and `ExtCtxt::attribute`. Those
functions took `MetaItem`s as input, and relied on many other functions
that created `MetaItem`s, which are also removed: `mk_name_value_item`,
`mk_list_item`, `mk_word_item`, `mk_nested_word_item`,
`{MetaItem,MetaItemKind,NestedMetaItem}::token_trees`,
`MetaItemKind::attr_args`, `MetaItemLit::{from_lit_kind,to_token}`,
`ExtCtxt::meta_word`.
Overall this cuts more than 100 lines of code and makes things simpler.
`Lit::from_included_bytes` calls `Lit::from_lit_kind`, but the two call
sites only need the resulting `token::Lit`, not the full `ast::Lit`.
This commit changes those call sites to use `LitKind::to_token_lit`,
which means `from_included_bytes` can be removed.
`MacArgs` is an enum with three variants: `Empty`, `Delimited`, and `Eq`. It's
used in two ways:
- For representing attribute macro arguments (e.g. in `AttrItem`), where all
three variants are used.
- For representing function-like macros (e.g. in `MacCall` and `MacroDef`),
where only the `Delimited` variant is used.
In other words, `MacArgs` serves two quite different purposes that only
partially overlap. I find this makes the code hard to read. It also leads
to various unreachable code paths, and allows invalid values (such as
accidentally using `MacArgs::Empty` in a `MacCall`).
This commit splits `MacArgs` in two:
- `DelimArgs` is a new struct just for the "delimited arguments" case. It is
now used in `MacCall` and `MacroDef`.
- `AttrArgs` is a renaming of the old `MacArgs` enum for the attribute macro
case. Its `Delimited` variant now contains a `DelimArgs`.
Various other related things are renamed as well.
These changes make the code clearer, avoid several unreachable paths, and
disallow the invalid values.
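A simplified sketch of the resulting split (the real definitions live in `rustc_ast` and their exact fields and variants differ; the placeholder types here only make the shape concrete):

```
#![allow(dead_code)]

// Placeholders for the token-level pieces the real types carry.
struct DelimSpan;
struct Delimiter;
struct TokenStream;
struct Expr;
struct Span;

// DelimArgs: just the "delimited arguments" case, now used directly by
// MacCall and MacroDef, which never need the other variants.
struct DelimArgs {
    dspan: DelimSpan,
    delim: Delimiter,
    tokens: TokenStream,
}

// AttrArgs: the attribute-argument cases; Delimited now wraps a DelimArgs.
enum AttrArgs {
    Empty,                // e.g. `#[inline]`
    Delimited(DelimArgs), // e.g. `#[derive(Clone)]`
    Eq(Span, Expr),       // e.g. `#[doc = "..."]` (simplified here)
}

fn main() {}
```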
Instead of `ast::Lit`.
Literal lowering now happens at two different times. Expression literals
are lowered when HIR is created. Attribute literals are lowered during
parsing.
This commit changes the language very slightly. Some programs that used
to not compile now will compile. This is because some invalid literals
that are removed by `cfg` or attribute macros will no longer trigger
errors. See this comment for more details:
https://github.com/rust-lang/rust/pull/102944#issuecomment-1277476773
Recover wrong-cased keywords that start items
(_this pr was inspired by [this tweet](https://twitter.com/Azumanga/status/1552982326409367561)_)
r? `@estebank`
We've talked a bit about this recovery, but I just wanted to make sure that this is the right approach :)
For now I've only added the case-insensitive recovery to `use`s, since most other items, like `impl` blocks, modules, and functions, can start with multiple keywords, which complicates the matter.
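A hedged, standalone sketch of the shape of such a check; the real recovery lives in `rustc_parse` and works on tokens, spans, and structured diagnostics rather than plain strings:

```
// A case-insensitive check like this lets the parser notice `Use` or `USE`,
// emit a targeted "keyword has the wrong case" error, and keep parsing the
// declaration as if `use` had been written.
fn is_wrong_cased_use(ident: &str) -> bool {
    ident != "use" && ident.eq_ignore_ascii_case("use")
}

fn main() {
    assert!(is_wrong_cased_use("Use"));
    assert!(is_wrong_cased_use("USE"));
    assert!(!is_wrong_cased_use("use"));  // correct case, no recovery needed
    assert!(!is_wrong_cased_use("user")); // a different identifier entirely
}
```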
PR #98758 introduced code to avoid redundant assertions in derived code
like this:
```
let _: ::core::clone::AssertParamIsClone<u32>;
let _: ::core::clone::AssertParamIsClone<u32>;
```
But the predicate `is_simple_path` introduced as part of this failed to
account for generic arguments. Therefore the deriving code erroneously
considers types like `Option<bool>` and `Option<f32>` to be the same.
This commit fixes `is_simple_path`.
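To make the failure mode concrete, here is a hedged example of the situation described above (the expansion shown in the comments is paraphrased, not the exact derive output):

```
// With the buggy `is_simple_path`, the two field types below were treated as
// the same path (both `Option<...>`, generic arguments ignored), so only one
// of the derive-generated assertions was kept.
#[allow(dead_code)]
#[derive(Clone, Copy)]
struct S {
    a: Option<bool>,
    b: Option<f32>,
}

// Roughly what derive(Clone) emits for a Copy type (simplified): one
// assertion per distinct field type, so both lines must survive deduplication.
//
//     let _: ::core::clone::AssertParamIsClone<Option<bool>>;
//     let _: ::core::clone::AssertParamIsClone<Option<f32>>;

fn main() {}
```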
Fixes #103157.
Remove `TokenStreamBuilder`
`TokenStreamBuilder` is used to combine multiple token streams. It can be removed, leaving the code a little simpler and a little faster.
r? `@Aaron1011`
`TokenStreamBuilder` exists to concatenate multiple `TokenStream`s
together. This commit removes it, and moves the concatenation
functionality directly into `TokenStream`, via two new methods
`push_tree` and `push_stream`. This makes things both simpler and
faster.
`push_tree` is particularly important. `TokenStreamBuilder` only had a
single `push` method, which pushed a stream. But in practice most of the
time we push a single token tree rather than a stream, and `push_tree`
avoids the need to build a token stream with a single entry (which
requires two allocations, one for the `Lrc` and one for the `Vec`).
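A standalone sketch of why `push_tree` helps, using simplified stand-ins for the real types (the actual `TokenStream` uses `Lrc` and much richer token trees; `Rc` and `char` stand in here):

```
#![allow(dead_code)]
use std::rc::Rc;

#[derive(Clone)]
enum TokenTree {
    Token(char), // stand-in for a real token
}

// Simplified TokenStream: a shared vector of trees.
#[derive(Clone, Default)]
struct TokenStream(Rc<Vec<TokenTree>>);

impl TokenStream {
    // Push a single tree: no need to allocate a one-element stream first.
    fn push_tree(&mut self, tt: TokenTree) {
        Rc::make_mut(&mut self.0).push(tt);
    }

    // Push a whole stream: extend with its trees.
    fn push_stream(&mut self, other: TokenStream) {
        Rc::make_mut(&mut self.0).extend(other.0.iter().cloned());
    }
}

fn main() {
    let mut ts = TokenStream::default();
    // The common case: appending one tree at a time.
    ts.push_tree(TokenTree::Token('a'));
    // The rarer case: appending an entire stream.
    let mut other = TokenStream::default();
    other.push_tree(TokenTree::Token('b'));
    ts.push_stream(other);
    assert_eq!(ts.0.len(), 2);
}
```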
The main `push_tree` use arises from a change to one of the `ToInternal`
impls in `proc_macro_server.rs`. It now returns a `SmallVec` instead of
a `TokenStream`. This return value is then iterated over by
`concat_trees`, which does `push_tree` on each element. Furthermore, the
use of `SmallVec` avoids more allocations, because there is always only
one or two token trees.
Note: the removed `TokenStreamBuilder::push` method had some code to
deal with a quadratic blowup case from #57735. This commit removes the
code. I tried and failed to reproduce the blowup from that PR, before
and after this change. Various other changes have happened to
`TokenStreamBuilder` in the meantime, so I suspect the original problem
is no longer relevant, though I don't have proof of this. Generally
speaking, repeatedly extending a `Vec` without pre-determining its
capacity is *not* quadratic. It's also incredibly common, within rustc
and many other Rust programs, so if there were performance problems
there you'd think it would show up in other places, too.
Group together more size assertions.
Also add a few more assertions for some relevant token-related types.
And fix an erroneous comment in `rustc_errors`.
r? `@lqd`
On later stages, the feature is already stable.
Result of running:
rg -l "feature.let_else" compiler/ src/librustdoc/ library/ | xargs sed -s -i "s#\\[feature.let_else#\\[cfg_attr\\(bootstrap, feature\\(let_else\\)#"
make `mk_attr_id` part of `ParseSess`
Updates #48685
The current `mk_attr_id` uses the `AtomicU32` type, which is not very efficient and adds a lot of lock contention in a parallel environment.
This PR refers to the task list in #48685, uses `mk_attr_id` as a method of the `AttrIdGenerator` struct, and adds a new field `attr_id_generator` to `ParseSess`.
`AttrIdGenerator` uses `WorkerLocal`, which has two advantages:
1. `Cell` is more efficient than `AtomicU32` and does not add any lock contention.
2. We put the index of the worker thread in the first few bits of the generated `AttrId`, so `AttrId`s generated on different threads are easily guaranteed to be unique.
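A rough standalone sketch of the scheme; the real code uses rustc's `WorkerLocal` and `AttrId` types, and the exact bit layout below is an assumption for illustration only:

```
use std::cell::Cell;

// Assumed layout for illustration: the worker index occupies the top bits,
// a per-thread counter fills the rest, so ids minted by different workers
// can never collide without any shared atomic or lock.
const WORKER_BITS: u32 = 8;

thread_local! {
    // Stand-in for WorkerLocal<Cell<u32>>: each worker has its own counter.
    static NEXT_ATTR_ID: Cell<u32> = Cell::new(0);
}

fn mk_attr_id(worker_index: u32) -> u32 {
    NEXT_ATTR_ID.with(|next| {
        let count = next.get();
        next.set(count + 1);
        assert!(count < (1 << (32 - WORKER_BITS)), "too many attribute ids");
        (worker_index << (32 - WORKER_BITS)) | count
    })
}

fn main() {
    // Ids from the same worker differ in the counter bits; ids tagged with
    // different worker indexes differ in the worker bits.
    let a = mk_attr_id(0);
    let b = mk_attr_id(0);
    let c = mk_attr_id(1);
    assert!(a != b && a != c && b != c);
}
```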
cc `@cjgillot`
Initial implementation of dyn*
This PR adds extremely basic and incomplete support for [dyn*](https://smallcultfollowing.com/babysteps//blog/2022/03/29/dyn-can-we-make-dyn-sized/). The goal is to get something in tree behind a flag to make collaboration easier, and also to make sure the implementation so far is not unreasonable. This PR does quite a few things:
* Introduce `dyn_star` feature flag
* Adds parsing for `dyn* Trait` types
* Defines `dyn* Trait` as a sized type
* Adds support for explicit casts, like `42usize as dyn* Debug`
* Including const evaluation of such casts
* Adds codegen for drop glue so things are cleaned up properly when a `dyn* Trait` object goes out of scope
* Adds codegen for method calls, at least for methods that take `&self`
Quite a bit is still missing, but this gives us a starting point. Note that this is never intended to become stable surface syntax for Rust, but rather `dyn*` is planned to be used as an implementation detail for async functions in dyn traits.
Joint work with `@nikomatsakis` and `@compiler-errors`.
r? `@bjorn3`
The primary purpose of this commit is to introduce the `dyn_star` feature
flag so we can begin experimenting with the implementation.
In order to have something to do in the feature gate test, we also add
parser support for `dyn* Trait` objects. These are currently treated
just like `dyn Trait` objects, but this will change in the future.
Note that for now `dyn* Trait` is experimental syntax to enable
implementing some of the machinery needed for async fn in dyn traits
without fully supporting the feature.
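For reference, a hedged sketch of the kind of code the flag gates, matching the cast example listed above (nightly-only and experimental at the time of this PR; the syntax and semantics may have changed since, so this may not compile on later toolchains):

```
// Nightly-only sketch: requires the dyn_star feature flag added here.
#![feature(dyn_star)]
#![allow(incomplete_features)]

use std::fmt::Debug;

fn main() {
    // The explicit cast form described above: a usize-sized value cast to a
    // dyn* object, which is itself a sized type.
    let _value: dyn* Debug = 42usize as dyn* Debug;
}
```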