//! # Token Streams
//!
//! `TokenStream`s represent syntactic objects before they are converted into ASTs.
//! A `TokenStream` is, roughly speaking, a sequence of [`TokenTree`]s,
//! which are themselves a single [`Token`] or a `Delimited` subsequence of tokens.
//!
//! ## Ownership
//!
//! `TokenStream`s are persistent data structures constructed as ropes with
//! reference-counted children. In general, this means that calling an operation
//! on a `TokenStream` (such as `slice`) produces an entirely new `TokenStream`
//! from the borrowed reference to the original. This essentially coerces
//! `TokenStream`s into "views" of their subparts, and a borrowed `TokenStream`
//! is sufficient to build an owned `TokenStream` without taking ownership of
//! the original.
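//!
//! A minimal sketch of that borrowed-to-owned pattern (hypothetical usage; the
//! helper name is illustrative only):
//!
//! ```ignore (illustrative)
//! fn to_owned_stream(borrowed: &TokenStream) -> TokenStream {
//!     // Cloning only bumps the reference count of the shared underlying
//!     // vector; no token trees are copied.
//!     borrowed.clone()
//! }
//! ```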

use crate::token::{self, DelimToken, Token, TokenKind};

use rustc_data_structures::stable_hasher::{HashStable, StableHasher};
use rustc_data_structures::sync::{self, Lrc};
use rustc_macros::HashStable_Generic;
use rustc_serialize::{Decodable, Decoder, Encodable, Encoder};
use rustc_span::{Span, DUMMY_SP};

use smallvec::{smallvec, SmallVec};

use std::{fmt, iter, mem};

/// When the main Rust parser encounters a syntax-extension invocation, it
/// parses the arguments to the invocation as a token tree. This is a very
/// loose structure, such that all sorts of different AST fragments can
/// be passed to syntax extensions using a uniform type.
///
/// If the syntax extension is an MBE macro, it will attempt to match its
/// LHS token tree against the provided token tree, and if it finds a
/// match, will transcribe the RHS token tree, splicing in any captured
/// `macro_parser::matched_nonterminals` into the `SubstNt`s it finds.
///
/// The RHS of an MBE macro is the only place `SubstNt`s are substituted.
/// Nothing special happens to misnamed or misplaced `SubstNt`s.
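///
/// A minimal sketch of constructing token trees by hand (hypothetical usage;
/// `DUMMY_SP` and `DelimSpan::dummy()` stand in for real spans):
///
/// ```ignore (illustrative)
/// use rustc_ast::token;
/// use rustc_ast::tokenstream::{DelimSpan, TokenStream, TokenTree};
/// use rustc_span::DUMMY_SP;
///
/// // A single `,` token.
/// let comma = TokenTree::token(token::Comma, DUMMY_SP);
/// // An empty parenthesized group: `()`.
/// let parens = TokenTree::Delimited(
///     DelimSpan::dummy(),
///     token::DelimToken::Paren,
///     TokenStream::default(),
/// );
/// ```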
#[derive(Debug, Clone, PartialEq, Encodable, Decodable, HashStable_Generic)]
pub enum TokenTree {
    /// A single token.
    Token(Token),
    /// A delimited sequence of token trees.
    Delimited(DelimSpan, DelimToken, TokenStream),
}
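
/// Indicates whether a token stream may be synthesized for an AST node whose
/// real tokens were not captured during parsing.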
#[derive(Copy, Clone)]
pub enum CanSynthesizeMissingTokens {
    Yes,
    No,
}

// Ensure all fields of `TokenTree` are `Send` and `Sync`.
#[cfg(parallel_compiler)]
fn _dummy()
where
    Token: Send + Sync,
    DelimSpan: Send + Sync,
    DelimToken: Send + Sync,
    TokenStream: Send + Sync,
{
}

impl TokenTree {
    /// Checks if this `TokenTree` is equal to the other, regardless of span information.
    pub fn eq_unspanned(&self, other: &TokenTree) -> bool {
        match (self, other) {
            (TokenTree::Token(token), TokenTree::Token(token2)) => token.kind == token2.kind,
            (TokenTree::Delimited(_, delim, tts), TokenTree::Delimited(_, delim2, tts2)) => {
                delim == delim2 && tts.eq_unspanned(&tts2)
            }
            _ => false,
        }
    }

    /// Retrieves the `TokenTree`'s span.
    pub fn span(&self) -> Span {
        match self {
            TokenTree::Token(token) => token.span,
            TokenTree::Delimited(sp, ..) => sp.entire(),
        }
    }

    /// Modifies the `TokenTree`'s span in place.
    pub fn set_span(&mut self, span: Span) {
        match self {
            TokenTree::Token(token) => token.span = span,
            TokenTree::Delimited(dspan, ..) => *dspan = DelimSpan::from_single(span),
        }
    }

    pub fn token(kind: TokenKind, span: Span) -> TokenTree {
        TokenTree::Token(Token::new(kind, span))
    }

    /// Returns the opening delimiter as a token tree.
    pub fn open_tt(span: DelimSpan, delim: DelimToken) -> TokenTree {
        TokenTree::token(token::OpenDelim(delim), span.open)
    }

    /// Returns the closing delimiter as a token tree.
    pub fn close_tt(span: DelimSpan, delim: DelimToken) -> TokenTree {
        TokenTree::token(token::CloseDelim(delim), span.close)
    }
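
    /// Converts an interpolated identifier or lifetime token into a real one
    /// (see [`Token::uninterpolate`]); any other `TokenTree` is returned
    /// unchanged.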
    pub fn uninterpolate(self) -> TokenTree {
        match self {
            TokenTree::Token(token) => TokenTree::Token(token.uninterpolate().into_owned()),
            tt => tt,
        }
    }
}

impl<CTX> HashStable<CTX> for TokenStream
where
    CTX: crate::HashStableContext,
{
    fn hash_stable(&self, hcx: &mut CTX, hasher: &mut StableHasher) {
        for sub_tt in self.trees() {
            sub_tt.hash_stable(hcx, hasher);
        }
    }
}

pub trait CreateTokenStream: sync::Send + sync::Sync {
    fn create_token_stream(&self) -> TokenStream;
}

impl CreateTokenStream for TokenStream {
    fn create_token_stream(&self) -> TokenStream {
        self.clone()
    }
}

/// A lazy version of [`TokenStream`], which defers creation
/// of an actual `TokenStream` until it is needed.
/// `Box` is here only to reduce the structure size.
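///
/// A minimal sketch (hypothetical usage; `TokenStream` itself implements
/// `CreateTokenStream` by cloning, so an existing stream can be wrapped
/// directly):
///
/// ```ignore (illustrative)
/// let lazy = LazyTokenStream::new(TokenStream::default());
/// // The actual stream is only materialized here.
/// let stream = lazy.create_token_stream();
/// ```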
#[derive(Clone)]
pub struct LazyTokenStream(Lrc<Box<dyn CreateTokenStream>>);

impl LazyTokenStream {
    pub fn new(inner: impl CreateTokenStream + 'static) -> LazyTokenStream {
        LazyTokenStream(Lrc::new(Box::new(inner)))
    }

    pub fn create_token_stream(&self) -> TokenStream {
        self.0.create_token_stream()
    }
}

impl fmt::Debug for LazyTokenStream {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt("LazyTokenStream", f)
    }
}

impl<S: Encoder> Encodable<S> for LazyTokenStream {
    fn encode(&self, s: &mut S) -> Result<(), S::Error> {
        // Used by AST JSON printing.
        Encodable::encode(&self.create_token_stream(), s)
    }
}

impl<D: Decoder> Decodable<D> for LazyTokenStream {
    fn decode(_d: &mut D) -> Result<Self, D::Error> {
        panic!("Attempted to decode LazyTokenStream");
    }
}

impl<CTX> HashStable<CTX> for LazyTokenStream {
    fn hash_stable(&self, _hcx: &mut CTX, _hasher: &mut StableHasher) {
        panic!("Attempted to compute stable hash for LazyTokenStream");
    }
}

/// A `TokenStream` is an abstract sequence of tokens, organized into [`TokenTree`]s.
///
/// The goal is for procedural macros to work with `TokenStream`s and `TokenTree`s
/// instead of a representation of the abstract syntax tree.
/// Today's `TokenTree`s can still contain AST via `token::Interpolated` for
/// backwards compatibility.
#[derive(Clone, Debug, Default, Encodable, Decodable)]
pub struct TokenStream(pub(crate) Lrc<Vec<TreeAndSpacing>>);

pub type TreeAndSpacing = (TokenTree, Spacing);

// `TokenStream` is used a lot. Make sure it doesn't unintentionally get bigger.
#[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]
rustc_data_structures::static_assert_size!(TokenStream, 8);
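
/// Indicates whether a token is immediately followed by the next token in the
/// stream (`Joint`, allowing the two to be glued together, as in `==`) or
/// separated from it (`Alone`).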
#[derive(Clone, Copy, Debug, PartialEq, Encodable, Decodable)]
pub enum Spacing {
    Alone,
    Joint,
}

impl TokenStream {
    /// Given a `TokenStream` containing two arguments that are not separated
    /// by a comma, returns a new `TokenStream` with a comma inserted between
    /// them, for diagnostic suggestions.
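    ///
    /// A minimal sketch (hypothetical usage; assumes `stream` was parsed from
    /// `a b`):
    ///
    /// ```ignore (illustrative)
    /// if let Some((with_comma, comma_span)) = stream.add_comma() {
    ///     // `with_comma` now corresponds to `a, b`, and `comma_span` points
    ///     // at the inserted comma, for use in a diagnostic suggestion.
    /// }
    /// ```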
    pub fn add_comma(&self) -> Option<(TokenStream, Span)> {
        // Used to suggest if a user writes `foo!(a b);`
        let mut suggestion = None;
        let mut iter = self.0.iter().enumerate().peekable();
        while let Some((pos, ts)) = iter.next() {
            if let Some((_, next)) = iter.peek() {
                let sp = match (&ts, &next) {
                    (_, (TokenTree::Token(Token { kind: token::Comma, .. }), _)) => continue,
                    (
                        (TokenTree::Token(token_left), Spacing::Alone),
                        (TokenTree::Token(token_right), _),
                    ) if ((token_left.is_ident() && !token_left.is_reserved_ident())
                        || token_left.is_lit())
                        && ((token_right.is_ident() && !token_right.is_reserved_ident())
                            || token_right.is_lit()) =>
                    {
                        token_left.span
                    }
                    ((TokenTree::Delimited(sp, ..), Spacing::Alone), _) => sp.entire(),
                    _ => continue,
                };
                let sp = sp.shrink_to_hi();
                let comma = (TokenTree::token(token::Comma, sp), Spacing::Alone);
                suggestion = Some((pos, comma, sp));
            }
        }
        if let Some((pos, comma, sp)) = suggestion {
            let mut new_stream = Vec::with_capacity(self.0.len() + 1);
            let parts = self.0.split_at(pos + 1);
            new_stream.extend_from_slice(parts.0);
            new_stream.push(comma);
            new_stream.extend_from_slice(parts.1);
            return Some((TokenStream::new(new_stream), sp));
        }
        None
    }
}

impl From<TokenTree> for TokenStream {
    fn from(tree: TokenTree) -> TokenStream {
        TokenStream::new(vec![(tree, Spacing::Alone)])
    }
}

impl From<TokenTree> for TreeAndSpacing {
    fn from(tree: TokenTree) -> TreeAndSpacing {
        (tree, Spacing::Alone)
    }
}

impl iter::FromIterator<TokenTree> for TokenStream {
    fn from_iter<I: IntoIterator<Item = TokenTree>>(iter: I) -> Self {
        TokenStream::new(iter.into_iter().map(Into::into).collect::<Vec<TreeAndSpacing>>())
    }
}

impl Eq for TokenStream {}

impl PartialEq<TokenStream> for TokenStream {
    fn eq(&self, other: &TokenStream) -> bool {
        self.trees().eq(other.trees())
    }
}

impl TokenStream {
    pub fn new(streams: Vec<TreeAndSpacing>) -> TokenStream {
        TokenStream(Lrc::new(streams))
    }

    pub fn is_empty(&self) -> bool {
        self.0.is_empty()
    }

    pub fn len(&self) -> usize {
        self.0.len()
    }

    pub fn from_streams(mut streams: SmallVec<[TokenStream; 2]>) -> TokenStream {
        match streams.len() {
            0 => TokenStream::default(),
            1 => streams.pop().unwrap(),
            _ => {
                // We are going to extend the first stream in `streams` with
                // the elements from the subsequent streams. This requires
                // using `make_mut()` on the first stream, and in practice this
                // doesn't cause cloning 99.9% of the time.
                //
                // One very common use case is when `streams` has two elements,
                // where the first stream has any number of elements within
                // (often 1, but sometimes many more) and the second stream has
                // a single element within.

                // Determine how much the first stream will be extended.
                // Needed to avoid quadratic blow up from on-the-fly
                // reallocations (#57735).
                let num_appends = streams.iter().skip(1).map(|ts| ts.len()).sum();

                // Get the first stream, which is guaranteed to exist since
                // `streams` has at least two elements here.
                let mut iter = streams.drain(..);
                let mut first_stream_lrc = iter.next().unwrap().0;

                // Append the elements to the first stream, after reserving
                // space for them.
                let first_vec_mut = Lrc::make_mut(&mut first_stream_lrc);
                first_vec_mut.reserve(num_appends);
                for stream in iter {
                    first_vec_mut.extend(stream.0.iter().cloned());
                }

                // Create the final `TokenStream`.
                TokenStream(first_stream_lrc)
            }
        }
    }

    pub fn trees(&self) -> Cursor {
        self.clone().into_trees()
    }

    pub fn into_trees(self) -> Cursor {
        Cursor::new(self)
    }

    /// Compares two `TokenStream`s, checking equality without regarding span information.
    pub fn eq_unspanned(&self, other: &TokenStream) -> bool {
        let mut t1 = self.trees();
        let mut t2 = other.trees();
        for (t1, t2) in iter::zip(&mut t1, &mut t2) {
            if !t1.eq_unspanned(&t2) {
                return false;
            }
        }
        t1.next().is_none() && t2.next().is_none()
    }

    pub fn map_enumerated<F: FnMut(usize, &TokenTree) -> TokenTree>(self, mut f: F) -> TokenStream {
        TokenStream(Lrc::new(
            self.0
                .iter()
                .enumerate()
                .map(|(i, (tree, spacing))| (f(i, tree), *spacing))
                .collect(),
        ))
    }
}

// 99.5%+ of the time we have 1 or 2 elements in this vector.
#[derive(Clone)]
pub struct TokenStreamBuilder(SmallVec<[TokenStream; 2]>);

impl TokenStreamBuilder {
    pub fn new() -> TokenStreamBuilder {
        TokenStreamBuilder(SmallVec::new())
    }
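
    /// Pushes `stream` onto the builder, gluing the last token of the
    /// previously pushed stream to the first token of `stream` where possible
    /// (see [`Token::glue`]).
    ///
    /// A minimal sketch (hypothetical usage; `DUMMY_SP` stands in for real
    /// spans, and the `Joint` spacing on the first `=` is what permits gluing):
    ///
    /// ```ignore (illustrative)
    /// let mut builder = TokenStreamBuilder::new();
    /// builder.push(TokenStream::new(vec![(
    ///     TokenTree::token(token::Eq, DUMMY_SP),
    ///     Spacing::Joint,
    /// )]));
    /// builder.push(TokenStream::from(TokenTree::token(token::Eq, DUMMY_SP)));
    /// // The two `=` tokens are glued into a single `==` token.
    /// let stream = builder.build();
    /// ```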
    pub fn push<T: Into<TokenStream>>(&mut self, stream: T) {
        let mut stream = stream.into();

        // If `self` is not empty and the last tree within the last stream is a
        // token tree marked with `Joint`...
        if let Some(TokenStream(ref mut last_stream_lrc)) = self.0.last_mut() {
            if let Some((TokenTree::Token(last_token), Spacing::Joint)) = last_stream_lrc.last() {
                // ...and `stream` is not empty and the first tree within it is
                // a token tree...
                let TokenStream(ref mut stream_lrc) = stream;
                if let Some((TokenTree::Token(token), spacing)) = stream_lrc.first() {
                    // ...and the two tokens can be glued together...
                    if let Some(glued_tok) = last_token.glue(&token) {
                        // ...then do so, by overwriting the last token
                        // tree in `self` and removing the first token tree
                        // from `stream`. This requires using `make_mut()`
                        // on the last stream in `self` and on `stream`,
                        // and in practice this doesn't cause cloning 99.9%
                        // of the time.

                        // Overwrite the last token tree with the merged
                        // token.
                        let last_vec_mut = Lrc::make_mut(last_stream_lrc);
                        *last_vec_mut.last_mut().unwrap() = (TokenTree::Token(glued_tok), *spacing);

                        // Remove the first token tree from `stream`. (This
                        // is almost always the only tree in `stream`.)
                        let stream_vec_mut = Lrc::make_mut(stream_lrc);
                        stream_vec_mut.remove(0);

                        // Don't push `stream` if it's empty -- that could
                        // block subsequent token gluing, by getting
                        // between two token trees that should be glued
                        // together.
                        if !stream.is_empty() {
                            self.0.push(stream);
                        }
                        return;
                    }
                }
            }
        }
        self.0.push(stream);
    }

    pub fn build(self) -> TokenStream {
        TokenStream::from_streams(self.0)
    }
}

/// By-reference iterator over a [`TokenStream`].
#[derive(Clone)]
pub struct CursorRef<'t> {
    stream: &'t TokenStream,
    index: usize,
}

impl<'t> CursorRef<'t> {
    fn next_with_spacing(&mut self) -> Option<&'t TreeAndSpacing> {
        self.stream.0.get(self.index).map(|tree| {
            self.index += 1;
            tree
        })
    }
}

impl<'t> Iterator for CursorRef<'t> {
    type Item = &'t TokenTree;

    fn next(&mut self) -> Option<&'t TokenTree> {
        self.next_with_spacing().map(|(tree, _)| tree)
    }
}

/// Owning by-value iterator over a [`TokenStream`].
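///
/// A minimal sketch (hypothetical usage; iteration yields `TokenTree`s by
/// value, cloning them out of the underlying stream):
///
/// ```ignore (illustrative)
/// for tree in stream.into_trees() {
///     // Inspect each top-level `TokenTree` of `stream` in turn.
/// }
/// ```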
// FIXME: Many uses of this can be replaced with by-reference iterator to avoid clones.
#[derive(Clone)]
pub struct Cursor {
    pub stream: TokenStream,
    index: usize,
}

impl Iterator for Cursor {
    type Item = TokenTree;

    fn next(&mut self) -> Option<TokenTree> {
        self.next_with_spacing().map(|(tree, _)| tree)
    }
}

impl Cursor {
    fn new(stream: TokenStream) -> Self {
        Cursor { stream, index: 0 }
    }

    pub fn next_with_spacing(&mut self) -> Option<TreeAndSpacing> {
        if self.index < self.stream.len() {
            self.index += 1;
            Some(self.stream.0[self.index - 1].clone())
        } else {
            None
        }
    }

    pub fn append(&mut self, new_stream: TokenStream) {
        if new_stream.is_empty() {
            return;
        }
        let index = self.index;
        let stream = mem::take(&mut self.stream);
        *self = TokenStream::from_streams(smallvec![stream, new_stream]).into_trees();
        self.index = index;
    }

    pub fn look_ahead(&self, n: usize) -> Option<&TokenTree> {
        self.stream.0[self.index..].get(n).map(|(tree, _)| tree)
    }
}

#[derive(Debug, Copy, Clone, PartialEq, Encodable, Decodable, HashStable_Generic)]
pub struct DelimSpan {
    pub open: Span,
    pub close: Span,
}

impl DelimSpan {
    pub fn from_single(sp: Span) -> Self {
        DelimSpan { open: sp, close: sp }
    }

    pub fn from_pair(open: Span, close: Span) -> Self {
        DelimSpan { open, close }
    }

    pub fn dummy() -> Self {
        Self::from_single(DUMMY_SP)
    }
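
    /// Returns a span stretching from the start of the opening delimiter to
    /// the end of the closing one.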
    pub fn entire(self) -> Span {
        self.open.with_hi(self.close.hi())
    }
}