ci: pin awscli dependencies
docutils 0.15, a dependency of awscli, broke our CI since it's not compatible with Python 2 due to a bug. This pins all of awscli's dependencies, with docutils at 0.14, to make sure this kind of regression doesn't happen again.
r? @Mark-Simulacrum @alexcrichton
The essence of lexer
cc @eddyb
I would love to make a reusable library to lex Rust code, which could be used by rustc, rust-analyzer, proc-macros, etc. This **draft** PR is my attempt at the API. Currently, the PR uses the new lexer to lex comments and the shebang, while using the old lexer for everything else. This should be enough to agree on the API though!
### High-level picture
A `rust_lexer` crate is introduced, with zero or minimal (for XID_Start and other Unicode tables) dependencies. This crate basically exposes a single function: `next_token(&str) -> (TokenKind, usize)`, which returns the first token of a non-empty string (the `usize` is the length of the token). The main goal of the API is to be minimal. Concerns that are not strictly essential, like string interning, are left to the clients.
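A minimal sketch of what that surface could look like; the `TokenKind` variants shown are illustrative, not the full set:

```rust
/// Illustrative subset of token kinds; the real enum would have one
/// variant per Rust token.
pub enum TokenKind {
    LineComment,
    BlockComment { terminated: bool },
    Whitespace,
    Ident,
    // ...
}

/// Returns the first token of a non-empty string, together with its
/// length in bytes.
pub fn next_token(input: &str) -> (TokenKind, usize) {
    assert!(!input.is_empty());
    // actual lexing logic elided
    unimplemented!()
}
```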
### Finer Points
#### Iterator API
We probably should expose a convenience function `fn tokenize(&str) -> impl Iterator<Item = Token>`.
EDIT: I've added `tokenize`
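A minimal sketch of how `tokenize` can be layered on top of `next_token`; the exact shape of the `Token` struct is an assumption:

```rust
/// A token kind paired with the token's length in bytes.
pub struct Token {
    pub kind: TokenKind,
    pub len: usize,
}

/// Lexes a whole string by repeatedly calling `next_token`.
pub fn tokenize(mut input: &str) -> impl Iterator<Item = Token> + '_ {
    std::iter::from_fn(move || {
        if input.is_empty() {
            return None;
        }
        let (kind, len) = next_token(input);
        input = &input[len..];
        Some(Token { kind, len })
    })
}
```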
#### Error handling
The lexer itself provides only a minimal amount of error detection and reporting. Additionally, it never raises a fatal error and always produces some non-empty token. Examples of errors detected by the lexer:
* unterminated block comment
* unterminated string literal
Example of errors **not** detected by the lexer:
* invalid escape sequence in a string literal
* out of range integer literal
* bare `\r` in a doc comment
The idea is that the clients are responsible for additional validation of tokens. This is the mode an IDE operates in: you want to skip validation for library files, because you are not showing errors there anyway, while for user code you want deep validation with quick fixes and suggestions, which does not really fit inside the lexer itself.
In particular, in this PR an unclosed `/*` comment is handled by the new lexer, while bare `\r` and the distinction between doc and non-doc comments are handled by the old lexer.
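As an example, a client that does care about diagnostics might post-validate tokens along these lines (the flag name follows the sketch above; the diagnostic wording is an assumption):

```rust
/// Sketch of client-side validation on top of the error-agnostic lexer.
fn validate(kind: &TokenKind, text: &str) {
    match kind {
        TokenKind::BlockComment { terminated: false } => {
            // The lexer only records the fact; the client turns it into
            // a real diagnostic with a span and a suggestion.
            eprintln!("error: unterminated block comment");
        }
        _ => {
            // Check escape sequences, literal ranges, bare `\r`, ...
            let _ = text;
        }
    }
}
```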
#### Performance
No attempt at performance measurement has been made so far :) I think it is acceptable to regress perf here a bit in exchange for cleaner code, and I hope the regression won't be too costly. In particular, because we validate tokens separately, we'll have to do one more pass over some of the tokens. I hope this is not a prohibitive cost. For example, for doc comments we already do two passes (lexing + interning), so adding a third one shouldn't be that much slower (and we also do an additional pass for UTF-8 validation). And lexing is hopefully not a bottleneck. Note that for IDEs separate validation might actually improve performance, because we will be able to skip validation when, for example, computing completions.
Long term, I hope that this approach will allow for *better* performance. If we separate out pure lexing, in the future we can code-gen a super-optimized state machine that walks UTF-8 directly, instead of the current manual char-by-char toil.
#### Cursor API
For implementation, I am going slightly unconventionally. Instead of defining a `Lexer` struct with a bunch of helper methods (`current`, `bump`) and a bunch of lexing methods (`lex_comment`, `lex_whitespace`), I define a `Cursor` struct which has only helpers, and define a top-level function with a `&mut Cursor` argument for each grammar production. I find this C-style more readable for parsers and lexers.
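Roughly, the shape I have in mind (all names illustrative):

```rust
/// Cursor carries only generic helpers, no lexing logic.
pub struct Cursor<'a> {
    chars: std::str::Chars<'a>,
}

impl<'a> Cursor<'a> {
    /// Peeks the next char without consuming it ('\0' at end of input).
    fn first(&self) -> char {
        self.chars.clone().next().unwrap_or('\0')
    }
    /// Consumes and returns the next char.
    fn bump(&mut self) -> Option<char> {
        self.chars.next()
    }
}

/// One top-level function per grammar production.
fn line_comment(cursor: &mut Cursor<'_>) -> TokenKind {
    // Consume everything up to, but not including, the newline.
    while cursor.first() != '\n' && cursor.bump().is_some() {}
    TokenKind::LineComment
}
```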
EDIT: switched to a more conventional setup with lexing methods.
So, what do folks think about this?
The idea here is to make a reusable library out of the existing
rust-lexer by separating out pure lexing from rustc-specific concerns,
like spans, error reporting, and interning.
So, rustc_lexer operates directly on `&str`, produces simple tokens
(a pair of a type tag and the length of the original text), and does
not report errors, instead storing them as flags on the token.
Specific error for positional args after named args in `format!()`
When writing positional arguments after named arguments in the
`format!()` and `println!()` macros, provide a targeted diagnostic.
Follow up to https://github.com/rust-lang/rust/pull/57522/files#r247278885
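For instance, code like the following now gets a targeted message (the exact wording shown is an assumption):

```rust
fn main() {
    // The positional argument `2` comes after the named `value = 1`:
    // error: positional arguments cannot follow named arguments
    println!("{value} {}", value = 1, 2);
}
```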
Add meta-variable checks in macro definitions
This is an implementation of #61053. It is not sound (some errors are not reported) and not complete (some reports may not be actual errors). This is due to the possibility of defining macros within macros in indirect ways. See the module documentation of `macro_check` for more details.
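Two illustrative examples of the kinds of definitions the checks target (the first case is an assumption based on #61053; the second matches the kleene-operator diagnostic added here):

```rust
macro_rules! unbound {
    // `$j` is never bound on the left-hand side.
    ( $( $i:ident ),* ) => { $j };
}

macro_rules! kleene_mismatch {
    // `$i` is bound with `+` but repeated with `*`.
    ( $( $i:ident ),+ ) => { $( $i )* };
}
```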
What remains to be done:
- [x] Migrate from an error to an allow-by-default lint.
- [x] Add more comments in particular for the handling of nested macros.
- [x] Add more tests if needed.
- [x] Try to avoid cloning too much (one idea is to use lists on the stack).
- [ ] Run crater with deny-by-default lint (measure rate of false positives).
- [ ] Remove extra commit for deny-by-default lint
- [x] Create a PR to remove the old `question_mark_macro_sep` lint #62160
libsyntax: Rename `Mark` into `ExpnId`
"`Mark`" is an ID that identifies both a macro invocation (`foo!()`), and expansion process, and expansion result of that macro invocation.
The problem is that it's pretty hard to infer that from its name.
This PR renames it into `ExpnId` reflecting its meaning in most contexts.
(The contexts where it's meaning is closer to "macro invocation ID" are rarer.)
I've kept "mark" in the names of functions that add or remove marks to/from syntactic contexts, those marks are not just expansion IDs, but something more complex.
This is needed for complete error messages when reporting macro meta-variable
errors. Here is what they would look like:
```
error: meta-variable repeats with different kleene operator
  --> $DIR/issue-61053-different-kleene.rs:3:57
   |
LL | ( $( $i:ident = $($j:ident),+ );* ) => { $( $( $i = $j; )* )* };
   |                             - expected repetition   ^^ - conflicting repetition
```
azure: Prepare configuration for 4-core machines
This commit updates some of our assorted Azure/CI configuration to
prepare for some 4-core machines coming online. We're still in the
process of performance testing them to get final numbers, but some
changes are worth landing ahead of this. The updates here are:
* Use `C:/` instead of `D:/` for submodule checkout since it should have
plenty of space and the 4-core machines won't have `D:/`
* Update `lzma-sys` to 0.1.14 which has support for VS2019, where 0.1.10
doesn't.
* Update `src/ci/docker/run.sh` to work when it itself is running inside
of a docker container (see the comment in the file for more info)
* Print step timings on the `try` branch in addition to the `auto`
branch. The logs there should be seen by similarly many humans (not
many) and can be useful for performance analysis after a `try` build
runs.
* Install the WIX and InnoSetup tools manually on Windows instead of
relying on pre-installed copies on the VM. This gives us more control
over what's being used on the Azure cloud right now (we control the
version) and on the 4-core machines these won't be pre-installed. Note
that on AppVeyor we actually already were installing InnoSetup; we
just didn't carry that over to Azure!
Add an `after_expansion` callback in the driver
To format a given file, RLS needs to know the Rust edition associated with it. It's not enough to look at the `edition` key in Cargo.toml: each crate target can have a different edition associated with it, so the only sure way to fetch the correct edition is to scan the input files used to compile a given crate target.
Until now this was done in the `after_analysis` callback of our shim; however, this leads to other problems: if a crate cannot be successfully compiled (e.g. it has a type error), the callback is not invoked, meaning we can't populate the file -> edition mapping.
However, doing this only after parsing is not enough, since expansion can pull in additional source files (e.g. by invoking `macro_rules! include_my_mod { () => { mod some_mod; }; }`).
Without copy-pasting the entire driver setup, it's also not possible to expand the crate ourselves in the `after_parsing` callback: to expand the crate we'd need to register plugins and initialize the session ourselves. However, this is normally done after executing the callback itself, so doing it in the callback triggers the `Once::set` assertions in `Session::init_features`.
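A sketch of how a shim might use the new hook; this is rustc's unstable internal interface, so the exact trait and method signatures here are assumptions based on the current `rustc_driver`:

```rust
use rustc_driver::{Callbacks, Compilation};
use rustc_interface::interface::Compiler;

struct Shim;

impl Callbacks for Shim {
    // Runs after expansion, so source files pulled in by macro-expanded
    // `mod` items are already part of the crate.
    fn after_expansion(&mut self, compiler: &Compiler) -> Compilation {
        // Populate the file -> edition mapping here, e.g. by walking
        // the session's source map.
        let _edition = compiler.session().edition();
        Compilation::Continue
    }
}
```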
r? @Zoxc
cc @RalfJung @oli-obk this affects public driver interface used by Miri and Clippy
rustc_typeck: improve diagnostics for -> _ fn return type
If I understand correctly, this should implement the mentioned issue.
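An example of the case in question; the exact diagnostic output is an assumption, but the idea is to suggest the concrete return type:

```rust
// error[E0121]: the type placeholder `_` is not allowed within types
// on item signatures
// with this PR, the diagnostic can suggest the inferred type instead,
// e.g. `-> (i32, i32)`
fn make_pair() -> _ {
    (5, 5)
}
```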
~~I'm not sure if there is a better way than `get_infer_ret_ty` to get/check the return type without code duplication.~~
~~Also, is this unwrap okay: `ty::Binder::bind(*tables.liberated_fn_sigs().get(hir_id).unwrap())`?~~
r? @eddyb
Closes: https://github.com/rust-lang/rust/issues/56132
resolve: Improve candidate search for unresolved macro suggestions
Use same scope visiting machinery for both collecting suggestion candidates and actually resolving the names.
The PR is best read commit by commit, with whitespace changes ignored (the first commit in particular moves some code around).
This should be the last pre-requisite for https://github.com/rust-lang/rust/pull/62086.
r? @davidtwco
Rollup of 15 pull requests
Successful merges:
- #61926 (Fix hyperlinks in From impls between Vec and VecDeque)
- #62615 (Only error about MSVC + PGO + unwind if we're generating code)
- #62696 (Check that trait is exported or public before adding hint)
- #62712 (Update the help message on error for self type)
- #62728 (Fix repeated wording in slice documentation)
- #62730 (Consolidate hygiene tests)
- #62732 (Remove last use of mem::uninitialized from std::io::util)
- #62740 (Add missing link to Infallible in TryFrom doc)
- #62745 (update data_layout and features for armv7-wrs-vxworks)
- #62749 (Document link_section arbitrary bytes)
- #62752 (Disable Z3 in LLVM build)
- #62764 (normalize use of backticks in compiler messages for librustc/lint)
- #62774 (Disable simd_select_bitmask test on big endian)
- #62777 (Self-referencial type now called a recursive type)
- #62778 (Emit artifact notifications for dependency files)
Failed merges:
- #62746 (do not use mem::uninitialized in std::io)
r? @ghost
Disable simd_select_bitmask test on big endian
Per #59356 it is expected that the interpretation of the bitmask depends
on target endianness.
Closes #59356