3015: vscode: yet another refactor commit r=matklad a=Veetaha
It compiles, it runs in the dev extension host, it bundles, and it runs when bundled and installed.
Removed 5 lines of code, as you like less code, especially TypeScript code.
Co-authored-by: Veetaha <gerzoh1@gmail.com>
3016: Fix unneeded `.` in `docs/user/README.md` r=kjeremy a=fusillicode
I hope I got the typo right 😅
Thanks a lot for this wonderful project 🙇
Co-authored-by: Gian D <fusillicode@users.noreply.github.com>
2981: vscode: Add ability to call onEnter without overriding "type". r=matklad a=71
Before this PR, the only way to get enhanced typing (right now, only with `onEnter`) was to override VS Code's `type` command. This leads to issues with extensions like [VsCodeVim](https://github.com/VSCodeVim/Vim) that need to override `type` as well.
This PR adds an additional command, `onEnter`. This command can be used with the following keybinding, which allows the user to get smart `onEnter` behavior without overriding `type`.
```json
{
  "key": "enter",
  "command": "rust-analyzer.onEnter",
  "when": "editorTextFocus && editorLangId == rust"
}
```
Co-authored-by: Gregoire Geis <git@gregoirege.is>
Co-authored-by: Grégoire Geis <git@gregoirege.is>
2962: Differentiate underscore alias from named aliases r=matklad a=zombiefungus
Preparation for fixing issue 2736.
(Edited to avoid autoclosing the issue.)
Co-authored-by: zombiefungus <divmermarlav@gmail.com>
2911: Implement collecting errors while tokenizing r=matklad a=Veetaha
Now we are collecting errors from `rustc_lexer` and returning them in `ParsedToken { token, error }` and `ParsedTokens { tokens, errors }` structures **([UPD]: this is now simplified, see the update below)**.
The main changes are introduced in `ra_syntax/parsing/lexer.rs`. It now exposes the following functions and types:
```rust
pub fn tokenize(text: &str) -> ParsedTokens;
pub fn tokenize_append(text: &str, parsed_tokens_to_append_to: &mut ParsedTokens);
pub fn first_token(text: &str) -> Option<ParsedToken>; // allows any number of tokens in text
pub fn single_token(text: &str) -> Option<ParsedToken>; // allows only a single token in text
pub struct ParsedToken { pub token: Token, pub error: Option<SyntaxError> }
pub struct ParsedTokens { pub tokens: Vec<Token>, pub errors: Vec<SyntaxError> }
pub enum TokenizeError { /* Simple enum which reflects rustc_lexer tokenization errors */ }
```
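To make the `first_token` / `single_token` distinction concrete, here is a minimal sketch (not part of the PR; the inputs are made up and the functions above are assumed to be in scope):

```rust
// Sketch only: `first_token` lexes the leading token of arbitrary text,
// while `single_token` succeeds only if the whole text is exactly one token.
fn lexer_api_sketch() {
    // "fn main" lexes into several tokens, so only `first_token` returns Some:
    assert!(first_token("fn main").is_some());
    assert!(single_token("fn main").is_none());

    // A lone identifier is exactly one token, so both calls succeed:
    assert!(single_token("main").is_some());
}
```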
In the first commit I implemented it with iterators, but then decided that, since this crate is ad hoc for `rust-analyzer` and we can clearly see all the places where it is used, it would be better to simplify it to vectors.
This is currently WIP, because I want to add tests for error messages generated by the lexer.
I'd like to hear your thoughts on how to define these tests in the `ra_syntax/test-data` dir.
Related issues: #223
**[UPD]**
After the PR review the API was simplified:
```rust
pub fn tokenize(text: &str) -> (Vec<Token>, Vec<SyntaxError>);
// Neither lex function checks for unescape errors
pub fn lex_single_syntax_kind(text: &str) -> Option<(SyntaxKind, Option<SyntaxError>)>;
pub fn lex_single_valid_syntax_kind(text: &str) -> Option<SyntaxKind>;
// This will be removed in the next PR in favour of simplifying `SyntaxError` to `(String, TextRange)`
pub enum TokenizeError { /* Simple enum which reflects rustc_lexer tokenization errors */ }
// This is private, but may be made public if there is demand for it in the future (principle of least privilege)
fn lex_first_token(text: &str) -> Option<(Token, Option<SyntaxError>)>;
```
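As a rough usage sketch of the simplified API (not from the PR; it assumes the functions above are in scope and that `SyntaxKind` implements `Debug`):

```rust
// Minimal sketch: errors no longer abort lexing, they are returned
// alongside the tokens.
fn lex_and_report(text: &str) {
    let (tokens, errors) = tokenize(text);
    println!("lexed {} tokens, {} lexer errors", tokens.len(), errors.len());

    // Classifying a small, known-good snippet: `lex_single_valid_syntax_kind`
    // answers only when the text is exactly one token with no errors.
    if let Some(kind) = lex_single_valid_syntax_kind("struct") {
        println!("'struct' lexes as {:?}", kind);
    }
}
```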
Co-authored-by: Veetaha <gerzoh1@gmail.com>
3003: Remove rollup-typescript r=matklad a=matklad
It seems like just calling TypeScript directly is simpler and more reliable?
@Veetaha what do you think about this approach?
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>