10221: internal: prevent possible bugs when adding magical comments r=matklad a=matklad

bors r+
🤖

Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
bors[bot] 2021-09-13 11:23:43 +00:00 committed by GitHub
commit cada16f5e0
13 changed files with 369 additions and 169 deletions


@@ -6,21 +6,21 @@ use syntax::{
use crate::{AssistContext, AssistId, AssistKind, Assists};
/// Assist: line_to_block
///
/// Converts comments between block and single-line form
///
/// ```
/// // Multi-line
/// // comment
/// ```
/// ->
/// ```
/// /**
/// Multi-line
/// comment
/// */
/// ```
// Assist: line_to_block
//
// Converts comments between block and single-line form.
//
// ```
// // Multi-line$0
// // comment
// ```
// ->
// ```
// /*
// Multi-line
// comment
// */
// ```
pub(crate) fn convert_comment_block(acc: &mut Assists, ctx: &AssistContext) -> Option<()> {
let comment = ctx.find_token_at_offset::<ast::Comment>()?;
// Only allow comments which are alone on their line
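The forward direction of this assist (as in the doc example above) amounts to the following string rewrite. This is an illustrative sketch only; the real assist operates on `ast::Comment` syntax tokens and also handles indentation and the reverse (block-to-line) direction:

```rust
// Sketch: turn a run of `//` line comments into one block comment.
// Simplification of the assist above; the real code operates on syntax tokens.
fn line_comments_to_block(lines: &str) -> String {
    let body: Vec<String> = lines
        .lines()
        .map(|l| {
            // Drop the `//` marker and at most one following space.
            let l = l.trim_start().trim_start_matches("//");
            l.strip_prefix(' ').unwrap_or(l).to_string()
        })
        .collect();
    format!("/*\n{}\n*/", body.join("\n"))
}
```

For example, `line_comments_to_block("// Multi-line\n// comment")` produces the `/* ... */` form shown in the doc comment above.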


@@ -1,10 +1,63 @@
//! `assists` crate provides a bunch of code assists, also known as code
//! actions (in LSP) or intentions (in IntelliJ).
//! `assists` crate provides a bunch of code assists, also known as code actions
//! (in LSP) or intentions (in IntelliJ).
//!
//! An assist is a micro-refactoring, which is automatically activated in
//! certain contexts. For example, if the cursor is over `,`, a "swap `,`" assist
//! becomes available.
//!
//! ## Assists Guidelines
//!
//! Assists are the main mechanism to deliver advanced IDE features to the user,
//! so we should pay extra attention to the UX.
//!
//! The power of assists comes from their context-awareness. The main problem
//! with IDE features is that there are a lot of them, and it's hard to teach
//! the user what's available. Assists solve this problem nicely: 💡 signifies
//! that *something* is possible, and clicking on it reveals a *short* list of
//! actions. Contrast this with Emacs `M-x`, which just spits out an endless
//! list of all the features.
//!
//! Here are some considerations when creating a new assist:
//!
//! * It's good to preserve semantics, and it's good to keep the code compiling,
//! but it isn't necessary. Example: "flip binary operation" might change
//! semantics.
//! * An assist shouldn't necessarily make the code "better". A lot of assists
//!   come in pairs: "if let <-> match".
//! * Assists should have as narrow a scope as possible. Each new assist greatly
//! improves UX for cases where the user actually invokes it, but it makes UX
//! worse for every case where the user clicks 💡 to invoke some *other*
//! assist. So, a rarely useful assist which is always applicable can be a net
//! negative.
//! * Rarely useful actions are tricky. Sometimes there are features which are
//! clearly useful to some users, but are just noise most of the time. We
//! don't have a good solution here; our current approach is to make this
//! functionality available only if the assist is applicable to the whole
//! selection. Example: `sort_items` sorts items alphabetically. Naively, it
//! should be available more or less everywhere, which isn't useful. So
//! instead we only show it if the user *selects* the items they want to sort.
//! * Consider grouping related assists together (see [`Assists::add_group`]).
//! * Make assists robust. If the assist depends on results of type inference
//!   too much, it might only fire in fully-correct code. This makes the assist
//!   less useful and (worse) less predictable. The user should have a clear
//! intuition when each particular assist is available.
//! * Make small assists, which compose. Example: rather than auto-importing
//! enums in `fill_match_arms`, we use fully-qualified names. There's a
//! separate assist to shorten a fully-qualified name.
//! * Distinguish between assists and fixits for diagnostics. Internally, fixits
//! and assists are equivalent. They have the same "show a list + invoke a
//! single element" workflow, and both use the [`Assist`] data structure. The main
//! difference is in the UX: while 💡 looks only at the cursor position,
//! diagnostic squigglies and fixits are calculated for the whole file and
//! are presented to the user eagerly. So, diagnostics should be fixable
//! errors, while assists can be just suggestions for an alternative way to do
//! something. If something *could* be a diagnostic, it should be a
//! diagnostic. Conversely, it might be valuable to turn a diagnostic with a
//! lot of false positives into an assist.
//!
//! See also this post:
//! <https://rust-analyzer.github.io/blog/2020/09/28/how-to-make-a-light-bulb.html>
#[allow(unused)]
macro_rules! eprintln {
($($tt:tt)*) => { stdx::eprintln!($($tt)*) };
@@ -28,6 +81,9 @@ pub use ide_db::assists::{
};
/// Returns all the assists applicable at the given position.
///
// NOTE: We don't have a `Feature: ` section for assists; they are special-cased
// in the manual.
pub fn assists(
db: &RootDatabase,
config: &AssistConfig,


@@ -1064,6 +1064,23 @@ fn main() {
)
}
#[test]
fn doctest_line_to_block() {
check_doc_test(
"line_to_block",
r#####"
// Multi-line$0
// comment
"#####,
r#####"
/*
Multi-line
comment
*/
"#####,
)
}
#[test]
fn doctest_make_raw_string() {
check_doc_test(


@@ -1,95 +1,4 @@
//! Feature: completion with imports-on-the-fly
//!
//! When completing names in the current scope, proposes additional imports from other modules or crates,
//! if they can be qualified in the scope, and their name contains all symbols from the completion input.
//!
//! To be considered applicable, the name must contain all input symbols in the given order, not necessarily adjacent.
//! If any input symbol is not lowercased, the name must contain all symbols in exact case; otherwise the containing is checked case-insensitively.
//!
//! ```
//! fn main() {
//! pda$0
//! }
//! # pub mod std { pub mod marker { pub struct PhantomData { } } }
//! ```
//! ->
//! ```
//! use std::marker::PhantomData;
//!
//! fn main() {
//! PhantomData
//! }
//! # pub mod std { pub mod marker { pub struct PhantomData { } } }
//! ```
//!
//! Also completes associated items, that require trait imports.
//! If any unresolved and/or partially-qualified path precedes the input, it will be taken into account.
//! Currently, only the imports with their import path ending with the whole qualifier will be proposed
//! (no fuzzy matching for qualifier).
//!
//! ```
//! mod foo {
//! pub mod bar {
//! pub struct Item;
//!
//! impl Item {
//! pub const TEST_ASSOC: usize = 3;
//! }
//! }
//! }
//!
//! fn main() {
//! bar::Item::TEST_A$0
//! }
//! ```
//! ->
//! ```
//! use foo::bar;
//!
//! mod foo {
//! pub mod bar {
//! pub struct Item;
//!
//! impl Item {
//! pub const TEST_ASSOC: usize = 3;
//! }
//! }
//! }
//!
//! fn main() {
//! bar::Item::TEST_ASSOC
//! }
//! ```
//!
//! NOTE: currently, if an assoc item comes from a trait that's not currently imported, and it also has an unresolved and/or partially-qualified path,
//! no imports will be proposed.
//!
//! .Fuzzy search details
//!
//! To avoid an excessive amount of the results returned, completion input is checked for inclusion in the names only
//! (i.e. in `HashMap` in the `std::collections::HashMap` path).
//! For the same reasons, avoids searching for any path imports for inputs with their length less than 2 symbols
//! (but shows all associated items for any input length).
//!
//! .Import configuration
//!
//! It is possible to configure how use-trees are merged with the `importMergeBehavior` setting.
//! Mimics the corresponding behavior of the `Auto Import` feature.
//!
//! .LSP and performance implications
//!
//! The feature is enabled only if the LSP client supports LSP protocol version 3.16+ and reports the `additionalTextEdits`
//! (case-sensitive) resolve client capability in its client capabilities.
//! This way the server is able to defer the costly computations, doing them for a selected completion item only.
//! For clients with no such support, all edits have to be calculated on the completion request, including the fuzzy search completion ones,
//! which might be slow ergo the feature is automatically disabled.
//!
//! .Feature toggle
//!
//! The feature can be forcefully turned off in the settings with the `rust-analyzer.completion.autoimport.enable` flag.
//! Note that having this flag set to `true` does not guarantee that the feature is enabled: your client needs to have the corresponding
//! capability enabled.
//! See [`import_on_the_fly`].
use ide_db::helpers::{
import_assets::{ImportAssets, ImportCandidate},
insert_use::ImportScope,
@@ -105,6 +14,97 @@ use crate::{
use super::Completions;
// Feature: Completion With Autoimport
//
// When completing names in the current scope, proposes additional imports from other modules or crates,
// if they can be qualified in the scope, and their name contains all symbols from the completion input.
//
// To be considered applicable, the name must contain all input symbols in the given order, though not necessarily adjacent.
// If any input symbol is not lowercased, the name must contain all symbols in exact case; otherwise the match is checked case-insensitively.
//
// ```
// fn main() {
// pda$0
// }
// # pub mod std { pub mod marker { pub struct PhantomData { } } }
// ```
// ->
// ```
// use std::marker::PhantomData;
//
// fn main() {
// PhantomData
// }
// # pub mod std { pub mod marker { pub struct PhantomData { } } }
// ```
//
// Also completes associated items that require trait imports.
// If any unresolved and/or partially-qualified path precedes the input, it will be taken into account.
// Currently, only imports whose import path ends with the whole qualifier will be proposed
// (no fuzzy matching for the qualifier).
//
// ```
// mod foo {
// pub mod bar {
// pub struct Item;
//
// impl Item {
// pub const TEST_ASSOC: usize = 3;
// }
// }
// }
//
// fn main() {
// bar::Item::TEST_A$0
// }
// ```
// ->
// ```
// use foo::bar;
//
// mod foo {
// pub mod bar {
// pub struct Item;
//
// impl Item {
// pub const TEST_ASSOC: usize = 3;
// }
// }
// }
//
// fn main() {
// bar::Item::TEST_ASSOC
// }
// ```
//
// NOTE: if an assoc item comes from a trait that's not currently imported, and it also has an unresolved and/or partially-qualified path,
// no imports will be proposed.
//
// .Fuzzy search details
//
// To avoid returning an excessive number of results, the completion input is only checked for inclusion in the name itself
// (i.e. in `HashMap` in the `std::collections::HashMap` path).
// For the same reason, no path imports are searched for at all for inputs shorter than 2 symbols
// (although all associated items are shown for any input length).
//
// .Import configuration
//
// It is possible to configure how use-trees are merged with the `importMergeBehavior` setting.
// This mimics the corresponding behavior of the `Auto Import` feature.
//
// .LSP and performance implications
//
// The feature is enabled only if the LSP client supports LSP protocol version 3.16+ and reports the `additionalTextEdits`
// (case-sensitive) resolve client capability in its client capabilities.
// This way the server is able to defer the costly computations, doing them for a selected completion item only.
// For clients with no such support, all edits have to be calculated on the completion request, including the fuzzy search ones,
// which might be slow, so for them the feature is automatically disabled.
//
// .Feature toggle
//
// The feature can be forcefully turned off in the settings with the `rust-analyzer.completion.autoimport.enable` flag.
// Note that having this flag set to `true` does not guarantee that the feature is enabled: your client needs to have the corresponding
// capability enabled.
pub(crate) fn import_on_the_fly(acc: &mut Completions, ctx: &CompletionContext) -> Option<()> {
if !ctx.config.enable_imports_on_the_fly {
return None;
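The applicability rule quoted above (all input symbols present in the name, in order, not necessarily adjacent; exact case only when the input contains an uppercase symbol) is essentially a subsequence check. `fuzzy_matches` below is a hypothetical helper for illustration, not rust-analyzer's actual matcher, and it is ASCII-only for brevity:

```rust
// Sketch of the fuzzy-matching rule: every input char must occur in the
// candidate name in order; lowercase-only input matches case-insensitively.
fn fuzzy_matches(input: &str, name: &str) -> bool {
    let case_sensitive = input.chars().any(char::is_uppercase);
    let norm = |c: char| if case_sensitive { c } else { c.to_ascii_lowercase() };
    let mut name_chars = name.chars().map(|c| norm(c));
    // `any` consumes `name_chars` up to each match, enforcing the ordering.
    input.chars().map(|c| norm(c)).all(|c| name_chars.any(|n| n == c))
}
```

Under this rule `pda` matches `PhantomData` and `TEST_A` matches `TEST_ASSOC`, while `Pda` does not match `phantom_data`, because the uppercase `P` forces an exact-case comparison.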


@@ -1,4 +1,4 @@
//! See `complete_fn_param`.
//! See [`complete_fn_param`].
use rustc_hash::FxHashMap;
use syntax::{


@@ -194,10 +194,12 @@ fn opt_visibility(p: &mut Parser) -> bool {
// crate fn main() { }
// struct S { crate field: u32 }
// struct T(crate u32);
//
// test crate_keyword_path
// fn foo() { crate::foo(); }
T![crate] if !p.nth_at(1, T![::]) => {
T![crate] => {
if p.nth_at(1, T![::]) {
// test crate_keyword_path
// fn foo() { crate::foo(); }
return false;
}
let m = p.start();
p.bump(T![crate]);
m.complete(p, VISIBILITY);
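The rewrite above moves the `::` check from a match-arm guard into the arm body, so the `crate_keyword_path` test comment sits on the branch it documents. The disambiguation itself is one token of lookahead: `crate` begins a visibility only when it is not followed by `::`. A minimal sketch with a hypothetical `Tok` type (not the real parser's token representation):

```rust
// Sketch of the one-token lookahead: `crate` alone starts a visibility,
// while `crate::` starts a path like `crate::foo()`.
#[derive(Debug, PartialEq)]
enum Tok {
    Crate,
    ColonColon,
    Ident,
}

fn is_visibility_start(tokens: &[Tok]) -> bool {
    tokens.first() == Some(&Tok::Crate) && tokens.get(1) != Some(&Tok::ColonColon)
}
```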


@@ -374,7 +374,6 @@ fn lhs(p: &mut Parser, r: Restrictions) -> Option<(CompletedMarker, BlockLike)>
// let mut p = F{x: 5};
// {p}.x = 10;
// }
//
let (lhs, blocklike) = atom::atom_expr(p, r)?;
return Some(postfix_expr(p, lhs, blocklike, !(r.prefer_stmt && blocklike.is_block())));
}


@@ -17,7 +17,7 @@ pub(crate) fn pattern(p: &mut Parser) {
pattern_r(p, PAT_RECOVERY_SET);
}
/// Parses a pattern list separated by pipes `|`
/// Parses a pattern list separated by pipes `|`.
pub(super) fn pattern_top(p: &mut Parser) {
pattern_top_r(p, PAT_RECOVERY_SET)
}
@@ -27,14 +27,15 @@ pub(crate) fn pattern_single(p: &mut Parser) {
}
/// Parses a pattern list separated by pipes `|`
/// using the given `recovery_set`
/// using the given `recovery_set`.
pub(super) fn pattern_top_r(p: &mut Parser, recovery_set: TokenSet) {
p.eat(T![|]);
pattern_r(p, recovery_set);
}
/// Parses a pattern list separated by pipes `|`, with no leading `|`, using the
/// given `recovery_set`
/// given `recovery_set`.
// test or_pattern
// fn main() {
// match () {


@@ -43,10 +43,12 @@ pub fn list_files(dir: &Path) -> Vec<PathBuf> {
res
}
#[derive(Clone)]
pub struct CommentBlock {
pub id: String,
pub line: usize,
pub contents: Vec<String>,
is_doc: bool,
}
impl CommentBlock {
@@ -54,61 +56,62 @@ impl CommentBlock {
assert!(tag.starts_with(char::is_uppercase));
let tag = format!("{}:", tag);
let mut res = Vec::new();
for (line, mut block) in do_extract_comment_blocks(text, true) {
let first = block.remove(0);
if let Some(id) = first.strip_prefix(&tag) {
let id = id.trim().to_string();
let block = CommentBlock { id, line, contents: block };
res.push(block);
}
}
res
// Would be nice if we had `.retain_mut` here!
CommentBlock::extract_untagged(text)
.into_iter()
.filter_map(|mut block| {
let first = block.contents.remove(0);
first.strip_prefix(&tag).map(|id| {
if block.is_doc {
panic!(
"Use plain (non-doc) comments with tags like {}:\n {}",
tag, first
)
}
block.id = id.trim().to_string();
block
})
})
.collect()
}
pub fn extract_untagged(text: &str) -> Vec<CommentBlock> {
let mut res = Vec::new();
for (line, block) in do_extract_comment_blocks(text, false) {
let id = String::new();
let block = CommentBlock { id, line, contents: block };
res.push(block);
let lines = text.lines().map(str::trim_start);
let dummy_block =
CommentBlock { id: String::new(), line: 0, contents: Vec::new(), is_doc: false };
let mut block = dummy_block.clone();
for (line_num, line) in lines.enumerate() {
match line.strip_prefix("//") {
Some(mut contents) => {
if let Some('/' | '!') = contents.chars().next() {
contents = &contents[1..];
block.is_doc = true;
}
if let Some(' ') = contents.chars().next() {
contents = &contents[1..];
}
block.contents.push(contents.to_string());
}
None => {
if !block.contents.is_empty() {
let block = mem::replace(&mut block, dummy_block.clone());
res.push(block);
}
block.line = line_num + 2;
}
}
}
if !block.contents.is_empty() {
res.push(block)
}
res
}
}
fn do_extract_comment_blocks(
text: &str,
allow_blocks_with_empty_lines: bool,
) -> Vec<(usize, Vec<String>)> {
let mut res = Vec::new();
let prefix = "// ";
let lines = text.lines().map(str::trim_start);
let mut block = (0, vec![]);
for (line_num, line) in lines.enumerate() {
if line == "//" && allow_blocks_with_empty_lines {
block.1.push(String::new());
continue;
}
let is_comment = line.starts_with(prefix);
if is_comment {
block.1.push(line[prefix.len()..].to_string());
} else {
if !block.1.is_empty() {
res.push(mem::take(&mut block));
}
block.0 = line_num + 2;
}
}
if !block.1.is_empty() {
res.push(block)
}
res
}
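The new `is_doc` flag above exists so that tagged "magical" comments written as doc comments fail loudly instead of being silently treated as documentation. A simplified, single-line sketch of that check (`extract_tag` is a hypothetical helper for illustration, not the code in this diff):

```rust
// Sketch: a plain `// Tag: id` line yields the id; the same tag written as a
// doc comment (`///` or `//!`) is rejected, mirroring the panic added above.
fn extract_tag(line: &str, tag: &str) -> Result<Option<String>, String> {
    let trimmed = line.trim_start();
    let (contents, is_doc) = if let Some(rest) =
        trimmed.strip_prefix("///").or_else(|| trimmed.strip_prefix("//!"))
    {
        (rest, true)
    } else if let Some(rest) = trimmed.strip_prefix("//") {
        (rest, false)
    } else {
        return Ok(None); // not a comment line at all
    };
    let prefix = format!("{}:", tag);
    match contents.trim_start().strip_prefix(&prefix) {
        Some(_) if is_doc => Err(format!("use plain `//` comments for `{}` blocks", tag)),
        Some(id) => Ok(Some(id.trim().to_string())),
        None => Ok(None),
    }
}
```

Here `// Assist: line_to_block` yields the tag id, while the same line written as `/// Assist: line_to_block` is rejected.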
#[derive(Debug)]
pub struct Location {
pub file: PathBuf,


@@ -1,4 +1,4 @@
SOURCE_FILE@0..154
SOURCE_FILE@0..247
USE@0..17
USE_KW@0..3 "use"
WHITESPACE@3..4 " "
@@ -35,4 +35,74 @@ SOURCE_FILE@0..154
SEMICOLON@123..124 ";"
WHITESPACE@124..125 " "
COMMENT@125..153 "// Rust 2018 - Unifor ..."
WHITESPACE@153..154 "\n"
WHITESPACE@153..155 "\n\n"
USE@155..178
USE_KW@155..158 "use"
WHITESPACE@158..159 " "
USE_TREE@159..177
PATH@159..177
PATH@159..171
PATH@159..163
PATH_SEGMENT@159..163
NAME_REF@159..163
SELF_KW@159..163 "self"
COLON2@163..165 "::"
PATH_SEGMENT@165..171
NAME_REF@165..171
IDENT@165..171 "module"
COLON2@171..173 "::"
PATH_SEGMENT@173..177
NAME_REF@173..177
IDENT@173..177 "Item"
SEMICOLON@177..178 ";"
WHITESPACE@178..179 "\n"
USE@179..195
USE_KW@179..182 "use"
WHITESPACE@182..183 " "
USE_TREE@183..194
PATH@183..194
PATH@183..188
PATH_SEGMENT@183..188
NAME_REF@183..188
CRATE_KW@183..188 "crate"
COLON2@188..190 "::"
PATH_SEGMENT@190..194
NAME_REF@190..194
IDENT@190..194 "Item"
SEMICOLON@194..195 ";"
WHITESPACE@195..196 "\n"
USE@196..219
USE_KW@196..199 "use"
WHITESPACE@199..200 " "
USE_TREE@200..218
PATH@200..218
PATH@200..210
PATH@200..204
PATH_SEGMENT@200..204
NAME_REF@200..204
SELF_KW@200..204 "self"
COLON2@204..206 "::"
PATH_SEGMENT@206..210
NAME_REF@206..210
IDENT@206..210 "some"
COLON2@210..212 "::"
PATH_SEGMENT@212..218
NAME_REF@212..218
IDENT@212..218 "Struct"
SEMICOLON@218..219 ";"
WHITESPACE@219..220 "\n"
USE@220..246
USE_KW@220..223 "use"
WHITESPACE@223..224 " "
USE_TREE@224..245
PATH@224..245
PATH@224..234
PATH_SEGMENT@224..234
NAME_REF@224..234
IDENT@224..234 "crate_name"
COLON2@234..236 "::"
PATH_SEGMENT@236..245
NAME_REF@236..245
IDENT@236..245 "some_item"
SEMICOLON@245..246 ";"
WHITESPACE@246..247 "\n"


@@ -1,3 +1,8 @@
use ::crate_name; // Rust 2018 - All flavours
use crate_name; // Rust 2018 - Anchored paths
use item_in_scope_or_crate_name; // Rust 2018 - Uniform Paths
use self::module::Item;
use crate::Item;
use self::some::Struct;
use crate_name::some_item;


@@ -1,4 +1,4 @@
SOURCE_FILE@0..60
SOURCE_FILE@0..115
STRUCT@0..59
STRUCT_KW@0..6 "struct"
WHITESPACE@6..7 " "
@@ -39,4 +39,47 @@ SOURCE_FILE@0..60
WHITESPACE@56..57 "\n"
R_PAREN@57..58 ")"
SEMICOLON@58..59 ";"
WHITESPACE@59..60 "\n"
WHITESPACE@59..61 "\n\n"
ENUM@61..114
ENUM_KW@61..65 "enum"
WHITESPACE@65..66 " "
NAME@66..67
IDENT@66..67 "S"
WHITESPACE@67..68 " "
VARIANT_LIST@68..114
L_CURLY@68..69 "{"
WHITESPACE@69..74 "\n "
VARIANT@74..111
NAME@74..77
IDENT@74..77 "Uri"
TUPLE_FIELD_LIST@77..111
L_PAREN@77..78 "("
TUPLE_FIELD@78..110
ATTR@78..106
POUND@78..79 "#"
L_BRACK@79..80 "["
META@80..105
PATH@80..85
PATH_SEGMENT@80..85
NAME_REF@80..85
IDENT@80..85 "serde"
TOKEN_TREE@85..105
L_PAREN@85..86 "("
IDENT@86..90 "with"
WHITESPACE@90..91 " "
EQ@91..92 "="
WHITESPACE@92..93 " "
STRING@93..104 "\"url_serde\""
R_PAREN@104..105 ")"
R_BRACK@105..106 "]"
WHITESPACE@106..107 " "
PATH_TYPE@107..110
PATH@107..110
PATH_SEGMENT@107..110
NAME_REF@107..110
IDENT@107..110 "Uri"
R_PAREN@110..111 ")"
COMMA@111..112 ","
WHITESPACE@112..113 "\n"
R_CURLY@113..114 "}"
WHITESPACE@114..115 "\n"


@@ -2,3 +2,7 @@ struct S (
#[serde(with = "url_serde")]
pub Uri,
);
enum S {
Uri(#[serde(with = "url_serde")] Uri),
}