1208: [WIP] Goto for Macros r=matklad a=Lapz
Adds goto definition for macros. Currently it only works for macros in the current crate ~~otherwise it panics~~. Proper macro resolution needs to be added before it can resolve macros in other crates.
Todo
- [X] Allow goto from macro calls
- [X] Fix panics
- [x] Add tests
![Screen Recording 2019-04-25 at 18 00 24](https://user-images.githubusercontent.com/19998186/56754499-1dd01c00-6785-11e9-9e9a-1e36de70cfa3.gif)
Co-authored-by: Lenard Pratt <l3np27@gmail.com>
1213: Make lexer produce only single character puncts r=matklad a=edwin0cheng
As discussed on Zulip, this PR changes the `lexer` to produce only single-character puncts.
* Remove producing `DOTDOTDOT, DOTDOTEQ, DOTDOT, COLONCOLON, EQEQ, FAT_ARROW, NEQ, THIN_ARROW` in lexer.
* Add the required code in the parser to make sure everything still works.
* Change some tests (mainly because the `ast::token_tree` output is different).
Note: I think the use of `COLON` in Rust is too overloaded :)
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
E.g. in
```
let foo = 1u32;
if true {
    <|>foo;
}
```
the hover shows `()`, the type of the whole if expression, instead of the more
sensible `u32`. The reason for this was that the search for an expression was
slightly left-biased: when on the edge between two tokens, it first looked at
all ancestors of the left token and only then at those of the right token.
Instead, merge the two ancestor chains in ascending order, so that we get the
smaller of the two possible expressions.
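A minimal sketch of the merge idea, using plain (start, end) ranges as stand-ins for real syntax nodes (names and offsets here are illustrative, not the actual rust-analyzer code):

```rust
// Pick ancestors in ascending order of length, so the smaller of the two
// candidate expressions at a token boundary wins.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct NodeRange {
    start: u32,
    end: u32,
}

impl NodeRange {
    fn len(self) -> u32 {
        self.end - self.start
    }
}

fn merge_ancestors(left: Vec<NodeRange>, right: Vec<NodeRange>) -> Vec<NodeRange> {
    // Both chains are ordered innermost-to-outermost, i.e. by ascending
    // length, so a simple sorted merge is enough.
    let mut left = left.into_iter().peekable();
    let mut right = right.into_iter().peekable();
    let mut merged = Vec::new();
    loop {
        let take_left = match (left.peek(), right.peek()) {
            (Some(l), Some(r)) => l.len() <= r.len(),
            (Some(_), None) => true,
            (None, Some(_)) => false,
            (None, None) => break,
        };
        let next = if take_left { left.next() } else { right.next() };
        merged.push(next.unwrap());
    }
    merged
}

fn main() {
    // Cursor sits between `{` and `foo`: the left chain starts at the whole
    // if-expression, the right chain starts at the small `foo` expression
    // (offsets are made up for the example).
    let left = vec![NodeRange { start: 16, end: 40 }];
    let right = vec![
        NodeRange { start: 27, end: 30 },
        NodeRange { start: 16, end: 40 },
    ];
    let merged = merge_ancestors(left, right);
    // The smaller `foo` expression comes first, so hover now reports `u32`.
    assert_eq!(merged[0], NodeRange { start: 27, end: 30 });
}
```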
1154: Initial support for lang items (and str completion) r=flodiebold a=marcogroppo
This PR adds partial support for lang items.
For now, the only supported lang items are the ones that target an impl block.
Lang items are now resolved during type inference - this means that `str` completion now works.
Fixes #1139.
(thanks Florian Diebold for the help!)
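As a small, user-visible illustration (the exact set of methods shown is my own example, not taken from the PR), inherent `str` methods resolve through the `impl str` block that the `str` lang item points at, so expressions like these now get inferred types and completions:

```rust
fn main() {
    let greeting = "hello";
    // `len` and `as_bytes` live in the `impl str` block in libcore, which
    // is only discoverable through the `str` lang item.
    let n: usize = greeting.len();
    let first: Option<&u8> = greeting.as_bytes().first();
    println!("{} {:?}", n, first);
}
```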
Co-authored-by: Marco Groppo <marco.groppo@gmail.com>
1138: Add L_DOLLAR and R_DOLLAR r=matklad a=edwin0cheng
As discussed in issue https://github.com/rust-analyzer/rust-analyzer/issues/1132 and PR #1125, this PR adds two `SyntaxKind`s, `L_DOLLAR` and `R_DOLLAR`, for representing `Delimiter::None` in mbe and proc_macro. By design, they should not affect the final syntax tree and will be discarded in `TreeSink`.
My original idea was to handle these two tokens case by case, but I found that they can appear everywhere in the parser (imagine a `tt` matcher). So this PR only handles them in `Parser::do_bump` and `Parser::start`, although it does not fix the `expr` matcher execution-order problem from the original idea.
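A rough, self-contained sketch of the intent, using toy stand-ins rather than the real `SyntaxKind`/`TreeSink` types: the two kinds only mark the boundaries of a `Delimiter::None` group and are never emitted into the tree.

```rust
// Toy token kinds; the real L_DOLLAR / R_DOLLAR are SyntaxKind variants.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    LDollar, // opens a Delimiter::None group coming from macro expansion
    RDollar, // closes it
    Ident,
}

// The sink drops the dollar markers, so they never show up in the tree.
fn keep_in_tree(kind: TokenKind) -> bool {
    !matches!(kind, TokenKind::LDollar | TokenKind::RDollar)
}

fn main() {
    let tokens = [TokenKind::LDollar, TokenKind::Ident, TokenKind::RDollar];
    let kept: Vec<_> = tokens.iter().copied().filter(|&k| keep_in_tree(k)).collect();
    assert_eq!(kept, vec![TokenKind::Ident]);
}
```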
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
1076: Const body inference r=flodiebold a=Lapz
This is the second part of #887. I've added type inference on const bodies and introduced `DefWithBody`, which contains `Function`, `Const` and `Static`. I want to add tests, but I'm unsure how I would go about testing that completions work.
Co-authored-by: Lenard Pratt <l3np27@gmail.com>
These are now used when parsing type bounds. In addition, parsing a path inside a
bound no longer recursively parses nested paths; instead, each path is treated as
a separate bound, separated by `+`.
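For illustration (the bound list below is just an example of mine), each path after a `+` is now a separate bound rather than a continuation of the first path:

```rust
use std::fmt::Debug;

// `Debug`, `Send` and `'static` are parsed as three separate bounds
// separated by `+`, not as one path that greedily swallows the `+` parts.
fn describe<T: Debug + Send + 'static>(value: T) -> String {
    format!("{:?}", value)
}

fn main() {
    println!("{}", describe(42));
}
```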
1034: HIR diagnostics API r=matklad a=matklad
This PR introduces diagnostics API for HIR, so we can now start issuing errors and warnings! Here are requirements that this solution aims to fulfill:
* structured diagnostics: rather than immediately rendering an error to a string, we provide a well-typed blob of data with the error description. This data is used by the IDE to provide fixes
* open set diagnostics: there's no single enum with all possible diagnostics, which hopefully should result in better modularity
The `Diagnostic` trait describes "a diagnostic", which can be downcast to a specific diagnostic kind. Diagnostics are expressed in terms of macro-expanded syntax tree: they store pointers to syntax nodes. Diagnostics are self-contained: you don't need any context, besides `db`, to fully understand the meaning of a diagnostic.
Because diagnostics are tied to the source, we can't store them in salsa. So subsystems like type checking produce subsystem-local diagnostics (a closed `enum`), expressed in terms of the subsystem IR. A separate step then converts these proto-diagnostics into `Diagnostic`s by merging them with source maps.
Note that this PR stresses the type system quite a bit: we now type-check every function in open files to compute errors!
Discussion on Zulip: https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Fwg-rls-2.2E0/topic/Diagnostics.20API
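A minimal sketch of that open-set shape, with illustrative names and a plain string where the real diagnostics store pointers into the macro-expanded syntax tree (this is not the actual HIR API):

```rust
use std::any::Any;

// Each concrete diagnostic is a plain struct; consumers downcast to the
// kinds they know how to handle, so the set of diagnostics stays open.
trait Diagnostic: Any {
    fn message(&self) -> String;
    fn as_any(&self) -> &dyn Any;
}

struct UnresolvedModule {
    // The real diagnostic stores a pointer into the syntax tree;
    // a plain string stands in for it here.
    candidate_path: String,
}

impl Diagnostic for UnresolvedModule {
    fn message(&self) -> String {
        "unresolved module".to_string()
    }
    fn as_any(&self) -> &dyn Any {
        self
    }
}

// An IDE-side consumer downcasts to the diagnostics it can offer fixes for.
fn offer_fix(diagnostic: &dyn Diagnostic) {
    if let Some(d) = diagnostic.as_any().downcast_ref::<UnresolvedModule>() {
        println!("quick fix: create file `{}`", d.candidate_path);
    } else {
        println!("no fix for: {}", diagnostic.message());
    }
}

fn main() {
    let d = UnresolvedModule { candidate_path: "foo.rs".to_string() };
    offer_fix(&d);
}
```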
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
876: Fix join_lines not adding a comma after join_single_expr_block with match arm r=matklad a=vipentti
Fixes #868
Co-authored-by: Ville Penttinen <villem.penttinen@gmail.com>
Benchmarks show no difference. This is probably because we are
bottlenecked on memory allocations, and we should fix that, but we are
not optimizing for performance just yet.
We allow invalid inner attributes to be parsed, e.g. inner attributes that are
not directly after the opening brace of the match block.
Instead we run validation on `MatchArmList` to allow better reporting of errors.
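For example, source like the following (intentionally invalid Rust, since the inner attribute does not come directly after the opening brace) still produces a syntax tree, and the misplacement is reported during `MatchArmList` validation:

```rust
fn main() {
    match 1 {
        _ => (),
        // Misplaced: an inner attribute must come directly after `{`.
        // Parsing still succeeds; validation reports the error here.
        #![allow(unused)]
    }
}
```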
692: [WIP] Correctly parse attributes r=matklad a=DJMcNab
Reference - https://doc.rust-lang.org/reference/attributes.html
This fixes/investigates inner attributes for:
- [x] `impl` blocks
- [x] `extern` blocks
- [x] `fn`s (fixes #689)
- [x] `mod`s (already supported)
- [x] 'block expressions' (the long text just describes all 'blocks' used as statements)
This also investigates/fixes outer attributes for:
- [ ] 'most statements' (see also: #685, https://doc.rust-lang.org/reference/expressions.html#expression-attributes)
- [x] Enum variants, Struct and Union fields (Fixed in #507)
- [ ] 'Match expression arms' (@matklad can you provide a test case which explains what this means?)
- [ ] 'Generic lifetime or type parameters'
- [ ] 'Elements of array expressions, tuple expressions, call expressions, tuple-style struct and enum variant expressions'
- [ ] 'The tail expression of block expressions'
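A few of these positions written out as plain Rust (illustrative snippet of mine, not taken from the test suite):

```rust
mod config {
    #![allow(dead_code)] // inner attribute in a `mod`

    pub struct Settings {
        #[allow(dead_code)] // outer attribute on a struct field
        pub verbose: bool,
    }
}

struct Wrapper;

impl Wrapper {
    #![allow(dead_code)] // inner attribute in an `impl` block

    fn new() -> Wrapper {
        #![allow(unused)] // inner attribute in a function body
        Wrapper
    }
}

fn main() {
    let _ = Wrapper::new();
}
```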
Co-authored-by: DJMcNab <36049421+djmcnab@users.noreply.github.com>
630: Fill in DocumentSymbol::detail r=matklad a=hban
Closes: #516
I just pulled the type text from the syntax node and "formatted" it a bit. VS Code can't really handle multi-line symbol detail (it will crop it when rendering), so the formatting just collapses all whitespace to a single space. It isn't pretty, but maybe there's a better way.
The issue also mentions that this "need[s] to be done for `NavigationTarget` to `SymbolInformation`", but `SymbolInformation` doesn't have a detail field on it?
Co-authored-by: Hrvoje Ban <hban@users.noreply.github.com>
623: WIP: module id is not def id r=matklad a=matklad
This achieves two things:
* makes module_tree & item_map per crate, not per source_root
* begins the refactoring to remove the universal `DefId` in favor of having separate ids for each kind of `Def`. Currently, only modules get a different ID though.
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
440: Implement type inference for boolean operators r=flodiebold a=marcusklaas
Tried implementing the easiest part of https://github.com/rust-analyzer/rust-analyzer/issues/390. Hope this is somewhat close to what the intent of the issue was. Found it surprisingly easy to find my way around the repository - it's well organized!
Very grateful for any pointers.
Co-authored-by: Marcus Klaas de Vries <mail@marcusklaas.nl>
This was a bit complicated. I've added a wrapper type for now that does the
LocalSyntaxPtr <-> ExprId translation; we might want to get rid of that or give
it a nicer interface.
Since we need to be able to go from def to containing impl block, as well as the
other direction, and to find all impls for a certain type, a design similar to
the one for modules, where we collect all impls for the whole crate and keep
them in an arena, seemed fitting. The ImplBlock type, which provides the public
interface, then consists only of an Arc to the arena containing all impls, and
the index into it.
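A minimal sketch of that shape, with illustrative names and a `Vec`-backed arena standing in for the real rust-analyzer types:

```rust
use std::sync::Arc;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct ImplId(u32);

#[derive(Debug)]
struct ImplData {
    target_type: String, // stand-in for the real type representation
}

// All impl blocks of a crate collected into one arena.
#[derive(Debug, Default)]
struct CrateImplBlocks {
    arena: Vec<ImplData>,
}

impl CrateImplBlocks {
    fn alloc(&mut self, data: ImplData) -> ImplId {
        let id = ImplId(self.arena.len() as u32);
        self.arena.push(data);
        id
    }

    // Finding all impls for a type is a scan over the per-crate arena.
    fn impls_for_type(&self, ty: &str) -> Vec<ImplId> {
        self.arena
            .iter()
            .enumerate()
            .filter(|(_, data)| data.target_type == ty)
            .map(|(i, _)| ImplId(i as u32))
            .collect()
    }
}

// The public handle: just the shared arena plus an index into it.
#[derive(Debug, Clone)]
struct ImplBlock {
    crate_impl_blocks: Arc<CrateImplBlocks>,
    impl_id: ImplId,
}

impl ImplBlock {
    fn target_type(&self) -> &str {
        &self.crate_impl_blocks.arena[self.impl_id.0 as usize].target_type
    }
}

fn main() {
    let mut collected = CrateImplBlocks::default();
    let id = collected.alloc(ImplData { target_type: "Foo".to_string() });
    let arena = Arc::new(collected);

    let impl_block = ImplBlock { crate_impl_blocks: Arc::clone(&arena), impl_id: id };
    assert_eq!(impl_block.target_type(), "Foo");
    assert_eq!(arena.impls_for_type("Foo"), vec![id]);
}
```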
We need to be able to get the receiver even if there is no field name yet, and
currently "a." wouldn't get parsed as a field name at all. This seems to help.
The cast expression expected any type via the types::type_() function,
but the language spec only allows TypeNoBounds (types without direct extra bounds
via `+`).
**Example:**
```rust
fn test() {
    6i8 as i32 + 5;
}
```
This fails because the types::type_() function, which should parse the type after the
`as` keyword, is greedy and treats every plus sign after a path type as an extra bound.
My proposed fix replaces the unimplemented `type_no_plus()`, which currently just calls `type_()`
and is used in several places. The replacement is `type_with_bounds_cond(p: &mut Parser, allow_bounds: bool)`, which passes the condition down to the relevant sub-parsers.
This function is then called by both `type_()` and the new public `type_no_bounds()`.
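A self-contained sketch of that split; the `Parser` below is a stub, and only the call structure is meant to mirror the description above:

```rust
// Stub standing in for the real ra_syntax parser.
struct Parser {
    allow_bounds_log: Vec<bool>,
}

fn type_(p: &mut Parser) {
    // Used in most positions: `+` bounds after a path type are allowed.
    type_with_bounds_cond(p, true);
}

fn type_no_bounds(p: &mut Parser) {
    // Used after `as`: a cast only accepts TypeNoBounds, so path types
    // must not greedily consume a following `+`.
    type_with_bounds_cond(p, false);
}

fn type_with_bounds_cond(p: &mut Parser, allow_bounds: bool) {
    // The real function dispatches to the sub-parsers and forwards
    // `allow_bounds`; here we only record the decision.
    p.allow_bounds_log.push(allow_bounds);
}

fn main() {
    let mut p = Parser { allow_bounds_log: Vec::new() };
    type_(&mut p);          // e.g. the type of a `let` binding
    type_no_bounds(&mut p); // e.g. the type after `as` in `6i8 as i32 + 5`
    assert_eq!(p.allow_bounds_log, vec![true, false]);
}
```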
256: Improve/add use_item documentation r=matklad a=DJMcNab
Adds some documentation to use_item explaining all code paths (use imports are hard, especially with the ongoing discussion of anchored v. uniform paths - see https://github.com/rust-lang/rust/issues/55618 for what appears to be the latest developments)
Co-authored-by: DJMcNab <36049421+djmcnab@users.noreply.github.com>
207: Finish implementing char validation r=aochagavia a=aochagavia
The only things missing right now are good integration tests (and maybe more descriptive error messages)
Co-authored-by: Adolfo Ochagavía <github@adolfo.ochagavia.xyz>
127: Improve folding r=matklad a=aochagavia
I was messing around with adding support for multiline comments in folding and ended up changing a bunch of other things.
First of all, I am not convinced of folding groups of successive items. For instance, I don't see why it is worthwhile to be able to fold something like the following:
```rust
use foo;
use bar;
```
Furthermore, this causes problems if you want to fold a multiline import:
```rust
use foo::{
    quux
};
use bar;
```
The problem is that now there are two possible folds at the same position: we could fold the first use or we could fold the import group. IMO, the only place where folding groups makes sense is when folding comments. Therefore I have **removed folding import groups in favor of folding multiline imports**.
Regarding folding comments, I made it a bit more robust by requiring that comments can only be folded if they have the same flavor. So if you have a bunch of `//` comments followed by `//!` comments, you will get two separate fold groups instead of a single one.
Finally, I rewrote the API in such a way that it should be trivial to add new folds. You only need to:
* Create a new FoldKind
* Add it to the `fold_kind` function that converts from `SyntaxKind` to `FoldKind`
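A minimal sketch of those two steps, with simplified stand-ins for the real `SyntaxKind` and fold types:

```rust
// Simplified stand-ins; the real kinds live in ra_syntax.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum SyntaxKind {
    Comment,
    UseItem,
    StructDef,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum FoldKind {
    Comment,
    Imports,
    // 1. Add a new variant here for the new kind of fold.
}

// 2. Teach the conversion about the new kind.
fn fold_kind(kind: SyntaxKind) -> Option<FoldKind> {
    match kind {
        SyntaxKind::Comment => Some(FoldKind::Comment),
        SyntaxKind::UseItem => Some(FoldKind::Imports),
        _ => None,
    }
}

fn main() {
    assert_eq!(fold_kind(SyntaxKind::UseItem), Some(FoldKind::Imports));
    assert_eq!(fold_kind(SyntaxKind::StructDef), None);
}
```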
Fixes #113
Co-authored-by: Adolfo Ochagavía <github@adolfo.ochagavia.xyz>
Implements a pretty barebones function signature help mechanism in
the language server.
Users can use `Analysis::resolve_callback()` to get basic information
about a call site.
Fixes #102
116: Collapse comments upon join r=matklad a=aochagavia
Todo:
- [x] Write tests
- [x] Resolve fixmes
- [x] Implement `comment_start_length` using the parser
I left a bunch of questions as fixmes. Can someone take a look at them? Also, I would love to use the parser to calculate the length of the leading characters in a comment (`//`, `///`, `//!`, `/*`), so any hints are greatly appreciated.
Co-authored-by: Adolfo Ochagavía <aochagavia92@gmail.com>
Co-authored-by: Adolfo Ochagavía <github@adolfo.ochagavia.xyz>