1208: [WIP] Goto for Macros r=matklad a=Lapz
Adds goto definition for macros. Currently, it only works for macros in the current crate ~~otherwise it panics~~. Proper macro resolution needs to be added for it to resolve macros in other crates.
Todo
- [x] Allow goto from macro calls
- [x] Fix panics
- [x] Add tests
![Screen Recording 2019-04-25 at 18 00 24](https://user-images.githubusercontent.com/19998186/56754499-1dd01c00-6785-11e9-9e9a-1e36de70cfa3.gif)
Co-authored-by: Lenard Pratt <l3np27@gmail.com>
1216: Basic Chalk integration r=matklad a=flodiebold
This replaces the ad-hoc `implements` check by Chalk. It doesn't add any new functionality yet (e.g. where clauses aren't passed to Chalk yet). The tests that exist actually work, but it needs some refactoring, currently crashes when running analysis on the RA repo, and depends on rust-lang/chalk#216, which isn't merged yet 😄
The main work here is converting stuff back and forth and providing Chalk with the information it needs, and the canonicalization logic. Since canonicalization depends a lot on the inference table, I don't think we can currently reuse the logic from Chalk, so we need to implement it ourselves; it's not actually that complicated anyway ;) I realized that we need a `Ty::Bound` variant separate from `Ty::Param` -- these are two different things, and I think type parameters inside a function actually need to be represented in Chalk as `Placeholder` types.
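For reference, the distinction could look roughly like this (a simplified sketch, not the actual `ra_hir` definitions; the variant payloads are illustrative assumptions):

```rust
// Simplified sketch, not the real ra_hir types; payloads are illustrative.
enum Ty {
    /// A type parameter as written in the source, e.g. `T` in `fn foo<T>(x: T)`.
    /// Inside the function body it behaves like an opaque placeholder type.
    Param { idx: u32, name: String },
    /// A variable bound by an enclosing `Canonical` wrapper, identified by its
    /// index (essentially a De Bruijn index). These are introduced during
    /// canonicalization and mapped back to inference variables afterwards.
    Bound(u32),
    // ... inference variables, applied types, etc. elided
}

/// A value together with the number of bound variables it contains.
struct Canonical<T> {
    value: T,
    num_vars: usize,
}
```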
~~Currently this crashes in the 'real' world because we don't yet do canonicalization when filtering method candidates. Proper canonicalization needs the inference table (to collapse different inference variables that have already been unified), but we need to be able to call the method candidate selection from the completion code... So I'm currently thinking how to best handle that 😄~~
Co-authored-by: Florian Diebold <flodiebold@gmail.com>
1238: Macro queries r=edwin0cheng a=matklad
In https://github.com/rust-analyzer/rust-analyzer/pull/1231, I've added aggressive clean up of `ast_id_to_node` query.
The result of this query is a `SyntaxTree`, and we don't want to retain syntax trees in memory unless absolutely necessary.
Moreover, `SyntaxTree` has identity equality semantics, meaning that we'll get a different syntax tree for a file after every reparse. That means that the `ast_id_to_node` query should generally not be used in HIR, unless it is behind some kind of salsa firewall, like the `raw` module of name resolution.
However, that PR resulted in abysmal performance: it turns out we were using `ast_id_to_node` quite heavily in HIR when expanding macros!
So this PR installs a more incremental-friendly query structure:
* converting source to token tree is now a query; changing source without affecting token-trees will now preserve macro expansions
* expand macro (tt -> tt) is now a query as well, so we cache macro expansions *before* parsing them into item lists or expressions, which is nice: we can cache expansion without knowing the calling context!
r? @edwin0cheng
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
Currently, when expanding macros, we look at the source code
directly (we invoke the ast_id_to_node query via the to_node method).
This is less than ideal, because it makes us re-expand macros after
every source change.
This commit establishes a salsa-firewall: a query to get macro call's
token tree. Unlike the syntax tree, token tree changes only if we
actually modify the macro itself.
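As a rough illustration, the intended query structure could look like this (hypothetical names and signatures, assuming the existing `MacroCallId`, `tt::Subtree`, `SourceDatabase`, and `Arc` types; not the exact code in the repo):

```rust
// Hypothetical sketch of the two new queries. The important property is that
// both are keyed by a macro call id rather than by a syntax tree, so an edit
// that leaves the call's token tree unchanged does not invalidate the cached
// expansion.
#[salsa::query_group(MacroExpansionStorage)]
trait MacroExpansionDatabase: SourceDatabase {
    /// Source -> token tree: re-computed only when the macro call itself changes.
    fn macro_arg(&self, call: MacroCallId) -> Option<Arc<tt::Subtree>>;

    /// Token tree -> token tree expansion, cached before the result is parsed
    /// into item lists or expressions, independent of the calling context.
    fn macro_expand(&self, call: MacroCallId) -> Result<Arc<tt::Subtree>, String>;
}
```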
1230: Desugar doc comments to `#[doc = "...."]` attributes in `syntax node` to tt conversion r=matklad a=edwin0cheng
As discussed in [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Fwg-rls-2.2E0/topic/MBE.20discussion/near/164446835), this PR desugar doc comments to `#[doc = "...."]` in `syntax node` to tt conversion.
Note that after this PR, all obvious MBE bugs found while dogfooding are fixed (i.e. no MBE parsing or expansion errors in `env RUST_LOG=ra_hir=WARN target\release\ra_cli.exe analysis-stats`) 🎉
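To illustrate the desugaring itself (this is what macros now observe, not the conversion code):

```rust
// A doc comment on an item...
/// Some docs
fn documented() {}

// ...is handed to macros as if it had been written as an attribute
// (the leading space after `///` is preserved in the string literal):
#[doc = " Some docs"]
fn desugared() {}
```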
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
1227: Add `default_type` method in `TypeParam` Node r=matklad a=edwin0cheng
This PR adds a `default_type` method to the `TypeParam` node, which allows a future PR to handle the #1099 case.
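A hypothetical usage sketch (the exact return type of `default_type` may differ):

```rust
// For `struct Foo<T = u32> { field: T }`, calling `default_type()` on the
// `TypeParam` node for `T` is expected to yield the node for `u32`, and
// `None` when no default is written.
fn report_default(param: &ast::TypeParam) {
    match param.default_type() {
        Some(ty) => eprintln!("default type: {}", ty.syntax().text()),
        None => eprintln!("no default type"),
    }
}
```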
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
1222: Skip Dollars when bump raw token r=matklad a=edwin0cheng
This PR fixes a bug in `token_tree` parsing: it should skip all `L_DOLLAR` and `R_DOLLAR` tokens.
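Roughly, the idea is the following (a hypothetical helper, not the actual parser code):

```rust
// L_DOLLAR / R_DOLLAR are invisible delimiters that macro expansion inserts
// around fragments; they carry no text, so when bumping over raw tokens while
// re-parsing a token tree they must simply be skipped, not emitted.
fn skip_dollars(tokens: &[SyntaxKind], mut pos: usize) -> usize {
    while pos < tokens.len() && (tokens[pos] == L_DOLLAR || tokens[pos] == R_DOLLAR) {
        pos += 1;
    }
    pos
}
```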
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
1220: Add macro pat parsing r=matklad a=edwin0cheng
This PR adds support for parsing macro calls in pattern position, e.g.:
```
let m!(x) = 0;
```
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
1213: Make lexer produce only single character puncts r=matklad a=edwin0cheng
As discussed in Zulip, this PR changes the lexer to produce only single-char puncts.
* Stop producing `DOTDOTDOT`, `DOTDOTEQ`, `DOTDOT`, `COLONCOLON`, `EQEQ`, `FAT_ARROW`, `NEQ`, `THIN_ARROW` in the lexer.
* Add the required code in the parser to make sure everything works fine.
* Change some tests (mainly because the `ast::token_tree` is different).
Note: I think the use of `COLON` in Rust is too overloaded :)
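A rough sketch of what the parser side now has to do (hypothetical helper methods, not the real parser API):

```rust
// The lexer emits `.` `.` `.` as three separate tokens; the parser decides
// whether they form `...` by checking that the single-char puncts are
// adjacent, with no whitespace or comments in between ("joint" tokens).
fn at_dotdotdot(p: &Parser) -> bool {
    p.current() == DOT
        && p.nth(1) == DOT
        && p.nth(2) == DOT
        && p.is_joint(0) // no trivia between tokens 0 and 1
        && p.is_joint(1) // no trivia between tokens 1 and 2
}
```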
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
E.g. in
```
let foo = 1u32;
if true {
<|>foo;
}
```
the hover shows `()`, the type of the whole if expression, instead of the more
sensible `u32`. The reason for this was that the search for an expression was
slightly left-biased: when on the edge between two tokens, it first looked at
all ancestors of the left token and then of the right token. Instead, merge the
ancestors in ascending order, so that we get the smaller of the two possible
expressions.
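A minimal sketch of the merge, assuming an `itertools`-style `merge_by` and an old-style `SyntaxNode::range()` accessor (helper names are simplified):

```rust
use itertools::Itertools;

// Interleave the ancestor chains of the token to the left and the token to
// the right of the cursor by ascending text-range length, so that the
// smaller of the two candidate expressions is encountered first.
fn merged_ancestors<'a>(
    left: &'a SyntaxNode,
    right: &'a SyntaxNode,
) -> impl Iterator<Item = &'a SyntaxNode> + 'a {
    left.ancestors()
        .merge_by(right.ancestors(), |l, r| l.range().len() <= r.range().len())
}
```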
1200: Allows searching for case-equivalent symbols (fixes #1151) r=matklad a=jrvidal
I couldn't find a nice, functional way of calculating the ranges in one pass, so I resorted to a plain old `for` loop.
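For illustration, the range computation with a plain `for` loop might look like this (a hypothetical, ASCII-oriented sketch, not the PR's actual code):

```rust
use std::ops::Range;

// Walk the symbol once and record the byte ranges of the characters that
// match the (already lowercased) query, comparing case-insensitively.
fn matching_ranges(symbol: &str, lowercased_query: &str) -> Vec<Range<usize>> {
    let mut ranges = Vec::new();
    let mut query = lowercased_query.chars().peekable();
    for (offset, c) in symbol.char_indices() {
        match query.peek() {
            Some(&q) if c.to_ascii_lowercase() == q => {
                ranges.push(offset..offset + c.len_utf8());
                query.next();
            }
            _ => {}
        }
    }
    ranges
}
```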
Co-authored-by: Roberto Vidal <vidal.roberto.j@gmail.com>
1147: Handle macros in type checking / HIR r=matklad a=Lapz
Another attempt at #1102. I will need to flatten the nested if statements, and I'm also not sure if the way that I get the resolver and module will always work.
Co-authored-by: Lenard Pratt <l3np27@gmail.com>
We really shouldn't be looking at the identifier at point. Instead,
all filtering and sorting should be implemented at the layer above.
This layer should probably be the home for auto-import completions as
well, but, since that is not yet implemented, let's just stick this
into `complete_scope`.
1193: Add a test for #1178 case r=edwin0cheng a=edwin0cheng
A little PR to add a test case for #1178
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
1184: Start structured editing API r=matklad a=matklad
I think I finally understand how to provide nice, mutable structured editing API on top of red-green trees.
The problem I am trying to solve is that any modification to a particular `SyntaxNode` returns an independent new file. So, if you are editing a struct literal, and add a field, you get back a SourceFile, and you have to find the struct literal inside it yourself! This happens because our trees are immutable, but have parent pointers.
The main idea here is to introduce the `AstEditor<T>` type, which abstracts away that API. So, you create an `AstEditor` for the node you want to edit and call various `&mut`-taking methods on it. Internally, `AstEditor` stores both the original node and the current node. All edits are applied to the current node, which is replaced by the corresponding node in the new file. In the end, `AstEditor` computes a text edit between the old and new nodes.
Note that this should also solve a problem where you create an anchor pointing to a subnode and then mutate the parent node, invalidating the anchor. Because mutation needs `&mut`, all anchors must be killed before modification.
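A simplified sketch of the shape this API could take (field names, `compute_diff`, and the exact types are assumptions, not the PR's code):

```rust
// The editor remembers the node as it was when editing started and the latest
// rewritten copy; every `&mut self` edit rebuilds the file and replaces `ast`
// with the corresponding node in the new tree, and the text edit is computed
// once, at the end, by diffing old against new.
struct AstEditor<N: AstNode> {
    original_ast: TreeArc<N>,
    ast: TreeArc<N>,
}

impl<N: AstNode> AstEditor<N> {
    fn new(node: &N) -> AstEditor<N> {
        AstEditor { original_ast: node.to_owned(), ast: node.to_owned() }
    }

    /// Borrow the current node; `&mut` editing methods are added per node type.
    fn ast(&self) -> &N {
        &self.ast
    }

    fn into_text_edit(self) -> TextEdit {
        // Hypothetical helper: diff the original and edited syntax trees.
        compute_diff(self.original_ast.syntax(), self.ast.syntax())
    }
}
```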
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>