* make stuff more type-safe by using `BindPat` instead of just `Pat`
* don't add `mut` into binding hash
* reset shadow counter when we enter a function
1537: Less magic completions r=matklad a=marcogroppo
Restrict `if`, `not` and `while` postfix magic completions to boolean expressions and expressions of an unknown type.
(this may be controversial, marking as draft for this reason)
See the discussion in #1526.
Co-authored-by: Marco Groppo <marco.groppo@gmail.com>
1520: Ignore workspace/didChangeConfiguration notifications. r=matklad a=bolinfest
If the client happens to send a `workspace/didChangeConfiguration`
notification, it is nicer if rust-analyzer can just ignore it rather than
crash with an "unhandled notification" error.
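A sketch of the intended behaviour (a simplified free function, not the actual main-loop dispatch code):
```rust
fn on_notification(method: &str) -> Result<(), String> {
    match method {
        // Ignore: we don't react to configuration changes yet, but the client
        // is allowed to send this notification.
        "workspace/didChangeConfiguration" => Ok(()),
        other => Err(format!("unhandled notification: {}", other)),
    }
}
```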
Co-authored-by: Michael Bolin <bolinfest@gmail.com>
This appears to have been introduced ages ago in
be742a5877
but its call sites have since been removed.
As it stands, it is problematic if multiple instances of the
rust-analyzer LSP are launched during the same VS Code session because
VS Code complains about multiple LSP servers trying to register the
same command.
Most LSP servers work around this by parameterizing the command by the
process id. For example, here is where `rls` does this:
ff0b9057c8/rls/src/server/mod.rs (L413-L421)
However, `apply_code_action` does not seem to be used, so it seems better
to delete it than to parameterize it.
1515: Trait environment r=matklad a=flodiebold
This adds the environment, i.e. the set of `where` clauses in scope, when solving trait goals. That means that e.g. in
```rust
fn foo<T: SomeTrait>(t: T) {}
```
, we are able to complete methods of `SomeTrait` on `t`. This affects the trait APIs quite a bit (since every method that needs to be able to solve for some trait needs to get this environment somehow), so I thought I'd do it sooner rather than later ;)
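For illustration, a minimal sketch of the completions this enables (the trait and method names here are made up, not from the PR):
```rust
trait SomeTrait {
    fn some_method(&self);
}

fn foo<T: SomeTrait>(t: T) {
    // Because the `T: SomeTrait` bound is now part of the environment when
    // solving trait goals, completing on `t.` offers `some_method`.
    t.some_method();
}
```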
Co-authored-by: Florian Diebold <flodiebold@gmail.com>
1504: Simplify LSP handlers r=matklad a=kjeremy
Takes advantage of protocol inheritance via composition and simplifies some responses via the `From`/`Into` traits.
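A toy sketch of the `From`/`Into` pattern (all type and function names here are made up, not rust-analyzer's real ones):
```rust
// Hypothetical stand-ins for an internal type and its LSP counterpart.
struct NavigationTarget {
    file: String,
    offset: u32,
}

struct LspLocation {
    uri: String,
    offset: u32,
}

impl From<NavigationTarget> for LspLocation {
    fn from(nav: NavigationTarget) -> LspLocation {
        LspLocation {
            uri: format!("file://{}", nav.file),
            offset: nav.offset,
        }
    }
}

// A handler body then shrinks to a single conversion.
fn handle_goto_definition(nav: NavigationTarget) -> LspLocation {
    nav.into()
}
```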
Co-authored-by: Jeremy Kolb <kjeremy@gmail.com>
1499: processing attribute #[path] of module r=matklad a=andreevlex
Supports two cases:
- a simple file name, e.g. `foo.rs`
- a declaration in `mod.rs`
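For reference, the kind of module declaration being handled (the path here is just an example):
```rust
// The module body is loaded from the given file instead of the default
// `bar.rs` / `bar/mod.rs` locations.
#[path = "some/other/file.rs"]
mod bar;
```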
#1211
Co-authored-by: Alexander Andreev <andreevlex.as@gmail.com>
1491: More clippy r=matklad a=kjeremy
A few more clippy changes.
I'm a little unsure of the second commit. It's the `trivially_copy_pass_by_ref` lint, and there are a number of places in the code where we could apply it if it makes sense.
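For context, a small sketch of what `trivially_copy_pass_by_ref` suggests (illustrative functions, not code from this repo):
```rust
// Before: clippy flags the `&u32` parameter, since a reference to a small
// `Copy` type is no cheaper than the value itself.
fn is_even_by_ref(n: &u32) -> bool {
    *n % 2 == 0
}

// After: take the value by copy instead.
fn is_even(n: u32) -> bool {
    n % 2 == 0
}
```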
Co-authored-by: Jeremy Kolb <kjeremy@gmail.com>
This wasn't the right decision in the first place: the feature flag was
broken in the last rustfmt release, and syntax highlighting of imports
is more important anyway.
Array members are allowed to have attributes such as `#[cfg]`.
This is a bit tricky as we don't know if the first expression is an
initializer or a member until we encounter a `;`. This reuses a trick
from `stmt` where we remember if we saw an attribute and then raise an
error if the first expression ends up being an initializer.
This isn't perfect as the error isn't correctly located on the attribute
or initializer; it ends up immediately after the `;`.
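A sketch of the two array forms the parser has to tell apart (the `#[cfg]` predicate is arbitrary):
```rust
fn main() {
    // Element list: attributes such as `#[cfg]` are allowed on members.
    let _elements = [
        #[cfg(windows)]
        1,
        2,
        3,
    ];

    // Repeat form `[initializer; length]`: an attribute on the initializer is
    // an error, but the parser only learns which form it is parsing once it
    // reaches the `;`.
    let _zeros = [0u8; 16];
}
```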
1456: Deduplicate method candidates r=matklad a=flodiebold
With trait method completion + autoderef, we were getting a lot of duplicates, which was really annoying...
Co-authored-by: Florian Diebold <flodiebold@gmail.com>
My workflow in Visual Studio Code + Rust Analyzer has become:
1. Make a change to Rust source code using all the analysis magic
2. Save the file to trigger `cargo watch`. I have format on save enabled
for all file types so this also runs `rustfmt`
3. Fix any diagnostics that `cargo watch` finds
Unfortunately if the Rust source has any syntax errors the act of saving
will pop up a scary "command has failed" message and will switch to the
"Output" tab to show the `rustfmt` error and exit code.
I did a quick survey of what other Language Servers do in this case.
Both the JSON and TypeScript servers will swallow the error and return
success. This is consistent with how I remember my workflow in those
languages. The syntax error will show up as a diagnostic so it should
be clear why the file isn't formatting.
I checked the `rustfmt` source code and, while it does distinguish "parse
errors" from "operational errors" internally, they both result in an exit
status of 1. However, more catastrophic errors (missing `rustfmt`,
SIGSEGV, etc.) return exit codes of 127 or higher, which we can
distinguish from a normal failure.
This changes our handler to log an info message and feign success if
`rustfmt` exits with status 1.
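A minimal sketch of that behaviour (a standalone helper with assumed names, not the actual LSP handler):
```rust
use std::process::Command;

/// Returns Ok(true) if the file was formatted, Ok(false) if rustfmt reported
/// a parse/operational error (exit status 1), and Err for anything worse.
fn format_in_place(path: &str) -> std::io::Result<bool> {
    let status = Command::new("rustfmt").arg(path).status()?;
    match status.code() {
        Some(0) => Ok(true),
        Some(1) => {
            // Feign success: the syntax error will show up as a diagnostic
            // anyway, so a "command failed" pop-up is just noise.
            eprintln!("rustfmt exited with status 1; leaving the file as-is");
            Ok(false)
        }
        // Missing binary, crashes, etc. are surfaced as real errors.
        _ => Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            "rustfmt failed to run",
        )),
    }
}
```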
Another option I considered was only swallowing the error if the
formatting request came from format-on-save. However, the Language
Server Protocol doesn't seem to distinguish those cases.
1443: cache chalk queries r=flodiebold a=matklad
This gives a significant speedup, because chalk will call these
functions several times even within a single revision. The only
significant one here is `impl_data`, but I figured it might be good to
cache others just for consistency.
The results I get are:
Before:
from scratch: 16.081457952s
no change: 15.846493ms
trivial change: 352.95592ms
comment change: 361.998408ms
const change: 457.629212ms
After:
from scratch: 14.910610278s
no change: 14.934647ms
trivial change: 85.633023ms
comment change: 96.433023ms
const change: 171.543296ms
Seems like a nice win!
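For intuition only, a toy memo keyed by revision; the actual change just turns these lookups into cached salsa queries, but the effect is similar:
```rust
use std::collections::HashMap;

// Toy cache: recompute at most once per (revision, impl id) pair, so repeated
// chalk callbacks within one revision hit the map instead of redoing the work.
struct ImplDataCache {
    revision: u64,
    memo: HashMap<u32, String>, // value is a stand-in for the real impl data
}

impl ImplDataCache {
    fn get_or_compute(&mut self, revision: u64, impl_id: u32) -> &String {
        if revision != self.revision {
            self.memo.clear();
            self.revision = revision;
        }
        self.memo
            .entry(impl_id)
            .or_insert_with(|| format!("impl data for {}", impl_id))
    }
}
```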
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
Now, one can use `let _p = ra_prof::cpu_profiler()` to capture a profile
of a block of code.
This is not an out-of-the-box experience, as it relies on gperftools.
See the docs on https://github.com/AtheMathmo/cpuprofiler for more!
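Guard-style usage might look like this (assuming the profile is flushed when the guard is dropped):
```rust
fn expensive_pass() {
    // Profiling starts here and covers the rest of the scope.
    let _p = ra_prof::cpu_profiler();

    // ... work to be profiled ...
} // `_p` is dropped here, ending the capture
```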
1432: Make fill_match_arm work with trivial arm r=matklad a=ironyman
Addresses this issue https://github.com/rust-analyzer/rust-analyzer/issues/1399
One minor issue I noticed is that `complete_postfix` creates an arm like this:
```
match E::X {
<|>_ => {},
}
```
but `fill_match_arms` creates arms like this:
```
E::X => (),
```
Co-authored-by: ironyman <ironyman@users.noreply.github.com>
Co-authored-by: Changyu Li <changyl@microsoft.com>
1409: The Fall down of failures r=matklad a=mominul
😁
Replaced all the uses of the `failure` crate with `std::error::Error`.
Closes #1400
Depends on rust-analyzer/teraron#1
Co-authored-by: Muhammad Mominul Huque <mominul2082@gmail.com>
Can be used like this:
```
$ cargo run --release -p ra_cli -- \
analysis-bench ../chalk/ \
--complete ../chalk/chalk-engine/src/logic.rs:94:0
loading: 225.970093ms
from scratch: 8.492373325s
no change: 445.265µs
trivial change: 95.631242ms
```
Or like this:
```
$ cargo run --release -p ra_cli -- \
analysis-bench ../chalk/ \
--highlight ../chalk/chalk-engine/src/logic.rs
loading: 209.873484ms
from scratch: 9.504916942s
no change: 7.731119ms
trivial change: 124.984039ms
```
"from scratch" includes initial analysis of the relevant bits of the
project
"no change" just asks the same question for the second time. It
measures overhead on assembling the answer outside of salsa.
"trivial change" doesn't do an actual salsa change, it just advances
the revision. This test how fast is salsa at validating things.
1408: Associated type basics & Deref support r=matklad a=flodiebold
This adds the necessary Chalk integration to handle associated types and uses it to implement support for `Deref` in the `*` operator and autoderef; so e.g. dot completions through an `Arc` work now.
It doesn't yet implement resolution of associated types in paths, though. Also, there's a big FIXME about handling variables in the solution we get from Chalk correctly.
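For illustration, the kind of completion that now works (struct and method names are made up):
```rust
use std::sync::Arc;

struct Logic;

impl Logic {
    fn step(&self) {}
}

fn run(logic: Arc<Logic>) {
    // Autoderef through `Deref<Target = Logic>` means dot completion on
    // `logic.` can now offer `step`, even though `logic` is an `Arc`.
    logic.step();
}
```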
Co-authored-by: Florian Diebold <flodiebold@gmail.com>
1406: reuse AnalysisHost in batch analysis r=matklad a=matklad
We do some custom setup in `AnalysisHost`, like setting up LRU size. I figure it's a good idea to not duplicate this work in batch analysis, *if* we want to keep batch and non-batch close.
Long-term, I see a value in keeping batch a separate, lighter weight thing. However, because now we use batch to measure performance, keeping them closer makes more sense.
I'd also like to add the ability to get completions via batch analysis, and that will require ra_ide_api as well.
@flodiebold was there some reason why we didn't start with this approach?
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>