4207: Add unwrap block assist #4156 r=matklad a=bnjjj
close issue #4156
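For illustration only (an example of what the assist does, not its implementation): with the cursor inside a block such as an `if` body, "Unwrap block" replaces the surrounding construct with the block's contents.

```rust
// Before:
fn before() {
    if true {
        println!("hello");
    }
}

// After invoking "Unwrap block" on the `if` body:
fn after() {
    println!("hello");
}
```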
4253: Remove `workspaceLoaded` setting r=matklad a=eminence
The `workspaceLoaded` notification setting was originally designed to
control the display of a popup message that said:
"workspace loaded, {} rust packages"
This popup was removed and replaced by a much sleeker message in the
VSCode status bar that provides a real-time status while loading:
rust-analyzer: {}/{} packages
This was done as part of #3587
The change in this PR simply renames this setting from `workspaceLoaded` to
`progress` to better describe what it actually controls. At the moment,
the only progress messages controlled by this setting are
the initial load messages, but in theory other messages could also be
controlled by this setting.
Reviewer notes:
* If we don't like the idea of causing a minor breaking change to users' configs, we could keep the setting name as `workspaceLoaded`
* I think we can now close both #2719 and #3176 since the notification dialog in question no longer exists (actually I think you can close those issues even if you reject this PR 😄 )
Co-authored-by: Benjamin Coenen <5719034+bnjjj@users.noreply.github.com>
Co-authored-by: Andrew Chin <achin@eminence32.net>
The new status-bar indicator is unobtrusive and shouldn't need to be
disabled. So this setting is removed.
4161: lsp-types 0.74 r=kjeremy a=kjeremy
* Fixes a bunch of param types to take partial progress into account.
* Will allow us to support insert/replace text in completions
Co-authored-by: kjeremy <kjeremy@gmail.com>
4113: Support returning non-hierarchical symbols r=matklad a=kjeremy
If `hierarchicalDocumentSymbolSupport` is not true in the client capabilities,
then the client does not support the `DocumentSymbol[]` return type from the
`textDocument/documentSymbol` request and we must fall back to `SymbolInformation[]`.
This is one of the few requests that use the client capabilities to
differentiate between return types and could cause problems for clients.
See https://github.com/microsoft/language-server-protocol/pull/538#issuecomment-442510767 for more context.
Found while looking at #144
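A minimal sketch of the fallback using the `DocumentSymbolResponse` enum from the lsp-types crate; the helper and its parameters are assumptions for illustration, not rust-analyzer's actual handler:

```rust
use lsp_types::{DocumentSymbol, DocumentSymbolResponse, SymbolInformation};

/// `hierarchical` holds the client's
/// `textDocument.documentSymbol.hierarchicalDocumentSymbolSupport` capability.
fn to_symbol_response(
    hierarchical: bool,
    nested: Vec<DocumentSymbol>,
    flat: Vec<SymbolInformation>,
) -> DocumentSymbolResponse {
    if hierarchical {
        // The client understands the nested `DocumentSymbol[]` shape.
        DocumentSymbolResponse::Nested(nested)
    } else {
        // Fall back to the flat `SymbolInformation[]` shape.
        DocumentSymbolResponse::Flat(flat)
    }
}
```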
4136: add support for cfg feature attributes on expression #4063 r=matklad a=bnjjj
close issue #4063
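For illustration, this is the kind of code that now gets `cfg` handling on expressions and statements; the `extra` feature name is made up for the example:

```rust
fn main() {
    // cfg attribute on a statement:
    #[cfg(feature = "extra")]
    let bonus: u32 = 10;

    let mode = std::env::args().count();

    // cfg attribute on a match arm:
    let label = match mode {
        #[cfg(feature = "extra")]
        2 => format!("extra (+{})", bonus),
        _ => String::from("default"),
    };
    println!("{}", label);
}
```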
4141: Fix typo r=matklad a=Veetaha
4142: Remove unnecessary async from vscode language client creation r=matklad a=Veetaha
Co-authored-by: kjeremy <kjeremy@gmail.com>
Co-authored-by: Benjamin Coenen <5719034+bnjjj@users.noreply.github.com>
Co-authored-by: veetaha <veetaha2@gmail.com>
4133: main: eagerly prime goto-definition caches r=matklad a=BurntSushi
This commit eagerly primes the caches used by goto-definition by
submitting a "phantom" goto-definition request. This is perhaps a bit
circuitous, but it does actually get the job done. The result of this
change is that once RA has finished its initial loading of a project,
goto-definition requests are instant. There don't appear to be any more
surprise latency spikes.
This _partially_ addresses #1650 in that it front-loads the latency of the
first goto-definition request, which in turn makes it more predictable and
less surprising. In particular, this addresses the use case where one opens
the text editor, starts reading code for a while, and only later issues the
first goto-definition request. Before this PR, that first goto-definition request
is guaranteed to have high latency in any reasonably sized project. But
after this PR, there's a good chance that it will now be instant.
What this _doesn't_ address is that initial loading time. In fact, it makes it
longer by adding a phantom goto-definition request to the initial startup
sequence. However, I observed that while this did make initial loading
slower, it was overall a somewhat small (but not insignificant) fraction
of initial loading time.
-----
At least, the above is what I _want_ to do. The actual change in this PR is just a proof-of-concept that I came up with after an evening of printf-debugging. Once I found the spot where this cache priming should go, I was unsure of how to generate a phantom input. So I just took an input I knew worked from my printf-debugging and hacked it in. Obviously, what I'd like to do is make this more general such that it will always work.
I don't know whether this is the "right" approach or not. My guess is that there is perhaps a cleaner solution that more directly primes whatever cache is being lazily populated rather than fudging the issue with a phantom goto-definition request.
I created this as a draft PR because I'd really like help making this general. I think whether y'all want to accept this patch is perhaps a separate question. IMO, it seems like a good idea, but to be honest, I'm happy to maintain this patch on my own since it's so trivial. But I would like to generalize it so that it will work in any project.
My thinking is that all I really need to do is find a file and a token somewhere in the loaded project, and then use that as input. But I don't quite know how to connect all the data structures to do that. Any help would be appreciated!
cc @matklad since I've been a worm in your ear about this problem. :-)
Co-authored-by: Andrew Gallant <jamslam@gmail.com>
This commit makes RA more aggressive about eagerly priming the caches.
In particular, this fixes an issue where even after RA was done priming
its caches, an initial goto-definition request would have very high
latency. This fixes that issue by requesting syntax highlighting for
everything. It is presumed that this is a tad wasteful, but not overly
so.
This commit also tweaks the logic that determines when the cache is
primed. Namely, instead of just priming it when the state is loaded
initially, we attempt to prime it whenever some state changes. This
fixes an issue where a modification notification seen before cache
priming was done would stop the priming early.
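A minimal sketch of the priming idea with stand-in types (the real code goes through rust-analyzer's own analysis layer): walk every loaded file and compute highlighting for it, purely for the side effect of filling the caches.

```rust
// Stand-in types for the sketch; rust-analyzer uses its own Analysis/FileId.
struct Analysis;
#[derive(Copy, Clone)]
struct FileId(u32);

impl Analysis {
    // Computing highlighting forces parsing, name resolution and type
    // inference for the file, which is exactly the state we want warmed up.
    fn highlight(&self, _file: FileId) { /* elided */ }
}

fn prime_caches(analysis: &Analysis, files: &[FileId]) {
    for &file in files {
        // The result is discarded: only the cache-filling side effect matters.
        analysis.highlight(file);
    }
}

fn main() {
    let analysis = Analysis;
    let files: Vec<FileId> = (0..3).map(FileId).collect();
    prime_caches(&analysis, &files);
}
```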
4125: Avoid lossy OsString conversions r=matklad a=lnicola
This is a bit invasive, and perhaps for not much benefit since non-UTF-8 environment variables don't work anyway.
Co-authored-by: Laurențiu Nicola <lnicola@dend.ro>
4101: Panic proc macro srv if read request failed r=matklad a=edwin0cheng
This PR fixes a bug where, if rust-analyzer is killed suddenly, the `rust-analyzer proc-macro` process becomes stale.
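A minimal sketch of the behavior, assuming a line-oriented request protocol over stdin (the real proc-macro server uses its own message format): a failed or empty read means the parent rust-analyzer process is gone, so the server bails out instead of lingering.

```rust
use std::io::{self, BufRead};

fn main() {
    let stdin = io::stdin();
    let mut handle = stdin.lock();
    let mut line = String::new();
    loop {
        line.clear();
        match handle.read_line(&mut line) {
            // EOF or a read error: the parent is gone, so don't stick around.
            Ok(0) | Err(_) => panic!("failed to read request; exiting"),
            Ok(_) => {
                // handle_request(&line) would go here in the real server.
            }
        }
    }
}
```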
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
I might be reading this wrong, but it looks like we are setting it to
an essentially arbitrary string at the moment, as there is no defined
order on the items in the *set* of completions.
3954: Improve autocompletion by looking on the type and name r=matklad a=bnjjj
This tweet (https://twitter.com/tjholowaychuk/status/1248918374731714560) gave me the idea to implement this in rust-analyzer.
Basically, for this first example, I handled the case where we are inside a function call: I look at the parameter list to prioritize completions of the same type, and if a candidate matches both the type and the name, it is displayed first in the completion list.
So here is a draft, a first step to open a discussion and hear what you think about the implementation. It works (cf. the tests), but the implementation could be better in some places, and the code still needs some refactoring to be cleaner and more concise.
PS: It was a lot of fun to write, haha
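A small, self-contained sketch of the ranking idea (the names and types here are hypothetical, not rust-analyzer's completion API): candidates matching the expected parameter on both type and name sort first, type-only matches second, everything else last.

```rust
#[derive(Debug)]
struct Candidate<'a> {
    name: &'a str,
    ty: &'a str,
}

fn rank(candidates: &mut [Candidate<'_>], expected_name: &str, expected_ty: &str) {
    candidates.sort_by_key(|c| match (c.ty == expected_ty, c.name == expected_name) {
        (true, true) => 0,  // same type and same name: top of the list
        (true, false) => 1, // same type only
        _ => 2,             // everything else keeps its relative order (stable sort)
    });
}

fn main() {
    let mut candidates = vec![
        Candidate { name: "label", ty: "String" },
        Candidate { name: "count", ty: "usize" },
        Candidate { name: "len", ty: "usize" },
    ];
    // Completing an argument for a parameter `len: usize`:
    rank(&mut candidates, "len", "usize");
    println!("{:?}", candidates); // `len` first, then `count`, then `label`
}
```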
Co-authored-by: Benjamin Coenen <5719034+bnjjj@users.noreply.github.com>
In general, there should be no reason to call `.to_string_lossy()`.
If you want to display the path, use `.display()`.
If you want to pass the path to an OS API (like `std::process::Command`),
then use `PathBuf` or `OsString`.
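A short example covering both cases; the `cargo metadata` invocation is arbitrary and only there for illustration:

```rust
use std::path::PathBuf;
use std::process::Command;

fn load_project(root: PathBuf) {
    // Display: go through `display()`, no lossy conversion needed.
    eprintln!("loading project at {}", root.display());

    // OS API: hand the path over as-is; `Command` accepts `Path`/`OsStr` directly.
    let status = Command::new("cargo")
        .arg("metadata")
        .current_dir(&root)
        .status()
        .expect("failed to spawn cargo");
    eprintln!("cargo metadata exited with {}", status);
}

fn main() {
    load_project(PathBuf::from("."));
}
```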
4092: feat: run ignored tests r=matklad a=hdevalke
I started doing some exercises on https://exercism.io/ and a lot of the tests have the `#[ignore]` attribute.
The `Run Test|Debug` code lens shows up, but running the test results in:
```
running 1 test
test test_one_piece ... ignored
test result: ok. 0 passed; 0 failed; 1 ignored; 0 measured; 5 filtered out
```
This pull request adds the `--ignored` flag if needed.
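A minimal sketch of the idea (the helper and exact flag layout are assumptions, not the extension's actual runnables code): when the targeted test carries `#[ignore]`, append `--ignored` to the test-binary arguments after `--`.

```rust
use std::process::Command;

fn cargo_test(test_name: &str, is_ignored: bool) -> Command {
    let mut cmd = Command::new("cargo");
    cmd.args(&["test", test_name, "--", "--exact", "--nocapture"]);
    if is_ignored {
        // Without this, libtest reports the test as ignored and runs nothing.
        cmd.arg("--ignored");
    }
    cmd
}

fn main() {
    let cmd = cargo_test("test_one_piece", true);
    // The Debug output shows the program and its arguments, ending in "--ignored".
    println!("{:?}", cmd);
}
```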
Co-authored-by: Hannes De Valkeneer <hannes@de-valkeneer.be>
This is a quick way to implement unresolved reference diagnostics.
For example, adding this to the VS Code config:
```
"editor.tokenColorCustomizationsExperimental": {
    "unresolvedReference": "#FF0000"
},
```
will highlight all unresolved refs in red.
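On the server side the mechanism is, roughly, that a name which fails to resolve gets its own semantic token type, which the client can then color via the config above. A fully hypothetical sketch with stand-in types, not rust-analyzer's actual highlighting code:

```rust
// Stand-in types for the sketch.
enum HighlightTag {
    UnresolvedReference,
    Resolved,
}

struct Definition; // whatever name resolution returns on success

fn classify_reference(resolution: Option<Definition>) -> HighlightTag {
    match resolution {
        Some(_) => HighlightTag::Resolved,
        // No definition found: tag it so the client can color it specially.
        None => HighlightTag::UnresolvedReference,
    }
}
```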
3996: Fix path for proc-macro in nightly / stable release r=matklad a=edwin0cheng
I messed up and forgot that we use different executable names for the nightly / stable releases, so I changed it to use the current executable name instead.
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
In TextMate, `keyword.control` is used for all kinds of things; in fact,
the default scope mapping for `keyword` is `keyword.control`!
So let's add a less ambiguous `controlFlow` modifier.
See Microsoft/vscode#94367