SCIP requires symbols to be unique, but multiple functions may have a
parameter with the same name. Qualify parameter symbols with their
containing function.
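For illustration (the symbol format shown in the comments is simplified, not the exact SCIP encoding):

```rust
// Two unrelated functions that both have a parameter named `x`. Without
// qualification both parameters would emit the same symbol; qualified by the
// containing function they stay unique, e.g. `foo().(x)` vs `bar().(x)`
// (illustrative suffixes only).
fn foo(x: i32) -> i32 {
    x + 1
}

fn bar(x: i32) -> i32 {
    x * 2
}
```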
internal: Defer structured snippet rendering to allow escaping snippet bits
Since we know exactly where snippets are, we can transparently apply snippet escaping to exactly the text edits that need it, and skip it for all other text edits.
This will also eventually fix #11006 once all assists are migrated; it comes as a side effect of text edits that don't have snippets being marked as having no insert formatting at all.
Structured snippets precisely track which text edits need to be marked
as snippet text edits. In the cases where structured snippets aren't
used but snippets are still present, the changes are simple single
text-edit changes, so it's perfectly fine to mark that one text edit as
a snippet text edit.
Map our diagnostics to rustc and clippy's ones
And control their severity with lint attributes such as `#[allow]`, `#[deny]`, etc.
It doesn't work with proc macros; I would like to fix that before merging, but I don't know how to do it yet.
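A rough illustration of the intended behavior, assuming a diagnostic that maps onto rustc's `unused_variables` lint:

```rust
// Assumed example: a rust-analyzer diagnostic mapped onto rustc's
// `unused_variables` lint should respect the same attributes rustc does.
#[allow(unused_variables)]
fn allowed() {
    let x = 1; // no diagnostic reported here
}

#[warn(unused_variables)]
fn warned() {
    let x = 1; // reported as a warning-severity diagnostic here
               // (`#[deny]` would bump it to error severity)
}
```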
internal: Format let-else
As nightly rustfmt finally got support for it, I went ahead and formatted r-a with the latest nightly, then with the latest stable (in case other things changed).
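For reference, an example of the syntax in question:

```rust
// The `let`-`else` syntax that rustfmt can now format:
fn parse_port(input: &str) -> u16 {
    let Ok(port) = input.parse::<u16>() else {
        return 8080;
    };
    port
}
```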
Split out project loading capabilities from rust-analyzer crate
External tools currently depend on the entire LSP infrastructure for no good reason, so let's lift the project loading out so that those tools have something better to depend on.
Use anonymous lifetime where possible
Because anonymous lifetimes are *super* cool.
More seriously, I believe anonymous lifetimes, especially those in impl headers, reduce cognitive load to a certain extent because they usually signify that the lifetime is not relevant to the signatures of the methods within (or that we can apply the usual lifetime elision rules even if it is).
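A small example of the pattern:

```rust
struct Lexer<'a> {
    input: &'a str,
}

// The named version would be `impl<'a> Lexer<'a>`; with the anonymous
// lifetime the header signals that the lifetime doesn't matter for reading
// the methods inside.
impl Lexer<'_> {
    fn is_empty(&self) -> bool {
        self.input.is_empty()
    }
}
```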
internal: add `library` fixture meta
Currently, there is no way to specify the `CrateOrigin` of a file fixture ([this] might be a bug?). This PR adds a `library` meta to explicitly mark a fixture crate as `CrateOrigin::Library`, and also makes sure crates that belong to a library source root get `CrateOrigin::Library`.
(`library` isn't really the best name. It essentially means that the crate is outside the workspace, but `non_workspace_member` feels a bit too long. Suggestions for a better name would be appreciated.)
Additionally:
- documents the fixture meta syntax as thoroughly as possible
- refactors relevant code
[this]: 4b06d3c595/crates/base-db/src/fixture.rs (L450)
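A hypothetical sketch of such a fixture (the placement of the `library` token and the crate names are illustrative, not the exact final syntax):

```rust
// Illustrative only: a two-crate fixture where `lib` carries the new
// `library` meta and should therefore get `CrateOrigin::Library`.
//- /main.rs crate:main deps:lib
fn main() { lib::f(); }
//- /lib.rs crate:lib library
pub fn f() {}
```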
internal: remove spurious regex dependency
- replace tokio's env-filter with a smaller & simpler targets filter (sketched below)
- reshuffle the logging infra a bit to make sure there's only a single place where we read environment variables
- use anyhow::Result in the rust-analyzer binary
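A rough sketch of the direction, assuming something like `tracing_subscriber`'s `Targets` filter (the actual wiring in rust-analyzer may differ):

```rust
use tracing_subscriber::filter::{LevelFilter, Targets};

// A static targets filter needs no regex machinery: it is just a list of
// `target = max level` pairs plus a default level.
fn logging_filter() -> Targets {
    Targets::new()
        .with_default(LevelFilter::INFO)
        .with_target("salsa", LevelFilter::WARN)
        .with_target("rust_analyzer", LevelFilter::DEBUG)
}
```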
Lower const params with a bad id
cc #7434
This PR adds an `InTypeConstId`, which is a `DefWithBodyId`, lowers const generic parameters into bodies using it, and evaluates them with the MIR interpreter. I think this is the last unimplemented const generic feature relative to rustc stable.
But there is a problem: the id used in the `InTypeConstId` is the raw `FileAstId`, which changes frequently. So these ids and their bodies will be invalidated very often, which is bad for incremental analysis.
Due to this problem, I disabled this lowering for local crates (in library crates the id is stable, since files won't change). This might be overreacting (const generic expressions are usually small, so maybe having this enabled with bad performance would be better than having it disabled), but it provides motivation for doing it the correct way, and it splits the potential panics and breakages that usually come with const generic PRs into two steps.
Other than the id, I think (or at least hope) the other parts are going in the right direction.
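To illustrate the kind of code this targets (example mine):

```rust
// Anonymous const expressions in type position: each of these gets a body
// and is evaluated (per this PR, with the MIR interpreter) to produce the type.
const fn kib(n: usize) -> usize {
    n * 1024
}

struct Buffer {
    data: [u8; kib(4)],   // const fn call as an array length
    header: [u8; 16 + 8], // arithmetic expression as an array length
}
```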
Properly format documentation for `SignatureHelpRequest`s
Properly formats function documentation instead of returning it raw when responding to `SignatureHelpRequest`s.
I added a test in `crates/rust-analyzer/tests/slow-tests/main.rs` -- not sure if this is the best location given the relevant code is in `crates/rust-analyzer` or if it's possible to test in a less heavyweight manner.
Closes #14958
Add span to group.
This appears to fix #14959, but I've never contributed to rust-analyzer before and there were some things that confused me:
- I had to add the `fn byte_range` method to get it to build. This was added to rust in [April](https://github.com/rust-lang/rust/pull/109002), so I don't understand why it wasn't needed until now
- When testing, I ran into the fact that rust recently updated its `METADATA_VERSION`, so I had to test this with nightly-2023-05-20. But then I noticed that rust has its own copy of `rust-analyzer`, and the metadata version bump has already been [handled there](60e95e76d0). So I guess I don't really understand the relationship between the code there and the code here.
Prioritize threads affected by user typing
To this end I’ve introduced a new custom thread pool type which can spawn threads using each QoS class. This way we can run latency-sensitive requests under one QoS class and everything else under another QoS class. The implementation is very similar to that of the `threadpool` crate (which is currently used by rust-analyzer) but with unused functionality stripped out.
I’ll have to rebase on master once #14859 is merged but I think everything else is alright :D
This code replaces the thread pool implementation we were using
previously (from the `threadpool` crate). By making the thread pool
aware of QoS, each job spawned on the thread pool can have a different
QoS class.
This commit also replaces every QoS class used previously with Default
as a temporary measure so that each usage can be chosen deliberately.
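A minimal sketch of the idea (names and structure are illustrative, not the actual implementation): a pool whose workers are all tagged with one QoS class, so latency-sensitive request handling and background work can live on differently scheduled pools.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Illustrative QoS classes; the real set mirrors the platform API.
#[derive(Clone, Copy)]
enum QoSClass {
    UserInitiated,
    Utility,
}

type Job = Box<dyn FnOnce() + Send + 'static>;

struct Pool {
    sender: mpsc::Sender<Job>,
}

impl Pool {
    // Every worker thread in this pool runs at the given QoS class.
    fn with_qos(qos: QoSClass, threads: usize) -> Pool {
        let (sender, receiver) = mpsc::channel::<Job>();
        let receiver = Arc::new(Mutex::new(receiver));
        for _ in 0..threads {
            let receiver = Arc::clone(&receiver);
            thread::spawn(move || {
                set_current_thread_qos(qos);
                loop {
                    // The mutex guard is dropped before the job runs.
                    let job = receiver.lock().unwrap().recv();
                    match job {
                        Ok(job) => job(),
                        Err(_) => break, // pool was dropped, no more jobs
                    }
                }
            });
        }
        Pool { sender }
    }

    fn spawn(&self, job: impl FnOnce() + Send + 'static) {
        self.sender.send(Box::new(job)).unwrap();
    }
}

// Placeholder: on macOS this would call the pthread QoS API; elsewhere a no-op.
fn set_current_thread_qos(_qos: QoSClass) {}

fn main() {
    // One pool for latency-sensitive work (requests caused by typing),
    // one for everything else.
    let interactive = Pool::with_qos(QoSClass::UserInitiated, 2);
    let background = Pool::with_qos(QoSClass::Utility, 4);
    interactive.spawn(|| println!("handle completion request"));
    background.spawn(|| println!("prime caches"));
    thread::sleep(std::time::Duration::from_millis(50));
}
```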
Specify thread types using Quality of Service API
<details>
<summary>Some background (in case you haven’t heard of QoS before)</summary>
Heterogeneous multi-core CPUs are increasingly found in laptops and desktops (e.g. Alder Lake, Snapdragon 8cx Gen 3, M1). To maximize efficiency on this kind of hardware, it is important to provide the operating system with more information so threads can be scheduled on different core types appropriately.
The approach that XNU (the kernel of macOS, iOS, etc) and Windows have taken is to provide a high-level semantic API – quality of service, or QoS – which informs the OS of the program’s intent. For instance, you might specify that a thread is running a render loop for a game. This makes the OS provide this thread with as large a share of the system’s resources as possible. Specifying a thread is running an unimportant background task, on the other hand, is cause for it to be scheduled exclusively on high-efficiency cores instead of high-performance cores.
QoS APIs allow for easy configuration of many different parameters at once; for instance, setting QoS on XNU affects scheduling, timer latency, I/O priorities, and of course what core type the thread in question should run on. I don’t know any details on how QoS works on Windows, but I would guess it’s similar.
Hypothetically, taking advantage of these APIs would improve power consumption, thermals, battery life if applicable, etc.
</details>
# Relevance to rust-analyzer
From what I can tell the philosophy behind both the XNU and Windows QoS APIs is that _user interfaces should never stutter under any circumstances._ You can see this in the array of QoS classes which are available: the highest QoS class in both APIs is one intended explicitly for UI render loops.
Imagine rust-analyzer is performing CPU-intensive background work – maybe you just invoked Find Usages on `usize` or opened a large project – in this scenario the editor’s render loop should absolutely get higher priority than rust-analyzer, no matter what. You could view it in terms of “realtime-ness”: flight control software is hard realtime, audio software is soft realtime, GUIs are softer realtime, and rust-analyzer is not realtime at all. Of course, maximizing responsiveness is important, but respecting the rest of the system is more important.
# Implementation
I’ve tried my best to unify thread creation in `stdx`, where the new API I’ve introduced _requires_ specifying a QoS class. Different points along the performance/efficiency curve can make a great difference; the M1’s e-cores use around three times less power than the p-cores, so putting in this effort is worthwhile IMO.
It’s worth mentioning that Linux does not [yet](https://youtu.be/RfgPWpTwTQo) have a QoS API. Maybe translating QoS into regular thread priorities would be acceptable? From what I can tell the only scheduling-related code in rust-analyzer is Windows-specific, so ignoring QoS entirely on Linux shouldn’t cause any new issues. Also, I haven’t implemented support for the Windows QoS APIs because I don’t have a Windows machine to test on, and because I’m completely unfamiliar with Windows APIs :)
I noticed that rust-analyzer handles some requests on the main thread (using `.on_sync()`) and others on a threadpool (using `.on()`). I think it would make sense to run the main thread at the User Initiated QoS and the threadpool at Utility, but only if all requests that are caused by typing use `.on_sync()` and all that don’t use `.on()`. I don’t understand how the `.on_sync()`/`.on()` split that’s currently present was chosen, so I’ve let this code be for the moment. Let me know if changing this to what I proposed makes any sense.
To avoid having to change everything back in case I’ve misunderstood something, I’ve left all threads at the Utility QoS for now. Of course, this isn’t what I hope the code will look like in the end, but I figured I have to start somewhere :P
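To make the intent concrete, here is a sketch with hypothetical names (not the actual `stdx` API) of a spawn wrapper that forces callers to choose a QoS class and does nothing on platforms without one:

```rust
use std::thread::{self, JoinHandle};

// Mirrors the platform QoS classes documented in the references below.
#[derive(Clone, Copy)]
pub enum QoSClass {
    UserInteractive,
    UserInitiated,
    Default,
    Utility,
    Background,
}

// Like `std::thread::spawn`, but the caller must pick a QoS class.
pub fn spawn<F, T>(qos: QoSClass, f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    thread::spawn(move || {
        set_qos_for_current_thread(qos);
        f()
    })
}

fn set_qos_for_current_thread(_qos: QoSClass) {
    // On XNU this would call `pthread_set_qos_class_self_np`; Linux has no
    // QoS API yet, so nothing happens there.
}
```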
# References
<ul>
<li><a href="https://developer.apple.com/library/archive/documentation/Performance/Conceptual/power_efficiency_guidelines_osx/PrioritizeWorkAtTheTaskLevel.html">Apple documentation related to QoS</a></li>
<li><a href="67e155c940/include/pthread/qos.h">pthread API for setting QoS on XNU</a></li>
<li><a href="https://learn.microsoft.com/en-us/windows/win32/procthread/quality-of-service">Windows’s QoS classes</a></li>
<li>
<details>
<summary>Full documentation of XNU QoS classes. This documentation is only available as a huge not-very-readable comment in a header file, so I’ve reformatted it and put it here for reference.</summary>
<ul>
<li><p><strong><code>QOS_CLASS_USER_INTERACTIVE</code>: A QOS class which indicates work performed by this thread is interactive with the user.</strong></p><p>Such work is requested to run at high priority relative to other work on the system. Specifying this QOS class is a request to run with nearly all available system CPU and I/O bandwidth even under contention. This is not an energy-efficient QOS class to use for large tasks. The use of this QOS class should be limited to critical interaction with the user such as handling events on the main event loop, view drawing, animation, etc.</p></li>
<li><p><strong><code>QOS_CLASS_USER_INITIATED</code>: A QOS class which indicates work performed by this thread was initiated by the user and that the user is likely waiting for the results.</strong></p><p>Such work is requested to run at a priority below critical user-interactive work, but relatively higher than other work on the system. This is not an energy-efficient QOS class to use for large tasks. Its use should be limited to operations of short enough duration that the user is unlikely to switch tasks while waiting for the results. Typical user-initiated work will have progress indicated by the display of placeholder content or modal user interface.</p></li>
<li><p><strong><code>QOS_CLASS_DEFAULT</code>: A default QOS class used by the system in cases where more specific QOS class information is not available.</strong></p><p>Such work is requested to run at a priority below critical user-interactive and user-initiated work, but relatively higher than utility and background tasks. Threads created by <code>pthread_create()</code> without an attribute specifying a QOS class will default to <code>QOS_CLASS_DEFAULT</code>. This QOS class value is not intended to be used as a work classification, it should only be set when propagating or restoring QOS class values provided by the system.</p></li>
<li><p><strong><code>QOS_CLASS_UTILITY</code>: A QOS class which indicates work performed by this thread may or may not be initiated by the user and that the user is unlikely to be immediately waiting for the results.</strong></p><p>Such work is requested to run at a priority below critical user-interactive and user-initiated work, but relatively higher than low-level system maintenance tasks. The use of this QOS class indicates the work should be run in an energy and thermally-efficient manner. The progress of utility work may or may not be indicated to the user, but the effect of such work is user-visible.</p></li>
<li><p><strong><code>QOS_CLASS_BACKGROUND</code>: A QOS class which indicates work performed by this thread was not initiated by the user and that the user may be unaware of the results.</strong></p><p>Such work is requested to run at a priority below other work. The use of this QOS class indicates the work should be run in the most energy and thermally-efficient manner.</p></li>
<li><p><strong><code>QOS_CLASS_UNSPECIFIED</code>: A QOS class value which indicates the absence or removal of QOS class information.</strong></p><p>As an API return value, may indicate that threads or pthread attributes were configured with legacy API incompatible or in conflict with the QOS class system.</p></li>
</ul>
</details>
</li>
</ul>
feat: Highlight closure captures when cursor is on pipe or move keyword
This runs into the same issue on VS Code as the exit points for `->`, where highlights are only triggered on identifiers: https://github.com/rust-lang/rust-analyzer/issues/9395
Though putting the cursor on `move` should at least work.
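For example (with the cursor on `move` or on the opening `|`):

```rust
fn demo() {
    let mut count = 0;
    let label = String::from("hits");
    // With the cursor on `move` (or on the opening `|`), the captured
    // variables `count` and `label` get highlighted at their use sites.
    let mut bump = move || {
        count += 1;
        println!("{label}: {count}");
    };
    bump();
}
```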
internal: Warn when loading sysroot fails to find the core library
Should help a bit more with the user experience; before, we only logged this, now we show it in the status.
Closes https://github.com/rust-lang/rust-analyzer/issues/11606
Drop support for non-sysroot proc macro ABIs
This makes some bigger changes to how we handle proc-macro-srv. For one, it is now an empty crate if built without the `sysroot-abi` feature, which simplifies some things by dropping the need to put the feature cfg in various places. The CLI wrapper now actually depends on the server, instead of being a part of the server that is merely exported; that way we can have a true dummy server that just errors on each request if no sysroot support was specified.
internal: Add config to specify lru capacities for all queries
Might help in figuring out which queries should be limited by LRU by default, as currently we only limit `parse`, `parse_macro_expansion` and `macro_expand`.