ide: improve ReferenceCategoryType
It is semantically a bitset: many categorical things can be true about a reference at the same time.
In particular, a reference can be a "test" and a "write" at the same time.
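As a rough illustration of the idea, here is a bitset-style category type in plain Rust. The type and variant names are assumptions for illustration only, not the actual rust-analyzer definition.

```rust
// Hypothetical sketch of a bitset-style reference category; the variant
// names are illustrative, not the actual rust-analyzer definition.
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct ReferenceCategory(u8);

impl ReferenceCategory {
    pub const WRITE: Self = Self(1 << 0);
    pub const TEST: Self = Self(1 << 1);
    pub const IMPORT: Self = Self(1 << 2);

    pub fn union(self, other: Self) -> Self {
        Self(self.0 | other.0)
    }

    pub fn contains(self, other: Self) -> bool {
        self.0 & other.0 == other.0
    }
}

fn main() {
    // A reference can be both a "test" and a "write" at the same time.
    let cat = ReferenceCategory::TEST.union(ReferenceCategory::WRITE);
    assert!(cat.contains(ReferenceCategory::WRITE) && cat.contains(ReferenceCategory::TEST));
}
```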
internal: redesign rust-analyzer::config
This PR aims to cover the infrastructural requirements for the `rust-analyzer.toml` issue (#13529). This means that:
1. We no longer have a single config base. The once-single `ConfigData` has been divided into 4: a tree of `.ratoml` files; a set of configs coming from the client (this is what was previously called the `CrateData`, except that values no longer default to anything when they are not defined); a set of configs reflecting the contents of a `ratoml` file defined in the user's config directory (e.g. `~/.config/rust-analyzer/.rust-analyzer.toml`); and finally a tree root that is populated by default values only (see the sketch after this list).
2. Configs have also been divided into 3 different blocks: `global`, `local`, and `client`. The current status of a config may change until #13529 is merged.
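A minimal sketch of how a layered lookup over these bases could work. The struct, field names, and precedence order are assumptions for illustration, not the PR's exact design.

```rust
use std::collections::HashMap;

/// Hypothetical sketch of a layered config lookup; field names and precedence
/// order are assumptions, not the PR's exact design.
struct ConfigLayers {
    ratoml_tree: HashMap<String, String>, // nearest `rust-analyzer.toml` up the source tree
    client: HashMap<String, String>,      // values sent by the editor; no implicit defaults
    user: HashMap<String, String>,        // ratoml in the user's config directory
    defaults: HashMap<String, String>,    // tree root populated by default values only
}

impl ConfigLayers {
    /// Walk the layers until one defines the key; the defaults layer always answers.
    fn get(&self, key: &str) -> Option<&str> {
        self.ratoml_tree
            .get(key)
            .or_else(|| self.client.get(key))
            .or_else(|| self.user.get(key))
            .or_else(|| self.defaults.get(key))
            .map(String::as_str)
    }
}
```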
Once again many thanks to `@cormacrelf` for doing all the serde work.
internal: improve `TokenSet` implementation and add reserved keywords
The current `TokenSet` type represents "A bit-set of `SyntaxKind`s" as a newtype `u128`.
Internally, the flag for each `SyntaxKind` variant in the bit-set is set as the n-th LSB (least significant bit) via a bit-wise left shift operation, where n is the discriminant.
Edit: This is problematic because there are currently ~121 token `SyntaxKind`s, so adding new token kinds for missing reserved keywords pushes the number of token `SyntaxKind`s above 128, making this ["mask"](7a8374c162/crates/parser/src/token_set.rs (L31-L33)) operation overflow.
~~This is problematic because there's currently 266 SyntaxKinds, so this ["mask"](7a8374c162/crates/parser/src/token_set.rs (L31-L33)) operation silently overflows in release mode.~~
~~This leads to a single flag/bit in the bit-set being shared by multiple `SyntaxKind`s~~.
This PR:
- Changes the wrapped type for `TokenSet` from `u128` to `[u64; 3]` ~~`[u*; N]` (currently `[u16; 17]`) where `u*` can be any desirable unsigned integer type and `N` is the minimum array length needed to represent all token `SyntaxKind`s without any collisions~~.
- Edit: Add assertion that `TokenSet`s only include token `SyntaxKind`s
- Edit: Add ~7 missing [reserved keywords](https://doc.rust-lang.org/stable/reference/keywords.html#reserved-keywords)
- ~~Moves the definition of the `TokenSet` type to grammar codegen in xtask, so that `N` is adjusted automatically (depending on the chosen `u*` "base" type) when new `SyntaxKind`s are added~~.
- ~~Updates the `token_set_works_for_tokens` unit test to include the `__LAST` `SyntaxKind` as a way of catching overflows in tests.~~
~~Currently `u16` is arbitrarily chosen as the `u*` "base" type mostly because it strikes a good balance (IMO) between unused bits and readability of the generated `TokenSet` code (especially the [`union` method](7a8374c162/crates/parser/src/token_set.rs (L26-L28))), but I'm open to other suggestions or a better methodology for choosing `u*` type.~~
~~I considered using a third-party crate for the bit-set, but a direct implementation seems simple enough without adding any new dependencies. I'm not strongly opposed to using a third-party crate though, if that's preferred.~~
~~Finally, I haven't had the chance to review issues, to figure out if there are any parser issues caused by collisions due the current implementation that may be fixed by this PR - I just stumbled upon the issue while adding "new" keywords to solve #16858~~
Edit: fixes #16858
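For illustration, a widened bit-set along these lines could look as follows. This is a sketch, not necessarily the exact code merged in this PR.

```rust
/// Sketch of a `SyntaxKind` bit-set backed by `[u64; 3]` instead of `u128`,
/// giving room for up to 192 token kinds. Indexing by word (n / 64) and bit
/// (n % 64) avoids the shift overflow.
#[derive(Clone, Copy)]
pub struct TokenSet([u64; 3]);

impl TokenSet {
    pub const EMPTY: TokenSet = TokenSet([0; 3]);

    pub const fn new(kinds: &[u16]) -> TokenSet {
        let mut res = [0u64; 3];
        let mut i = 0;
        while i < kinds.len() {
            let n = kinds[i] as usize;
            // With a plain `u128`, this would be `1u128 << n`, which overflows
            // once discriminants reach 128.
            res[n / 64] |= 1u64 << (n % 64);
            i += 1;
        }
        TokenSet(res)
    }

    pub const fn union(self, other: TokenSet) -> TokenSet {
        TokenSet([
            self.0[0] | other.0[0],
            self.0[1] | other.0[1],
            self.0[2] | other.0[2],
        ])
    }

    pub const fn contains(&self, kind: u16) -> bool {
        let n = kind as usize;
        self.0[n / 64] & (1u64 << (n % 64)) != 0
    }
}
```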
Better inline preview for postfix completion
Better inline preview for postfix completion, a proper implementation of c5686c8941.
Here editors may filter the completion item using the text within `delete_range`, so we need to include the receiver text in the `lookup` (aka `filterText` in the LSP spec) for editors to find the completion item. (See https://github.com/rust-lang/rust-analyzer/issues/17036#issuecomment-2056614180; thanks to [pascalkuthe](https://github.com/pascalkuthe).)
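A rough illustration of why the receiver text matters. The struct mirrors the LSP `CompletionItem` shape and the helper is hypothetical, not rust-analyzer's API.

```rust
/// Illustrative only: field names follow the LSP `CompletionItem` shape.
struct CompletionItem {
    label: String,
    /// `lookup` on the rust-analyzer side, `filterText` in the LSP spec.
    filter_text: Option<String>,
    // The text edit replacing the whole `delete_range` is omitted here.
}

/// Build a postfix item for `receiver.label`, e.g. `(a > b).not`.
/// Because the edit replaces the receiver too, the filter text has to start
/// with the receiver text or clients will filter the item out.
fn postfix_item(receiver_text: &str, label: &str) -> CompletionItem {
    CompletionItem {
        label: label.to_string(),
        filter_text: Some(format!("{receiver_text}.{label}")),
    }
}

fn main() {
    let item = postfix_item("(a > b)", "not");
    assert_eq!(item.filter_text.as_deref(), Some("(a > b).not"));
    println!("{}", item.label);
}
```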
fix: Fix inlay hint resolution being broken
So, things broke because we now store a hash (u64) in the resolution payload, but JavaScript, and hence JSON in practice, only supports integers of up to 53 bits (anything beyond gets truncated in various ways), which caused almost all hashes to differ when resolving them. This change masks the hash to 53 bits to work around that.
Fixes https://github.com/rust-lang/rust-analyzer/issues/16962
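A minimal sketch of the kind of masking involved; the function name and constant are illustrative, not the exact code.

```rust
/// Keep only the low 53 bits of the hash so the value survives a round-trip
/// through JavaScript numbers: an f64 represents integers exactly only up to
/// 2^53 - 1 (Number.MAX_SAFE_INTEGER).
fn hash_for_payload(hash: u64) -> u64 {
    hash & ((1u64 << 53) - 1)
}

fn main() {
    // Anything above bit 52 is dropped; two hashes differing only in the high
    // bits now compare equal, which is acceptable for a cache-validation hash.
    assert_eq!(hash_for_payload(u64::MAX), (1u64 << 53) - 1);
}
```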
fix: VFS should not confuse paths with source roots that have the same prefix
Previously, the VFS would assign paths to the source root that had the longest string prefix match. This would break when we had source roots in subdirectories:
```
/foo
/foo/bar
```
Given a file `/foo/bar_baz.rs`, we would attribute it to the `/foo/bar` source root, which is wrong.
As a result, we would attribute paths to the wrong crate when a crate was in a subdirectory of another one. This is more common in larger monorepos, but could occur in any Rust project.
Fix this in the VFS, and add a test.
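A small sketch of the distinction, not the actual VFS code: a plain string-prefix check matches `/foo/bar_baz.rs` against `/foo/bar`, while a path-component-aware check does not.

```rust
use std::path::Path;

/// Naive check: treats the source root as a raw string prefix.
fn string_prefix_match(root: &str, file: &str) -> bool {
    file.starts_with(root)
}

/// Component-aware check: `Path::starts_with` compares whole path components,
/// so `/foo/bar` is not treated as a prefix of `/foo/bar_baz.rs`.
fn component_prefix_match(root: &Path, file: &Path) -> bool {
    file.starts_with(root)
}

fn main() {
    assert!(string_prefix_match("/foo/bar", "/foo/bar_baz.rs"));
    assert!(!component_prefix_match(Path::new("/foo/bar"), Path::new("/foo/bar_baz.rs")));
}
```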