Expand lint tables && make clippy happy 🎉
This PR expands the lint tables in `./Cargo.toml` and thereby makes `cargo clippy` exit successfully! 🎉 Fixes #15918
## How?
At the top there are a few warnings for rustc.
Next, and most importantly, there is the clippy lint table, which has a few sections:
1. the lint groups;
2. all lints which are permanently allowed, each with the reasoning for why it is allowed;
3. a huge list of temporarily allowed lints; these should be removed in the mid-term, but doing so incurs a substantial amount of work, so they are allowed for now and can be worked on bit by bit;
4. all lints which should warn.

Additionally, there are a few `allow` attributes in the code for lints which should be permanently allowed in that specific place, but not in the whole code base.
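As a generic illustration of the last point, a place-specific allow looks like the following (the lint and function are invented for the example, not taken from the actual diff):

```rust
// This function legitimately needs many parameters, so the lint is allowed
// for it alone; the lint still warns everywhere else in the code base.
#[allow(clippy::too_many_arguments)]
fn render(x: u32, y: u32, w: u32, h: u32, r: u8, g: u8, b: u8, a: u8) {
    let _ = (x, y, w, h, r, g, b, a);
}
```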
## Follow up work
- [ ] Run clippy in CI
- [ ] Remove tidy test (at least `@Veykril` wrote this in #15017)
- [ ] Work on temporarily allowed lints
internal: Record FnAbi
This unfortunately breaks our lub coercions, so I will need to look into fixing that first, though I am not sure what is going wrong where...
Stubbed some stuff out for the time being.
`cargo clippy --fix`
This PR is the result of running `cargo clippy --fix && cargo fmt` in the root of the repository. I did not manually review all the changes, but just skimmed through a few of them. The tests still pass, so it seems fine.
Add a new config to allow renaming of non-local defs
With #15656 we started disallowing renaming of non-local items. Although this makes sense, there are some false positives that impacted users' workflows, so this config aims to mitigate that by giving users the liberty to disable this feature.
The reason this is a draft is that I saw one of the tests fail, and I am not sure whether the "got" result even makes sense syntactically.
The test case is:
```rust
check(
    "Baz",
    r#"
//- /lib.rs crate:lib new_source_root:library
pub struct S;
//- /main.rs crate:main deps:lib new_source_root:local
use lib::S$0;
"#,
    "use lib::Baz;"
);
```
```
Left:
use lib::Baz;
Right:
use lib::Baz;Baz
Diff:
use lib::Baz;Baz
```
The first one succeeds because the functionality is already implemented.
The second one fails and represents the functionality to be implemented
in this PR.
Detect `NulInCStr` error earlier.
By making it an `EscapeError` instead of a `LitError`. This makes it like the other errors produced when checking string literal contents, e.g. for invalid escape sequences or bare CR chars.
NOTE: this means these errors are issued earlier, before expansion, which changes behaviour. It will be possible to move the check back to the later point if desired. If that happens, it's likely that all the string literal contents checks will be delayed together.
One nice thing about this: the old approach had some code in `report_lit_error` to calculate the span of the nul char from a range. This code used a hardwired `+2` to account for the `c"` at the start of a C string literal, but this should have changed to a `+3` for raw C string literals to account for the `cr"`, which meant that the caret in `cr"` nul error messages was one short of where it should have been. The new approach doesn't need any of this and avoids the off-by-one error.
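A hand-written sketch of that span arithmetic (not the compiler's actual code; the function and names are invented) to make the off-by-one concrete:

```rust
/// `contents_offset` is the position of the nul within the literal contents.
/// A plain C string literal starts with the 2-byte prefix `c"`, a raw one with
/// the 3-byte prefix `cr"`, so a hardwired `+ 2` lands one byte short of the
/// nul for raw C string literals.
fn nul_span_start(token_start: usize, contents_offset: usize, is_raw: bool) -> usize {
    let prefix_len = if is_raw { 3 } else { 2 };
    token_start + prefix_len + contents_offset
}
```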
r? `@fee1-dead`
fix: better handling of SelfParam in assist 'inline_call'
Fixes #15470.
The current `inline_call` directly translates `&self` into `let ref this = ...;` and `&mut self` into `let ref mut this = ...;`. However, it does not handle some complex scenarios.
This PR addresses the following transformations (assuming the receiving object is `obj`; a before/after sketch follows the list):
- `self`: `let this = obj`
- `mut self`: `let mut this = obj`
- `&self`: `let this = &obj`
- `&mut self`
+ If `obj` is `let mut obj = ...`, use a mutable reference: `let this = &mut obj`
+ If `obj` is `let obj = &mut ...;`, perform a reborrow: `let this = &mut *obj`
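A hypothetical before/after for the reborrow case (the type and method body are invented for illustration):

```rust
struct Counter { n: u32 }

impl Counter {
    fn bump(&mut self) {
        self.n += 1;
    }
}

fn caller(obj: &mut Counter) {
    // Before the assist:
    obj.bump();

    // After `inline_call`: `obj` is already a mutable reference, so the assist
    // reborrows instead of taking a reference to the reference:
    let this = &mut *obj;
    this.n += 1;
}
```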
internal: Follow rustfmt's algorithm for ordering imports when ordering and merging use trees
Updates use tree ordering and merging utilities to follow rustfmt's algorithm for ordering imports.
The [rustfmt implementation](6356fca675/src/imports.rs) was used as reference.
Show which roots are being scanned in progress messages
This changes the `Roots Scanned` message to include the directory being scanned.
Before: `Roots Scanned 206/210 (98%)`
After: `Roots Scanned 206/210: .direnv (98%)`
This makes it a lot easier to tell that `rust-analyzer` hasn't crashed; it's just trying to scan a huge directory.
See: #12613
fix: Acknowledge `pub(crate)` imports in import suggestions
rust-analyzer has logic that discounts suggesting `use`s for private imports, but that logic is unnecessarily strict; for instance, given this code:
```rust
mod foo {
    pub struct Foo;
}

pub(crate) use self::foo::*;

mod bar {
    fn main() {
        Foo$0;
    }
}
```
... RA will suggest adding `use crate::foo::Foo;`, which not only makes the code overly verbose (especially in larger code bases), but is also at odds with what rustc itself suggests.
This commit adjusts the logic so that `pub(crate)` imports are taken into account when generating the suggestions; considering rustc's behavior, I think this change doesn't warrant any extra configuration flag.
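For the snippet above, the shorter path goes through the `pub(crate)` re-export at the crate root, so the suggestion would presumably end up along these lines (illustrative, not quoted from the PR):

```rust
mod foo {
    pub struct Foo;
}

pub(crate) use self::foo::*;

mod bar {
    // Shorter suggestion via the `pub(crate)` re-export at the crate root:
    use crate::Foo;
    // Longer path that rust-analyzer suggested before this change:
    // use crate::foo::Foo;

    fn _takes_foo(_: Foo) {}
}
```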
Note that this is my first commit to RA, so I guess the approach taken here might be suboptimal - certainly feels somewhat hacky, maybe there's some better way of finding out the optimal import path 😅
minor: Mark unresolved associated item diagnostic as experimental
Per #16327, the unresolved associated item diagnostic has false positives. Mark the diagnostic as experimental until it is more dependable.
Resolve panic in `generate_delegate_methods`
Fixes #16276
This PR addresses two issues:
1. When using `PathTransform`, it searches for the node corresponding to the `path` in the `source_scope` during `make::fn_`. Therefore, we need to perform the transform before `make::fn_` (similar to the problem in issue #15804). Otherwise, even though the tokens are the same, their offsets (i.e., `span`) differ, resulting in the error "Can't find CONST_ARG@xxx."
2. As mentioned in the first point, `PathTransform` searches the `source_scope` for the node corresponding to the `path`. Thus, when transforming paths, we should update nodes from right to left (i.e., use the **reverse of preorder** (right -> left -> root) instead of **postorder** (left -> right -> root)). The reasoning is as follows:
In the red-green tree (rowan), we do not store absolute ranges; instead we store the length of each node and calculate offsets (spans) dynamically. Therefore, modifying a node on the left (for example by inserting or deleting nodes) changes the spans of all nodes to its right. This, in turn, leaves `PathTransform` unable to find nodes with the same paths (because their spans have changed), resulting in errors; a minimal sketch of the idea follows.
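The right-to-left ordering matters for the same reason any offset-based editing does; a minimal standalone sketch of the idea (not rust-analyzer's actual API, all names invented):

```rust
/// An edit addressed by an absolute offset into the original text.
struct Edit {
    offset: usize,   // position in the original text
    delete: usize,   // number of bytes to remove
    insert: String,  // replacement text
}

fn apply_edits(text: &str, mut edits: Vec<Edit>) -> String {
    // Apply edits from right to left: changing text at one offset never
    // shifts the offsets of the edits that are still pending.
    edits.sort_by(|a, b| b.offset.cmp(&a.offset));
    let mut out = text.to_string();
    for e in edits {
        out.replace_range(e.offset..e.offset + e.delete, &e.insert);
    }
    out
}
```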
fix: Fix `ast::Path::segments` implementation
Calling `ast::Path::segments` on a qualifier currently returns all the segments of the top path instead of just the segments of the qualifier.
The issue can be summarized by the simple failing test below:
```rust
#[test]
fn path_segments() {
    // use ra_ap_syntax::ast;
    let path: ast::Path = ...; // e.g. `ast::Path` for "foo::bar::item".
    let path_segments: Vec<_> = path.segments().collect();
    let qualifier_segments: Vec<_> = path.qualifier().unwrap().segments().collect();
    // Fails: the qualifier wrongly yields all segments of the top path,
    // so `path_segments.len() == qualifier_segments.len()`.
    assert_eq!(path_segments.len(), qualifier_segments.len() + 1);
}
```
This PR:
- Fixes the implementation of `ast::Path::segments`
- Fixes `ast::Path::segments` callers that either implicitly relied on the behavior of the previous implementation or exhibited other "wrong" behavior directly related to the result of `ast::Path::segments` (all callers have been reviewed; only one required modification)
- Removes unnecessary (and now unused) `ast::Path::segments` alternatives
fix: Differentiate between vfs config load and file changed events
Kind of fixes https://github.com/rust-lang/rust-analyzer/issues/14730, in a pretty bad way. We need to rethink the vfs-notify layer entirely for a decent fix.
internal: Only compare relevant parts in `ide::{runnables,inlay_hints}` tests
This PR limits the data being compared, so the tests should be more readable as well as more robust to changes to the data structure.
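A generic illustration of the idea (invented types, not the actual test helpers): compare a projection of just the fields the test cares about, so unrelated fields can change without breaking the test.

```rust
struct InlayHint {
    range: (u32, u32),
    label: String,
    // ...other fields the test does not care about
}

fn check_labels(hints: &[InlayHint], expected: &[&str]) {
    // Project down to the labels before comparing.
    let got: Vec<&str> = hints.iter().map(|h| h.label.as_str()).collect();
    assert_eq!(got, expected);
}
```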
Part of https://github.com/rust-lang/rust-analyzer/issues/14268.
internal: clean and enhance readability for `generate_delegate_trait`
Continue from #16112
This PR primarily involves some cleanup and simple refactoring work, including:
- Adding numerous comments to layer the code and explain the behavior of each step.
- Renaming some variables to make them more sensible.
- Simplifying certain operations with a more elegant approach.
The goal is to make this intricate implementation clearer and facilitate future maintenance.
In addition to this, the PR also removes redundant `path_transform` operations for `type_gen_args`.
Take the example of `impl Trait<T1> for S<S1>` and consider `S1`: the struct `S` must be in the file where the user triggers the code action, so there's no need for the `path_transform`. Furthermore, before performing the transform we've already renamed `S1`, ensuring it won't clash with existing generic parameters, so there's no need to transform it either.
internal: Speed up import searching some more
Pushes the sorting to the caller, meaning additional filtering can be done before sorting. Similarly, a collect call was pushed to the caller, allowing some other filters to run before collecting.
Remove completion limit for trait importing method completions
Fixes https://github.com/rust-lang/rust-analyzer/issues/16075
The < 3 char limit never applied to methods, and the number of completions generated because of this is not massive, as not all traits in a project are ever applicable, so there is little reason to employ the limit here. This is especially true as the limit applies to the number of traits we consider, not items (after my changes yesterday), and the number of traits is not the slowing factor here. I tested this in r-a, where we have ~800 traits project-wide; even when ~260 are applicable there was no noticeable slowdown from it.
internal: Move query limits to the caller
Previously we calculated up to `limit` entries from a query and then filtered them, leaving us with fewer entries than the limit in some cases (which might give odd completion behavior due to items disappearing). This changes it so we filter before checking the limit.
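In iterator terms, the change is roughly the difference between the two orderings below (a schematic sketch, not the actual query code):

```rust
fn old_way(items: impl Iterator<Item = u32>, limit: usize) -> Vec<u32> {
    // Take `limit` entries first, then filter: the result may hold fewer
    // than `limit` items even though more matches exist further on.
    items.take(limit).filter(|x| x % 2 == 0).collect()
}

fn new_way(items: impl Iterator<Item = u32>, limit: usize) -> Vec<u32> {
    // Filter first, then take: the limit applies to entries that survive
    // the filter, so up to `limit` matching items are returned.
    items.filter(|x| x % 2 == 0).take(limit).collect()
}
```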