Use restricted Damerau-Levenshtein distance for diagnostics
This replaces the existing Levenshtein algorithm with the Damerau-Levenshtein algorithm. This means that "ab" to "ba" is one change (a transposition) instead of two (a deletion and insertion). More specifically, this is a _restricted_ implementation, in that "ca" to "abc" cannot be performed as "ca" → "ac" → "abc", as there is an insertion in the middle of a transposition. I believe that errors like that are sufficiently rare that it's not worth taking into account.
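For illustration, here is a minimal sketch of the restricted variant (also known as optimal string alignment distance). This is not the implementation added to the compiler, just a demonstration of the behaviour described above:

```rust
fn osa_distance(a: &str, b: &str) -> usize {
    let a: Vec<char> = a.chars().collect();
    let b: Vec<char> = b.chars().collect();
    let (m, n) = (a.len(), b.len());
    // dp[i][j] = edit distance between a[..i] and b[..j]
    let mut dp = vec![vec![0usize; n + 1]; m + 1];
    for i in 0..=m {
        dp[i][0] = i;
    }
    for j in 0..=n {
        dp[0][j] = j;
    }
    for i in 1..=m {
        for j in 1..=n {
            let cost = if a[i - 1] == b[j - 1] { 0 } else { 1 };
            dp[i][j] = (dp[i - 1][j] + 1) // deletion
                .min(dp[i][j - 1] + 1) // insertion
                .min(dp[i - 1][j - 1] + cost); // substitution
            // A transposition of two adjacent characters counts as a single
            // edit, but the restriction forbids editing "inside" the pair.
            if i > 1 && j > 1 && a[i - 1] == b[j - 2] && a[i - 2] == b[j - 1] {
                dp[i][j] = dp[i][j].min(dp[i - 2][j - 2] + 1);
            }
        }
    }
    dp[m][n]
}

fn main() {
    assert_eq!(osa_distance("ab", "ba"), 1); // one transposition
    assert_eq!(osa_distance("prinltn", "println"), 1); // L and T transposed
    assert_eq!(osa_distance("ca", "abc"), 3); // restricted: not 2
}
```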
This was first brought up [on IRLO](https://internals.rust-lang.org/t/18227) when it was noticed that the diagnostic for `prinltn!` (transposed L and T) was `print!` and not `println!`. Only a single existing UI test was affected, and the result was an objective improvement.
~~I have left the method name and various other references to the Levenshtein algorithm untouched, as the exact manner in which the edit distance is calculated should not be relevant to the caller.~~
r? ``@estebank``
``@rustbot`` label +A-diagnostics +C-enhancement
Improve building compiler artifacts output
Fixes #108051
``@Manishearth``, ``@jyn514`` mentioned you might be interested in these changes to the outputs.
Document that CStr::as_ptr returns a type alias
Rustdoc resolves type aliases too eagerly (#15823), which makes the [std re-export](https://doc.rust-lang.org/stable/std/ffi/struct.CStr.html#method.as_ptr) of `CStr::as_ptr` show `i8` instead of `c_char`. To work around this, I've added a note about `c_char` to the method's description.
BTW, I've also added a comment to the what-not-to-do example in case someone copy-pastes it without reading the surrounding text.
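For context, here is a hedged paraphrase of that what-not-to-do example: `as_ptr` returns `*const c_char`, which rustdoc currently renders as `*const i8` on most platforms, and the pointer is only valid while the backing `CString` is alive.

```rust
use std::ffi::{c_char, CString};

fn main() {
    // Anti-pattern: the temporary CString is dropped at the end of this
    // statement, so `ptr` is immediately dangling.
    let ptr: *const c_char = CString::new("hello").unwrap().as_ptr();
    // Dereferencing `ptr` here would be undefined behaviour; keep the
    // CString alive in a binding for as long as the pointer is in use.
    let _ = ptr;
}
```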
Create dummy placeholder crate to prevent compiler from panicking
This PR is to address the panic found in https://github.com/rust-lang/rust/issues/105700.
There are two separate things going on with this panic.
First, the code could not generate a dummy result for crate fragment types when the recursion limit is hit.
This PR adds the missing method to the trait implementation for `DummyResult` so that it can create a dummy crate node.
This stops the panic from happening.
The second thing, which is not addressed here (and maybe does not need addressing? 🤷🏻),
is that when you have multiple attributes, each attribute that follows another ends up being treated as the result of expanding the former (maybe there is a better way to say that), so the recursion limit is hit even though you would not expect any expansion to be happening here.
If the recursion limit were not hit, the compiler would report that `invalid_attribute` does not exist, but it currently exits before the resolution step when the recursion limit is reached. A sketch of the shape of code involved is shown below.
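A minimal sketch of that shape, assuming multiple unresolved inner attributes on the crate root; this is illustrative only, not the exact reproducer from the linked issue:

```rust
// Each attribute after the first is treated as if it were produced by
// expanding the previous one, so the recursion limit is reached while the
// fragment being expanded is the crate itself, and resolution never gets to
// report that `invalid_attribute` does not exist.
#![invalid_attribute]
#![invalid_attribute]
#![invalid_attribute]
```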
Only include stable lints in `rustdoc::all` group
Fixes #106289.
Including unstable lints in the lint group produces unintuitive behavior
on stable (see #106289). Meanwhile, if we only included unstable lints
on nightly and not on stable, we could end up with confusing bugs that
were hard to compare across versions of Rust that lacked code changes.
I think that only including stable lints in `rustdoc::all`, no matter
the release channel, is the most intuitive option. Users can then
control unstable lints individually, which is reasonable since they have
to enable the feature gates individually anyway.
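As a hedged sketch of that individual opt-in (the lint and feature-gate names here are examples; check the rustdoc book for the current set of unstable lints):

```rust
// Enabling an unstable rustdoc lint explicitly requires its feature gate,
// so it stays opt-in regardless of what `rustdoc::all` contains.
#![feature(rustdoc_missing_doc_code_examples)]
#![warn(rustdoc::missing_doc_code_examples)]

/// A documented item without a code example; the unstable lint would warn here.
pub fn documented() {}
```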
r? `@GuillaumeGomez`
Type-directed probing for inherent associated types
When probing for inherent associated types (IATs), equate the Self-type found in the projection with the Self-type of the relevant inherent impl blocks and check if all predicates are satisfied.
Previously, we didn't look at the Self-type or at the bounds and just picked the first inherent impl block containing an associated type with the name we were searching for, which is obviously incorrect.
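To illustrate, here is a hedged sketch (assuming the incomplete `inherent_associated_types` feature; type names are made up) of the kind of program where the Self-type now matters:

```rust
#![feature(inherent_associated_types)]
#![allow(incomplete_features)]

struct Select<T>(T);

impl Select<u8> {
    type Output = bool;
}

impl Select<()> {
    type Output = String;
}

// Before this change, the first inherent impl with an associated type named
// `Output` was picked regardless of the Self type; with type-directed
// probing, `Select<()>::Output` resolves through the second impl block.
fn demo() -> Select<()>::Output {
    String::from("resolved via the impl for Select<()>")
}

fn main() {
    let _ = demo();
}
```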
Regarding the implementation, I basically copied what we do during method probing (`assemble_inherent_impl_probe`, `consider_probe`). Unfortunately, I had to duplicate a lot of the diagnostic code found in `rustc_hir_typeck::method::suggest`, which we don't have access to in `rustc_hir_analysis`. Not sure if there is a simple way to unify the error handling.

Note that in the future, `rustc_hir_analysis::astconv` might not actually be the place where we resolve inherent associated types (see https://github.com/rust-lang/rust/pull/103621#issuecomment-1304309565) but `rustc_hir_typeck` (?), in which case the duplication may naturally just disappear. While inherent associated *constants* are currently resolved during "method" probing, I did not find a straightforward way to incorporate IAT lookup into it, as types and values (functions & constants) are two separate entities for which distinct code paths are taken.
Fixes #104251 (incl. https://github.com/rust-lang/rust/issues/104251#issuecomment-1338501171).
Fixes #105305.
Fixes #107468.
`@rustbot` label T-types F-inherent_associated_types
r? types
Make codegen choose whether to emit overflow checks
ConstProp and DataflowConstProp currently have a specific code path not to propagate constants when they overflow. This is meant to have the correct behaviour when inlining from a crate with overflow checks (like `core`) into a crate compiled without.
This PR shifts the behaviour change to the `Assert(Overflow*)` MIR terminators: if the crate is compiled without overflow checks, just skip emitting the assertions. This is already what happens with `OverflowNeg`.
This allows ConstProp and DataflowConstProp to transform `CheckedBinaryOp(Add, u8::MAX, 1)` into `const (0, true)`, and let codegen ignore the `true`.
The interpreter is modified to conform to this behaviour.
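As a hedged illustration of the user-visible rule this relies on: whether an overflow panics is decided by the crate currently being compiled, not by how an inlined callee (e.g. from `core`) was compiled.

```rust
use std::hint::black_box;

// With `-C overflow-checks=on` this addition panics when `x` is u8::MAX;
// with checks off it wraps to 0. The PR makes codegen skip emitting the
// overflow assertion when the current crate has checks disabled, so constant
// propagation is free to fold the checked add without changing behaviour.
fn bump(x: u8) -> u8 {
    x + 1
}

fn main() {
    // black_box keeps the compiler from proving the overflow at compile time.
    println!("{}", bump(black_box(u8::MAX)));
}
```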
Fixes #35310
NB: Since we are using the same InferCtxt in each iteration, we essentially *spoil* the inference variables and we only ever get at most *one* applicable candidate (only the first candidate has clean variables that can still unify correctly).