Use assert_ne in hash tests
The hash tests were written before the `assert_ne!` macro was added to the standard library. The `assert_ne!` macro provides better output in case of a failure, since it prints both operands.
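A minimal sketch of the difference, with made-up hash values:
```rust
fn main() {
    let (hash_a, hash_b) = (0x1234_u64, 0x5678_u64);

    // If the hashes ever collide, this only reports
    // "assertion failed: hash_a != hash_b".
    assert!(hash_a != hash_b);

    // On failure, this also prints both values, which makes the broken case
    // much easier to diagnose.
    assert_ne!(hash_a, hash_b);
}
```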
The current check for whether a target is no_std or not matches the
string "-none-" in the target triple. This doesn't work for triples that end in
"-none", like "aarch64-unknown-none".
Fix this by matching for "-none" instead.
I checked all the current target triples containing "none", and this should
not generate any false positives.
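A minimal sketch of the adjusted check as a standalone function (the real check lives in the build logic, not in a helper like this):
```rust
// Hypothetical helper for illustration only.
fn is_no_std_target(triple: &str) -> bool {
    // Matching "-none-" misses triples that end in "-none", such as
    // "aarch64-unknown-none"; matching "-none" catches both forms.
    triple.contains("-none")
}

fn main() {
    assert!(is_no_std_target("aarch64-unknown-none"));
    assert!(is_no_std_target("armv7r-none-eabi"));
    assert!(!is_no_std_target("x86_64-unknown-linux-gnu"));
}
```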
This fixes an issue encountered in https://github.com/rust-lang/rust/pull/68334
When the obligation that couldn't be fulfilled is specific to a nested
obligation, keep both the nested and parent obligations around for
more accurate and detailed error reporting.
Surface associated type projection bounds that could not be fulfilled in
E0599 errors. Always present the list of unfulfilled trait bounds,
regardless of whether we're pointing at the ADT or trait that didn't
satisfy it.
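A minimal sketch, with made-up types, of the kind of code that produces E0599 because a bound on the inherent impl is not satisfied (it intentionally does not compile):
```rust
struct Wrapper<T>(T);

impl<T: Clone> Wrapper<T> {
    fn duplicate(&self) -> (T, T) {
        (self.0.clone(), self.0.clone())
    }
}

struct NotClone;

fn main() {
    let w = Wrapper(NotClone);
    // E0599: `duplicate` exists for `Wrapper<NotClone>`, but the bound
    // `NotClone: Clone` on the impl is not satisfied; the diagnostic can now
    // list that unfulfilled bound.
    w.duplicate();
}
```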
parser: `token` -> `normalized_token`, `nonnormalized_token` -> `token`
So, after https://github.com/rust-lang/rust/pull/69006, its follow-ups and an attempt to remove `Parser::prev_span`, I came to the conclusion that the unnormalized token and its span are what you want in most cases, so they should be the default.
Normalization only makes a difference in a few cases where we check against `token::Ident` or `token::Lifetime` specifically.
This PR uses `normalized_token` for those cases.
Using normalization explicitly means that people writing code need to keep `NtIdent` and `NtLifetime` in mind in general. (That is alleviated by the fact that `token.ident()` and `fn parse_ident_*` already exist.)
Keeping `NtIdent` in mind was, however, already more or less required, because the implicit normalization was performed only for the current/previous token, but not for things like `look_ahead`.
As a result, most of token classification methods in `token.rs` already take `NtIdent` into account (this PR fixes a few pre-existing minor mistakes though).
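A self-contained sketch of what "normalization" means here, using simplified stand-in types rather than the actual rustc token definitions: an interpolated `NtIdent`/`NtLifetime` is flattened into a plain identifier/lifetime token so that checks against `token::Ident` or `token::Lifetime` see through the interpolation.
```rust
// Simplified stand-ins for the real token types, for illustration only.
#[derive(Clone, Debug, PartialEq)]
enum TokenKind {
    Ident(String),
    Lifetime(String),
    Interpolated(Nonterminal),
    Other,
}

#[derive(Clone, Debug, PartialEq)]
enum Nonterminal {
    NtIdent(String),
    NtLifetime(String),
}

// "Normalization": unwrap interpolated identifiers and lifetimes.
fn normalize(token: &TokenKind) -> TokenKind {
    match token {
        TokenKind::Interpolated(Nonterminal::NtIdent(name)) => TokenKind::Ident(name.clone()),
        TokenKind::Interpolated(Nonterminal::NtLifetime(name)) => TokenKind::Lifetime(name.clone()),
        other => other.clone(),
    }
}

fn main() {
    let t = TokenKind::Interpolated(Nonterminal::NtIdent("foo".to_string()));
    // A classification check that only looks for `Ident` must either normalize
    // first or remember to match the interpolated form as well.
    assert_eq!(normalize(&t), TokenKind::Ident("foo".to_string()));
}
```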
The next step is removing `normalized(_prev)_token` entirely and replacing it with `token.ident()` (mostly) and `token.normalize()` (occasionally).
I want to do that in a separate PR and run it through perf.
Filling `normalized_token` on every bump has both the potential to avoid repeated normalization and the potential to do unnecessary work in advance (it probably doesn't matter either way, since normalization is very cheap).
r? @Centril
instantiate_value_path: on `SelfCtor`, avoid unconstrained tyvars
Fixes https://github.com/rust-lang/rust/issues/69306.
On `Self(...)` (that is, a `Res::SelfCtor`), do not use `self.impl_self_ty(...)`. The problem with that method is that it creates unconstrained inference variables for type parameters in the `impl` (e.g. `impl<T> S0<T>`). These variables then eventually get substituted for something else when they come in contact with the expected type (e.g. `S0<u8>`) or merely the arguments passed to the tuple constructor (e.g. the `0` in `Self(0)`).
Instead of using `self.impl_self_ty(...)`, we merely use `let ty = self.normalize_ty(span, tcx.at(span).type_of(impl_def_id));` to get the rewritten `res`.
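A minimal sketch of the scenario described above, using the `S0`/`u8` names from the examples:
```rust
struct S0<T>(T);

impl<T> S0<T> {
    fn make() -> S0<u8> {
        // `Self` here resolves to `Res::SelfCtor`. Going through
        // `impl_self_ty` would create an unconstrained inference variable for
        // the `T` in `impl<T> S0<T>`, which only later gets unified with `u8`
        // via the expected return type and the `0` argument.
        Self(0)
    }
}

fn main() {
    let _ = S0::<u8>::make();
}
```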
r? @eddyb
Account for bounds and associated items when denying `_`
Fix #68801, #69204. Follow-up to #67597 and #68071.
Output for the original ICE report:
```
Checking vinoteca v5.0.0 (/Users/ekuber/workspace/vinoteca)
error[E0121]: the type placeholder `_` is not allowed within types on item signatures
--> src/producers.rs:43:70
|
43 | pub fn top<Table: diesel::Table + diesel::query_dsl::InternalJoinDsl<_, diesel::query_source::joins::Inner, _>>(table: Table, limit: usize, connection: DbConn) -> RestResult<Vec<TopWineType>> {
| ^ not allowed in type signatures ^ not allowed in type signatures
error: aborting due to previous error
```
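For reference, a minimal sketch (not the original report's code) of a `_` inside a bound, which is now rejected with E0121 instead of causing an ICE:
```rust
// Intentionally does not compile: the `_` in the bound is a type placeholder
// in an item signature.
fn top<T: Into<_>>(x: T) {
    let _ = x;
}
```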
BTreeMap navigation done safer & faster
It turns out that there was a faster way to do the tree navigation code bundled in #67073: moving from edge to KV and from KV to next edge separately. This extracts most of the code as safe functions and confines the duplication of handles to the short wrapper functions.
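A much-simplified sketch of that split, using toy types and a single leaf node rather than the real `BTreeMap` node internals:
```rust
// Toy model: positions in a leaf node alternate between "edges" (the gaps
// around the key-value pairs) and "KVs". An in-order walk is then two small
// steps, edge -> KV and KV -> next edge, each easy to express safely.
struct LeafNode<K, V> {
    pairs: Vec<(K, V)>,
}

// An edge at index `i` sits just before `pairs[i]`; `i == pairs.len()` is the
// edge past the last KV.
struct Edge(usize);
struct Kv(usize);

impl<K, V> LeafNode<K, V> {
    // Edge -> KV: step right from an edge onto the KV it precedes, if any.
    fn next_kv(&self, edge: Edge) -> Option<Kv> {
        if edge.0 < self.pairs.len() { Some(Kv(edge.0)) } else { None }
    }

    // KV -> next edge: step right from a KV onto the edge that follows it.
    fn next_edge(&self, kv: Kv) -> Edge {
        Edge(kv.0 + 1)
    }
}

fn main() {
    let node = LeafNode { pairs: vec![(1, "a"), (2, "b"), (3, "c")] };
    let mut edge = Edge(0);
    // In-order walk: edge -> KV -> edge -> KV -> ...
    while let Some(kv) = node.next_kv(edge) {
        println!("{} => {}", node.pairs[kv.0].0, node.pairs[kv.0].1);
        edge = node.next_edge(kv);
    }
}
```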
The change somehow hits a sweet spot in the compiler, because the benchmarks report boosts across the board:
```
>cargo benchcmp pre3.txt posz4.txt --threshold 5
name pre3.txt ns/iter posz4.txt ns/iter diff ns/iter diff % speedup
btree::map::first_and_last_0 40 37 -3 -7.50% x 1.08
btree::map::first_and_last_100 58 44 -14 -24.14% x 1.32
btree::map::iter_1000 8,920 3,419 -5,501 -61.67% x 2.61
btree::map::iter_100000 1,069,290 411,615 -657,675 -61.51% x 2.60
btree::map::iter_20 169 58 -111 -65.68% x 2.91
btree::map::iter_mut_1000 8,701 3,303 -5,398 -62.04% x 2.63
btree::map::iter_mut_100000 1,034,560 405,975 -628,585 -60.76% x 2.55
btree::map::iter_mut_20 165 58 -107 -64.85% x 2.84
btree::set::clone_100 1,831 1,562 -269 -14.69% x 1.17
btree::set::clone_100_and_clear 1,831 1,565 -266 -14.53% x 1.17
btree::set::clone_100_and_into_iter 1,917 1,541 -376 -19.61% x 1.24
btree::set::clone_100_and_pop_all 2,609 2,441 -168 -6.44% x 1.07
btree::set::clone_100_and_remove_all 4,598 3,927 -671 -14.59% x 1.17
btree::set::clone_100_and_remove_half 2,765 2,551 -214 -7.74% x 1.08
btree::set::clone_10k 191,610 164,616 -26,994 -14.09% x 1.16
btree::set::clone_10k_and_clear 192,003 164,616 -27,387 -14.26% x 1.17
btree::set::clone_10k_and_into_iter 200,037 163,010 -37,027 -18.51% x 1.23
btree::set::clone_10k_and_pop_all 267,023 250,913 -16,110 -6.03% x 1.06
btree::set::clone_10k_and_remove_all 536,230 464,100 -72,130 -13.45% x 1.16
btree::set::clone_10k_and_remove_half 453,350 430,545 -22,805 -5.03% x 1.05
btree::set::difference_random_100_vs_100 1,787 801 -986 -55.18% x 2.23
btree::set::difference_random_100_vs_10k 2,978 2,696 -282 -9.47% x 1.10
btree::set::difference_random_10k_vs_100 111,075 54,734 -56,341 -50.72% x 2.03
btree::set::difference_random_10k_vs_10k 246,380 175,980 -70,400 -28.57% x 1.40
btree::set::difference_staggered_100_vs_100 1,789 951 -838 -46.84% x 1.88
btree::set::difference_staggered_100_vs_10k 2,798 2,606 -192 -6.86% x 1.07
btree::set::difference_staggered_10k_vs_10k 176,452 97,401 -79,051 -44.80% x 1.81
btree::set::intersection_100_neg_vs_10k_pos 34 32 -2 -5.88% x 1.06
btree::set::intersection_100_pos_vs_100_neg 30 27 -3 -10.00% x 1.11
btree::set::intersection_random_100_vs_100 1,537 613 -924 -60.12% x 2.51
btree::set::intersection_random_100_vs_10k 2,793 2,649 -144 -5.16% x 1.05
btree::set::intersection_random_10k_vs_10k 222,127 147,166 -74,961 -33.75% x 1.51
btree::set::intersection_staggered_100_vs_100 1,447 622 -825 -57.01% x 2.33
btree::set::intersection_staggered_100_vs_10k 2,606 2,382 -224 -8.60% x 1.09
btree::set::intersection_staggered_10k_vs_10k 143,620 58,790 -84,830 -59.07% x 2.44
btree::set::is_subset_100_vs_100 1,349 488 -861 -63.83% x 2.76
btree::set::is_subset_100_vs_10k 1,720 1,428 -292 -16.98% x 1.20
btree::set::is_subset_10k_vs_10k 135,984 48,527 -87,457 -64.31% x 2.80
```
The `first_and_last` ones are noise (they don't do iteration); the others seem genuine.
As always, approved by Miri.
There is also a separate commit with some more benchmarks of mutable behaviour (which also benefit).
r? @cuviper
Canonicalize inputs to const eval where needed
Canonicalize inputs to const eval so that they can contain inference variables. This enables invoking const eval queries even if the current param env has inference variables within it, which can occur during trait selection.
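A toy sketch of what canonicalizing an input means, with simplified stand-in types rather than the real rustc representation: free inference variables are replaced by numbered canonical variables, so the query key no longer depends on the particular inference context it came from.
```rust
use std::collections::HashMap;

// Simplified stand-in for a type with inference variables, for illustration only.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    U8,
    Infer(u32),     // an inference variable from the current inference context
    Canonical(u32), // a canonical bound variable in the query key
    Ref(Box<Ty>),
}

// Replace each distinct inference variable with a fresh canonical variable.
fn canonicalize(ty: &Ty, map: &mut HashMap<u32, u32>) -> Ty {
    match ty {
        Ty::Infer(v) => {
            let next = map.len() as u32;
            Ty::Canonical(*map.entry(*v).or_insert(next))
        }
        Ty::Ref(inner) => Ty::Ref(Box::new(canonicalize(inner, map))),
        other => other.clone(),
    }
}

fn main() {
    // `&?7` and `&?42` canonicalize to the same key, so a query result computed
    // for one inference context can be reused for another.
    let (mut m1, mut m2) = (HashMap::new(), HashMap::new());
    assert_eq!(
        canonicalize(&Ty::Ref(Box::new(Ty::Infer(7))), &mut m1),
        canonicalize(&Ty::Ref(Box::new(Ty::Infer(42))), &mut m2),
    );
}
```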
This PR is a reattempt of #67717, in a far less invasive way.
Fixes #68477
r? @nikomatsakis
cc @eddyb