Rollup of 8 pull requests
Successful merges:
- #67888 (Prefetch some queries used by the metadata encoder)
- #69934 (Update the mir inline costs)
- #69965 (Refactorings to get rid of rustc_codegen_utils)
- #70054 (Build dist-android with --enable-profiler)
- #70089 (rustc_infer: remove InferCtxt::closure_sig as the FnSig is always shallowly known.)
- #70092 (hir: replace "items" terminology with "nodes" where appropriate.)
- #70138 (do not 'return' in 'throw_' macros)
- #70151 (Update stdarch submodule)
Failed merges:
- #70074 (Expand: nix all fatal errors)
r? @ghost
do not 'return' in 'throw_' macros
In https://github.com/rust-lang/rust/pull/69839 we turned a closure into a `try` block, but it turns out that does not work with our `throw_` macros, which `return` and therefore skip the `try` entirely.
Here we fix that. For some reason that means we also have to remove some `;`.
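For illustration, here is a standalone sketch of the difference, using the nightly `try_blocks` feature and a made-up `throw_err!` macro (not the real rustc macros): a `return` inside a closure only exits the closure, but a `return` inside a `try` block exits the enclosing function and skips the block.

```rust
#![feature(try_blocks)]

macro_rules! throw_err {
    ($msg:expr) => {
        return Err($msg)
    };
}

fn with_closure() -> Result<i32, &'static str> {
    let res: Result<i32, &'static str> = (|| {
        throw_err!("the `return` only exits the closure");
    })();
    res // the closure's Err is observed here, as intended
}

#[allow(unreachable_code)]
fn with_try_block() -> Result<i32, &'static str> {
    let res: Result<i32, &'static str> = try {
        throw_err!("the `return` exits the whole function");
    };
    res // never reached: the macro returned from `with_try_block` directly
}

fn main() {
    assert!(with_closure().is_err());
    assert!(with_try_block().is_err());
}
```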
r? @oli-obk
hir: replace "items" terminology with "nodes" where appropriate.
The newly added `HirOwnerItems` confused me before I realized that "items" there actually referred to HIR nodes, not `hir::Item` or "item-like" (which we should IMO replace with "owner").
I suspect the naming had something to do with `ItemLocalId`'s use of "item".
That is, `ItemLocalId` could be interpreted to mean one of two things:
* `IntraItemNodeId`, i.e. `IntraOwnerNodeId`
  * this is IMO correct, and I'd even like to rename it, but I didn't want to throw that into this PR
* `IntraOwnerItemId`
  * this is what `HirOwnerItems` would seem to imply
r? @Zoxc cc @michaelwoerister @nikomatsakis
rustc_infer: remove InferCtxt::closure_sig as the FnSig is always shallowly known.
That is, `ClosureSubsts` is always created (in `rustc_typeck::check::closure`) with a `FnSig`, as the number of inputs is known, even if they might all have inference types.
The only useful thing `InferCtxt::closure_sig` was doing is resolving an inference variable used just to get the `ty::FnPtr` containing that `FnSig` into `ClosureSubsts`.
The ideal way to solve this would be to add a constructor for `ClosureSubsts` that combines the parent `Substs`, the closure kind, the signature, and the capture types, but for now I've gone with resolving the inference types just after unifying them with the real types.
r? @nikomatsakis
Refactorings to get rid of rustc_codegen_utils
r? @eddyb
cc #45276
After this, the only modules left in `rustc_codegen_utils` are
- `link`: a bunch of linking-related functions (many dealing with file names). These are mostly consumed by save analysis, rustc_driver, rustc_interface, and of course codegen. I assume they live here because we don't want a dependency of save analysis on codegen... Perhaps they can be moved to librustc?
- ~`symbol_names` and `symbol_names_test`: honestly it seems odd that `symbol_names_test` is not a submodule of `symbol_names`. It seems like these could honestly live in their own crate or move to librustc. Already name mangling is exported as the `symbol_name` query.~ (move it to its own crate)
I don't mind doing either of the above as part of this PR or a followup if you want.
Update the mir inline costs
Account for the fact that more code is generated when MIR is lowered to LLVM IR: a landing pad generates 10 LLVM IR instructions and a resume generates 9.
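As a rough, hypothetical sketch of the idea only (not the actual rustc inliner code), the cost model can weight terminators by roughly how many LLVM IR instructions they expand to, so the inlining threshold reflects post-lowering size:

```rust
// Approximate per-terminator weights, in "LLVM IR instructions".
const INSTR_COST: usize = 1;
const LANDING_PAD_COST: usize = 10; // a landing pad lowers to ~10 LLVM IR instructions
const RESUME_COST: usize = 9; // a resume terminator lowers to ~9 LLVM IR instructions

enum Terminator {
    Call { has_cleanup: bool },
    Resume,
    Other,
}

fn terminator_cost(term: &Terminator) -> usize {
    match term {
        // A call with an unwind edge also has to emit a landing pad.
        Terminator::Call { has_cleanup: true } => INSTR_COST + LANDING_PAD_COST,
        Terminator::Call { has_cleanup: false } => INSTR_COST,
        Terminator::Resume => RESUME_COST,
        Terminator::Other => INSTR_COST,
    }
}

fn main() {
    let body = [
        Terminator::Call { has_cleanup: true },
        Terminator::Resume,
        Terminator::Other,
    ];
    let cost: usize = body.iter().map(terminator_cost).sum();
    assert_eq!(cost, (1 + 10) + 9 + 1);
}
```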
r? @wesleywiser
Prefetch some queries used by the metadata encoder
This brings the time for `metadata encoding and writing` for `syntex_syntax` from 1.338s to 0.997s with 6 threads in non-incremental debug mode.
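As a hypothetical sketch of the idea only (not rustc's actual query system): expensive per-item results are computed on worker threads ahead of time, so the later, serial encoding pass finds them already cached.

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::thread;

// Stand-in for an expensive query whose result gets memoized.
fn expensive_query(item: u32) -> u64 {
    (0..1_000u64).map(|i| i ^ item as u64).sum()
}

fn main() {
    let items: Vec<u32> = (0..64).collect();
    let cache: Mutex<HashMap<u32, u64>> = Mutex::new(HashMap::new());

    // Prefetch phase: populate the cache in parallel.
    thread::scope(|s| {
        let cache = &cache;
        for chunk in items.chunks(16) {
            s.spawn(move || {
                for &item in chunk {
                    let value = expensive_query(item);
                    cache.lock().unwrap().insert(item, value);
                }
            });
        }
    });

    // Serial "encoding" phase: every lookup now hits the cache.
    let cache = cache.into_inner().unwrap();
    let total: u64 = items.iter().map(|item| cache[item]).sum();
    println!("encoded {} items, checksum {}", items.len(), total);
}
```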
r? @Mark-Simulacrum
Rollup of 16 pull requests
Successful merges:
- #65097 (Make std::sync::Arc compatible with ThreadSanitizer)
- #69033 (Use generator resume arguments in the async/await lowering)
- #69997 (add `Option::{zip,zip_with}` methods under "option_zip" gate)
- #70038 (Remove the call that makes miri fail)
- #70058 (can_begin_literal_maybe_minus: `true` on `"-"? lit` NTs.)
- #70111 (BTreeMap: remove shared root)
- #70139 (add delay_span_bug to TransmuteSizeDiff, just to be sure)
- #70165 (Remove the erase regions MIR transform)
- #70166 (Derive PartialEq, Eq and Hash for RangeInclusive)
- #70176 (Add tests for #58319 and #65131)
- #70177 (Fix oudated comment for NamedRegionMap)
- #70184 (expand_include: set `.directory` to dir of included file.)
- #70187 (more clippy fixes)
- #70188 (Clean up E0439 explanation)
- #70189 (Abi::is_signed: assert that we are a Scalar)
- #70194 (#[must_use] on split_off())
Failed merges:
r? @ghost
Abi::is_signed: assert that we are a Scalar
A bit more sanity checking, suggested by @eddyb. This makes this method actually "safer" than `TyS::is_signed`, so I made sure Miri consistently uses the `Abi` version.
Though I am not sure whether this would have caught the mistake where a zero-sized enum's layout was asked for its sign.
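As a hypothetical sketch of the assertion pattern (the types here are made up, not the real `rustc_target` ones): asking a non-scalar layout for its sign is a bug, so it should panic rather than silently return `false`.

```rust
#[derive(Debug)]
#[allow(dead_code)] // not every variant is exercised in this tiny example
enum Primitive {
    Int { signed: bool },
    Float,
    Pointer,
}

#[derive(Debug)]
#[allow(dead_code)]
enum Abi {
    Scalar(Primitive),
    Aggregate,
}

impl Abi {
    fn is_signed(&self) -> bool {
        match self {
            Abi::Scalar(Primitive::Int { signed }) => *signed,
            Abi::Scalar(_) => false,
            // Loudly reject anything that is not a scalar.
            other => panic!("`is_signed` called on non-scalar ABI {:?}", other),
        }
    }
}

fn main() {
    assert!(Abi::Scalar(Primitive::Int { signed: true }).is_signed());
    assert!(!Abi::Scalar(Primitive::Float).is_signed());
}
```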
r? @eddyb
Derive PartialEq, Eq and Hash for RangeInclusive
The manual implementation of `PartialEq`, `Eq` and `Hash` for `RangeInclusive` was functionally equivalent to a derived implementation.
This change removes the manual implementation and adds the respective derives.
A side effect of this change is that the derives also add implementations for `StructuralPartialEq` and `StructuralEq`, which enables `RangeInclusive` to be used in const generics, closing #70155.
This change is enabled by #68835, which changed the field `is_empty: Option<bool>` to `exhausted: bool` removing the need for *semantic* equality instead of *structural* equality.
## PartialEq
original [`PartialEq`](f4c675c476/src/libcore/ops/range.rs (L353-L359)) implementation:
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: PartialEq> PartialEq for RangeInclusive<Idx> {
    #[inline]
    fn eq(&self, other: &Self) -> bool {
        self.start == other.start && self.end == other.end && self.exhausted == other.exhausted
    }
}
```
expanded derive implementation (using `cargo expand ops::range`):
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx> crate::marker::StructuralPartialEq for RangeInclusive<Idx> {}
#[automatically_derived]
#[allow(unused_qualifications)]
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: crate::cmp::PartialEq> crate::cmp::PartialEq for RangeInclusive<Idx> {
    #[inline]
    fn eq(&self, other: &RangeInclusive<Idx>) -> bool {
        match *other {
            RangeInclusive { start: ref __self_1_0, end: ref __self_1_1, exhausted: ref __self_1_2 } => match *self {
                RangeInclusive { start: ref __self_0_0, end: ref __self_0_1, exhausted: ref __self_0_2 } => {
                    (*__self_0_0) == (*__self_1_0) && (*__self_0_1) == (*__self_1_1) && (*__self_0_2) == (*__self_1_2)
                }
            },
        }
    }
    #[inline]
    fn ne(&self, other: &RangeInclusive<Idx>) -> bool {
        match *other {
            RangeInclusive { start: ref __self_1_0, end: ref __self_1_1, exhausted: ref __self_1_2 } => match *self {
                RangeInclusive { start: ref __self_0_0, end: ref __self_0_1, exhausted: ref __self_0_2 } => {
                    (*__self_0_0) != (*__self_1_0) || (*__self_0_1) != (*__self_1_1) || (*__self_0_2) != (*__self_1_2)
                }
            },
        }
    }
}
```
These implementations both test for *structural* equality, with the same order of field comparisons, and the bound `Idx: PartialEq` is the same.
## Eq
original [`Eq`](f4c675c476/src/libcore/ops/range.rs (L361-L362)) implementation:
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: Eq> Eq for RangeInclusive<Idx> {}
```
expanded derive implementation (using `cargo expand ops::range`):
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx> crate::marker::StructuralEq for RangeInclusive<Idx> {}
#[automatically_derived]
#[allow(unused_qualifications)]
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: crate::cmp::Eq> crate::cmp::Eq for RangeInclusive<Idx> {
    #[inline]
    #[doc(hidden)]
    fn assert_receiver_is_total_eq(&self) -> () {
        {
            let _: crate::cmp::AssertParamIsEq<Idx>;
            let _: crate::cmp::AssertParamIsEq<Idx>;
            let _: crate::cmp::AssertParamIsEq<bool>;
        }
    }
}
```
These implementations are equivalent since `Eq` is just a marker trait and the bound `Idx: Eq` is the same.
## Hash
original [`Hash`](f4c675c476/src/libcore/ops/range.rs (L364-L371)) implementation:
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: Hash> Hash for RangeInclusive<Idx> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.start.hash(state);
        self.end.hash(state);
        self.exhausted.hash(state);
    }
}
```
expanded derive implementation (using `cargo expand ops::range`):
```rust
#[automatically_derived]
#[allow(unused_qualifications)]
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: crate::hash::Hash> crate::hash::Hash for RangeInclusive<Idx> {
    fn hash<__H: crate::hash::Hasher>(&self, state: &mut __H) -> () {
        match *self {
            RangeInclusive { start: ref __self_0_0, end: ref __self_0_1, exhausted: ref __self_0_2 } => {
                crate::hash::Hash::hash(&(*__self_0_0), state);
                crate::hash::Hash::hash(&(*__self_0_1), state);
                crate::hash::Hash::hash(&(*__self_0_2), state)
            }
        }
    }
}
```
These implementations are functionally equivalent, with the same order of field hashing, and the bound `Idx: Hash` is the same.
BTreeMap: remove shared root
This replaces the shared root with `Option`s in the `BTreeMap` code and then slightly cleans up the node manipulation code, taking advantage of the shared root's removal. I expect that further simplification is possible, but wanted to get this posted for initial review.
Note that `BTreeMap::new()` continues to not allocate.
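As a rough sketch of the shape of the change (illustrative types only, not the real `BTreeMap` internals): instead of pointing every empty map at a statically allocated shared root, the root is an `Option` that stays `None` until the first insertion, so `new()` still does not allocate.

```rust
struct Node<K, V> {
    keys: Vec<K>,
    vals: Vec<V>,
}

struct Map<K, V> {
    root: Option<Box<Node<K, V>>>, // None => empty, nothing allocated yet
    length: usize,
}

impl<K, V> Map<K, V> {
    const fn new() -> Self {
        Map { root: None, length: 0 }
    }

    fn insert(&mut self, key: K, val: V) {
        // Allocate the root lazily, only when the first element arrives;
        // this is the entirely predictable branch referred to below.
        let root = self.root.get_or_insert_with(|| {
            Box::new(Node { keys: Vec::new(), vals: Vec::new() })
        });
        root.keys.push(key);
        root.vals.push(val);
        self.length += 1;
    }

    fn len(&self) -> usize {
        self.length
    }
}

fn main() {
    let mut m = Map::new();
    m.insert(1, "one");
    assert_eq!(m.len(), 1);
}
```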
Benchmarks seem within the margin of error/unaffected, as expected for an entirely predictable branch.
```
name                                 alloc-bench-a ns/iter  alloc-bench-b ns/iter  diff ns/iter  diff %  speedup
btree::map::iter_mut_20                                 20                     21             1   5.00%   x 0.95
btree::set::clone_100                                1,360                  1,439            79   5.81%   x 0.95
btree::set::clone_100_and_into_iter                  1,319                  1,434           115   8.72%   x 0.92
btree::set::clone_10k                              143,515                150,991         7,476   5.21%   x 0.95
btree::set::clone_10k_and_clear                    142,792                152,916        10,124   7.09%   x 0.93
btree::set::clone_10k_and_into_iter                146,019                154,561         8,542   5.85%   x 0.94
```
can_begin_literal_maybe_minus: `true` on `"-"? lit` NTs.
Make `can_begin_literal_or_bool` (renamed to `can_begin_literal_maybe_minus`) accept `NtLiteral(e) | NtExpr(e)` where `e` is either a literal or a negated literal.
Fixes https://github.com/rust-lang/rust/issues/70050.
r? @petrochenkov
add `Option::{zip,zip_with}` methods under "option_zip" gate
This PR introduces 2 methods - `Option::zip` and `Option::zip_with` with
respective signatures:
- zip: `(Option<T>, Option<U>) -> Option<(T, U)>`
- zip_with: `(Option<T>, Option<U>, (T, U) -> R) -> Option<R>`
Both are under the feature gate "option_zip".
I'm not sure about the name "zip"; maybe we can find a better name for this.
(I would prefer `union`, for example, but that's a keyword :( )
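For illustration, here is a minimal sketch of the proposed semantics, written as an extension trait with made-up `*_sketch` names (only `core` itself can add the real inherent methods behind the "option_zip" gate):

```rust
trait OptionZipSketch<T> {
    fn zip_sketch<U>(self, other: Option<U>) -> Option<(T, U)>;
    fn zip_with_sketch<U, R>(self, other: Option<U>, f: impl FnOnce(T, U) -> R) -> Option<R>;
}

impl<T> OptionZipSketch<T> for Option<T> {
    fn zip_sketch<U>(self, other: Option<U>) -> Option<(T, U)> {
        self.zip_with_sketch(other, |a, b| (a, b))
    }

    fn zip_with_sketch<U, R>(self, other: Option<U>, f: impl FnOnce(T, U) -> R) -> Option<R> {
        // Some only if *both* sides are Some; otherwise the result is None.
        match (self, other) {
            (Some(a), Some(b)) => Some(f(a, b)),
            _ => None,
        }
    }
}

fn main() {
    assert_eq!(Some(1).zip_sketch(Some("a")), Some((1, "a")));
    assert_eq!(Some(1).zip_with_sketch(Some(2), |a, b| a + b), Some(3));
    assert_eq!(None::<i32>.zip_sketch(Some("a")), None);
}
```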
--------------------------------------------------------------------------------
Recently, in a Russian Rust beginners Telegram chat, a newbie asked (translated):
> Are there any methods for these conversions:
>
> 1. `(Option<A>, Option<B>) -> Option<(A, B)>`
> 2. `Vec<Option<T>> -> Option<Vec<T>>`
>
> ?
While the second (2.) is clearly `vec.into_iter().collect::<Option<Vec<_>>>()`, the
first one isn't as obvious.
I couldn't find anything similar in the `core` and I've come to this solution:
```rust
let tuple: (Option<A>, Option<B>) = ...;
let res: Option<(A, B)> = tuple.0.and_then(|a| tuple.1.map(|b| (a, b)));
```
However this solution isn't "nice" (same for just `match`/`if let`), so I thought
that this functionality should be in `core`.
Use generator resume arguments in the async/await lowering
This removes the TLS requirement from async/await and enables it in `#![no_std]` crates.
Closes https://github.com/rust-lang/rust/issues/56974
I'm not confident the HIR lowering is completely correct; there seem to be quite a few undocumented invariants in there. The `async-std` and tokio test suites are passing with these changes, though.
Make std::sync::Arc compatible with ThreadSanitizer
The memory fences previously used in the `Arc` implementation are not
understood by ThreadSanitizer as synchronization primitives. This had
the unfortunate effect that running any non-trivial program compiled
with `-Z sanitizer=thread` would report numerous false positives.
Replace the acquire fences with acquire loads to address the issue.
Fixes #39608.
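The shape of the change, sketched on a bare counter (illustrative only, not the actual `Arc` code): ThreadSanitizer does not model standalone fences as synchronization, so the fence on the "last reference" path is replaced by an acquire load of the same counter.

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

// Before: Release decrement + Acquire fence.
fn release_with_fence(strong: &AtomicUsize) -> bool {
    if strong.fetch_sub(1, Ordering::Release) != 1 {
        return false;
    }
    fence(Ordering::Acquire); // invisible to ThreadSanitizer
    true // last reference gone: safe to drop the shared data
}

// After: Release decrement + Acquire load, which TSan understands.
fn release_with_load(strong: &AtomicUsize) -> bool {
    if strong.fetch_sub(1, Ordering::Release) != 1 {
        return false;
    }
    let _ = strong.load(Ordering::Acquire); // synchronizes-with earlier Release decrements
    true
}

fn main() {
    let a = AtomicUsize::new(1);
    assert!(release_with_fence(&a));
    let b = AtomicUsize::new(2);
    assert!(!release_with_load(&b));
}
```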
Namely, this version focuses on the end-to-end behavior that the attempt to
create the UDP binding will fail, regardless of the semantics of how particular
DNS servers handle junk inputs.
(I spent some time trying to create a second more-focused test that would
sidestep the DNS resolution, but this is not possible without more invasive
changes to the internal infrastructure of `ToSocketAddrs` and what not. It is
not worth it.)
This commit fixes an issue where if `eprintln!` is used in a TLS
destructor it can accidentally cause the process to abort. TLS
destructors are executed after `main` returns on the main thread, and at
this point we've also deinitialized global `Lazy` values like those
which store the `Stderr` and `Stdout` internals. This means that despite
`eprintln!` handling the case where TLS is not accessible, we will fail
because we cannot call `stderr()`, and we then quickly double-panic
because panicking also attempts to write to stderr.
The fix here is to reimplement the global stderr handle to avoid the
need for destruction. This avoids the need for `Lazy` as well as the
hidden panic inside of the `stderr` function.
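As an illustrative, Unix-only sketch of the general technique (not the actual std implementation): a stderr handle built from the raw file descriptor needs no lazily initialized global and no destructor, so it keeps working even from TLS destructors that run after `main` has returned.

```rust
use std::fs::File;
use std::io::Write;
use std::mem::ManuallyDrop;
use std::os::unix::io::FromRawFd;

fn write_to_stderr(msg: &str) {
    // fd 2 is stderr on Unix; ManuallyDrop ensures we never close it.
    let mut out = ManuallyDrop::new(unsafe { File::from_raw_fd(2) });
    let _ = out.write_all(msg.as_bytes());
}

fn main() {
    write_to_stderr("no global state or destructor involved\n");
}
```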
Overall this should improve the robustness of printing errors and/or
panics in weird situations, since the `stderr` accessor should be
infallible in more situations.