Abi::is_signed: assert that we are a Scalar
A bit more sanity checking, suggested by @eddyb. This makes this method actually "safer" than `TyS::is_signed`, so I made sure Miri consistently uses the `Abi` version.
Though I am not sure if this would have caught the mistake where the layout of a zero-sized enum was asked for its sign.
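For illustration, here is a minimal self-contained sketch of the shape of that assertion, with toy enums standing in for the real rustc layout types (names are illustrative, not the actual source):
```rust
// Toy stand-ins for the rustc layout types; illustrative only.
#[derive(Clone, Copy)]
enum Primitive {
    Int { signed: bool },
    F32,
}

enum Abi {
    Scalar(Primitive),
    Aggregate,
}

impl Abi {
    fn is_signed(&self) -> bool {
        match self {
            Abi::Scalar(Primitive::Int { signed }) => *signed,
            Abi::Scalar(_) => false,
            // The added sanity check: asking a non-scalar for its sign
            // is a bug, so panic instead of silently answering `false`.
            _ => panic!("`is_signed` called on non-scalar ABI"),
        }
    }
}

fn main() {
    assert!(Abi::Scalar(Primitive::Int { signed: true }).is_signed());
    assert!(!Abi::Scalar(Primitive::F32).is_signed());
}
```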
r? @eddyb
Derive PartialEq, Eq and Hash for RangeInclusive
The manual implementation of `PartialEq`, `Eq` and `Hash` for `RangeInclusive` was functionally equivalent to a derived implementation.
This change removes the manual implementation and adds the respective derives.
A side effect of this change is that the derives also add implementations for `StructuralPartialEq` and `StructuralEq`, which enables `RangeInclusive` to be used in const generics, closing #70155.
This change is enabled by #68835, which changed the field `is_empty: Option<bool>` to `exhausted: bool`, removing the need for *semantic* (rather than *structural*) equality.
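As a quick behavioral check (my example, not from the PR): equality is structural over all three fields, so advancing a range makes it compare unequal to a fresh one.
```rust
fn main() {
    let mut a = 1..=3;
    let b = 1..=3;
    assert_eq!(a, b); // same start, end, and exhausted state

    a.next(); // advancing mutates `start`
    assert_ne!(a, b);
}
```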
## PartialEq
original [`PartialEq`](f4c675c476/src/libcore/ops/range.rs (L353-L359)) implementation:
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: PartialEq> PartialEq for RangeInclusive<Idx> {
    #[inline]
    fn eq(&self, other: &Self) -> bool {
        self.start == other.start && self.end == other.end && self.exhausted == other.exhausted
    }
}
```
expanded derive implementation (using `cargo expand ops::range`):
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx> crate::marker::StructuralPartialEq for RangeInclusive<Idx> {}
#[automatically_derived]
#[allow(unused_qualifications)]
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: crate::cmp::PartialEq> crate::cmp::PartialEq for RangeInclusive<Idx> {
    #[inline]
    fn eq(&self, other: &RangeInclusive<Idx>) -> bool {
        match *other {
            RangeInclusive { start: ref __self_1_0, end: ref __self_1_1, exhausted: ref __self_1_2 } => match *self {
                RangeInclusive { start: ref __self_0_0, end: ref __self_0_1, exhausted: ref __self_0_2 } => {
                    (*__self_0_0) == (*__self_1_0) && (*__self_0_1) == (*__self_1_1) && (*__self_0_2) == (*__self_1_2)
                }
            },
        }
    }
    #[inline]
    fn ne(&self, other: &RangeInclusive<Idx>) -> bool {
        match *other {
            RangeInclusive { start: ref __self_1_0, end: ref __self_1_1, exhausted: ref __self_1_2 } => match *self {
                RangeInclusive { start: ref __self_0_0, end: ref __self_0_1, exhausted: ref __self_0_2 } => {
                    (*__self_0_0) != (*__self_1_0) || (*__self_0_1) != (*__self_1_1) || (*__self_0_2) != (*__self_1_2)
                }
            },
        }
    }
}
```
These implementations both test for *structural* equality, with the same order of field comparisons, and the bound `Idx: PartialEq` is the same.
## Eq
original [`Eq`](f4c675c476/src/libcore/ops/range.rs (L361-L362)) implementation:
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: Eq> Eq for RangeInclusive<Idx> {}
```
expanded derive implementation (using `cargo expand ops::range`):
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx> crate::marker::StructuralEq for RangeInclusive<Idx> {}
#[automatically_derived]
#[allow(unused_qualifications)]
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: crate::cmp::Eq> crate::cmp::Eq for RangeInclusive<Idx> {
    #[inline]
    #[doc(hidden)]
    fn assert_receiver_is_total_eq(&self) -> () {
        {
            let _: crate::cmp::AssertParamIsEq<Idx>;
            let _: crate::cmp::AssertParamIsEq<Idx>;
            let _: crate::cmp::AssertParamIsEq<bool>;
        }
    }
}
```
These implementations are equivalent since `Eq` is just a marker trait and the bound `Idx: Eq` is the same.
## Hash
original [`Hash`](f4c675c476/src/libcore/ops/range.rs (L364-L371)) implementation:
```rust
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: Hash> Hash for RangeInclusive<Idx> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.start.hash(state);
        self.end.hash(state);
        self.exhausted.hash(state);
    }
}
```
expanded derive implementation (using `cargo expand ops::range`):
```rust
#[automatically_derived]
#[allow(unused_qualifications)]
#[stable(feature = "inclusive_range", since = "1.26.0")]
impl<Idx: crate::hash::Hash> crate::hash::Hash for RangeInclusive<Idx> {
    fn hash<__H: crate::hash::Hasher>(&self, state: &mut __H) -> () {
        match *self {
            RangeInclusive { start: ref __self_0_0, end: ref __self_0_1, exhausted: ref __self_0_2 } => {
                crate::hash::Hash::hash(&(*__self_0_0), state);
                crate::hash::Hash::hash(&(*__self_0_1), state);
                crate::hash::Hash::hash(&(*__self_0_2), state)
            }
        }
    }
}
```
These implementations are functionally equivalent, with the same order of field hashing, and the bound `Idx: Hash` is the same.
BTreeMap: remove shared root
This replaces the shared root with `Option`s in the BTreeMap code, and then slightly cleans up the node manipulation code, taking advantage of the removal of the shared root. I expect that further simplification is possible, but wanted to get this posted for initial review.
Note that `BTreeMap::new()` continues to not allocate.
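A hedged sketch of the new shape, with toy types standing in for the real node code: the root is `None` until the first insertion, so `new()` still performs no allocation and no longer points at a shared placeholder node.
```rust
// Toy model, not the actual BTreeMap internals.
struct ToyMap<K, V> {
    root: Option<Box<Vec<(K, V)>>>, // `None` replaces the old shared root
}

impl<K, V> ToyMap<K, V> {
    const fn new() -> Self {
        ToyMap { root: None } // still allocation-free
    }

    fn insert_first(&mut self, k: K, v: V) {
        // The root is allocated lazily, on an entirely predictable branch.
        self.root.get_or_insert_with(|| Box::new(Vec::new())).push((k, v));
    }
}

fn main() {
    let mut m = ToyMap::new();
    assert!(m.root.is_none());
    m.insert_first(1, "one");
    assert!(m.root.is_some());
}
```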
Benchmarks seem within the margin of error/unaffected, as expected for an entirely predictable branch.
```
name                                 alloc-bench-a ns/iter  alloc-bench-b ns/iter  diff ns/iter  diff %  speedup
btree::map::iter_mut_20                                 20                     21             1   5.00%   x 0.95
btree::set::clone_100                                1,360                  1,439            79   5.81%   x 0.95
btree::set::clone_100_and_into_iter                  1,319                  1,434           115   8.72%   x 0.92
btree::set::clone_10k                              143,515                150,991         7,476   5.21%   x 0.95
btree::set::clone_10k_and_clear                    142,792                152,916        10,124   7.09%   x 0.93
btree::set::clone_10k_and_into_iter                146,019                154,561         8,542   5.85%   x 0.94
```
can_begin_literal_maybe_minus: `true` on `"-"? lit` NTs.
Make `can_begin_literal_or_bool` (renamed to `can_begin_literal_maybe_minus`) accept `NtLiteral(e) | NtExpr(e)` where `e` is either a literal or a negated literal.
Fixes https://github.com/rust-lang/rust/issues/70050.
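An illustrative case this accepts (my example, not taken from the PR): a possibly-negated literal captured as a `literal` fragment and re-matched by an inner macro, which requires the matcher to recognize the interpolated NT as `"-"? lit`.
```rust
// When `outer!` forwards `-2` to `inner!`, the inner matcher sees an
// interpolated literal NT and must still treat it as a literal token.
macro_rules! inner {
    ($l:literal) => {
        $l
    };
}

macro_rules! outer {
    ($l:literal) => {
        inner!($l)
    };
}

fn main() {
    assert_eq!(outer!(-2), -2);
    assert_eq!(outer!(2), 2);
}
```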
r? @petrochenkov
add `Option::{zip,zip_with}` methods under "option_zip" gate
This PR introduces two methods, `Option::zip` and `Option::zip_with`, with the respective signatures:
- zip: `(Option<T>, Option<U>) -> Option<(T, U)>`
- zip_with: `(Option<T>, Option<U>, (T, U) -> R) -> Option<R>`
Both are under the feature gate "option_zip".
I'm not sure about the name "zip"; maybe we can find a better name for this.
(I would prefer `union`, for example, but that's a keyword :( )
--------------------------------------------------------------------------------
Recently, in a Russian Rust beginners' Telegram chat, a newbie asked (translated):
> Are there any methods for these conversions:
>
> 1. `(Option<A>, Option<B>) -> Option<(A, B)>`
> 2. `Vec<Option<T>> -> Option<Vec<T>>`
>
> ?
While the second (2.) is clearly `vec.into_iter().collect::<Option<Vec<_>>>()`, the
first one isn't that clear.
I couldn't find anything similar in the `core` and I've come to this solution:
```rust
let tuple: (Option<A>, Option<B>) = ...;
let res: Option<(A, B)> = tuple.0.and_then(|a| tuple.1.map(|b| (a, b)));
```
However, this solution isn't "nice" (the same goes for a plain `match`/`if let`), so I thought
that this functionality should be in `core`.
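A sketch of the proposed semantics written as free functions (the PR itself adds these as inherent methods on `Option` under the `option_zip` gate):
```rust
// Sketch of zip/zip_with semantics; not the actual libcore code.
fn zip<T, U>(a: Option<T>, b: Option<U>) -> Option<(T, U)> {
    zip_with(a, b, |t, u| (t, u))
}

fn zip_with<T, U, R>(a: Option<T>, b: Option<U>, f: impl FnOnce(T, U) -> R) -> Option<R> {
    match (a, b) {
        (Some(t), Some(u)) => Some(f(t, u)),
        _ => None, // if either side is `None`, the result is `None`
    }
}

fn main() {
    assert_eq!(zip(Some(1), Some("a")), Some((1, "a")));
    assert_eq!(zip(Some(1), None::<&str>), None);
    assert_eq!(zip_with(Some(2), Some(3), |a, b| a * b), Some(6));
}
```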
Use generator resume arguments in the async/await lowering
This removes the TLS requirement from async/await and enables it in `#![no_std]` crates.
Closes https://github.com/rust-lang/rust/issues/56974
I'm not confident the HIR lowering is completely correct, there seem to be quite a few undocumented invariants in there. The `async-std` and tokio test suites are passing with these changes though.
Make std::sync::Arc compatible with ThreadSanitizer
The memory fences previously used in the `Arc` implementation are not
understood by ThreadSanitizer as synchronization primitives. This had the
unfortunate effect that running any non-trivial program compiled with
`-Z sanitizer=thread` would produce numerous false positives.
Replace the acquire fences with acquire loads to address the issue.
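An illustrative before/after of the pattern on a toy refcount (not the actual `Arc` source):
```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

// Before: a release decrement followed by an acquire *fence*, which
// ThreadSanitizer does not model as synchronization.
fn release_ref_with_fence(count: &AtomicUsize) -> bool {
    if count.fetch_sub(1, Ordering::Release) != 1 {
        return false;
    }
    fence(Ordering::Acquire);
    true // last reference gone; safe to drop the data
}

// After: an acquire *load* of the same atomic, which synchronizes with
// the release decrements and is visible to ThreadSanitizer.
fn release_ref_with_load(count: &AtomicUsize) -> bool {
    if count.fetch_sub(1, Ordering::Release) != 1 {
        return false;
    }
    count.load(Ordering::Acquire);
    true
}

fn main() {
    let c = AtomicUsize::new(1);
    assert!(release_ref_with_fence(&c));
    let c = AtomicUsize::new(1);
    assert!(release_ref_with_load(&c));
}
```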
Fixes #39608.
This avoids wasting a small amount of space for some of the data sets.
The chunk resizing is caused by, but not directly related to, the changes in
this commit.
Alphabetic : 3036 bytes
Case_Ignorable : 2133 bytes (- 3 bytes)
Cased : 934 bytes
Cc : 32 bytes
Grapheme_Extend: 1760 bytes (-14 bytes)
Lowercase : 985 bytes
N : 1220 bytes (- 5 bytes)
Uppercase : 934 bytes
White_Space : 97 bytes
Total table sizes: 11131 bytes (-22 bytes)
Namely, this version focuses on the end-to-end behavior that the attempt to
create the UDP binding will fail, regardless of the semantics of how particular
DNS servers handle junk inputs.
(I spent some time trying to create a second more-focused test that would
sidestep the DNS resolution, but this is not possible without more invasive
changes to the internal infrastructure of `ToSocketAddrs` and what not. It is
not worth it.)
Currently the test file takes a while to compile -- 30 seconds or so -- but
since it's not going to be committed, and is just for local testing, that seems
fine.
Try chunk sizes between 1 and 64, selecting the one which minimizes the number
of bytes used. 16, the previous constant, turned out to be a rather good choice,
with 5/9 of the datasets still using it.
Alphabetic : 3036 bytes (- 19 bytes)
Case_Ignorable : 2136 bytes
Cased : 934 bytes
Cc : 32 bytes (- 11 bytes)
Grapheme_Extend: 1774 bytes
Lowercase : 985 bytes
N : 1225 bytes (- 41 bytes)
Uppercase : 934 bytes
White_Space : 97 bytes (- 43 bytes)
Total table sizes: 11153 bytes (-114 bytes)
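The search itself is a straightforward brute force; a hedged sketch with a made-up cost function (the real encoder lives in the table-generation tooling, and `best_chunk_size` is a hypothetical name):
```rust
// Try every chunk size from 1 to 64 and keep the one that yields the
// smallest encoded table.
fn best_chunk_size(encoded_size: impl Fn(usize) -> usize) -> usize {
    (1..=64).min_by_key(|&chunk| encoded_size(chunk)).unwrap()
}

fn main() {
    // Toy cost standing in for the real table encoder: per-chunk index
    // overhead plus padding waste.
    let toy_cost = |chunk: usize| 10_000 / chunk + chunk * 37;
    println!("best chunk size: {}", best_chunk_size(toy_cost));
}
```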
If the unicode-downloads folder already exists, we likely just fetched the data,
so don't make any further network requests. Unicode versions are released rarely
enough that this doesn't matter much in practice.
This commit fixes an issue where using `eprintln!` in a TLS destructor
can accidentally cause the process to abort. TLS destructors are
executed after `main` returns on the main thread, and at that point
we've also deinitialized global `Lazy` values like those which store the
`Stderr` and `Stdout` internals. This means that even though
`eprintln!` handles TLS being inaccessible, we still fail because
`stderr()` itself can't be called, and we then quickly double-panic
because panicking also attempts to write to stderr.
The fix here is to reimplement the global stderr handle to avoid the
need for destruction. This avoids the need for `Lazy` as well as the
hidden panic inside of the `stderr` function.
Overall this should improve the robustness of printing errors and/or
panics in weird situations, since the `stderr` accessor should be
infallible in more situations.
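A minimal sketch of the idea, assuming hypothetical names: keep the global state in a static that needs no destruction, rather than in a lazily initialized value that is torn down after `main`.
```rust
use std::io::Write;
use std::sync::Mutex;

// Unbuffered stderr holds no resources needing cleanup, so a plain
// static lock suffices and remains usable even inside TLS destructors.
static STDERR_LOCK: Mutex<()> = Mutex::new(());

fn write_err(msg: &str) {
    // Never panics on a poisoned lock; output is best-effort.
    let _guard = STDERR_LOCK.lock().unwrap_or_else(|e| e.into_inner());
    let _ = std::io::stderr().write_all(msg.as_bytes());
}

fn main() {
    write_err("hello from a destruction-free path\n");
}
```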
This makes ensure_root_is_owned return a reference to the (now guaranteed to
exist) root, allowing callers to operate on it without going through another
unwrap.
Unfortunately this is only rarely useful, as it's frequently the case that both
the length and the root need to be accessed at the same time, and field-level
borrows in methods don't yet exist.
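A hedged sketch of the signature change with toy types (not the real node code):
```rust
// Toy types, not the actual BTreeMap internals.
struct Map {
    root: Option<Vec<u8>>,
    length: usize,
}

impl Map {
    // Returning the root means callers no longer have to follow the
    // call with `self.root.as_mut().unwrap()`.
    fn ensure_root_is_owned(&mut self) -> &mut Vec<u8> {
        self.root.get_or_insert_with(Vec::new)
    }

    fn push(&mut self, byte: u8) {
        self.ensure_root_is_owned().push(byte);
        // The returned borrow has ended here, so `length` is accessible;
        // holding both at once would need field-level borrows.
        self.length += 1;
    }
}

fn main() {
    let mut m = Map { root: None, length: 0 };
    m.push(7);
    assert_eq!(m.length, 1);
    assert_eq!(m.root, Some(vec![7]));
}
```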