Refactor iter adapters with fewer macros
Just some code cleanup. Introduced a utility `and_then_or_clear` shared by the chain, flatten and fuse iter adapter impls. This reduces code nicely for flatten; admittedly, for the other modules it is more of a lateral move, replacing macros with a function. But I think consistency across the modules, and avoiding macros where possible, is good.
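For reference, a minimal sketch of what such a helper can look like (the exact signature here is an assumption, not necessarily the one in the PR): run a closure on the contents of an `Option`, and clear the `Option` when the closure yields `None`.
```rust
// Sketch only: the real helper in the PR may differ in signature and location.
fn and_then_or_clear<T, U>(opt: &mut Option<T>, f: impl FnOnce(&mut T) -> Option<U>) -> Option<U> {
    // Do nothing (and keep the Option intact) if it is already None.
    let x = f(opt.as_mut()?);
    // If the closure produced nothing, clear the Option so later calls
    // short-circuit, mirroring what the chain/flatten/fuse adapters need.
    if x.is_none() {
        *opt = None;
    }
    x
}
```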
attempt to optimise vectored write
benchmarked:
old:
```
test io::cursor::tests::bench_write_vec ... bench: 68 ns/iter (+/- 2)
test io::cursor::tests::bench_write_vec_vectored ... bench: 913 ns/iter (+/- 31)
```
new:
```
test io::cursor::tests::bench_write_vec ... bench: 64 ns/iter (+/- 0)
test io::cursor::tests::bench_write_vec_vectored ... bench: 747 ns/iter (+/- 27)
```
More `unsafe` than I wanted (and smaller gains than hoped) in the end, but it still does the job.
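For context, a hypothetical, simplified (and safe) sketch of the general idea: compute the combined length of the buffers once, reserve it, and then copy each slice, instead of going through the generic one-buffer-at-a-time default path. The actual change uses `unsafe` and the real `Cursor` internals.
```rust
use std::io::{self, IoSlice};

// Illustration only, not the code from the PR.
fn write_vectored_into_vec(vec: &mut Vec<u8>, bufs: &[IoSlice<'_>]) -> io::Result<usize> {
    // One reservation for all buffers avoids repeated capacity checks/growth.
    let total: usize = bufs.iter().map(|b| b.len()).sum();
    vec.reserve(total);
    for buf in bufs {
        vec.extend_from_slice(buf);
    }
    Ok(total)
}
```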
libcore tests: avoid int2ptr casts
We don't need any of these pointers to actually be dereferenceable, so using `ptr::invalid` should be fine. And then we can run Miri with strict provenance enforcement on the tests.
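For illustration, the kind of change this enables (the address `8` is just an example; the strict-provenance APIs are unstable, so a feature gate is needed):
```rust
#![feature(strict_provenance)]
use core::ptr;

fn main() {
    // Before: an int-to-ptr cast, which strict provenance enforcement rejects.
    // let p: *const u8 = 8 as *const u8;

    // After: a pointer with an address but no provenance. That's fine here
    // because the tests never dereference it.
    let p: *const u8 = ptr::invalid(8);
    assert_eq!(p.addr(), 8);
}
```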
liballoc tests: avoid int2ptr cast
I think we don't need `ptr::from_exposed_addr` here; `ptr::invalid` should be enough for this test. (And this makes Miri less unhappy when running these tests.)
proc_macro/bridge: cache static spans in proc_macro's client thread-local state
This is the second part of https://github.com/rust-lang/rust/pull/86822, split off as requested in https://github.com/rust-lang/rust/pull/86822#pullrequestreview-1008655452. This patch removes the RPC calls required for the very common operations of `Span::call_site()`, `Span::def_site()` and `Span::mixed_site()`.
Some notes:
This part is one of the ones I don't love as a final solution from a design standpoint, because I don't like how the spans are serialized immediately at macro invocation. I think a more elegant solution might've been to reserve special IDs for `call_site`, `def_site`, and `mixed_site` at compile time (either starting at 1 or from `u32::MAX`) and making reading a Span handle automatically map these IDs to the relevant values, rather than doing extra serialization.
This would also have an advantage for potential future work to allow `proc_macro` to operate more independently from the compiler (e.g. to reduce the necessity of `proc-macro2`), as methods like `Span::call_site()` could be made to function without access to the compiler backend.
That was unfortunately tricky to do at the time, as this was the first part of the patches I wrote. After the later parts (#98188, #98189), the other uses of `InternedStore` are removed, meaning that a custom serialization strategy for `Span` is easier to implement.
If we want to go that path, we'll still need the majority of the work to split the bridge object and introduce the `Context` trait for free methods, and it will be easier to do after `Span` is the only user of `InternedStore` (after #98189).
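For context, the operations being cached are ubiquitous in proc macros; a typical (hypothetical) use, in a crate compiled as a proc-macro:
```rust
use proc_macro::{Ident, Span, TokenStream, TokenTree};

#[proc_macro]
pub fn demo(_input: TokenStream) -> TokenStream {
    // With this change, Span::call_site()/def_site()/mixed_site() are answered
    // from the client's thread-local cache instead of an RPC round-trip.
    let ident = Ident::new("generated", Span::mixed_site());
    TokenStream::from(TokenTree::Ident(ident))
}
```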
Update `std::alloc::System` doc example code style
`return` on the last line of a block is unidiomatic, so I don't think the example should be using it here.
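A tiny illustration of the style point (not the actual doc example):
```rust
// Unidiomatic: an explicit `return` as the final statement.
fn len_with_return(v: &[u8]) -> usize {
    return v.len();
}

// Idiomatic: the block's tail expression is its value.
fn len_idiomatic(v: &[u8]) -> usize {
    v.len()
}
```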
std: use an event-flag-based thread parker on SOLID
`Mutex` and `Condvar` are being replaced by more efficient implementations, which need thread parking themselves (see #93740). Therefore, the generic `Parker` needs to be replaced on all platforms where the new lock implementation will be used, which, after #96393, are SOLID, SGX and Hermit (more PRs coming soon).
SOLID, conforming to the [μITRON specification](http://www.ertl.jp/ITRON/SPEC/FILE/mitron-400e.pdf), has event flags, a thread-parking primitive very similar to `Parker`. However, they do not make any atomic ordering guarantees (even though those can probably be assumed) and necessitate a system call even when the thread token is already available. Hence, this `Parker`, like the Windows parker, uses an extra atomic state variable.
I future-proofed the code by wrapping the event flag in a `WaitFlag` structure, since SGX and Hermit can share the `Parker` implementation as well; they just have slightly different primitives (SGX uses signals and Hermit has a thread-blocking API).
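A rough sketch of the state-machine idea (hypothetical and heavily simplified; the real implementation lives in `std::sys`, uses the actual platform primitives, and handles pinning, timeouts and spurious wakeups):
```rust
use std::sync::atomic::{AtomicI8, Ordering};

const PARKED: i8 = -1;
const EMPTY: i8 = 0;
const NOTIFIED: i8 = 1;

// Stand-in for the platform primitive (SOLID event flag, SGX signal,
// Hermit thread-blocking API); left as stubs in this sketch.
struct WaitFlag;
impl WaitFlag {
    fn wait(&self) { /* block until the flag is raised, then clear it */ }
    fn raise(&self) { /* raise the flag, waking the blocked thread */ }
}

struct Parker {
    state: AtomicI8,
    flag: WaitFlag,
}

impl Parker {
    fn park(&self) {
        // Fast path: consume an already-available token (NOTIFIED -> EMPTY)
        // without touching the event flag, i.e. without a system call.
        if self.state.fetch_sub(1, Ordering::Acquire) == NOTIFIED {
            return;
        }
        // State is now PARKED, so unpark() knows it must raise the flag.
        self.flag.wait();
        self.state.swap(EMPTY, Ordering::Acquire);
    }

    fn unpark(&self) {
        // Publish the token; only issue the system call if the other thread
        // actually went to sleep.
        if self.state.swap(NOTIFIED, Ordering::Release) == PARKED {
            self.flag.raise();
        }
    }
}
```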
`@kawadakk` I assume you are the target maintainer? Could you test this for me?
Mitigate MMIO stale data vulnerability
Intel publicly disclosed the MMIO stale data vulnerability on June 14. To mitigate this vulnerability, compiler changes are required for the `x86_64-fortanix-unknown-sgx` target.
cc: `@jethrogb`
Windows: Iterative `remove_dir_all`
This will allow better strategies for use of memory and File handles. However, fully taking advantage of that is left to future work.
Note to reviewer: It's probably best to view the `remove_dir_all_recursive` as a new function. The diff is not very helpful (imho).
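For illustration, a hypothetical, much-simplified sketch of replacing recursion with an explicit worklist (not the actual Windows implementation, which works on file handles and has to deal with reparse points; symlink handling is omitted here):
```rust
use std::collections::VecDeque;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

fn remove_dir_all_iterative(path: &Path) -> io::Result<()> {
    // Breadth-first traversal with an explicit queue instead of recursion.
    let mut dirs: Vec<PathBuf> = vec![path.to_path_buf()];
    let mut pending: VecDeque<PathBuf> = VecDeque::new();
    pending.push_back(path.to_path_buf());
    while let Some(dir) = pending.pop_front() {
        for entry in fs::read_dir(&dir)? {
            let entry = entry?;
            if entry.file_type()?.is_dir() {
                dirs.push(entry.path());
                pending.push_back(entry.path());
            } else {
                fs::remove_file(entry.path())?;
            }
        }
    }
    // Directories were recorded parents-first, so remove them in reverse
    // (children before their parents).
    for dir in dirs.into_iter().rev() {
        fs::remove_dir(dir)?;
    }
    Ok(())
}
```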
Make RwLockReadGuard covariant
Hi, first time contributor here, if anything is not as expected, please let me know.
`RwLockReadGuard`'s type constructor is invariant. Since it behaves like a smart pointer to an immutable reference, there is no reason that it should not be covariant. Take, e.g.,
```
fn test_read_guard_covariance() {
    fn do_stuff<'a>(_: RwLockReadGuard<'_, &'a i32>, _: &'a i32) {}
    let j: i32 = 5;
    let lock = RwLock::new(&j);
    {
        let i = 6;
        do_stuff(lock.read().unwrap(), &i);
    }
    drop(lock);
}
```
where the compiler complains that `&i` doesn't live long enough. If `RwLockReadGuard` is covariant, then the above code is accepted because the lifetime can be shorter than `'a`.
In order for `RwLockReadGuard` to be covariant, it can't contain a full reference to the `RwLock`, which can never be covariant (because it exposes a mutable reference to the underlying data structure). By reducing the data structure to the required pieces of `RwLock`, the rest falls in place.
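Roughly, the shape of the fix (field and type names here are illustrative, not the actual std internals):
```rust
use std::ptr::NonNull;

// `&'a RwLock<T>` is invariant in T because RwLock contains an UnsafeCell<T>.
// Holding only a NonNull<T> (covariant) plus a reference to the T-free parts
// of the lock lets the guard as a whole be covariant in T.
struct RawRwLock { /* platform lock state, no T in here */ }

struct ReadGuard<'a, T: ?Sized + 'a> {
    data: NonNull<T>,          // covariant in T
    inner_lock: &'a RawRwLock, // contains no T, so it imposes no constraint
}
```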
If there is a better way to do a test that tests successful compilation, please let me know.
Fixes #80392
Fix `panic` message for `BTreeSet`'s `range` API and document `panic` cases
Currently, the `panic` cases for [`BTreeSet`'s `range` API](https://doc.rust-lang.org/std/collections/struct.BTreeSet.html#method.range) are undocumented and produce a slightly wrong `panic` message (says `BTreeMap` instead of `BTreeSet`).
Panic case 1 code:
```rust
use std::collections::BTreeSet;
use std::ops::Bound::Excluded;
fn main() {
    let mut set = BTreeSet::new();
    set.insert(3);
    set.insert(5);
    set.insert(8);
    for &elem in set.range((Excluded(&3), Excluded(&3))) {
        println!("{elem}");
    }
}
```
Panic case 1 message:
```
thread 'main' panicked at 'range start and end are equal and excluded in BTreeMap', /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/alloc/src/collections/btree/search.rs:105:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
Panic case 2 code:
```rust
use std::collections::BTreeSet;
use std::ops::Bound::Included;
fn main() {
    let mut set = BTreeSet::new();
    set.insert(3);
    set.insert(5);
    set.insert(8);
    for &elem in set.range((Included(&8), Included(&3))) {
        println!("{elem}");
    }
}
```
Panic case 2 message:
```
thread 'main' panicked at 'range start is greater than range end in BTreeMap', /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/alloc/src/collections/btree/search.rs:110:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
This PR fixes the output messages to say `BTreeSet`, adds the relevant unit tests, and updates the documentation for the API.
This commit adds new methods that combine sequences of existing
formatting methods.
- `Formatter::debug_{tuple,struct}_field[12345]_finish`, equivalent to a
`Formatter::debug_{tuple,struct}` + N x `Debug{Tuple,Struct}::field` +
`Debug{Tuple,Struct}::finish` call sequence.
- `Formatter::debug_{tuple,struct}_fields_finish` is similar, but can
handle any number of fields by using arrays.
These new methods are all marked as `doc(hidden)` and unstable. They are
intended for the compiler's own use.
Special-casing up to 5 fields gives significantly better performance
results than always using arrays (as was tried in #95637).
The commit also changes the `Debug` deriving code to use these new methods. For
example, where the old `Debug` code for a struct with two fields would be like
this:
```
fn fmt(&self, f: &mut ::core::fmt::Formatter) -> ::core::fmt::Result {
    match *self {
        Self {
            f1: ref __self_0_0,
            f2: ref __self_0_1,
        } => {
            let debug_trait_builder = &mut ::core::fmt::Formatter::debug_struct(f, "S2");
            let _ = ::core::fmt::DebugStruct::field(debug_trait_builder, "f1", &&(*__self_0_0));
            let _ = ::core::fmt::DebugStruct::field(debug_trait_builder, "f2", &&(*__self_0_1));
            ::core::fmt::DebugStruct::finish(debug_trait_builder)
        }
    }
}
```
the new code is like this:
```
fn fmt(&self, f: &mut ::core::fmt::Formatter) -> ::core::fmt::Result {
    match *self {
        Self {
            f1: ref __self_0_0,
            f2: ref __self_0_1,
        } => ::core::fmt::Formatter::debug_struct_field2_finish(
            f,
            "S2",
            "f1",
            &&(*__self_0_0),
            "f2",
            &&(*__self_0_1),
        ),
    }
}
```
This shrinks the code produced for `Debug` instances
considerably, reducing compile times and binary sizes.
Co-authored-by: Scott McMurray <scottmcm@users.noreply.github.com>
clarify Arc::clone overflow check comment
I had to read this twice to realize that this is explaining that the code is technically unsound, so move that into a dedicated paragraph and make the wording a bit more explicit.
Fix documentation for `with_capacity` and `reserve` families of methods
Fixes#95614
Documentation for the following methods
- `with_capacity`
- `with_capacity_in`
- `with_capacity_and_hasher`
- `reserve`
- `reserve_exact`
- `try_reserve`
- `try_reserve_exact`
was inconsistent and often not entirely correct where they existed on the following types
- `Vec`
- `VecDeque`
- `String`
- `OsString`
- `PathBuf`
- `BinaryHeap`
- `HashSet`
- `HashMap`
- `BufWriter`
- `LineWriter`
since the allocator is allowed to allocate more than the requested capacity in all such cases, and will frequently "allocate" much more in the case of zero-sized types (I also checked `BufReader`, but there the docs appear to be accurate as it appears to actually allocate the exact capacity).
Some effort was made to make the documentation more consistent between types as well.
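A quick illustration of the two points being documented: the requested capacity is only a lower bound, and zero-sized types don't really allocate at all.
```rust
fn main() {
    // The allocator may return more space than requested, so only `>=` holds.
    let v: Vec<u8> = Vec::with_capacity(10);
    assert!(v.capacity() >= 10);

    // For a zero-sized type nothing is allocated and the reported capacity
    // is usize::MAX.
    let z: Vec<()> = Vec::with_capacity(10);
    assert_eq!(z.capacity(), usize::MAX);
}
```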
clarify how Rust atomics correspond to C++ atomics
`@cbeuw` noted in https://github.com/rust-lang/miri/pull/1963 that the correspondence between C++ atomics and Rust atomics is not quite as obvious as one might think, since in Rust I can use `get_mut` to treat previously non-atomic data as atomic. However, I think that using C++20 `atomic_ref` we can establish a suitable relation between the two -- or do you see problems with that, `@cbeuw`? (I recall you said there was some issue, but it was deep inside that PR and GitHub makes it impossible to find...)
Cc `@thomcc`; not sure whom else to ping for atomic memory model things.
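A small example of the Rust-side flexibility in question (using `get_mut`, as mentioned above):
```rust
use std::sync::atomic::{AtomicU32, Ordering};

fn main() {
    let mut a = AtomicU32::new(0);

    // Plain, non-atomic access while we have exclusive (&mut) access.
    *a.get_mut() = 1;

    // The same memory accessed atomically afterwards. A C++ `std::atomic`
    // object doesn't switch between modes like this; C++20 `atomic_ref`
    // is the closer analogue.
    a.fetch_add(1, Ordering::Relaxed);
    assert_eq!(a.into_inner(), 2);
}
```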
Rollup of 4 pull requests
Successful merges:
- #98235 (Drop magic value 3 from code)
- #98267 (Don't omit comma when suggesting wildcard arm after macro expr)
- #98276 (Mention formatting macros when encountering `ArgumentV1` method in const)
- #98296 (Add a link to the unstable book page on Generator doc comment)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup