Update the stdarch submodule
This notably brings in a number of codegen updates to ensure that wasm
simd intrinsics generate the expected instruction with LLVM 13
Bulk building is a common technique to increase the performance of
building a fresh `BTreeMap`. Instead of inserting items one by one,
we sort all the items beforehand and then create the `BTreeMap` in bulk.
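A minimal sketch of the effect from the caller's side (assuming the sorted-iterator path is the one that builds in bulk):
```rust
use std::collections::BTreeMap;

fn main() {
    let mut items = vec![(3, "c"), (1, "a"), (2, "b")];
    // Sort first, then build the map in one pass instead of inserting item by item.
    items.sort_by_key(|&(k, _)| k);
    let map: BTreeMap<i32, &str> = items.into_iter().collect();
    assert_eq!(map.get(&2), Some(&"b"));
}
```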
Add Saturating type (based on Wrapping type)
Tracking #87920
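A minimal sketch of the new type (the `saturating_int_impl` feature-gate name is an assumption here):
```rust
#![feature(saturating_int_impl)]
use std::num::Saturating;

fn main() {
    // Arithmetic clamps at the numeric bounds instead of wrapping around.
    assert_eq!(Saturating(u8::MAX) + Saturating(1), Saturating(u8::MAX));
    assert_eq!(Saturating(i8::MIN) - Saturating(1), Saturating(i8::MIN));
}
```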
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised.
-->
- [x] ~`impl Div for Saturating<T>` falls back on inner integer division - which seems alright?~
- [x] add `saturating_div`? (to respect division by `-1`)
- [x] There are no `::saturating_shl` and `::saturating_shr`. (How to) implement `Shl`, `ShlAssign`, `Shr` and `ShrAssign`?
- [naively](3f7d2ce28f)
- [x] ~`saturating_neg` is only implemented on [signed integer types](https://doc.rust-lang.org/std/?search=saturating_n)~
- [x] Is the implementation copied over from the `Wrapping`-type correct for `Saturating`?
- [x] `Saturating::rotate_left`
- [x] `Saturating::rotate_right`
- [x] `Not`
- [x] `BitXor` and `BitXorAssign`
- [x] `BitOr` and `BitOrAssign`
- [x] `BitAnd` and `BitAndAssign`
- [x] `Saturating::swap_bytes`
- [x] `Saturating::reverse_bits`
The `incoming` method is really useful; however, for some use cases the borrow
it introduces is needlessly restrictive. Thus, an owned variant is added.
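A sketch of how the owned variant could be used (the method and feature-gate names `into_incoming` / `tcplistener_into_incoming` are assumptions here):
```rust
#![feature(tcplistener_into_incoming)]
use std::net::{TcpListener, TcpStream};

// Because the iterator owns the listener, it can be returned from a function
// or moved into another thread, which the borrowing `incoming()` cannot do.
fn connections() -> std::io::Result<impl Iterator<Item = std::io::Result<TcpStream>>> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    Ok(listener.into_incoming())
}
```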
Add `c_size_t` and `c_ssize_t` to `std::os::raw`.
Apparently these aren't guaranteed to be the same, and are merely "always the same in practice" (see https://rust-lang.zulipchat.com/#narrow/stream/136281-t-lang.2Fwg-unsafe-code-guidelines/topic/.60usize.60.20vs.20.60size_t.60).
This is a big footgun, but I suspect it can be alleviated if we expose this and start migrating people to it in advance of any platforms that ever have this as different.
I'll file a tracking issue after this gets some traction.
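A minimal sketch of the intended use, with the aliases and their feature gate assumed to be named as below; the point is to write FFI signatures against `c_size_t`/`c_ssize_t` rather than assuming `usize == size_t`:
```rust
#![feature(c_size_t)]
use std::os::raw::{c_size_t, c_ssize_t};

extern "C" {
    // `count` and the return type follow the C declaration instead of assuming `usize`/`isize`.
    fn write(fd: i32, buf: *const u8, count: c_size_t) -> c_ssize_t;
}

fn main() {
    let msg = b"hello\n";
    unsafe { write(1, msg.as_ptr(), msg.len() as c_size_t) };
}
```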
Use if-let guards in the codebase and various other pattern cleanups
Dogfooding if-let guards as experimentation for the feature.
Tracking issue #51114. Conflicts with #87937.
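For reference, a minimal sketch of the feature being dogfooded:
```rust
#![feature(if_let_guard)]

fn classify(input: Option<&str>) -> &'static str {
    match input {
        // The guard both tests the condition and binds `n` for use in the arm body.
        Some(s) if let Ok(n) = s.parse::<u32>() => {
            if n > 100 { "big number" } else { "small number" }
        }
        Some(_) => "not a number",
        None => "nothing",
    }
}

fn main() {
    assert_eq!(classify(Some("7")), "small number");
    assert_eq!(classify(Some("abc")), "not a number");
}
```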
Stabilise BufWriter::into_parts
The FCP for this has already completed, in #80690.
This was just blocked on #85901 (which changed the name), which is now merged. The original stabilisation MR was #84770 but that has a lot of noise in it, and I also accidentally deleted the branch while trying to tidy up. So here is a new MR. Sorry for the noise.
Closes #80690
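A short sketch of the stabilized API:
```rust
use std::io::{BufWriter, Write};

fn main() -> std::io::Result<()> {
    let mut writer = BufWriter::new(Vec::new());
    writer.write_all(b"hello")?;
    // The inner writer comes back together with any buffered-but-unflushed bytes.
    let (inner, buffered) = writer.into_parts();
    assert!(inner.is_empty()); // nothing was flushed to the Vec yet
    assert_eq!(buffered.unwrap(), b"hello".to_vec());
    Ok(())
}
```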
ErrorKind reorder
I was doing a bit more work in this area and the untidiness of these two orderings bothered me.
The commit messages have the detailed rationale. For your convenience, I've copied them here:
```
io::ErrorKind: rationalise ordering in main enum
It is useful to keep some coherent structure to this ordering. In
particular, Other and Uncategorized should be next to each other, at
the end.
Also it seems to make sense to treat UnexpectedEof and OutOfMemory
specially, since they are not like the other errors (despite
OutOfMemory also being generatable by some OS errors).
So:
* Move Other to the end, just before Uncategorized
* Move Unsupported to between Interrupted and UnexpectedEof
* Add some comments documenting where to add things
```
```
io::Error: alphabeticise the match in as_str()
There was no rationale for the previous ordering.
```
r? kennytm since that's who rust-highfive picked before, in #88294 which I accidentally closed.
Add SAFETY comments to core::slice::sort::partition_in_blocks
A few more SAFETY comments for #66219. There are still a few more in this module.
`@rustbot` label T-libs T-compiler C-cleanup
Fix references to `ControlFlow` in docs
The `Iterator::for_each` method previously stated that it was not possible to use `break` and `continue` in it — this has been updated to acknowledge the stabilization of `ControlFlow`. Additionally, `ControlFlow` was referred to as `crate::ops::ControlFlow` which is not the correct path for an end user.
r? `@jyn514`
Fix typo “a Rc” → “an Rc” (and a few more)
After stumbling over it in the dev-guide, I’ve decided to eliminate all mentions of “a Rc”, replacing it with “an Rc”. E.g.
```plain
$ rg "(^|[^'])\ba\b[^\w=:]*\bRc"
compiler/rustc_data_structures/src/owning_ref/mod.rs
1149:/// Typedef of a owning reference that uses a `Rc` as the owner.
library/std/src/ffi/os_str.rs
919: /// Converts a [`OsString`] into a [`Rc`]`<OsStr>` without copying or allocating.
library/std/src/ffi/c_str.rs
961: /// Converts a [`CString`] into a [`Rc`]`<CStr>` without copying or allocating.
src/doc/rustc-dev-guide/src/query.md
61:are cheaply cloneable; insert a `Rc` if necessary).
src/doc/book/src/ch15-06-reference-cycles.md
72:decreases the reference count of the `a` `Rc<List>` instance from 2 to 1 as
library/alloc/src/rc.rs
1746: /// Converts a generic type `T` into a `Rc<T>`
```
_(the match in the book is a false positive)_
Since the dev-guide is a submodule, it’s getting a separate PR: rust-lang/rustc-dev-guide#1191
I’ve also gone ahead and done the same search for `RwLock` and hit a few cases in the `OwningRef` adaptation. Then, I couldn’t leave the countless cases of “a owning …” or “a owner” unaddressed, which concludes this PR.
`@rustbot` label C-cleanup
Adjust / fix documentation of `Arc::make_mut`
Related discussion in the users forum:
[Whatʼs this alleged difference between Arc::make_mut and Rc::make_mut? – The Rust Programming Language Forum](https://users.rust-lang.org/t/what-s-this-alleged-difference-between-arc-make-mut-and-rc-make-mut/63747?u=steffahn)
Also includes a small formatting improvement in the documentation of `Rc::make_mut`.
This PR makes the two documentations in question complete analogs. The previously claimed point in which one “differs from the behavior of” the other turns out to be incorrect, AFAIK.
One remaining inaccuracy: `Weak` pointers aren’t disassociated from the allocation but only from the contained value, i.e. in case of outstanding `Weak` pointers a new allocation is still created; only the call to `.clone()` is avoided, and the value is instead moved from one allocation to the other.
`@rustbot` label T-libs-api, A-docs
add Cell::as_array_of_cells, similar to Cell::as_slice_of_cells
I'd like to propose adding `Cell::as_array_of_cells`, as a natural analog to `Cell::as_slice_of_cells`. I don't have a specific use case in mind, other than that supporting slices but not arrays feels like a gap. Do other folks agree with that intuition? Would this addition be substantial enough to need an RFC?
---
Previously, converting `&mut [T; N]` to `&[Cell<T>; N]` looked like this:
```rust
let array = &mut [1, 2, 3];
let cells: &[Cell<i32>; 3] = Cell::from_mut(&mut array[..])
.as_slice_of_cells()
.try_into()
.unwrap();
```
With this new helper method, it looks like this:
```rust
let array = &mut [1, 2, 3];
let cells = Cell::from_mut(array).as_array_of_cells();
```
It is useful to keep some coherent structure to this ordering. In
particular, Other and Uncategorized should be next to each other, at
the end.
Also it seems to make sense to treat UnexpectedEof and OutOfMemory
specially, since they are not like the other errors (despite
OutOfMemory also being generatable by some OS errors).
So:
* Move Other to the end, just before Uncategorized
* Move Unsupported to between Interrupted and UnexpectedEof
* Add some comments documenting where to add things
Signed-off-by: Ian Jackson <ijackson@chiark.greenend.org.uk>
Get piece unchecked in `write`
We already use specialized `zip`, but it seems like we can do a little better by not checking `pieces` length at all.
`Arguments` constructors are now unsafe. So the `format_args!` expansion now includes an `unsafe` block.
<details>
<summary>Local Bench Diff</summary>
```text
name                        before ns/iter  after ns/iter  diff ns/iter  diff %   speedup
fmt::write_str_macro1               22,967         19,718        -3,249  -14.15%  x 1.16
fmt::write_str_macro2               35,527         32,654        -2,873   -8.09%  x 1.09
fmt::write_str_macro_debug         571,953        575,973         4,020    0.70%  x 0.99
fmt::write_str_ref                   9,579          9,459          -120   -1.25%  x 1.01
fmt::write_str_value                 9,573          9,572            -1   -0.01%  x 1.00
fmt::write_u128_max                    176            173            -3   -1.70%  x 1.02
fmt::write_u128_min                    138            134            -4   -2.90%  x 1.03
fmt::write_u64_max                     139            136            -3   -2.16%  x 1.02
fmt::write_u64_min                     129            135             6    4.65%  x 0.96
fmt::write_vec_macro1               24,401         22,273        -2,128   -8.72%  x 1.10
fmt::write_vec_macro2               37,096         35,602        -1,494   -4.03%  x 1.04
fmt::write_vec_macro_debug         588,291        589,575         1,284    0.22%  x 1.00
fmt::write_vec_ref                   9,568          9,732           164    1.71%  x 0.98
fmt::write_vec_value                 9,516          9,625           109    1.15%  x 0.99
```
</details>
Implement `AsFd` etc. for `UnixListener`.
Implement `AsFd`, `From<OwnedFd>`, and `Into<OwnedFd>` for
`UnixListener`. This is a follow-up to #87329.
r? `@joshtriplett`
Previously, converting `&mut [T; N]` to `&[Cell<T>; N]` looked like this:
let array = &mut [1, 2, 3];
let cells: &[Cell<i32>; 3] = Cell::from_mut(&mut array[..])
.as_slice_of_cells()
.try_into()
.unwrap();
With this new helper method, it looks like this:
let array = &mut [1, 2, 3];
let cells: &[Cell<i32>; 3] = Cell::from_mut(array).as_array_of_cells();
Fix example in `Extend<(A, B)>` impl
After looking over the examples in my last PR (#85835) on doc.rust-lang.org/nightly I realized that the example didn't actually show what I wanted it to show 😅
So here's the better example
add file_prefix method to std::path
This is an initial implementation of `std::path::Path::file_prefix`. It is effectively a "left" variant of the existing [`file_stem`](https://doc.rust-lang.org/std/path/struct.Path.html#method.file_stem) method. An illustration of the difference is
```rust
use std::ffi::OsStr;
use std::path::Path;

let path = Path::new("foo.tar.gz");
assert_eq!(path.file_stem(), Some(OsStr::new("foo.tar")));
assert_eq!(path.file_prefix(), Some(OsStr::new("foo")));
```
In my own development, I generally find I almost always want the prefix, rather than the stem, so I thought it might be best to suggest its addition to libstd.
Of course, as this is my first contribution, I expect there is probably more work that needs to be done. Additionally, if the libstd team feel this isn't appropriate then so be it.
There has been some [discussion about this on Zulip](https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/file_lstem/near/238076313) and a user there suggested I open a PR to see whether someone in the libstd team thinks it is worth pursuing.
Optimize unnecessary check in VecDeque::retain
This PR is highly inspired by https://github.com/rust-lang/rust/pull/88060, which shared the same idea: we can split the `for` loop into stages so that we can remove unnecessary checks like `del > 0`.
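A rough sketch of the two-stage idea (not the actual library code):
```rust
use std::collections::VecDeque;

fn retain_two_stage<T, F: FnMut(&T) -> bool>(v: &mut VecDeque<T>, mut f: F) {
    let len = v.len();
    let mut i = 0;
    // Stage 1: nothing has been removed yet, so no element has to move and no
    // "how many were deleted" counter needs to be checked on every iteration.
    while i < len {
        if !f(&v[i]) {
            break;
        }
        i += 1;
    }
    let mut kept = i; // position of the first hole, if any
    i += 1; // the element at the old `i` was just rejected by the predicate
    // Stage 2: from here on, every retained element is unconditionally shifted left.
    while i < len {
        if f(&v[i]) {
            v.swap(kept, i);
            kept += 1;
        }
        i += 1;
    }
    v.truncate(kept);
}
```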
## Benchmarks
Before
```text
test collections::vec_deque::tests::bench_retain_half_10000 ... bench: 290,125 ns/iter (+/- 8,717)
test collections::vec_deque::tests::bench_retain_odd_10000 ... bench: 291,588 ns/iter (+/- 9,621)
test collections::vec_deque::tests::bench_retain_whole_10000 ... bench: 287,426 ns/iter (+/- 9,009)
```
After
```text
test collections::vec_deque::tests::bench_retain_half_10000 ... bench: 243,940 ns/iter (+/- 8,563)
test collections::vec_deque::tests::bench_retain_odd_10000 ... bench: 242,768 ns/iter (+/- 3,903)
test collections::vec_deque::tests::bench_retain_whole_10000 ... bench: 202,926 ns/iter (+/- 6,332)
```
Based on the current benchmark, this PR will improve the perf of `VecDeque::retain` by around 16%. For special cases, the improvement will be up to 30%.
Signed-off-by: Xuanwo <github@xuanwo.io>
For some reason, I always forget which variants are smaller and which
are larger when you derive PartialOrd on an enum. And the wording in the
current docs is not entirely clear to me.
So, I often end up making a small enum, deriving PartialOrd on it, and
then writing a `#[test]` with an assert that the top one is smaller than
the bottom one (or the other way around) to figure out which way the
deriving goes.
So then I figured, it would be great if the standard library docs just
had that example, so if I keep forgetting, at least I can figure it out
quickly by looking at std's docs.
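Something like the following is the example in mind: with `#[derive(PartialOrd)]`, variants declared earlier compare as smaller than variants declared later.
```rust
#[derive(PartialEq, PartialOrd)]
enum Size {
    Small, // declared first, so it is the smaller variant
    Large,
}

fn main() {
    assert!(Size::Small < Size::Large);
}
```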
where available use AtomicU{64,128} instead of mutex for Instant backsliding protection
This decreases the overhead of backsliding protection on x86 systems with unreliable TSC, e.g. Windows, and on aarch64 systems where 128-bit atomics are available.
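A minimal sketch of the idea (not the actual std code): keep the latest observed timestamp in a single atomic and advance it with a CAS loop instead of taking a mutex.
```rust
use std::sync::atomic::{AtomicU64, Ordering};

static LAST: AtomicU64 = AtomicU64::new(0);

fn monotonize(now: u64) -> u64 {
    let mut last = LAST.load(Ordering::Relaxed);
    loop {
        if now <= last {
            // The clock moved backwards (or stood still); report the previous value.
            return last;
        }
        match LAST.compare_exchange_weak(last, now, Ordering::Relaxed, Ordering::Relaxed) {
            Ok(_) => return now,
            Err(observed) => last = observed,
        }
    }
}

fn main() {
    assert_eq!(monotonize(10), 10);
    assert_eq!(monotonize(5), 10); // backsliding is clamped
}
```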
The following benchmarks were taken on x86_64 Linux by overriding `actually_monotonic()`; the numbers may look different on other platforms.
```
# actually_monotonic() == true
test time::tests::instant_contention_01_threads ... bench: 44 ns/iter (+/- 0)
test time::tests::instant_contention_02_threads ... bench: 44 ns/iter (+/- 0)
test time::tests::instant_contention_04_threads ... bench: 44 ns/iter (+/- 0)
test time::tests::instant_contention_08_threads ... bench: 44 ns/iter (+/- 0)
test time::tests::instant_contention_16_threads ... bench: 44 ns/iter (+/- 0)
# 1x AtomicU64
test time::tests::instant_contention_01_threads ... bench: 65 ns/iter (+/- 0)
test time::tests::instant_contention_02_threads ... bench: 157 ns/iter (+/- 20)
test time::tests::instant_contention_04_threads ... bench: 281 ns/iter (+/- 53)
test time::tests::instant_contention_08_threads ... bench: 555 ns/iter (+/- 77)
test time::tests::instant_contention_16_threads ... bench: 883 ns/iter (+/- 107)
# mutex
test time::tests::instant_contention_01_threads ... bench: 60 ns/iter (+/- 2)
test time::tests::instant_contention_02_threads ... bench: 770 ns/iter (+/- 231)
test time::tests::instant_contention_04_threads ... bench: 1,347 ns/iter (+/- 45)
test time::tests::instant_contention_08_threads ... bench: 2,693 ns/iter (+/- 114)
test time::tests::instant_contention_16_threads ... bench: 5,244 ns/iter (+/- 487)
```
Since I don't have an ARM machine with 128-bit atomics I wasn't able to benchmark the AtomicU128 implementation.
I/O safety.
Introduce `OwnedFd` and `BorrowedFd`, and the `AsFd` trait, and
implementations of `AsFd`, `From<OwnedFd>` and `From<T> for OwnedFd`
for relevant types, along with Windows counterparts for handles and
sockets.
Tracking issue: <https://github.com/rust-lang/rust/issues/87074>
RFC: <https://github.com/rust-lang/rfcs/blob/master/text/3128-io-safety.md>
Highlights:
- The doc comments at the top of library/std/src/os/unix/io/mod.rs and library/std/src/os/windows/io/mod.rs
- The new types and traits in library/std/src/os/unix/io/fd.rs and library/std/src/os/windows/io/handle.rs
- The removal of the `RawHandle` struct in the Windows impl, which had the same name as the `RawHandle` type alias; its functionality is now folded into `Handle`.
Managing five levels of wrapping (File wraps sys::fs::File wraps sys::fs::FileDesc wraps OwnedFd wraps RawFd, etc.) made for a fair amount of churn and verbose as/into/from sequences in some places. I've managed to simplify some of them, but I'm open to ideas here.
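A small sketch of how the new traits compose on Unix (the `io_safety` feature-gate name is an assumption here):
```rust
#![feature(io_safety)]
use std::fs::File;
use std::os::unix::io::{AsFd, BorrowedFd};

// The callee borrows "some open file descriptor" without taking ownership
// and without dropping down to RawFd.
fn takes_any_fd(fd: BorrowedFd<'_>) {
    println!("got {:?}", fd);
}

fn main() -> std::io::Result<()> {
    let file = File::open("/etc/hostname")?;
    takes_any_fd(file.as_fd());
    Ok(())
}
```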
r? `@joshtriplett`
Add fast path for Path::cmp that skips over long shared prefixes
```
# before
test path::tests::bench_path_cmp_fast_path_buf_sort ... bench: 60,811 ns/iter (+/- 865)
test path::tests::bench_path_cmp_fast_path_long ... bench: 6,459 ns/iter (+/- 275)
test path::tests::bench_path_cmp_fast_path_short ... bench: 1,777 ns/iter (+/- 34)
# after
test path::tests::bench_path_cmp_fast_path_buf_sort ... bench: 38,140 ns/iter (+/- 211)
test path::tests::bench_path_cmp_fast_path_long ... bench: 1,471 ns/iter (+/- 24)
test path::tests::bench_path_cmp_fast_path_short ... bench: 1,106 ns/iter (+/- 9)
```
The name (and updated documentation) make the FFI-only usage clearer, and wrapping Option<OwnedHandle> avoids the need to write a separate Drop or Debug impl.
Co-authored-by: Josh Triplett <josh@joshtriplett.org>
Add TcpStream type to TcpListener::incoming docs
## Context
While going through the "The Rust Programming Language" book (Klabnik & Nichols), the TCP server example directs us to use TcpListener::incoming. I was curious how I could pass this value to a function (before reading ahead in the book), so I looked up the docs to determine the signature.
When I opened the docs, I found https://doc.rust-lang.org/std/net/struct.TcpListener.html#method.incoming, which didn't mention TcpStream anywhere in the example.
Eventually, I clicked on https://doc.rust-lang.org/std/net/struct.TcpListener.html#method.accept in the docs (after clicking a few other locations first), and was able to surmise that the value contained TcpStream.
## Opportunity
While this type is mentioned several times in this doc, I feel that someone should be able to fully use the results of the TcpListener::incoming iterator based solely on the docs of just this method.
## Implementation
I took the code from the top-level TcpListener https://doc.rust-lang.org/std/net/struct.TcpListener.html#method.incoming and blended it with the existing docs for TcpListener::incoming https://doc.rust-lang.org/std/net/struct.TcpListener.html#method.incoming.
It does make the example a little longer, and it also introduces a little duplication. It also gives the reader the type signatures they need to move on to the next step.
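The blended example looks roughly like this (a sketch, not the exact wording added to the docs):
```rust
use std::net::{TcpListener, TcpStream};

fn handle_connection(stream: TcpStream) {
    drop(stream);
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    // Each item the iterator yields is an io::Result<TcpStream>.
    for stream in listener.incoming() {
        handle_connection(stream?);
    }
    Ok(())
}
```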
## Additional considerations
I noticed that in this doc, `handle_connection` and `handle_client` are both used to accept a TcpStream in the docs on this page. I want to standardize on one function name convention, so readers don't accidentally think two different concepts are being referenced. I didn't want to cram too much into one PR; I can update this PR to make that change, or I could send another PR (if you would like).
First attempted contribution to Rust (and I'm also still very new, hence reading through the rust book for the first time)! Would you please let me know what you think?
Update the backtrace crate in libstd
This commit updates the backtrace crate in libstd now that dependencies
have been updated to use `memchr` from the standard library as well.
This is mostly just making sure deps are up-to-date and have all the
latest-and-greatest fixes and such.
Closes rust-lang/backtrace-rs#432
This commit updates the backtrace crate in libstd now that dependencies
have been updated to use `memchr` from the standard library as well.
This is mostly just making sure deps are up-to-date and have all the
latest-and-greatest fixes and such.
Closes rust-lang/backtrace-rs#432
Related discussion in the users forum:
https://users.rust-lang.org/t/what-s-this-alleged-difference-between-arc-make-mut-and-rc-make-mut/63747?u=steffahn
Also includes a small formatting improvement in the documentation of `Rc::make_mut`.
This commit makes the two documentations in question complete analogs. The previously claimed point in which
one "differs from the behavior of" the other turns out to be incorrect, AFAIK.
One remaining inaccuracy: `Weak` pointers aren't disassociated from the allocation but only from the contained
value, i.e. in case of outstanding `Weak` pointers a new allocation is still created; only the
call to `.clone()` is avoided, and the value is instead moved from one allocation to the other.
Unbox mutexes, condvars and rwlocks on hermit
[RustyHermit](https://github.com/hermitcore/rusty-hermit) now provides movable synchronization primitives, so we are able to unbox mutexes, condvars and rwlocks.
Fix environment variable getter docs
`@RalfJung` pointed out a number of errors and suboptimal choices I made in my documentation for #86183. This PR should (hopefully) fix the problems they've identified.
Change WASI's `RawFd` from `u32` to `c_int` (`i32`).
WASI previously used `u32` as its `RawFd` type, since its "file descriptors"
are unsigned table indices, and there's no fundamental reason why WASI can't
have more than 2^31 handles.
However, this creates myriad little incompatibility problems with code
that also supports Unix platforms, where `RawFd` is `c_int`. While WASI
isn't a Unix, it often shares code with Unix, and this difference made
such shared code inconvenient. #87329 is the most recent example of such
code.
So, switch WASI to use `c_int`, which is `i32`. This will mean that code
intending to support WASI should ideally avoid assuming that negative file
descriptors are invalid, even though POSIX itself says that file descriptors
are never negative.
This is a breaking change, but `RawFd` is considered an experimental
feature in [the documentation].
[the documentation]: https://doc.rust-lang.org/stable/std/os/wasi/io/type.RawFd.html
r? `@alexcrichton`
The libs-api team agrees to allow const_trait_impl to appear in the
standard library as long as stable code cannot be broken (the impls are
properly gated); this means that if the compiler team thinks it's okay,
then it's okay.
My priority on constifying would be:
1. Non-generic impls (e.g. Default) or generic impls with no
bounds
2. Generic functions with bounds (that use const impls)
3. Generic impls with bounds
4. Impls for traits with associated types
For people opening constification PRs: please cc me and/or oli-obk.
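For illustration, a priority-1 constification would look roughly like this under the unstable feature (a sketch, not an actual library change):
```rust
#![feature(const_trait_impl)]

struct Meters(u64);

// A non-generic `Default` impl made `const`, so it can be used from const contexts.
impl const Default for Meters {
    fn default() -> Self {
        Meters(0)
    }
}
```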
Assign FIXMEs to me and remove obsolete ones
Also fixed capitalization of documentation
We also don't need to transform predicates to be non-const since we basically ignore const predicates in non-const contexts.
r? `@oli-obk`
Add future-incompat lint for `doc(primitive)`
## What is `doc(primitive)`?
`doc(primitive)` is an attribute recognized by rustdoc which adds documentation for the built-in primitive types, such as `usize` and `()`. It has been stable since Rust 1.0.
## Why change anything?
`doc(primitive)` is useless for anyone outside the standard library. Since rustdoc provides no way to combine the documentation on two different primitive items, you can only replace the docs, and since the standard library already provides extensive documentation there is no reason to do so.
While fixing rustdoc's handling of primitive items (https://github.com/rust-lang/rust/pull/87073) I discovered that even rustdoc's existing handling of primitive items was broken if you had more than two crates using it (it would pick randomly between them). That meant both:
- Keeping rustdoc's existing treatment was nigh-impossible, because it was random.
- doc(primitive) was even more useless than it would otherwise be.
The only use-case for this outside the standard library is for no-std libraries which want to link to primitives (https://github.com/rust-lang/rust/issues/73423) which is being fixed in https://github.com/rust-lang/rust/pull/87073.
https://github.com/rust-lang/rust/pull/87073 makes various breaking changes to `doc(primitive)` (breaking in the sense that they change the semantics, not in that they cause code to fail to compile). It's not possible to avoid these and still fix rustdoc's issues.
## What can we do about it?
As shown by the crater run (https://github.com/rust-lang/rust/pull/87050#issuecomment-886166706), no one is actually using doc(primitive); there wasn't a single true regression in the whole run. We can either:
1. Feature gate it completely, breaking anyone who crater missed. They can easily fix the breakage just by removing the attribute.
2. Add it to the `INVALID_DOC_ATTRIBUTES` future-incompat lint, and at the same time make it a no-op unless you add a feature gate. That would mean rustdoc has to look at the features of dependent crates, because it needs to know where primitives are defined in order to link to them.
3. Add it to `INVALID_DOC_ATTRIBUTES`, but still use it to determine where primitives come from.
4. Do nothing; the behavior will silently change in https://github.com/rust-lang/rust/pull/87073.
My preference is for 2, but I would also be happy with 1 or 3. I don't think we should silently change the behavior.
This PR currently implements 3.
Uplift the invalid_atomic_ordering lint from clippy to rustc
This is mostly just a rebase of https://github.com/rust-lang/rust/pull/79654; I've copy/pasted the text from that PR below.
r? `@lcnr` since you reviewed the last one, but feel free to reassign.
---
This is an implementation of https://github.com/rust-lang/compiler-team/issues/390.
As mentioned, in general this turns an unconditional runtime panic into a (compile time) lint failure. It has no false positives, and the only false negatives I'm aware of are if `Ordering` isn't specified directly and instead comes from an argument/constant/whatever.
As a result of it having no false positives, and the alternative always being strictly wrong, it's on as deny by default. This seems right.
In the [zulip stream](https://rust-lang.zulipchat.com/#narrow/stream/233931-t-compiler.2Fmajor-changes/topic/Uplift.20the.20.60invalid_atomic_ordering.60.20lint.20from.20clippy/near/218483957) `@joshtriplett` suggested that lang team should FCP this before landing it. Perhaps libs team cares too?
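For reference, this is the kind of code the lint rejects (the call would otherwise panic unconditionally at runtime):
```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn load_it(counter: &AtomicUsize) -> usize {
    // Denied by the lint: loads cannot have `Release` or `AcqRel` ordering.
    counter.load(Ordering::Release)
}
```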
---
Some notes on the code for reviewers / others below
## Changes from clippy
The code is changed from [the implementation in clippy](68cf94f6a6/clippy_lints/src/atomic_ordering.rs) in the following ways:
1. Uses `Symbols` and `rustc_diagnostic_item`s instead of string literals.
- It's possible I should have just invoked Symbol::intern for some of these instead? Seems better to use symbol, but it did require adding several.
2. The functions are moved to static methods inside the lint struct, as a way to namespace them.
- There's a lot of other code in that file — which I picked as the location for this lint because `@jyn514` told me that seemed reasonable.
3. Supports unstable AtomicU128/AtomicI128.
- I did this because it was almost easier to support them than not — not supporting them would have (ideally) required finding a way not to give them a `rustc_diagnostic_item`, which would have complicated an already big macro.
- These don't have tests since I wasn't sure if/how I should make tests conditional on whether or not the target has the atomic... This is to a certain extent an issue of 64bit atomics too, but 128-bit atomics are much less common. Regardless, the existing tests should be *more* than thorough enough here.
4. Minor changes like:
- grammar tweaks ("loads cannot have `Release` **and** `AcqRel` ordering" => "loads cannot have `Release` **or** `AcqRel` ordering")
- function renames (`match_ordering_def_path` => `matches_ordering_def_path`),
- avoiding clippy-specific helper methods that don't exist in rustc_lint and didn't seem worth adding for this case (for example `cx.struct_span_lint` vs clippy's `span_lint_and_help` helper).
## Potential issues
(This is just about the code in this PR, not conceptual issues with the lint or anything)
1. I'm not sure if I should have used a diagnostic item for `Ordering` and its variants (I couldn't figure out how really, so if I should do this some pointers would be appreciated).
- It seems possible that failing to do this might possibly mean there are more cases this lint would miss, but I don't really know how `match_def_path` works and if it has any pitfalls like that, so maybe not.
2. I *think* I deprecated the lint in clippy (CC `@flip1995` who asked to be notified about clippy changes in the future in [this comment](https://github.com/rust-lang/rust/pull/75671#issuecomment-718731659)) but I'm not sure if I need to do anything else there.
- I'm kind of hoping CI will catch it if I missed anything, since `x.py test src/tools/clippy` fails with a lot of errors with and without my changes (and is probably a nonsense command regardless). Running `cargo test` from src/tools/clippy also fails with unrelated errors that seem like refactorings that didn't update clippy? So, honestly, no clue.
3. I wasn't sure if the description/example I gave is good. Hopefully it is. The example is less thorough than the one from clippy here: https://rust-lang.github.io/rust-clippy/master/index.html#invalid_atomic_ordering. Let me know if/how I should change it if it needs changing.
4. It pulls in the `if_chain` crate. This crate was already used in clippy, and seems like it's used elsewhere in rustc, but I'm willing to rewrite it to not use this if needed (I'd prefer not to, all things being equal).
- Deprecate clippy::invalid_atomic_ordering
- Use rustc_diagnostic_item for the orderings in the invalid_atomic_ordering lint
- Reduce code duplication
- Give up on making enum variants diagnostic items and just look for
`Ordering` instead
I ran into tons of trouble with this because apparently the change to
store HIR attrs in a side table also gave the DefIds of the
constructor instead of the variant itself. So I had to change
`matches_ordering` to also check the grandparent of the defid as well.
- Rename `atomic_ordering_x` symbols to just the name of the variant
- Fix typos in checks - there were a few places that said "may not be
Release" in the diagnostic but actually checked for SeqCst in the lint.
- Make constant items const
- Use fewer diagnostic items
- Only look at arguments after making sure the method matches
This prevents an ICE when there aren't enough arguments.
- Ignore trait methods
- Only check Ctors instead of going through `qpath_res`
The functions take values, so this couldn't ever be anything else.
- Add if_chain to allowed dependencies
- Fix grammar
- Remove unnecessary allow
BTree: merge the complication introduced by #81486 and #86031
Also:
- Deallocate the last few tree nodes as soon as an `into_iter` iterator steps beyond the end, instead of waiting around for the drop of the iterator (just to share more code).
- Symmetric code for backward iteration.
- Mark unsafe the methods on dying handles, modelling dying handles after raw pointers: it's the caller's responsibility to use them safely.
r? `@Mark-Simulacrum`
Test and fix `size_hint` for slice’s [r]split* iterators
Adds extensive test (of `size_hint`) for all the _[r]split*_ iterators.
Fixes `size_hint` upper bound for _split_inclusive*_ iterators which was one higher than necessary for non-empty slices.
Fixes `size_hint` lower bound for _[r]splitn*_ iterators when _n == 0_, which was one too high.
**Lower bound being one too high was a logic error, violating the correctness condition of `size_hint`.**
_Edit:_ I’ve opened an issue for that bug, so this PR fixes #87978
Specialize `Vec::clone_from` for `Copy` types
This should improve performance and reduce code size.
This also improves `clone_from` for `String`, `OsString` and `PathBuf`.
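A rough sketch of the mechanism, using the full `specialization` feature purely for illustration (the actual std internals differ): the `Copy` case can reuse the existing buffer and copy in bulk.
```rust
#![feature(specialization)]
#![allow(incomplete_features)]

trait CloneFromSpec<T> {
    fn clone_from_spec(&mut self, source: &[T]);
}

impl<T: Clone> CloneFromSpec<T> for Vec<T> {
    default fn clone_from_spec(&mut self, source: &[T]) {
        // Generic fallback: clone element by element.
        self.clear();
        self.extend(source.iter().cloned());
    }
}

impl<T: Copy> CloneFromSpec<T> for Vec<T> {
    fn clone_from_spec(&mut self, source: &[T]) {
        // Copy types can be copied in bulk (effectively a memcpy).
        self.clear();
        self.extend_from_slice(source);
    }
}
```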
WASI previously used `u32` as its `RawFd` type, since its "file descriptors"
are unsigned table indices, and there's no fundamental reason why WASI can't
have more than 2^31 handles.
However, this creates myriad little incompatibility problems with code
that also supports Unix platforms, where `RawFd` is `c_int`. While WASI
isn't a Unix, it often shares code with Unix, and this difference made
such shared code inconvenient. #87329 is the most recent example of such
code.
So, switch WASI to use `c_int`, which is `i32`. This will mean that code
intending to support WASI should ideally avoid assuming that negative file
descriptors are invalid, even though POSIX itself says that file descriptors
are never negative.
This is a breaking change, but `RawFd` is considered an experimental
feature in [the documentation].
[the documentation]: https://doc.rust-lang.org/stable/std/os/wasi/io/type.RawFd.html
Implement `black_box` using intrinsic
Introduce `black_box` intrinsic, as suggested in https://github.com/rust-lang/rust/pull/87590#discussion_r680468700.
This is still codegenned as empty inline assembly for LLVM. For MIR interpretation and cranelift it's treated as identity.
cc `@Amanieu` as this is related to inline assembly
cc `@bjorn3` for rustc_codegen_cranelift changes
cc `@RalfJung` as this affects MIRI
r? `@nagisa` I suppose
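Typical use under the feature gate of the time (`bench_black_box`, a sketch):
```rust
#![feature(bench_black_box)]
use std::hint::black_box;

fn main() {
    // An identity function the optimizer is asked to treat as opaque, so the
    // computation below is not constant-folded away in a benchmark.
    let x = black_box(21u64) * 2;
    assert_eq!(x, 42);
}
```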
Adds extensive test for all the [r]split* iterators.
Fixes size_hint upper bound for split_inclusive* iterators which was one higher than necessary for non-empty slices.
Fixes size_hint lower bound for [r]splitn* iterators when n==0, which was one too high.
The new implementation allows some `memcpy`s to be optimized away,
so the uninit value in ui/sanitize/memory.rs is constructed directly
onto the return place. Therefore the sanitizer now says that the
value is allocated by `main` rather than `random`.
STD support for the ESP-IDF framework
Dear all,
This PR is implementing libStd support for the [ESP-IDF](https://github.com/espressif/esp-idf) newlib-based framework, which is the open source SDK provided by Espressif for their MCU family (esp32, esp32s2, esp32c3 and all other forthcoming ones).
Note that this PR has a [sibling PR](https://github.com/rust-lang/libc/pull/2310) against the libc crate, which implements proper declarations for all ESP-IDF APIs which are necessary for libStd support.
# Implementation approach
The ESP-IDF framework - despite being bare metal - offers a relatively complete POSIX API based on newlib: `pthread`, BSD sockets, file descriptors, and even a small file-system VFS layer. Perhaps the only significant exception is the lack of support for processes, which is to be expected of course on bare metal.
Therefore, the libStd support is implemented as a set of (hopefully small) changes to the `sys/unix` family of modules, in the form of conditional-compilation branches based either on `target_os = "espidf"` or in a couple of cases - based on `target_env = "newlib"` (the latter was already there actually and is not part of this patch).
The PR also contains two new targets:
- `riscv32imc-esp-espidf`
- `riscv32imac-esp-espidf`
... which are essentially copies of `riscv32imc-unknown-none-elf` and `riscv32imac-unknown-none-elf`, but enriched with proper `linker`, `linker_flavor`, `families`, `os`, `env` etc. specifications so that (a) the proper conditional compilation branches in libStd are selected when compiling with these targets and (b) the correct linker is used.
Since support for atomics is a precondition for libStd, the `riscv32imc-esp-espidf` target additionally is configured in such a way, so as to emit libcalls to the `__sync*` & `__atomic*` GCC functions, which are already implemented in the ESP-IDF framework. If this modification is not acceptable, we can also live with only the `riscv32imac-esp-espidf` target as well. While the RiscV chips of Espressif lack native atomics support, the relevant instructions are transparently emulated in the ESP-IDF framework using invalid instruction trap. This modification was implemented specifically with Rust support in mind.
# Target maintainers
In case this PR eventually gets merged, you can list myself as a Target Maintainer.
More importantly, Espressif (the chip vendor) is now actively involved and [embracing](https://github.com/espressif/rust-esp32-example/blob/main/docs/rust-on-xtensa.md) all [Rust-related efforts](https://github.com/esp-rs) which were originally a community effort. In light of that, I suppose `@MabezDev` - who initiated the Rust-on-Espressif efforts back in time and who now works for Espressif won't object to being listed as a maintainer as well.
**EDIT:** I was given a hint (thanks, `@Urgau`) that answering the Tier 3 policy explicitly might be helpful. Answers below.
# Tier 3 Target Policy - answers
> A proposed target or target-specific patch that substantially changes code shared with other targets (not just target-specific code) must be reviewed and approved by the appropriate team for that shared code before acceptance.
Hopefully, the changes introduced by the ESP-IDF libStd support are rather on the small side. They are completely contained within the `sys/unix` set of modules (that is, aside from the obviously necessary one-liners in the `unwind` crate and in `build.rs`).
> A tier 3 target must have a designated developer or developers (the "target maintainers") on record to be CCed when issues arise regarding the target. (The mechanism to track and CC such developers may evolve over time.)
`@ivmarkov`
`@MabezDev`
> Targets must use naming consistent with any existing targets; for instance, a target for the same CPU or OS as an existing Rust target should use the same name for that CPU or OS. Targets should normally use the same names and naming conventions as used elsewhere in the broader ecosystem beyond Rust (such as in other toolchains), unless they have a very good reason to diverge. Changing the name of a target can be highly disruptive, especially once the target reaches a higher tier, so getting the name right is important even for a tier 3 target.
The two introduced targets follow as much as possible the naming conventions of the other targets. I.e. taking the bare-metal `riscv32imac_unknown_none_elf` as a base:
* The name of the new target was derived by replacing `none` with `espidf` to designate the `target_os`.
* `_elf` was removed, as the non-bare metal targets seem not to have it
* `-newlib` was deliberately NOT added at the end, as I believe the chance of having two simultaneously active separate targets for the ESP-IDF framework with different C libraries (say, newlib vs musl) is way too small
* Finally, we replaced the middle `unknown` with `esp`, which is kind of the name of the whole chipset MCU family (and an abbreviation of Espressif, which is too long). It will stay `esp` for all RiscV32-based MCUs of the company, as they all use the riscv32imc instruction set. By necessity however (disambiguation), it will be `esp32` or `esp32s2` or `esp32s3` for the Xtensa-based MCUs, as all of these have their own variation of the Xtensa architecture. (The Xtensa targets are not part of this PR, even though they would use 1:1 the same LibStd implementation provided here, as they depend on the upstreaming of the Xtensa architecture support in LLVM; this upstreaming is currently in progress.)
There was also a preceding discussion on the topic [here](https://github.com/espressif/rust-esp32-example/issues/14).
> Target names should not introduce undue confusion or ambiguity unless absolutely necessary to maintain ecosystem compatibility. For example, if the name of the target makes people extremely likely to form incorrect beliefs about what it targets, the name should be changed or augmented to disambiguate it.
We are explicitly putting an `-espidf` suffix to designate that the target is *specifically* for Rust + ESP-IDF
> Tier 3 targets may have unusual requirements to build or use, but must not create legal issues or impose onerous legal terms for the Rust project or for Rust developers or users.
Agreed.
> The target must not introduce license incompatibilities.
To the best of our knowledge, it doesn't.
> Anything added to the Rust repository must be under the standard Rust license (MIT OR Apache-2.0).
MIT + Apache 2.0
> The target must not cause the Rust tools or libraries built for any other host (even when supporting cross-compilation to the target) to depend on any new dependency less permissive than the Rust licensing policy. This applies whether the dependency is a Rust crate that would require adding new license exceptions (as specified by the tidy tool in the rust-lang/rust repository), or whether the dependency is a native library or binary. In other words, the introduction of the target must not cause a user installing or running a version of Rust or the Rust tools to be subject to any new license requirements.
Requirements are not changed for any other target.
> If the target supports building host tools (such as rustc or cargo), those host tools must not depend on proprietary (non-FOSS) libraries, other than ordinary runtime libraries supplied by the platform and commonly used by other binaries built for the target. For instance, rustc built for the target may depend on a common proprietary C runtime library or console output library, but must not depend on a proprietary code generation library or code optimization library. Rust's license permits such combinations, but the Rust project has no interest in maintaining such combinations within the scope of Rust itself, even at tier 3.
The targets are for bare-metal environment which is not hosting build tools or a compiler.
> Targets should not require proprietary (non-FOSS) components to link a functional binary or library.
The linker used by the targets is the GCC linker from the GCC toolchain cross-compiled for riscv. GNU GPL.
> "onerous" here is an intentionally subjective term. At a minimum, "onerous" legal/licensing terms include but are not limited to: non-disclosure requirements, non-compete requirements, contributor license agreements (CLAs) or equivalent, "non-commercial"/"research-only"/etc terms, requirements conditional on the employer or employment of any particular Rust developers, revocable terms, any requirements that create liability for the Rust project or its developers or users, or any requirements that adversely affect the livelihood or prospects of the Rust project or its developers or users.
> Neither this policy nor any decisions made regarding targets shall create any binding agreement or estoppel by any party. If any member of an approving Rust team serves as one of the maintainers of a target, or has any legal or employment requirement (explicit or implicit) that might affect their decisions regarding a target, they must recuse themselves from any approval decisions regarding the target's tier status, though they may otherwise participate in discussions.
> This requirement does not prevent part or all of this policy from being cited in an explicit contract or work agreement (e.g. to implement or maintain support for a target). This requirement exists to ensure that a developer or team responsible for reviewing and approving a target does not face any legal threats or obligations that would prevent them from freely exercising their judgment in such approval, even if such judgment involves subjective matters or goes beyond the letter of these requirements.
Agreed.
> Tier 3 targets should attempt to implement as much of the standard libraries as possible and appropriate (core for most targets, alloc for targets that can support dynamic memory allocation, std for targets with an operating system or equivalent layer of system-provided functionality), but may leave some code unimplemented (either unavailable or stubbed out as appropriate), whether because the target makes it impossible to implement or challenging to implement. The authors of pull requests are not obligated to avoid calling any portions of the standard library on the basis of a tier 3 target not implementing those portions.
The targets implement libStd almost in its entirety, except for the missing support for processes, as this is a bare metal platform. The process `sys/unix` module is currently stubbed to return "not implemented" errors.
> The target must provide documentation for the Rust community explaining how to build for the target, using cross-compilation if possible. If the target supports running tests (even if they do not pass), the documentation must explain how to run tests for the target, using emulation if possible or dedicated hardware if necessary.
Target does not (yet) support running tests. We would gladly provide all documentation on how to build for the target (where?). It is currently hosted in this [README.md](https://github.com/ivmarkov/rust-esp32-std-hello) file, but will likely be moved to the [esp-rs](https://github.com/esp-rs) organization. Since the build for the target is driven by cargo and [all other tooling is downloaded automatically during the build](https://github.com/esp-rs/esp-idf-sys/blob/master/build.rs), there is no need for extensive documentation.
> Tier 3 targets must not impose burden on the authors of pull requests, or other developers in the community, to maintain the target. In particular, do not post comments (automated or manual) on a PR that derail or suggest a block on the PR based on a tier 3 target. Do not send automated messages or notifications (via any medium, including via `@)` to a PR author or others involved with a PR regarding a tier 3 target, unless they have opted into such messages.
Agreed.
> Backlinks such as those generated by the issue/PR tracker when linking to an issue or PR are not considered a violation of this policy, within reason. However, such messages (even on a separate repository) must not generate notifications to anyone involved with a PR who has not requested such notifications.
Agreed.
> Patches adding or updating tier 3 targets must not break any existing tier 2 or tier 1 target, and must not knowingly break another tier 3 target without approval of either the compiler team or the maintainers of the other tier 3 target.
To the best of our knowledge, we believe we are not breaking any other target (be it tier 1, 2 or 3).
> In particular, this may come up when working on closely related targets, such as variations of the same architecture with different features. Avoid introducing unconditional uses of features that another variation of the target may not have; use conditional compilation or runtime detection, as appropriate, to let each target run code supported by that target.
To the best of our knowledge, we have not introduced any unconditional use of a feature that affects any other target.
> If a tier 3 target stops meeting these requirements, or the target maintainers no longer have interest or time, or the target shows no signs of activity and has not built for some time, or removing the target would improve the quality of the Rust codebase, we may post a PR to remove it; any such PR will be CCed to the target maintainers (and potentially other people who have previously worked on the target), to check potential interest in improving the situation.
Agreed.
Implement Extend<(A, B)> for (Extend<A>, Extend<B>)
I oriented myself at the implementation of `Iterator::unzip` and also rewrote the impl in terms of `(A, B)::extend` after that.
Since (A, B) now also implements Extend we could also mention in the documentation of unzip that it can do "nested unzipping" (you could unzip `Iterator<Item=(A, (B, C))>` into `(Vec<A>, (Vec<B>, Vec<C>))` for example) but I'm not sure of that so I'm asking here 🙂
(P.S. I saw a couple of people asking if there is an unzip3 but there isn't. So this could be a way to get equivalent functionality)
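A small sketch of what the new impl enables:
```rust
fn main() {
    let mut out: (Vec<i32>, Vec<char>) = (Vec::new(), Vec::new());
    // Each (A, B) item is split across the two collections.
    out.extend([(1, 'a'), (2, 'b')]);
    assert_eq!(out, (vec![1, 2], vec!['a', 'b']));
}
```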
removed references to parent/child from std::thread documentation
- also clarifies how thread.join and detaching of threads works
- the previous prose implied that there is a relationship between a
spawning thread and the thread being spawned, and that "child" threads
couldn't outlive their "parents" unless detached, which is incorrect.
Added the `Option::unzip()` method
* Adds the `Option::unzip()` method to turn an `Option<(T, U)>` into `(Option<T>, Option<U>)` under the `unzip_option` feature
* Adds tests for both `Option::unzip()` and `Option::zip()`, I noticed that `.zip()` didn't have any
* Adds `#[inline]` to a few of `Option`'s methods that were missing it
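A short sketch of the new method under the `unzip_option` feature:
```rust
#![feature(unzip_option)]

fn main() {
    let pair = Some((1, "one"));
    assert_eq!(pair.unzip(), (Some(1), Some("one")));

    let none: Option<(i32, &str)> = None;
    assert_eq!(none.unzip(), (None, None));
}
```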
Constify implementations of `(Try)From` for int types
I believe this to be one of the (many?) things blocking const (Range) iterators.
~~If this is to be merged maybe that should wait until `#![feature(const_trait_impl)]` no longer needs `#![allow(incomplete_features)]`?~~ - Done
The existing documentation felt a little unhelpfully concise, so this
change tries to improve it by using longer sentences, each of which
specifies which kinds of types it applies to as early as possible. In
particular, the third item starts with “Structs ...” instead of
saying “Foo is a struct” later.
Also, the previous list items “Only the last field has a type
involving `T`” and “`T` is not part of the type of any other
fields” are, as far as I see, redundant with each other, so I removed
the latter.
Replace read_to_string with read_line in Stdin example
The current example results in infinitely reading from stdin, which can confuse newcomers trying to read from stdin.
(`@razmag` encountered this while learning the language from the docs)
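The corrected example is along these lines (a sketch): `read_line` returns after one line instead of blocking until EOF.
```rust
use std::io;

fn main() -> io::Result<()> {
    let mut input = String::new();
    io::stdin().read_line(&mut input)?;
    println!("You typed: {}", input.trim_end());
    Ok(())
}
```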
While going through the "The Rust Programming Language" book (Klabnik & Nichols), the TCP server example directs us to use TcpListener::incoming. I was curious how I could pass this value to a function (before reading ahead in the book), so I looked up the docs to determine the signature.
When I opened the docs, I found https://doc.rust-lang.org/std/net/struct.TcpListener.html#method.incoming, which didn't mention TcpStream anywhere in the example.
Eventually, I clicked on https://doc.rust-lang.org/std/net/struct.TcpListener.html#method.accept in the docs (after clicking a few other locations first), and was able to surmise that the value contained TcpStream.
## Opportunity
While this type is mentioned several times in this doc, I feel that someone should be able to fully use the results of the TcpListener::incoming iterator based solely on the docs of just this method.
## Implementation
I took the code from the top-level TcpListener https://doc.rust-lang.org/std/net/struct.TcpListener.html#method.incoming and blended it with the existing docs for TcpListener::incoming https://doc.rust-lang.org/std/net/struct.TcpListener.html#method.incoming.
It does make the example a little longer, and it also introduces a little duplication. It also gives the reader the type signatures they need to move on to the next step.
## Additional considerations
I noticed that in this doc, `handle_connection` and `handle_client` are both used to accept a TcpStream in the docs on this page. I want to standardize on one function name convention, so readers don't accidentally think two different concepts are being referenced. I didn't want to cram too much into one PR; I can update this PR to make that change, or I could send another PR (if you would like).
First attempted contribution to Rust (and I'm also still very new, hence reading through the rust book for the first time)! Would you please let me know what you think?
Stabilize Vec<T>::shrink_to
This PR stabilizes the `shrink_to` feature and closes the corresponding issue. The second point was addressed already, and no `panic!` should occur.
Closes #56431.
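Once stabilized, usage looks like this:
```rust
fn main() {
    let mut v = Vec::with_capacity(100);
    v.extend([1, 2, 3]);
    v.shrink_to(10);
    assert!(v.capacity() >= 10);
    v.shrink_to(0);
    assert!(v.capacity() >= 3); // the capacity never drops below the length
}
```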
Avoid using the `copy_nonoverlapping` wrapper through `mem::replace`.
This is a much simpler way to achieve the pre-#86003 behavior of `mem::replace` not needing dynamically-sized `memcpy`s (at least before inlining), than re-doing #81238 (which needs #86699 or something similar).
I didn't notice it until recently, but `ptr::write` already explicitly avoided using the wrapper, while `ptr::read` just called the wrapper (and was the reason for us observing any behavior change from #86003 in Rust-GPU).
<hr/>
The codegen test I've added fails without the change to `core::ptr::read` like this (ignore the `v0` mangling, I was using a worktree with it turned on by default, for this):
```llvm
13: ; core::intrinsics::copy_nonoverlapping::<u8>
14: ; Function Attrs: inlinehint nonlazybind uwtable
15: define internal void @_RINvNtCscK5tvALCJol_4core10intrinsics19copy_nonoverlappinghECsaS4X3EinRE8_25mem_replace_direct_memcpy(i8* %src, i8* %dst, i64 %count) unnamed_addr #0 {
16: start:
17: %0 = mul i64 %count, 1
18: call void @llvm.memcpy.p0i8.p0i8.i64(i8* align 1 %dst, i8* align 1 %src, i64 %0, i1 false)
not:17 !~~~~~~~~~~~~~~~~~~~~~ error: no match expected
19: ret void
20: }
```
With the `core::ptr::read` change, `core::intrinsics::copy_nonoverlapping` doesn't get instantiated and the test passes.
<hr/>
r? `@m-ou-se` cc `@nagisa` (codegen test) `@oli-obk` / `@RalfJung` (miri diagnostic changes)
Change proc_macro::Diagnostic docs
With Rust 1.54, attributes can invoke function-like macros. This allows functions generated by macros to have non-generic documentation. This pull request changes the documentation of `proc_macro::Diagnostic` to be more specific and include a link to the level.
## Example
Before:
```markdown
Adds a new child diagnostic message to `self` with the level
identified by this method’s name with the given `message`.
```
After:
```markdown
Adds a new child diagnostic message to self with the [`Level::Error`]
level, and the given `message`.
```
impl Default, Copy, Clone for std::io::Sink and Empty
The omission of `Sink: Default` is causing me a slight inconvenience in a test harness. There seems little reason for this and `Empty` not to be `Clone` and `Copy` too.
I have made all three of these insta-stable, because:
AIUI `Copy` can only be derived, and I was not able to find any examples of how to unstably derive it. I think it is probably not possible.
I hunted through the git history for precedent and found
> 79b8ad84c8
> Implement `Copy` for `IoSlice`
> https://github.com/rust-lang/rust/pull/69403
which was also insta-stable.
Fix intra doc link in hidden doc of Iterator::__iterator_get_unchecked
Recently, I edited the import list of the `core::iter::traits::iterator` module (in #85874). This results in a broken intra doc link in a hidden documentation with the effect that `RUSTDOCFLAGS='--document-private-items --document-hidden-items' x doc library/std` fails. (This can be worked around by adding `-Arustdoc::broken-intra-doc-links`; still, it’s a broken link so let’s fix it.)
`@rustbot` label C-cleanup, T-libs
Document that fs::read_dir skips . and ..
Hi,
I think this is worth noting in the docs since it differs from POSIX `readdir`. I didn’t put it under platform-specific notes because it seems to be consistent across platforms, and changing this behavior in the future could cause pretty nasty bugs.
Thanks!
- also clarifies how thread.join and detaching of threads works
- the previous prose implied that there is a relationship between a
spawning thread and the thread being spawned, and that "child" threads
couldn't outlive their parents unless detached, which is incorrect.
Hide allocator details from TryReserveError
I think there's [no need for TryReserveError to carry detailed information](https://github.com/rust-lang/rust/issues/48043#issuecomment-825139280), but I wouldn't want that issue to delay stabilization of the `try_reserve` feature.
So I'm proposing to stabilize `try_reserve` with a `TryReserveError` as an opaque structure, and if needed, expose error details later.
This PR moves the `enum` to an unstable inner `TryReserveErrorKind` that lives under a separate feature flag. `TryReserveErrorKind` could possibly be left as an implementation detail forever, and the `TryReserveError` get methods such as `allocation_size() -> Option<usize>` or `layout() -> Option<Layout>` instead, or the details could be dropped completely to make try-reserve errors just a unit struct, and thus smaller and cheaper.
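From the caller's side the stabilized surface stays small (a sketch; `try_reserve` was still feature-gated at the time):
```rust
#![feature(try_reserve)]

fn main() {
    let mut buf: Vec<u8> = Vec::new();
    match buf.try_reserve(1024) {
        Ok(()) => println!("reserved"),
        // The error is opaque: it can be displayed, but allocator details stay hidden.
        Err(e) => println!("allocation failed: {}", e),
    }
}
```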
Rollup of 9 pull requests
Successful merges:
- #87561 (thread set_name haiku implementation.)
- #87715 (Add long error explanation for E0625)
- #87727 (explicit_generic_args_with_impl_trait: fix min expected number of generics)
- #87742 (Validate FFI-safety warnings on naked functions)
- #87756 (Add back -Zno-profiler-runtime)
- #87759 (Re-use std::sealed::Sealed in os/linux/process.)
- #87760 (Promote `aarch64-apple-ios-sim` to Tier 2)
- #87770 (permit drop impls with generic constants in where clauses)
- #87780 (alloc: Use intra doc links for the reserve function)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
alloc: Use intra doc links for the reserve function
The sentence exists to highlight the performance footgun
of repeatedly calling the `reserve_exact` function.
Re-use std::sealed::Sealed in os/linux/process.
This uses `std::sealed::Sealed` in `std::os::linux::process` instead of defining new `Sealed` traits there.
Currently there is no API that allows fallible, zeroed allocation of a Vec.
`Vec::try_reserve` is not appropriate for this job since it doesn't know
whether it should zero the memory or whether arbitrary uninitialized memory is fine.
Since Box currently holds most of the zeroing/uninit/slice allocation APIs
it's the best place to add yet another entry into this feature matrix.
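A sketch of the kind of API being added; the method name `try_new_zeroed_slice` and the feature gates below are assumptions, not confirmed by the commit text:
```rust
#![feature(allocator_api, new_uninit)]

fn main() {
    // Fallible, zeroed slice allocation: returns Err on allocation failure instead of aborting.
    if let Ok(boxed) = Box::<[u8]>::try_new_zeroed_slice(1024) {
        // SAFETY: zeroed memory is a valid value for u8.
        let boxed: Box<[u8]> = unsafe { boxed.assume_init() };
        assert!(boxed.iter().all(|&b| b == 0));
    }
}
```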
rustc: Fill out remaining parts of C-unwind ABI
This commit intends to fill out some of the remaining pieces of the
C-unwind ABI. This has a number of other changes with it though to move
this design space forward a bit. Notably contained within here is:
* On `panic=unwind`, the `extern "C"` ABI is now considered as "may
unwind". This fixes a longstanding soundness issue where if you
`panic!()` in an `extern "C"` function defined in Rust that's actually
UB because the LLVM representation for the function has the `nounwind`
attribute, but then you unwind.
* Whether or not a function unwinds now mainly considers the ABI of the
function instead of first checking the panic strategy. This fixes a
miscompile of `extern "C-unwind"` with `panic=abort` because that ABI
can still unwind.
* The aborting stub for non-unwinding ABIs with `panic=unwind` has been
reimplemented. Previously this was done as a small tweak during MIR
generation, but this has been moved to a separate and dedicated MIR
pass. This new pass will, for appropriate functions and function
calls, insert a `cleanup` landing pad for any function call that may
unwind within a function that is itself not allowed to unwind. Note
that this subtly changes behavior: previously, an unwind that was
caught-to-abort would run the function's active destructors; now the
process aborts immediately.
* The `#[unwind]` attribute has been removed and all users in tests and
such are now using `C-unwind` and `#![feature(c_unwind)]`.
I think this is largely the last piece of the RFC to implement.
Unfortunately I believe this is still not stabilizable as-is because
activating the feature gate changes the behavior of the existing `extern
"C"` ABI in a way that has no replacement. My thinking for how to enable
this is that we add support for the `C-unwind` ABI on stable Rust first,
and then after it hits stable we change the behavior of the `C` ABI.
That way anyone straddling stable/beta/nightly can switch to `C-unwind`
safely.
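To make the first two bullets concrete, here is a hedged sketch (my own illustration, not code from this PR) of the difference between the two ABIs once the feature gate is active:
```rust
#![feature(c_unwind)]

// Only "C-unwind" may unwind across the ABI boundary; with this change,
// unwinding out of a plain extern "C" function aborts instead of being UB.
extern "C-unwind" fn checked_div(a: i32, b: i32) -> i32 {
    if b == 0 {
        panic!("division by zero"); // allowed to unwind into the caller
    }
    a / b
}

extern "C" fn checked_div_or_abort(a: i32, b: i32) -> i32 {
    if b == 0 {
        panic!("division by zero"); // with panic=unwind this now aborts the process
    }
    a / b
}

fn main() {
    assert_eq!(checked_div(10, 2), 5);
    assert_eq!(checked_div_or_abort(10, 2), 5);
}
```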
#[inline] slice::Iter::advance_by
https://github.com/rust-lang/rust/pull/87387#issuecomment-891942661 was marked as a regression. One of the methods in the PR was missing an inline annotation unlike all the other methods on slice iterators.
Let's see if that makes a difference.
Make wrapping_neg() use wrapping_sub(), #[inline(always)]
This is a follow-up change to the fix for #75598. It simplifies the implementation of wrapping_neg() for all integer types by just calling 0.wrapping_sub(self) and always inlines it. This leads to much less assembly code being emitted for opt-level≤1 and thus much better performance for debug-compiled code.
Background is [this discussion on the internals forum](https://internals.rust-lang.org/t/why-does-rust-generate-10x-as-much-unoptimized-assembly-as-gcc/14930).
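A minimal sketch of the idea (illustrative, not the actual std source):
```rust
// Wrapping negation expressed as a wrapping subtraction from zero;
// #[inline(always)] mirrors the attribute added in this change.
#[inline(always)]
fn wrapping_neg_i32(x: i32) -> i32 {
    0i32.wrapping_sub(x)
}

fn main() {
    assert_eq!(wrapping_neg_i32(10), -10);
    // The edge case that makes this "wrapping": negating i32::MIN wraps back to i32::MIN.
    assert_eq!(wrapping_neg_i32(i32::MIN), i32::MIN);
}
```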
Remove the aarch64 `crypto` target_feature
The subfeatures `aes` or `sha2` should be used instead.
This can't yet be done for ARM targets as some LLVM intrinsics still require `crypto`.
Also update the runtime feature detection tests in `library/std` to mirror the updates in `stdarch`. This also helps https://github.com/rust-lang/rust/issues/86941
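For illustration, what "use the subfeatures instead" looks like in code (a sketch with the intrinsic bodies omitted):
```rust
// Previously one might have written #[target_feature(enable = "crypto")];
// on aarch64 the specific subfeatures are enabled instead.
#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "aes")]
unsafe fn encrypt_block_aes(_block: &mut [u8; 16]) {
    // ... AES intrinsics from core::arch::aarch64 would go here ...
}

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "sha2")]
unsafe fn sha256_rounds(_state: &mut [u32; 8], _block: &[u8; 64]) {
    // ... SHA-256 intrinsics would go here ...
}
```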
r? ``@Amanieu``
Remove space after negative sign in Literal to_string
Negative proc macro literal tokens used to be printed with a space between the minus sign and the magnitude. That's because `impl ToString for Literal` used to convert the Literal into a TokenStream, which splits the minus sign into a separate Punct token.
```rust
Literal::isize_unsuffixed(-10).to_string() // "- 10"
```
This PR updates the ToString impl to directly use `rustc_ast::token::Lit`'s ToString, which matches the way Rust negative numbers are idiomatically written without a space.
```rust
Literal::isize_unsuffixed(-10).to_string() // "-10"
```
Add `core::stream::from_iter`
_Tracking issue: https://github.com/rust-lang/rust/issues/81798_
This PR implements `std::stream::from_iter`, as outlined in the _"Converting an Iterator to a Stream"_ section of the [Stream RFC](https://github.com/nellshamrell/rfcs/blob/add-async-stream-rfc/text/0000-async-stream.md#converting-an-iterator-to-a-stream). This function enables converting an `Iterator` to a `Stream` by wrapping each item in the iterator with a `Poll::Ready` instance.
r? `@tmandry`
cc/ `@rust-lang/libs` `@rust-lang/wg-async-foundations`
## Example
Being able to convert from an iterator into a stream is useful when refactoring from iterative loops into a more functional adapter-based style. This is fairly common when using more complex `filter` / `map` / `find` chains. In its basic form this conversion looks like this:
**before**
```rust
let mut output = vec![];
for item in my_vec {
let out = do_io(item).await?;
output.push(out);
}
```
**after**
```rust
use std::stream;
let output = stream::from_iter(my_vec.iter())
.map(async |item| do_io(item).await)
.collect()?;
```
Having a way to convert an `Iterator` to a `Stream` is essential in enabling this flow.
## Implementation Notes
This PR makes use of `unsafe {}` to pin an item. Currently we're having conversations on the libs stream in Zulip about how to bring `pin-project` in as a dependency of `core` so we can omit the `unsafe {}`.
This PR also includes a documentation block which references `Stream::next` which currently doesn't exist in the stdlib (originally included in the RFC and PR, but later omitted because of an unresolved issue). `stream::from_iter` can't stabilize before `Stream` does, and there's still a chance we may stabilize `Stream` with a `next` method. So this PR includes documentation referencing that method, which we can remove as part of stabilization if by any chance we don't have `Stream::next`.
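For reference, a simplified sketch of the conversion (illustrative only: it sidesteps the `unsafe {}` pinning mentioned above by bounding on `Unpin`, and it spells out a local `Stream` trait so the snippet is self-contained):
```rust
use core::pin::Pin;
use core::task::{Context, Poll};

// Local stand-in for the unstable `Stream` trait, so the sketch compiles on its own.
pub trait Stream {
    type Item;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}

pub struct FromIter<I> {
    iter: I,
}

pub fn from_iter<I: IntoIterator>(iter: I) -> FromIter<I::IntoIter> {
    FromIter { iter: iter.into_iter() }
}

impl<I: Iterator + Unpin> Stream for FromIter<I> {
    type Item = I::Item;

    fn poll_next(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // Every item from the iterator is already available, so each poll just
        // wraps the next item in `Poll::Ready`; `None` ends the stream.
        Poll::Ready(self.get_mut().iter.next())
    }
}
```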
## Alternatives Considered
### `impl IntoStream for T: IntoIterator`
An obvious question would be whether we could make it so that every iterator can automatically be converted into a stream by calling `into_stream` on it. The answer is: "perhaps, but it could cause type issues". Types in `std::collections` may want to provide manual implementations of `IntoStream` and `IntoIter`, which wouldn't be possible if the conversion were provided through a catch-all blanket trait.
Possibly an alternative such as `impl IntoStream for T: Iterator` could work, but it feels somewhat restrictive. In the end, converting an iterator to a stream is likely to be a bit of a niche case. And even then, **adding a standalone function to convert an `Iterator` into a `Stream` would not be mutually exclusive with a blanket implementation**.
### Naming
The exact name can be debated in the period before stabilization. But I've chosen `stream::from_iter` rather than `stream::iter` because we are _creating a stream from an iterator_ rather than _iterating a stream_. We also expect to add a stream counterpart to `iter::from_fn` later on (blocked on async closures), and having `stream::from_fn` and `stream::from_iter` would feel like a consistent pair. It also has prior art in `async_std::stream::from_iter`.
## Future Directions
### Stream conversions for collections
This is a building block towards implementing `stream/stream_mut/into_stream` methods for `std::collections`, `std::vec`, and more. This would allow even quicker refactorings from using loops to using iterator adapters by omitting the import altogether:
**before**
```rust
use std::stream;
let output = stream::from_iter(my_vec.iter())
.map(async |item| do_io(item).await)
.collect()?;
```
**after**
```rust
let output = my_vec
.stream()
.map(async |item| do_io(item).await)
.collect()?;
```
Add convenience method for handling ipv4-mapped addresses by canonicalizing them
This simplifies checking common properties in an address-family-agnostic
way since #86335 commits to not checking IPv4 semantics
of IPv4-mapped addresses in the `Ipv6Addr` property methods.
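A brief usage sketch of the convenience this adds, assuming a `to_canonical`-style method (unstable at the time; the exact name may differ):
```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

// Address-family-agnostic loopback check: canonicalize IPv4-mapped IPv6
// addresses before inspecting their properties.
fn is_loopback_any_family(addr: IpAddr) -> bool {
    match addr {
        IpAddr::V4(v4) => v4.is_loopback(),
        // `::ffff:127.0.0.1` canonicalizes to `127.0.0.1`, which is a loopback.
        IpAddr::V6(v6) => v6.to_canonical().is_loopback(),
    }
}

fn main() {
    let mapped = IpAddr::V6("::ffff:127.0.0.1".parse::<Ipv6Addr>().unwrap());
    let plain = IpAddr::V4(Ipv4Addr::LOCALHOST);
    assert!(is_loopback_any_family(mapped));
    assert!(is_loopback_any_family(plain));
}
```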
Commit to not supporting IPv4-in-IPv6 addresses
Stabilization of the `ip` feature has for a long time been blocked on the question of whether Rust should support handling "IPv4-in-IPv6" addresses: should the various `Ipv6Address` property methods take IPv4-mapped or IPv4-compatible addresses into account. See also the IPv4-in-IPv6 Address Support issue #85609 and #69772 which originally asked the question.
# Overview
In the recent PR #85655 I proposed changing `is_loopback` to take IPv4-mapped addresses into account, so `::ffff:127.0.0.1` would be recognized as a loopback address. However, due to the points that came up in that PR, I alternatively propose the following: keeping the current behaviour and committing to not assigning any special meaning to IPv4-in-IPv6 addresses, other than what the standards prescribe. This would apply to the stable method `is_loopback`, but also to currently unstable methods like `is_global` and `is_documentation`, and to any future methods. This is implemented in this PR as a change in documentation, specifically the following section:
> Both types of addresses are not assigned any special meaning by this implementation, other than what the relevant standards prescribe. This means that an address like `::ffff:127.0.0.1`, while representing an IPv4 loopback address, is not itself an IPv6 loopback address; only `::1` is. To handle these so called "IPv4-in-IPv6" addresses, they have to first be converted to their canonical IPv4 address.
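In code, the quoted behaviour amounts to the following (using only stable `Ipv6Addr` methods):
```rust
use std::net::Ipv6Addr;

fn main() {
    let mapped: Ipv6Addr = "::ffff:127.0.0.1".parse().unwrap();
    // The IPv6 property methods do not look through IPv4-mapped addresses:
    assert!(!mapped.is_loopback());
    // Only `::1` is an IPv6 loopback address.
    assert!(Ipv6Addr::LOCALHOST.is_loopback());
}
```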
# Discussion
In the discussion for or against supporting IPv4-in-IPv6 addresses, the question of what would be least surprising to users of other languages has come up several times. At first it seemed that most other big languages supported IPv4-in-IPv6 addresses (or at least considered `::ffff:127.0.0.1` a loopback address). However, after further investigation it appears that supporting IPv4-in-IPv6 addresses comes down to how a language represents addresses. .Net and Go do not have a separate type for IPv4 or IPv6 addresses, and do consider `::ffff:127.0.0.1` a loopback address. Java and Python, which do have separate types, do not consider `::ffff:127.0.0.1` a loopback address. Seeing as Rust has the separate `Ipv6Addr` type, it would make sense to also not support IPv4-in-IPv6 addresses. Note that this focuses on IPv4-mapped addresses; no other language handles IPv4-compatible addresses.
Another issue that was raised is how useful supporting these IPv4-in-IPv6 addresses would be in practice. Again taking the example of `::ffff:127.0.0.1`: considering it a loopback address isn't very useful, since to use it with most of the socket APIs it has to be converted to an IPv4 address anyway. From that perspective it would be better to instead provide better ways of doing this conversion, like stabilizing `to_ipv4_mapped` or introducing a `to_canonical` method.
A point in favour of not supporting IPv4-in-IPv6 addresses is that that is the behaviour Rust has always had, and that supporting it would require changing already stable functions like `is_loopback`. This also keeps the documentation of these functions simpler, as we only have to refer to the relevant definitions in the IPv6 specification.
# Decision
To make progress on the `ip` feature, a decision needs to be made on whether or not to support IPv4-in-IPv6 addresses.
There are several options:
- Keep the current implementation and commit to never supporting IPv4-in-IPv6 addresses (accept this PR).
- Support IPv4-in-IPv6 addresses in some/all `IPv6Addr` methods (accept PR #85655).
- Keep the current implementation but do not commit to anything yet (reject both this PR and PR #85655); this entire issue will however come up again during the stabilization of several methods under the `ip` feature.
There are more options, like supporting IPv4-in-IPv6 addresses in `IpAddr` methods instead, but to my knowledge those haven't been seriously argued for by anyone.
There is currently an FCP ongoing on PR #85655. I would ask the libs team for an alternative FCP on this PR as well, which if completed means the rejection of PR #85655, and the decision to commit to not supporting IPv4-in-IPv6 addresses.
If anyone feels there is not enough evidence yet to make the decision for or against supporting IPv4-in-IPv6 addresses, let me know and I'll do whatever I can to resolve it.