Windows: Load synch functions together
Attempt to load all the required sync functions and fail if any one of them fails.
This fixes a FIXME by going back to optional loading of `WakeByAddressSingle`.
Also reintroduces a macro for optional loading of functions but keeps it separate from the fallback macro rather than having that do two different jobs.
r? `@thomcc`
Fix trailing space showing up in example
The current text is rendered as: U+005B ..= U+0060 ``[ \ ] ^ _ ` ``, or (**note the final space!**)
This patch changes that to render as: U+005B ..= U+0060 `` [ \ ] ^ _ ` ``, or (**note no final space!**)
The reason is that CommonMark has a solution for starting or ending inline code with a backtick (grave accent): padding both sides with a space makes that padding disappear.
Expose `Utf8Lossy` as `Utf8Chunks`
This PR changes the feature for `Utf8Lossy` from `str_internals` to `utf8_lossy` and improves the API. This is done to eventually expose the API as stable.
Proposal: rust-lang/libs-team#54
Tracking Issue: #99543
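For illustration, here is a rough sketch of how the chunk-based lossy decoding can be used. It assumes the shape the API eventually took (`<[u8]>::utf8_chunks()` with `valid()` and `invalid()` accessors); the exact entry point in this PR may differ.
```rust
// Hedged sketch: manual lossy decoding via UTF-8 chunks. The `utf8_chunks()`
// entry point and accessor names follow the eventual stable API and may not
// match this PR exactly.
fn to_string_lossy(bytes: &[u8]) -> String {
    let mut out = String::new();
    for chunk in bytes.utf8_chunks() {
        // Longest valid UTF-8 prefix of this chunk.
        out.push_str(chunk.valid());
        // A non-empty invalid part marks a decoding error.
        if !chunk.invalid().is_empty() {
            out.push(char::REPLACEMENT_CHARACTER);
        }
    }
    out
}

fn main() {
    assert_eq!(to_string_lossy(b"ab\xFFcd"), "ab\u{FFFD}cd");
}
```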
Avoid zeroing a 1kb stack buffer on every call to `std::sys::windows::fill_utf16_buf`
I've also tried to be slightly more careful about integer overflows, although in practice this is likely still not handled ideally.
r? `@ChrisDenton`
Refactor iteration logic in the `Flatten` and `FlatMap` iterators
The `Flatten` and `FlatMap` iterators both delegate to `FlattenCompat`:
```rust
struct FlattenCompat<I, U> {
    iter: Fuse<I>,
    frontiter: Option<U>,
    backiter: Option<U>,
}
```
Every individual iterator method that `FlattenCompat` implements needs to carefully manage this state, checking whether the `frontiter` and `backiter` are present, and storing the current iterator appropriately if iteration is aborted. This has led to methods such as `next`, `advance_by`, and `try_fold` all having similar code for managing the iterator's state.
I have extracted this common logic of iterating the inner iterators with the option to exit early into an `iter_try_fold` method:
```rust
impl<I, U> FlattenCompat<I, U>
where
    I: Iterator<Item: IntoIterator<IntoIter = U>>,
{
    fn iter_try_fold<Acc, Fold, R>(&mut self, acc: Acc, fold: Fold) -> R
    where
        Fold: FnMut(Acc, &mut U) -> R,
        R: Try<Output = Acc>,
    { ... }
}
```
It passes each of the inner iterators to the given function as long as it keeps succeeding. It takes care of managing `FlattenCompat`'s state, so that the actual `Iterator` methods don't need to. The resulting code that makes use of this abstraction is much more straightforward:
```rust
fn next(&mut self) -> Option<U::Item> {
    #[inline]
    fn next<U: Iterator>((): (), iter: &mut U) -> ControlFlow<U::Item> {
        match iter.next() {
            None => ControlFlow::CONTINUE,
            Some(x) => ControlFlow::Break(x),
        }
    }
    self.iter_try_fold((), next).break_value()
}
```
Note that despite being implemented in terms of `iter_try_fold`, `next` is still able to benefit from `U`'s `next` method. It therefore does not take the performance hit that implementing `next` directly in terms of `Self::try_fold` causes (in some benchmarks).
This PR also adds `iter_try_rfold` which captures the shared logic of `try_rfold` and `advance_back_by`, as well as `iter_fold` and `iter_rfold` for folding without early exits (used by `fold`, `rfold`, `count`, and `last`).
Benchmark results:
```
                                           before             after
bench_flat_map_sum                        423,255 ns/iter     414,338 ns/iter
bench_flat_map_ref_sum                  1,942,139 ns/iter   2,216,643 ns/iter
bench_flat_map_chain_sum                1,616,840 ns/iter   1,246,445 ns/iter
bench_flat_map_chain_ref_sum            4,348,110 ns/iter   3,574,775 ns/iter
bench_flat_map_chain_option_sum           780,037 ns/iter     780,679 ns/iter
bench_flat_map_chain_option_ref_sum     2,056,458 ns/iter     834,932 ns/iter
```
I added the last two benchmarks specifically to demonstrate an extreme case where `FlatMap::next` can benefit from custom internal iteration of the outer iterator, so take it with a grain of salt. We should probably do a perf run to see if the changes to `next` are worth it in practice.
Don't derive `PartialEq::ne`.
Currently we skip deriving `PartialEq::ne` for C-like (fieldless) enums
and empty structs, thus relying on the default `ne`. This behaviour is
unnecessarily conservative, because the `PartialEq` docs say this:
> Implementations must ensure that eq and ne are consistent with each other:
>
> `a != b` if and only if `!(a == b)` (ensured by the default
> implementation).
This means that the default implementation (`!(a == b)`) is always good
enough. So this commit changes things such that `ne` is never derived.
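For illustration, here is a hand-written sketch of roughly what the derive now generates for a fieldless enum (not the exact expansion; the real derive's body differs):
```rust
// Sketch: only `eq` is emitted; `ne` falls back to the trait's default
// `!self.eq(other)`.
enum Color { Red, Green, Blue }

impl PartialEq for Color {
    fn eq(&self, other: &Self) -> bool {
        core::mem::discriminant(self) == core::mem::discriminant(other)
    }
    // no derived `fn ne` any more
}

fn main() {
    // `!=` goes through the default `ne` implementation.
    assert!(Color::Red != Color::Green);
}
```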
The motivation for this change is that not deriving `ne` reduces compile
times and binary sizes.
Observable behaviour may change if a user has defined a type `A` with an
inconsistent `PartialEq` and then defines a type `B` that contains an
`A` and also derives `PartialEq`. Such code is already buggy and
preserving bug-for-bug compatibility isn't necessary.
Two side-effects of the change:
- There is only one error message produced for types where `PartialEq`
cannot be derived, instead of two.
- For coverage reports, some warnings about generated `ne` methods not
being executed have disappeared.
Both side-effects seem fine, and possibly preferable.
unwind: don't build dependency when building for Miri
This is basically re-submitting https://github.com/rust-lang/rust/pull/94813.
In that PR there was a suggestion to instead have bootstrap set a `RUST_CHECK` env var and use that rather than doing something Miri-specific. However, such an env var would mean that when switching between `./x.py check` and `./x.py build`, the build script gets re-run each time, which doesn't seem good. So I think for now checking for Miri probably causes fewer problems.
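For reference, the check amounts to something like this in the build script (a sketch; the `CARGO_CFG_MIRI` spelling is the Cargo-provided `CARGO_CFG_*` form and may differ from the actual patch):
```rust
// build.rs sketch: skip building the native dependency when compiling for
// Miri, since Miri never runs the resulting native code anyway.
fn main() {
    if std::env::var_os("CARGO_CFG_MIRI").is_some() {
        return;
    }
    // ... configure and build the C dependency as before ...
}
```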
r? `@Mark-Simulacrum`
Simple Clamp Function
I thought this was more robust and easier to read. I also allowed this function to return early in order to skip the extra bound check (I'm sure the difference is negligible). I'm not sure if there was a reason for binding `self` to `x`; if so, please correct me.
Simple Clamp Function for f64
I thought this was more robust and easier to read. I also allowed this function to return early in order to skip the extra bound check (I'm sure the difference is negligible). I'm not sure if there was a reason for binding `self` to `x`; if so, please correct me.
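For illustration, a stand-alone sketch of the shape being described (not the exact std method): once one bound applies, the check against the other bound is skipped.
```rust
// Sketch of a clamp on a mutable value; the assert encodes the usual
// precondition that the bounds are ordered and not NaN.
fn clamp_f64(mut x: f64, min: f64, max: f64) -> f64 {
    assert!(min <= max, "min > max, or either was NaN");
    if x < min {
        x = min;
    } else if x > max {
        x = max;
    }
    x
}

fn main() {
    assert_eq!(clamp_f64(2.5, 0.0, 1.0), 1.0);
}
```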
Floating point clamp test
f32 clamp using mut self
f64 clamp using mut self
Update library/core/src/num/f32.rs
Update f64.rs
Update x86_64-floating-point-clamp.rs
Update src/test/assembly/x86_64-floating-point-clamp.rs
Update x86_64-floating-point-clamp.rs
Co-Authored-By: scottmcm <scottmcm@users.noreply.github.com>
Update the minimum external LLVM to 13
With this change, we'll have stable support for LLVM 13 through 15 (pending release).
For reference, the previous increase to LLVM 12 was #90175.
r? `@nagisa`
fix(iter::skip): Optimize `next` and `nth` implementations of `Skip`
This avoids calling `nth`/`next` or `nth`/`nth` to first skip elements and then get the next one (unless necessary due to `usize` overflow).
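A minimal sketch of the idea on a stand-in type (not the actual std internals): a single `nth` call both skips the pending elements and yields the next one.
```rust
// Stand-in `Skip` to illustrate the optimization; `n` is the number of
// elements still to be skipped.
struct Skip<I> { iter: I, n: usize }

impl<I: Iterator> Iterator for Skip<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        if self.n > 0 {
            let to_skip = std::mem::take(&mut self.n);
            // `nth(to_skip)` consumes `to_skip` items and returns the one
            // after them, replacing the old `nth` + `next` pair.
            self.iter.nth(to_skip)
        } else {
            self.iter.next()
        }
    }
}

fn main() {
    let mut it = Skip { iter: 0..10, n: 3 };
    assert_eq!(it.next(), Some(3)); // one `nth` call skipped 0, 1, 2
    assert_eq!(it.next(), Some(4));
}
```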
Fix HorizonOS regression in FileTimes
The changes in #98246 caused a regression for multiple Newlib-based systems. This is just a fix adding HorizonOS to the list of targets which require a workaround.
``@AzureMarker`` ``@ian-h-chamberlain``
r? ``@nagisa``
Add `Iterator::array_chunks` (take N+1)
A revival of https://github.com/rust-lang/rust/pull/92393.
r? `@Mark-Simulacrum`
cc `@rossmacarthur` `@scottmcm` `@the8472`
I've tried to address most of the review comments on the previous attempt. The only thing I didn't address is the `try_fold` implementation; I've left the "custom" one for now, as I'm not sure exactly what it should use.
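For reference, a small usage sketch of the adapter (nightly-only behind `iter_array_chunks` at the time of this PR):
```rust
#![feature(iter_array_chunks)]

fn main() {
    // Yields fixed-size arrays; `next` returns `None` once a full chunk
    // can no longer be formed ("um" is left over here).
    let mut chunks = "loremipsum".chars().array_chunks::<4>();
    assert_eq!(chunks.next(), Some(['l', 'o', 'r', 'e']));
    assert_eq!(chunks.next(), Some(['m', 'i', 'p', 's']));
    assert_eq!(chunks.next(), None);
}
```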
promote debug_assert to assert when possible and useful
This PR fixes a very old issue (https://github.com/rust-lang/rust/issues/94705) by clarifying and improving the POSIX error checking. Some of the checks are skipped because they provide no benefit, but I'm sure this can open some interesting discussion.
Fixes https://github.com/rust-lang/rust/issues/94705
cc: `@tavianator`
cc: `@cuviper`
make raw_eq precondition more restrictive
Specifically, don't allow comparing pointers that way. Comparing pointers is subtle because you have to talk about what happens to the provenance.
This matches what [Miri already implements](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=9eb1dfb8a61b5a2d4a7cee43df2717af), and all existing users are fine with this.
If `raw_eq` on pointers is ever desired, we can adjust the intrinsic spec and Miri implementation as needed, but for now that just seems unnecessary. Also, this is a const intrinsic, and in const, comparing pointers this way is *not possible* -- so if we allow the intrinsic to compare pointers in general, we need to impose an extra restriction saying that in const-context, pointers are *not* okay.
linux: Use `pthread_setname_np` instead of `prctl`
This function is available on Linux since glibc 2.12, musl 1.1.16, and
uClibc 1.0.20. The main advantage over `prctl` is that it properly
represents the pointer argument, rather than a multi-purpose `long`,
so we're better representing strict provenance (#95496).
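A hedged sketch of the call via the `libc` crate (not the std internals); on Linux the kernel limits thread names to 15 bytes plus the NUL terminator.
```rust
// Sketch: set the current thread's name on Linux. Unlike
// prctl(PR_SET_NAME, ...), pthread_setname_np takes a typed pointer
// argument rather than smuggling it through a c_long.
fn set_current_thread_name(name: &str) {
    let cname = std::ffi::CString::new(name).expect("no interior NUL bytes");
    unsafe {
        libc::pthread_setname_np(libc::pthread_self(), cname.as_ptr());
    }
}

fn main() {
    set_current_thread_name("worker");
}
```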
Replace pointer casting in hashmap_random_keys with safe code
The old code was unnecessarily unsafe and relied on the layout of a tuple always being the same as an array of the same size (which might not hold with `-Z randomize-layout`).
The replacement has [identical codegen](https://rust.godbolt.org/z/qxsvdb8nx), so it seems like a reasonable change.
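The safe pattern looks roughly like this (a sketch, with the platform byte-filling routine abstracted as a closure):
```rust
// Sketch: fill a plain byte array and assemble the two u64 keys from it,
// instead of reinterpreting a `(u64, u64)` as a byte slice.
fn keys_from(fill_bytes: impl FnOnce(&mut [u8])) -> (u64, u64) {
    let mut buf = [0u8; 16];
    fill_bytes(&mut buf);
    let k1 = u64::from_ne_bytes(buf[..8].try_into().unwrap());
    let k2 = u64::from_ne_bytes(buf[8..].try_into().unwrap());
    (k1, k2)
}

fn main() {
    let (k1, k2) = keys_from(|buf| buf.copy_from_slice(b"0123456789abcdef"));
    assert_ne!(k1, k2);
}
```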
Reoptimize layout array
This way it's one check instead of two, so hopefully (cc #99117) it'll be simpler for rustc perf too 🤞
Quick demonstration:
```rust
pub fn demo(n: usize) -> Option<Layout> {
    Layout::array::<i32>(n).ok()
}
```
Nightly: <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=e97bf33508aa03f38968101cdeb5322d>
```nasm
mov rax, rdi
mov ecx, 4
mul rcx
seto cl
movabs rdx, 9223372036854775805
xor esi, esi
cmp rax, rdx
setb sil
shl rsi, 2
xor edx, edx
test cl, cl
cmove rdx, rsi
ret
```
This PR (note no `mul`, in addition to being much shorter):
```nasm
xor edx, edx
lea rax, [4*rcx]
shr rcx, 61
sete dl
shl rdx, 2
ret
```
This is built atop `@CAD97`'s #99136; the new changes are cb8aba66ef6a0e17f08a0574e4820653e31b45a0.
I added a bunch more tests for `Layout::from_size_align` and `Layout::array` too.
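For reference, the "one check instead of two" shape is roughly the following (a sketch of the idea, not the exact std code):
```rust
// Sketch: a single comparison against a precomputed bound replaces a
// checked multiply plus a separate isize::MAX overflow check.
fn array_size(elem_size: usize, n: usize) -> Option<usize> {
    // Allocations must not exceed isize::MAX bytes.
    if elem_size != 0 && n > (isize::MAX as usize) / elem_size {
        return None;
    }
    Some(elem_size * n)
}

fn main() {
    assert_eq!(array_size(4, 3), Some(12));
    assert_eq!(array_size(8, usize::MAX), None);
}
```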
Inline CStr::from_bytes_with_nul_unchecked::rt_impl
Currently `CStr::from_bytes_with_nul_unchecked::rt_impl` is not being inlined. The following function:
```rust
pub unsafe fn from_bytes_with_nul_unchecked(bytes: &[u8]) {
    CStr::from_bytes_with_nul_unchecked(bytes);
}
```
Outputs the following assembly on current nightly
```asm
example::from_bytes_with_nul_unchecked:
jmp qword ptr [rip + _ZN4core3ffi5c_str4CStr29from_bytes_with_nul_unchecked7rt_impl17h026f29f3d6a41333E@GOTPCREL]
```
Meanwhile on beta this provides the following assembly:
```asm
example::from_bytes_with_nul_unchecked:
ret
```
This pull request adds an `#[inline]` annotation to `rt_impl` to fix a code generation regression for `CStr::from_bytes_with_nul_unchecked`.
docs: remove repetition in `is_numeric` function docs
In https://github.com/rust-lang/rust/pull/99628 we introduce new docs for the `is_numeric` function, and this is a follow-up PR that removes some unnecessary repetition that may be introduced by some rebasing.
`@rustbot` r? `@joshtriplett`
Stabilize backtrace
This PR stabilizes the `std::backtrace` module. As of #99431, the `Error::backtrace` item has been removed, and so the rest of the backtrace feature is set to be stabilized.
Previous discussion can be found in #72981, #3156.
Stabilized API summary:
```rust
pub mod std {
    pub mod backtrace {
        pub struct Backtrace { }
        pub enum BacktraceStatus {
            Unsupported,
            Disabled,
            Captured,
        }
        impl fmt::Debug for Backtrace {}
        impl Backtrace {
            pub fn capture() -> Backtrace;
            pub fn force_capture() -> Backtrace;
            pub const fn disabled() -> Backtrace;
            pub fn status(&self) -> BacktraceStatus;
        }
        impl fmt::Display for Backtrace {}
    }
}
```
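A minimal usage sketch (hedged): `capture` respects the `RUST_BACKTRACE`/`RUST_LIB_BACKTRACE` environment variables, while `force_capture` ignores them.
```rust
use std::backtrace::{Backtrace, BacktraceStatus};

fn main() {
    let bt = Backtrace::capture();
    match bt.status() {
        // Only print if capturing was enabled and supported.
        BacktraceStatus::Captured => println!("{bt}"),
        _ => println!("backtrace not captured"),
    }
}
```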
`@yaahc`
Update doc comments to make the guarantee explicit. However, some implementations do not carry the statement themselves:
* `HashMap`, `HashSet`: require guarantees on hashbrown side.
* `PathBuf`: simply redirecting to `OsString`.
Fixes #99606.
test: skip terminfo parsing in Miri
Terminfo parsing takes a significant amount of time in Miri, making libtest startup very slow. To work around that, Miri currently unsets the `TERM` variable. However, this means we don't get colors in `cargo miri test`.
So I propose we add some logic in libtest that skips parsing terminfo files under Miri, and just uses the regular basic coloring commands (taken from the `colored` crate).
As far as I can see, these two commands are all that libtest ever needs from terminfo, so Miri doesn't even lose any functionality through this. If you want I can entirely remove the terminfo parsing code and just use these commands instead.
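For reference, those "basic coloring commands" boil down to raw ANSI escape sequences, roughly:
```rust
// Sketch of the fallback coloring: raw ANSI escape sequences instead of
// looking the codes up in a terminfo database.
const GREEN: &str = "\x1b[32m";
const RED: &str = "\x1b[31m";
const RESET: &str = "\x1b[0m";

fn main() {
    println!("{GREEN}ok{RESET} / {RED}FAILED{RESET}");
}
```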
Cc https://github.com/rust-lang/miri/issues/2292 `@saethlin`
Remove Windows function preloading
After `@Mark-Simulacrum` asked me to provide guidance for when optionally imported functions should be preloaded, I realised my justifications were now quite weak. I think the strongest argument that can be made is that it avoids some degree of nondeterminism when calling these functions (insofar as system API calls can be said to be deterministic). However, I don't think that's particularly convincing unless there's a real world use case where it matters. Further discussion with `@thomcc` has strengthened my feeling that preloading isn't really needed.
Note that `WaitOnAddress` needed some adjustment to work without preloading. I opted not to use a macro for this special case as it seemed silly to do so for just one thing (and I don't like macros tbh).