Add try_reserve and try_reserve_exact for OsString
Add `try_reserve` and `try_reserve_exact` for OsString.
Part of https://github.com/rust-lang/rust/issues/91789
I will squash the commits once the PR is ready to merge.
Signed-off-by: Xuanwo <github@xuanwo.io>
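For context, a minimal sketch of the intended usage (at the time of this PR the `OsString` methods are unstable, so a nightly feature gate is also required):
```rust
use std::collections::TryReserveError;
use std::ffi::OsString;

// Fallibly grow an OsString before appending, instead of aborting on allocation failure.
fn append_checked(base: &str, extra: &str) -> Result<OsString, TryReserveError> {
    let mut s = OsString::from(base);
    s.try_reserve(extra.len())?;
    s.push(extra);
    Ok(s)
}

fn main() {
    let joined = append_checked("/tmp/", "file.txt").unwrap();
    assert_eq!(joined, OsString::from("/tmp/file.txt"));
}
```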
It appears `find_max_slow` comes from the `BinaryHeap` docs, where the
`try_reserve` example is a slow implementation of `find_max`. It has no
relevance to this code in `OsString`, though.
The crate name is already set in `Cargo.toml`. The comment says there is
some logic in the compiler that reads `#![crate_name]` and not
`--crate-name`, but I can't find it. Removing it seems to work fine.
Remove `maybe_uninit_extra` feature from Vec docs
In `Vec`, two doc tests are using `MaybeUninit::write`, which was stabilized in 1.55. This makes the docs' usage of the `maybe_uninit_extra` feature unnecessary.
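For reference, a doc-test-sized example that compiles on stable (1.55+) without the feature gate:
```rust
use std::mem::MaybeUninit;

fn main() {
    let mut slot = MaybeUninit::<u32>::uninit();
    // `MaybeUninit::write` has been stable since 1.55, so no `maybe_uninit_extra` gate is needed.
    slot.write(42);
    // SAFETY: `slot` was initialized by the `write` above.
    assert_eq!(unsafe { slot.assume_init() }, 42);
}
```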
Add `#[inline]` modifier to `TypeId::of`
It was already being inlined, but on my test case that only happened in the 4th inliner pass.
With the `#[inline]` attribute it happens in the 2nd pass.
Closes #74362
RawVec: don't recompute capacity after allocating.
Currently it sets the capacity to `ptr.len() / mem::size_of::<T>()`
after any buffer allocation/reallocation. This would be useful if
allocators ever returned a `NonNull<[u8]>` with a size larger than
requested. But this never happens, so it's not useful.
Removing this slightly reduces the size of generated LLVM IR, and
slightly speeds up the hot path of `RawVec` growth.
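To illustrate the redundancy, a standalone sketch (not the actual `RawVec` code): the recomputed value always round-trips back to the capacity that was requested.
```rust
use std::alloc::Layout;
use std::mem;

// What the removed code effectively did: derive the capacity from the
// allocation's byte length.
fn capacity_from_allocation<T>(allocation_bytes: usize) -> usize {
    allocation_bytes / mem::size_of::<T>()
}

fn main() {
    // Since allocations are sized exactly as requested in practice, the
    // division just reproduces the requested capacity.
    let requested_cap = 8;
    let layout = Layout::array::<u64>(requested_cap).unwrap();
    assert_eq!(capacity_from_allocation::<u64>(layout.size()), requested_cap);
}
```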
r? `@ghost`
disable test with self-referential generator on Miri
Running the libcore test suite in Miri currently fails due to the known incompatibility of self-referential generators with Miri's aliasing checks (https://github.com/rust-lang/unsafe-code-guidelines/issues/148). So let's disable that test in Miri for now.
Use panic() instead of panic!() in some places in core.
See https://github.com/rust-lang/rust/pull/92068 and https://github.com/rust-lang/rust/pull/92140.
This avoids the `panic!()` macro in a few potentially hot paths. This becomes more relevant when switching `core` to Rust 2021, as it'll avoid format_args!() and save some compilation time. (It doesn't make a huge difference, but still.) (Also the errors in const panic become slightly nicer.)
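The same idea at the user level (an analogue only; `core` uses its internal `panicking::panic` helper, which is not a public API): routing fixed-message panics through a small cold function keeps the formatting machinery out of each hot call site.
```rust
// A cold, non-generic helper: call sites only pass a &'static str and jump here.
#[cold]
#[inline(never)]
fn panic_str(msg: &'static str) -> ! {
    // The formatting machinery is confined to this one out-of-line function.
    panic!("{}", msg)
}

fn checked_div(a: u32, b: u32) -> u32 {
    if b == 0 {
        // Hot path stays small: a test and a call to the cold helper.
        panic_str("attempt to divide by zero");
    }
    a / b
}

fn main() {
    assert_eq!(checked_div(10, 2), 5);
}
```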
Quote bat script command line
Fixes #91991
[`CreateProcessW`](https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createprocessw#parameters) should only be used to run exe files but it does have some (undocumented) special handling for files with `.bat` and `.cmd` extensions. Essentially those magic extensions will cause the parameters to be automatically rewritten. Example pseudo Rust code (note that `CreateProcess` starts with an optional application name followed by the application arguments):
```rust
// These arguments...
CreateProcess(None, @"foo.bat "hello world""@, ...);
// ...are rewritten as
CreateProcess(Some(r"C:\Windows\System32\cmd.exe"), @""foo.bat "hello world"""@, ...);
```
However, when setting the first parameter (the application name) as we now do, it will omit the extra level of quotes around the arguments:
```rust
// These arguments...
CreateProcess(Some("foo.bat"), @"foo.bat "hello world""@, ...);
// ...are rewritten as
CreateProcess(Some(r"C:\Windows\System32\cmd.exe"), @"foo.bat "hello world""@, ...);
```
This means the arguments won't be passed to the script as intended.
Note that running batch files this way is undocumented, but people have relied on it, so we probably shouldn't break it.
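A rough sketch of the quoting the fix needs to apply (not the actual std implementation, which also has to handle escaping quotes inside arguments and passing the line to `cmd.exe`): the whole `.bat` command line gets the extra level of quotes back.
```rust
// Hypothetical helper: wrap the script invocation in an outer pair of quotes,
// matching the rewrite CreateProcessW used to do for .bat/.cmd files.
fn quote_bat_command_line(script: &str, args: &[&str]) -> String {
    let mut cmd = String::from("\"");
    cmd.push_str(script);
    for arg in args {
        cmd.push(' ');
        cmd.push('"');
        cmd.push_str(arg);
        cmd.push('"');
    }
    cmd.push('"');
    cmd
}

fn main() {
    // "foo.bat" and the argument are just the example values from above.
    assert_eq!(
        quote_bat_command_line("foo.bat", &["hello world"]),
        r#""foo.bat "hello world"""#
    );
}
```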
Change Backtrace::enabled atomic from SeqCst to Relaxed
This atomic is not synchronizing anything outside of its own value, so we don't need the `Acquire`/`Release` guarantee that all memory operations prior to the store are visible after the subsequent load, nor the `SeqCst` guarantee of all threads seeing all of the sequentially consistent operations in the same order.
Using `Relaxed` reduces the overhead of `Backtrace::capture()` in the case that backtraces are not enabled.
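A sketch of the pattern in question (simplified; std's actual check also inspects the `RUST_BACKTRACE`/`RUST_LIB_BACKTRACE` values in more detail): the atomic is only a cache of its own value, so `Relaxed` loads and stores are enough.
```rust
use std::sync::atomic::{AtomicU8, Ordering};

// 0 = not yet checked, 1 = disabled, 2 = enabled.
static ENABLED: AtomicU8 = AtomicU8::new(0);

fn backtrace_enabled() -> bool {
    match ENABLED.load(Ordering::Relaxed) {
        0 => {
            // First call on some thread: consult the environment, then cache.
            // Racing threads may both do this, but they store the same answer.
            let enabled =
                std::env::var_os("RUST_BACKTRACE").map_or(false, |v| v.as_os_str() != "0");
            ENABLED.store(if enabled { 2 } else { 1 }, Ordering::Relaxed);
            enabled
        }
        1 => false,
        _ => true,
    }
}

fn main() {
    println!("backtraces enabled: {}", backtrace_enabled());
}
```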
## Benchmark
```rust
#![feature(backtrace)]
use std::backtrace::Backtrace;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;
use std::time::Instant;
fn main() {
    let begin = Instant::now();
    let mut threads = Vec::new();
    for _ in 0..64 {
        threads.push(thread::spawn(|| {
            for _ in 0..10_000_000 {
                let _ = Backtrace::capture();
                static LOL: AtomicUsize = AtomicUsize::new(0);
                LOL.store(1, Ordering::Release);
            }
        }));
    }
    for thread in threads {
        let _ = thread.join();
    }
    println!("{:?}", begin.elapsed());
}
```
**Before:** 6.73 seconds
**After:** 5.18 seconds
Allow reverse iteration of lowercase'd/uppercase'd chars
The PR implements `DoubleEndedIterator` trait for `ToLowercase` and `ToUppercase`.
This enables reverse iteration of lowercase/uppercase variants of character sequences.
One of use cases: determining whether a char sequence is a suffix of another one.
Example:
```rust
use itertools::{EitherOrBoth, Itertools}; // external crate, for `zip_longest`

fn endswith_ignore_case(s1: &str, s2: &str) -> bool {
    for eob in s1
        .chars()
        .flat_map(|c| c.to_lowercase())
        .rev()
        .zip_longest(s2.chars().flat_map(|c| c.to_lowercase()).rev())
    {
        match eob {
            EitherOrBoth::Both(c1, c2) => {
                if c1 != c2 {
                    return false;
                }
            }
            EitherOrBoth::Left(_) => return true,
            EitherOrBoth::Right(_) => return false,
        }
    }
    true
}
```
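For instance, with the helper above (and the `itertools` crate providing `zip_longest`), `endswith_ignore_case("Hello WORLD", "world")` returns `true`, while `endswith_ignore_case("Hello WORLD", "planet")` returns `false`.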
The previous implementation used `slice::as_mut_ptr_range` to derive the
pointer for the spare capacity slice. This is invalid, because that
pointer is derived from the initialized region, so it does not have
provenance over the uninitialized region.
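A standalone sketch of the fixed approach (illustrative only, not `Vec`'s actual internals): derive the spare-capacity pointer from `as_mut_ptr`, which covers the whole allocation, rather than from the end of the initialized slice.
```rust
use std::mem::MaybeUninit;

fn spare_capacity<T>(v: &mut Vec<T>) -> &mut [MaybeUninit<T>] {
    let len = v.len();
    let cap = v.capacity();
    // `as_mut_ptr` has provenance over the entire allocation, so offsetting
    // past the initialized region is fine.
    let ptr = v.as_mut_ptr();
    unsafe { std::slice::from_raw_parts_mut(ptr.add(len) as *mut MaybeUninit<T>, cap - len) }
}

fn main() {
    let mut v: Vec<u8> = Vec::with_capacity(8);
    v.push(1);
    let expected = v.capacity() - v.len();
    assert_eq!(spare_capacity(&mut v).len(), expected);
}
```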
Update example code for Vec::splice to change the length
The current example for `Vec::splice` illustrates the replacement of a section of length 2 with a new section of length 2. This isn't a particularly interesting case for splice, and makes it look a bit like a shorthand for the kind of manipulations that could be done with a mutable slice.
In order to provide a stronger example, this updates the example to use different lengths for the source and destination regions, and uses a slice from the middle of the vector to illustrate that this does not necessarily have to be at the beginning or the end.
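Something along these lines (a sketch; the exact example that landed may differ):
```rust
fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    let new = [7, 8, 9, 10];
    // Replace two elements in the middle with four new ones.
    let removed: Vec<_> = v.splice(1..3, new).collect();
    assert_eq!(removed, [2, 3]);
    assert_eq!(v, [1, 7, 8, 9, 10, 4, 5]);
}
```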
Resolves #92067
Revert "Temporarily rename int_roundings functions to avoid conflicts"
This reverts commit 3ece63b64e.
This should be okay because #90329 has been merged.
r? `@joshtriplett`
The `src` pointers in `CopyOnDrop` and `InsertionHole` used to be `*mut T`, and
were derived via automatic conversion from `&mut T`. According to Stacked
Borrows 2.1, this means that those pointers become invalidated by
interior mutation in the comparison function.
But there's no need for mutability in this code path. Thus, we can
change the drop guards to use `*const` and derive those from `&T`.
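A minimal sketch of the drop-guard shape after the change (illustrative only, not the actual sort internals): the source pointer is only ever read, so it can be `*const T` derived from `&T`.
```rust
use std::ptr;

// The guard copies `src` into `dest` when it is dropped.
struct CopyOnDrop<T> {
    src: *const T, // derived from &T; only read from
    dest: *mut T,
}

impl<T> Drop for CopyOnDrop<T> {
    fn drop(&mut self) {
        // SAFETY (assumed for this sketch): `src` is valid for reads, `dest`
        // is valid for writes, and the two do not overlap.
        unsafe { ptr::copy_nonoverlapping(self.src, self.dest, 1) };
    }
}

fn main() {
    let src = 42;
    let mut dest = 0;
    {
        let _guard = CopyOnDrop { src: &src, dest: &mut dest };
        // ... work that must not leave `dest` in a bad state goes here ...
    } // the guard drops here, copying `src` into `dest`
    assert_eq!(dest, 42);
}
```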
Add a space and 2 grave accents
I only noticed this because I have this implementation copy-pasted in some places in my code, and I really can't wait for this to be stabilized...
Update stdlib to the 2021 edition
progress towards https://github.com/rust-lang/rust/issues/88638
I couldn't find a way to run the 2018-style panic tests against the 2018 edition, so I just deleted them; maybe there's a way to do it that I missed, though?
Mark defaulted `PartialEq`/`PartialOrd` methods as const
Without it, `const` impls of these traits are unpleasant to write. I think this kind of change is allowed now, although it looks like it might require some Miri tweaks. Let's find out.
r? `@fee1-dead`
Do array-slice equality via array equality, rather than always via slices
~~Draft because it needs a rebase after #91766 eventually gets through bors.~~
This enables the optimizations from #85828 to be used for array-to-slice comparisons too, not just array-to-array.
For example, <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=5f9ba69b3d5825a782f897c830d3a6aa>
```rust
pub fn demo(x: &[u8], y: [u8; 4]) -> bool {
    *x == y
}
```
Currently this writes the array to the stack for no reason:
```nasm
    sub rsp, 4
    mov dword ptr [rsp], edx
    cmp rsi, 4
    jne .LBB0_1
    mov eax, dword ptr [rdi]
    cmp eax, dword ptr [rsp]
    sete al
    add rsp, 4
    ret
.LBB0_1:
    xor eax, eax
    add rsp, 4
    ret
```
Whereas with the change in this PR it just compares it directly:
```nasm
    cmp rsi, 4
    jne .LBB1_1
    cmp dword ptr [rdi], edx
    sete al
    ret
.LBB1_1:
    xor eax, eax
    ret
```