Stabilize {HashMap,BTreeMap}::into_{keys,values}
I propose stabilizing `{HashMap,BTreeMap}::into_{keys,values}` (aka `map_into_keys_values`).
Closes #75294.
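For reference, a quick usage sketch of the two methods (illustrative only; the map contents here are made up):

```rust
use std::collections::BTreeMap;

fn main() {
    let mut map = BTreeMap::new();
    map.insert("a", 1);
    map.insert("b", 2);

    // Consume the map into owning iterators over just its keys or just
    // its values; a BTreeMap yields them in sorted key order.
    let keys: Vec<_> = map.clone().into_keys().collect();
    let values: Vec<_> = map.into_values().collect();

    assert_eq!(keys, ["a", "b"]);
    assert_eq!(values, [1, 2]);
}
```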
For certain sorts of systems programming, it's deemed essential that
all allocation failures be explicitly handled where they occur. For
example, see Linus Torvalds's opinion in [1]. Merely not calling global
panic handlers, or always calling `try_reserve` first (for vectors), is
not deemed good enough, because the mere presence of the global OOM
handlers burdens static analysis.
One option for these projects to use Rust would be to skip `alloc`
entirely, rolling their own allocation abstractions. But this would, in
my opinion, be a real shame. `alloc` has a few `try_*` methods already,
and we could easily have more. Features like custom allocator support
also demonstrate an existing desire to support diverse use-cases with
the same abstractions.
A natural way to add such a feature flag would be a Cargo feature, but
there are currently uncertainties around how the std library crates'
Cargo features may or may not be stabilized, so to avoid any risk of
stabilizing by mistake we are going with a more low-level "raw cfg"
token, which cannot be interacted with via Cargo alone.
Note also that since there is no notion of "default cfg tokens" outside
of Cargo features, we have to invert the condition from
`global_oom_handling` to `not(no_global_oom_handling)`. This breaks the
monotonicity that would be important for a Cargo feature (i.e. turning
on more features should never break compatibility), but it doesn't
matter for raw cfg tokens, which are not intended to be "constraint
solved" by Cargo or anything else.
To support this use-case we add the new cfg token
`no_global_oom_handling`, off by default, and put the global OOM
handler infra and everything that depends on it behind
`not(no_global_oom_handling)`. By default, nothing is changed, but
users concerned about global handling can set the token and be
confident that all OOM handling is local and explicit.
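A minimal sketch of what the gating looks like, with a hypothetical function standing in for the many `alloc` items involved:

```rust
// Hypothetical example: anything that can invoke the global OOM handler
// is simply compiled out when the raw cfg token is set.
#[cfg(not(no_global_oom_handling))]
pub fn append<T>(v: &mut Vec<T>, value: T) {
    v.push(value) // `push` may abort the process on allocation failure
}
```

Building with `RUSTFLAGS="--cfg no_global_oom_handling"` then removes every such item, so an accidental dependence on the global handler becomes a compile error rather than a hidden abort path.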
For this first iteration, non-flat collections are outright disabled.
`Vec` and `String` don't yet have `try_*` allocation methods, but they
are kept anyway since they can be OOM-safely created "from parts", and
we hope to add those `try_*` methods in the future.
[1]: https://lore.kernel.org/lkml/CAHk-=wh_sNLoz84AUUzuqXEsYH35u=8HV3vK-jbRbJ_B-JjGrg@mail.gmail.com/
Replace 'NULL' with 'null'
This replaces occurrences of "NULL" with "null" in docs, comments, and compiler error/lint messages. This is for the sake of consistency, as the lowercase "null" is already the dominant form in Rust. The all-caps NULL looks like the C macro (or SQL keyword), which seems out of place in a Rust context, given that NULL does not exist in the Rust language or standard library (which instead has [`ptr::null()`](https://doc.rust-lang.org/stable/std/ptr/fn.null.html)).
i8 and u8::to_string() specialisation (far less asm).
Take 2. Around 1/6th of the assembly compared to the version without specialisation.
https://godbolt.org/z/bzz8Mq
(Partially fixes #73533.)
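For context, a minimal sketch of the idea behind the specialisation (a hypothetical standalone function, not the actual library code): a `u8` has at most three decimal digits, so it can be written into a tiny fixed buffer instead of going through the generic `Display` machinery.

```rust
fn u8_to_string(mut n: u8) -> String {
    // Three bytes suffice: the largest u8 value is "255".
    let mut buf = [0u8; 3];
    let mut i = buf.len();
    loop {
        i -= 1;
        buf[i] = b'0' + n % 10;
        n /= 10;
        if n == 0 {
            break;
        }
    }
    // Only ASCII digits were written, so this is valid UTF-8.
    std::str::from_utf8(&buf[i..]).unwrap().to_owned()
}

fn main() {
    assert_eq!(u8_to_string(0), "0");
    assert_eq!(u8_to_string(255), "255");
}
```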
Remove slice diagnostic item
...because it is unusually placed on an impl and is redundant with a lang item.
Depends on rust-lang/rust-clippy#7074 (next clippy sync). ~~I expected clippy tests to fail in the meantime.~~ Nope, tests passed...
CC `@flip1995`
further split up const_fn feature flag
This continues the work on splitting up `const_fn` into separate feature flags:
* `const_fn_trait_bound` for `const fn` with trait bounds
* `const_fn_unsize` for unsizing coercions in `const fn` (looks like only `dyn` unsizing is still guarded here)
I don't know if there's even anything left that `const_fn` guards... at least libcore and liballoc do not need it anymore.
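For illustration, a sketch of the two kinds of `const fn` the new gates cover, as written on a nightly of that era (both capabilities have since been stabilized, so the gates are no longer needed):

```rust
#![feature(const_fn_trait_bound, const_fn_unsize)]

// Gated by `const_fn_trait_bound`: a trait bound on a `const fn`.
const fn dup<T: Copy>(x: T) -> (T, T) {
    (x, x)
}

// Gated by `const_fn_unsize`: a `dyn` unsizing coercion in a `const fn`.
const fn as_debug(x: &i32) -> &dyn std::fmt::Debug {
    x
}

fn main() {
    assert_eq!(dup(1), (1, 1));
    let _ = as_debug(&1);
}
```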
`@oli-obk` are you currently able to do reviews?
Remove duplicated fn(Box<[T]>) -> Vec<T>
`<[T]>::into_vec()` does the same thing as `Vec::from::<Box<[T]>>()`, so one can be implemented in terms of the other. This was the previous implementation of `Vec::from()`, but it was changed in #78461. I'm not sure what the rationale for that change was, but it seems preferable to maintain a single implementation.
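To illustrate the equivalence from the caller's side (a plain usage example, not the library internals):

```rust
fn main() {
    let boxed: Box<[i32]> = vec![1, 2, 3].into_boxed_slice();

    // These two conversions do the same thing; after this change they
    // share a single implementation.
    let a: Vec<i32> = boxed.clone().into_vec();
    let b: Vec<i32> = Vec::from(boxed);

    assert_eq!(a, b);
}
```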
Replace all `fmt.pad` with `debug_struct`
This replaces any occurrence of:
- `f.pad("X")` with `f.debug_struct("X").finish()`
- `f.pad("X { .. }")` with `f.debug_struct("X").finish_non_exhaustive()`
This is in line with existing formatting code, such as `library/std/src/sync/mpsc/mod.rs` (L1470-L1475).
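A before/after sketch with a hypothetical type `X`:

```rust
use std::fmt;

struct X;

impl fmt::Debug for X {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Before: f.pad("X { .. }")
        // After: the debug builder produces the same text while staying
        // consistent with derived `Debug` output elsewhere.
        f.debug_struct("X").finish_non_exhaustive()
    }
}

fn main() {
    assert_eq!(format!("{:?}", X), "X { .. }");
}
```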
Improve code example for length comparison
Small fix/improvement: it's much safer to check that the index is under the length of the array rather than checking that it's equal to it. This matters even more if the length of the array can change while iterating.
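A small sketch of the point (hypothetical code, not the exact example from the docs):

```rust
fn main() {
    let arr = [1, 2, 3];
    let mut i = 0;
    // Safer: the loop stops as soon as `i` is no longer a valid index.
    // A `while i != arr.len()` condition would keep running (and panic
    // on the out-of-bounds access) if `i` ever skipped past the length.
    while i < arr.len() {
        println!("{}", arr[i]);
        i += 1;
    }
}
```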
Add strong_count mutation methods to Rc
The corresponding methods were stabilized on `Arc` in #79285 (tracking: #71983). This patch implements and stabilizes identical methods on the `Rc` types as well.
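A usage sketch in the same style as the corresponding `Arc` examples:

```rust
use std::rc::Rc;

fn main() {
    let five = Rc::new(5);
    let ptr = Rc::into_raw(five);

    unsafe {
        // Bump the strong count through a raw pointer, without first
        // reconstructing the Rc.
        Rc::increment_strong_count(ptr);

        // Reconstruct one Rc; the count is 2, so the value stays alive.
        let five = Rc::from_raw(ptr);
        assert_eq!(Rc::strong_count(&five), 2);

        // Balance the earlier increment to avoid leaking.
        Rc::decrement_strong_count(ptr);
        assert_eq!(Rc::strong_count(&five), 1);
    }
}
```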
Bump bootstrap to 1.52 beta
This includes the standard bump, but also a workaround for new cargo behavior around clearing out the doc directory when the rustdoc version changes.
BTree: move blocks around in node.rs
Without changing any names or implementation, reorder some members:
- Move down the ones defined long ago on the demised `struct Root`, to below the definition of their current host `struct NodeRef`.
- Move up some defined on `struct NodeRef` that are interspersed with those defined on `struct Handle`.
- Move up the `correct_…` methods squeezed between the two flavours of `push`.
- Move the unchecked static downcasts (`cast_to_…`) after the upcasts (`forget_`) and the (weirdly named) dynamic downcasts (`force`).
r? `@Mark-Simulacrum`
BTree: no longer search arrays twice to check Ord
A possible addition to / partial replacement of #83147: stop linearly searching for the upper bound of a range within the initial portion of the keys that we already know to be below the lower bound.
- Should be faster: fewer key comparisons, at the cost of some instructions dealing with offsets.
- Makes the code a little more complicated.
- No longer detects ill-defined `Ord` implementations, but that wasn't a publicised feature; it was quite incomplete anyway, and was only done in the `range` and `range_mut` methods.
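A rough sketch of the idea on a plain sorted slice (hypothetical helper, not the actual node code): the search for the upper bound resumes at the offset found for the lower bound instead of rescanning the prefix.

```rust
fn range_bounds(keys: &[i32], low: i32, high: i32) -> (usize, usize) {
    // Find the first key >= low.
    let start = keys.iter().position(|&k| k >= low).unwrap_or(keys.len());
    // Resume from `start`: everything before it is already known to be
    // below `low`, and therefore below `high` as well.
    let rest = &keys[start..];
    let end = start + rest.iter().position(|&k| k >= high).unwrap_or(rest.len());
    (start, end)
}

fn main() {
    // Keys 3, 5 and 7 fall within the range 3..8.
    assert_eq!(range_bounds(&[1, 3, 5, 7, 9], 3, 8), (1, 4));
}
```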
r? `@Mark-Simulacrum`
Fix double-drop in `Vec::from_iter(vec.into_iter())` specialization when items drop during panic
This fixes the double-drop, but it leaves a behavioral difference compared to the default implementation intact: in the default implementation the source and the destination vec are separate objects, so they get dropped separately. Here they share an allocation, and the latter only exists as a pointer into the former. So if dropping the former panics, this fix will leak more items than the default implementation would. Is this acceptable, or should the specialization also mimic the default implementation's drops-during-panic behavior?
Fixes #83618
`@rustbot` label T-libs-impl
Clean up Vec's benchmarks
The Vec benchmarks need a lot of love. I sort of noticed this in https://github.com/rust-lang/rust/pull/83357 but the overall situation is much less awesome than I thought at the time. The first commit just removes a lot of asserts and does a touch of other cleanup.
A number of these benchmarks are poorly named. For example, `bench_map_fast` is not in fact fast, `bench_rev_1` and `bench_rev_2` are vague, `bench_in_place_zip_iter_mut` doesn't call `zip`, and the `bench_in_place*` benchmarks don't do anything in place... Should I fix these, or is there tooling that depends on the names not changing?
I've also noticed that `bench_rev_1` and `bench_rev_2` are remarkably fragile. It looks like poking other code in `Vec` can cause the codegen of these benchmarks to switch to a version that has almost exactly half the current throughput, and I have absolutely no idea why.
Here's the fast version:
```asm
0.69 │110: movdqu -0x20(%rbx,%rdx,4),%xmm0
1.76 │ movdqu -0x10(%rbx,%rdx,4),%xmm1
0.71 │ pshufd $0x1b,%xmm1,%xmm1
0.60 │ pshufd $0x1b,%xmm0,%xmm0
3.68 │ movdqu %xmm1,-0x30(%rcx)
14.36 │ movdqu %xmm0,-0x20(%rcx)
13.88 │ movdqu -0x40(%rbx,%rdx,4),%xmm0
6.64 │ movdqu -0x30(%rbx,%rdx,4),%xmm1
0.76 │ pshufd $0x1b,%xmm1,%xmm1
0.77 │ pshufd $0x1b,%xmm0,%xmm0
1.87 │ movdqu %xmm1,-0x10(%rcx)
13.01 │ movdqu %xmm0,(%rcx)
38.81 │ add $0x40,%rcx
0.92 │ add $0xfffffffffffffff0,%rdx
1.22 │ ↑ jne 110
```
And the slow one:
```asm
0.42 │9a880: movdqa %xmm2,%xmm1
4.03 │9a884: movq -0x8(%rbx,%rsi,4),%xmm4
8.49 │9a88a: pshufd $0xe1,%xmm4,%xmm4
2.58 │9a88f: movq -0x10(%rbx,%rsi,4),%xmm5
7.02 │9a895: pshufd $0xe1,%xmm5,%xmm5
4.79 │9a89a: punpcklqdq %xmm5,%xmm4
5.77 │9a89e: movdqu %xmm4,-0x18(%rdx)
15.74 │9a8a3: movq -0x18(%rbx,%rsi,4),%xmm4
3.91 │9a8a9: pshufd $0xe1,%xmm4,%xmm4
5.04 │9a8ae: movq -0x20(%rbx,%rsi,4),%xmm5
5.29 │9a8b4: pshufd $0xe1,%xmm5,%xmm5
4.60 │9a8b9: punpcklqdq %xmm5,%xmm4
9.81 │9a8bd: movdqu %xmm4,-0x8(%rdx)
11.05 │9a8c2: paddq %xmm3,%xmm0
0.86 │9a8c6: paddq %xmm3,%xmm2
5.89 │9a8ca: add $0x20,%rdx
0.12 │9a8ce: add $0xfffffffffffffff8,%rsi
1.16 │9a8d2: add $0x2,%rdi
2.96 │9a8d6: → jne 9a880 <<alloc::vec::Vec<T,A> as core::iter::traits::collect::Extend<&T>>::extend+0xd0>
```
alloc: Added `as_slice` method to `BinaryHeap` collection
I initially asked whether this is a useful addition on https://internals.rust-lang.org/t/should-i-add-as-slice-method-to-binaryheap/13816, and since there were no objections, I went ahead with this PR.
> There is [`BinaryHeap::into_vec`](https://doc.rust-lang.org/std/collections/struct.BinaryHeap.html#method.into_vec), but it consumes the value. I wonder if there is an API design limitation that should be taken into account. Implementation-wise, the inner buffer is just a `Vec`, so it is trivial to expose `as_slice` from it.
Please guide me if I need to add tests or anything else.
UPD: Tracking issue #83659
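A quick usage sketch (nightly-only under the `binary_heap_as_slice` feature when this PR landed):

```rust
#![feature(binary_heap_as_slice)]
use std::collections::BinaryHeap;

fn main() {
    let heap = BinaryHeap::from(vec![1, 5, 3]);
    // Borrow the underlying buffer without consuming the heap. The slice
    // is in the heap's internal order; only the first element (the
    // maximum) has a predictable position.
    let slice: &[i32] = heap.as_slice();
    assert_eq!(slice[0], 5);
    assert_eq!(slice.len(), 3);
}
```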