Commit Graph

1005 Commits

Author SHA1 Message Date
Dylan DPC
6a6c644016
Rollup merge of #84328 - Folyd:stablize_map_into_keys_values, r=m-ou-se
Stabilize {HashMap,BTreeMap}::into_{keys,values}

I would propose to stabilize `{HashMap,BTreeMap}::into_{keys,values}` (a.k.a. `map_into_keys_values`).

Closes #75294.
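
A usage sketch (not part of the PR) of the stabilized methods: consuming a map to keep only its keys or only its values.

```rust
use std::collections::BTreeMap;

fn main() {
    let map = BTreeMap::from([(1, "one"), (2, "two")]);

    // `into_keys` and `into_values` consume the map and yield owning iterators.
    let keys: Vec<i32> = map.clone().into_keys().collect();
    let values: Vec<&str> = map.into_values().collect();

    assert_eq!(keys, vec![1, 2]);
    assert_eq!(values, vec!["one", "two"]);
}
```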
2021-05-06 13:30:54 +02:00
John Ericson
19be438cda alloc: Add unstable Cfg feature no-global_oom_handling
For certain sorts of systems programming, it's deemed essential that
all allocation failures be explicitly handled where they occur. For
example, see Linus Torvalds's opinion in [1]. Merely not calling the global
panic handlers, or always calling `try_reserve` first (for vectors), is not
deemed good enough, because the mere presence of the global OOM handlers
burdens static analysis.

One option for these projects to use Rust would just be to skip `alloc`,
rolling their own allocation abstractions. But this would, in my
opinion, be a real shame. `alloc` has a few `try_*` methods already, and
we could easily have more. Features like custom allocator support also
demonstrate an existing intent to support diverse use-cases with the same
abstractions.

A natural way to add such a feature flag would be a Cargo feature, but
there are currently uncertainties around how the standard library crates'
Cargo features may or may not be stable, so to avoid any risk of
stabilizing by mistake we are going with a lower-level "raw cfg" token,
which cannot be interacted with via Cargo alone.

Note also that since there is no notion of "default cfg tokens" outside
of Cargo features, we have to invert the condition from
`global_oom_handling` to `not(no_global_oom_handling)`. This breaks
the monotonicity that would be important for a Cargo feature (i.e.
turning on more features should never break compatibility), but it
doesn't matter for raw cfg tokens which are not intended to be
"constraint solved" by Cargo or anything else.

To support this use-case we create a new feature, "global-oom-handling",
on by default, and put the global OOM handler infra and everything else
that depends on it behind it. By default, nothing is changed, but
users concerned about global handling can make sure it is disabled, and
be confident that all OOM handling is local and explicit.

For this first iteration, non-flat collections are outright disabled.
`Vec` and `String` don't yet have `try_*` allocation methods, but are
kept anyway since they can be OOM-safely created "from parts", and we
hope to add those `try_` methods in the future.

[1]: https://lore.kernel.org/lkml/CAHk-=wh_sNLoz84AUUzuqXEsYH35u=8HV3vK-jbRbJ_B-JjGrg@mail.gmail.com/
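
A minimal sketch (hypothetical helper functions, not the actual `alloc` code; `Vec::try_reserve` was still unstable at the time) of the gating described above: paths that can invoke the global OOM handler are compiled out under the `no_global_oom_handling` cfg, leaving only explicit fallible paths.

```rust
use std::collections::TryReserveError;

// Hypothetical helper: may invoke the global OOM handler on allocation
// failure, so it is compiled out when `--cfg no_global_oom_handling` is set.
#[cfg(not(no_global_oom_handling))]
pub fn push_byte(buf: &mut Vec<u8>, b: u8) {
    buf.push(b);
}

// Hypothetical fallible counterpart: allocation failure is reported to the
// caller instead of aborting, so it stays available under the cfg.
pub fn try_push_byte(buf: &mut Vec<u8>, b: u8) -> Result<(), TryReserveError> {
    buf.try_reserve(1)?;
    buf.push(b);
    Ok(())
}

fn main() {
    let mut buf = Vec::new();
    try_push_byte(&mut buf, b'x').unwrap();
    assert_eq!(buf, vec![b'x']);
}
```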
2021-05-05 16:49:04 -04:00
Mara Bos
b6f3dbb65d Bump map_into_keys_values stable version to 1.54.0. 2021-05-05 16:40:06 +02:00
LingMan
eb9f168e1e
Fix stability attributes of byte-to-string specialization 2021-05-03 13:00:34 +02:00
bors
2428cc4816 Auto merge of #84842 - blkerby:null_lowercase, r=joshtriplett
Replace 'NULL' with 'null'

This replaces occurrences of "NULL" with "null" in docs, comments, and compiler error/lint messages. This is for the sake of consistency, as the lowercase "null" is already the dominant form in Rust. The all-caps NULL looks like the C macro (or SQL keyword), which seems out of place in a Rust context, given that NULL does not exist in the Rust language or standard library (instead having [`ptr::null()`](https://doc.rust-lang.org/stable/std/ptr/fn.null.html)).
2021-05-03 05:41:23 +00:00
Brent Kerby
6679f5ceb1 Change 'NULL' to 'null' 2021-05-02 17:46:00 -06:00
bors
8a8ed07883 Auto merge of #82576 - gilescope:to_string, r=Amanieu
i8 and u8::to_string() specialisation (far less asm).

Take 2. Around 1/6th of the assembly compared to without specialisation.

https://godbolt.org/z/bzz8Mq

(partially fixes #73533 )
2021-05-02 22:01:57 +00:00
Ben-Lichtman
3e016a7682 Minor grammar tweaks for readability 2021-04-28 19:43:33 -07:00
Amanieu d'Antras
22951b7f56 Stabilize vec_extend_from_within 2021-04-28 07:27:06 +01:00
bors
ae54ee6507 Auto merge of #84174 - camsteffen:slice-diag, r=Mark-Simulacrum
Remove slice diagnostic item

...because it is unusually placed on an impl and is redundant with a lang item.

Depends on rust-lang/rust-clippy#7074 (next clippy sync). ~I expect clippy tests to fail in the meantime.~ Nope tests passed...

CC `@flip1995`
2021-04-26 17:16:03 +00:00
Ralf Jung
43126f3573 get rid of min_const_fn references in library/ and rustdoc 2021-04-25 14:14:19 +02:00
bors
b56b175c6c Auto merge of #84310 - RalfJung:const-fn-feature-flags, r=oli-obk
further split up const_fn feature flag

This continues the work on splitting up `const_fn` into separate feature flags:
* `const_fn_trait_bound` for `const fn` with trait bounds
* `const_fn_unsize` for unsizing coercions in `const fn` (looks like only `dyn` unsizing is still guarded here)

I don't know if there are even any things left that `const_fn` guards... at least libcore and liballoc do not need it any more.

`@oli-obk` are you currently able to do reviews?
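
A toy sketch (not from the PR) of what the first split flag gates:

```rust
// On a 2021-era nightly, this item needed `#![feature(const_fn_trait_bound)]`
// (previously the blanket `#![feature(const_fn)]`); the bound `T: Copy` is
// what triggers the gate.
const fn first<T: Copy>(pair: (T, T)) -> T {
    pair.0
}

fn main() {
    const X: u8 = first((7, 9));
    assert_eq!(X, 7);
}
```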
2021-04-24 23:16:03 +00:00
Yuki Okushi
ed5646bfee
Rollup merge of #84453 - notriddle:waker-from-docs, r=cramertj
Document From implementations for Waker and RawWaker

CC #51430
2021-04-24 12:17:06 +09:00
Yuki Okushi
5b7c98676f
Rollup merge of #84248 - calebsander:refactor/vec-functions, r=Amanieu
Remove duplicated fn(Box<[T]>) -> Vec<T>

`<[T]>::into_vec()` does the same thing as `Vec::from::<Box<[T]>>()`, so they can be implemented in terms of each other. This was the previous implementation of `Vec::from()`, but was changed in #78461. I'm not sure what the rationale was for that change, but it seems preferable to maintain a single implementation.
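
A small sketch (not part of the PR) of the equivalence being deduplicated: both APIs turn a boxed slice into a `Vec` with the same result.

```rust
fn main() {
    let boxed: Box<[i32]> = vec![1, 2, 3].into_boxed_slice();

    // The two conversions the PR implements in terms of each other.
    let via_into_vec: Vec<i32> = boxed.clone().into_vec();
    let via_from: Vec<i32> = Vec::from(boxed);

    assert_eq!(via_into_vec, via_from);
}
```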
2021-04-24 03:44:04 +09:00
Michael Howell
60ff298070 Document From implementations for Waker and RawWaker 2021-04-22 14:16:33 -07:00
Mara Bos
f5d72ab69b Add better test for BinaryHeap::retain. 2021-04-22 14:24:30 +02:00
Mara Bos
62226eecb6 Improve BinaryHeap::retain.
It now rebuilds only the parts of the heap that are necessary,
instead of the whole heap.
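
A usage sketch of `BinaryHeap::retain` (still unstable behind `binary_heap_retain` at the time of this commit):

```rust
use std::collections::BinaryHeap;

fn main() {
    let mut heap: BinaryHeap<i32> = (1..=6).collect();

    // Keep only the even elements; the heap invariant is preserved.
    heap.retain(|&x| x % 2 == 0);

    assert_eq!(heap.into_sorted_vec(), vec![2, 4, 6]);
}
```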
2021-04-22 14:24:30 +02:00
Caleb Sander
f505d619c4 Remove duplicated fn(Box<[T]>) -> Vec<T> 2021-04-21 23:32:10 -04:00
Mara Bos
a7a7737114
Rollup merge of #84013 - CDirkx:fmt, r=m-ou-se
Replace all `fmt.pad` with `debug_struct`

This replaces any occurrence of:
- `f.pad("X")` with `f.debug_struct("X").finish()`
- `f.pad("X { .. }")` with `f.debug_struct("X").finish_non_exhaustive()`

This is in line with existing formatting code such as
1255053067/library/std/src/sync/mpsc/mod.rs (L1470-L1475)
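
A sketch of the replacement pattern on a stand-in type (`Sender` here only echoes the mpsc code linked above; `finish_non_exhaustive` was still unstable at the time):

```rust
use std::fmt;

struct Sender;

impl fmt::Debug for Sender {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Previously written as `f.pad("Sender { .. }")`.
        f.debug_struct("Sender").finish_non_exhaustive()
    }
}

fn main() {
    assert_eq!(format!("{:?}", Sender), "Sender { .. }");
}
```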
2021-04-21 23:06:11 +02:00
Christiaan Dirkx
fccc75cf82 Fix alloc::test::test_show 2021-04-21 15:45:41 +02:00
Folyd
33cc3f5116 Stabilize {HashMap,BTreeMap}::into_{keys,values} 2021-04-19 14:23:35 +08:00
Ralf Jung
fbfaab2cb7 separate feature flag for unsizing casts in const fn 2021-04-18 19:11:29 +02:00
Ralf Jung
fdad6ab3a3 move 'trait bounds on const fn' to separate feature gate 2021-04-18 18:36:41 +02:00
Waffle Lapkin
3ecaf57b29
Slightly change wording and fix typo in vec/mod.rs 2021-04-18 12:32:10 +03:00
Dylan DPC
a5ec5cf72a
Rollup merge of #84145 - vojtechkral:vecdeque-binary-search, r=m-ou-se
Address comments for vecdeque_binary_search #78021
2021-04-16 14:08:32 +02:00
bors
5e7bebad1d Auto merge of #84220 - gpluscb:weak_doc, r=jyn514
Correct outdated documentation for rc::Weak

This was overlooked in ~~#50357~~ #51901
2021-04-16 02:31:15 +00:00
Vojtech Kral
44be1c2aa0 VecDeque: Improve doc comments in binary search fns
Co-authored-by: Mara Bos <m-ou.se@m-ou.se>
2021-04-15 23:23:46 +02:00
Vojtech Kral
e68680d30d VecDeque: Add partition_point() #78021 2021-04-15 23:23:23 +02:00
Vojtech Kral
bccbf9db1c VecDeque: binary_search_by(): return right away if hit found at back.first() #78021 2021-04-15 23:23:22 +02:00
MarRue
288bd49528 Correct outdated rc::Weak::default documentation 2021-04-15 14:54:39 +02:00
Ivan Tham
eeac70c567
Merge same condition branch in vec spec_extend 2021-04-15 11:58:02 +08:00
Cameron Steffen
b319031808 Remove slice diagnostic item 2021-04-13 15:41:13 -05:00
bors
5c1304205b Auto merge of #84135 - rust-lang:GuillaumeGomez-patch-1, r=kennytm
Improve code example for length comparison

Small fix/improvement: it's much safer to check that the index is below the length of the array rather than checking that it's equal to it. This matters even more if you update the length of the array while iterating.
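
A minimal sketch (not the exact example from the docs) of why `<` is more robust than `==` when the length can change mid-loop:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let mut i = 0;

    // With an equality test as the stop condition, removing elements could
    // let `i` skip past the length entirely; `i < v.len()` stays correct as
    // the length shrinks.
    while i < v.len() {
        if v[i] % 2 == 0 {
            v.remove(i);
        } else {
            i += 1;
        }
    }

    assert_eq!(v, vec![1, 3]);
}
```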
2021-04-13 14:03:49 +00:00
Guillaume Gomez
b89c464bed
Improve code example for length comparison 2021-04-12 19:59:52 +02:00
Jubilee Young
7baeaa95e2 Stabilize BTree{Map,Set}::retain 2021-04-12 00:01:31 -07:00
Ralf Jung
63b682b3ec fix incorrect from_raw_in doctest 2021-04-10 12:24:19 +02:00
Dylan DPC
505846ec07
Rollup merge of #83476 - mystor:rc_mutate_strong_count, r=m-ou-se
Add strong_count mutation methods to Rc

The corresponding methods were stabilized on `Arc` in #79285 (tracking: #71983). This patch implements and stabilizes identical methods on the `Rc` types as well.
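
A usage sketch mirroring the existing `Arc` documentation for these methods (the counts shown assume no other clones exist):

```rust
use std::rc::Rc;

fn main() {
    let five = Rc::new(5);

    unsafe {
        let ptr = Rc::into_raw(five);
        Rc::increment_strong_count(ptr); // count is now 2

        let five = Rc::from_raw(ptr);
        assert_eq!(2, Rc::strong_count(&five));

        Rc::decrement_strong_count(ptr); // count back to 1
        assert_eq!(1, Rc::strong_count(&five));
    }
}
```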
2021-04-07 13:07:06 +02:00
bors
35aa636159 Auto merge of #83530 - Mark-Simulacrum:bootstrap-bump, r=Mark-Simulacrum
Bump bootstrap to 1.52 beta

This includes the standard bump, but also a workaround for new cargo behavior around clearing out the doc directory when the rustdoc version changes.
2021-04-04 22:45:56 +00:00
Mark Rousskov
b3a4f91b8d Bump cfgs 2021-04-04 14:57:05 -04:00
Dylan DPC
b943ea8cdc
Rollup merge of #83827 - the8472:fix-inplace-panic-on-drop, r=RalfJung
cleanup leak after test to make miri happy

Contains changes that were requested in #83629 but didn't make it into the rollup.

r? `````@RalfJung`````
2021-04-04 19:20:06 +02:00
Dylan DPC
6c13556183
Rollup merge of #82726 - ssomers:btree_node_rearange, r=Mark-Simulacrum
BTree: move blocks around in node.rs

Without changing any names or implementation, reorder some members:
- Move down the ones defined long ago on the demised `struct Root`, to below the definition of their current host `struct NodeRef`.
- Move up some defined on `struct NodeRef` that are interspersed with those defined on `struct Handle`.
- Move up the `correct_…` methods squeezed between the two flavours of `push`.
- Move the unchecked static downcasts (`cast_to_…`) after the upcasts (`forget_`) and the (weirdly named) dynamic downcasts (`force`).
r? ````@Mark-Simulacrum````
2021-04-04 19:20:00 +02:00
Dylan DPC
869726d335
Rollup merge of #81619 - SkiFire13:resultshunt-inplace, r=the8472
Implement `SourceIter` and `InPlaceIterable` for `ResultShunt`
2021-04-04 19:19:59 +02:00
bors
88e7862dd0 Auto merge of #83267 - ssomers:btree_prune_range_search_overlap, r=Mark-Simulacrum
BTree: no longer search arrays twice to check Ord

A possible addition to / partial replacement of #83147: no longer linearly search the upper bound of a range in the initial portion of the keys we already know are below the lower bound.
- Should be faster: fewer key comparisons at the cost of some instructions dealing with offsets
- Makes code a little more complicated.
- No longer detects ill-defined `Ord` implementations, but that wasn't a publicised feature, and was quite incomplete, and was only done in the `range` and `range_mut` methods.
r? `@Mark-Simulacrum`
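
A toy sketch (on a plain slice, not the actual `node.rs` search code) of the idea: once the lower bound's position is known, the upper-bound search starts there instead of rescanning the prefix.

```rust
fn range_positions(keys: &[i32], lo: i32, hi: i32) -> (usize, usize) {
    let start = keys.partition_point(|&k| k < lo);
    // Only search the tail that is already known to be >= lo.
    let end = start + keys[start..].partition_point(|&k| k < hi);
    (start, end)
}

fn main() {
    let keys = [1, 3, 5, 7, 9, 11];
    assert_eq!(range_positions(&keys, 4, 10), (2, 5));
}
```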
2021-04-04 05:52:43 +00:00
the8472
572873fce0
suggestion from review
Co-authored-by: Ralf Jung <post@ralfj.de>
2021-04-04 01:38:58 +02:00
The8472
3bd241f95b cleanup leak after test to make miri happy 2021-04-04 01:37:05 +02:00
Dylan DPC
542f441d44
Rollup merge of #83629 - the8472:fix-inplace-panic-on-drop, r=m-ou-se
Fix double-drop in `Vec::from_iter(vec.into_iter())` specialization when items drop during panic

This fixes the double-drop but it leaves a behavioral difference compared to the default implementation intact: In the default implementation the source and the destination vec are separate objects, so they get dropped separately. Here they share an allocation and the latter only exists as a pointer into the former. So if dropping the former panics then this fix will leak more items than the default implementation would. Is this acceptable or should the specialization also mimic the default implementation's drops-during-panic behavior?

Fixes #83618

`@rustbot` label T-libs-impl
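
For context, a sketch of the pattern the specialization covers (the fix itself concerns drop behavior during a panic, which this does not exercise): collecting a vector's own `IntoIter` back into a `Vec` reuses the original allocation.

```rust
fn main() {
    let v: Vec<String> = vec!["a".to_string(), "b".to_string()];

    // This round trip goes through the `Vec::from_iter(vec.into_iter())`
    // specialization discussed above.
    let w: Vec<String> = v.into_iter().map(|s| s + "!").collect();

    assert_eq!(w, vec!["a!".to_string(), "b!".to_string()]);
}
```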
2021-04-02 19:57:31 +02:00
The8472
ad3a791e2a panic early when TrustedLen indicates a length > usize::MAX 2021-03-31 23:09:28 +02:00
bors
689e8470ff Auto merge of #83458 - saethlin:improve-vec-benches, r=dtolnay
Clean up Vec's benchmarks

The Vec benchmarks need a lot of love. I sort of noticed this in https://github.com/rust-lang/rust/pull/83357 but the overall situation is much less awesome than I thought at the time. The first commit just removes a lot of asserts and does a touch of other cleanup.

A number of these benchmarks are poorly named. For example, `bench_map_fast` is not in fact fast, `bench_rev_1` and `bench_rev_2` are vague, `bench_in_place_zip_iter_mut` doesn't call `zip`, `bench_in_place*` don't do anything in-place... Should I fix these, or is there tooling that depends on the names not changing?

I've also noticed that `bench_rev_1` and `bench_rev_2` are remarkably fragile. It looks like poking other code in `Vec` can cause the codegen of this benchmark to switch to a version that has almost exactly half its current throughput and I have absolutely no idea why.

Here's the fast version:
```asm
  0.69 │110:   movdqu -0x20(%rbx,%rdx,4),%xmm0
  1.76 │       movdqu -0x10(%rbx,%rdx,4),%xmm1
  0.71 │       pshufd $0x1b,%xmm1,%xmm1
  0.60 │       pshufd $0x1b,%xmm0,%xmm0
  3.68 │       movdqu %xmm1,-0x30(%rcx)
 14.36 │       movdqu %xmm0,-0x20(%rcx)
 13.88 │       movdqu -0x40(%rbx,%rdx,4),%xmm0
  6.64 │       movdqu -0x30(%rbx,%rdx,4),%xmm1
  0.76 │       pshufd $0x1b,%xmm1,%xmm1
  0.77 │       pshufd $0x1b,%xmm0,%xmm0
  1.87 │       movdqu %xmm1,-0x10(%rcx)
 13.01 │       movdqu %xmm0,(%rcx)
 38.81 │       add    $0x40,%rcx
  0.92 │       add    $0xfffffffffffffff0,%rdx
  1.22 │     ↑ jne    110
```
And the slow one:
```asm
  0.42 │9a880:   movdqa     %xmm2,%xmm1
  4.03 │9a884:   movq       -0x8(%rbx,%rsi,4),%xmm4
  8.49 │9a88a:   pshufd     $0xe1,%xmm4,%xmm4
  2.58 │9a88f:   movq       -0x10(%rbx,%rsi,4),%xmm5
  7.02 │9a895:   pshufd     $0xe1,%xmm5,%xmm5
  4.79 │9a89a:   punpcklqdq %xmm5,%xmm4
  5.77 │9a89e:   movdqu     %xmm4,-0x18(%rdx)
 15.74 │9a8a3:   movq       -0x18(%rbx,%rsi,4),%xmm4
  3.91 │9a8a9:   pshufd     $0xe1,%xmm4,%xmm4
  5.04 │9a8ae:   movq       -0x20(%rbx,%rsi,4),%xmm5
  5.29 │9a8b4:   pshufd     $0xe1,%xmm5,%xmm5
  4.60 │9a8b9:   punpcklqdq %xmm5,%xmm4
  9.81 │9a8bd:   movdqu     %xmm4,-0x8(%rdx)
 11.05 │9a8c2:   paddq      %xmm3,%xmm0
  0.86 │9a8c6:   paddq      %xmm3,%xmm2
  5.89 │9a8ca:   add        $0x20,%rdx
  0.12 │9a8ce:   add        $0xfffffffffffffff8,%rsi
  1.16 │9a8d2:   add        $0x2,%rdi
  2.96 │9a8d6: → jne        9a880 <<alloc::vec::Vec<T,A> as core::iter::traits::collect::Extend<&T>>::extend+0xd0>
```
2021-03-30 09:03:29 +00:00
bors
32d3276561 Auto merge of #83357 - saethlin:vec-reserve-inlining, r=dtolnay
Reduce the impact of Vec::reserve calls that do not cause any allocation

I think a lot of callers expect `Vec::reserve` to be nearly free when no resizing is required, but unfortunately that isn't the case. LLVM makes remarkably poor inlining choices (along the path from `Vec::reserve` to `RawVec::grow_amortized`), so depending on the surrounding context you either get a huge blob of `RawVec`'s resizing logic inlined into some seemingly unrelated function, or not enough inlining happens and/or the actual check in `needs_to_grow` ends up behind a function call. My goal is to make the codegen for `Vec::reserve` match the mental model that callers seem to have: It's reliably just a `sub cmp ja` if there is already sufficient capacity.

This patch has the following impact on the serde_json benchmarks: ca3efde8a5 run with `cargo +stage1 run --release -- -n 1024`

Before:
```
                                DOM                  STRUCT
======= serde_json ======= parse|stringify ===== parse|stringify ====
data/canada.json         340 MB/s   490 MB/s   630 MB/s   370 MB/s
data/citm_catalog.json   460 MB/s   540 MB/s  1010 MB/s   550 MB/s
data/twitter.json        330 MB/s   840 MB/s   640 MB/s   630 MB/s

======= json-rust ======== parse|stringify ===== parse|stringify ====
data/canada.json         580 MB/s   990 MB/s
data/citm_catalog.json   720 MB/s   660 MB/s
data/twitter.json        570 MB/s   960 MB/s
```

After:
```
                                DOM                  STRUCT
======= serde_json ======= parse|stringify ===== parse|stringify ====
data/canada.json         330 MB/s   510 MB/s   610 MB/s   380 MB/s
data/citm_catalog.json   450 MB/s   640 MB/s   970 MB/s   830 MB/s
data/twitter.json        330 MB/s   880 MB/s   670 MB/s   960 MB/s

======= json-rust ======== parse|stringify ===== parse|stringify ====
data/canada.json         560 MB/s  1130 MB/s
data/citm_catalog.json   710 MB/s   880 MB/s
data/twitter.json        530 MB/s  1230 MB/s

```

That's approximately a one-third increase in throughput on two of the benchmarks, and no effect on one (The benchmark suite has sufficient jitter that I could pick a run where there are no regressions, so I'm not convinced they're meaningful here).

This also produces perf increases on the order of 3-5% in a few other microbenchmarks that I'm tracking. It might be useful to see if this has a cascading effect on inlining choices in some large codebases.

Compiling this simple program demonstrates the change in codegen that causes the perf impact:
```rust
fn main() {
    reserve(&mut Vec::new());
}

#[inline(never)]
fn reserve(v: &mut Vec<u8>) {
    v.reserve(1234);
}
```

Before:
```asm
00000000000069b0 <scratch::reserve>:
    69b0:       53                      push   %rbx
    69b1:       48 83 ec 30             sub    $0x30,%rsp
    69b5:       48 8b 47 08             mov    0x8(%rdi),%rax
    69b9:       48 8b 4f 10             mov    0x10(%rdi),%rcx
    69bd:       48 89 c2                mov    %rax,%rdx
    69c0:       48 29 ca                sub    %rcx,%rdx
    69c3:       48 81 fa d1 04 00 00    cmp    $0x4d1,%rdx
    69ca:       77 73                   ja     6a3f <scratch::reserve+0x8f>
    69cc:       48 81 c1 d2 04 00 00    add    $0x4d2,%rcx
    69d3:       72 75                   jb     6a4a <scratch::reserve+0x9a>
    69d5:       48 89 fb                mov    %rdi,%rbx
    69d8:       48 8d 14 00             lea    (%rax,%rax,1),%rdx
    69dc:       48 39 ca                cmp    %rcx,%rdx
    69df:       48 0f 47 ca             cmova  %rdx,%rcx
    69e3:       48 83 f9 08             cmp    $0x8,%rcx
    69e7:       be 08 00 00 00          mov    $0x8,%esi
    69ec:       48 0f 47 f1             cmova  %rcx,%rsi
    69f0:       48 85 c0                test   %rax,%rax
    69f3:       74 17                   je     6a0c <scratch::reserve+0x5c>
    69f5:       48 8b 0b                mov    (%rbx),%rcx
    69f8:       48 89 0c 24             mov    %rcx,(%rsp)
    69fc:       48 89 44 24 08          mov    %rax,0x8(%rsp)
    6a01:       48 c7 44 24 10 01 00    movq   $0x1,0x10(%rsp)
    6a08:       00 00
    6a0a:       eb 08                   jmp    6a14 <scratch::reserve+0x64>
    6a0c:       48 c7 04 24 00 00 00    movq   $0x0,(%rsp)
    6a13:       00
    6a14:       48 8d 7c 24 18          lea    0x18(%rsp),%rdi
    6a19:       48 89 e1                mov    %rsp,%rcx
    6a1c:       ba 01 00 00 00          mov    $0x1,%edx
    6a21:       e8 9a fe ff ff          call   68c0 <alloc::raw_vec::finish_grow>
    6a26:       48 8b 7c 24 20          mov    0x20(%rsp),%rdi
    6a2b:       48 8b 74 24 28          mov    0x28(%rsp),%rsi
    6a30:       48 83 7c 24 18 01       cmpq   $0x1,0x18(%rsp)
    6a36:       74 0d                   je     6a45 <scratch::reserve+0x95>
    6a38:       48 89 3b                mov    %rdi,(%rbx)
    6a3b:       48 89 73 08             mov    %rsi,0x8(%rbx)
    6a3f:       48 83 c4 30             add    $0x30,%rsp
    6a43:       5b                      pop    %rbx
    6a44:       c3                      ret
    6a45:       48 85 f6                test   %rsi,%rsi
    6a48:       75 08                   jne    6a52 <scratch::reserve+0xa2>
    6a4a:       ff 15 38 c4 03 00       call   *0x3c438(%rip)        # 42e88 <_GLOBAL_OFFSET_TABLE_+0x490>
    6a50:       0f 0b                   ud2
    6a52:       ff 15 f0 c4 03 00       call   *0x3c4f0(%rip)        # 42f48 <_GLOBAL_OFFSET_TABLE_+0x550>
    6a58:       0f 0b                   ud2
    6a5a:       66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)
```

After:
```asm
0000000000006910 <scratch::reserve>:
    6910:       48 8b 47 08             mov    0x8(%rdi),%rax
    6914:       48 8b 77 10             mov    0x10(%rdi),%rsi
    6918:       48 29 f0                sub    %rsi,%rax
    691b:       48 3d d1 04 00 00       cmp    $0x4d1,%rax
    6921:       77 05                   ja     6928 <scratch::reserve+0x18>
    6923:       e9 e8 fe ff ff          jmp    6810 <alloc::raw_vec::RawVec<T,A>::reserve::do_reserve_and_handle>
    6928:       c3                      ret
    6929:       0f 1f 80 00 00 00 00    nopl   0x0(%rax)
```
2021-03-30 03:41:14 +00:00
Dylan DPC
2843baaeb6
Rollup merge of #82331 - frol:feat/std-binary-heap-as-slice, r=Amanieu
alloc: Added `as_slice` method to `BinaryHeap` collection

I initially asked about whether it is a useful addition on https://internals.rust-lang.org/t/should-i-add-as-slice-method-to-binaryheap/13816, and it seems there were no objections, so went ahead with this PR.

> There is [`BinaryHeap::into_vec`](https://doc.rust-lang.org/std/collections/struct.BinaryHeap.html#method.into_vec), but it consumes the value. I wonder if there is an API design limitation that should be taken into account. Implementation-wise, the inner buffer is just a Vec, so it is trivial to expose as_slice from it.

Please, guide me through if I need to add tests or something else.

UPD: Tracking issue #83659
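
A usage sketch, assuming a nightly toolchain with the unstable `binary_heap_as_slice` feature (tracking issue above): the slice exposes the underlying buffer in heap order, not sorted order.

```rust
#![feature(binary_heap_as_slice)]

use std::collections::BinaryHeap;

fn main() {
    let heap: BinaryHeap<i32> = vec![3, 1, 2].into_iter().collect();

    // The greatest element is first (same as `peek`); the rest is in
    // unspecified heap order.
    let slice: &[i32] = heap.as_slice();
    assert_eq!(slice.len(), 3);
    assert_eq!(slice[0], 3);
}
```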
2021-03-30 00:32:18 +02:00