Commit Graph

986 Commits

Author SHA1 Message Date
bors
b56b175c6c Auto merge of #84310 - RalfJung:const-fn-feature-flags, r=oli-obk
further split up const_fn feature flag

This continues the work on splitting up `const_fn` into separate feature flags:
* `const_fn_trait_bound` for `const fn` with trait bounds
* `const_fn_unsize` for unsizing coercions in `const fn` (looks like only `dyn` unsizing is still guarded here)

I don't know if there are even any things left that `const_fn` guards... at least libcore and liballoc do not need it any more.
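As a rough illustration (not code from the PR), this is the kind of `const fn` the new `const_fn_trait_bound` gate covers on a nightly of this period; the function and constant names here are made up for the example:

```rust
#![feature(const_fn_trait_bound)]

// With the split, a `const fn` carrying a trait bound (here `T: Copy`) needs
// `const_fn_trait_bound` rather than the old catch-all `const_fn` gate.
const fn copy_from_ref<T: Copy>(x: &T) -> T {
    *x
}

fn main() {
    const N: u64 = copy_from_ref(&42u64);
    assert_eq!(N, 42);
}
```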

`@oli-obk` are you currently able to do reviews?
2021-04-24 23:16:03 +00:00
Yuki Okushi
ed5646bfee
Rollup merge of #84453 - notriddle:waker-from-docs, r=cramertj
Document From implementations for Waker and RawWaker

CC #51430
2021-04-24 12:17:06 +09:00
Yuki Okushi
5b7c98676f
Rollup merge of #84248 - calebsander:refactor/vec-functions, r=Amanieu
Remove duplicated fn(Box<[T]>) -> Vec<T>

`<[T]>::into_vec()` does the same thing as `Vec::from::<Box<[T]>>()`, so they can be implemented in terms of each other. This was the previous implementation of `Vec::from()`, but was changed in #78461. I'm not sure what the rationale was for that change, but it seems preferable to maintain a single implementation.
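For context, a small sketch of the two equivalent conversions this PR unifies (plain library usage, nothing specific to the new implementation):

```rust
fn main() {
    let boxed: Box<[i32]> = vec![1, 2, 3].into_boxed_slice();

    // Both convert a boxed slice into a Vec, reusing the allocation;
    // after this change one is implemented in terms of the other.
    let a: Vec<i32> = boxed.clone().into_vec();
    let b: Vec<i32> = Vec::from(boxed);
    assert_eq!(a, b);
}
```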
2021-04-24 03:44:04 +09:00
Michael Howell
60ff298070 Document From implementations for Waker and RawWaker 2021-04-22 14:16:33 -07:00
Mara Bos
f5d72ab69b Add better test for BinaryHeap::retain. 2021-04-22 14:24:30 +02:00
Mara Bos
62226eecb6 Improve BinaryHeap::retain.
It now doesn't fully rebuild the heap, but only the parts that are
necessary.
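A usage sketch for orientation; the observable behavior is unchanged, only the rebuild strategy is cheaper (`retain` was still unstable behind `binary_heap_retain` when this landed):

```rust
use std::collections::BinaryHeap;

fn main() {
    let mut heap: BinaryHeap<i32> = (1..=10).collect();
    // Keep only even values; the improved retain re-sifts just the affected
    // positions instead of rebuilding the whole heap.
    heap.retain(|&x| x % 2 == 0);
    assert_eq!(heap.into_sorted_vec(), vec![2, 4, 6, 8, 10]);
}
```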
2021-04-22 14:24:30 +02:00
Caleb Sander
f505d619c4 Remove duplicated fn(Box<[T]>) -> Vec<T> 2021-04-21 23:32:10 -04:00
Mara Bos
a7a7737114
Rollup merge of #84013 - CDirkx:fmt, r=m-ou-se
Replace all `fmt.pad` with `debug_struct`

This replaces any occurrence of:
- `f.pad("X")` with `f.debug_struct("X").finish()`
- `f.pad("X { .. }")` with `f.debug_struct("X").finish_non_exhaustive()`

This is in line with existing formatting code such as
1255053067/library/std/src/sync/mpsc/mod.rs (L1470-L1475)
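An illustrative before/after for the two patterns, using throwaway types (`finish_non_exhaustive` was still unstable as `debug_non_exhaustive` at this point):

```rust
use std::fmt;

struct Foo;

impl fmt::Debug for Foo {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Previously: f.pad("Foo")
        f.debug_struct("Foo").finish()
    }
}

struct Bar;

impl fmt::Debug for Bar {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Previously: f.pad("Bar { .. }")
        f.debug_struct("Bar").finish_non_exhaustive()
    }
}

fn main() {
    assert_eq!(format!("{:?}", Foo), "Foo");
    assert_eq!(format!("{:?}", Bar), "Bar { .. }");
}
```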
2021-04-21 23:06:11 +02:00
Christiaan Dirkx
fccc75cf82 Fix alloc::test::test_show 2021-04-21 15:45:41 +02:00
Ralf Jung
fbfaab2cb7 separate feature flag for unsizing casts in const fn 2021-04-18 19:11:29 +02:00
Ralf Jung
fdad6ab3a3 move 'trait bounds on const fn' to separate feature gate 2021-04-18 18:36:41 +02:00
Waffle Lapkin
3ecaf57b29
Slightly change wording and fix typo in vec/mod.rs 2021-04-18 12:32:10 +03:00
Dylan DPC
a5ec5cf72a
Rollup merge of #84145 - vojtechkral:vecdeque-binary-search, r=m-ou-se
Address comments for vecdeque_binary_search #78021
2021-04-16 14:08:32 +02:00
bors
5e7bebad1d Auto merge of #84220 - gpluscb:weak_doc, r=jyn514
Correct outdated documentation for rc::Weak

This was overlooked in ~~#50357~~ #51901
2021-04-16 02:31:15 +00:00
Vojtech Kral
44be1c2aa0 VecDeque: Improve doc comments in binary search fns
Co-authored-by: Mara Bos <m-ou.se@m-ou.se>
2021-04-15 23:23:46 +02:00
Vojtech Kral
e68680d30d VecDeque: Add partition_point() #78021 2021-04-15 23:23:23 +02:00
Vojtech Kral
bccbf9db1c VecDeque: binary_search_by(): return right away if hit found at back.first() #78021 2021-04-15 23:23:22 +02:00
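A quick sketch of the APIs touched by these VecDeque commits; both assume the deque is sorted and were unstable behind `vecdeque_binary_search` at the time:

```rust
use std::collections::VecDeque;

fn main() {
    // binary_search and partition_point mirror the corresponding slice APIs.
    let deque: VecDeque<i32> = vec![1, 3, 5, 7, 9].into();
    assert_eq!(deque.binary_search(&5), Ok(2));
    assert_eq!(deque.partition_point(|&x| x < 6), 3);
}
```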
MarRue
288bd49528 Correct outdated rc::Weak::default documentation 2021-04-15 14:54:39 +02:00
Ivan Tham
eeac70c567
Merge same condition branch in vec spec_extend 2021-04-15 11:58:02 +08:00
bors
5c1304205b Auto merge of #84135 - rust-lang:GuillaumeGomez-patch-1, r=kennytm
Improve code example for length comparison

Small fix/improvement: it's much safer to check that you're under the length of an array rather than checking that you're equal to it. This is even more true if you update the length of the array while iterating.
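A hedged illustration of the point (not the exact doc example being changed): the `<` comparison keeps the loop correct even when the length shrinks mid-iteration, where an equality stop test could be stepped over.

```rust
fn main() {
    let mut v = vec![10, 20, 30];
    let mut i = 0;
    // `i < v.len()` stays correct even if elements are removed while
    // iterating; an `i == v.len()` stop test could be skipped entirely.
    while i < v.len() {
        if v[i] == 20 {
            v.remove(i);
            continue;
        }
        i += 1;
    }
    assert_eq!(v, [10, 30]);
}
```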
2021-04-13 14:03:49 +00:00
Guillaume Gomez
b89c464bed
Improve code example for length comparison 2021-04-12 19:59:52 +02:00
Jubilee Young
7baeaa95e2 Stabilize BTree{Map,Set}::retain 2021-04-12 00:01:31 -07:00
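For reference, the now-stabilized API in use:

```rust
use std::collections::{BTreeMap, BTreeSet};

fn main() {
    // BTreeMap::retain drops entries whose key/value fail the predicate.
    let mut map: BTreeMap<i32, i32> = (0..8).map(|k| (k, k * 10)).collect();
    map.retain(|&k, _| k % 2 == 0);
    assert_eq!(map.keys().copied().collect::<Vec<_>>(), vec![0, 2, 4, 6]);

    // BTreeSet::retain works the same way on set elements.
    let mut set: BTreeSet<i32> = (0..8).collect();
    set.retain(|&k| k > 4);
    assert_eq!(set.into_iter().collect::<Vec<_>>(), vec![5, 6, 7]);
}
```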
Ralf Jung
63b682b3ec fix incorrect from_raw_in doctest 2021-04-10 12:24:19 +02:00
Dylan DPC
505846ec07
Rollup merge of #83476 - mystor:rc_mutate_strong_count, r=m-ou-se
Add strong_count mutation methods to Rc

The corresponding methods were stabilized on `Arc` in #79285 (tracking: #71983). This patch implements and stabilizes identical methods on the `Rc` types as well.
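A sketch of the newly stabilized `Rc` methods, mirroring the earlier `Arc` ones:

```rust
use std::rc::Rc;

fn main() {
    let rc = Rc::new(5);
    let ptr = Rc::into_raw(rc);

    unsafe {
        // Acts like `Rc::clone` without materializing a new handle.
        Rc::increment_strong_count(ptr);
        let rc = Rc::from_raw(ptr);
        assert_eq!(Rc::strong_count(&rc), 2);

        // Undo the extra count; the remaining handle still owns the value.
        Rc::decrement_strong_count(Rc::as_ptr(&rc));
        assert_eq!(Rc::strong_count(&rc), 1);
    }
}
```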
2021-04-07 13:07:06 +02:00
bors
35aa636159 Auto merge of #83530 - Mark-Simulacrum:bootstrap-bump, r=Mark-Simulacrum
Bump bootstrap to 1.52 beta

This includes the standard bump, but also a workaround for new cargo behavior around clearing out the doc directory when the rustdoc version changes.
2021-04-04 22:45:56 +00:00
Mark Rousskov
b3a4f91b8d Bump cfgs 2021-04-04 14:57:05 -04:00
Dylan DPC
b943ea8cdc
Rollup merge of #83827 - the8472:fix-inplace-panic-on-drop, r=RalfJung
cleanup leak after test to make miri happy

Contains changes that were requested in #83629 but didn't make it into the rollup.

r? `@RalfJung`
2021-04-04 19:20:06 +02:00
Dylan DPC
6c13556183
Rollup merge of #82726 - ssomers:btree_node_rearange, r=Mark-Simulacrum
BTree: move blocks around in node.rs

Without changing any names or implementation, reorder some members:
- Move down the ones defined long ago on the demised `struct Root`, to below the definition of their current host `struct NodeRef`.
- Move up some defined on `struct NodeRef` that are interspersed with those defined on `struct Handle`.
- Move up the `correct_…` methods squeezed between the two flavours of `push`.
- Move the unchecked static downcasts (`cast_to_…`) after the upcasts (`forget_`) and the (weirdly named) dynamic downcasts (`force`).
r? `@Mark-Simulacrum`
2021-04-04 19:20:00 +02:00
Dylan DPC
869726d335
Rollup merge of #81619 - SkiFire13:resultshunt-inplace, r=the8472
Implement `SourceIterator` and `InPlaceIterable` for `ResultShunt`
2021-04-04 19:19:59 +02:00
bors
88e7862dd0 Auto merge of #83267 - ssomers:btree_prune_range_search_overlap, r=Mark-Simulacrum
BTree: no longer search arrays twice to check Ord

A possible addition to / partial replacement of #83147: no longer linearly search the upper bound of a range in the initial portion of the keys we already know are below the lower bound.
- Should be faster: fewer key comparisons at the cost of some instructions dealing with offsets
- Makes code a little more complicated.
- No longer detects ill-defined `Ord` implementations, but that wasn't a publicised feature, and was quite incomplete, and was only done in the `range` and `range_mut` methods.
r? `@Mark-Simulacrum`
2021-04-04 05:52:43 +00:00
the8472
572873fce0
suggestion from review
Co-authored-by: Ralf Jung <post@ralfj.de>
2021-04-04 01:38:58 +02:00
The8472
3bd241f95b cleanup leak after test to make miri happy 2021-04-04 01:37:05 +02:00
Dylan DPC
542f441d44
Rollup merge of #83629 - the8472:fix-inplace-panic-on-drop, r=m-ou-se
Fix double-drop in `Vec::from_iter(vec.into_iter())` specialization when items drop during panic

This fixes the double-drop but it leaves a behavioral difference compared to the default implementation intact: In the default implementation the source and the destination vec are separate objects, so they get dropped separately. Here they share an allocation and the latter only exists as a pointer into the former. So if dropping the former panics then this fix will leak more items than the default implementation would. Is this acceptable or should the specialization also mimic the default implementation's drops-during-panic behavior?

Fixes #83618

`@rustbot` label T-libs-impl
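For orientation, this is the pattern that exercises the in-place collect specialization (a sketch only; the bug itself manifested when an element's `Drop` panicked mid-collect):

```rust
fn main() {
    // Collecting a Vec's IntoIter straight back into a Vec reuses the
    // original allocation via the in-place specialization; the bug was a
    // double-drop of elements when a Drop impl panicked during this.
    let v: Vec<String> = vec!["a".to_string(), "b".to_string()];
    let v2: Vec<String> = v.into_iter().collect();
    assert_eq!(v2, ["a", "b"]);
}
```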
2021-04-02 19:57:31 +02:00
The8472
ad3a791e2a panic early when TrustedLen indicates a length > usize::MAX 2021-03-31 23:09:28 +02:00
bors
689e8470ff Auto merge of #83458 - saethlin:improve-vec-benches, r=dtolnay
Clean up Vec's benchmarks

The Vec benchmarks need a lot of love. I sort of noticed this in https://github.com/rust-lang/rust/pull/83357 but the overall situation is much less awesome than I thought at the time. The first commit just removes a lot of asserts and does a touch of other cleanup.

A number of these benchmarks are poorly named. For example, `bench_map_fast` is not in fact fast, `bench_rev_1` and `bench_rev_2` are vague, `bench_in_place_zip_iter_mut` doesn't call `zip`, `bench_in_place*` don't do anything in-place... Should I fix these, or is there tooling that depends on the names not changing?

I've also noticed that `bench_rev_1` and `bench_rev_2` are remarkably fragile. It looks like poking other code in `Vec` can cause the codegen of this benchmark to switch to a version that has almost exactly half its current throughput and I have absolutely no idea why.

Here's the fast version:
```asm
  0.69 │110:   movdqu -0x20(%rbx,%rdx,4),%xmm0
  1.76 │       movdqu -0x10(%rbx,%rdx,4),%xmm1
  0.71 │       pshufd $0x1b,%xmm1,%xmm1
  0.60 │       pshufd $0x1b,%xmm0,%xmm0
  3.68 │       movdqu %xmm1,-0x30(%rcx)
 14.36 │       movdqu %xmm0,-0x20(%rcx)
 13.88 │       movdqu -0x40(%rbx,%rdx,4),%xmm0
  6.64 │       movdqu -0x30(%rbx,%rdx,4),%xmm1
  0.76 │       pshufd $0x1b,%xmm1,%xmm1
  0.77 │       pshufd $0x1b,%xmm0,%xmm0
  1.87 │       movdqu %xmm1,-0x10(%rcx)
 13.01 │       movdqu %xmm0,(%rcx)
 38.81 │       add    $0x40,%rcx
  0.92 │       add    $0xfffffffffffffff0,%rdx
  1.22 │     ↑ jne    110
```
And the slow one:
```asm
  0.42 │9a880:   movdqa     %xmm2,%xmm1
  4.03 │9a884:   movq       -0x8(%rbx,%rsi,4),%xmm4
  8.49 │9a88a:   pshufd     $0xe1,%xmm4,%xmm4
  2.58 │9a88f:   movq       -0x10(%rbx,%rsi,4),%xmm5
  7.02 │9a895:   pshufd     $0xe1,%xmm5,%xmm5
  4.79 │9a89a:   punpcklqdq %xmm5,%xmm4
  5.77 │9a89e:   movdqu     %xmm4,-0x18(%rdx)
 15.74 │9a8a3:   movq       -0x18(%rbx,%rsi,4),%xmm4
  3.91 │9a8a9:   pshufd     $0xe1,%xmm4,%xmm4
  5.04 │9a8ae:   movq       -0x20(%rbx,%rsi,4),%xmm5
  5.29 │9a8b4:   pshufd     $0xe1,%xmm5,%xmm5
  4.60 │9a8b9:   punpcklqdq %xmm5,%xmm4
  9.81 │9a8bd:   movdqu     %xmm4,-0x8(%rdx)
 11.05 │9a8c2:   paddq      %xmm3,%xmm0
  0.86 │9a8c6:   paddq      %xmm3,%xmm2
  5.89 │9a8ca:   add        $0x20,%rdx
  0.12 │9a8ce:   add        $0xfffffffffffffff8,%rsi
  1.16 │9a8d2:   add        $0x2,%rdi
  2.96 │9a8d6: → jne        9a880 <<alloc::vec::Vec<T,A> as core::iter::traits::collect::Extend<&T>>::extend+0xd0>
```
2021-03-30 09:03:29 +00:00
bors
32d3276561 Auto merge of #83357 - saethlin:vec-reserve-inlining, r=dtolnay
Reduce the impact of Vec::reserve calls that do not cause any allocation

I think a lot of callers expect `Vec::reserve` to be nearly free when no resizing is required, but unfortunately that isn't the case. LLVM makes remarkably poor inlining choices (along the path from `Vec::reserve` to `RawVec::grow_amortized`), so depending on the surrounding context you either get a huge blob of `RawVec`'s resizing logic inlined into some seemingly-unrelated function, or not enough inlining happens and/or the actual check in `needs_to_grow` ends up behind a function call. My goal is to make the codegen for `Vec::reserve` match the mental model that callers seem to have: it's reliably just a `sub cmp ja` if there is already sufficient capacity.

This patch has the following impact on the serde_json benchmarks: ca3efde8a5 run with `cargo +stage1 run --release -- -n 1024`

Before:
```
                                DOM                  STRUCT
======= serde_json ======= parse|stringify ===== parse|stringify ====
data/canada.json         340 MB/s   490 MB/s   630 MB/s   370 MB/s
data/citm_catalog.json   460 MB/s   540 MB/s  1010 MB/s   550 MB/s
data/twitter.json        330 MB/s   840 MB/s   640 MB/s   630 MB/s

======= json-rust ======== parse|stringify ===== parse|stringify ====
data/canada.json         580 MB/s   990 MB/s
data/citm_catalog.json   720 MB/s   660 MB/s
data/twitter.json        570 MB/s   960 MB/s
```

After:
```
                                DOM                  STRUCT
======= serde_json ======= parse|stringify ===== parse|stringify ====
data/canada.json         330 MB/s   510 MB/s   610 MB/s   380 MB/s
data/citm_catalog.json   450 MB/s   640 MB/s   970 MB/s   830 MB/s
data/twitter.json        330 MB/s   880 MB/s   670 MB/s   960 MB/s

======= json-rust ======== parse|stringify ===== parse|stringify ====
data/canada.json         560 MB/s  1130 MB/s
data/citm_catalog.json   710 MB/s   880 MB/s
data/twitter.json        530 MB/s  1230 MB/s

```

That's approximately a one-third increase in throughput on two of the benchmarks, and no effect on one (The benchmark suite has sufficient jitter that I could pick a run where there are no regressions, so I'm not convinced they're meaningful here).

This also produces perf increases on the order of 3-5% in a few other microbenchmarks that I'm tracking. It might be useful to see if this has a cascading effect on inlining choices in some large codebases.

Compiling this simple program demonstrates the change in codegen that causes the perf impact:
```rust
fn main() {
    reserve(&mut Vec::new());
}

#[inline(never)]
fn reserve(v: &mut Vec<u8>) {
    v.reserve(1234);
}
```

Before:
```asm
00000000000069b0 <scratch::reserve>:
    69b0:       53                      push   %rbx
    69b1:       48 83 ec 30             sub    $0x30,%rsp
    69b5:       48 8b 47 08             mov    0x8(%rdi),%rax
    69b9:       48 8b 4f 10             mov    0x10(%rdi),%rcx
    69bd:       48 89 c2                mov    %rax,%rdx
    69c0:       48 29 ca                sub    %rcx,%rdx
    69c3:       48 81 fa d1 04 00 00    cmp    $0x4d1,%rdx
    69ca:       77 73                   ja     6a3f <scratch::reserve+0x8f>
    69cc:       48 81 c1 d2 04 00 00    add    $0x4d2,%rcx
    69d3:       72 75                   jb     6a4a <scratch::reserve+0x9a>
    69d5:       48 89 fb                mov    %rdi,%rbx
    69d8:       48 8d 14 00             lea    (%rax,%rax,1),%rdx
    69dc:       48 39 ca                cmp    %rcx,%rdx
    69df:       48 0f 47 ca             cmova  %rdx,%rcx
    69e3:       48 83 f9 08             cmp    $0x8,%rcx
    69e7:       be 08 00 00 00          mov    $0x8,%esi
    69ec:       48 0f 47 f1             cmova  %rcx,%rsi
    69f0:       48 85 c0                test   %rax,%rax
    69f3:       74 17                   je     6a0c <scratch::reserve+0x5c>
    69f5:       48 8b 0b                mov    (%rbx),%rcx
    69f8:       48 89 0c 24             mov    %rcx,(%rsp)
    69fc:       48 89 44 24 08          mov    %rax,0x8(%rsp)
    6a01:       48 c7 44 24 10 01 00    movq   $0x1,0x10(%rsp)
    6a08:       00 00
    6a0a:       eb 08                   jmp    6a14 <scratch::reserve+0x64>
    6a0c:       48 c7 04 24 00 00 00    movq   $0x0,(%rsp)
    6a13:       00
    6a14:       48 8d 7c 24 18          lea    0x18(%rsp),%rdi
    6a19:       48 89 e1                mov    %rsp,%rcx
    6a1c:       ba 01 00 00 00          mov    $0x1,%edx
    6a21:       e8 9a fe ff ff          call   68c0 <alloc::raw_vec::finish_grow>
    6a26:       48 8b 7c 24 20          mov    0x20(%rsp),%rdi
    6a2b:       48 8b 74 24 28          mov    0x28(%rsp),%rsi
    6a30:       48 83 7c 24 18 01       cmpq   $0x1,0x18(%rsp)
    6a36:       74 0d                   je     6a45 <scratch::reserve+0x95>
    6a38:       48 89 3b                mov    %rdi,(%rbx)
    6a3b:       48 89 73 08             mov    %rsi,0x8(%rbx)
    6a3f:       48 83 c4 30             add    $0x30,%rsp
    6a43:       5b                      pop    %rbx
    6a44:       c3                      ret
    6a45:       48 85 f6                test   %rsi,%rsi
    6a48:       75 08                   jne    6a52 <scratch::reserve+0xa2>
    6a4a:       ff 15 38 c4 03 00       call   *0x3c438(%rip)        # 42e88 <_GLOBAL_OFFSET_TABLE_+0x490>
    6a50:       0f 0b                   ud2
    6a52:       ff 15 f0 c4 03 00       call   *0x3c4f0(%rip)        # 42f48 <_GLOBAL_OFFSET_TABLE_+0x550>
    6a58:       0f 0b                   ud2
    6a5a:       66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)
```

After:
```asm
0000000000006910 <scratch::reserve>:
    6910:       48 8b 47 08             mov    0x8(%rdi),%rax
    6914:       48 8b 77 10             mov    0x10(%rdi),%rsi
    6918:       48 29 f0                sub    %rsi,%rax
    691b:       48 3d d1 04 00 00       cmp    $0x4d1,%rax
    6921:       77 05                   ja     6928 <scratch::reserve+0x18>
    6923:       e9 e8 fe ff ff          jmp    6810 <alloc::raw_vec::RawVec<T,A>::reserve::do_reserve_and_handle>
    6928:       c3                      ret
    6929:       0f 1f 80 00 00 00 00    nopl   0x0(%rax)
```
2021-03-30 03:41:14 +00:00
Dylan DPC
2843baaeb6
Rollup merge of #82331 - frol:feat/std-binary-heap-as-slice, r=Amanieu
alloc: Added `as_slice` method to `BinaryHeap` collection

I initially asked whether this would be a useful addition on https://internals.rust-lang.org/t/should-i-add-as-slice-method-to-binaryheap/13816, and since there seemed to be no objections, I went ahead with this PR.

> There is [`BinaryHeap::into_vec`](https://doc.rust-lang.org/std/collections/struct.BinaryHeap.html#method.into_vec), but it consumes the value. I wonder if there is an API design limitation that should be taken into account. Implementation-wise, the inner buffer is just a `Vec`, so it is trivial to expose `as_slice` from it.

Please, guide me through if I need to add tests or something else.

UPD: Tracking issue #83659
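A small sketch of the proposed accessor as of this PR (unstable, behind `binary_heap_as_slice`, tracking issue #83659):

```rust
#![feature(binary_heap_as_slice)]
use std::collections::BinaryHeap;

fn main() {
    let heap: BinaryHeap<i32> = vec![3, 1, 2].into();
    // Unlike into_vec(), as_slice() borrows the backing buffer without
    // consuming the heap; elements appear in internal heap order.
    let slice: &[i32] = heap.as_slice();
    assert_eq!(slice.len(), 3);
    assert_eq!(slice[0], 3); // the maximum sits at the root
}
```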
2021-03-30 00:32:18 +02:00
Vlad Frolov
595f3f25fc Updated the tracking issue # 2021-03-29 22:44:48 +03:00
The8472
421f5d282a fix double-drop in in-place collect specialization 2021-03-29 04:48:13 +02:00
The8472
fa89c0fbcf add testcase for double-drop during Vec in-place collection 2021-03-29 04:39:23 +02:00
bors
0239876020 Auto merge of #83582 - jyn514:might-not, r=joshtriplett
may not -> might not

may not -> might not

"may not" has two possible meanings:
1. A command: "You may not stay up past your bedtime."
2. A fact that's only sometimes true: "Some cities may not have bike lanes."

In some cases, the meaning is ambiguous: "Some cars may not have snow
tires." (do the cars *happen* to not have snow tires, or is it
physically impossible for them to have snow tires?)

This changes places where the standard library uses the "description of
fact" meaning to say "might not" instead.

This is just `std::vec` for now - if you think this is a good idea I can
convert the rest of the standard library.
2021-03-28 14:16:03 +00:00
bors
d4c96de64f Auto merge of #83577 - geeklint:slice_to_ascii_case_doc_links, r=m-ou-se
Adjust documentation links for slice::make_ascii_*case

The documentation for the functions `slice::to_ascii_lowercase` and `slice::to_ascii_uppercase` contain the suggestion

> To lowercase the value in-place, use `make_ascii_lowercase`

however the link to the suggested method takes you to the page for `u8`, rather than the method of that name on the same page.
2021-03-28 11:34:55 +00:00
bors
5208f63ba8 Auto merge of #81728 - Qwaz:fix-80335, r=joshtriplett
Fixes API soundness issue in join()

Fixes #80335
2021-03-28 06:32:34 +00:00
Joshua Nelson
e051db6838 may not -> might not
"may not" has two possible meanings:
1. A command: "You may not stay up past your bedtime."
2. A fact that's only sometimes true: "Some cities may not have bike lanes."

In some cases, the meaning is ambiguous: "Some cars may not have snow
tires." (do the cars *happen* to not have snow tires, or is it
physically impossible for them to have snow tires?)

This changes places where the standard library uses the "description of
fact" meaning to say "might not" instead.

This is just `std::vec` for now - if you think this is a good idea I can
convert the rest of the standard library.
2021-03-27 16:01:16 -04:00
Dylan DPC
b2e254318d
Rollup merge of #82917 - cuviper:iter-zip, r=m-ou-se
Add function core::iter::zip

This makes it a little easier to `zip` iterators:

```rust
for (x, y) in zip(xs, ys) {}
// vs.
for (x, y) in xs.into_iter().zip(ys) {}
```

You can `zip(&mut xs, &ys)` for the conventional `iter_mut()` and
`iter()`, respectively. This can also support arbitrary nesting, where
it's easier to see the item layout than with arbitrary `zip` chains:

```rust
for ((x, y), z) in zip(zip(xs, ys), zs) {}
for (x, (y, z)) in zip(xs, zip(ys, zs)) {}
// vs.
for ((x, y), z) in xs.into_iter().zip(ys).zip(zs) {}
for (x, (y, z)) in xs.into_iter().zip(ys.into_iter().zip(zs)) {}
```

It may also format more nicely, especially when the first iterator is a
longer chain of methods -- for example:

```rust
    iter::zip(
        trait_ref.substs.types().skip(1),
        impl_trait_ref.substs.types().skip(1),
    )
    // vs.
    trait_ref
        .substs
        .types()
        .skip(1)
        .zip(impl_trait_ref.substs.types().skip(1))
```

This replaces the tuple-pair `IntoIterator` in #78204.
There is prior art for the utility of this in [`itertools::zip`].

[`itertools::zip`]: https://docs.rs/itertools/0.10.0/itertools/fn.zip.html
2021-03-27 20:37:07 +01:00
Violet
634d48d9d6 adjust documentation links for slice ascii case functions to use newer rustdoc link format 2021-03-27 14:15:42 -04:00
Violet
d29d87f08b update links to make_ascii_lowercase for slice to point to methods on the same type, rather than on u8 2021-03-27 13:45:30 -04:00
bors
aef11409b4 Auto merge of #78618 - workingjubilee:ieee754-fmt, r=m-ou-se
Add IEEE 754 compliant fmt/parse of -0, infinity, NaN

This pull request improves the Rust float formatting/parsing libraries to comply with IEEE 754's formatting expectations around certain special values, namely signed zero, the infinities, and NaN. It also adds IEEE 754 compliance tests that, while less stringent in certain places than many of the existing flt2dec/dec2flt capability tests, are intended to serve as the beginning of a roadmap to future compliance with the standard. Some relevant documentation is also adjusted with clarifying remarks.

This PR follows from discussion in https://github.com/rust-lang/rfcs/issues/1074, and closes #24623.

The most controversial change here is likely to be that -0 is now printed as -0. Allow me to explain: While there appears to be community support for an opt-in toggle of printing floats as if they exist in the naively expected domain of numbers, i.e. not the extended reals (where floats live), IEEE 754-2019 is clear that a float converted to a string should be capable of being transformed into the original floating point bit-pattern when it satisfies certain conditions (namely, when it is an actual numeric value i.e. not a NaN and the original and destination float width are the same). -0 is given special attention here as a value that should have its sign preserved. In addition, the vast majority of other programming languages not only output `-0` but output `-0.0` here.

While IEEE 754 offers broad leeway in how to handle producing what it calls a "decimal character sequence", it is clear that the operations a language provides should be capable of round-tripping, and it is confusing to advertise the f32 and f64 types as binary32 and binary64 yet have the most basic way of producing a string and then reading it back into a floating point number be non-conformant with the standard. Further, existing documentation suggested that e.g. -0 would be printed as -0 regardless of the presence of the `+` fmt character, but it printed "+0" instead when given one (which is what led to the opening of #24623).

There are other parsing and formatting issues for floating point numbers which prevent Rust from complying with the standard, as well as other well-documented challenges on the arithmetic level, but I hope that this can be the beginning of motion towards solving those challenges.
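A few observable effects of the change, as a hedged sketch (behavior on toolchains that include this PR):

```rust
fn main() {
    // Negative zero keeps its sign when formatted...
    assert_eq!(format!("{}", -0.0_f64), "-0");
    // ...and round-trips through parsing.
    assert!("-0".parse::<f64>().unwrap().is_sign_negative());

    // The special values format as expected.
    assert_eq!(format!("{}", f64::INFINITY), "inf");
    assert_eq!(format!("{}", f64::NAN), "NaN");
}
```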
2021-03-27 10:40:16 +00:00
Yuki Okushi
c143267901
Rollup merge of #83388 - alamb:alamb/fmt-dcs, r=Mark-Simulacrum
Make # pretty print format easier to discover

# Rationale:

I use (cargo cult?) three formats in Rust: `{}`, debug `{:?}`, and pretty-print debug `{:#?}`. I discovered `{:#?}` in some blog post or guide when I started working in Rust. While `#` is documented, I think it is hard to discover. So, taking the good advice of `@carols10cents`, I am trying to improve the docs with this PR.

As a reminder "pretty print" means that where `{:?}` will print something like
```
foo: { b1: 1, b2: 2}
```

`{:#?}` will print something like
```
foo {
  b1: 1
  b2: 2
}
```

# Changes
Add an example to `fmt` to try and make it easier to discover `#`
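The difference in a runnable form, using a throwaway derived-`Debug` struct (output shapes as described above):

```rust
#[derive(Debug)]
struct Foo {
    b1: u32,
    b2: u32,
}

fn main() {
    let foo = Foo { b1: 1, b2: 2 };
    println!("{:?}", foo);  // Foo { b1: 1, b2: 2 }
    println!("{:#?}", foo); // same data, pretty-printed one field per line
}
```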
2021-03-27 12:37:20 +09:00
Josh Stone
3b1f5e3462 Use iter::zip in library/ 2021-03-26 09:32:29 -07:00