Commit Graph

1664 Commits

bors
3d31363338 Auto merge of #85176 - a1phyr:impl_clone_from, r=yaahc
Override `clone_from` for some types

Override `clone_from` method of the `Clone` trait for:
- `cell::RefCell`
- `cmp::Reverse`
- `io::Cursor`
- `mem::ManuallyDrop`

This can bring performance improvements.
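A minimal sketch of the pattern with a hypothetical wrapper type (not the actual standard-library code): a manual `clone_from` forwards to the inner value's `clone_from`, which can reuse an existing allocation instead of building a fresh clone.

```rust
// Sketch of the general technique, using a hypothetical wrapper type.
#[derive(Debug)]
struct Wrapper<T>(T);

impl<T: Clone> Clone for Wrapper<T> {
    fn clone(&self) -> Self {
        Wrapper(self.0.clone())
    }

    // Forward to the inner type's `clone_from`, which may reuse the
    // destination's existing allocation (e.g. for `String` or `Vec`).
    fn clone_from(&mut self, source: &Self) {
        self.0.clone_from(&source.0);
    }
}

fn main() {
    let src = Wrapper(String::from("hello"));
    let mut dst = Wrapper(String::with_capacity(16));
    dst.clone_from(&src); // copies into dst's existing buffer when it fits
    println!("{:?}", dst);
}
```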
2021-05-19 02:17:41 +00:00
bors
4e3e6db011 Auto merge of #84767 - scottmcm:try_trait_actual, r=lcnr
Implement the new desugaring from `try_trait_v2`

~~Currently blocked on https://github.com/rust-lang/rust/issues/84782, which has a PR in https://github.com/rust-lang/rust/pull/84811~~ Rebased atop that fix.

`try_trait_v2` tracking issue: https://github.com/rust-lang/rust/issues/84277

Unfortunately this is already touching a ton of things, so if you have suggestions for good ways to split it up, I'd be happy to hear them.  (The combination of the use in the library, the compiler changes, the corresponding diagnostic differences, and even the MIR tests means that I don't really have a great plan for it other than trying to have decently-readable commits.)

r? `@ghost`

~~(This probably shouldn't go in during the last week before the fork anyway.)~~ Fork happened.
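For reference, the desugaring of `?` under `try_trait_v2` is roughly the following hand-expansion (a nightly-only sketch using the unstable `Try`/`FromResidual` traits; simplified, not the compiler's exact output):

```rust
// Nightly-only sketch: a hand-expansion of what `s?` becomes in a
// function returning Option under the try_trait_v2 desugaring.
#![feature(try_trait_v2)]

use std::ops::{ControlFlow, FromResidual, Try};

fn first_char(s: Option<&str>) -> Option<char> {
    // Equivalent to: let s = s?;
    let s = match Try::branch(s) {
        ControlFlow::Continue(v) => v,
        ControlFlow::Break(residual) => return FromResidual::from_residual(residual),
    };
    s.chars().next()
}

fn main() {
    assert_eq!(first_char(Some("hi")), Some('h'));
    assert_eq!(first_char(None), None);
}
```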
2021-05-18 20:50:01 +00:00
Guillaume Gomez
a181806b8c
Rollup merge of #85338 - lopopolo:core-iter-repeat-gh-81292, r=joshtriplett
Implement more Iterator methods on core::iter::Repeat

`core::iter::Repeat` always returns the same element, which means we can
do better than implementing most `Iterator` methods in terms of
`Iterator::next`.

Fixes #81292.

#81292 raises the question of whether these changes violate the contract of `core::iter::Repeat`, but as far as I can tell `core::iter::repeat` doesn't make any guarantees around how it calls `Clone::clone`.
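As a rough illustration of the kind of specialization this enables (a hand-rolled stand-in, not the actual implementation): because every element is identical, methods such as `nth` never need to step through the intermediate items.

```rust
// Sketch of why specializing makes sense for a repeating iterator,
// using a stand-in for `core::iter::Repeat`.
struct MyRepeat<T: Clone> {
    element: T,
}

impl<T: Clone> Iterator for MyRepeat<T> {
    type Item = T;

    fn next(&mut self) -> Option<T> {
        Some(self.element.clone())
    }

    // `nth` does not need to advance n times: every element is the same,
    // so it can clone once and return immediately.
    fn nth(&mut self, _n: usize) -> Option<T> {
        Some(self.element.clone())
    }
}

fn main() {
    let mut it = MyRepeat { element: 7 };
    assert_eq!(it.nth(1_000_000), Some(7));
}
```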
2021-05-18 14:08:46 +02:00
The8472
39e492a2be mark internal inplace_iteration traits as hidden 2021-05-16 19:36:21 +02:00
Ryan Lopopolo
963bd3b643
Implement more Iterator methods on core::iter::Repeat
`core::iter::Repeat` always returns the same element, which means we can
do better than implementing most `Iterator` methods in terms of
`Iterator::next`.

Fixes #81292.
2021-05-15 10:37:05 -07:00
Guillaume Gomez
62b834fb9f
Rollup merge of #84751 - Soveu:is_char_boundary_opt, r=Amanieu
str::is_char_boundary - slight optimization

The current `str::is_char_boundary` implementation emits slightly more instructions because it includes an additional branch for `index == s.len()`:
```rust
pub fn is_char_boundary(s: &str, index: usize) -> bool {
    if index == 0 || index == s.len() {
        return true;
    }
    match s.as_bytes().get(index) {
        None => false,
        Some(&b) => (b as i8) >= -0x40,
    }
}
```
Just moving the `index == s.len()` check merges it with the `index < s.len()` comparison already performed by `s.as_bytes().get(index)`:
```rust
pub fn is_char_boundary2(s: &str, index: usize) -> bool {
    if index == 0 {
        return true;
    }

    match s.as_bytes().get(index) {
        // For some reason, LLVM likes this comparison here more
        None => index == s.len(),
        // This is bit magic equivalent to: b < 128 || b >= 192
        Some(&b) => (b as i8) >= -0x40,
    }
}
```
This version has better codegen on every platform except PowerPC.
<details><summary>x86 codegen</summary>
<p>

```nasm
example::is_char_boundary:
        mov     al, 1
        test    rdx, rdx
        je      .LBB0_5
        cmp     rsi, rdx
        je      .LBB0_5
        cmp     rsi, rdx
        jbe     .LBB0_3
        cmp     byte ptr [rdi + rdx], -65
        setg    al
.LBB0_5:
        ret
.LBB0_3:
        xor     eax, eax
        ret

example::is_char_boundary2:
        test    rdx, rdx
        je      .LBB1_1
        cmp     rsi, rdx
        jbe     .LBB1_4
        cmp     byte ptr [rdi + rdx], -65
        setg    al
        ret
.LBB1_1:  ; technically this branch is the same as LBB1_4
        mov     al, 1
        ret
.LBB1_4:
        sete    al
        ret
 ```
</p>
</details>

<details><summary>aarch64 codegen</summary>
<p>

```as
example::is_char_boundary:
        mov     x8, x0
        mov     w0, #1
        cbz     x2, .LBB0_4
        cmp     x1, x2
        b.eq    .LBB0_4
        b.ls    .LBB0_5
        ldrsb   w8, [x8, x2]
        cmn     w8, #65
        cset    w0, gt
.LBB0_4:
        ret
.LBB0_5:
        mov     w0, wzr
        ret

example::is_char_boundary2:
        cbz     x2, .LBB1_3
        cmp     x1, x2
        b.ls    .LBB1_4
        ldrsb   w8, [x0, x2]
        cmn     w8, #65
        cset    w0, gt
        ret
.LBB1_3:
        mov     w0, #1
        ret
.LBB1_4:
        cset    w0, eq
        ret
```

</p>
</details>

<details><summary>riscv64gc codegen</summary>
<p>

```as
example::is_char_boundary:
        seqz    a3, a2
        xor     a4, a1, a2
        seqz    a4, a4
        or      a4, a4, a3
        addi    a3, zero, 1
        bnez    a4, .LBB0_3
        bgeu    a2, a1, .LBB0_4
        add     a0, a0, a2
        lb      a0, 0(a0)
        addi    a1, zero, -65
        slt     a3, a1, a0
.LBB0_3:
        mv      a0, a3
        ret
.LBB0_4:
        mv      a0, zero
        ret

example::is_char_boundary2:
        beqz    a2, .LBB1_3
        bgeu    a2, a1, .LBB1_4
        add     a0, a0, a2
        lb      a0, 0(a0)
        addi    a1, zero, -65
        slt     a0, a1, a0
        ret
.LBB1_3:
        addi    a0, zero, 1
        ret
.LBB1_4:
        xor     a0, a1, a2
        seqz    a0, a0
        ret
```

</p>
</details>

[Link to godbolt](https://godbolt.org/z/K8avEz8Gr)

`@rustbot` label: A-codegen
2021-05-15 17:56:47 +02:00
Amanieu d'Antras
5918ee4317 Add support for const operands and options to global_asm!
On x86, the default syntax is also switched to Intel to match asm!
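A rough usage sketch of a `const` operand in `global_asm!` (written with the later-stabilized spelling for brevity; at the time of this commit the macro and its `const` operands were nightly-only, and the directives assume a 64-bit ELF target):

```rust
use std::arch::global_asm;

const SIZE: usize = 64;

// Embed the value of a Rust constant into module-level assembly.
// `.balign` / `.quad` assume a 64-bit ELF target (e.g. x86_64 or aarch64 Linux).
global_asm!(
    ".balign 8",
    ".global buffer_size",
    "buffer_size:",
    ".quad {size}",
    size = const SIZE,
);

fn main() {}
```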
2021-05-13 22:31:57 +01:00
Guillaume Gomez
16c825485f
Rollup merge of #85177 - tspiteri:wrapping-bits, r=joshtriplett
add BITS associated constant to core::num::Wrapping

This keeps `Wrapping` synchronized with the primitives it wraps, as part of the #32463 `wrapping_int_impl` feature.
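A minimal usage sketch (nightly-only, behind the `wrapping_int_impl` gate mentioned above):

```rust
#![feature(wrapping_int_impl)]

use core::num::Wrapping;

fn main() {
    // The new associated constant mirrors the wrapped primitive's BITS.
    assert_eq!(Wrapping::<u8>::BITS, u8::BITS);
    assert_eq!(Wrapping::<i64>::BITS, i64::BITS);
}
```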
2021-05-13 15:54:13 +02:00
bors
31bd868c39 Auto merge of #85218 - kornelski:pointerinline, r=scottmcm
#[inline(always)] on basic pointer methods

Retrying #85201 with only the pointer methods inlined. The goal is to make pointer methods behave just like plain pointer operations at opt-level 0, mainly to reduce overhead in debug builds.

cc `@scottmcm`
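As a generic illustration of the idea (not the actual pointer code): forcing trivial wrapper methods to inline removes the call overhead even in unoptimized builds.

```rust
// Sketch: a trivial accessor marked #[inline(always)] so that even an
// unoptimized (debug) build collapses it into the caller.
struct Cursor {
    ptr: *const u8,
}

impl Cursor {
    #[inline(always)]
    fn as_ptr(&self) -> *const u8 {
        self.ptr
    }
}

fn main() {
    let data = [1u8, 2, 3];
    let c = Cursor { ptr: data.as_ptr() };
    // With the forced inline, this is just a field load even in debug builds.
    assert_eq!(unsafe { *c.as_ptr() }, 1);
}
```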
2021-05-12 21:50:27 +00:00
bors
28e2b29b89 Auto merge of #84730 - sexxi-goose:rox-auto-trait, r=nikomatsakis
Add auto traits and clone trait migrations for RFC2229

This PR
- renames the existing RFC 2229 migration `disjoint_capture_drop_reorder` to `disjoint_capture_migration`
- adds additional migrations for auto traits and the `Clone` trait (see the sketch below)

Closes rust-lang/project-rfc-2229#29
Closes rust-lang/project-rfc-2229#28

r? `@nikomatsakis`
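For context, a sketch of the auto-trait side of the problem, using illustrative types rather than anything from the PR (compiles under edition 2021, where disjoint capture is in effect):

```rust
use std::rc::Rc;

#[allow(dead_code)]
struct Holder {
    counter: i32,
    shared: Rc<i32>, // Rc is not Send
}

fn require_send<T: Send>(_t: T) {}

fn main() {
    let h = Holder { counter: 0, shared: Rc::new(1) };
    // In edition 2018 the closure captures all of `h` and is therefore not
    // Send. Under RFC 2229 (edition 2021) it captures only `h.counter`, so
    // it becomes Send, which is exactly the change the migration lint warns about.
    let job = move || println!("{}", h.counter);
    require_send(job);
}
```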
2021-05-12 13:33:32 +00:00
Kornel
3773740719 #[inline(always)] on basic pointer methods 2021-05-12 10:10:28 +01:00
Trevor Spiteri
a381e29117 add BITS associated constant to core::num::Wrapping
This keeps `Wrapping` synchronized with the primitives it wraps, as part of
the #32463 `wrapping_int_impl` feature.
2021-05-11 13:36:43 +02:00
Benoît du Garreau
9332ac3bfc Override clone_from for some types 2021-05-11 13:00:34 +02:00
Dylan DPC
7107c89970
Rollup merge of #85096 - clarfonthey:const_unchecked, r=oli-obk
Make unchecked_{add,sub,mul} inherent methods unstably const

The intrinsics are marked as being stably const (even though they're not stable by nature of being intrinsics), but the currently-unstable inherent versions are not marked as const. This fixes that inconsistency. Split out of #85017.

r? `@oli-obk`
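A small sketch of the intended use (the nightly feature gates required at the time are omitted; the methods were stabilized later): making them `const` lets them appear in `const fn` bodies, with the caller still upholding the no-overflow contract.

```rust
// Sketch: an unchecked add usable in const context. The caller must
// guarantee the addition cannot overflow.
const unsafe fn sum_no_overflow(a: u32, b: u32) -> u32 {
    a.unchecked_add(b)
}

// SAFETY: 2 + 3 fits in a u32.
const SUM: u32 = unsafe { sum_no_overflow(2, 3) };

fn main() {
    assert_eq!(SUM, 5);
}
```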
2021-05-10 16:15:02 +02:00
Scott McMurray
bf0e34c001 PR feedback 2021-05-09 22:05:02 -07:00
ltdk
e6b12c8e4f Fix Step feature flag, make tidy lint more useful to find things like this 2021-05-09 17:15:54 -04:00
ltdk
380bbe8d47 Make unchecked_{add,sub,mul} inherent methods unstably const 2021-05-09 16:29:40 -04:00
Dylan DPC
aaf23892ab
Rollup merge of #84871 - richkadel:no-coverage-unstable-only, r=nagisa
Disallows `#![feature(no_coverage)]` on stable and beta (using standard crate-level gating)

Fixes: #84836

Removes the function-level feature-gating solution originally implemented and solves the same problem using `allow_internal_unstable`, so the normal crate-level feature-gating mechanism can still be used (which disallows the feature on stable and beta).

I tested this by building the compiler with and without `CFG_DISABLE_UNSTABLE_FEATURES=1`.

With unstable features disabled, I get the expected result as shown here:

```shell
$ ./build/x86_64-unknown-linux-gnu/stage1/bin/rustc     src/test/run-make-fulldeps/coverage/no_cov_crate.rs
error[E0554]: `#![feature]` may not be used on the dev release channel
 --> src/test/run-make-fulldeps/coverage/no_cov_crate.rs:2:1
  |
2 | #![feature(no_coverage)]
  | ^^^^^^^^^^^^^^^^^^^^^^^^

error: aborting due to previous error

For more information about this error, try `rustc --explain E0554`.
```

r? `@Mark-Simulacrum`
cc: `@tmandry` `@wesleywiser`
2021-05-07 00:38:40 +02:00
Dylan DPC
7835c7802d
Rollup merge of #84755 - jyn514:core-links, r=kennytm
Allow using `core::` in intra-doc links within core itself

I came up with this idea ages ago, but rustdoc used to ICE on it. Now it doesn't.

Helps with https://github.com/rust-lang/rust/issues/73445. Doesn't fix it completely since `extern crate self as std;` in std still gives strange errors.
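Illustrative only (a hypothetical item, not from the PR): the point is that a `core::`-prefixed path in a doc comment inside core itself now resolves as an intra-doc link.

```rust
/// Looks up a value, returning [`None`] when absent.
///
/// See [`core::option::Option`] for the full API; within core itself this
/// `core::`-prefixed path now resolves as an intra-doc link.
pub fn lookup(values: &[i32], idx: usize) -> Option<i32> {
    values.get(idx).copied()
}

fn main() {}
```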
2021-05-07 00:38:38 +02:00
Scott McMurray
b7a6c4a905 Perf Experiment: Wait, what if I just skip the trait alias 2021-05-06 11:37:46 -07:00
Scott McMurray
3d9660111c Fix rustdoc::private-intra-doc-links errors in the docs 2021-05-06 11:37:46 -07:00
Scott McMurray
4a7ceea930 Better rustc_on_unimplemented, and UI test fixes 2021-05-06 11:37:45 -07:00
Scott McMurray
266a72637a Simple library test updates 2021-05-06 11:37:45 -07:00
Scott McMurray
ca92b5a23a Actually implement the feature in the compiler
Including all the bootstrapping tweaks in the library.
2021-05-06 11:37:45 -07:00
Scott McMurray
c10eec3a1c Bootstrapping preparation for the library
Since just `ops::Try` will need to change meaning.
2021-05-06 11:37:44 -07:00
Roxane
9afea614bf Add additional migrations to handle auto-traits and clone traits
Combine all 2229 migrations under one flag name
2021-05-06 14:17:59 -04:00
Dylan DPC
ccf0e3e068
Rollup merge of #84949 - sdroege:maybe-unint-typo, r=m-ou-se
Fix typo in `MaybeUninit::array_assume_init` safety comment

And also add backticks around `MaybeUninit`.
2021-05-06 13:31:00 +02:00
Ralf Jung
92f3f0830f
Rollup merge of #84878 - jimblandy:contains-doc-fix, r=joshtriplett
Clarify documentation for `[T]::contains`

Change the documentation to correctly characterize when the suggested alternative to `contains` applies, and correctly explain why it works.

Fixes #84877
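The documented alternative reads roughly like this: `contains` needs a value of exactly `&T`, while `iter().any()` accepts anything comparable to the element type.

```rust
fn main() {
    let names = vec![String::from("ferris"), String::from("corro")];

    // `contains` needs a `&String` here...
    assert!(names.contains(&String::from("ferris")));

    // ...whereas `iter().any()` can compare against a `&str` directly,
    // because `String: PartialEq<str>`.
    assert!(names.iter().any(|n| n == "corro"));
}
```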
2021-05-05 17:52:26 +02:00
Ralf Jung
9ffba0917b
Rollup merge of #84843 - wcampbell0x2a:use-else-if-let, r=dtolnay
use else if in std library

Decreases indentation and improves readability
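The refactor is the usual flattening, e.g. (a generic sketch, not code from the PR):

```rust
fn describe(n: i32) -> &'static str {
    // Before: a nested `if` inside the `else` arm.
    // if n == 0 {
    //     "zero"
    // } else {
    //     if n > 0 { "positive" } else { "negative" }
    // }

    // After: the same logic flattened with `else if`.
    if n == 0 {
        "zero"
    } else if n > 0 {
        "positive"
    } else {
        "negative"
    }
}

fn main() {
    assert_eq!(describe(-3), "negative");
}
```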
2021-05-05 17:52:24 +02:00
Ralf Jung
722bebf163
Rollup merge of #83553 - jfrimmel:addr-of, r=m-ou-se
Update `ptr` docs with regards to `ptr::addr_of!`

This updates the documentation since `ptr::addr_of!` and `ptr::addr_of_mut!` are now stable. One might remove the distinction between the sections `# On packed structs` and `# Examples`, as the old section on packed structs was primarily there to prevent users from invoking undefined behavior, which is no longer necessary.

Technically there is now wrong/outdated documentation on stable, but I don't think this is worth a point release 😉

Fixes #83509.

`@rustbot` modify labels: T-doc
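The packed-struct case those docs describe looks roughly like this:

```rust
use std::ptr;

#[repr(packed)]
struct Packed {
    flag: u8,
    value: u32, // potentially misaligned inside the packed layout
}

fn main() {
    let p = Packed { flag: 1, value: 42 };

    // `&p.value` could create a misaligned reference, which is undefined
    // behavior; `addr_of!` produces the raw pointer without one.
    let ptr_to_value = ptr::addr_of!(p.value);

    // Read through the raw pointer, allowing for the misalignment.
    assert_eq!(unsafe { ptr_to_value.read_unaligned() }, 42);
}
```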
2021-05-05 17:52:18 +02:00
Rich Kadel
3584c1dd0c Disallows #![feature(no_coverage)] on stable and beta
using allow_internal_unstable (as recommended)

Fixes: #84836

```shell
$ ./build/x86_64-unknown-linux-gnu/stage1/bin/rustc     src/test/run-make-fulldeps/coverage/no_cov_crate.rs
error[E0554]: `#![feature]` may not be used on the dev release channel
 --> src/test/run-make-fulldeps/coverage/no_cov_crate.rs:2:1
  |
2 | #![feature(no_coverage)]
  | ^^^^^^^^^^^^^^^^^^^^^^^^

error: aborting due to previous error

For more information about this error, try `rustc --explain E0554`.
```
2021-05-05 07:52:26 -07:00
Sebastian Dröge
42405b4fa8 Fix typo in MaybeUninit::array_assume_init safety comment
And also add backticks around `MaybeUninit`.
2021-05-05 12:31:38 +03:00
Julian Frimmel
389333a21c Update ptr docs with regards to ptr::addr_of!
This updates the documentation since `ptr::addr_of!` and
`ptr::addr_of_mut!` are now stable. One might remove the distinction
between the sections `# On packed structs` and `# Examples`, as the old
section on packed structs was primarily there to prevent users from
invoking undefined behavior, which is no longer necessary.
There is also a new section in "how to obtain a pointer", which
references the `ptr::addr_of!` macros.

This commit contains squashed commits from code review.

Co-authored-by: Joshua Nelson <joshua@yottadb.com>
Co-authored-by: Mara Bos <m-ou.se@m-ou.se>
Co-authored-by: Soveu <marx.tomasz@gmail.com>
Co-authored-by: Ralf Jung <post@ralfj.de>
2021-05-03 23:14:17 +02:00
Jim Blandy
d53469c1d3 Clarify documentation for [T]::contains. Fixes #84877. 2021-05-03 12:01:16 -07:00
wcampbell
2e559c8e10
use else if in std library
Clippy: Decreases indentation and improves readability

Signed-off-by: wcampbell <wcampbell1995@gmail.com>
2021-05-03 07:05:08 -04:00
bors
e327a823d8 Auto merge of #84845 - wcampbell0x2a:clippy-redundant-field-names, r=joshtriplett
[clippy] remove redundant field names
2021-05-03 08:05:12 +00:00
wcampbell
962c3416ca
[clippy] remove redundant field names
Signed-off-by: wcampbell <wcampbell1995@gmail.com>
2021-05-02 20:24:17 -04:00
Brent Kerby
6679f5ceb1 Change 'NULL' to 'null' 2021-05-02 17:46:00 -06:00
bors
e244e840f2 Auto merge of #84725 - sebpop:arm64-isb, r=joshtriplett
[Arm64] use isb instruction instead of yield in spin loops

On arm64 we have seen on several databases that ISB (instruction synchronization
barrier) is better to use than yield in a spin loop.  The yield instruction is a
nop.  The isb instruction puts the processor to sleep for some short time.  isb
is a good equivalent to the pause instruction on x86.

Below is an experiment that shows the effects of yield and isb on Arm64 and the
time of a pause instruction on x86 Intel processors.  The micro-benchmarks use
https://github.com/google/benchmark.git

```
$ cat a.cc
static void BM_scalar_increment(benchmark::State& state) {
  int i = 0;
  for (auto _ : state)
    benchmark::DoNotOptimize(i++);
}
BENCHMARK(BM_scalar_increment);
static void BM_yield(benchmark::State& state) {
  for (auto _ : state)
    asm volatile("yield"::);
}
BENCHMARK(BM_yield);
static void BM_isb(benchmark::State& state) {
  for (auto _ : state)
    asm volatile("isb"::);
}
BENCHMARK(BM_isb);
BENCHMARK_MAIN();

$ g++ -o run a.cc -O2 -lbenchmark -lpthread
$ ./run

--------------------------------------------------------------
Benchmark                    Time             CPU   Iterations
--------------------------------------------------------------

AWS Graviton2 (Neoverse-N1) processor:
BM_scalar_increment      0.485 ns        0.485 ns   1000000000
BM_yield                 0.400 ns        0.400 ns   1000000000
BM_isb                    13.2 ns         13.2 ns     52993304

AWS Graviton (A-72) processor:
BM_scalar_increment      0.897 ns        0.874 ns    801558633
BM_yield                 0.877 ns        0.875 ns    800002377
BM_isb                    13.0 ns         12.7 ns     55169412

Apple Arm64 M1 processor:
BM_scalar_increment      0.315 ns        0.315 ns   1000000000
BM_yield                 0.313 ns        0.313 ns   1000000000
BM_isb                    9.06 ns         9.06 ns     77259282
```

```
static void BM_pause(benchmark::State& state) {
  for (auto _ : state)
    asm volatile("pause"::);
}
BENCHMARK(BM_pause);

Intel Skylake processor:
BM_scalar_increment      0.295 ns        0.295 ns   1000000000
BM_pause                  41.7 ns         41.7 ns     16780553
```

Tested on Graviton2 aarch64-linux with `./x.py test`.
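On the Rust side this presumably changes what the spin-loop hint lowers to on AArch64; a usage sketch of where that hint sits:

```rust
use std::hint;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Busy-wait until the flag is set, telling the CPU we are spinning.
fn spin_until(flag: &AtomicBool) {
    while !flag.load(Ordering::Acquire) {
        // On AArch64 this hint is what the PR changes (ISB instead of
        // YIELD); on x86 it emits PAUSE.
        hint::spin_loop();
    }
}

fn main() {
    let flag = Arc::new(AtomicBool::new(false));
    let f2 = Arc::clone(&flag);
    let t = thread::spawn(move || f2.store(true, Ordering::Release));
    spin_until(&flag);
    t.join().unwrap();
}
```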
2021-05-02 04:54:31 +00:00
Soveu
7bd9d9f1e9 str::is_char_boundary - few comments 2021-04-30 20:51:30 +02:00
Joshua Nelson
4a63e1e991 Allow using core:: in intra-doc links within core itself
I came up with this idea ages ago, but rustdoc used to ICE on it. Now it
doesn't.
2021-04-30 14:57:07 +00:00
Soveu
2ea0410f09 str::is_char_boundary - slight optimization 2021-04-30 16:13:00 +02:00
Sebastian Pop
c064b6560b [Arm64] use isb instruction instead of yield in spin loops
On arm64 we have seen on several databases that ISB (instruction synchronization
barrier) is better to use than yield in a spin loop.  The yield instruction is a
nop.  The isb instruction puts the processor to sleep for some short time.  isb
is a good equivalent to the pause instruction on x86.

Below is an experiment that shows the effects of yield and isb on Arm64 and the
time of a pause instruction on x86 Intel processors.  The micro-benchmarks use
https://github.com/google/benchmark.git

$ cat a.cc
static void BM_scalar_increment(benchmark::State& state) {
  int i = 0;
  for (auto _ : state)
    benchmark::DoNotOptimize(i++);
}
BENCHMARK(BM_scalar_increment);
static void BM_yield(benchmark::State& state) {
  for (auto _ : state)
    asm volatile("yield"::);
}
BENCHMARK(BM_yield);
static void BM_isb(benchmark::State& state) {
  for (auto _ : state)
    asm volatile("isb"::);
}
BENCHMARK(BM_isb);
BENCHMARK_MAIN();

$ g++ -o run a.cc -O2 -lbenchmark -lpthread
$ ./run

--------------------------------------------------------------
Benchmark                    Time             CPU   Iterations
--------------------------------------------------------------

AWS Graviton2 (Neoverse-N1) processor:
BM_scalar_increment      0.485 ns        0.485 ns   1000000000
BM_yield                 0.400 ns        0.400 ns   1000000000
BM_isb                    13.2 ns         13.2 ns     52993304

AWS Graviton (A-72) processor:
BM_scalar_increment      0.897 ns        0.874 ns    801558633
BM_yield                 0.877 ns        0.875 ns    800002377
BM_isb                    13.0 ns         12.7 ns     55169412

Apple Arm64 M1 processor:
BM_scalar_increment      0.315 ns        0.315 ns   1000000000
BM_yield                 0.313 ns        0.313 ns   1000000000
BM_isb                    9.06 ns         9.06 ns     77259282

static void BM_pause(benchmark::State& state) {
  for (auto _ : state)
    asm volatile("pause"::);
}
BENCHMARK(BM_pause);

Intel Skylake processor:
BM_scalar_increment      0.295 ns        0.295 ns   1000000000
BM_pause                  41.7 ns         41.7 ns     16780553

Tested on Graviton2 aarch64-linux with `./x.py test`.
2021-04-29 23:05:40 +00:00
Josh Triplett
20b569f579 Drop alias reduce for fold - we have a reduce function
Searching for "reduce" currently puts the `reduce` alias for `fold`
above the actual `reduce` function. The `reduce` function already has a
cross-reference for `fold`, and vice versa.
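For anyone comparing the two, a quick sketch: `fold` takes an explicit initial accumulator, while `reduce` seeds from the first element and returns an `Option`.

```rust
fn main() {
    let v = [1, 2, 3, 4];

    // `fold` starts from a caller-supplied accumulator.
    let sum_fold = v.iter().fold(0, |acc, x| acc + x);

    // `reduce` uses the first element as the accumulator and is None when empty.
    let sum_reduce = v.iter().copied().reduce(|acc, x| acc + x);

    assert_eq!(sum_fold, 10);
    assert_eq!(sum_reduce, Some(10));

    let empty: [i32; 0] = [];
    assert_eq!(empty.iter().copied().reduce(|acc, x| acc + x), None);
}
```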
2021-04-29 12:05:08 -07:00
Rich Kadel
3a5df48021 adds feature gating of no_coverage at either crate- or function-level 2021-04-27 17:12:51 -07:00
Rich Kadel
888d0b4c96 Derived Eq no longer shows uncovered
The Eq trait has a special hidden function. MIR `InstrumentCoverage`
would add this function to the coverage map, but it is never called, so
the `Eq` trait would always appear uncovered.

Fixes: #83601

The fix required creating a new function attribute `no_coverage` to mark
functions that should be ignored by `InstrumentCoverage` and the
coverage `mapgen` (during codegen).

While testing, I also noticed two other issues:

* spanview debug file output ICEd on a function with no body. The
workaround for this is included in this PR.
* `assert_*!()` macro coverage can appear covered if followed by another
`assert_*!()` macro. Normally they appear uncovered. I submitted a new
Issue #84561, and added a coverage test to demonstrate this issue.
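A sketch of the new attribute in use (spelling as of this era's nightly; both the gate and the attribute were reworked later):

```rust
#![feature(no_coverage)]

// Excluded from InstrumentCoverage and from the coverage map.
#[no_coverage]
fn synthetic_helper() -> u32 {
    42
}

fn main() {
    assert_eq!(synthetic_helper(), 42);
}
```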
2021-04-27 11:11:56 -07:00
Dan Zwell
6c22b39187 Reorder the parameter descriptions of map_or and map_or_else
They were described backwards. #84608
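For reference, the corrected ordering (the default comes first, then the mapping closure):

```rust
fn main() {
    let present: Option<u32> = Some(2);
    let absent: Option<u32> = None;

    // map_or(default, f): `default` is the first parameter, `f` the second.
    assert_eq!(present.map_or(0, |v| v * 10), 20);
    assert_eq!(absent.map_or(0, |v| v * 10), 0);

    // map_or_else(default_fn, f): same ordering, with the default computed lazily.
    assert_eq!(absent.map_or_else(|| -1, |v| v as i32 * 10), -1);
}
```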
2021-04-27 18:00:30 +08:00
bors
61e171566a Auto merge of #84092 - scottmcm:try_trait_initial, r=yaahc,m-ou-se
Add the `try_trait_v2` library basics

No compiler changes as part of this -- just new unstable traits and impls thereof.

The goal here is to add the things that aren't going to break anything, to keep the feature implementation simpler in the next PR.

(Draft since the FCP won't end until Saturday, but I was feeling optimistic today -- and had forgotten that FCP was 10 days, not 7 days.)
2021-04-26 23:17:31 +00:00
Mara Bos
9758d532f0
Rollup merge of #84523 - m-ou-se:stabilize-ordering-helpers, r=m-ou-se
Stabilize ordering_helpers.

Tracking issue: https://github.com/rust-lang/rust/issues/79885

Closes https://github.com/rust-lang/rust/issues/79885
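If memory serves, the helpers in question are the `is_*` predicates on `cmp::Ordering`; a quick usage sketch:

```rust
use std::cmp::Ordering;

fn main() {
    let (a, b) = (1, 2);
    assert!(a.cmp(&b).is_lt());
    assert!(b.cmp(&b).is_eq());
    assert!(b.cmp(&a).is_gt());
    assert!(Ordering::Less.is_ne());
}
```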
2021-04-26 21:06:47 +02:00
Mara Bos
fb1502d570
Rollup merge of #84120 - workingjubilee:stabilize-duration-max, r=m-ou-se
Stabilize Duration::MAX

Following the suggested direction from https://github.com/rust-lang/rust/issues/76416#issuecomment-817278338, this PR proposes that `Duration::MAX` should have been part of the `duration_saturating_ops` feature flag all along, having been

0. heavily referenced by that feature flag
1. an odd duck next to most of `duration_constants`, as I expressed in https://github.com/rust-lang/rust/issues/57391#issuecomment-717681193
2. introduced in #76114 which added `duration_saturating_ops`

and accordingly should be folded into `duration_saturating_ops` and therefore stabilized.

r? `@m-ou-se`
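A quick illustration of why `MAX` pairs naturally with the saturating operations:

```rust
use std::time::Duration;

fn main() {
    // Saturating arithmetic needs a well-defined ceiling to saturate at.
    let nearly_max = Duration::MAX - Duration::from_secs(1);
    assert_eq!(nearly_max.saturating_add(Duration::from_secs(5)), Duration::MAX);
    assert_eq!(Duration::MAX.saturating_add(Duration::from_nanos(1)), Duration::MAX);
}
```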
2021-04-26 21:06:46 +02:00