Commit Graph

529 Commits

Author SHA1 Message Date
Guillaume Gomez
0e82dc969f
Rollup merge of #99583 - shepmaster:provider-plus-plus, r=yaahc
Add additional methods to the Demand type

This adds on to the original tracking issue #96024

r? `@yaahc`
2022-09-02 11:34:46 +02:00
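A minimal sketch of the eager/lazy split this PR fills out, assuming the then-unstable `provide_any` API with method names `provide_ref` and `provide_value_with` (details may differ from the final form):

```rust
#![feature(provide_any)]

use std::any::{Demand, Provider};

struct MyError {
    msg: String,
}

impl Provider for MyError {
    fn provide<'a>(&'a self, demand: &mut Demand<'a>) {
        // Eager: the reference already exists, so hand it over directly.
        demand.provide_ref::<str>(&self.msg);
        // Lazy: the closure only runs if a String was actually requested,
        // avoiding the clone otherwise.
        demand.provide_value_with(|| self.msg.clone());
    }
}
```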
Dylan DPC
395ce34a95
Rollup merge of #100819 - WaffleLapkin:use_ptr_byte_methods, r=scottmcm
Make use of `[wrapping_]byte_{add,sub}`

These new methods trivially replace old `.cast().wrapping_offset().cast()` & similar code.
Note that [`arith_offset`](https://doc.rust-lang.org/std/intrinsics/fn.arith_offset.html) and `wrapping_offset` are the same thing.

r? ``@scottmcm``

_split off from #100746_
2022-08-29 16:49:43 +05:30
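For illustration, a before/after sketch of the replacement pattern (assuming the then-unstable `pointer_byte_offsets` feature gate):

```rust
#![feature(pointer_byte_offsets)]

let v: u16 = 1;
let p: *const u16 = &v;
// Before: cast to a byte pointer, offset, cast back.
let old = p.cast::<u8>().wrapping_offset(4).cast::<u16>();
// After: one call with the same byte-wise meaning.
let new = p.wrapping_byte_add(4);
assert_eq!(old, new);
```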
Yuki Okushi
ba31a9b505
Rollup merge of #100604 - dtolnay:okorerr, r=m-ou-se
Remove unstable Result::into_ok_or_err

Pending FCP: https://github.com/rust-lang/rust/issues/82223#issuecomment-1214920203

`@rustbot` label +waiting-on-fcp
2022-08-26 09:51:44 +09:00
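For context, the removed method collapsed a `Result<T, T>` into its `T` regardless of variant; the same effect is available with a stable or-pattern:

```rust
let r: Result<i32, i32> = Err(3);
// let v = r.into_ok_or_err(); // the removed unstable API
let v = match r {
    Ok(v) | Err(v) => v,
};
assert_eq!(v, 3);
```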
Matthias Krüger
6deca5f067
Rollup merge of #100220 - scottmcm:fix-by-ref-sized, r=joshtriplett
Properly forward `ByRefSized::fold` to the inner iterator

cc `@timvermeulen`, who noticed this mistake in https://github.com/rust-lang/rust/pull/100214#issuecomment-1207317625
2022-08-24 18:20:08 +02:00
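The underlying technique, sketched outside of std: internal iteration behind a `&mut` reference has to go through `try_fold` (which takes `&mut self`), because `fold` consumes the iterator by value.

```rust
use std::convert::Infallible;

// Sketch: forward a fold through a mutable reference by wrapping the
// accumulator in an infallible `Result`, so the inner iterator's
// optimized `try_fold` does the work instead of the default
// element-by-element loop.
fn fold_by_ref<I, B>(iter: &mut I, init: B, mut f: impl FnMut(B, I::Item) -> B) -> B
where
    I: Iterator,
{
    iter.try_fold(init, |acc, x| Ok::<B, Infallible>(f(acc, x)))
        .unwrap()
}
```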
Maybe Waffle
53565b23ac Make use of [wrapping_]byte_{add,sub}
...replacing `.cast().wrapping_offset().cast()` & similar code.
2022-08-23 19:32:37 +04:00
Jake Goulding
38de102cff Support eager and lazy methods for providing references and values
There are times where computing a value may be cheap, or where
computing a reference may be expensive, so this fills out the
possibilities.
2022-08-23 09:58:50 -04:00
Matthias Krüger
d49906519b
Rollup merge of #99544 - dylni:expose-utf8lossy, r=Mark-Simulacrum
Expose `Utf8Lossy` as `Utf8Chunks`

This PR changes the feature for `Utf8Lossy` from `str_internals` to `utf8_lossy` and improves the API. This is done to eventually expose the API as stable.

Proposal: rust-lang/libs-team#54
Tracking Issue: #99543
2022-08-20 19:32:07 +02:00
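A usage sketch of the proposed API (unstable at the time, so names may have shifted before stabilization): each chunk carries the longest valid UTF-8 prefix plus the invalid bytes that follow it.

```rust
let input = b"hello\xF0\x90\x80world";
let mut out = String::new();
for chunk in input.utf8_chunks() {
    out.push_str(chunk.valid()); // longest valid UTF-8 prefix
    if !chunk.invalid().is_empty() {
        out.push('\u{FFFD}'); // replacement character for the bad bytes
    }
}
assert_eq!(out, "hello\u{FFFD}world");
```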
dylni
e8ee0b7b2b Expose Utf8Lossy as Utf8Chunks 2022-08-20 12:49:20 -04:00
bors
6c943bad02 Auto merge of #99541 - timvermeulen:flatten_cleanup, r=the8472
Refactor iteration logic in the `Flatten` and `FlatMap` iterators

The `Flatten` and `FlatMap` iterators both delegate to `FlattenCompat`:
```rust
struct FlattenCompat<I, U> {
    iter: Fuse<I>,
    frontiter: Option<U>,
    backiter: Option<U>,
}
```
Every individual iterator method that `FlattenCompat` implements needs to carefully manage this state, checking whether the `frontiter` and `backiter` are present, and storing the current iterator appropriately if iteration is aborted. This has led to methods such as `next`, `advance_by`, and `try_fold` all having similar code for managing the iterator's state.

I have extracted this common logic of iterating the inner iterators with the option to exit early into a `iter_try_fold` method:
```rust
impl<I, U> FlattenCompat<I, U>
where
    I: Iterator<Item: IntoIterator<IntoIter = U>>,
{
    fn iter_try_fold<Acc, Fold, R>(&mut self, acc: Acc, fold: Fold) -> R
    where
        Fold: FnMut(Acc, &mut U) -> R,
        R: Try<Output = Acc>,
    { ... }
}
```
It passes each of the inner iterators to the given function as long as it keeps succeeding. It takes care of managing `FlattenCompat`'s state, so that the actual `Iterator` methods don't need to. The resulting code that makes use of this abstraction is much more straightforward:
```rust
fn next(&mut self) -> Option<U::Item> {
    #[inline]
    fn next<U: Iterator>((): (), iter: &mut U) -> ControlFlow<U::Item> {
        match iter.next() {
            None => ControlFlow::CONTINUE,
            Some(x) => ControlFlow::Break(x),
        }
    }

    self.iter_try_fold((), next).break_value()
}
```
Note that despite being implemented in terms of `iter_try_fold`, `next` is still able to benefit from `U`'s `next` method. It therefore does not take the performance hit that implementing `next` directly in terms of `Self::try_fold` causes (in some benchmarks).

This PR also adds `iter_try_rfold` which captures the shared logic of `try_rfold` and `advance_back_by`, as well as `iter_fold` and `iter_rfold` for folding without early exits (used by `fold`, `rfold`, `count`, and `last`).

Benchmark results:
```
                                             before                after
bench_flat_map_sum                       423,255 ns/iter      414,338 ns/iter
bench_flat_map_ref_sum                 1,942,139 ns/iter    2,216,643 ns/iter
bench_flat_map_chain_sum               1,616,840 ns/iter    1,246,445 ns/iter
bench_flat_map_chain_ref_sum           4,348,110 ns/iter    3,574,775 ns/iter
bench_flat_map_chain_option_sum          780,037 ns/iter      780,679 ns/iter
bench_flat_map_chain_option_ref_sum    2,056,458 ns/iter      834,932 ns/iter
```

I added the last two benchmarks specifically to demonstrate an extreme case where `FlatMap::next` can benefit from custom internal iteration of the outer iterator, so take it with a grain of salt. We should probably do a perf run to see if the changes to `next` are worth it in practice.
2022-08-19 02:34:30 +00:00
David Tolnay
83f081fc01
Remove unstable Result::into_ok_or_err 2022-08-17 17:20:42 -07:00
Scott McMurray
7680c8b690 Properly forward ByRefSized::fold to the inner iterator 2022-08-14 22:55:30 -07:00
austinabell
00bc9e8ac4
fix(iter::skip): Optimize next and nth implementations of Skip 2022-08-14 13:25:13 -04:00
Dylan DPC
482a6eaf10
Rollup merge of #100026 - WaffleLapkin:array-chunks, r=scottmcm
Add `Iterator::array_chunks` (take N+1)

A revival of https://github.com/rust-lang/rust/pull/92393.

r? `@Mark-Simulacrum`
cc `@rossmacarthur` `@scottmcm` `@the8472`

I've tried to address most of the review comments on the previous attempt. The only thing I didn't address is the `try_fold` implementation; I've left the "custom" one for now, as I'm not sure what exactly it should use.
2022-08-14 17:09:14 +05:30
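A usage sketch under the `iter_array_chunks` feature gate proposed there (remainder behavior was still under review, so details may differ):

```rust
#![feature(iter_array_chunks)]

let mut chunks = [1, 2, 3, 4, 5].into_iter().array_chunks::<2>();
assert_eq!(chunks.next(), Some([1, 2]));
assert_eq!(chunks.next(), Some([3, 4]));
assert_eq!(chunks.next(), None); // exhausted; the leftover is kept
assert_eq!(chunks.into_remainder().unwrap().as_slice(), &[5]);
```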
Dylan DPC
51eed00ca9
Rollup merge of #100030 - WaffleLapkin:nice_pointer_sis, r=scottmcm
cleanup code w/ pointers in std a little

Use pointer methods (`byte_add`, `null_mut`, etc) to make code in std a little nicer.
2022-08-12 20:39:10 +05:30
Matthias Krüger
275d4e779a
Rollup merge of #100112 - RalfJung:assert_send_and_sync, r=m-ou-se
Fix test: chunks_mut_are_send_and_sync

Follow-up to https://github.com/rust-lang/rust/pull/100023 to make the test actually effective
2022-08-11 22:53:03 +02:00
bors
908fc5b26d Auto merge of #99174 - scottmcm:reoptimize-layout-array, r=joshtriplett
Reoptimize layout array

This way it's one check instead of two, so hopefully (cc #99117) it'll be simpler for rustc perf too 🤞

Quick demonstration:
```rust
pub fn demo(n: usize) -> Option<Layout> {
    Layout::array::<i32>(n).ok()
}
```

Nightly: <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=e97bf33508aa03f38968101cdeb5322d>
```nasm
	mov	rax, rdi
	mov	ecx, 4
	mul	rcx
	seto	cl
	movabs	rdx, 9223372036854775805
	xor	esi, esi
	cmp	rax, rdx
	setb	sil
	shl	rsi, 2
	xor	edx, edx
	test	cl, cl
	cmove	rdx, rsi
	ret
```

This PR (note no `mul`, in addition to being much shorter):
```nasm
	xor	edx, edx
	lea	rax, [4*rcx]
	shr	rcx, 61
	sete	dl
	shl	rdx, 2
	ret
```

This is built atop `@CAD97`'s #99136; the new changes are cb8aba66ef6a0e17f08a0574e4820653e31b45a0.

I added a bunch more tests for `Layout::from_size_align` and `Layout::array` too.
2022-08-10 23:50:18 +00:00
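The shape of the optimization, sketched in plain Rust (the constants are illustrative for a 4-byte element; the real bound also accounts for alignment):

```rust
// One range check replaces the multiply-and-overflow-check pair: any
// count with a bit at or above 2^61 would make `4 * n` exceed
// isize::MAX, and below that the multiplication cannot overflow.
fn array_size_i32(n: usize) -> Option<usize> {
    if n >> 61 != 0 {
        return None;
    }
    Some(n * 4)
}
```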
Eric Holk
c18f22058b Rename integer log* methods to ilog*
This reflects the consensus from the libs team as reported at
https://github.com/rust-lang/rust/issues/70887#issuecomment-1209513261

Co-authored-by: Yosh Wuyts <github@yosh.is>
2022-08-09 10:20:49 -07:00
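After the rename, the methods read as follows (still behind the unstable `int_log` feature at the time):

```rust
assert_eq!(8u32.ilog2(), 3);
assert_eq!(100u32.ilog10(), 2);
assert_eq!(27u32.ilog(3), 3); // arbitrary-base integer logarithm
```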
Maybe Waffle
127b6c4c18 cleanup code w/ pointers in std a little 2022-08-05 16:47:49 +04:00
Tim Vermeulen
3f7004920c Move fold logic to iter_fold method and reuse it in count and last 2022-08-05 03:43:39 +02:00
Ralf Jung
a61c841385 actually call assert_send_and_sync 2022-08-03 12:44:21 -04:00
Maybe Waffle
4db628a801 Remove incorrect impl TrustedLen for ArrayChunks
As explained in the review of the previous attempt to add `ArrayChunks`,
adapters that shrink the length can't implement `TrustedLen`: an inner
iterator with more than `usize::MAX` elements must report a size hint of
`(usize::MAX, None)`, so the exact number of chunks can no longer be computed.
2022-08-01 19:16:24 +04:00
Ben Kimock
22dfbdd707 Add back Send and Sync impls on ChunksMut iterators
These were accidentally removed in #94247 because the representation was
changed from &mut [T] to *mut T, which has !Send + !Sync.
2022-08-01 10:32:45 -04:00
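In miniature, why the impls disappeared and how they come back (a sketch with a hypothetical `ChunksMutLike` type, not the std source):

```rust
use std::marker::PhantomData;

// Raw pointers are !Send + !Sync, so switching the field from
// `&mut [T]` to `*mut T` silently removed the auto impls.
struct ChunksMutLike<'a, T> {
    ptr: *mut T,
    len: usize,
    chunk_size: usize,
    _marker: PhantomData<&'a mut T>,
}

// SAFETY (sketch): the pointer originates from an exclusive slice
// borrow, so the type can be exactly as Send/Sync as `&mut [T]`.
unsafe impl<T: Send> Send for ChunksMutLike<'_, T> {}
unsafe impl<T: Sync> Sync for ChunksMutLike<'_, T> {}
```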
Ross MacArthur
f5485181ca Use array::IntoIter for the ArrayChunks remainder 2022-08-01 16:39:30 +04:00
Ross MacArthur
ca3d1010bb Add Iterator::array_chunks() 2022-08-01 16:39:27 +04:00
Guillaume Gomez
ef81fca760
Rollup merge of #94247 - saethlin:chunksmut-aliasing, r=the8472
Fix slice::ChunksMut aliasing

Fixes https://github.com/rust-lang/rust/issues/94231, details in that issue.
cc `@RalfJung`

This isn't done just yet; all the safety comments are placeholders. But otherwise, it seems to work.

I don't really like this approach though. There's a lot of unsafe code where there wasn't before, but as far as I can tell the only other way to uphold the aliasing requirement imposed by `__iterator_get_unchecked` is to use raw slices, which I think require the same amount of unsafe code. All that would do is tie the `len` and `ptr` fields together.

Oh I just looked and I'm pretty sure that `ChunksExactMut`, `RChunksMut`, and `RChunksExactMut` also need to be patched. Even more reason to put up a draft.
2022-07-27 17:55:01 +02:00
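A sketch of the raw-pointer iteration the fix moves toward (illustrative, not the std code): because the iterator never holds a `&mut [T]` covering the whole remaining region, handing out one chunk doesn't create a second live mutable borrow of it.

```rust
use std::{marker::PhantomData, slice};

struct RawChunksMut<'a, T> {
    ptr: *mut T,
    len: usize,
    chunk_size: usize,
    _marker: PhantomData<&'a mut T>,
}

impl<'a, T> Iterator for RawChunksMut<'a, T> {
    type Item = &'a mut [T];

    fn next(&mut self) -> Option<&'a mut [T]> {
        if self.len == 0 {
            return None;
        }
        let n = self.chunk_size.min(self.len);
        // SAFETY (sketch): `ptr..ptr + n` is in bounds and disjoint
        // from every chunk handed out before.
        let chunk = unsafe { slice::from_raw_parts_mut(self.ptr, n) };
        self.ptr = unsafe { self.ptr.add(n) };
        self.len -= n;
        Some(chunk)
    }
}
```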
bors
41419e7036 Auto merge of #99491 - workingjubilee:sync-psimd, r=workingjubilee
Sync in portable-simd subtree

r? `@ghost`
2022-07-22 09:48:00 +00:00
Jubilee Young
f8aa494c69 Introduce core::simd trait imports in tests 2022-07-20 18:08:20 -07:00
bors
8bd12e8cca Auto merge of #98912 - nrc:provider-it, r=yaahc
core::any: replace some generic types with impl Trait

This gives a cleaner API, since the caller now only has to specify the one concrete type they actually care about.

r? `@yaahc`
2022-07-19 11:28:20 +00:00
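The general shape of the change (a sketch, not the actual std signatures):

```rust
use std::fmt::Display;

// Before: a named generic parameter invites spelling out `D` in a
// turbofish even though callers never need to name it.
fn describe_generic<D: Display>(value: D) -> String {
    format!("{value}")
}

// After: `impl Trait` keeps the signature focused on the one type
// callers actually supply.
fn describe_impl(value: impl Display) -> String {
    format!("{value}")
}
```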
Dylan DPC
e301cd39ad
Rollup merge of #99434 - timvermeulen:skip_next_non_fused, r=scottmcm
Fix `Skip::next` for non-fused inner iterators

`iter.skip(n).next()` will currently call `nth` and `next` in succession on `iter`, without checking whether `nth` exhausts the iterator. Using `?` to propagate a `None` value returned by `nth` avoids this.
2022-07-19 11:38:58 +05:30
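The fix in miniature (sketch): `?` on the `nth` call stops the chain as soon as the inner iterator reports exhaustion, so a non-fused iterator is never polled again after returning `None`.

```rust
fn skip_then_next<I: Iterator>(iter: &mut I, n: usize) -> Option<I::Item> {
    if n > 0 {
        iter.nth(n - 1)?; // early-return None if `nth` exhausted `iter`
    }
    iter.next()
}
```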
Tim Vermeulen
e52837c362 Add note to test about Unfuse 2022-07-18 21:53:35 +02:00
Tim Vermeulen
50c612faef Fix Skip::next for non-fused inner iterators 2022-07-18 21:10:47 +02:00
Dylan DPC
5ccdf1f6f7
Rollup merge of #98839 - 5225225:assert_transmute_copy_size, r=thomcc
Add assertion that `transmute_copy`'s U is not larger than T

This is called out as a safety requirement in the docs, but because knowing this can be done at compile time and constant folded (just like the `align_of` branch is removed), we can just panic here.

I've looked at the asm (using `cargo-asm`) for both a correct and an incorrect call, and the panic is either completely removed or made unconditional, without needing build-std.

I don't expect this to cause much breakage in the wild. I scanned through https://miri.saethlin.dev/ub for issues that would look like this (error: Undefined Behavior: memory access failed: alloc1768 has size 1, so pointer to 8 bytes starting at offset 0 is out-of-bounds), but couldn't find any.

That doesn't rule out it happening in crates tested that fail earlier for some other reason, though, but it indicates that doing this is rare, if it happens at all. A crater run for this would need to be build-and-test, since this is a runtime thing.

Also added a few more transmute_copy tests.
2022-07-18 21:14:42 +05:30
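The assertion in miniature (a sketch, not the std source): because both sizes are compile-time constants, the branch folds away entirely for correct calls.

```rust
use std::mem::{size_of, transmute_copy};

unsafe fn checked_transmute_copy<T, U>(src: &T) -> U {
    // Reading `size_of::<U>()` bytes out of a `T` smaller than that
    // would be out of bounds, which the docs call out as UB.
    assert!(size_of::<U>() <= size_of::<T>(), "cannot transmute_copy to a larger type");
    transmute_copy(src)
}
```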
Yuki Okushi
50527690e2
Rollup merge of #99306 - JohnTitor:stabilize-future-poll-fn, r=joshtriplett
Stabilize `future_poll_fn`

FCP is done: https://github.com/rust-lang/rust/issues/72302#issuecomment-1179620512
Closes #72302

r? `@joshtriplett` as you started FCP

Signed-off-by: Yuki Okushi <jtitor@2k36.org>
2022-07-17 13:08:52 +09:00
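The stabilized API builds a future from a closure over the task context:

```rust
use std::future::poll_fn;
use std::task::Poll;

async fn answer() -> u32 {
    // The closure is polled like a hand-written Future::poll.
    poll_fn(|_cx| Poll::Ready(42)).await
}
```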
bors
db41351753 Auto merge of #98866 - nagisa:nagisa/align-offset-wroom, r=Mark-Simulacrum
Add a special case for align_offset w/ stride != 1

This generalizes the previous `stride == 1` special case to apply to any
situation where the requested alignment is divisible by the stride. This
in turn allows the test case from #98809 to produce ideal assembly, along
the lines of:

    leaq 15(%rdi), %rax
    andq $-16, %rax

This also produces pretty high quality code for situations where the
alignment of the input pointer isn’t known:

    pub unsafe fn ptr_u32(slice: *const u32) -> *const u32 {
        slice.offset(slice.align_offset(16) as isize)
    }

    // =>

    movl %edi, %eax
    andl $3, %eax
    leaq 15(%rdi), %rcx
    andq $-16, %rcx
    subq %rdi, %rcx
    shrq $2, %rcx
    negq %rax
    sbbq %rax, %rax
    orq  %rcx, %rax
    leaq (%rdi,%rax,4), %rax

Here LLVM is smart enough to replace the `usize::MAX` special case with
a branch-less bitwise-OR approach, where the mask is constructed using
the neg and sbb instructions. This appears to work across various
architectures I’ve tried.

This change ends up introducing more branches and code in situations
where there is less knowledge of the arguments. For example when the
requested alignment is entirely unknown. This use-case was never really
a focus of this function, so I’m not particularly worried, especially
since llvm-mca is saying that the new code is still appreciably faster,
despite all the new branching.

Fixes #98809.
Sadly, this does not help with #72356.
2022-07-16 23:28:28 +00:00
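For readers new to the API, `align_offset` counts elements rather than bytes, and is allowed to report `usize::MAX` when the alignment can't be reached (sketch):

```rust
let xs = [0u32; 8];
let p = xs.as_ptr();
let off = p.align_offset(16); // in elements of 4 bytes each
if off != usize::MAX {
    assert_eq!((p as usize + off * 4) % 16, 0);
}
```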
Simonas Kazlauskas
62a182cf7f Add a special case for align_offset w/ stride != 1
This generalizes the previous `stride == 1` special case to apply to any
situation where the requested alignment is divisible by the stride. This
in turn allows the test case from #98809 to produce ideal assembly, along
the lines of:

    leaq 15(%rdi), %rax
    andq $-16, %rax

This also produces pretty high quality code for situations where the
alignment of the input pointer isn’t known:

    pub unsafe fn ptr_u32(slice: *const u32) -> *const u32 {
        slice.offset(slice.align_offset(16) as isize)
    }

    // =>

    movl %edi, %eax
    andl $3, %eax
    leaq 15(%rdi), %rcx
    andq $-16, %rcx
    subq %rdi, %rcx
    shrq $2, %rcx
    negq %rax
    sbbq %rax, %rax
    orq  %rcx, %rax
    leaq (%rdi,%rax,4), %rax

Here LLVM is smart enough to replace the `usize::MAX` special case with
a branch-less bitwise-OR approach, where the mask is constructed using
the neg and sbb instructions. This appears to work across various
architectures I’ve tried.

This change ends up introducing more branches and code in situations
where there is less knowledge of the arguments. For example when the
requested alignment is entirely unknown. This use-case was never really
a focus of this function, so I’m not particularly worried, especially
since llvm-mca is saying that the new code is still appreciably faster,
despite all the new branching.

Fixes #98809.
Sadly, this does not help with #72356.
2022-07-17 01:27:37 +03:00
Yuki Okushi
084ad59622
Stabilize future_poll_fn
Signed-off-by: Yuki Okushi <jtitor@2k36.org>
2022-07-16 10:04:14 +09:00
bors
24699bcbad Auto merge of #95956 - yaahc:stable-in-unstable, r=cjgillot
Support unstable moves via stable in unstable items

part of https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/moving.20items.20to.20core.20unstably and a blocker of https://github.com/rust-lang/rust/pull/90328.

The libs-api team needs the ability to move an already stable item to a new location unstably, in this case for Error in core. Otherwise these changes are insta-stable, making them much harder to merge.

This PR attempts to solve the problem by checking the stability of path segments as well as the last item in the path itself, which is currently the only thing checked.
2022-07-14 13:42:09 +00:00
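The pattern this unlocks, sketched with rustc's internal staged-API attributes (the item and feature name are illustrative, not taken from the PR):

```rust
// In core: re-export an already-stable trait at a new path that is
// itself still unstable, without the re-export being insta-stable.
#[unstable(feature = "error_in_core", issue = "none")]
pub use crate::error::Error;
```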
Dylan DPC
103b8602b7
Rollup merge of #98315 - joshtriplett:stabilize-core-ffi-c, r=Mark-Simulacrum
Stabilize `core::ffi::c_*` and re-export in `std::ffi`

This only stabilizes the base types, not the non-zero variants, since
those have their own separate tracking issue and have not gone through
FCP to stabilize.
2022-07-14 14:14:20 +05:30
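After stabilization, the base types are importable from `core` (or via the `std::ffi` re-export):

```rust
use core::ffi::{c_char, c_int};

extern "C" {
    fn abs(i: c_int) -> c_int;
    fn strlen(s: *const c_char) -> usize;
}
```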
Josh Triplett
d431338b25 Stabilize core::ffi::c_* and re-export in std::ffi
This only stabilizes the base types, not the non-zero variants, since
those have their own separate tracking issue and have not gone through
FCP to stabilize.
2022-07-13 19:28:20 -07:00
Scott McMurray
a32305a80f Re-optimize Layout::array
This way it's one check instead of two, so hopefully it'll be better

Nightly:
```
layout_array_i32:
	movq	%rdi, %rax
	movl	$4, %ecx
	mulq	%rcx
	jo	.LBB1_2
	movabsq	$9223372036854775805, %rcx
	cmpq	%rcx, %rax
	jae	.LBB1_2
	movl	$4, %edx
	retq
.LBB1_2:
	…
```

This PR:
```
	movq	%rcx, %rax
	shrq	$61, %rax
	jne	.LBB2_1
	shlq	$2, %rcx
	movl	$4, %edx
	movq	%rcx, %rax
	retq
.LBB2_1:
	…
```
2022-07-13 17:07:41 -07:00
Konrad Borowski
0753fd117b Partially stabilize const_slice_from_raw_parts
This doesn't stabilize methods working on mutable pointers.
2022-07-09 23:20:02 +02:00
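A sketch of what the `*const` half permits in const context (the `*mut` counterpart stays unstable, as the commit notes):

```rust
use std::slice;

static DATA: [u8; 3] = [1, 2, 3];
// `from_raw_parts` is now usable as a `const fn` for *const pointers.
static SLICE: &[u8] = unsafe { slice::from_raw_parts(DATA.as_ptr(), DATA.len()) };
```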
Jane Losare-Lusby
d68cb1f9a3 revert changes to unicode stability 2022-07-08 21:18:15 +00:00
Guillaume Gomez
4755173cf6
Rollup merge of #96935 - thomcc:atomicptr-strict-prov, r=dtolnay
Allow arithmetic and certain bitwise ops on AtomicPtr

This is mainly to support migrating from `AtomicUsize`, for the strict provenance experiment.

This is a pretty dubious set of APIs, but it should be sufficient to allow code that's using `AtomicUsize` to manipulate a tagged pointer atomically. It's under a new feature gate, `#![feature(strict_provenance_atomic_ptr)]`, but I'm not sure if it needs its own tracking issue. I'm happy to make one, but it's not clear that it's needed.

I'm unsure if it needs changes in the various non-LLVM backends. Because we just cast things to integers anyway (and were already doing so), I doubt it.

API change proposal: https://github.com/rust-lang/libs-team/issues/60

Fixes #95492
2022-07-06 20:43:23 +02:00
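A tagged-pointer sketch under the new gate (method set as proposed; unstable, so details may have shifted):

```rust
#![feature(strict_provenance_atomic_ptr)]

use std::sync::atomic::{AtomicPtr, Ordering};

let mut slot = 0u32;
let p = AtomicPtr::new(&mut slot);
// Set a low tag bit atomically, without round-tripping through usize
// and losing provenance.
p.fetch_or(0b1, Ordering::Relaxed);
let tagged = p.load(Ordering::Relaxed);
assert_eq!(tagged as usize & 0b1, 1);
```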
Nick Cameron
0c72be3e1a core::any: replace some unstable generic types with impl Trait
Signed-off-by: Nick Cameron <nrc@ncameron.org>
2022-07-05 15:06:31 +01:00
Dylan DPC
8fa1ed8f12
Rollup merge of #97712 - RalfJung:untyped, r=scottmcm
ptr::copy and ptr::swap are doing untyped copies

The consensus in https://github.com/rust-lang/rust/issues/63159 seemed to be that these operations should be "untyped", i.e., they should treat the data as raw bytes, should work when these bytes violate the validity invariant of `T`, and should exactly preserve the initialization state of the bytes that are being copied. This is already somewhat implied by the description of "copying/swapping size*N bytes" (rather than "N instances of `T`").

The implementations mostly already work that way (well, for LLVM's intrinsics the documentation is not precise enough to say what exactly happens to poison, but if this ever gets clarified to something that would *not* perfectly preserve poison, then I strongly assume there will be some way to make a copy that *does* perfectly preserve poison). However, I had to adjust `swap_nonoverlapping`; after `@scottmcm`'s [recent changes](https://github.com/rust-lang/rust/pull/94212), that one (sometimes) made a typed copy. (Note that `mem::swap`, which works on mutable references, is unchanged. It is documented as "swapping the values at two mutable locations", which to me strongly indicates that it is indeed typed. It is also safe and can rely on `&mut T` pointing to a valid `T` as part of its safety invariant.)

On top of adding a test (that will be run by Miri), this PR then also adjusts the documentation to indeed stably promise the untyped semantics. I assume this means the PR has to go through t-libs (and maybe t-lang?) FCP.

Fixes https://github.com/rust-lang/rust/issues/63159
2022-07-05 16:04:31 +05:30
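The "untyped" guarantee in miniature (sketch): the copy below moves a byte that is not a valid `bool` through `*const bool` pointers, which is fine precisely because `ptr::copy_nonoverlapping` copies raw bytes rather than `bool` values.

```rust
use std::ptr;

let src: u8 = 3; // not a valid bool bit pattern
let mut dst: u8 = 0;
unsafe {
    ptr::copy_nonoverlapping(
        &src as *const u8 as *const bool,
        &mut dst as *mut u8 as *mut bool,
        1,
    );
}
assert_eq!(dst, 3); // the bytes arrive exactly as they were
```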
5225225
5f5ca88958 Add size assert in transmute_copy 2022-07-03 10:46:20 +01:00
Ben Kimock
7919e4208b Fix slice::ChunksMut aliasing 2022-07-03 00:15:15 -04:00
Pietro Albini
6b2d3d5f3c
update cfg(bootstrap)s 2022-07-01 15:48:23 +02:00
Thom Chiovoloni
e65ecee90e
Rename AtomicPtr::fetch_{add,sub}{,_bytes} 2022-07-01 06:21:19 -07:00
Thom Chiovoloni
2f872afdb5
Allow arithmetic and certain bitwise ops on AtomicPtr
This is mainly to support migrating from AtomicUsize, for the strict
provenance experiment.

Fixes #95492
2022-07-01 06:21:18 -07:00