Commit Graph

466 Commits

Author SHA1 Message Date
Erik Desjardins
f18c2f83e9 add -O to some tests which depend on attributes being added 2024-03-10 16:04:12 -04:00
Erik Desjardins
8fdd5e044b convert codegen/repr/transparent-* tests to no_core, fix discrepancies 2024-03-09 23:16:02 -05:00
Erik Desjardins
96a72676d1 use [N x i8] for byval/sret types
This avoids depending on LLVM's struct types to determine the size of
the byval/sret slot.
2024-03-05 18:54:45 -05:00
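
As a rough illustration of the effect (an editorial sketch, not one of the tests touched above): for an aggregate passed indirectly, the byval slot is now typed as a plain byte array, so its size no longer depends on how LLVM lays out a named struct type. The IR shapes in the comments are illustrative.

```rust
// Sketch: a 32-byte struct passed by value with the C ABI (x86_64 Linux
// passes this on the stack). Conceptually, the ABI attribute changes from
//   declare void @take(ptr byval(%BigStruct) align 8)
// to
//   declare void @take(ptr byval([32 x i8]) align 8)
// so the slot size is stated directly as a byte count.
#[repr(C)]
pub struct BigStruct {
    a: u64,
    b: u64,
    c: u64,
    d: u64,
}

#[no_mangle]
pub extern "C" fn take(x: BigStruct) -> u64 {
    x.a + x.d
}
```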
bors
70aa0b86c0 Auto merge of #121665 - erikdesjardins:ptradd, r=nikic
Always generate GEP i8 / ptradd for struct offsets

This implements #98615, and goes a bit further to remove `struct_gep` entirely.

Upstream LLVM is in the beginning stages of [migrating to `ptradd`](https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699). LLVM 19 will [canonicalize](https://github.com/llvm/llvm-project/pull/68882) all constant-offset GEPs to i8, which has roughly the same effect as this change.

Fixes #121719.

Split out from #121577.

r? `@nikic`
2024-03-03 22:21:53 +00:00
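
For context, a sketch of what "GEP i8 / ptradd for struct offsets" means for an ordinary field access (the IR lines in the comments are illustrative, not taken from the PR):

```rust
// `b` lives at byte offset 8. Before this change the offset was expressed
// through LLVM's struct type:
//   getelementptr inbounds %Pair, ptr %p, i64 0, i32 1
// Afterwards it is a plain byte offset (the form LLVM 19 canonicalizes
// constant-offset GEPs to anyway):
//   getelementptr inbounds i8, ptr %p, i64 8
#[repr(C)]
pub struct Pair {
    a: u64,
    b: u64,
}

pub fn second(p: &Pair) -> u64 {
    p.b
}
```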
Ramon de C Valle
dee4e02102 Add initial support for DataFlowSanitizer
Adds initial support for DataFlowSanitizer to the Rust compiler. It
currently supports `-Zsanitizer-dataflow-abilist`. Additional options
for it can be passed to LLVM's command-line argument processor via the
`llvm-args` codegen option (e.g.,
`-Cllvm-args=-dfsan-combine-pointer-labels-on-load=false`).
2024-03-01 18:50:40 -08:00
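
A hedged usage sketch: only `-Zsanitizer-dataflow-abilist` and the `-Cllvm-args` passthrough are stated above; the `-Zsanitizer=dataflow` spelling of the enable flag and the `abilist.txt` file name are assumptions.

```rust
// Assumed invocation (see the note above):
//   rustc -Zsanitizer=dataflow \
//         -Zsanitizer-dataflow-abilist=abilist.txt \
//         -Cllvm-args=-dfsan-combine-pointer-labels-on-load=false main.rs
//
// DataFlowSanitizer instruments the program so that labels attached to
// inputs are tracked as they flow into derived values at runtime.
fn main() {
    let input = std::env::args().count(); // a value a label could be attached to
    let derived = input * 2; // the label propagates through this arithmetic
    println!("{derived}");
}
```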
Guillaume Gomez
36bd9ef5a8
Rollup merge of #120820 - CKingX:cpu-base-minimum, r=petrochenkov,ChrisDenton
Enable CMPXCHG16B, SSE3, SAHF/LAHF and 128-bit Atomics (in nightly) in Windows x64

As Rust plans to set Windows 10 as the minimum supported OS for the target x86_64-pc-windows-msvc, I have added the cmpxchg16b and sse3 features. Windows 10 requires CMPXCHG16B, LAHF/SAHF, and PrefetchW, as stated in the requirements [here](https://download.microsoft.com/download/c/1/5/c150e1ca-4a55-4a7e-94c5-bfc8c2e785c5/Windows%2010%20Minimum%20Hardware%20Requirements.pdf). Furthermore, CPUs that meet these requirements also have SSE3 ([see](https://walbourn.github.io/directxmath-sse3-and-ssse3/)).
2024-02-29 17:08:36 +01:00
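
A nightly-only sketch of the "128-bit Atomics" part: with CMPXCHG16B guaranteed at the baseline, `AtomicU128` (unstable, assumed here to sit behind the `integer_atomics` gate) can lower to a native instruction instead of a library call.

```rust
#![feature(integer_atomics)]
use std::sync::atomic::{AtomicU128, Ordering};

fn main() {
    let x = AtomicU128::new(1);
    // With cmpxchg16b in the target baseline this compare-exchange can
    // compile down to `lock cmpxchg16b` rather than a support-library call.
    let _ = x.compare_exchange(1, 2, Ordering::SeqCst, Ordering::SeqCst);
    assert_eq!(x.load(Ordering::SeqCst), 2);
}
```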
Guillaume Gomez
b2c3279984
Rollup merge of #121700 - rcvalle:rust-cfi-dont-compress-user-defined-builtin-types, r=compiler-errors
CFI: Don't compress user-defined builtin types

Doesn't compress user-defined builtin types (see https://itanium-cxx-abi.github.io/cxx-abi/abi.html#mangling-builtin and https://itanium-cxx-abi.github.io/cxx-abi/abi.html#mangling-compression).
2024-02-29 14:33:51 +01:00
Erik Desjardins
401651015d test merging of multiple match branches that access fields at the same offset 2024-02-27 23:14:36 -05:00
Erik Desjardins
c1017d4828 use non-inbounds GEP for ZSTs, add fixmes 2024-02-27 23:00:54 -05:00
Ramon de C Valle
8f7b921f52 CFI: Don't compress user-defined builtin types
Doesn't compress user-defined builtin types (see
https://itanium-cxx-abi.github.io/cxx-abi/abi.html#mangling-builtin and
https://itanium-cxx-abi.github.io/cxx-abi/abi.html#mangling-compression).
2024-02-27 12:23:48 -08:00
Erik Desjardins
123015e722 always use gep inbounds i8 (ptradd) for field offsets 2024-02-26 22:28:09 -05:00
bors
71ffdf7ff7 Auto merge of #121655 - matthiaskrgr:rollup-qpx3kks, r=matthiaskrgr
Rollup of 4 pull requests

Successful merges:

 - #121598 (rename 'try' intrinsic to 'catch_unwind')
 - #121639 (Update books)
 - #121648 (Update Vec and String `{from,into}_raw_parts`-family docs)
 - #121651 (Properly emit `expected ;` on `#[attr] expr`)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-02-27 00:55:14 +00:00
Matthias Krüger
d95c321062
Rollup merge of #121598 - RalfJung:catch_unwind, r=oli-obk
rename 'try' intrinsic to 'catch_unwind'

The intrinsic has nothing to do with `try` blocks, and corresponds to the stable `catch_unwind` function, so this makes a lot more sense IMO.

Also rename Miri's special function while we are at it, to reflect the level of abstraction it works on: it's an unwinding mechanism, on which Rust implements panics.
2024-02-27 00:40:00 +01:00
bors
5c786a7fe3 Auto merge of #121516 - RalfJung:platform-intrinsics-begone, r=oli-obk
remove platform-intrinsics ABI; make SIMD intrinsics be regular intrinsics

`@Amanieu` `@workingjubilee` I don't think there is any reason these need to be "special"? The [original RFC](https://rust-lang.github.io/rfcs/1199-simd-infrastructure.html) indicated eventually making them stable, but I think that is no longer the plan, so seems to me like we can clean this up a bit.

Blocked on https://github.com/rust-lang/stdarch/pull/1538, https://github.com/rust-lang/rust/pull/121542.
2024-02-26 22:24:16 +00:00
Ralf Jung
b4ca582b89 rename 'try' intrinsic to 'catch_unwind' 2024-02-26 11:10:18 +01:00
Guillaume Gomez
0e08be5360
Rollup merge of #120656 - Zalathar:filecheck-flags, r=wesleywiser
Allow tests to specify a `//@ filecheck-flags:` header

This allows individual codegen/assembly/mir-opt tests to pass extra flags to the LLVM `filecheck` tool as needed.

---

The original motivation was noticing that `tests/run-make/instrument-coverage` was very close to being an ordinary codegen test, except that it needs some extra logic to set up platform-specific variables to be passed into filecheck.

I then saw the comment in `verify_with_filecheck` indicating that a `filecheck-flags` header might be useful for other purposes as well.
2024-02-26 10:27:41 +01:00
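
A sketch of what such a test can look like with the new header (the specific filecheck flag value here is made up for illustration):

```rust
//@ compile-flags: -Copt-level=3
//@ filecheck-flags: --dump-input=always

// CHECK-LABEL: @add
#[no_mangle]
pub fn add(a: u32, b: u32) -> u32 {
    // CHECK: add i32
    a + b
}
```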
Markus Reiter
b2fbb8a053
Use generic NonZero in tests. 2024-02-25 12:03:48 +01:00
Ralf Jung
c1d0e489e5 fix use of platform_intrinsics in tests 2024-02-25 08:15:44 +01:00
bors
89d8e3116c Auto merge of #120650 - clubby789:switchint-const, r=saethlin
Use `br` instead of a conditional when switching on a constant boolean

r? `@ghost`
2024-02-25 01:27:44 +00:00
Ben Kimock
2f3c0b9859 Ignore less tests in debug builds 2024-02-23 18:04:01 -05:00
clubby789
7159aed51e Use br instead of conditional when branching on constant 2024-02-23 10:52:55 +00:00
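
Roughly what the change does, sketched on a trivial function (the IR in the comments is illustrative):

```rust
// When the value being switched on is a known constant, codegen now emits
// an unconditional branch to the known target:
//   before: br i1 true, label %then, label %else
//   after:  br label %then
pub fn always() -> u32 {
    const FLAG: bool = true;
    if FLAG { 1 } else { 0 }
}
```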
Zalathar
e56cc8408d Remove unhelpful DEFINE_INTERNAL from filecheck flags
This define was copied over from the run-make version of the test, but doesn't
seem to serve any useful purpose.
2024-02-23 11:29:01 +11:00
Zalathar
0c19c632ab Convert tests/run-make/instrument-coverage to an ordinary codegen test
This test was already very close to being an ordinary codegen test, except that
it needed some extra logic to set a few variables based on (target) platform
characteristics.

Now that we have support for `//@ filecheck-flags:`, we can instead set those
variables using the normal test revisions mechanism.
2024-02-23 11:28:59 +11:00
Zalathar
c1889b549b Move existing coverage codegen tests into a subdirectory
This makes room for migrating over `tests/run-make/instrument-coverage`,
without increasing the number of top-level items in the codegen test directory.
2024-02-23 11:28:09 +11:00
Zalathar
baec3076db Allow tests to specify a //@ filecheck-flags: header
Any flags specified here will be passed to LLVM's `filecheck` tool, in tests
that use that tool.
2024-02-23 11:28:06 +11:00
Zalathar
36f298c93d Add some simple meta-tests for the handling of filecheck flags 2024-02-23 11:27:38 +11:00
许杰友 Jieyou Xu (Joe)
6e48b96692
[AUTO_GENERATED] Migrate compiletest to use ui_test-style //@ directives 2024-02-22 16:04:04 +00:00
bors
52dba5ffe7 Auto merge of #121225 - RalfJung:simd-extract-insert-const-idx, r=oli-obk,Amanieu
require simd_insert, simd_extract indices to be constants

As discussed in https://github.com/rust-lang/rust/issues/77477 (see in particular [here](https://github.com/rust-lang/rust/issues/77477#issuecomment-703149102)). This PR doesn't touch codegen yet -- the first step is to ensure that the indices are always constants; the second step is to then make use of this fact in backends.

Blocked on https://github.com/rust-lang/stdarch/pull/1530 propagating to the rustc repo.
2024-02-22 09:59:41 +00:00
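
A sketch of the new requirement in source form, on nightly of that era (note that the `extern "platform-intrinsic"` declaration style shown here is itself removed a few commits later in this log):

```rust
#![feature(repr_simd, platform_intrinsics)]

#[repr(simd)]
struct F32x4(f32, f32, f32, f32);

extern "platform-intrinsic" {
    fn simd_extract<T, E>(x: T, idx: u32) -> E;
}

fn first_lane(v: F32x4) -> f32 {
    // OK: the index is a constant.
    unsafe { simd_extract(v, 0) }
    // A runtime index, e.g. `simd_extract(v, i)` for a function parameter
    // `i: u32`, is what this PR turns into a hard error.
}
```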
Ralf Jung
07b6240947 remove simd_reduce_{min,max}_nanless 2024-02-21 20:50:47 +01:00
bors
bb8b11e67d Auto merge of #120718 - saethlin:reasonable-fast-math, r=nnethercote
Add "algebraic" fast-math intrinsics, based on fast-math ops that cannot return poison

Setting all of LLVM's fast-math flags makes our fast-math intrinsics very dangerous, because passing certain inputs is UB. This set of flags permits common algebraic transformations, but according to the [LangRef](https://llvm.org/docs/LangRef.html#fastmath), only the flags `nnan` (no nans) and `ninf` (no infs) can produce poison.

This PR also uses the algebraic float ops to fix https://github.com/rust-lang/rust/issues/120720.

cc `@orlp`
2024-02-21 09:43:33 +00:00
Ben Kimock
cc73b71e8e Add "algebraic" versions of the fast-math intrinsics 2024-02-20 12:39:03 -05:00
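
A nightly sketch using the new ops (assuming the names landed as `fadd_algebraic`/`fmul_algebraic` in `core::intrinsics`): reassociation is permitted, so reductions like this can vectorize, and no input value is UB.

```rust
#![feature(core_intrinsics)]
use std::intrinsics::{fadd_algebraic, fmul_algebraic};

// A dot product whose additions may be reassociated, enabling SIMD.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter()
        .zip(b)
        .fold(0.0, |acc, (&x, &y)| fadd_algebraic(acc, fmul_algebraic(x, y)))
}

fn main() {
    println!("{}", dot(&[1.0, 2.0, 3.0], &[4.0, 5.0, 6.0]));
}
```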
Ralf Jung
e19f89b5ff delete a test that no longer makes sense 2024-02-20 08:37:47 +01:00
CKingX
2d25c3b369
Updated test to account for added previous features (thanks erikdesjardins!) 2024-02-19 21:59:13 -08:00
bors
158f00a1c5 Auto merge of #118264 - lukas-code:optimized-draining, r=the8472
Optimize `VecDeque::drain` for (half-)open ranges

The most common use cases of `VecDeque::drain` consume either the entire queue or elements from the front or back.[^1] This PR makes these operations faster by optimizing the generated code of the destructor of the drain:

* `.drain(..)` is now the same as `.clear()`.
* `.drain(n..)` is now (almost[^2]) the same as `.truncate(n)`.
* `.drain(..n)` is now an efficient "advance" function. This operation is not provided by a dedicated function and optimizing it is my main motivation for this PR.

Previously, all of these cases generated a function call to the destructor of the `DropGuard`, emitting a lot of unused machine code as well as unnecessary branches and loads/stores of stack variables.

There are no algorithmic changes in this PR, but it simplifies the code enough to allow LLVM to recognize the special cases and optimize accordingly. Most notably, it allows elimination of the rather large [`wrap_copy`] function.

Some [rudimentary microbenchmarks][benches] show a performance improvement of **~3x-4x** on my machine for the special cases and roughly equal performance for the general case.

Best reviewed commit by commit.

[^1]: source: GitHub code search: [full range `drain(..)` = 7.5k results][full], [from front `drain(..n)` = 3.2k results][front], [from back `drain(n..)` = 1.6k results][back], [from middle `drain(n..m)` = <500 results][middle]

[^2]: `.drain(0..)` and `.clear()` reset the head to 0, but `.truncate(0)` does not.

[full]: https://github.com/search?type=code&q=%2FVecDeque%28.%7C%5Cn%29%2B%5C.drain%5C%280%3F%5C.%5C.%5C%29%2F+lang%3ARust
[front]: https://github.com/search?type=code&q=%2FVecDeque%28.%7C%5Cn%29%2B%5C.drain%5C%280%3F%5C.%5C.%5B%5E%29%5D.*%5C%29%2F+lang%3ARust
[back]: https://github.com/search?type=code&q=%2FVecDeque%28.%7C%5Cn%29%2B%5C.drain%5C%28%5B%5E0%5D.*%5C.%5C.%5C%29%2F+lang%3ARust
[middle]: https://github.com/search?type=code&q=%2FVecDeque%28.%7C%5Cn%29%2B%5C.drain%5C%28%5B%5E0%5D.*%5C.%5C.%5B%5E%29%5D.*%5C%29%2F+lang%3ARust
[`wrap_copy`]: 4fd68eb47b/library/alloc/src/collections/vec_deque/mod.rs (L262-L391)
[benches]: https://gist.github.com/lukas-code/c97bd707d074c4cc31f241edbc7fd2a2

<details>
<summary>generated assembly</summary>

before:
```asm
clear:
	sub rsp, 40
	mov rax, qword ptr [rdi + 24]
	mov qword ptr [rdi + 24], 0
	mov qword ptr [rsp], rdi
	mov qword ptr [rsp + 8], rax
	xorps xmm0, xmm0
	movups xmmword ptr [rsp + 16], xmm0
	mov qword ptr [rsp + 32], rax
	test rax, rax
	je .LBB1_2
	mov rcx, qword ptr [rdi]
	mov rdx, qword ptr [rdi + 16]
	xor esi, esi
	cmp rdx, rcx
	cmovae rsi, rcx
	sub rdx, rsi
	mov rsi, rcx
	sub rsi, rdx
	lea rdi, [rdx + rax]
	cmp rsi, rax
	cmovb rdi, rcx
	sub rdi, rdx
	mov qword ptr [rsp + 16], rdi
	mov qword ptr [rsp + 32], 0
.LBB1_2:
	mov rdi, rsp
	call core::ptr::drop_in_place<<alloc::collections::vec_deque::drain::Drain<T,A> as core::ops::drop::Drop>::drop::DropGuard<i32,alloc::alloc::Global>>
	add rsp, 40
	ret

truncate:
	mov rax, qword ptr [rdi + 24]
	sub rax, rsi
	jbe .LBB2_2
	sub rsp, 40
	mov qword ptr [rdi + 24], rsi
	mov qword ptr [rsp], rdi
	mov qword ptr [rsp + 8], rax
	mov rcx, qword ptr [rdi]
	mov rdx, qword ptr [rdi + 16]
	add rdx, rsi
	xor edi, edi
	cmp rdx, rcx
	cmovae rdi, rcx
	mov qword ptr [rsp + 24], 0
	sub rdx, rdi
	mov rdi, rcx
	sub rdi, rdx
	lea r8, [rdx + rax]
	cmp rdi, rax
	cmovb r8, rcx
	sub rsi, rdx
	add rsi, r8
	mov qword ptr [rsp + 16], rsi
	mov qword ptr [rsp + 32], 0
	mov rdi, rsp
	call core::ptr::drop_in_place<<alloc::collections::vec_deque::drain::Drain<T,A> as core::ops::drop::Drop>::drop::DropGuard<i32,alloc::alloc::Global>>
	add rsp, 40

advance:
	mov rcx, qword ptr [rdi + 24]
	mov rax, rcx
	sub rax, rsi
	jbe .LBB3_1
	sub rsp, 40
	mov qword ptr [rdi + 24], 0
	mov qword ptr [rsp], rdi
	mov qword ptr [rsp + 8], rsi
	mov qword ptr [rsp + 16], 0
	mov qword ptr [rsp + 24], rax
	mov qword ptr [rsp + 32], rsi
	test rsi, rsi
	je .LBB3_6
	mov rax, qword ptr [rdi]
	mov rcx, qword ptr [rdi + 16]
	xor edx, edx
	cmp rcx, rax
	cmovae rdx, rax
	sub rcx, rdx
	mov rdx, rax
	sub rdx, rcx
	lea rdi, [rcx + rsi]
	cmp rdx, rsi
	cmovb rdi, rax
	sub rdi, rcx
	mov qword ptr [rsp + 16], rdi
	mov qword ptr [rsp + 32], 0
.LBB3_6:
	mov rdi, rsp
	call core::ptr::drop_in_place<<alloc::collections::vec_deque::drain::Drain<T,A> as core::ops::drop::Drop>::drop::DropGuard<i32,alloc::alloc::Global>>
	add rsp, 40
	ret
.LBB3_1:
	test rcx, rcx
	je .LBB3_3
	mov qword ptr [rdi + 24], 0
.LBB3_3:
	mov qword ptr [rdi + 16], 0
	ret

remove:
	sub rsp, 40
	cmp rdx, rsi
	jb .LBB4_5
	mov rax, qword ptr [rdi + 24]
	mov rcx, rax
	sub rcx, rdx
	jb .LBB4_6
	mov qword ptr [rdi + 24], rsi
	mov qword ptr [rsp], rdi
	sub rdx, rsi
	mov qword ptr [rsp + 8], rdx
	mov qword ptr [rsp + 16], rsi
	mov qword ptr [rsp + 24], rcx
	mov qword ptr [rsp + 32], rdx
	je .LBB4_4
	mov rax, qword ptr [rdi]
	mov rcx, qword ptr [rdi + 16]
	add rcx, rsi
	xor edi, edi
	cmp rcx, rax
	cmovae rdi, rax
	sub rcx, rdi
	mov rdi, rax
	sub rdi, rcx
	lea r8, [rcx + rdx]
	cmp rdi, rdx
	cmovb r8, rax
	sub rsi, rcx
	add rsi, r8
	mov qword ptr [rsp + 16], rsi
	mov qword ptr [rsp + 32], 0
.LBB4_4:
	mov rdi, rsp
	call core::ptr::drop_in_place<<alloc::collections::vec_deque::drain::Drain<T,A> as core::ops::drop::Drop>::drop::DropGuard<i32,alloc::alloc::Global>>
	add rsp, 40
	ret
.LBB4_5:
	lea rax, [rip + .L__unnamed_2]
	mov rdi, rsi
	mov rsi, rdx
	mov rdx, rax
	call qword ptr [rip + core::slice::index::slice_index_order_fail@GOTPCREL]
.LBB4_6:
	lea rcx, [rip + .L__unnamed_2]
	mov rdi, rdx
	mov rsi, rax
	mov rdx, rcx
	call qword ptr [rip + core::slice::index::slice_end_index_len_fail@GOTPCREL]

core::ptr::drop_in_place<<alloc::collections::vec_deque::drain::Drain<T,A> as core::ops::drop::Drop>::drop::DropGuard<i32,alloc::alloc::Global>>:
	push rbp
	push r15
	push r14
	push r13
	push r12
	push rbx
	sub rsp, 24
	mov rsi, qword ptr [rdi + 32]
	test rsi, rsi
	je .LBB0_2
	mov rax, qword ptr [rdi + 16]
	add rsi, rax
	jb .LBB0_45
.LBB0_2:
	mov r13, qword ptr [rdi]
	mov rbp, qword ptr [rdi + 8]
	mov rbx, qword ptr [r13 + 24]
	lea r12, [rbx + rbp]
	mov r15, qword ptr [rdi + 24]
	lea rsi, [r15 + r12]
	test rbx, rbx
	je .LBB0_10
	test r15, r15
	je .LBB0_42
	cmp rbx, r15
	jbe .LBB0_12
	mov r14, qword ptr [r13]
	mov rax, qword ptr [r13 + 16]
	add r12, rax
	xor ecx, ecx
	cmp r12, r14
	mov rdx, r14
	cmovb rdx, rcx
	sub r12, rdx
	add rbx, rax
	cmp rbx, r14
	cmovae rcx, r14
	sub rbx, rcx
	mov rcx, rbx
	sub rcx, r12
	je .LBB0_42
	mov rdi, qword ptr [r13 + 8]
	mov rax, rcx
	add rax, r14
	cmovae rax, rcx
	mov r8, r14
	sub r8, r12
	mov rcx, r14
	sub rcx, rbx
	mov rdx, r15
	sub rdx, r8
	mov qword ptr [rsp + 16], rsi
	jbe .LBB0_18
	cmp rax, r15
	jae .LBB0_24
	mov rdx, r15
	sub rdx, r8
	shl rdx, 2
	cmp r15, rcx
	jbe .LBB0_30
	sub r8, rcx
	mov qword ptr [rsp], rdi
	mov rax, qword ptr [rsp]
	lea rdi, [rax + 4*r8]
	mov rsi, qword ptr [rsp]
	mov qword ptr [rsp + 8], rcx
	mov r15, r8
	call qword ptr [rip + memmove@GOTPCREL]
	sub r14, r15
	mov rax, qword ptr [rsp]
	lea rsi, [rax + 4*r14]
	shl r15, 2
	mov rdi, qword ptr [rsp]
	mov rdx, r15
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, qword ptr [rsp]
	lea rsi, [rdi + 4*r12]
	lea rdi, [rdi + 4*rbx]
	mov r15, qword ptr [rsp + 8]
	jmp .LBB0_36
.LBB0_10:
	test r15, r15
	je .LBB0_17
	mov rax, qword ptr [r13]
	sub rsi, rbp
	add rbp, qword ptr [r13 + 16]
	xor ecx, ecx
	cmp rbp, rax
	cmovae rcx, rax
	sub rbp, rcx
	mov qword ptr [r13 + 16], rbp
	jmp .LBB0_43
.LBB0_12:
	mov rdx, qword ptr [r13 + 16]
	mov r15, qword ptr [r13]
	lea rax, [rdx + rbp]
	xor ecx, ecx
	cmp rax, r15
	cmovae rcx, r15
	mov r12, rax
	sub r12, rcx
	mov rcx, r12
	sub rcx, rdx
	je .LBB0_41
	mov rdi, qword ptr [r13 + 8]
	mov rax, rcx
	add rax, r15
	cmovae rax, rcx
	mov r8, r15
	sub r8, rdx
	mov rcx, r15
	sub rcx, r12
	mov r14, rbx
	sub r14, r8
	mov qword ptr [rsp + 16], rsi
	jbe .LBB0_21
	cmp rax, rbx
	jae .LBB0_26
	mov qword ptr [rsp], rdx
	mov rdx, rbx
	sub rdx, r8
	shl rdx, 2
	cmp rbx, rcx
	jbe .LBB0_32
	sub r8, rcx
	mov rbx, rdi
	lea rdi, [rdi + 4*r8]
	mov rsi, rbx
	mov qword ptr [rsp + 8], rcx
	mov r14, r8
	call qword ptr [rip + memmove@GOTPCREL]
	sub r15, r14
	lea rsi, [rbx + 4*r15]
	shl r14, 2
	mov rdi, rbx
	mov rdx, r14
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, rbx
	mov rax, qword ptr [rsp]
	lea rsi, [rbx + 4*rax]
	lea rdi, [rbx + 4*r12]
	mov rbx, qword ptr [rsp + 8]
	jmp .LBB0_40
.LBB0_17:
	xorps xmm0, xmm0
	movups xmmword ptr [r13 + 16], xmm0
	jmp .LBB0_44
.LBB0_18:
	mov r14, r15
	sub r14, rcx
	jbe .LBB0_28
	cmp rax, r15
	jae .LBB0_33
	lea rax, [rcx + r12]
	sub r15, rcx
	lea rsi, [rdi + 4*rax]
	shl r15, 2
	mov r14, rdi
	mov rdx, r15
	mov r15, rcx
	jmp .LBB0_31
.LBB0_21:
	mov r14, rbx
	sub r14, rcx
	jbe .LBB0_29
	cmp rax, rbx
	jae .LBB0_34
	lea rax, [rcx + rdx]
	sub rbx, rcx
	lea rsi, [rdi + 4*rax]
	shl rbx, 2
	mov r14, rdi
	mov r15, rdx
	mov rdx, rbx
	mov rbx, rcx
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, r14
	lea rsi, [r14 + 4*r15]
	lea rdi, [r14 + 4*r12]
	jmp .LBB0_40
.LBB0_24:
	sub r15, rcx
	jbe .LBB0_35
	sub rcx, r8
	mov qword ptr [rsp + 8], rcx
	lea rsi, [rdi + 4*r12]
	mov r12, rdi
	lea rdi, [rdi + 4*rbx]
	lea rdx, [4*r8]
	mov r14, r8
	call qword ptr [rip + memmove@GOTPCREL]
	add r14, rbx
	lea rdi, [r12 + 4*r14]
	mov rbx, qword ptr [rsp + 8]
	lea rdx, [4*rbx]
	mov rsi, r12
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, r12
	lea rsi, [r12 + 4*rbx]
	jmp .LBB0_36
.LBB0_26:
	sub rbx, rcx
	jbe .LBB0_37
	sub rcx, r8
	lea rsi, [rdi + 4*rdx]
	mov r15, rdi
	lea rdi, [rdi + 4*r12]
	lea rdx, [4*r8]
	mov r14, rcx
	mov qword ptr [rsp], r8
	call qword ptr [rip + memmove@GOTPCREL]
	add r12, qword ptr [rsp]
	lea rdi, [r15 + 4*r12]
	lea rdx, [4*r14]
	mov rsi, r15
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, r15
	lea rsi, [r15 + 4*r14]
	jmp .LBB0_40
.LBB0_28:
	lea rsi, [rdi + 4*r12]
	lea rdi, [rdi + 4*rbx]
	jmp .LBB0_36
.LBB0_29:
	lea rsi, [rdi + 4*rdx]
	lea rdi, [rdi + 4*r12]
	jmp .LBB0_40
.LBB0_30:
	lea rax, [r8 + rbx]
	mov r14, rdi
	lea rdi, [rdi + 4*rax]
	mov rsi, r14
	mov r15, r8
.LBB0_31:
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, r14
	lea rsi, [r14 + 4*r12]
	lea rdi, [r14 + 4*rbx]
	jmp .LBB0_36
.LBB0_32:
	lea rax, [r12 + r8]
	mov rbx, rdi
	lea rdi, [rdi + 4*rax]
	mov rsi, rbx
	mov r14, r8
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, rbx
	mov rax, qword ptr [rsp]
	lea rsi, [rbx + 4*rax]
	jmp .LBB0_38
.LBB0_33:
	lea rsi, [rdi + 4*r12]
	mov r15, rdi
	lea rdi, [rdi + 4*rbx]
	lea rdx, [4*rcx]
	mov rbx, rcx
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, r15
	add rbx, r12
	lea rsi, [r15 + 4*rbx]
	mov r15, r14
	jmp .LBB0_36
.LBB0_34:
	lea rsi, [rdi + 4*rdx]
	mov rbx, rdi
	lea rdi, [rdi + 4*r12]
	mov r15, rdx
	lea rdx, [4*rcx]
	mov r12, rcx
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, rbx
	add r12, r15
	lea rsi, [rbx + 4*r12]
	jmp .LBB0_39
.LBB0_35:
	lea rsi, [rdi + 4*r12]
	mov r14, rdi
	lea rdi, [rdi + 4*rbx]
	mov r12, rdx
	lea rdx, [4*r8]
	mov r15, r8
	call qword ptr [rip + memmove@GOTPCREL]
	add r15, rbx
	mov rsi, r14
	lea rdi, [r14 + 4*r15]
	mov r15, r12
.LBB0_36:
	shl r15, 2
	mov rdx, r15
	call qword ptr [rip + memmove@GOTPCREL]
	mov rsi, qword ptr [rsp + 16]
	jmp .LBB0_42
.LBB0_37:
	lea rsi, [rdi + 4*rdx]
	mov rbx, rdi
	lea rdi, [rdi + 4*r12]
	lea rdx, [4*r8]
	mov r15, r8
	call qword ptr [rip + memmove@GOTPCREL]
	add r12, r15
	mov rsi, rbx
.LBB0_38:
	lea rdi, [rbx + 4*r12]
.LBB0_39:
	mov rbx, r14
.LBB0_40:
	shl rbx, 2
	mov rdx, rbx
	call qword ptr [rip + memmove@GOTPCREL]
	mov r15, qword ptr [r13]
	mov rax, qword ptr [r13 + 16]
	add rax, rbp
	mov rsi, qword ptr [rsp + 16]
.LBB0_41:
	xor ecx, ecx
	cmp rax, r15
	cmovae rcx, r15
	sub rax, rcx
	mov qword ptr [r13 + 16], rax
.LBB0_42:
	sub rsi, rbp
.LBB0_43:
	mov qword ptr [r13 + 24], rsi
.LBB0_44:
	add rsp, 24
	pop rbx
	pop r12
	pop r13
	pop r14
	pop r15
	pop rbp
	ret
.LBB0_45:
	lea rdx, [rip + .L__unnamed_1]
	mov rdi, rax
	call qword ptr [rip + core::slice::index::slice_index_order_fail@GOTPCREL]
```

after:
```asm
clear:
	movups xmmword ptr [rdi + 16], xmm0
	ret

truncate:
	cmp qword ptr [rdi + 24], rsi
	jbe .LBB2_4
	test rsi, rsi
	jne .LBB2_3
	mov qword ptr [rdi + 16], 0
.LBB2_3:
	mov qword ptr [rdi + 24], rsi
.LBB2_4:
	ret

advance:
	mov rcx, qword ptr [rdi + 24]
	mov rax, rcx
	sub rax, rsi
	jbe .LBB3_1
	mov rcx, qword ptr [rdi]
	add rsi, qword ptr [rdi + 16]
	xor edx, edx
	cmp rsi, rcx
	cmovae rdx, rcx
	sub rsi, rdx
	mov qword ptr [rdi + 16], rsi
	mov qword ptr [rdi + 24], rax
	ret
.LBB3_1:
	test rcx, rcx
	je .LBB3_3
	mov qword ptr [rdi + 24], 0
.LBB3_3:
	mov qword ptr [rdi + 16], 0
	ret

remove:
	push rbp
	push r15
	push r14
	push r13
	push r12
	push rbx
	push rax
	mov r15, rsi
	mov r14, rdx
	sub r14, rsi
	jb .LBB4_9
	mov rbx, rdi
	mov r12, qword ptr [rdi + 24]
	mov r13, r12
	sub r13, rdx
	jb .LBB4_10
	mov qword ptr [rbx + 24], r15
	mov rbp, r12
	sub rbp, r14
	test r15, r15
	je .LBB4_4
	cmp rbp, r15
	jne .LBB4_11
.LBB4_4:
	cmp r12, r14
	jne .LBB4_6
.LBB4_5:
	mov qword ptr [rbx + 16], 0
	jmp .LBB4_8
.LBB4_11:
	mov rdi, rbx
	mov rsi, r14
	mov rdx, r15
	mov rcx, r13
	call <<alloc::collections::vec_deque::drain::Drain<T,A> as core::ops::drop::Drop>::drop::DropGuard<T,A> as core::ops::drop::Drop>::drop::copy_data
	cmp r12, r14
	je .LBB4_5
.LBB4_6:
	cmp r13, r15
	jbe .LBB4_8
	mov rax, qword ptr [rbx]
	add r14, qword ptr [rbx + 16]
	xor ecx, ecx
	cmp r14, rax
	cmovae rcx, rax
	sub r14, rcx
	mov qword ptr [rbx + 16], r14
.LBB4_8:
	mov qword ptr [rbx + 24], rbp
	add rsp, 8
	pop rbx
	pop r12
	pop r13
	pop r14
	pop r15
	pop rbp
	ret
.LBB4_9:
	lea rax, [rip + .L__unnamed_1]
	mov rdi, r15
	mov rsi, rdx
	mov rdx, rax
	call qword ptr [rip + core::slice::index::slice_index_order_fail@GOTPCREL]
.LBB4_10:
	lea rax, [rip + .L__unnamed_1]
	mov rdi, rdx
	mov rsi, r12
	mov rdx, rax
	call qword ptr [rip + core::slice::index::slice_end_index_len_fail@GOTPCREL]

<<alloc::collections::vec_deque::drain::Drain<T,A> as core::ops::drop::Drop>::drop::DropGuard<T,A> as core::ops::drop::Drop>::drop::copy_data:
	push rbp
	push r15
	push r14
	push r13
	push r12
	push rbx
	push rax
	mov r14, rsi
	cmp rdx, rcx
	jae .LBB0_1
	mov r12, qword ptr [rdi]
	mov rax, qword ptr [rdi + 16]
	add r14, rax
	xor ecx, ecx
	cmp r14, r12
	cmovae rcx, r12
	sub r14, rcx
	mov r15, rdx
	mov r13, r14
	mov r14, rax
	mov rcx, r13
	sub rcx, r14
	je .LBB0_18
.LBB0_4:
	mov rdi, qword ptr [rdi + 8]
	mov rax, rcx
	add rax, r12
	cmovae rax, rcx
	mov rbx, r12
	sub rbx, r14
	mov rcx, r12
	sub rcx, r13
	mov rbp, r15
	sub rbp, rbx
	jbe .LBB0_5
	cmp rax, r15
	jae .LBB0_12
	mov rdx, r15
	sub rdx, rbx
	shl rdx, 2
	cmp r15, rcx
	jbe .LBB0_16
	sub rbx, rcx
	mov rbp, rdi
	lea rdi, [rdi + 4*rbx]
	mov r15, qword ptr [rip + memmove@GOTPCREL]
	mov rsi, rbp
	mov qword ptr [rsp], rcx
	call r15
	sub r12, rbx
	lea rsi, [4*r12]
	add rsi, rbp
	shl rbx, 2
	mov rdi, rbp
	mov rdx, rbx
	call r15
	mov rdi, rbp
	lea rsi, [4*r14]
	add rsi, rbp
	lea rdi, [4*r13]
	add rdi, rbp
	mov r15, qword ptr [rsp]
	jmp .LBB0_7
.LBB0_1:
	mov r15, rcx
	add r14, rdx
	mov r12, qword ptr [rdi]
	mov r13, qword ptr [rdi + 16]
	add r14, r13
	xor eax, eax
	cmp r14, r12
	mov rcx, r12
	cmovb rcx, rax
	sub r14, rcx
	add r13, rdx
	cmp r13, r12
	cmovae rax, r12
	sub r13, rax
	mov rcx, r13
	sub rcx, r14
	jne .LBB0_4
.LBB0_18:
	add rsp, 8
	pop rbx
	pop r12
	pop r13
	pop r14
	pop r15
	pop rbp
	ret
.LBB0_5:
	mov rbx, r15
	sub rbx, rcx
	jbe .LBB0_6
	cmp rax, r15
	jae .LBB0_9
	lea rax, [rcx + r14]
	sub r15, rcx
	lea rsi, [rdi + 4*rax]
	shl r15, 2
	mov rbx, rdi
	mov rdx, r15
	mov r15, rcx
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, rbx
	lea rsi, [rbx + 4*r14]
	lea rdi, [rbx + 4*r13]
	jmp .LBB0_7
.LBB0_12:
	sub r15, rcx
	jbe .LBB0_13
	sub rcx, rbx
	lea rsi, [rdi + 4*r14]
	mov r12, rdi
	lea rdi, [rdi + 4*r13]
	lea rdx, [4*rbx]
	mov r14, qword ptr [rip + memmove@GOTPCREL]
	mov rbp, rcx
	call r14
	add rbx, r13
	lea rdi, [r12 + 4*rbx]
	lea rdx, [4*rbp]
	mov rsi, r12
	call r14
	mov rdi, r12
	lea rsi, [r12 + 4*rbp]
	jmp .LBB0_7
.LBB0_6:
	lea rsi, [rdi + 4*r14]
	lea rdi, [rdi + 4*r13]
	jmp .LBB0_7
.LBB0_16:
	lea rax, [rbx + r13]
	mov r15, rdi
	lea rdi, [rdi + 4*rax]
	mov rsi, r15
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, r15
	lea rsi, [r15 + 4*r14]
	lea rdi, [r15 + 4*r13]
	mov r15, rbx
	jmp .LBB0_7
.LBB0_9:
	lea rsi, [rdi + 4*r14]
	mov r15, rdi
	lea rdi, [rdi + 4*r13]
	lea rdx, [4*rcx]
	mov r12, rcx
	call qword ptr [rip + memmove@GOTPCREL]
	mov rdi, r15
	add r12, r14
	lea rsi, [r15 + 4*r12]
	mov r15, rbx
	jmp .LBB0_7
.LBB0_13:
	lea rsi, [rdi + 4*r14]
	mov r14, rdi
	lea rdi, [rdi + 4*r13]
	lea rdx, [4*rbx]
	call qword ptr [rip + memmove@GOTPCREL]
	add rbx, r13
	mov rsi, r14
	lea rdi, [r14 + 4*rbx]
	mov r15, rbp
.LBB0_7:
	shl r15, 2
	mov rdx, r15
	add rsp, 8
	pop rbx
	pop r12
	pop r13
	pop r14
	pop r15
	pop rbp
	jmp qword ptr [rip + memmove@GOTPCREL]
```

</details>
2024-02-18 00:03:39 +00:00
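
The three special-cased patterns from the description, as plain usage:

```rust
use std::collections::VecDeque;

fn main() {
    let mut q: VecDeque<i32> = (0..10).collect();
    q.drain(7..); // from the back: now (almost) the same as q.truncate(7)
    q.drain(..3); // from the front: the "advance" case with no dedicated method
    q.drain(..);  // everything: now the same as q.clear()
    assert!(q.is_empty());
}
```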
Ben Kimock
7c2db703b0 Don't use mem::zeroed in vec::IntoIter 2024-02-16 10:44:39 -05:00
Lukas Markeffsky
8f259ade66 add codegen test 2024-02-16 13:11:05 +01:00
bors
dfa88b328f Auto merge of #120500 - oli-obk:intrinsics2.0, r=WaffleLapkin
Implement intrinsics with fallback bodies

fixes #93145 (though we can port many more intrinsics)
cc #63585

The way this works is that the backend logic for generating custom code for intrinsics has been made fallible. The only failure path is "this intrinsic is unknown". The `Instance` (that was `InstanceDef::Intrinsic`) then gets converted to `InstanceDef::Item`, which represents the fallback body. A regular function call to that body is then codegenned. This is currently implemented for

* codegen_ssa (so llvm and gcc)
* codegen_cranelift

Other backends will need to adjust, but they can just keep doing what they were doing if they prefer (though adding new intrinsics to the compiler will then require those backends to implement them, instead of getting the fallback body).

cc `@scottmcm` `@WaffleLapkin`

### todo

* [ ] miri support
* [x] default intrinsic name to name of function instead of requiring it to be specified in attribute
* [x] make sure that the bodies are always available (must be collected for metadata)
2024-02-16 09:53:01 +00:00
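
A toy model of the dispatch described above (an editorial sketch, not rustc's actual code): the backend hook is fallible, the only failure is "unknown intrinsic", and that failure falls back to a plain call to the intrinsic's body.

```rust
enum BackendError {
    UnknownIntrinsic,
}

// Stand-in for the backend's "generate custom code for this intrinsic" hook.
fn codegen_custom_intrinsic(name: &str) -> Result<(), BackendError> {
    match name {
        "likely" | "unlikely" => Ok(()), // the backend knows these
        _ => Err(BackendError::UnknownIntrinsic),
    }
}

fn codegen_intrinsic_call(name: &str) {
    if codegen_custom_intrinsic(name).is_err() {
        // Corresponds to converting InstanceDef::Intrinsic into
        // InstanceDef::Item: emit a regular call to the fallback body.
        println!("emitting call to fallback body of `{name}`");
    }
}

fn main() {
    codegen_intrinsic_call("likely");
    codegen_intrinsic_call("brand_new_intrinsic");
}
```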
Augie Fackler
a6ee72df91 tests: LLVM 18 infers an extra noalias here
This test started failing on LLVM 18 after change
61118ffd04. As far as I can tell, it's
just good fortune that LLVM is able to sniff out the new noalias here,
and it's correct.
2024-02-13 10:33:40 +01:00
Oli Scherer
f35a2bd401 Support safe intrinsics with fallback bodies
Turn `is_val_statically_known` into such an intrinsic to demonstrate. It is perfectly safe to call after all.
2024-02-12 17:55:36 +00:00
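
A nightly sketch of the kind of use this enables (the call is safe, per the commit; the gate name is assumed to be `core_intrinsics`): a function can take a specialized path when, after inlining, LLVM can see the argument's value.

```rust
#![feature(core_intrinsics)]
use std::intrinsics::is_val_statically_known;

#[inline]
fn pow(x: u64, n: u32) -> u64 {
    // Returns true only when the value is known to LLVM at compile time,
    // so this branch folds away either way.
    if is_val_statically_known(n) && n == 2 {
        x.wrapping_mul(x) // specialized square
    } else {
        x.wrapping_pow(n)
    }
}

fn main() {
    println!("{}", pow(3, 2));
}
```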
Matthias Krüger
1843dfd0d5
Rollup merge of #118307 - scottmcm:tuple-eq-simpler, r=joshtriplett
Remove an unneeded helper from the tuple library code

Thanks to https://github.com/rust-lang/rust/pull/107022, this is just what `==` does, so we don't need the helper here anymore.
2024-02-11 08:25:41 +01:00
Michael Goulet
34ed554d81 Build DebugInfo for coroutine-closure 2024-02-09 16:01:29 +00:00
Guillaume Boisseau
7954c28cf9
Rollup merge of #119162 - heiher:direct-access-external-data, r=petrochenkov
Add unstable `-Z direct-access-external-data` cmdline flag for `rustc`

The new flag has been described in the Major Change Proposal at https://github.com/rust-lang/compiler-team/issues/707

Fixes #118053
2024-02-07 18:24:41 +01:00
Matthias Krüger
59ba8024af
Rollup merge of #120502 - clubby789:remove-ffi-returns-twice, r=compiler-errors
Remove `ffi_returns_twice` feature

The [tracking issue](https://github.com/rust-lang/rust/issues/58314) and [RFC](https://github.com/rust-lang/rfcs/pull/2633) have been closed for a couple of years.

There is also an attribute gate in R-A which should be removed if this lands.
2024-02-06 22:45:42 +01:00
bors
268dbbbc4b Auto merge of #120624 - matthiaskrgr:rollup-3gvcl20, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #120484 (Avoid ICE when is_val_statically_known is not of a supported type)
 - #120516 (pattern_analysis: cleanup manual impls)
 - #120517 (never patterns: It is correct to lower `!` to `_`.)
 - #120523 (Improve `io::Read::read_buf_exact` error case)
 - #120528 (Store SHOULD_CAPTURE as AtomicU8)
 - #120529 (Update data layouts in custom target tests for LLVM 18)
 - #120531 (Remove a bunch of `has_errors` checks that have no meaningful or the wrong effect)
 - #120533 (Correct paths for hexagon-unknown-none-elf platform doc)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-02-04 20:51:28 +00:00
Matthias Krüger
6f24836a5b
Rollup merge of #120484 - Teapot4195:issue-120480-fix, r=compiler-errors
Avoid ICE when is_val_statically_known is not of a supported type

2 ICEs with 1 stone!
1. Implement `llvm.is.constant.ptr` to avoid the first ICE in the linked issue.
2. Return `false` when the argument is not one of `i*`/`f*`/`ptr` to avoid the second ICE.

fixes #120480
2024-02-03 22:25:14 +01:00
Oli Scherer
6ac035df44 Revert unsound libcore changes of #119911 2024-02-01 22:53:25 +00:00
clubby789
7331315898 Remove ffi_returns_twice feature 2024-01-30 22:09:09 +00:00
Alex Huang
a97ff2a750 Add additional test cases for is_val_statically_known 2024-01-30 14:37:59 -05:00
Guillaume Gomez
6a1d34f32a
Rollup merge of #120310 - krasimirgg:jan-v0-sym, r=Mark-Simulacrum
adapt test for v0 symbol mangling

No functional changes intended.

Adapts the test to also work under `new-symbol-mangling = true`.
2024-01-30 16:57:48 +01:00
Nikita Popov
bdf7404b43 Update codegen test for LLVM 18 2024-01-26 15:03:23 +01:00