24 Commits

Author SHA1 Message Date
Nicholas Nethercote
e4bf113027 Box CastTarget within PassMode.
Because `PassMode::Cast` is by far the largest variant, but also
relatively rare.

This requires making `PassMode` not impl `Copy`, and `Clone` is no
longer necessary. This causes lots of sigil adjusting, but nothing very
notable.
2022-08-26 09:35:28 +10:00
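
As a rough illustration of the size argument, here is a minimal sketch with made-up payload types standing in for `CastTarget` and the other variants' data (not the real `rustc_target` definitions): boxing the largest, rarely used variant shrinks the whole enum, at the price of no longer being `Copy`.

    use std::mem::size_of;

    #[allow(dead_code)]
    #[derive(Copy, Clone)]
    enum PassModeBefore {
        Ignore,
        Direct([u64; 2]), // stand-in for a small payload
        Cast([u64; 8]),   // stand-in for the comparatively large CastTarget
    }

    #[allow(dead_code)]
    #[derive(Clone)] // boxing the payload means `Copy` can no longer be derived
    enum PassModeAfter {
        Ignore,
        Direct([u64; 2]),
        Cast(Box<[u64; 8]>), // payload moved behind a Box
    }

    fn main() {
        // The enum as a whole is now only as large as its small variants.
        assert!(size_of::<PassModeAfter>() < size_of::<PassModeBefore>());
        println!(
            "before: {} bytes, after: {} bytes",
            size_of::<PassModeBefore>(),
            size_of::<PassModeAfter>()
        );
    }
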
Ralf Jung
0318f07bdd various nits from review 2022-07-20 17:12:08 -04:00
Ralf Jung
9cbd1066d7 add range metadata to alignment loads 2022-07-20 17:12:08 -04:00
bjorn3
399e020b96 Move vtable_size and vtable_align impls to cg_ssa 2022-07-20 17:12:08 -04:00
Thom Chiovoloni
2f872afdb5
Allow arithmetic and certain bitwise ops on AtomicPtr
This is mainly to support migrating from AtomicUsize, for the strict
provenance experiment.

Fixes #95492
2022-07-01 06:21:18 -07:00
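
A sketch of the kind of usage this enables (nightly-only at the time, behind the strict-provenance feature gates; the method and gate names below are as of that era and may have changed since):

    #![feature(strict_provenance_atomic_ptr)]

    use std::sync::atomic::{AtomicPtr, Ordering};

    fn main() {
        let mut data = [0u8; 8];
        let ptr = AtomicPtr::new(data.as_mut_ptr());

        // Atomically advance the stored pointer by 4 bytes without bouncing
        // through AtomicUsize, which would discard provenance.
        let old = ptr.fetch_byte_add(4, Ordering::Relaxed);
        assert_eq!(old, data.as_mut_ptr());

        // Atomically set a low tag bit in the pointer's address.
        ptr.fetch_or(1, Ordering::Relaxed);
    }
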
Mara Bos
4982a59986 Rename/restructure memory ordering intrinsics. 2022-06-28 08:58:27 +02:00
Tomasz Miąsko
f4c92cc4d1 rustc_codegen_ssa: cleanup AtomicOrdering
* Remove unused `NotAtomic` ordering.
* Rename `Monotonic` to `Relaxed`, the Rust-specific name.
2022-05-25 10:34:35 +02:00
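
A simplified stand-in for the resulting codegen-level enum (not the real `rustc_codegen_ssa` definition, which has a few more members):

    #[allow(dead_code)]
    pub enum AtomicOrdering {
        Relaxed, // previously `Monotonic`, LLVM's name for the same ordering
        Acquire,
        Release,
        AcquireRelease,
        SequentiallyConsistent,
        // `NotAtomic` is gone: a non-atomic access is simply not an atomic op.
    }

    fn main() {}
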
Scott McMurray
89a18cb600 Add unsigned_offset_from on pointers
Just as `add`/`sub` are the `usize` versions of `offset`, this adds the `usize` equivalent of `offset_from`.  Like how `.add(d)` replaced a whole bunch of `.offset(d as isize)`, you can see from the changes here that it's fairly common that code actually knows the order between the pointers and *wants* a `usize`, not an `isize`.

As a bonus, this can do `sub nuw`+`udiv exact`, rather than `sub`+`sdiv exact`, which can be optimized slightly better because it doesn't have to worry about negatives.  That's why the slice iterators weren't using `offset_from`, though I haven't updated that code in this PR because slices are so perf-critical that I'll do it as its own change.

This is an intrinsic, like `offset_from`, so that it can eventually be allowed in CTFE.  It also allows checking the extra safety condition -- see the test confirming that CTFE catches it if you pass the pointers in the wrong order.
2022-05-11 17:16:25 -07:00
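
A minimal sketch of the semantics (the real thing is an intrinsic-backed method on raw pointers; the name below follows the commit message, and the free function merely mirrors the behaviour in terms of the existing `offset_from`):

    /// Hypothetical free-function mirror of the new method's behaviour.
    unsafe fn unsigned_offset_from<T>(later: *const T, earlier: *const T) -> usize {
        let d = later.offset_from(earlier);
        // The extra safety condition from the commit: `later` must not come
        // before `earlier`, so the distance is non-negative.
        assert!(d >= 0, "pointers passed in the wrong order");
        d as usize
    }

    fn main() {
        let xs = [1u32, 2, 3, 4];
        let start = xs.as_ptr();
        let end = unsafe { start.add(xs.len()) };
        // Callers that already know `end >= start` get a `usize` directly,
        // instead of an `isize` they would immediately cast.
        assert_eq!(unsafe { unsigned_offset_from(end, start) }, 4);
    }
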
est31
2ef8af6619 Adopt let else in more places 2022-02-19 17:27:43 +01:00
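
For reference, the shape of the `let`-`else` construct being adopted (shown with the now-stable syntax; at the time of the commit it was still feature-gated):

    fn first_word(s: &str) -> &str {
        // `let ... else` binds on the success path and must diverge otherwise,
        // replacing a `match` or `if let` with an early return.
        let Some(word) = s.split_whitespace().next() else {
            return "";
        };
        word
    }

    fn main() {
        assert_eq!(first_word("hello world"), "hello");
        assert_eq!(first_word("   "), "");
    }
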
woppopo
29932db09b `const_deallocate`: Don't deallocate memory allocated in another const. Does nothing at runtime.
`const_allocate`: Returns a null pointer at runtime.
2022-01-26 13:06:09 +09:00
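
Roughly how the pair is meant to be used inside const evaluation (a sketch against the nightly toolchains of that era; the gate names and exact signatures are assumptions and may have changed since):

    #![feature(core_intrinsics, const_heap)]

    use std::intrinsics::{const_allocate, const_deallocate};

    const _: () = unsafe {
        // At compile time this hands out interpreter-managed memory...
        let p = const_allocate(4, 4); // size, align
        // ...which may only be freed within the same const evaluation.
        const_deallocate(p, 4, 4);
        // At runtime, per the commit: `const_allocate` returns null and
        // `const_deallocate` does nothing.
    };

    fn main() {}
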
Nicholas Nethercote
056d48a2c9 Remove unnecessary sigils around Symbol::as_str() calls. 2021-12-15 17:32:14 +11:00
Gary Guo
1c3409f333 Introduce NullOp::AlignOf 2021-09-13 00:08:35 +01:00
Tomasz Miąsko
77e5e17231 Prepare inbounds_gep for opaque pointers
Implement inbounds_gep using LLVMBuildInBoundsGEP2 which takes an
explicit type argument instead of deriving it from a pointer type.
2021-08-04 15:51:30 +02:00
Tomasz Miąsko
4013e094f5 Prepare gep for opaque pointers
Implement gep using LLVMBuildGEP2 which takes an explicit type argument
instead of deriving it from a pointer type.
2021-08-04 15:51:30 +02:00
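
Both this commit and the `inbounds_gep` one above follow the same pattern; a simplified, assumed sketch of the builder-trait shape afterwards (the real `BuilderMethods` trait in `rustc_codegen_ssa` is far larger):

    #[allow(dead_code)]
    trait GepBuilder {
        type Value;
        type Type;

        // The element type is now an explicit argument instead of being read
        // off the pointer's pointee type, which disappears once LLVM moves to
        // opaque pointers.
        fn gep(&mut self, ty: Self::Type, ptr: Self::Value, indices: &[Self::Value]) -> Self::Value;
        fn inbounds_gep(&mut self, ty: Self::Type, ptr: Self::Value, indices: &[Self::Value]) -> Self::Value;
    }

    fn main() {}
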
Nikita Popov
33e9a6b565 Pass type when creating atomic load
Instead of determining it from the pointer type, explicitly pass
the type to load.
2021-07-09 22:00:19 +02:00
kadmin
217ff6b7ea Switch to changing cp_non_overlap in tform
It was suggested to lower this in MIR instead of during SSA codegen, so do that.
2021-03-09 16:54:14 +00:00
kadmin
d4ae9ff826 Build StKind::CopyOverlapping
This replaces the previous construction in the intrinsics code with direct
construction of the Statement.
2021-03-09 16:54:14 +00:00
Tomasz Miąsko
4ad53dc9f5 Use pointer type in AtomicPtr::swap implementation 2020-12-20 00:00:00 +00:00
Tomasz Miąsko
a9ff4bd838 Always run intrinsics lowering pass
Move intrinsics lowering pass from the optimization phase (where it
would not run if -Zmir-opt-level=0), to the drop lowering phase where it
runs unconditionally.

The implementation of those intrinsics in code generation and in the
interpreter is now unnecessary. Remove it.
2020-12-15 00:00:00 +00:00
oli
392ea29757 Cast pointers to usize before passing them to atomic operations, as some platforms do not support atomic operations on pointers. 2020-11-29 12:58:03 +00:00
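
What the backend effectively does on such targets, written at the source level purely for illustration (the real change is in codegen, inserting pointer-to-integer and integer-to-pointer casts around the atomic instruction):

    use std::sync::atomic::{AtomicUsize, Ordering};

    // Round-trip the pointer through usize around the atomic operation.
    fn store_ptr(slot: &AtomicUsize, p: *mut u8) {
        slot.store(p as usize, Ordering::SeqCst);
    }

    fn load_ptr(slot: &AtomicUsize) -> *mut u8 {
        slot.load(Ordering::SeqCst) as *mut u8
    }

    fn main() {
        let mut x = 7u8;
        let slot = AtomicUsize::new(0);
        store_ptr(&slot, &mut x);
        assert_eq!(unsafe { *load_ptr(&slot) }, 7);
    }
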
oli
aabe70f90e Directly use raw pointers in AtomicPtr store/load 2020-11-28 17:13:47 +00:00
Bastian Kauschke
2bf93bd852 compiler: fold by value 2020-11-16 22:34:57 +01:00
est31
4fa5578774 Replace target.target with target and target.ptr_width with target.pointer_width
Preparation for a subsequent change that replaces
rustc_target::config::Config with its wrapped Target.

On its own, this commit breaks the build. I don't like making
build-breaking commits, but in this instance I believe that it
makes review easier, as the "real" changes of this PR can be
seen much more easily.

Result of running:

find compiler/ -type f -exec sed -i -e 's/target\.target\([)\.,; ]\)/target\1/g' {} \;
find compiler/ -type f -exec sed -i -e 's/target\.target$/target/g' {} \;
find compiler/ -type f -exec sed -i -e 's/target.ptr_width/target.pointer_width/g' {} \;
./x.py fmt
2020-10-15 12:02:24 +02:00
khyperia
21b0c1286a Extract some intrinsics out of rustc_codegen_llvm
A significant number of intrinsics do not actually need backend-specific
behaviors to be implemented, instead relying on methods already in
rustc_codegen_ssa. So, extract those methods out to rustc_codegen_ssa,
so that each backend doesn't need to reimplement the same code.
2020-09-15 23:35:31 +02:00