Commit Graph

1128 Commits

Author SHA1 Message Date
Camille GILLOT
5b2524eb03 Do not pre-compute reachable blocks. 2023-08-16 19:40:46 +00:00
Camille GILLOT
94c5ea350f Update doc comment. 2023-08-16 18:15:49 +00:00
Camille GILLOT
b8fed2f21c Make dataflow const-prop handle_switch_int monotonic. 2023-08-16 18:12:18 +00:00
Camille GILLOT
388f6a6413 Make TerminatorEdge plural. 2023-08-16 18:12:18 +00:00
Camille GILLOT
6cf15d4cb5 Rename MaybeUnreachable. 2023-08-16 18:12:18 +00:00
Camille GILLOT
f19cd3f2e1 Use TerminatorEdge for dataflow-const-prop. 2023-08-16 18:12:18 +00:00
Camille GILLOT
3acfa092db Only run MaybeInitializedPlaces once for drop elaboration. 2023-08-16 18:12:18 +00:00
Zalathar
5ca30c4646 Store BCB counters externally, not directly in the BCB graph
Storing coverage counter information in `CoverageCounters` has a few advantages
over storing it directly inside BCB graph nodes:

- The graph doesn't need to be mutable when making the counters, making it
easier to see that the graph itself is not modified during this step.

- All of the counter data is clearly visible in one place.

- It becomes possible to use a representation that doesn't correspond 1:1 to
graph nodes, e.g. storing all the edge counters in a single hashmap instead of
several.
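
As a rough illustration (my own sketch with made-up type names, not the actual rustc definitions), the external-storage layout described above could look like this:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real BCB graph and counter types.
#[derive(Copy, Clone, PartialEq, Eq, Hash)]
struct BasicCoverageBlock(u32); // index of a node in the (now immutable) BCB graph

#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct CounterId(u32);

#[derive(Default)]
struct CoverageCounters {
    // Counters attached to graph nodes.
    bcb_counters: HashMap<BasicCoverageBlock, CounterId>,
    // Counters attached to edges, keyed by (from, to); this representation
    // does not have to correspond 1:1 to graph nodes.
    edge_counters: HashMap<(BasicCoverageBlock, BasicCoverageBlock), CounterId>,
}

impl CoverageCounters {
    fn make_bcb_counter(&mut self, bcb: BasicCoverageBlock) -> CounterId {
        let id = CounterId((self.bcb_counters.len() + self.edge_counters.len()) as u32);
        self.bcb_counters.insert(bcb, id);
        id
    }
}

fn main() {
    // The graph itself (not shown) stays untouched; only `counters` is mutated.
    let mut counters = CoverageCounters::default();
    let c0 = counters.make_bcb_counter(BasicCoverageBlock(0));
    assert_eq!(c0, CounterId(0));
}
```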
2023-08-13 12:18:06 +10:00
Zalathar
5302c9d451 Accumulate intermediate expressions into CoverageCounters
This avoids the need to pass around a separate vector to accumulate into, and
avoids the need to create a fake empty vector when failure occurs.
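
A tiny sketch (illustrative only, with simplified types) of accumulating into the struct itself instead of threading a separate vector through callers:

```rust
#[derive(Default)]
struct CoverageCounters {
    // Intermediate expressions accumulate here instead of in a Vec that every
    // caller has to pass around (or fake up as empty on the failure path).
    expressions: Vec<String>,
}

impl CoverageCounters {
    fn make_expression(&mut self, text: &str) -> usize {
        self.expressions.push(text.to_string());
        self.expressions.len() - 1 // index of the new expression
    }
}

fn main() {
    let mut counters = CoverageCounters::default();
    let id = counters.make_expression("c0 + c1");
    assert_eq!(counters.expressions[id], "c0 + c1");
}
```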
2023-08-13 12:18:06 +10:00
Zalathar
c74db79c3b Rename helper struct BcbCounters to MakeBcbCounters
This avoids confusion with data structures that actually hold BCB counter
information.
2023-08-13 12:18:06 +10:00
Matthias Krüger
7d78885a8e
Rollup merge of #111891 - rustbox:feat/riscv-isr-cconv, r=jackh726
feat: `riscv-interrupt-{m,s}` calling conventions

Similar to prior support added for the msp430, avr, and x86 targets, this change implements the rough equivalent of clang's [`__attribute__((interrupt))`][clang-attr] for riscv targets, enabling e.g.

```rust
static mut CNT: usize = 0;

pub extern "riscv-interrupt-m" fn isr_m() {
    unsafe {
        CNT += 1;
    }
}
```

to produce highly effective assembly like:

```asm
pub extern "riscv-interrupt-m" fn isr_m() {
420003a0:       1141                    addi    sp,sp,-16
    unsafe {
        CNT += 1;
420003a2:       c62a                    sw      a0,12(sp)
420003a4:       c42e                    sw      a1,8(sp)
420003a6:       3fc80537                lui     a0,0x3fc80
420003aa:       63c52583                lw      a1,1596(a0) # 3fc8063c <_ZN12esp_riscv_rt3CNT17hcec3e3a214887d53E.0>
420003ae:       0585                    addi    a1,a1,1
420003b0:       62b52e23                sw      a1,1596(a0)
    }
}
420003b4:       4532                    lw      a0,12(sp)
420003b6:       45a2                    lw      a1,8(sp)
420003b8:       0141                    addi    sp,sp,16
420003ba:       30200073                mret
```

(disassembly via `riscv64-unknown-elf-objdump -C -S --disassemble ./esp32c3-hal/target/riscv32imc-unknown-none-elf/release/examples/gpio_interrupt`)

This outcome is superior to hand-coded interrupt routines which, lacking visibility into any non-assembly body of the interrupt handler, have to be very conservative and save the [entire CPU state to the stack frame][full-frame-save]. By instead asking LLVM to only save the registers that it uses, we defer the decision to the tool with the best context: it can more accurately account for the cost of spills if it knows that every additional register used is already at the cost of an implicit spill.

At the LLVM level, this is apparently [implemented by] marking every register as "[callee-save]," matching the semantics of an interrupt handler nicely (it has to leave the CPU state just as it found it after its `{m|s}ret`).

This approach is not suitable for every interrupt handler, as it makes no attempt to e.g. save the state in a user-accessible stack frame. For a full discussion of those challenges and tradeoffs, please refer to [the interrupt calling conventions RFC][rfc].

Inside rustc, this implementation differs from prior art because LLVM does not expose the "all-saved" function flavor as a calling convention directly, instead preferring to use an attribute that allows for differentiating between "machine-mode" and "supervisor-mode" interrupts.

Finally, some effort has been made to guide those who may not yet be aware of the differences between machine-mode and supervisor-mode interrupts as to why no `riscv-interrupt` calling convention is exposed through rustc, and similarly for why `riscv-interrupt-u` makes no appearance (as it would complicate future LLVM upgrades).

[clang-attr]: https://clang.llvm.org/docs/AttributeReference.html#interrupt-risc-v
[full-frame-save]: 9281af2ecf/src/lib.rs (L440-L469)
[implemented by]: b7fb2a3fec/llvm/lib/Target/RISCV/RISCVRegisterInfo.cpp (L61-L67)
[callee-save]: 973f1fe7a8/llvm/lib/Target/RISCV/RISCVCallingConv.td (L30-L37)
[rfc]: https://github.com/rust-lang/rfcs/pull/3246
2023-08-09 22:59:58 +02:00
Seth Pellegrino
897c7bb23b feat: riscv-interrupt-{m,s} calling conventions
2023-08-08 18:09:56 -07:00
cedihegi
0166092f87 Added comment on reason for method being public 2023-08-08 17:36:30 +02:00
cedihegi
15d408c6b0 Allow reimplementation of drops_elaborated query
Make the module `inner` and the function `run_analysis_to_runtime_passes` in
`rustc_mir_transform` public, to allow re-implementing the query from the
Rust compiler interface.
2023-08-08 16:30:43 +02:00
bors
84ec2633de Auto merge of #113902 - Enselic:lint-recursive-drop, r=oli-obk
Make `unconditional_recursion` warning detect recursive drops

Closes #55388

Also closes #50049, unless we want to keep it open for the second example, which this PR does not solve; I think it is better to track that work in #57965.

r? `@oli-obk` since you are the mentor for #55388

Unresolved questions:
- [x] There are two false positives that must be fixed before merging (see diff). I suspect the best way to solve them is to perform the analysis after drop elaboration instead of before (as it is now), but I have not explored that any further yet. Could that be an option? **Answer:** Yes, that solved the problem.

`@rustbot` label +T-compiler +C-enhancement +A-lint
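
For illustration, my own example (not taken from the PR) of the self-recursive drop pattern this lint can now flag:

```rust
struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        // Constructing another `Guard` here means its drop runs at the end of
        // this scope, which calls back into this function with no base case.
        let _another = Guard;
    }
}

fn main() {
    // Creating a `Guard` here and letting it drop would recurse until the
    // stack overflows, which is the behavior the improved lint warns about.
}
```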
2023-08-07 13:39:28 +00:00
bors
f3623871cf Auto merge of #114502 - cjgillot:steal-ctfe, r=oli-obk
Steal MIR for CTFE when possible.

Some bodies, like constants, have CTFE MIR but no optimized MIR.
In that case, have `mir_for_ctfe` steal the MIR instead of cloning it.
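
A minimal sketch of the steal-vs-clone idea, using a stand-in wrapper rather than rustc's actual `Steal` type:

```rust
use std::cell::RefCell;

// Stand-in for a steal-able MIR slot: the body can be handed out by value
// exactly once instead of being cloned.
struct Steal<T>(RefCell<Option<T>>);

impl<T> Steal<T> {
    fn new(value: T) -> Self {
        Steal(RefCell::new(Some(value)))
    }
    fn steal(&self) -> T {
        self.0.borrow_mut().take().expect("value was already stolen")
    }
}

fn main() {
    // A constant's CTFE MIR has no later consumer (no optimized MIR is built
    // from it), so the query can take ownership instead of cloning.
    let ctfe_mir = Steal::new(vec!["_0 = const 1_u32", "return"]);
    let body = ctfe_mir.steal();
    assert_eq!(body.len(), 2);
}
```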
2023-08-06 22:02:12 +00:00
Matthias Krüger
13de583583
Rollup merge of #114505 - ouz-a:cleanup_mir, r=RalfJung
Add documentation to has_deref

Documentation of `has_deref` needed some polish to be clearer about where it should be used and what its purpose is.

cc https://github.com/rust-lang/rust/issues/114401

r? `@RalfJung`
2023-08-06 17:26:29 +02:00
ouz-a
6df546281b cleanup misinformation regarding has_deref 2023-08-06 17:29:09 +03:00
Camille GILLOT
02e10a054e Steal MIR for CTFE when possible. 2023-08-05 21:16:55 +00:00
bors
1cabb8ed23 Auto merge of #114459 - cjgillot:simplify-ctfe, r=oli-obk
Do not run ConstProp on mir_for_ctfe.

This pass does not seem to be useful any more. The const-prop lints are now run by `tcx.mir_drops_elaborated_and_const_checked`, and the const-prop opt should never emit any diagnostic.
2023-08-05 09:08:34 +00:00
Camille GILLOT
e2230985b3 Do not run ConstProp on mir_for_ctfe. 2023-08-05 06:21:33 +00:00
Matthias Krüger
f36a9b5e18
Rollup merge of #113534 - oli-obk:simd_shuffle_dehackify, r=workingjubilee
Forbid old-style `simd_shuffleN` intrinsics

Don't merge before https://github.com/rust-lang/packed_simd/pull/350 has made its way to crates.io

We used to support specifying the lane length of simd_shuffle ops by attaching the lane length to the name of the intrinsic (like `simd_shuffle16`). After this PR, you cannot do that anymore; instead you need to either rely on inference of the `idx` argument type or specify it as `simd_shuffle::<_, [u32; 16], _>`.
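
For illustration, a hedged sketch of a new-style call on a nightly of that era (assuming the `platform_intrinsics` and `repr_simd` feature gates and an `extern "platform-intrinsic"` declaration of `simd_shuffle`):

```rust
#![feature(platform_intrinsics, repr_simd)]

#[repr(simd)]
#[derive(Copy, Clone, Debug, PartialEq)]
struct U32x4(u32, u32, u32, u32);

extern "platform-intrinsic" {
    fn simd_shuffle<T, I, U>(x: T, y: T, idx: I) -> U;
}

fn main() {
    let v = U32x4(1, 2, 3, 4);
    // The lane count now comes from the type of `idx` (here `[u32; 4]`),
    // not from a numeric suffix such as `simd_shuffle4`.
    const IDX: [u32; 4] = [3, 2, 1, 0];
    let reversed: U32x4 = unsafe { simd_shuffle::<_, [u32; 4], U32x4>(v, v, IDX) };
    assert_eq!(reversed, U32x4(4, 3, 2, 1));
}
```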

r? `@workingjubilee`
2023-08-04 07:25:45 +02:00
Michael Goulet
3c9549b349 Explicitly don't inline user-written rust-call fns 2023-08-03 18:35:56 +00:00
Michael Goulet
0391af0e1f Only unpack tupled args in inliner if we expect args to be unpacked 2023-08-03 18:35:56 +00:00
Oli Scherer
4457ef2c6d Forbid old-style simd_shuffleN intrinsics 2023-08-03 09:29:00 +00:00
Michael Goulet
99969d282b Use upvar_tys in more places, make it a list 2023-08-01 23:19:31 +00:00
Zalathar
3920e07f0b Make coverage counter IDs count up from 0, not 1
Operand types are now tracked explicitly, so there is no need to reserve ID 0
for the special always-zero counter.

As part of the renumbering, this change fixes an off-by-one error in the way
counters were counted by the `coverageinfo` query. As a result, functions
should now have exactly the number of counters they actually need, instead of
always having an extra counter that is never used.
2023-08-01 11:29:55 +10:00
Zalathar
f103db894f Make coverage expression IDs count up from 0, not down from u32::MAX
Operand types are now tracked explicitly, so there is no need for expression
IDs to avoid counter IDs by descending from `u32::MAX`. Instead they can just
count up from 0, and can be used directly as indices when necessary.
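
A tiny illustration (my own, simplified) of why 0-based IDs are convenient: they can be used directly as slice indices, with no offset for a reserved ID:

```rust
#[derive(Copy, Clone)]
struct ExpressionId(u32);

fn operand_value(values: &[i64], id: ExpressionId) -> i64 {
    // Direct indexing: no "+1"/"-1" adjustment and no `u32::MAX - id` mapping.
    values[id.0 as usize]
}

fn main() {
    let values = vec![7, 11, 13];
    assert_eq!(operand_value(&values, ExpressionId(0)), 7);
}
```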
2023-08-01 11:29:55 +10:00
Zalathar
1a014d42f4 Replace ExpressionOperandId with enum Operand
Because the three kinds of operand are now distinguished explicitly, we no
longer need fiddly code to disambiguate counter IDs and expression IDs based on
the total number of counters/expressions in a function.

This does increase the size of operands from 4 bytes to 8 bytes, but that
shouldn't be a big deal since they are mostly stored inside boxed structures,
and the current coverage code is not particularly size-optimized anyway.
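
A minimal sketch (my own stand-in types, not rustc's definitions) of the explicit-operand idea and the 4-to-8-byte size change mentioned above:

```rust
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct CounterId(u32);

#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct ExpressionId(u32);

// The operand kind is carried explicitly, so no range-based disambiguation
// between counter IDs and expression IDs is needed.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
enum Operand {
    Zero,
    Counter(CounterId),
    Expression(ExpressionId),
}

fn main() {
    // A bare u32 ID is 4 bytes; adding the discriminant doubles it to 8.
    assert_eq!(std::mem::size_of::<u32>(), 4);
    assert_eq!(std::mem::size_of::<Operand>(), 8);
}
```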
2023-08-01 11:29:55 +10:00
Matthias Krüger
23815467a2 inline format!() args up to and including rustc_middle 2023-07-30 13:18:33 +02:00
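
For reference, the mechanical shape of this change (inline format args have been stable since Rust 1.58):

```rust
fn main() {
    let krate = "rustc_middle";
    // Before: positional argument.
    let old = format!("processing {}", krate);
    // After: the argument is inlined into the format string.
    let new = format!("processing {krate}");
    assert_eq!(old, new);
}
```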
Zalathar
8745fdc448 Replace a lazy RefCell<Option<T>> with OnceCell<T> 2023-07-28 12:55:13 +10:00
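
A small sketch of the swap described above: `OnceCell<T>` replaces the lazily filled `RefCell<Option<T>>` (illustrative code, not the actual call site):

```rust
use std::cell::OnceCell;

fn main() {
    // Computed at most once, then shared; no borrow_mut/Option juggling.
    let cache: OnceCell<Vec<u32>> = OnceCell::new();
    let value = cache.get_or_init(|| (0..4).collect());
    assert_eq!(value.len(), 4);
}
```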
Matthias Krüger
fa21a8c6f8
Rollup merge of #114075 - matthiaskrgr:fmt_args_rustc_3, r=wesleywiser
inline format!() args from rustc_codegen_llvm to the end (4)

r? `@WaffleLapkin`
2023-07-27 06:04:13 +02:00
Matthias Krüger
c64ef5e070 inline format!() args from rustc_codegen_llvm to the end (4)
r? @WaffleLapkin
2023-07-25 23:20:28 +02:00
Ralf Jung
77ff1b83cd interpret: make read functions generic over operand type 2023-07-25 22:33:59 +02:00
Martin Nordholts
b4b33df983 Make unconditional_recursion warning detect recursive drops 2023-07-22 14:04:45 +02:00
Camille GILLOT
b6cd7006e0 Reuse MIR validator for inliner. 2023-07-21 13:58:33 +00:00
bors
6b290367ec Auto merge of #113802 - cjgillot:check-debuginfo, r=compiler-errors
Substitute types before checking inlining compatibility.

Addresses https://github.com/rust-lang/rust/issues/112332 and https://github.com/rust-lang/rust/issues/113781

I don't have a minimal test, but this seems to remove the ICE locally.

This whole pre-inlining validation mirrors the "real" MIR validation pass to verify that inlined MIR will still pass validation.
The debuginfo loop is added because MIR validation checks projections in debuginfo.
Likewise, MIR validation only checks `is_subtype`, so there is no reason for a stronger check.

The types were not being substituted in `check_equal`, so we were not bailing out of inlining if the substituted MIR callee body would not pass validation.
2023-07-21 09:14:55 +00:00
Camille GILLOT
4a177406ee Inline should_const_prop. 2023-07-20 21:30:51 +00:00
Camille GILLOT
3f708add2f Remove visit_terminator. 2023-07-20 21:30:51 +00:00
Camille GILLOT
ccfa9af29d Propagate ScalarPair for any type. 2023-07-20 21:30:51 +00:00
Camille GILLOT
895e2159f8 Also propagate ScalarPair operands. 2023-07-20 21:30:51 +00:00
Camille GILLOT
12a2edd149 Always propagate into operands. 2023-07-20 21:30:51 +00:00
Camille GILLOT
45ffe41d14 Substitute types before checking compatibility. 2023-07-19 12:38:15 +00:00
Camille GILLOT
f5feb3e3ca Turn copy into moves during DSE. 2023-07-19 09:59:12 +00:00
bors
079e544174 Auto merge of #109025 - cjgillot:refprop-dbg, r=JakobDegen
Enable MIR reference propagation by default
2023-07-14 17:32:59 +00:00
Mahdi Dibaiee
e55583c4b8 refactor(rustc_middle): Substs -> GenericArg 2023-07-14 13:27:35 +01:00
Michael Woerister
457b787a52 Introduce ExtendUnord trait for collections that can safely consume UnordItems. 2023-07-14 10:10:15 +02:00
Mark Rousskov
cc907f80b9 Re-format let-else per rustfmt update 2023-07-12 21:49:27 -04:00
Ralf Jung
dd453a6a99 miri: protect Move() function arguments during the call 2023-07-11 21:59:01 +02:00
Camille GILLOT
a5031d569e Call super for debuginfo. 2023-07-10 16:01:19 +00:00