If any block on a goto chain has more than one predecessor, then collapsing
the chain would leave the new start block with basic block predecessors.
Skip the transformation for the start block altogether, to avoid
violating the new invariant that the start block does not have any basic
block predecessors.
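A self-contained toy illustration (hypothetical data structures, not rustc's MIR) of how a block on the start block's goto chain can have a second predecessor:

```rust
// Toy CFG: each block either ends in `goto target` (Some(target)) or
// returns (None). Block 0 is the start block.
fn predecessors(goto: &[Option<usize>], block: usize) -> Vec<usize> {
    (0..goto.len()).filter(|&b| goto[b] == Some(block)).collect()
}

fn main() {
    // Block 0 (start) -> goto 2, block 1 -> goto 2, block 2 -> return.
    let goto: Vec<Option<usize>> = vec![Some(2), Some(2), None];

    // Block 2 lies on the start block's goto chain but has two predecessors.
    assert_eq!(predecessors(&goto, 2), vec![0, 1]);

    // Folding block 2 into the start block would retarget block 1's goto at
    // the start block, giving it a predecessor; hence the chain that begins
    // at the start block is left alone.
}
```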
Rollup of 10 pull requests
Successful merges:
- #88292 (Enable --generate-link-to-definition for rustc's docs)
- #88729 (Recover from `Foo(a: 1, b: 2)`)
- #88875 (cleanup(rustc_trait_selection): remove vestigial code from rustc_on_unimplemented)
- #88892 (Move object safety suggestions to the end of the error)
- #88928 (Document the closure arguments for `reduce`.)
- #88976 (Clean up and add doc comments for CStr)
- #88983 (Allow calling `get_body_with_borrowck_facts` without `-Z polonius`)
- #88985 (Update clobber_abi list to include k[1-7] regs)
- #88986 (Update the backtrace crate)
- #89009 (Fix typo in `break` docs)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Querying the layout of a generator requires its optimized MIR, so
computing layout during MIR optimization of a generator might create a
query cycle. Disable RemoveZsts in generators to avoid the issue
(a similar approach is already used in the ConstProp transform).
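A hedged sketch of where the early return goes (the names follow rustc's internal MIR-pass API as I understand it and may differ from the actual change):

```rust
impl<'tcx> MirPass<'tcx> for RemoveZsts {
    fn run_pass(&self, tcx: TyCtxt<'tcx>, body: &mut Body<'tcx>) {
        // A generator's layout needs its optimized MIR, so requesting the
        // layout here would form a query cycle; skip generator bodies.
        if tcx.type_of(body.source.def_id()).is_generator() {
            return;
        }
        // ... remove zero-sized assignments as before ...
    }
}
```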
Introduce NullOp::AlignOf
This PR introduces `Rvalue::NullaryOp(NullOp::AlignOf, ty)`, which will be lowered from `align_of`, similar to `size_of` lowering to `Rvalue::NullaryOp(NullOp::SizeOf, ty)`.
The changes were originally part of #88700, but since they do not depend on the other changes and could have a performance impact on their own, they have been separated into their own PR.
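For context, a small usage-level illustration (ordinary Rust, not compiler-internal code); per the PR description, the intrinsics backing these functions lower to `NullOp::SizeOf` and, with this change, `NullOp::AlignOf` instead of remaining calls:

```rust
use std::mem::{align_of, size_of};

// Both values are compile-time constants of the type's layout, which is why
// a nullary MIR operation is a natural representation for them.
fn describe<T>() -> (usize, usize) {
    (size_of::<T>(), align_of::<T>())
}

fn main() {
    let (size, align) = describe::<u32>();
    println!("u32: size = {} bytes, align = {} bytes", size, align);
}
```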
Add -Z panic-in-drop={unwind,abort} command-line option
With `-Z panic-in-drop=abort`, an unwinding panic that attempts to escape a `Drop` implementation makes the process abort instead. This has several benefits:
- The current behavior when unwinding out of `Drop` is very unintuitive and easy to miss: unwinding continues, but the remaining drops in scope are simply leaked.
- A lot of unsafe code doesn't expect drops to unwind, which can lead to unsoundness:
- https://github.com/servo/rust-smallvec/issues/14
- https://github.com/bluss/arrayvec/issues/3
- There is a code size and compilation time cost to supporting unwinding out of drops: LLVM needs to generate extra landing pads for all calls in a drop implementation. This can compound when functions are inlined, since unwinding will then continue on to process drops in the callee, which can itself unwind, and so on.
- Initial measurements show a 3% size reduction and up to 10% compilation time reduction on some crates (`syn`).
One thing to note about `-Z panic-in-drop=abort` is that *all* crates must be built with this option for it to be sound, since it makes the compiler assume that dropping `Box<dyn Any>` will never unwind.
cc https://github.com/rust-lang/lang-team/issues/97
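As a hedged, standalone illustration of the behavior this option controls (not code from the PR):

```rust
use std::panic;

struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        // A lot of unsafe code assumes a destructor can never do this.
        panic!("panic escaping a destructor");
    }
}

fn main() {
    let result = panic::catch_unwind(|| {
        let _guard = Guard;
        // `_guard` is dropped here and its destructor panics.
    });
    // With `-Z panic-in-drop=unwind` (the default) the panic unwinds out of
    // `drop` and is caught here; with `-Z panic-in-drop=abort` the process
    // aborts before this line runs.
    println!("caught a panic from a destructor: {}", result.is_err());
}
```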
generic_const_exprs: use thir for abstract consts instead of mir
Changes `AbstractConst` building to use `thir` instead of `mir`, so that there is less chance of consts unifying when they shouldn't just because lowering to MIR dropped information (see the `abstract-consts-as-cast-5.rs` test).
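A hedged sketch of the kind of program where this matters (nightly-only, and not the actual test from the PR): two array lengths that differ only by a cast and must not be treated as the same abstract const.

```rust
#![feature(generic_const_exprs)]
#![allow(incomplete_features, dead_code)]

// `Full<N>` holds N bytes; `Truncated<N>` holds `(N as u8) as usize` bytes.
// If building the abstract const erased the cast, both lengths would look
// like the bare parameter `N`, even though for N = 300 they are 300 and 44.
struct Full<const N: usize>([u8; N]);
struct Truncated<const N: usize>([u8; (N as u8) as usize])
where
    [(); (N as u8) as usize]:;

fn main() {
    // 300 wraps to 44 when cast through u8, so the two lengths really differ.
    assert_eq!((300_u16 as u8) as usize, 44);
}
```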
r? `@lcnr`
Encode spans relative to the enclosing item
The aim of this PR is to avoid recomputing queries when code is moved without modification.
MCP at https://github.com/rust-lang/compiler-team/issues/443
This is achieved by:
1. storing the HIR owner LocalDefId information inside the span;
2. encoding and decoding spans relative to the enclosing item in the incremental on-disk cache;
3. marking a dependency on the `source_span(LocalDefId)` query when we translate a span from the short (`Span`) representation to its explicit (`SpanData`) representation.
Since all client code uses `Span`, step 3 ensures that all manipulations
of span byte positions actually create the dependency edge between
the caller and the `source_span(LocalDefId)` query.
That query returns the actual absolute span of the parent item.
As a consequence, any source code motion that changes the absolute byte position of a node will either:
- modify the distance to the parent's beginning, and therefore change the relative span's hash; or
- dirty `source_span`, and trigger the incremental recomputation of all code that
depends on the span's absolute byte position.
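To make the mechanism concrete, here is a minimal sketch with hypothetical types (not rustc's actual `Span` internals): spans are stored as offsets from the enclosing item, and recovering absolute positions has to go through a `source_span`-like lookup, which is where the dependency edge comes from.

```rust
#[derive(Clone, Copy)]
struct LocalDefId(u32);

#[derive(Clone, Copy, Debug, PartialEq)]
struct AbsoluteSpan { lo: u32, hi: u32 }

/// What the incremental cache stores: byte offsets relative to the
/// enclosing item rather than absolute positions.
#[derive(Clone, Copy)]
struct RelativeSpan { parent: LocalDefId, lo: u32, hi: u32 }

/// Stand-in for the `source_span(LocalDefId)` query: the absolute span of
/// the parent item. In rustc, invoking the query records a dependency edge.
fn source_span(parent: LocalDefId) -> AbsoluteSpan {
    AbsoluteSpan { lo: 1_000 * parent.0, hi: 1_000 * parent.0 + 500 }
}

fn encode(span: AbsoluteSpan, parent: LocalDefId) -> RelativeSpan {
    let base = source_span(parent);
    RelativeSpan { parent, lo: span.lo - base.lo, hi: span.hi - base.lo }
}

fn decode(span: RelativeSpan) -> AbsoluteSpan {
    // Recomputing the absolute position goes through `source_span`, so any
    // code inspecting byte positions depends on the parent item's span.
    let base = source_span(span.parent);
    AbsoluteSpan { lo: base.lo + span.lo, hi: base.lo + span.hi }
}

fn main() {
    let parent = LocalDefId(3);
    let original = AbsoluteSpan { lo: 3_100, hi: 3_150 };
    assert_eq!(decode(encode(original, parent)), original);
}
```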
With this scheme, I believe the dependency tracking to be accurate.
For the moment, the spans are marked during lowering.
I'd rather do this during def-collection,
but the AST MutVisitor is not practical enough just yet.
The only difference is that macro-expanded spans are attached
to their expansion point instead of to the macro itself.