The PR builder on Azure currently takes 2.5h, which is a bit long, so
this commit disables debug assertions and LLVM assertions in an attempt
to speed up that builder and have PR builds come back more quickly. The
other builders continue to enable debug assertions and test the
compiler with them.
Clean up MIR drop generation
* Don't assign twice to the destination of a `while` loop containing a `break` expression (a minimal example follows this list)
* Use `as_temp` to evaluate statement expression
* Avoid consecutive `StorageLive`s for the condition of a `while` loop
* Unify `return`, `break` and `continue` handling, and move it to `scopes.rs`
* Make some of the `scopes.rs` internals private
* Don't use `Place`s that are always `Local`s in MIR drop generation
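As mentioned in the first bullet, here is a minimal example of the kind of code involved (illustrative only; it is not taken from the linked issues):

```rust
// A `while` loop containing a `break`. The loop's `()` destination used to be
// assigned twice during MIR lowering for code shaped like this; the cleanup
// avoids the redundant assignment.
fn main() {
    let mut i = 0;
    while i < 10 {
        if i == 5 {
            break;
        }
        i += 1;
    }
}
```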
Closes #42371, closes #61579, closes #61731, closes #61834, closes #61910, closes #62115
rustc: correctly transform memory_index mappings for generators.
Fixes #61793, closes #62011 (previous attempt at fixing #61793).
During #60187, I made the mistake of suggesting that the (re-)computation of `memory_index` in `ty::layout`, after generator-specific logic split/recombined fields, be based on the `offsets` of those fields (which needed to be computed anyway) rather than on the `memory_index` itself.
`memory_index` maps each field to its in-memory order index, which ranges over the same `0..n` values as the fields themselves, making it a bijective mapping, and more specifically a permutation (indeed, it's the permutation resulting from field reordering optimizations).
Each field has a unique "memory index", so a sort based on these indices, even an unstable one, will not put the fields in the wrong order. Offsets don't have that property, because ZSTs do not increase the offset, so sorting based on field offsets alone can (and did) result in wrong orders.
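A tiny, self-contained illustration of that failure mode (the layout here is made up, not taken from the affected generators): once a ZST shares an offset with another field, sorting by offset has a tie that an unstable sort may break the wrong way, while `memory_index` values are all distinct.

```rust
fn main() {
    // (field index, offset in bytes): field 1 is a ZST sitting at offset 8,
    // the same offset as field 2, so offsets alone cannot order them.
    let mut fields = [(0usize, 0u64), (1, 8), (2, 8)];
    // An unstable sort by offset is free to emit fields 1 and 2 in either order.
    fields.sort_unstable_by_key(|&(_, offset)| offset);
    println!("{:?}", fields);

    // `memory_index` is a permutation of 0..n, so every key is unique and
    // sorting by it (stable or not) always reproduces the memory order.
    let memory_index = [0usize, 1, 2];
    assert_eq!(memory_index.len(), fields.len());
}
```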
Instead of going back to sorting based on (slices/subsets of) `memory_index`, or special-casing ZSTs so that sorting based on offsets (presumably) produces the right results, as #62011 does, I opted to drop sorting altogether and focus on `O(n)` operations involving *permutations* (a short standalone sketch follows this list):
* a permutation is easily inverted (see the `invert_mapping` `fn`)
* an `inverse_memory_index` was already employed in other parts of the `ty::layout` code (that is, a mapping from memory order to field indices)
* inverting twice produces the original permutation, so you can invert, modify, and invert again, if it's easier to modify the inverse mapping than the direct one
* you can modify/remove elements in a permutation, as long as the result remains dense (i.e. using every integer in `0..len`, without gaps)
* for splitting a `0..n` permutation into disjoint `0..x` and `x..n` ranges, you can pick the elements based on a `i < x` / `i >= x` predicate, and for the latter, also subtract `x` to compact the range to `0..n-x`
* in the general case, for taking an arbitrary subset of the permutation, you need a renumbering from that subset to a dense `0..subset.len()` - but notably, this is still `O(n)`!
* you can merge permutations, as long as the result remains disjoint (i.e. each element is unique)
* for concatenating two `0..n` and `0..m` permutations, you can renumber the elements in the latter to `n..n+m`
* some of these operations can be combined, and an inverse mapping (be it a permutation or not) can still be used instead of a forward one by changing the "domain" of the loop performing the operation
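Here is the promised sketch: standalone `O(n)` versions of the operations above (`invert_mapping` mirrors the name used in the PR; the other names are just for illustration).

```rust
/// Invert a permutation of 0..n: if `map[i] == j`, then `inverted[j] == i`.
fn invert_mapping(map: &[usize]) -> Vec<usize> {
    let mut inverted = vec![0; map.len()];
    for (i, &j) in map.iter().enumerate() {
        inverted[j] = i;
    }
    inverted
}

/// Split a 0..n permutation into its elements `< x` (still a dense 0..x range)
/// and its elements `>= x` (renumbered down to 0..n-x), keeping relative order.
fn split_permutation(map: &[usize], x: usize) -> (Vec<usize>, Vec<usize>) {
    let lo = map.iter().copied().filter(|&i| i < x).collect();
    let hi = map.iter().copied().filter(|&i| i >= x).map(|i| i - x).collect();
    (lo, hi)
}

/// Concatenate a 0..n permutation and a 0..m permutation into a 0..n+m one,
/// renumbering the second one into n..n+m so the result stays disjoint.
fn concat_permutations(a: &[usize], b: &[usize]) -> Vec<usize> {
    a.iter().copied().chain(b.iter().map(|&i| i + a.len())).collect()
}

fn main() {
    let perm = vec![2, 0, 3, 1];
    // Inverting twice gives back the original permutation.
    assert_eq!(invert_mapping(&invert_mapping(&perm)), perm);

    let (lo, hi) = split_permutation(&perm, 2);
    assert_eq!(lo, vec![0, 1]);
    assert_eq!(hi, vec![0, 1]);
    assert_eq!(concat_permutations(&lo, &hi), vec![0, 1, 2, 3]);
}
```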
I wish I had a nicer / more mathematical description of the recombinations involved, but my focus was to fix the bug (in a way which preserves information more directly than sorting would), so I may have missed potential changes in the surrounding generator layout code that would make this all more straightforward.
r? @tmandry
Rollup of 7 pull requests
Successful merges:
- #61814 (Fix an ICE with uninhabited consts)
- #61987 (rustc: produce AST instead of HIR from `hir::lowering::Resolver` methods.)
- #62055 (Fix error counting)
- #62078 (Remove built-in derive macros `Send` and `Sync`)
- #62085 (Add test for issue-38591)
- #62091 (HirIdification: almost there)
- #62096 (Implement From<Local> for Place and PlaceBase)
Failed merges:
r? @ghost
HirIdification: almost there
I'm beginning to run out of stuff to HirIdify 😉.
This time I targeted mainly `hir::map::{find, get_parent_node}`, but a few other bits got changed too.
r? @Zoxc
Fix error counting
Count duplicate errors for `track_errors` and other error counting checks.
Add FIXMEs to make it clear that we should be moving away from this kind of logic.
Closes #61663
rustc: produce AST instead of HIR from `hir::lowering::Resolver` methods.
This avoids synthesizing HIR nodes in `rustc_resolve`, and having `rustc::hir::lowering` patch up the result after the fact (I suspect this is even more significant for @Zoxc's changes to arena-allocate the HIR).
r? @oli-obk
rustbuild: detect cxx for all targets
Replaces #61544. Fixes #59917.
We need CXX to build llvm-libunwind, which can be enabled for all targets.
As we needed it for all hosts anyway, just move the detection so that it is run for all targets (which include all hosts) instead.
Fix HIR visit order
Fixes#61442
When rustc::middle::region::ScopeTree computes its yield_in_scope
field, it relies on the HIR visitor order to properly compute which
types must be live across yield points. In order for the computed scopes
to agree with the generated MIR, we must ensure that expressions
evaluated before a yield point are visited before the 'yield'
expression.
However, the visitor order for ExprKind::AssignOp
was incorrect. The left-hand side of a compound assignment expression is
evaluated before the right-hand side, but the right-hand expression was
being visited before the left-hand expression. If the left-hand
expression caused a new type to be introduced (e.g. through a
deref-coercion), the new type would be incorrectly seen as occurring
*after* the yield point, instead of before. This leads to a mismatch
between the computed generator types and the MIR, since the MIR will
correctly see the type as being live across the yield point.
To fix this, we correct the visitor order for ExprKind::AssignOp
to reflect the actual evaluation order.
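A toy, self-contained model of the corrected ordering (this is not rustc's actual visitor; it only demonstrates why the LHS of a compound assignment must be visited before the RHS):

```rust
// A miniature expression tree standing in for the HIR.
enum Expr {
    Place(&'static str),            // e.g. the (possibly deref-coerced) LHS
    Yield,                          // a yield point inside the RHS
    AssignOp(Box<Expr>, Box<Expr>), // `lhs op= rhs`
}

fn visit(expr: &Expr, order: &mut Vec<String>) {
    match expr {
        Expr::Place(name) => order.push(format!("place {}", name)),
        Expr::Yield => order.push("yield".to_string()),
        Expr::AssignOp(lhs, rhs) => {
            // Corrected order: LHS before RHS, matching evaluation order.
            // Visiting the RHS first would record anything introduced on the
            // LHS (e.g. by a deref-coercion) *after* a yield in the RHS.
            visit(lhs, order);
            visit(rhs, order);
        }
    }
}

fn main() {
    // Models `target op= yield ...` inside a generator body.
    let expr = Expr::AssignOp(Box::new(Expr::Place("target")), Box::new(Expr::Yield));
    let mut order = Vec::new();
    visit(&expr, &mut order);
    assert_eq!(order, ["place target", "yield"]);
}
```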
rustc_mir: Hide initial block state when defining transfer functions
This PR addresses [this FIXME](2887008e0c/src/librustc_mir/dataflow/mod.rs (L594-L596)).
This makes `sets.on_entry` inaccessible in `{before_,}{statement,terminator}_effect`. This field was meant to allow implementors of `BitDenotation` to access the initial state for each block (optionally with the effect of all previous statements applied via `accumulates_intrablock_state`) while defining transfer functions. However, the ability to set the initial value for the entry set of each basic block (except for START_BLOCK) no longer exists. As a result, this functionality is mostly useless, and when it *was* used it was used erroneously (see #62007).
Since `on_entry` is now useless, we can also remove `BlockSets`, which held the `gen`, `kill`, and `on_entry` bitvectors, and replace it with a `GenKill` struct. Variables of this type are called `trans` since they represent a transfer function. `GenKill`s are stored contiguously in `AllSets`, which reduces the number of bounds checks and may improve cache performance, since the `gen` and `kill` sets are almost never accessed without each other.
Replacing `BlockSets` with `GenKill` allows us to define some new helper functions which streamline dataflow iteration and the dataflow-at-location APIs. Notably, `state_for_location` used a subtle side-effect of the `kill`/`kill_all` setters to apply the transfer function, and could be incorrect if a transfer function depended on effects of previous statements in the block on `gen_set`.
Additionally, this PR merges `BitSetOperator` and `InitialFlow` into one trait. Since the value of `InitialFlow` defines the semantics of the `join` operation, there's no reason to have separate traits for each. We can add a default impl of `join` which branches based on `BOTTOM_VALUE`; this branch should get optimized away.
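A minimal, self-contained sketch of those two ideas (a gen/kill transfer function plus a `join` whose direction is chosen by the bottom value); `GenKill` and `BOTTOM_VALUE` mirror the names above, while the bit-set type is a simplified stand-in for rustc's:

```rust
/// Simplified stand-in for rustc's dataflow bit sets.
#[derive(Clone, PartialEq, Debug)]
struct Bits(Vec<bool>);

/// A transfer function expressed as gen/kill sets, like the new `GenKill` struct.
struct GenKill {
    gen_set: Bits,
    kill_set: Bits,
}

impl GenKill {
    /// Apply the transfer function to a block's entry state: out = (in | gen) & !kill.
    fn apply(&self, state: &mut Bits) {
        for (bit, (&g, &k)) in state
            .0
            .iter_mut()
            .zip(self.gen_set.0.iter().zip(&self.kill_set.0))
        {
            *bit = (*bit || g) && !k;
        }
    }
}

/// Merged `BitSetOperator` + `InitialFlow`: the bottom value decides whether
/// `join` is a union (may-analysis) or an intersection (must-analysis).
trait Analysis {
    const BOTTOM_VALUE: bool;

    fn join(inout: &mut Bits, other: &Bits) {
        for (a, &b) in inout.0.iter_mut().zip(&other.0) {
            *a = if Self::BOTTOM_VALUE { *a && b } else { *a || b };
        }
    }
}

/// Example "may" analysis: bits start out unset and are unioned at joins.
struct MaybeSomething;
impl Analysis for MaybeSomething {
    const BOTTOM_VALUE: bool = false;
}

fn main() {
    let trans = GenKill {
        gen_set: Bits(vec![true, false]),
        kill_set: Bits(vec![false, true]),
    };
    let mut state = Bits(vec![MaybeSomething::BOTTOM_VALUE; 2]);
    trans.apply(&mut state);
    assert_eq!(state, Bits(vec![true, false]));

    // Merge a predecessor's exit state into a successor's entry set.
    let mut entry = Bits(vec![MaybeSomething::BOTTOM_VALUE; 2]);
    MaybeSomething::join(&mut entry, &state);
    assert_eq!(entry, Bits(vec![true, false]));
}
```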
Refactor miri pointer checks
Centralize bounds, alignment and NULL checking for memory accesses in one function: `memory.check_ptr_access`. That function also takes care of converting a `Scalar` to a `Pointer`, should that be needed. Not all accesses need that though: if the access has size 0, `None` is returned. Everyone accessing memory based on a `Scalar` should use this method to get the `Pointer` they need.
All operations on the `Allocation` work on `Pointer` inputs and expect all the checks to have happened (and will ICE if the bounds are violated). The operations on `Memory` work on `Scalar` inputs and do the checks themselves.
The only other public method to check pointers is `memory.ptr_may_be_null`, which is needed in a few places. No need for `check_align` or similar methods. That makes the public API surface much easier to use and harder to misuse.
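An illustrative caller-side sketch of that pattern; the exact signature is an assumption inferred from the description above, not copied from the code:

```rust
// Hedged sketch: every Scalar-based access asks `check_ptr_access` first.
// `None` means the access has size 0, so no actual `Pointer` is needed.
let maybe_ptr = self.memory.check_ptr_access(scalar, size, align)?;
if let Some(ptr) = maybe_ptr {
    // Only a checked `Pointer` may be handed to the `Allocation`-level
    // operations, which assume bounds/alignment/NULL have been verified
    // (and ICE if the bounds turn out to be violated).
    // ... read or write the allocation through `ptr` ...
}
```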
This should be largely no-functional-change, except that ZST accesses to a "true" pointer that is dangling or out-of-bounds are now considered UB. This is to be conservative wrt. whatever LLVM might be doing.
While I am at it, this also removes the assumption that the vtable part of a `dyn Trait`-fat-pointer is a `Pointer` (as opposed to a pointer cast to an integer, stored as raw bits).
r? @oli-obk