incr.comp.: Some preparatory work for caching more query results.
This PR
* adds and updates some encoding/decoding routines for various query result types so they can be cached later, and
* adds missing `[input]` annotations for a few `DepNode` variants.
The situation around having to explicitly mark dep-nodes/queries as inputs is not really satisfactory. I hope we can find a way of making this more fool-proof in the future.
r? @nikomatsakis
avoid type-live-for-region obligations on dummy nodes
Type-live-for-region obligations on DUMMY_NODE_ID cause an ICE, and it
turns out that in the few cases where they do arise, these obligations are
not actually needed, because they are verified elsewhere.
Fixes #46069.
Beta-nominating because this is a regression for our new beta.
r? @nikomatsakis
To handle packed structs with destructors (you'd think these are a rare
case, but the `#[repr(packed)] struct Packed<T>(T);` pattern is
ever-popular, so packed structs with destructors must be handled to
avoid monomorphization-time errors), drops of subfields of packed
structs should drop a local move of the field instead of the original
one.
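For concreteness, here is a minimal sketch of the pattern in question (illustrative only, not a test from this PR):

```rust
// `#[repr(packed)]` gives the struct alignment 1, so the `Vec` field may end
// up at a misaligned address. Dropping the field in place would require a
// reference to that address, so the drop glue instead moves the field into a
// properly aligned local and drops the copy.
#[repr(packed)]
struct Packed<T>(T);

fn main() {
    let p = Packed(vec![1u32, 2, 3]); // the field type has a destructor
    drop(p); // must not take a reference to the possibly misaligned field
}
```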
cc #27060 - this should deal with that issue after codegen of drop glue
is updated.
The new errors need to be changed to future-compatibility warnings, but
I'd rather first do a crater run with them as errors to assess the
impact.
Add a MIR pass to lower 128-bit operators to lang item calls
Runs only with `-Z lower_128bit_ops` since it's not hooked into targets yet.
This isn't really useful on its own, but the declarations for the lang items need to be in the compiler before compiler-builtins can be updated to define them, so this is part 1 of at least 3.
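As a conceptual sketch (the function name below is illustrative, not necessarily one of the real lang items this PR declares), the lowering amounts to replacing native 128-bit arithmetic with ordinary calls whose bodies compiler-builtins will later supply:

```rust
// Hypothetical stand-in for one of the lang items; the real declaration lives
// in the compiler and its body in compiler-builtins.
fn i128_mul(a: i128, b: i128) -> i128 {
    // placeholder body: the out-of-tree implementation would build the
    // multiply out of 64-bit operations
    a.wrapping_mul(b)
}

fn main() {
    let (a, b) = (3_i128, 7_i128);
    // with `-Z lower_128bit_ops`, a MIR multiply of i128 operands would be
    // rewritten into a call like this instead of a native 128-bit mul
    assert_eq!(i128_mul(a, b), a * b);
}
```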
cc https://github.com/rust-lang/rust/issues/45676 @est31 @nagisa
move closure kind, signature into `ClosureSubsts`
Instead of using side-tables, store the closure-kind and signature in the substitutions themselves. This has two key effects:
- It means that the closure's type changes as inference finds out more things, which is very nice.
  - As a result, it avoids the need for the `freshen_closure_like` code (though we still use it for generators).
- It avoids cyclic closure calls.
  - These were never meant to be supported, precisely because they make a lot of the fancy inference that we do much more complicated. However, due to an oversight, it was previously possible -- if challenging -- to create a setup where a closure *directly* called itself (see e.g. #21410).
We have to see what the effect of this change is, though. Needs a crater run. Marking as [WIP] until that has been assessed.
r? @arielb1
* The overflow-checking shift items need to take a full 128-bit type, since they need to be able to detect idiocy like `1i128 << (1u128 << 127)`
* The unchecked ones just take u32, like the `*_sh?` methods in core
* Because shift-by-anything is allowed, cast into a new local for every shift
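A hedged sketch of why the checked shifts need the full 128-bit amount (the name and signature here are illustrative, not the actual lang item):

```rust
// Truncating the shift amount to u32 up front would discard exactly the bits
// that signal the overflow, so the overflow-checking variant takes u128.
fn i128_shlo(a: i128, b: u128) -> (i128, bool) {
    let overflow = b >= 128;
    // the value result masks the shift amount, matching the unchecked operation
    (a.wrapping_shl(b as u32), overflow)
}

fn main() {
    let (_, overflowed) = i128_shlo(1, 1u128 << 127);
    assert!(overflowed); // a u32-only shift amount could not even represent this
}
```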
Refactor type memory layouts and ABIs, to be more general and easier to optimize.
To combat combinatorial explosion, type layouts are now described through 3 orthogonal properties (sketched in code after this list):
* `Variants` describes the plurality of sum types (where applicable)
  * `Single` is for one inhabited/active variant, including all C `struct`s and `union`s
  * `Tagged` has its variants discriminated by an integer tag, including C `enum`s
  * `NicheFilling` uses otherwise-invalid values ("niches") for all but one of its inhabited variants
* `FieldPlacement` describes the number and memory offsets of fields (if any)
  * `Union` has all its fields at offset `0`
  * `Array` has offsets that are a multiple of its `stride`; guarantees all fields have one type
  * `Arbitrary` records all the field offsets, which can be out-of-order
* `Abi` describes how values of the type should be passed around, including for FFI
  * `Uninhabited` corresponds to no values, associated with unreachable control-flow
  * `Scalar` is ABI-identical to its only integer/floating-point/pointer "scalar component"
  * `ScalarPair` has two "scalar components", but only applies to the Rust ABI
  * `Vector` is for SIMD vectors, typically `#[repr(simd)]` `struct`s in Rust
  * `Aggregate` has arbitrary contents, including all non-transparent C `struct`s and `union`s
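A simplified sketch of these three properties as Rust enums (field names and details here are illustrative, not the exact rustc definitions):

```rust
#![allow(dead_code)]

enum Variants {
    Single,                                // one inhabited/active variant
    Tagged { tag_bytes: u64 },             // discriminated by an integer tag
    NicheFilling { niche_variant: usize }, // otherwise-invalid values encode the rest
}

enum FieldPlacement {
    Union { count: usize },            // every field at offset 0
    Array { stride: u64, count: u64 }, // homogeneous fields at offsets i * stride
    Arbitrary { offsets: Vec<u64> },   // explicit, possibly out-of-order offsets
}

enum Abi {
    Uninhabited,               // no values; unreachable control flow
    Scalar,                    // one integer/float/pointer immediate
    ScalarPair,                // two immediates (Rust ABI only)
    Vector { count: u64 },     // SIMD vectors
    Aggregate { sized: bool }, // everything else, passed through memory
}

fn main() {}
```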
Size optimizations implemented so far (spot-checked in the snippet after this list):
* ignoring uninhabited variants (i.e. variants containing uninhabited fields), e.g.:
  * `Option<!>` is 0 bytes
  * `Result<T, !>` has the same size as `T`
* using arbitrary niches, not just `0`, to represent a data-less variant, e.g.:
  * `Option<bool>`, `Option<Option<bool>>`, `Option<Ordering>` are all 1 byte
  * `Option<char>` is 4 bytes
* using a range of niches to represent *multiple* data-less variants, e.g.:
  * `enum E { A(bool), B, C, D }` is 1 byte
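These sizes can be checked directly with `std::mem::size_of` (the `!` cases are omitted, since the never type is not nameable on stable):

```rust
use std::cmp::Ordering;
use std::mem::size_of;

#[allow(dead_code)]
enum E { A(bool), B, C, D }

fn main() {
    // data-less variants reuse invalid bit patterns ("niches") of `bool`
    assert_eq!(size_of::<Option<bool>>(), 1);
    assert_eq!(size_of::<Option<Option<bool>>>(), 1);
    assert_eq!(size_of::<Option<Ordering>>(), 1);
    // `char` has plenty of invalid values above 0x10FFFF to use as a niche
    assert_eq!(size_of::<Option<char>>(), 4);
    // a whole range of niches represents the three data-less variants
    assert_eq!(size_of::<E>(), 1);
}
```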
Code generation now takes advantage of `Scalar` and `ScalarPair` to, in more cases, pass around scalar components as immediates instead of indirectly, through pointers into temporary memory, while avoiding LLVM's "first-class aggregates", and there's more untapped potential here.
Closes #44426, fixes #5977, fixes #14540, fixes #43278.
integrate MIR type-checker with NLL inference
This branch refactors NLL type inference so that it uses the MIR type-checker to gather constraints. Along the way, it also mildly refactors how region constraints are gathered in the normal inference context. The new setup is like this:
- What used to be `region_inference` is split into two parts:
  - `region_constraints`, which just collects up sets of constraints
  - `lexical_region_resolve`, which does the iterative, lexical region resolution
- When `resolve_regions_and_report_errors` is invoked, the inference engine converts the constraints into final values.
- In the MIR type checker, however, we do not invoke this method, but instead periodically take the region constraints and package them up for the NLL solver to use later.
  - This allows us to track when and where those constraints were incurred.
- We also remove the central fulfillment context from the MIR type checker, instead instantiating new fulfillment contexts at each point. This allows us to capture the set of obligations that occurred at a particular point, and also to ensure that if the same obligation arises at two points, we will enforce the region constraints at both locations.
- The MIR type checker is also enhanced to instantiate late-bound-regions with fresh variables and handle a few other corner cases that arose.
- I also extracted some of the 'outlives' logic from the regionck, which will be needed later (see future work) to handle the type-outlives relationships.
One concern I have with this branch: since the MIR type checker is used even without the `-Znll` switch, I'm not sure if it will impact performance. One simple fix here would be to only enable the MIR type-checker if debug-assertions are enabled, since it just serves to validate the MIR. Longer term I hope to address this by improving the interface to the trait solver to be more query-based (ongoing work).
There is plenty of future work left. Here are two things that leap to mind:
- **Type-region outlives.** Currently, the NLL solver will ICE if it is required to handle a constraint like `T: 'a` (a small example of code producing such a constraint follows this list). Fixing this will require a small amount of refactoring to extract the implied bounds code. I plan to file a follow-up bug on this (hopefully with mentoring instructions).
- **Testing.** It's a good idea to enumerate some of the tricky scenarios that need testing, but I think it'd be nice to try and parallelize some of the actual test writing (and resulting bug fixing):
  - Same obligation occurring at two points.
  - Well-formedness and trait obligations of various kinds (which are not all processed by the current MIR type-checker).
  - More tests for how subtyping and region inferencing interact.
  - More suggestions welcome!
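A small example of code that gives rise to a `T: 'a` constraint (illustrative only, not a test from this branch):

```rust
// The struct's well-formedness requires the referenced type to outlive the
// reference, so type-checking `hold` produces the obligation `T: 'a`.
struct Ref<'a, T: 'a>(&'a T);

fn hold<'a, T>(x: &'a T) -> Ref<'a, T> {
    Ref(x)
}

fn main() {
    let n = 5;
    let _r = hold(&n);
}
```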
r? @arielb1
We are heading towards deeper integration with the region inference
system in infcx; in particular, prior to the creation of the
`RegionInferenceContext`, it will be the "owner" of the set of region
variables.
check_unsafety: fix unused unsafe block duplication
The duplicate error message is later removed by error message
deduplication, but it still appears on beta and is still a bug.
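For reference, the warning in question fires on code like this (a minimal illustration, not the exact reproduction from the bug):

```rust
fn main() {
    // nothing here actually needs `unsafe`, so check_unsafety reports an
    // "unnecessary unsafe block"; the bug was that the report could show up
    // twice before deduplication
    unsafe {
        let x = 1 + 1;
        println!("{}", x);
    }
}
```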
r? @eddyb