This PR introduces an `ObligationForest` data structure that the fulfillment context can use to track what's going on, instead of the current flat vector. This enables a number of improvements:
1. transactional support, at least for pushing new obligations
2. remove the "errors will be reported" hack -- instead, we only add types to the global cache once their entire subtree has been proven safe. Before, we never knew when this point was reached because we didn't track the subtree.
- this in turn allows us to limit coinductive reasoning to structural traits, which sidesteps #29859
3. keeping the backtrace should allow for an improved error message, where we give the user full context
- we can also remove chained obligation causes
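To make the idea concrete, here is a deliberately simplified sketch of the shape such a forest can take; the type, field, and method names are my own illustration, not the actual `ObligationForest` API:

```rust
// Simplified illustration only -- not the rustc implementation. Each node
// remembers its parent, and a node's subtree counts as proven once all of
// its pending children have been resolved.
struct Node<O> {
    obligation: O,
    parent: Option<usize>,   // index of the obligation that spawned this one
    pending_children: usize, // subtree is fully proven once this reaches zero
}

struct ObligationForest<O> {
    nodes: Vec<Node<O>>,
}

impl<O> ObligationForest<O> {
    fn new() -> Self {
        ObligationForest { nodes: Vec::new() }
    }

    /// Register a new obligation, optionally as a child of an existing one.
    fn push(&mut self, obligation: O, parent: Option<usize>) -> usize {
        if let Some(p) = parent {
            self.nodes[p].pending_children += 1;
        }
        self.nodes.push(Node { obligation, parent, pending_children: 0 });
        self.nodes.len() - 1
    }
}
```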
This PR is not 100% complete. In particular:
- [x] Currently, types that embed themselves, like `struct Foo { f: Foo }`, give an overflow when evaluating whether `Foo: Sized`. That is not a very user-friendly error message for what is a common beginner mistake. I plan to special-case this scenario, I think.
- [x] I should do some perf. measurements. (Update: 2% regression.)
- [x] More tests targeting #29859
- [ ] The transactional support is not fully integrated, though that should be easy enough.
- [ ] The error messages are not taking advantage of the backtrace.
I'd certainly like to do the first three items above before landing, but the last two could come as separate PRs.
r? @aturon // good way to learn more about this part of the trait system
f? @arielb1 // already knows this part of the trait system :)
This provides limited support for using associated consts on type parameters. It generally works on things that can be figured out at trans time. This doesn't work for array lengths or match arms. I have another patch to make it work in const expressions.
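For reference, the kind of code this enables looks roughly like the following (my own illustration; associated consts were still unstable at the time):

```rust
trait HasId {
    const ID: u32;
}

struct Foo;

impl HasId for Foo {
    const ID: u32 = 7;
}

// Using the associated const through a type parameter: the value is only
// known once `T` is, so it can be resolved at trans time.
fn id_of<T: HasId>() -> u32 {
    T::ID
}

fn main() {
    assert_eq!(id_of::<Foo>(), 7);
    // Per the description above, uses in array lengths or match arms
    // would not work yet with this patch.
}
```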
CC @eddyb @nikomatsakis
Previously we would generate regular calls for these, which is likely to result in worse LLVM code, especially in the presence of cleanups – we needn’t unnecessarily generate landing pads just to construct an ADT!
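As a concrete illustration (my own example, not taken from the patch), this is the kind of constructor "call" affected:

```rust
// A tuple-struct constructor is itself a callable function. Lowering it as a
// plain aggregate construction instead of a regular call means no landing pad
// is needed for it when cleanups are in scope.
struct Point(i32, i32);

fn main() {
    let _needs_cleanup = String::from("a value with a destructor");
    let p = Point(1, 2); // constructor "call" that can be built in place
    println!("{}", p.0 + p.1);
}
```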
Get rid of that nasty `unit_ty` temporary variable, created just because it might be handy to have one around when in reality it isn’t that useful at all.
r? @nikomatsakis
Fixes https://github.com/rust-lang/rust/issues/30637
Previously it was returning a clone, mostly for two reasons:
* Cloning an Lvalue is very cheap most of the time (i.e. when the Lvalue is not a Projection);
* There are users who want `&mut lvalue` and users who want `&lvalue`. Returning a value makes either one easy to obtain when pattern matching (i.e. `Some(ref dest)` or `Some(ref mut dest)`).
However, I’m now convinced this is an invalid approach. Namely, users who want a mutable reference may modify the Lvalue in-place, but the changes won’t be reflected in the final MIR, since the Lvalue modified is merely a clone.
Instead, we have two accessors, `destination` and `destination_mut`, which return a reference to the destination in the desired mode.
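A minimal sketch of what such an accessor pair looks like, using simplified stand-in types rather than the real MIR definitions:

```rust
// Illustrative stand-ins, not the actual rustc types.
#[derive(Clone)]
enum Lvalue {
    ReturnPointer,
    // ... other variants, including Projection, omitted
}

struct Terminator {
    destination: Option<Lvalue>,
}

impl Terminator {
    /// Shared access to the destination, if any.
    fn destination(&self) -> Option<&Lvalue> {
        self.destination.as_ref()
    }

    /// Mutable access: changes made through this reference are reflected in
    /// the final MIR, unlike mutations applied to a returned clone.
    fn destination_mut(&mut self) -> Option<&mut Lvalue> {
        self.destination.as_mut()
    }
}
```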
r? @nikomatsakis
Assign a default unit value to the destinations of block expressions without a trailing expression, return expressions without a return value (i.e. `return;`), and conditionals without an else clause.
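The three cases, spelled out as surface Rust (illustrative examples of mine):

```rust
fn block_without_trailing_expr() {
    // The block's value is `()`; MIR now writes that unit into its destination.
    let _x = 1;
}

fn return_without_value(flag: bool) {
    if flag {
        return; // `return;` carries no value, so the destination gets `()`
    }
}

fn conditional_without_else(flag: bool) {
    // An `if` without an `else` always evaluates to `()`.
    let _unit: () = if flag { println!("then branch") };
}

fn main() {
    block_without_trailing_expr();
    return_without_value(true);
    conditional_without_else(true);
}
```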
This is roughly the same as my previous PR that created a dependency graph, except that:
1. The dependency graph is only optionally constructed, though this doesn't seem to make much of a difference in terms of overhead (see measurements below).
2. The dependency graph is simpler (I combined a lot of nodes).
3. The dependency graph debugging facilities are much better: you can now use `RUST_DEP_GRAPH_FILTER` to filter the dep graph to just the nodes you are interested in, which is super helpful.
4. The tests are somewhat more elaborate, including a few known bugs I need to fix in a second pass.
This is potentially a `[breaking-change]` for plugin authors. If you are poking about in tcx state or something like that, you probably want to add `let _ignore = tcx.dep_graph.in_ignore();`, which will cause your reads/writes to be ignored and not affect the dep-graph.
After this, or perhaps as an add-on to this PR in some cases, what I would like to do is the following:
- [x] Write up a little guide on how to use this system, the debugging options available, and what the possible failure modes are.
- [ ] Introduce read-only and perhaps the `Meta` node
- [x] Replace "memoization tasks" with node from the map itself
- [ ] Fix the shortcomings, obviously! Notably, the HIR map needs to register reads, and there is some state that is not yet tracked. (Maybe as a separate PR.)
- [x] Refactor the dep-graph code so that the actual maintenance of the dep-graph occurs in a parallel thread, and the main thread simply throws things into a shared channel (probably a fixed-size channel). There is no reason for dep-graph construction to be on the main thread. (Maybe as a separate PR.)
Regarding performance: adding this tracking does add some overhead, approximately 2% in my measurements (I was comparing the build times for rustdoc). Interestingly, enabling or disabling tracking doesn't seem to do very much. I want to poke at this some more and gather a bit more data -- in some tests I've seen that 2% go away, but on others it comes back. It's not entirely clear to me if that 2% is truly due to constructing the dep-graph at all.
The next big step after this is to write some code to dump the dep-graph to disk and reload it.
r? @michaelwoerister
This merges the two separate Call terminators and uses a separate `CallKind` sub-enum instead.
A little bit unrelatedly, copying into the destination value for a certain kind of invoke is also implemented here. See the associated comment in the code for the various details that arise with this implementation.
`DivergingCall` is different enough from the regular converging `Call` to warrant the split. This also inlines the `CallData` struct and creates a new `CallTargets` enum in order to have a way to differentiate calls that have an associated cleanup block from those that do not.
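A rough sketch of the resulting shape, with simplified stand-in types; these are not the exact rustc definitions:

```rust
// Illustrative stand-ins for the real MIR types.
struct Operand;
struct Lvalue;
struct BasicBlock(usize);

/// Whether the call has an associated cleanup (unwind) block.
enum CallTargets {
    WithCleanup { success: BasicBlock, cleanup: BasicBlock },
    Plain { success: BasicBlock },
}

/// Sub-enum distinguishing the two kinds of calls; the old CallData fields
/// are inlined into the variants.
enum CallKind {
    /// An ordinary call whose result is written to `destination`.
    Converging { destination: Lvalue, targets: CallTargets },
    /// A call that never returns; no destination, at most a cleanup block.
    Diverging { cleanup: Option<BasicBlock> },
}

/// The single, merged Call terminator.
struct Call {
    func: Operand,
    args: Vec<Operand>,
    kind: CallKind,
}
```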
Note that this patch still does not produce the `DivergingCall` terminator anywhere. Look for that in the next patches.
So far, calls going through `Fn::call`, `FnMut::call_mut`, or `FnOnce::call_once` have not been translated properly into MIR:
The call `f(a, b, c)` where `f: Fn(T1, T2, T3)` would end up in MIR as:
```
call `f` with arguments `a`, `b`, `c`
```
What we really want is:
```
call `Fn::call` with arguments `f`, `a`, `b`, `c`
```
This PR transforms these kinds of overloaded calls during `HIR -> HAIR` translation.
What's still a bit funky is that the `Fn` traits expect arguments to be tupled, but due to special handling in type-checking and trans we do not actually tuple the arguments, and everything still checks out fine. So, after this PR we end up with MIR containing calls where the function signature and the arguments seemingly don't match:
```
call Fn::call(&self, args: (T1, T2, T3)) with arguments `f`, `a`, `b`, `c`
```
instead of
```
call Fn::call(&self, args: (T1, T2, T3)) with arguments `f`, (`a`, `b`, `c`) // <- args tupled!
```
It would be nice if the call traits could go without special handling in MIR and later on.
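To spell out the correspondence in surface Rust (illustrative only; naming `Fn::call` explicitly requires the unstable `fn_traits` feature on a nightly compiler):

```rust
#![feature(fn_traits)] // needed only to name Fn::call explicitly

fn main() {
    let f = |a: i32, b: i32, c: i32| a + b + c;

    // Surface syntax: an overloaded call through the Fn trait.
    let x = f(1, 2, 3);

    // What it lowers to conceptually: the callee becomes the first argument
    // and the explicit arguments are passed as a tuple.
    let y = Fn::call(&f, (1, 2, 3));

    assert_eq!(x, y);
}
```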
Previously, all references to closure arguments went to the argument before the one they should (e.g. to `arg1` when it was supposed to go to `arg2`). This was because the MIR builder did not account for the implicit arguments that come before the explicit arguments, and closures have one implicit argument - the struct containing the captures.
This is my test code and a diff of the MIR generated for the closure:
```rust
let a = 2i32;
let _f = |b: i32| -> i32 { a + b };
```
```diff
--- old 2015-12-29 23:16:32.027926372 -0600
+++ new 2015-12-29 23:16:42.975400757 -0600
@@ -1,22 +1,22 @@
fn(arg0: &[closure@closure-args.rs:8:14: 8:39 a:&i32], arg1: i32) -> i32 {
let var0: i32; // b
let tmp0: ();
let tmp1: i32;
let tmp2: i32;
bb0: {
- var0 = arg0;
+ var0 = arg1;
tmp1 = (*(*arg0).0);
tmp2 = var0;
ReturnPointer = Add(tmp1, tmp2);
goto -> bb1;
}
bb1: {
return;
}
bb2: {
diverge;
}
}
```
(If you're wondering where this text MIR output comes from, it's from another branch of mine waiting on https://github.com/rust-lang/rust/pull/30602 to get merged.)
The previous version using `PartialOrd::le` was broken since it passed `T`
arguments where `&T` was expected.
It makes sense to use primitive comparisons since range patterns can only be
used with chars and numeric types.
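A small illustration of the mismatch (my own example): `PartialOrd::le` takes both operands by reference, so the primitive comparison is the simpler fit here:

```rust
fn in_char_range(c: char, lo: char, hi: char) -> bool {
    // PartialOrd::le(lo, c) // error: expected `&char`, found `char`
    // The trait method wants references:
    let via_trait = PartialOrd::le(&lo, &c) && PartialOrd::le(&c, &hi);
    // For chars and numeric types a primitive comparison is all that's needed:
    let via_primitive = lo <= c && c <= hi;
    assert_eq!(via_trait, via_primitive);
    via_primitive
}

fn main() {
    assert!(in_char_range('g', 'a', 'z'));
}
```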
This PR is a rebase of the original PR by @eddyb https://github.com/rust-lang/rust/pull/21836, with some unrebasable parts manually reapplied, a feature gate added, and the type equality restriction added as described below.
This implementation is partial because the type equality restriction is applied to all type ascription expressions, not only those in lvalue contexts. This avoids all the difficulties with detecting such contexts and with translating coercions that take effect at runtime.
So, you can't write things with coercions like `let slice = &[1, 2, 3]: &[u8];`. It obviously makes type ascription less useful than it should be, but it's still much more useful than not having type ascription at all.
In particular, things like `let v = something.iter().collect(): Vec<_>;` and `let u = t.into(): U;` work as expected and I'm pretty happy with these improvements alone.
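Concretely (an illustrative example of mine; it needs the `type_ascription` feature gate on a nightly from that era):

```rust
#![feature(type_ascription)]

fn main() {
    // Works: the ascribed type is exactly the inferred type, no coercion needed.
    let v = (1..4).collect(): Vec<i32>;
    assert_eq!(v, vec![1, 2, 3]);

    // Rejected by this implementation: `&[i32; 3]` would have to coerce to
    // `&[i32]`, and coercions are not applied at ascription sites yet.
    // let slice = &[1, 2, 3]: &[i32];
}
```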
Part of https://github.com/rust-lang/rust/issues/23416