// normalize-stderr-test: "long-type-\d+" -> "long-type-hash"

// rust-lang/rust#30786: the use of `for<'b> &'b mut A: Stream<Item=T>`
// should act as an assertion that the item does not borrow from its stream;
// but an earlier buggy rustc allowed `.map(|x: &_| x)` which does
// have such an item.
//
// This test double-checks that we do not allow such behavior to leak
// through again.
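//
// The bound in question appears on the `Filter` impl below (marked
// `<---- BAD`), and `Repeat` is a stream whose item (`&'a u64`) does
// borrow from the stream.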
pub trait Stream {
    type Item;
    fn next(self) -> Option<Self::Item>;
}
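
// Note: `next` takes `self` by value; the impls below implement `Stream`
// for `&'a mut` references, so a stream is advanced through a fresh
// `&mut` reborrow on each call.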
// Example stream
pub struct Repeat(u64);
impl<'a> Stream for &'a mut Repeat {
    type Item = &'a u64;
    fn next(self) -> Option<Self::Item> {
        Some(&self.0)
    }
}
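
// Note that for `&'b mut Repeat` the item type is `&'b u64`: the item
// borrows from the stream itself, so no single `T` can satisfy
// `for<'b> &'b mut Repeat: Stream<Item = T>`.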
pub struct Map<S, F> {
    stream: S,
    func: F,
}
impl<'a, A, F, T> Stream for &'a mut Map<A, F>
where
    &'a mut A: Stream,
    F: FnMut(<&'a mut A as Stream>::Item) -> T,
{
    type Item = T;
    fn next(self) -> Option<T> {
        match self.stream.next() {
            Some(item) => Some((self.func)(item)),
            None => None,
        }
    }
}
pub struct Filter<S, F> {
    stream: S,
    func: F,
}
impl<'a, A, F, T> Stream for &'a mut Filter<A, F>
where
    for<'b> &'b mut A: Stream<Item = T>, // <---- BAD
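    // "BAD" because it demands a single item type `T` for *every* borrow
    // lifetime `'b`, i.e. it asserts that the item does not borrow from
    // the stream at all.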
    F: FnMut(&T) -> bool,
{
    type Item = <&'a mut A as Stream>::Item;
    fn next(self) -> Option<Self::Item> {
        while let Some(item) = self.stream.next() {
            if (self.func)(&item) {
                return Some(item);
            }
        }
        None
    }
}
pub trait StreamExt
where
    for<'b> &'b mut Self: Stream,
{
    fn mapx<F>(self, func: F) -> Map<Self, F>
    where
        Self: Sized,
        for<'a> &'a mut Map<Self, F>: Stream,
    {
        Map { func, stream: self }
    }
    fn filterx<F>(self, func: F) -> Filter<Self, F>
    where
        Self: Sized,
        for<'a> &'a mut Filter<Self, F>: Stream,
    {
        Filter { func, stream: self }
    }
    fn countx(mut self) -> usize
    where
        Self: Sized,
    {
        let mut count = 0;
        while let Some(_) = self.next() {
            count += 1;
        }
        count
    }
}
impl<T> StreamExt for T where for<'a> &'a mut T: Stream {}
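// The blanket impl above means that `mapx`, `filterx`, and `countx` are
// only callable on a type `T` for which the higher-ranked bound
// `for<'a> &'a mut T: Stream` can be proven.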
fn identity<T>(x: &T) -> &T {
    x
}
fn variant1() {
    let source = Repeat(10);
    // Here, the call to `mapx` returns a type `T` to which `StreamExt`
    // is not applicable, because `for<'b> &'b mut T: Stream` doesn't hold.
    //
    // More concretely, the type `T` is `Map<Repeat, Closure>`, and
    // the where clause doesn't hold because the signature of the
    // closure gets inferred to a signature like `|&'_ Stream| -> &'_`
    // for some specific `'_`, rather than a more generic
    // signature.
    //
    // Why *exactly* we opt for this signature is a bit unclear to me;
    // we deduce it somehow from a requirement that `Map: Stream`, I
    // guess.
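    //
    // (Compare with `variant2` below, which passes the named function
    // `identity` instead of a closure and therefore hits the error on
    // `countx` rather than here.)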
    let map = source.mapx(|x: &_| x);
    let filter = map.filterx(|x: &_| true);
    //~^ ERROR the method
}
fn variant2() {
    let source = Repeat(10);
    // Here, we use a function, which is not subject to the vagaries
    // of closure signature inference. In this case, we get the error
    // on `countx` as, I think, the test originally expected.
    let map = source.mapx(identity);
    let filter = map.filterx(|x: &_| true);
    let count = filter.countx();
    //~^ ERROR the method
}
fn main() {}