Disable the evaluation cache when in intercrate mode
It's possible to use the same `InferCtxt` with both
an intercrate and non-intercrate `SelectionContext`. However,
the local (inferctxt) evaluation cache is not aware of this
distinction, so this kind of `InferCtxt` re-use will pollute
the cache with bad results.
This commit avoids the issue by disabling the evaluation cache
entirely during intercrate mode.
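A minimal standalone sketch of the idea (the names and types are hypothetical stand-ins, not the rustc API): both cache reads and cache writes are gated on the intercrate flag, so results computed in intercrate mode never reach the cache shared through the `InferCtxt`.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the selection context and its evaluation cache.
struct EvaluationCache {
    map: HashMap<String, bool>,
}

struct Selector {
    intercrate: bool,
    cache: EvaluationCache,
}

impl Selector {
    fn check_cache(&self, key: &str) -> Option<bool> {
        if self.intercrate {
            return None; // never read the cache in intercrate mode
        }
        self.cache.map.get(key).copied()
    }

    fn insert_into_cache(&mut self, key: String, value: bool) {
        if self.intercrate {
            return; // never write the cache in intercrate mode
        }
        self.cache.map.insert(key, value);
    }
}
```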
Simplify lazy DefPathHash decoding by using an on-disk hash table.
This PR simplifies the logic around mapping `DefPathHash` values encountered during incremental compilation to valid `DefId`s in the current session. It does so by using an on-disk hash table encoding that allows values to be looked up directly, i.e. without deserializing the entire table.
The main simplification comes from no longer having to keep track of which `DefPathHash`es are used during the compilation session.
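A conceptual sketch of the lookup property (toy layout and names, not the actual encoding used by rustc): the table carries an index from hash to entry offset, so resolving a single `DefPathHash` only decodes that entry's bytes.

```rust
use std::collections::HashMap;

// Toy on-disk table: an index plus a flat byte buffer of serialized entries.
struct OnDiskTable {
    index: HashMap<u64, usize>, // hash fingerprint -> offset into `bytes`
    bytes: Vec<u8>,             // serialized entries, decoded lazily
}

impl OnDiskTable {
    // Look up a single entry without touching the rest of the buffer.
    // Entries are assumed here to be a fixed 4 bytes (e.g. a DefIndex).
    fn lookup(&self, hash: u64) -> Option<&[u8]> {
        let start = *self.index.get(&hash)?;
        self.bytes.get(start..start + 4)
    }
}
```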
Skip single use lifetime lint for generated opaque types
Fix: #77175
The opaque type generated when desugaring an async function uses the lifetimes defined by the originating function. The `DefId`s for the lifetimes in the opaque type are different from the ones in the originating async function (as they should be, as far as I understand), so they can be considered single-use lifetimes; this causes the `single_use_lifetimes` lint to fail compilation if it is explicitly denied. This fix skips the lint for lifetimes that are used only once in an opaque type generated for an async function and that are declared in the parent async function's definition.
More info in the comments on the original issue: 1 and 2
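A simplified illustration of the scenario (not the exact test case from the issue): `'a` is clearly used more than once in the signature, but the desugared opaque `impl Future` type gets its own copies of the lifetime, which previously could look single-use to the lint.

```rust
#![deny(single_use_lifetimes)]

// `'a` appears in both the argument and the return type, so the lint should
// not fire; the fix ensures the lifetime copies inside the generated opaque
// type are skipped.
pub async fn pass_through<'a>(x: &'a str) -> &'a str {
    x
}
```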
Avoid codegen for Result::into_ok in lang_start
This extra codegen seems to be the cause of the regressions in max-rss on #86034. While LLVM will certainly optimize the dead code away, avoiding its generation in the first place seems good, particularly when it is so simple.
#86034 produced this [diff](https://gist.github.com/Mark-Simulacrum/95c7599883093af3b960c35ffadf4dab#file-86034-diff) for a simple `fn main() {}`. With this PR, that diff [becomes limited to just a few extra IR instructions](https://gist.github.com/Mark-Simulacrum/95c7599883093af3b960c35ffadf4dab#file-88988-from-pre-diff) -- no extra functions.
Note that these are pre-optimization; LLVM will surely eliminate this code during optimization. However, performing that optimization can generate more work and bump memory usage, and this PR eliminates that.
Use explicit log level in tracing instrument macro
Specify the log level in the tracing `instrument` macro explicitly.
Additionally, reduce the level from the default `info` to `debug` (all of these
appear to be developer-oriented logs, so there should be no need to include
them in release builds).
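For illustration, an explicit level on `#[instrument]` looks like this (the function and its arguments are made up for the example):

```rust
use tracing::instrument;

// With an explicit `level = "debug"`, the span is emitted only when debug
// logging is enabled, rather than at the default `info` level.
#[instrument(level = "debug", skip(data))]
fn process_chunk(id: u32, data: &[u8]) -> usize {
    data.len()
}
```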
Move the Lock into symbol::Interner
This makes it easier to make the symbol interner (near) lock free in case of concurrent accesses in the future.
With https://github.com/rust-lang/rust/pull/87867 landed, this shouldn't affect performance anymore.
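A rough sketch of the shape of the change (using `std::sync::Mutex` in place of rustc's `Lock`, and toy types for the rest): with the lock inside the interner, callers only hold an `&Interner`, so the internal synchronization can later be made finer-grained or lock-free without touching call sites.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct Interner {
    inner: Mutex<InternerInner>, // the lock now lives inside the interner
}

#[derive(Default)]
struct InternerInner {
    names: HashMap<String, u32>,
}

impl Interner {
    // Callers need only `&self`; the locking strategy is an internal detail.
    fn intern(&self, string: &str) -> u32 {
        let mut inner = self.inner.lock().unwrap();
        if let Some(&symbol) = inner.names.get(string) {
            return symbol;
        }
        let symbol = inner.names.len() as u32;
        inner.names.insert(string.to_owned(), symbol);
        symbol
    }
}
```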
Fast reject for NeedsNonConstDrop
Hopefully fixes the regression in #88558.
I've always wanted to help with the performance of rustc, but it doesn't feel the same when you are fixing a regression caused by your own PR...
r? `@oli-obk`
If any block on a goto chain has more than one predecessor, then the new
start block would have basic block predecessors.
Skip the transformation for the start block altogether, to avoid
violating the new invariant that the start block does not have any basic
block predecessors.
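A minimal sketch of the guard, with toy types rather than the real MIR pass: whatever per-block rewrite the simplification performs, the start block is excluded, so the invariant cannot be violated.

```rust
const START_BLOCK: usize = 0;

// Apply `rewrite_block` to every block except the start block.
fn simplify_blocks(num_blocks: usize, mut rewrite_block: impl FnMut(usize)) {
    for block in 0..num_blocks {
        if block == START_BLOCK {
            // Collapsing a goto chain rooted here could leave the start
            // block with basic block predecessors, so skip it entirely.
            continue;
        }
        rewrite_block(block);
    }
}
```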
New lint: `same_name_method`
changelog: ``[`same_name_method`]``
fix: https://github.com/rust-lang/rust-clippy/issues/7632
It only compares a method in an inherent `impl` block with one in an `impl Trait for` block;
it doesn't lint two methods defined in two different traits.
I'm not sure my approach is the best one; I ran into difficulties with the other approaches I tried.
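An example of the kind of code the lint flags (hypothetical, for illustration): an inherent method and a trait method on the same type share a name.

```rust
struct S;

impl S {
    // Inherent method named `clone`...
    fn clone(&self) -> Self {
        S
    }
}

impl Clone for S {
    // ...and a trait method with the same name: `same_name_method` fires here.
    fn clone(&self) -> Self {
        S
    }
}
```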
inline(always) on check_recursion_limit
r? `@oli-obk`
#88558 caused a regression, this PR adds `#[inline(always)]` to `check_recursion_limit`, a possible suspect of that regression.
* Implement `black_box` as an intrinsic
Responsibility for implementing the black box now lies with the backend (see the usage sketch after this list)
* Remove some TODOs
* Update to nightly-2021-09-17
* CI: don't fail on warnings
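For context, a usage sketch of `black_box` (shown via `std::hint::black_box`, which has since been stabilized): it is an identity function the optimizer is asked to treat as opaque, which is why the backend has to provide it as an intrinsic.

```rust
use std::hint::black_box;

fn main() {
    // The optimizer must assume the value could be anything...
    let x = black_box(42u64);
    let y = x.wrapping_mul(3);
    // ...and that the result is observed, so the work isn't eliminated.
    black_box(y);
}
```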
Extend the `DepthFirstSearch` iterator so that it can be re-used and
extended with additional start nodes. Then replace the `FxHashSet`s of nodes
we were using in the fallback analysis with a single iterator. This
way we won't re-walk portions of the graph that are reached more than
once, and we also do less allocation etc.
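A standalone sketch of the intended behaviour (the graph representation and method names are assumptions, not the `rustc_data_structures` API): the visited set persists across uses, so adding further start nodes never re-walks nodes that were already reached.

```rust
use std::collections::HashSet;

// A DFS iterator whose visited set outlives any single traversal.
struct DepthFirstSearch<'g> {
    graph: &'g [Vec<usize>], // adjacency list
    visited: HashSet<usize>,
    stack: Vec<usize>,
}

impl<'g> DepthFirstSearch<'g> {
    fn new(graph: &'g [Vec<usize>]) -> Self {
        Self { graph, visited: HashSet::new(), stack: Vec::new() }
    }

    // Add another start node; nodes already visited are skipped.
    fn push_start_node(&mut self, node: usize) {
        if self.visited.insert(node) {
            self.stack.push(node);
        }
    }
}

impl Iterator for DepthFirstSearch<'_> {
    type Item = usize;

    fn next(&mut self) -> Option<usize> {
        let node = self.stack.pop()?;
        for &succ in &self.graph[node] {
            if self.visited.insert(succ) {
                self.stack.push(succ);
            }
        }
        Some(node)
    }
}
```

Draining the iterator after each `push_start_node` call then yields only the nodes that were not reached before.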
Instead, we now record those type variables that are the target of a
`NeverToAny` adjustment and consider those to be the "diverging" type
variables. This allows us to remove the special case logic that
creates a type variable for `!` in coercion.
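A small example of the kind of code involved (my reading of the description, not a test from the PR): the `!`-typed expression is coerced to the binding's inference variable via a `NeverToAny` adjustment, and that variable is what the analysis now treats as diverging.

```rust
fn f() {
    // `panic!()` has type `!`; it is adjusted (NeverToAny) to the fresh type
    // variable created for `_x`, which therefore counts as a diverging type
    // variable and falls back to `()` under today's rules.
    let _x = panic!();
}
```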
The comment seems incorrect. Testing revealed that the examples in
question still work (as well as some variants) even without the
special casing here.