The old code created a flat listing of "HIR -> WorkProduct" edges.
While perfectly general, this could lead to a lot of repetition if the
same HIR nodes affect many work-products. This is set to be a problem
when we start to skip typeck, since we will be adding a lot more
"work-product"-like nodes.
The newer code uses an alternative strategy: it "reduces" the graph
instead. Basically we walk the dep-graph and convert it to a DAG, where
we only keep intermediate nodes if they are used by multiple
work-products.
This DAG does not contain the same set of nodes as the original graph,
but it is guaranteed that (a) every output node is included in the graph
and (b) the set of input nodes that can reach each output node is
unchanged.
(Input nodes are basically HIR nodes and foreign metadata; output nodes
are nodes that have associated state which we will persist to disk in
some way. These are assumed to be disjoint sets.)
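A minimal sketch of that reduction over a toy graph representation (the
`Graph` struct, its integer node ids, and the `inputs`/`outputs` sets are
illustrative stand-ins, not the real dep-graph types): keep inputs, outputs,
and any intermediate node that feeds more than one output, then splice edges
across the dropped nodes so input-to-output reachability is preserved.

```rust
use std::collections::{HashMap, HashSet};

/// Illustrative stand-in for the real dep-graph: nodes are plain indices
/// and `edges[n]` lists the successors of node `n`.
struct Graph {
    edges: Vec<Vec<usize>>,
    inputs: HashSet<usize>,  // e.g. HIR nodes, foreign metadata
    outputs: HashSet<usize>, // nodes whose state is persisted to disk
}

impl Graph {
    /// Set of output nodes reachable from `start`.
    fn reachable_outputs(&self, start: usize) -> HashSet<usize> {
        let (mut seen, mut found) = (HashSet::new(), HashSet::new());
        let mut stack = vec![start];
        while let Some(n) = stack.pop() {
            if !seen.insert(n) {
                continue;
            }
            if self.outputs.contains(&n) {
                found.insert(n);
            }
            stack.extend(&self.edges[n]);
        }
        found
    }

    /// Keep inputs, outputs, and intermediates shared by more than one
    /// output; splice edges across dropped nodes so that the set of
    /// inputs reaching each output is unchanged.
    fn reduce(&self) -> HashMap<usize, HashSet<usize>> {
        let keep: HashSet<usize> = (0..self.edges.len())
            .filter(|&n| {
                self.inputs.contains(&n)
                    || self.outputs.contains(&n)
                    || self.reachable_outputs(n).len() > 1
            })
            .collect();

        let mut reduced: HashMap<usize, HashSet<usize>> = HashMap::new();
        for &n in &keep {
            let targets = reduced.entry(n).or_default();
            let mut stack = self.edges[n].clone();
            let mut seen = HashSet::new();
            // Walk forward through dropped nodes until the next kept ones.
            while let Some(m) = stack.pop() {
                if !seen.insert(m) {
                    continue;
                }
                if keep.contains(&m) {
                    targets.insert(m);
                } else {
                    stack.extend(&self.edges[m]);
                }
            }
        }
        reduced
    }
}
```

Collapsing the single-use intermediate nodes is what avoids recording the
same "HIR -> WorkProduct" edge over and over.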
incr.comp.: Make cross-crate tracking for incr. comp. opt-in.
The current implementation of cross-crate dependency tracking can cause quite long compile times and high memory usage for some crates (see #39208 for example). This PR therefore makes that part of dependency tracking optional. Incremental compilation still works; it will only have very coarse dep-tracking for upstream crates.
r? @nikomatsakis
incr.comp.: Delete orphaned work-products.
The new partitioning scheme uncovered a hole in our incr. comp. cache directory garbage collection. So far, we relied on unneeded work products being deleted during the initial cache invalidation phase. However, with the new scheme, we get object files/work products that only contain code from upstream crates. Sometimes this code is not needed anymore (because all callers have been removed from the source), but because nothing that actually influences the contents of these work products had changed, we never deleted them from disk.
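As a rough illustration of the rule this implies (a hypothetical helper, not
the actual cache-management code): any file in the incremental cache
directory that no surviving work product claims gets removed.

```rust
use std::collections::HashSet;
use std::ffi::OsString;
use std::fs;
use std::path::Path;

/// Hypothetical helper: remove every file in the incremental cache
/// directory that no surviving work product refers to any more.
fn delete_orphaned_work_products(
    cache_dir: &Path,
    live_files: &HashSet<OsString>, // files claimed by current work products
) -> std::io::Result<()> {
    for entry in fs::read_dir(cache_dir)? {
        let entry = entry?;
        if !live_files.contains(&entry.file_name()) {
            // Nothing in this session references the file, so it is an
            // orphan left over from a previous compilation session.
            fs::remove_file(entry.path())?;
        }
    }
    Ok(())
}
```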
r? @nikomatsakis
Remove not(stage0) from deny(warnings)
Historically this was done to accommodate bugs in lints, but there hasn't been
a bug in a lint which the warnings affected since this feature was added. Let's
completely purge warnings from all our stages by denying warnings in all stages.
This will also assist in tracking down `stage0` code to be removed whenever
we're updating the bootstrap compiler.
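Illustratively, this amounts to the following kind of change in a crate root
(a before/after sketch, not an exact diff from the PR):

```rust
// Before: warnings were only denied outside of stage0 builds.
#![cfg_attr(not(stage0), deny(warnings))]

// After: warnings are denied in every stage, stage0 included.
#![deny(warnings)]
```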
The standard implementations of Hasher have architecture-dependent
results when hashing integers. This causes problems when the hashes are
stored within metadata - metadata written by one host architecture can't
be read by another.
To fix that, implement an architecture-independent StableHasher and use
it in all places where an architecture-independent hasher is needed.
Fixes #38177.
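A minimal sketch of the underlying idea (not rustc's actual StableHasher,
just an illustration): route every integer write through one fixed-width,
little-endian encoding, so that e.g. a `usize` produces the same bytes on
32-bit and 64-bit hosts.

```rust
use std::hash::Hasher;

/// Illustrative wrapper: every integer write is widened to 64 bits and fed
/// to the inner hasher as little-endian bytes, so a `usize` hashes the same
/// way regardless of host word size (assuming the inner hasher itself is
/// platform-independent).
struct StableHasher<H: Hasher> {
    inner: H,
}

impl<H: Hasher> Hasher for StableHasher<H> {
    fn finish(&self) -> u64 {
        self.inner.finish()
    }

    fn write(&mut self, bytes: &[u8]) {
        self.inner.write(bytes);
    }

    // Route the integer shortcuts through one fixed-width, fixed-endianness
    // encoding instead of the platform-dependent defaults.
    fn write_u64(&mut self, i: u64) {
        self.inner.write(&i.to_le_bytes());
    }
    fn write_u32(&mut self, i: u32) {
        self.write_u64(i as u64);
    }
    fn write_usize(&mut self, i: usize) {
        self.write_u64(i as u64);
    }
}
```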
add a `-Z incremental-dump-hash` flag
This causes us to dump a bunch of hash information to stdout that can be
useful in tracking down incremental compilation invalidations,
particularly across crates.
incr.comp.: Add more output to -Z incremental-info.
Also makes sure that all output from `-Z incremental-info` is prefixed with `incremental:` for better grep-ability.
r? @nikomatsakis