Better attribute and metaitem encapsulation throughout the compiler
This PR refactors most (hopefully all?) of the `MetaItem` interactions outside of `libsyntax` (and a few inside) to go through the provided traits instead of directly creating, destructuring, or matching against them. This is a necessary first step to eventually converting `MetaItem`s to internally use `TokenStream` representations (which will make `MetaItem` interactions much nicer for macro writers once the new macro system is in place).
r? @nrc
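As a rough sketch of the intent (the enum, trait, and method names below are illustrative stand-ins, not the actual `libsyntax` definitions): callers go through accessor methods instead of pattern-matching the `MetaItem` representation, so the internals can later switch to token streams without touching every caller.

```rust
// Illustrative sketch only: stand-ins for the real libsyntax types, showing
// encapsulated access instead of destructuring MetaItems at every call site.
#[derive(Debug)]
enum MetaItem {
    Word(String),                // e.g. #[test]
    NameValue(String, String),   // e.g. #[path = "foo.rs"]
    List(String, Vec<MetaItem>), // e.g. #[cfg(unix, feature = "x")]
}

// Accessor trait: code outside libsyntax uses these methods, so the internal
// representation is free to change (e.g. to a TokenStream-based one).
trait MetaItemMethods {
    fn check_name(&self, name: &str) -> bool;
    fn value_str(&self) -> Option<&str>;
    fn meta_item_list(&self) -> Option<&[MetaItem]>;
}

impl MetaItemMethods for MetaItem {
    fn check_name(&self, name: &str) -> bool {
        match self {
            MetaItem::Word(n)
            | MetaItem::NameValue(n, _)
            | MetaItem::List(n, _) => n == name,
        }
    }
    fn value_str(&self) -> Option<&str> {
        match self {
            MetaItem::NameValue(_, v) => Some(v.as_str()),
            _ => None,
        }
    }
    fn meta_item_list(&self) -> Option<&[MetaItem]> {
        match self {
            MetaItem::List(_, items) => Some(items.as_slice()),
            _ => None,
        }
    }
}

fn main() {
    let attr = MetaItem::NameValue("path".to_string(), "foo.rs".to_string());
    // Encapsulated access: no direct pattern match on the enum here.
    if attr.check_name("path") {
        println!("path attribute points at {:?}", attr.value_str());
    }
}
```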
In the older version, a `.o` and a `.bc` file were separate
work-products. This newer version keeps, for each codegen-unit, a set
of files of different kinds. We assume that if any kinds are available
then all the kinds we need are available, since the precise set of
kinds we need depends on attributes and command-line switches.
We should probably test this: the effect of changing attributes in
particular might not be tracked successfully.
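A minimal sketch of the bookkeeping described above (the names here are hypothetical, not the compiler's actual types): each codegen-unit owns one work-product that maps output kinds to the files saved for it.

```rust
use std::collections::BTreeMap;
use std::path::PathBuf;

// Hypothetical sketch: one work-product per codegen unit, holding every
// kind of output file that was saved for it.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum OutputKind {
    Object,   // .o
    Bytecode, // .bc
}

#[derive(Debug, Default)]
struct WorkProduct {
    // Kind -> path of the saved file in the incremental directory.
    saved_files: BTreeMap<OutputKind, PathBuf>,
}

impl WorkProduct {
    // Mirrors the assumption in the text: if any kind was saved, treat the
    // work-product as usable for whatever kinds are needed now.
    fn is_reusable(&self) -> bool {
        !self.saved_files.is_empty()
    }
}

fn main() {
    let mut wp = WorkProduct::default();
    wp.saved_files.insert(OutputKind::Object, PathBuf::from("cgu0.o"));
    println!("reusable: {}", wp.is_reusable());
}
```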
This checks the `previous_work_products` data from the dep-graph and
tries to simply copy a `.o` file if possible. We also add new
work-products into the dep-graph and create edges to/from the dep-node
for each work-product.
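A hedged sketch of that reuse path (all names below are made up for illustration, and the dep-graph side is only hinted at in a comment): when a previous work-product exists for a codegen unit, copy the saved `.o` into the new output directory instead of recompiling.

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// Illustrative only: codegen-unit name -> object file saved last time.
type PreviousWorkProducts = HashMap<String, PathBuf>;

/// Try to reuse the previously emitted object file for `cgu_name`.
/// Returns the copied file's path, or `None` if nothing can be reused
/// and the codegen unit must be recompiled.
fn try_reuse_object(
    previous: &PreviousWorkProducts,
    cgu_name: &str,
    out_dir: &Path,
) -> io::Result<Option<PathBuf>> {
    if let Some(saved) = previous.get(cgu_name) {
        let dest = out_dir.join(format!("{}.o", cgu_name));
        fs::copy(saved, &dest)?;
        // In the real compiler we would also re-register the work-product
        // in the dep-graph here, with edges to/from its dep-node.
        return Ok(Some(dest));
    }
    Ok(None)
}

fn main() -> io::Result<()> {
    let mut previous = PreviousWorkProducts::new();
    previous.insert("cgu0".to_string(), PathBuf::from("incr-cache/cgu0.o"));
    // "cgu1" has no previous work-product, so this falls back to recompiling.
    match try_reuse_object(&previous, "cgu1", Path::new("target/out"))? {
        Some(path) => println!("reused {:?}", path),
        None => println!("no previous work-product; recompiling"),
    }
    Ok(())
}
```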
We used to use `Name`, but the session outlives the tokenizer, which
means that attempts to read this field after trans has completed would
otherwise panic. All reads want an `InternedString` anyhow.
To handle the general case, we include a vector of def-ids, so that we
can account for things like `(Foo, Bar)`, which references both `Foo` and
`Bar`. This means it is not `Copy`, so we re-jigger some APIs to use
borrowing more intelligently.
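A small sketch of the shape this implies (the types are illustrative, not the compiler's): the value owns a `Vec` of def-ids, so it cannot be `Copy`, and its APIs hand out borrowed slices instead of copies.

```rust
// Illustrative stand-in for a def-id.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct DefId(u32);

// Holds every def-id a type like `(Foo, Bar)` refers to. Because of the
// Vec, this cannot derive Copy; callers borrow instead.
#[derive(Debug, Clone)]
struct ReferencedDefs {
    def_ids: Vec<DefId>,
}

impl ReferencedDefs {
    // Borrowing accessor: no clone of the vector at the call site.
    fn def_ids(&self) -> &[DefId] {
        &self.def_ids
    }
}

fn main() {
    let refs = ReferencedDefs { def_ids: vec![DefId(1), DefId(2)] };
    for id in refs.def_ids() {
        println!("references {:?}", id);
    }
}
```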
Apply visit_path to import prefixes by default
Overriding `visit_path` is not enough to visit all paths: some import prefixes are not visited, and `visit_path_list_item` needs to be overridden as well. This PR removes that catch; it should be less error-prone this way. Also, the prefix is now visited once, not repeatedly for each path list item.
r? @eddyb
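A toy sketch of the behaviour change (self-contained stand-ins, not the real `syntax::visit` API): the default walk feeds the import prefix through `visit_path` exactly once, so a visitor only needs to override `visit_path` to see every path.

```rust
// Toy model of a `use` item with a prefix and a list of items,
// e.g. `use prefix::{a, b};`.
struct Path(String);
struct PathListItem(String);
struct UseList {
    prefix: Path,
    items: Vec<PathListItem>,
}

trait Visitor {
    fn visit_path(&mut self, path: &Path);
    fn visit_path_list_item(&mut self, _item: &PathListItem) {
        // Default: nothing special; prefix handling no longer lives here.
    }
}

// Default walk: the prefix goes through `visit_path` once,
// instead of once per path list item.
fn walk_use_list<V: Visitor>(visitor: &mut V, use_list: &UseList) {
    visitor.visit_path(&use_list.prefix);
    for item in &use_list.items {
        visitor.visit_path_list_item(item);
    }
}

struct PathPrinter;
impl Visitor for PathPrinter {
    fn visit_path(&mut self, path: &Path) {
        println!("saw path prefix: {}", path.0);
    }
}

fn main() {
    let use_list = UseList {
        prefix: Path("std::collections".into()),
        items: vec![PathListItem("HashMap".into()), PathListItem("HashSet".into())],
    };
    walk_use_list(&mut PathPrinter, &use_list);
}
```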
This makes the "shadowing labels" warning *not* print the entire loop
as a span, but only the lifetime.
Also makes #31719 go away, but does not fix its root cause (the span
of the expanded loop is still wonky, but not used anymore).
This commit reorganizes how the persist code treats hashing. The idea is
that each crate saves a file containing hashes representing the metadata
for each item X. When we see a read from `MetaData(X)`, we can load this
hash up (if we don't find a file for that crate, we just use the SVH for
the entire crate).
To compute the hash for `MetaData(Y)`, where Y is some local item, we
examine all the predecessors of the `MetaData(Y)` node and hash their
hashes together.
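A minimal sketch of that combination step, using standard-library hashing rather than whatever hasher the compiler actually uses: the hash for `MetaData(Y)` is the fold of its predecessors' hashes in a deterministic order.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Combine the hashes of a node's predecessors into one hash.
/// Sketch only: the real code walks dep-graph predecessors and uses the
/// compiler's own hasher, but the shape is the same fold.
fn hash_of_predecessors(pred_hashes: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    for h in pred_hashes {
        h.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    // Predecessor hashes, loaded from the per-crate hash file or, if that
    // file is missing, falling back to the whole-crate SVH.
    let preds = [0xdead_beef_u64, 0x1234_5678_u64];
    println!("MetaData(Y) hash = {:x}", hash_of_predecessors(&preds));
}
```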
For external crates, we must build up a map that goes from
the DefKey to the DefIndex. We do this by iterating over each
index that is found in the metadata and loading the associated
DefKey.
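Roughly, the map is built like this (the types and the `def_key` lookup below are placeholders for the real metadata decoder): walk every DefIndex in the crate metadata, decode its DefKey, and invert the relationship.

```rust
use std::collections::HashMap;

// Placeholder stand-ins for the real metadata types.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct DefIndex(u32);
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct DefKey(String);

// Pretend metadata: DefKeys stored positionally, addressed by DefIndex.
struct CrateMetadata {
    def_keys: Vec<DefKey>,
}

impl CrateMetadata {
    fn def_key(&self, index: DefIndex) -> DefKey {
        self.def_keys[index.0 as usize].clone()
    }

    // Iterate over every index found in the metadata, load its DefKey,
    // and invert that into a DefKey -> DefIndex map.
    fn build_reverse_map(&self) -> HashMap<DefKey, DefIndex> {
        (0..self.def_keys.len() as u32)
            .map(DefIndex)
            .map(|idx| (self.def_key(idx), idx))
            .collect()
    }
}

fn main() {
    let metadata = CrateMetadata {
        def_keys: vec![DefKey("crate_root".into()), DefKey("crate_root::foo".into())],
    };
    let map = metadata.build_reverse_map();
    println!("{:?}", map.get(&DefKey("crate_root::foo".into())));
}
```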
The compiler creates a directory as part of `-Z incremental`, but that
directory may be used concurrently, so we need to protect ourselves against that.
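One common way to protect such a directory (a sketch of the general technique only, not the compiler's actual locking code; it assumes the `libc` crate on Unix and a made-up `.lock` file name): take an exclusive advisory lock on a lock file inside the directory before touching its contents.

```rust
use std::fs::{self, File};
use std::io;
use std::os::unix::io::AsRawFd;
use std::path::Path;

/// Take an exclusive advisory lock on `<dir>/.lock`, blocking until any
/// other process holding the lock releases it. The lock is held as long
/// as the returned `File` stays alive. Requires the `libc` crate (Unix).
fn lock_incremental_dir(dir: &Path) -> io::Result<File> {
    fs::create_dir_all(dir)?;
    let lock_file = File::create(dir.join(".lock"))?;
    // flock(2): exclusive lock, released automatically when the fd closes.
    let rc = unsafe { libc::flock(lock_file.as_raw_fd(), libc::LOCK_EX) };
    if rc != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(lock_file)
}

fn main() -> io::Result<()> {
    let _guard = lock_incremental_dir(Path::new("target/incremental"))?;
    // Safe to read and write the directory while `_guard` is alive.
    Ok(())
}
```

Keeping the returned handle alive for the whole compilation session means a second compiler instance blocks (or can be made to fail fast) instead of mutating the directory underneath us.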
This commit rewrites all of the tidy checks we have, namely:
* featureck
* errorck
* tidy
* binaries
into Rust under a new `tidy` tool inside the `src/tools` directory. At the same
time, this deletes all the corresponding Python tidy checks, so we have only one
source of truth for the tidy checks.
cc #31590