These are only called pre-TyCtxt (e.g. lowering/resolve), so make it explicit in
the name that they're untracked and therefore unsuitable to be called elsewhere.
The main use of `CrateStore` *before* the `TyCtxt` is created is during
resolution, but we want to be sure that any methods used before resolution are
not used after the `TyCtxt` is created. This commit starts moving the methods
used by resolve to all be named `{name}_untracked` where the rest of the
compiler uses just `{name}` as a query.
During this transition a number of new queries were added to account for
post-resolve usage of these methods.
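As a rough sketch of the split (placeholder types and a hypothetical method name, not the real compiler signatures):

```rust
// Illustration only: pre-TyCtxt consumers call the explicitly-untracked
// form, while the rest of the compiler goes through a tracked query.
#[derive(Copy, Clone)]
pub struct CrateNum(pub u32);
pub type Symbol = String;

pub trait CrateStore {
    /// Explicitly untracked: only safe to call before the `TyCtxt` exists,
    /// e.g. during lowering and resolution.
    fn crate_name_untracked(&self, cnum: CrateNum) -> Symbol;
}
// Once the `TyCtxt` is up, the same information is requested through a
// tracked query instead, e.g. `tcx.crate_name(cnum)`.
```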
This commit started by moving methods from `CrateStore` to queries, but it ended
up necessitating some deeper refactorings to move more items in general to
queries.
Before this commit the *resolver* would walk over the AST and process foreign
modules (`extern { .. }` blocks) and collect `#[link]` annotations. It would
then also process the command line `-l` directives and such. This information
was then stored as precalculated lists in the `CrateStore` object for iterating
over later.
After this commit, however, this pass no longer happens during resolution but
now instead happens through queries. A query for the linked libraries of a crate
will crawl the crate for `extern` blocks and then process the linkage
annotations at that time.
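Concretely, the sort of source the query now walks on demand looks like this (the library name is just an example):

```rust
// Example input only: an `extern` block whose `#[link]` annotation the
// "linked libraries" query picks up while crawling the crate.
#[link(name = "examplelib")] // hypothetical native library name
extern "C" {
    fn example_native_call(x: i32) -> i32;
}
```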
This commit applies the following changes:
* Deletes the `is_allocator` query as it's no longer used
* Moves the `is_sanitizer_runtime` method to a query
* Moves the `is_profiler_runtime` method to a query
* Moves the `panic_strategy` method to a query
* Moves the `is_no_builtins` method to a query
* Deletes the `CrateStore` method for `is_compiler_builtins`. The query was added
in #42588, but the corresponding `CrateStore` method was never removed
A number of these methods were used late, during linking in trans, so a new
dedicated structure was created to ship a precomputed form of this information
over to the linker rather than handing the whole `TyCtxt` to the linking code.
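A hedged sketch of what such a structure might look like; the name and exact fields are illustrative rather than the real definition:

```rust
// Illustration only: a small, precomputed summary of per-crate facts handed
// to the linking code instead of the whole `TyCtxt`.
#[derive(Copy, Clone, PartialEq, Eq)]
pub enum PanicStrategy { Unwind, Abort }

pub struct LinkerCrateInfo {
    pub panic_strategy: PanicStrategy,
    pub is_no_builtins: bool,
    pub is_profiler_runtime: bool,
    pub is_sanitizer_runtime: bool,
}
```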
This commit adds a new field to the `Item` AST node in libsyntax to optionally
contain the original token stream that the item itself was parsed from. This is
currently `None` everywhere but is intended for use later with procedural
macros.
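Schematically the change looks like this, with the surrounding fields elided and `TokenStream` standing in for the real libsyntax type:

```rust
// Placeholder for the real `TokenStream` type in libsyntax.
pub struct TokenStream;

pub struct Item {
    // ...existing fields elided...
    /// The original tokens this item was parsed from, when available.
    /// Currently always `None`; intended for procedural macros later on.
    pub tokens: Option<TokenStream>,
}
```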
rustc: Implement the #[global_allocator] attribute
This PR is an implementation of [RFC 1974] which specifies a new method of
defining a global allocator for a program. This obsoletes the old
`#![allocator]` attribute and also removes support for it.
[RFC 1974]: https://github.com/rust-lang/rfcs/pull/1974
The new `#[global_allocator]` attribute solves many issues encountered with the
`#![allocator]` attribute such as composition and restrictions on the crate
graph itself. The compiler now has much more control over the ABI of the
allocator and how it's implemented, allowing much more freedom in terms of how
this feature is implemented.
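A minimal usage sketch of the attribute, written against the `System` allocator as it is exposed in today's standard library (at the time of this PR the attribute and allocator types were still feature-gated):

```rust
use std::alloc::System;

// Selects the global allocator for the entire program.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // All heap allocations now go through `System`.
    let v = vec![1, 2, 3];
    println!("{:?}", v);
}
```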
cc #27389
Build instruction profiler runtime as part of compiler-rt
r? @alexcrichton
This is #38608 with some fixes.
Still missing:
- [x] testing with profiler enabled on some builders (on which ones? Should I add the option to some of the already existing configurations, or create a new configuration?);
- [x] enabling distribution (on which builders?);
- [x] documentation.
incr.comp.: Make DepNode `Copy` and valid across compilation sessions
This PR moves `DepNode` to a representation that does not need retracing and thus simplifies comparing dep-graphs from different compilation sessions. The code also gets a lot simpler in many places, since we don't need the generic parameter on `DepNode` anymore. See https://github.com/rust-lang/rust/issues/42294 for details.
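Roughly, the new representation pairs a dep-node kind with a stable fingerprint, so the whole thing is `Copy` and comparable across sessions; the kinds and hash width below are illustrative only:

```rust
// Sketch: no generic parameter and no session-local IDs, just plain data.
#[derive(Copy, Clone, PartialEq, Eq, Hash)]
pub enum DepKind { Hir, TypeckTables, OptimizedMir /* ... */ }

#[derive(Copy, Clone, PartialEq, Eq, Hash)]
pub struct Fingerprint(pub u64, pub u64);

#[derive(Copy, Clone, PartialEq, Eq, Hash)]
pub struct DepNode {
    pub kind: DepKind,
    pub hash: Fingerprint,
}
```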
~~NOTE: Only the last commit of this is new, the rest is already reviewed in https://github.com/rust-lang/rust/pull/42504.~~
This PR is almost done but there are some things I still want to do:
- [x] Add some module-level documentation to `dep_node.rs`, explaining especially what the `define_dep_nodes!()` macro is about.
- [x] Do another pass over the dep-graph loading logic. I suspect that we can get rid of building the `edges` map and also use arrays instead of hash maps in some places.
cc @rust-lang/compiler
r? @nikomatsakis
Fix translation of external spans
Previously, I noticed that spans from external crates don't generate any output. This limitation is problematic if analysis is performed on one or more external crates, as is the case with [rust-semverver](https://github.com/ibabushkin/rust-semverver). This change should address this behaviour, with the potential drawback that a minor performance hit is to be expected, as spans from potentially large crates have to be translated now.
Remove interior mutability from TraitDef by turning fields into queries
This PR gets rid of anything `std::cell` in `TraitDef` by
- moving the global list of trait impls from `TraitDef` into a query,
- moving the list of trait impls relevant to some self-type from `TraitDef` into a query,
- moving the specialization graph of trait impls into a query, and
- moving `TraitDef::object_safety` into a query.
I really like how querifying things not only helps with incremental compilation and on-demand, but also just plain makes the code cleaner `:)`
There are also some smaller fixes in the PR. Commits can be reviewed separately.
r? @eddyb or @nikomatsakis
Move the code for loading metadata from rlibs and dylibs from
rustc_metadata into rustc_trans, and introduce a trait to avoid
introducing a direct dependency on rustc_trans.
This means rustc_metadata is no longer rebuilt when LLVM changes.
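The decoupling looks roughly like the following; the trait and method names are an approximation, not necessarily the exact ones introduced:

```rust
use std::path::Path;

// rustc_metadata depends only on this trait; rustc_trans supplies the
// implementation that actually knows how to read rlibs and dylibs.
pub trait MetadataLoader {
    fn get_rlib_metadata(&self, path: &Path) -> Result<Vec<u8>, String>;
    fn get_dylib_metadata(&self, path: &Path) -> Result<Vec<u8>, String>;
}
```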
rustc: Add a new `-Z force-unstable-if-unmarked` flag
This commit adds a new `-Z` flag to the compiler for use when bootstrapping the
compiler itself. We want to be able to use crates.io crates, but we also want
the usage of such crates to be as ergonomic as possible! To that end compiler
crates are a little tricky in that the crates.io crates are not annotated as
unstable, nor do they expect to pull in unstable dependencies.
To cover all these situations it's intended that the compiler will forever now
bootstrap with `-Z force-unstable-if-unmarked`. This flag serves a dual purpose
of forcing crates.io crates to themselves be unstable while also allowing them
to use other "unstable" crates.io crates. This should mean that adding a
dependency to the compiler no longer requires upstream modification with
unstable/staged_api attributes for inclusion!
This is a more principled version of the `RefCell` we were using
before. We now allocate a `Steal<Mir<'tcx>>` for each intermediate MIR
pass; when the next pass steals the entry, any later attempts to use it
will panic (there is no way to *test* if MIR is stolen, you're just
supposed to *know*).
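A simplified model of `Steal`, assuming a plain `RefCell` rather than whatever cell or lock type the compiler actually uses:

```rust
use std::cell::{Ref, RefCell};

pub struct Steal<T> {
    value: RefCell<Option<T>>,
}

impl<T> Steal<T> {
    pub fn new(value: T) -> Self {
        Steal { value: RefCell::new(Some(value)) }
    }

    /// Read the value; panics if a later pass has already stolen it.
    pub fn borrow(&self) -> Ref<T> {
        Ref::map(self.value.borrow(), |opt| {
            opt.as_ref().expect("attempted to read stolen MIR")
        })
    }

    /// Take the value, leaving `None` behind so any later read panics.
    pub fn steal(&self) -> T {
        self.value
            .borrow_mut()
            .take()
            .expect("attempted to steal already-stolen MIR")
    }
}
```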
The new setup is as follows. There is a pipeline of MIR passes that each
run **per def-id** to optimize a particular function. You are intended
to request MIR at whatever stage you need it. At the moment, there is
only one stage you can request:
- `optimized_mir(def_id)`
This yields the final product. Internally, it pulls the MIR for the
given def-id through a series of steps. Right now, these are still using
an "interned ref-cell" but they are intended to "steal" from one
another:
- `mir_build` -- performs the initial construction for local MIR
- `mir_pass_set` -- performs a suite of optimizations and transformations
- `mir_pass` -- an individual optimization within a suite
So, to construct the optimized MIR, we invoke:
`mir_pass_set((MIR_OPTIMIZED, def_id))`
which will build up the final MIR.
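The overall shape, written as ordinary functions purely for illustration (the real steps are queries keyed by `DefId`, handing results along via the interned ref-cell / `Steal` mechanism described above):

```rust
// Placeholders standing in for the real compiler types.
pub struct Mir;
#[derive(Copy, Clone)]
pub struct DefId(pub u32);

pub fn mir_build(_def_id: DefId) -> Mir { Mir }        // initial construction of local MIR
pub fn mir_pass(mir: Mir) -> Mir { mir }               // one optimization within a suite
pub fn mir_pass_set(mir: Mir) -> Mir { mir_pass(mir) } // a suite of optimizations
pub fn optimized_mir(def_id: DefId) -> Mir {           // the final product
    mir_pass_set(mir_build(def_id))
}
```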
When -Z profile is passed, the GCDAProfiling LLVM pass is added
to the pipeline, which uses debug information to instrument the IR.
After compiling with -Z profile, the $(OUT_DIR)/$(CRATE_NAME).gcno
file is created, containing initial profiling information.
After running the built program, the $(OUT_DIR)/$(CRATE_NAME).gcda
file is created, containing branch counters.
The created *.gcno and *.gcda files can be processed using
the "llvm-cov gcov" and "lcov" tools. The profiling data LLVM
generates does not faithfully follow GCC's format for *.gcno
and *.gcda files, and so it will probably not work with other tools
(such as gcov itself) that consume these files.
Implement a file-path remapping feature in support of debuginfo and reproducible builds
This PR adds the `-Zremap-path-prefix-from`/`-Zremap-path-prefix-to` commandline option pair and is a more general implementation of #41419. As opposed to the previous attempt, this implementation should enable reproducible builds regardless of the working directory of the compiler.
This implementation of the feature is more general in the sense that the re-mapping will affect *all* paths the compiler emits, including the ones in error messages.
r? @alexcrichton
#37653 support `default impl` for specialization
this commit implements the first step of the `default impl` feature:
> all items in a `default impl` are (implicitly) `default` and hence
> specializable.
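As an illustration of those semantics (nightly-only, behind the `specialization` feature gate; the trait and types here are made up):

```rust
#![feature(specialization)]

trait Greet {
    fn greet(&self) -> &'static str;
}

// The whole impl is `default`, so each of its items is implicitly `default`
// and hence specializable...
default impl<T> Greet for T {
    fn greet(&self) -> &'static str { "hello" }
}

// ...meaning a more specific impl may override it.
impl Greet for u8 {
    fn greet(&self) -> &'static str { "hello, byte" }
}
```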
In order to test this feature I've copied all the tests provided for the
`default` method implementation (in run-pass/specialization and
compile-fail/specialization directories) and moved the `default` keyword
from the item to the impl.
See [referenced](https://github.com/rust-lang/rust/issues/37653) issue for further info
r? @aturon
this avoids parsing item attributes on each call to `item_attrs`, which takes
off 33% (!) of translation time and 50% (!) of trans-item collection time.
This may seem like overkill, but it's exactly what we want/need for
incremental compilation I think. In particular, while generating code
for some codegen unit X, we can wind up querying about any number of
external items, and we only want to be forced to rebuild X if some of those
changed from a foreign item to a non-foreign one. Factoring this into a
query means we would re-run only if some `false` became `true` (or vice
versa).
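A toy model of that rebuild condition, with made-up names, just to spell out the point:

```rust
// The consumer (e.g. a codegen unit) records the boolean answer it observed;
// it only has to be re-run if a fresh computation gives a different answer.
#[derive(Copy, Clone)]
pub struct RecordedForeignness {
    pub answer: bool,
}

impl RecordedForeignness {
    pub fn forces_rebuild(&self, fresh_answer: bool) -> bool {
        self.answer != fresh_answer
    }
}
```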
Instead of collecting all potential inputs to some metadata entry and
hashing those, we directly hash the values we are storing in metadata.
This is more accurate and doesn't suffer from quadratic blow-up when
many entries have the same dependencies.
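In code terms the difference is roughly this; `DefaultHasher` stands in for the compiler's own stable hasher:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash the value that actually gets stored in metadata, rather than
// re-hashing the (possibly shared, possibly large) set of inputs that
// produced it for every entry.
pub fn hash_of_stored_value<T: Hash>(value: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    hasher.finish()
}
```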
cstore: return an immutable borrow from `visible_parent_map`
This prevents an ICE when `visible_parent_map` is called multiple times, for example when an item referenced in an impl signature is imported from an `extern crate` statement that occurs within the impl.
Fixes #41053.
r? @eddyb
on-demand-ify `custom_coerce_unsized_kind` and `inherent-impls`
This "on-demand" task both checks for errors and computes the custom unsized kind, if any. The task is only defined on impls of `CoerceUnsized`; invoking it on any other kind of impl results in a bug. This is just to avoid having an `Option` and could easily be changed.
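For reference, this is the kind of impl the task is defined on: a custom pointer type implementing `CoerceUnsized` (nightly features `coerce_unsized` and `unsize`; the type is made up):

```rust
#![feature(coerce_unsized, unsize)]

use std::marker::Unsize;
use std::ops::{CoerceUnsized, Deref};

struct MyBox<T: ?Sized>(Box<T>);

// The on-demand task inspects impls like this one to determine (and
// validate) how the unsizing coercion is performed.
impl<T: ?Sized + Unsize<U>, U: ?Sized> CoerceUnsized<MyBox<U>> for MyBox<T> {}

impl<T: ?Sized> Deref for MyBox<T> {
    type Target = T;
    fn deref(&self) -> &T { &self.0 }
}

fn main() {
    // Coerces `MyBox<[i32; 3]>` to `MyBox<[i32]>` via the impl above.
    let b: MyBox<[i32]> = MyBox(Box::new([1, 2, 3]));
    println!("{}", b.len());
}
```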
r? @eddyb