Commit Graph

133 Commits

Author SHA1 Message Date
bors
4f372b14de Auto merge of #97239 - jhpratt:remove-crate-vis, r=joshtriplett
Remove `crate` visibility modifier

The FCP to remove this syntax is just about complete in #53120. Once it completes, this should be merged ASAP to avoid merge conflicts.

The first two commits remove usage of the feature in this repository, while the last removes the feature itself.
2022-05-21 06:38:49 +00:00
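For context, the removed `crate` shorthand was equivalent to `pub(crate)`. A minimal before/after sketch (illustrative only, not taken from the PR diff):

```
// Before (behind `#![feature(crate_visibility_modifier)]`):
//     crate fn helper() -> u32 { 42 }

// After: the stable, equivalent spelling.
pub(crate) fn helper() -> u32 {
    42
}

fn main() {
    println!("{}", helper());
}
```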
Jacob Pratt
49c82f31a8
Remove crate visibility usage in compiler 2022-05-20 20:04:54 -04:00
Camille GILLOT
9900ea352b Cache more queries on disk. 2022-05-13 08:06:48 +02:00
xFrednet
2c5e85249f
Move lint expectation checking into a separate query (RFC 2383) 2022-05-08 14:37:14 +02:00
Oli Scherer
0d5a738b8b Enable tracing for all queries 2022-05-04 16:15:26 +00:00
b-naber
28af967bb9 implement (as of now still unused) query for valtree -> constvalue conversion 2022-04-21 16:37:24 +02:00
Camille GILLOT
07ee031763 Stop using CRATE_DEF_INDEX.
`CRATE_DEF_ID` and `CrateNum::as_def_id` are almost always what we want.
2022-04-17 12:14:42 +02:00
lcnr
bef6f3e895 rework implementation for inherent impls for builtin types 2022-03-30 11:23:58 +02:00
klensy
008fc79dcd Propagate the parallel_compiler feature through rustc crates. With the feature turned off, the number of built crates drops from 238 to 224. 2022-03-28 08:41:12 +03:00
bors
3b1fe7e7c9 Auto merge of #94084 - Mark-Simulacrum:drop-sharded, r=cjgillot
Avoid query cache sharding code in single-threaded mode

In non-parallel compilers, this is just adding needless overhead at compilation time (since there is statically only one shard anyway). This amounts to roughly a 10-second reduction in bootstrap time, with overall neutral (some wins, some losses) performance results.

Parallel compiler performance should be largely unaffected by this PR; sharding is kept there.
2022-02-27 14:04:07 +00:00
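A rough sketch of the idea with hypothetical types (not the rustc data structures): in a non-parallel build the cache statically collapses to a single shard, so no shard index ever needs to be computed.

```
use std::collections::HashMap;

// In a parallel compiler the key's hash would pick one of N locked shards;
// with a single shard, shard selection is pure overhead and can be skipped.
const SHARDS: usize = 1; // conceptually: 1 without parallel_compiler, else e.g. 32

struct ShardedCache<K, V> {
    shards: Vec<HashMap<K, V>>,
}

impl<K: std::hash::Hash + Eq, V> ShardedCache<K, V> {
    fn new() -> Self {
        ShardedCache { shards: (0..SHARDS).map(|_| HashMap::new()).collect() }
    }

    fn insert(&mut self, key: K, value: V) {
        // Single-threaded: always shard 0, so the key is never hashed just to
        // pick a shard (a parallel build would compute hash(key) % SHARDS).
        self.shards[0].insert(key, value);
    }
}

fn main() {
    let mut cache = ShardedCache::new();
    cache.insert("typeck", 1u32);
    println!("{} shard(s), {} entries", cache.shards.len(), cache.shards[0].len());
}
```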
Mark Rousskov
22c3a71de1 Switch bootstrap cfgs 2022-02-25 08:00:52 -05:00
bors
026d8ce7f5 Auto merge of #94066 - Mark-Simulacrum:factor-out-simple-def-kind, r=davidtwco
Remove SimpleDefKind

Now that rustc_query_system depends on rustc_hir, we can just directly make use of the regular DefKind.
2022-02-21 03:36:55 +00:00
Mark Rousskov
75ef068920 Delete QueryLookup
This was largely just caching the shard value at this point, which is not
particularly useful -- at the use sites the key was being hashed nearby anyway.
2022-02-20 12:11:28 -05:00
Mark Rousskov
9deed6f74e Move Sharded maps into each QueryCache impl 2022-02-20 12:10:46 -05:00
Mark Rousskov
ddda851fd5 Remove SimpleDefKind 2022-02-17 18:08:45 -05:00
Mark Rousskov
9763486034 Move ty::print methods to Drop-based scope guards 2022-02-16 17:24:23 -05:00
Nicholas Nethercote
a95fb8b150 Overhaul Const.
Specifically, rename the `Const` struct as `ConstS` and re-introduce `Const` as
this:
```
pub struct Const<'tcx>(&'tcx Interned<ConstS>);
```
This now matches `Ty` and `Predicate` more closely, including using
pointer-based `eq` and `hash`.

Notable changes:
- `mk_const` now takes a `ConstS`.
- `Const` was `Copy`, despite being 48 bytes. Now `ConstS` is not, so we need a
  separate arena for it, because we can't use the `Dropless` one any more.
- Many `&'tcx Const<'tcx>`/`&Const<'tcx>` to `Const<'tcx>` changes.
- Many `ct.ty` to `ct.ty()` and `ct.val` to `ct.val()` changes.
- Lots of tedious sigil fiddling.
2022-02-15 16:19:59 +11:00
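A self-contained sketch of the pattern described above (simplified; in the compiler the data lives in an arena and goes through the shared `Interned` type):

```
// A Copy handle to interned data: equality can be pointer equality because
// the interner guarantees each distinct ConstS is allocated exactly once.
#[derive(Clone, Copy)]
struct Const<'tcx>(&'tcx ConstS);

struct ConstS {
    // payload fields (ty, val, ...) elided
    bits: u128,
}

impl PartialEq for Const<'_> {
    fn eq(&self, other: &Self) -> bool {
        std::ptr::eq(self.0, other.0)
    }
}
impl Eq for Const<'_> {}

fn main() {
    let interned = ConstS { bits: 7 };
    let c1 = Const(&interned);
    let c2 = Const(&interned);
    assert!(c1 == c2); // same allocation, so equal without comparing contents
    println!("{}", c1.0.bits);
}
```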
Nicholas Nethercote
e9a0c429c5 Overhaul TyS and Ty.
Specifically, change `Ty` from this:
```
pub type Ty<'tcx> = &'tcx TyS<'tcx>;
```
to this:
```
pub struct Ty<'tcx>(Interned<'tcx, TyS<'tcx>>);
```
There are two benefits to this.
- It's now a first class type, so we can define methods on it. This
  means we can move a lot of methods away from `TyS`, leaving `TyS` as a
  barely-used type, which is appropriate given that it's not meant to
  be used directly.
- The uniqueness requirement is now explicit, via the `Interned` type.
  E.g. the pointer-based `Eq` and `Hash` comes from `Interned`, rather
  than via `TyS`, which wasn't obvious at all.

Much of this commit is boring churn. The interesting changes are in
these files:
- compiler/rustc_middle/src/arena.rs
- compiler/rustc_middle/src/mir/visit.rs
- compiler/rustc_middle/src/ty/context.rs
- compiler/rustc_middle/src/ty/mod.rs

Specifically:
- Most mentions of `TyS` are removed. It's very much a dumb struct now;
  `Ty` has all the smarts.
- `TyS` now has `crate` visibility instead of `pub`.
- `TyS::make_for_test` is removed in favour of the static `BOOL_TY`,
  which just works better with the new structure.
- The `Eq`/`Ord`/`Hash` impls are removed from `TyS`. `Interned`'s impls
  of `Eq`/`Hash` now suffice. `Ord` is now partly on `Interned`
  (pointer-based, for the `Equal` case) and partly on `TyS`
  (contents-based, for the other cases).
- There are many tedious sigil adjustments, i.e. adding or removing `*`
  or `&`. They seem to be unavoidable.
2022-02-15 16:03:24 +11:00
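A minimal, heavily simplified sketch of that structure (the stand-in types below are illustrative, not the compiler's definitions):

```
use std::hash::{Hash, Hasher};

// Simplified stand-in for `Interned`: it makes the uniqueness contract
// explicit and supplies pointer-based Eq/Hash for any interned T.
struct Interned<'a, T>(&'a T);

impl<'a, T> Clone for Interned<'a, T> {
    fn clone(&self) -> Self { *self }
}
impl<'a, T> Copy for Interned<'a, T> {}

impl<'a, T> PartialEq for Interned<'a, T> {
    fn eq(&self, other: &Self) -> bool { std::ptr::eq(self.0, other.0) }
}
impl<'a, T> Eq for Interned<'a, T> {}

impl<'a, T> Hash for Interned<'a, T> {
    fn hash<H: Hasher>(&self, state: &mut H) { (self.0 as *const T).hash(state) }
}

// `Ty` becomes a thin, Copy, first-class wrapper that carries the methods,
// leaving `TyS` as a dumb data holder.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Ty<'tcx>(Interned<'tcx, TyS>);

struct TyS {
    kind: u32, // stand-in for TyKind<'tcx>
}

impl<'tcx> Ty<'tcx> {
    fn kind(self) -> u32 {
        (self.0).0.kind
    }
}

fn main() {
    let interned = TyS { kind: 1 };
    let ty = Ty(Interned(&interned));
    assert_eq!(ty.kind(), 1);
    assert!(ty == Ty(Interned(&interned))); // pointer equality via Interned
}
```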
bors
e7aca89598 Auto merge of #93741 - Mark-Simulacrum:global-job-id, r=cjgillot
Refactor query system to maintain a global job id counter

This replaces the per-shard counters with a single global counter, simplifying
the JobId struct down to just a u64 and removing the need to pipe a DepKind
generic through a bunch of code. The performance implications on non-parallel
compilers are likely minimal (this switches to `Cell<u64>` as the backing
storage over a `u64`, but the latter was already inside a `RefCell`, so it's not
really a significant divergence). On parallel compilers, the cost of a single
global u64 counter may be more significant: it adds a serialization point in
theory. On the other hand, we can imagine giving the counter a thread-local
component, or some similar structure, if it becomes worrisome.

The new design is sufficiently simpler that it warrants the potential for slight
changes down the line if/when we get parallel compilation to be more of a
default.

A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid
possibly overflowing it and causing problems; it is effectively impossible that
we would overflow a u64 counter in this context.
2022-02-09 18:54:30 +00:00
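As a rough illustration of the non-parallel case described above (hypothetical names, not the query engine's actual fields):

```
use std::cell::Cell;

// A single session-global counter: a Cell<u64> is enough without threads,
// replacing one counter per query-cache shard.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct QueryJobId(u64);

struct JobIdCounter {
    next: Cell<u64>,
}

impl JobIdCounter {
    fn new() -> Self {
        JobIdCounter { next: Cell::new(1) }
    }

    fn next_id(&self) -> QueryJobId {
        let id = self.next.get();
        // A u64 will not realistically overflow over a single compilation.
        self.next.set(id + 1);
        QueryJobId(id)
    }
}

fn main() {
    let counter = JobIdCounter::new();
    assert_eq!(counter.next_id(), QueryJobId(1));
    assert_eq!(counter.next_id(), QueryJobId(2));
}
```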
bors
9747ee4755 Auto merge of #93724 - Mark-Simulacrum:drop-query-stats, r=michaelwoerister
Delete -Zquery-stats infrastructure

These statistics are computable from the self-profile data and/or collectable ad hoc as needed, and in the meantime they contribute to rustc bootstrap times -- locally, this PR shaves ~2.5% from rustc_query_impl builds in instruction counts.

If this does lose some functionality we want to keep, I think we should migrate it to self-profile (or a similar interface) rather than this ad-hoc reporting.
2022-02-09 15:53:10 +00:00
Mark Rousskov
e240783a4d Switch QueryJobId to a single global counter
This replaces the per-shard counters with a single global counter, simplifying
the JobId struct down to just a u64 and removing the need to pipe a DepKind
generic through a bunch of code. The performance implications on non-parallel
compilers are likely minimal (this switches to `Cell<u64>` as the backing
storage over a `u64`, but the latter was already inside a `RefCell`, so it's not
really a significant divergence). On parallel compilers, the cost of a single
global u64 counter may be more significant: it adds a serialization point in
theory. On the other hand, we can imagine giving the counter a thread-local
component, or some similar structure, if it becomes worrisome.

The new design is sufficiently simpler that it warrants the potential for slight
changes down the line if/when we get parallel compilation to be more of a
default.

A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid
possibly overflowing it and causing problems; it is effectively impossible that
we would overflow a u64 counter in this context.
2022-02-08 18:49:55 -05:00
klensy
eb3b29fd09 14956 -> 14952 exports 2022-02-08 00:13:31 +03:00
klensy
7a75ebed09 15221 -> 14956 exports 2022-02-07 22:45:29 +03:00
Mark Rousskov
257839bd88 Delete query stats
These statistics are computable from the self-profile data and/or collectable ad
hoc as needed, and in the meantime they contribute to rustc bootstrap times.
2022-02-06 21:35:00 -05:00
lcnr
a1a30f7548 add a rustc::query_stability lint 2022-02-01 10:15:59 +01:00
Nicholas Nethercote
416399dc10 Make Decodable and Decoder infallible.
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it
  currently panics on failure (e.g. if the input is too short, or on a
  bad `Result` discriminant), and in some places it returns an error
  (e.g. on a bad `Option` discriminant). The number of places where
  either happens is surprisingly small, just because the binary
  representation has very little redundancy and a lot of input reading
  can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the
  `.rlink` file production, and there's a `FIXME` comment suggesting it
  should change to a binary format, and (b) in a few tests in
  non-fundamental ways. Indeed #85993 is open to remove it entirely.

And the top-level places in the compiler that call into decoding just
abort on error anyway. So the fallibility is providing little value, and
getting rid of it leads to some non-trivial performance improvements.

Much of this commit is pretty boring and mechanical. Some notes about
a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same
  optimization for small counts that the impl for `Result<T, E>` has,
  because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, VecDeque now all use
  `collect`, which is nice; the one for `Vec` uses unsafe code, because
  that gave better perf on some benchmarks.
2022-01-22 10:38:31 +11:00
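A greatly simplified sketch of the shape change (illustrative; this is not the rustc_serialize API):

```
// Fallible shape: every read threads a Result through the caller.
#[allow(dead_code)]
trait FallibleDecoder {
    type Error;
    fn read_u32(&mut self) -> Result<u32, Self::Error>;
}

// Infallible shape: read directly and panic on malformed input, which is
// close to what the opaque (binary) decoder effectively did already.
trait InfallibleDecoder {
    fn read_u32(&mut self) -> u32;
}

struct ByteReader<'a> {
    data: &'a [u8],
    pos: usize,
}

impl InfallibleDecoder for ByteReader<'_> {
    fn read_u32(&mut self) -> u32 {
        // Panics via slice indexing if the input is truncated.
        let data = self.data;
        let b = &data[self.pos..self.pos + 4];
        self.pos += 4;
        u32::from_le_bytes([b[0], b[1], b[2], b[3]])
    }
}

fn main() {
    let mut r = ByteReader { data: &[1, 0, 0, 0], pos: 0 };
    assert_eq!(r.read_u32(), 1);
}
```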
Aaron Hill
70d36a05bc
Show a more informative panic message when DefPathHash does not exist
This should hopefully make it easier to debug incremental compilation
bugs like #93096 without affecting performance.
2022-01-19 17:36:44 -05:00
Josh Stone
f3b8812f24 Update rayon and rustc-rayon 2022-01-10 11:34:07 -08:00
Aaron Hill
d9220924dc
Import SourceFiles from crate before decoding foreign Span
Fixes #92163
Fixes #92014

When writing to the incremental cache, we encode all `Span`s
we encounter, regardless of whether or not their `SourceFile`
comes from the local crate, or from a foreign crate.

When we decode a `Span`, we use the `StableSourceFileId` we encoded
to locate the matching `SourceFile` in the current session. If this
id corresponds to a `SourceFile` from another crate, then we need to
have already imported that `SourceFile` into our current session.

This usually happens automatically during resolution / macro expansion,
when we try to resolve definitions from other crates. In certain cases,
however, we may try to load a `Span` from a transitive dependency
without having ever imported the `SourceFile`s from that crate, leading
to an ICE.

This PR fixes the issue by calling `imported_source_files()`
when we encounter a `SourceFile` with a foreign `CrateNum`.
This ensures that all `SourceFile`s from that crate are imported
into the current session.
2021-12-23 12:56:12 -05:00
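A hypothetical, much-simplified sketch of that ordering requirement (the `Session` type, `import_source_files` helper, and id layout here are illustrative, not rustc's actual types):

```
use std::collections::{HashMap, HashSet};

type CrateNum = u32;
// Stand-in for StableSourceFileId: which crate the file came from, plus a hash.
type SourceFileId = (CrateNum, u64);

const LOCAL_CRATE: CrateNum = 0;

struct Session {
    imported_crates: HashSet<CrateNum>,
    source_files: HashMap<SourceFileId, String>,
}

impl Session {
    // Stand-in for importing every SourceFile of `cnum` into the session.
    fn import_source_files(&mut self, cnum: CrateNum) {
        if self.imported_crates.insert(cnum) {
            self.source_files.insert((cnum, 0), format!("crate-{}.rs", cnum));
        }
    }

    fn decode_span(&mut self, id: SourceFileId) -> String {
        let (cnum, _) = id;
        if cnum != LOCAL_CRATE {
            // The fix: eagerly import the foreign crate's SourceFiles before
            // the lookup, so it cannot miss and ICE.
            self.import_source_files(cnum);
        }
        self.source_files[&id].clone()
    }
}

fn main() {
    let mut session = Session {
        imported_crates: HashSet::new(),
        source_files: HashMap::new(),
    };
    // A Span from a transitive dependency that was never touched before.
    println!("{}", session.decode_span((5, 0)));
}
```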
bors
a41a6925ba Auto merge of #91957 - nnethercote:rm-SymbolStr, r=oli-obk
Remove `SymbolStr`

This was originally proposed in https://github.com/rust-lang/rust/pull/74554#discussion_r466203544. As well as removing the icky `SymbolStr` type, it allows the removal of a lot of `&` and `*` occurrences.

Best reviewed one commit at a time.

r? `@oli-obk`
2021-12-19 09:31:37 +00:00
Nicholas Nethercote
8cddcd39ba Remove SymbolStr.
By changing `as_str()` to take `&self` instead of `self`, we can just
return `&str`. We're still lying about lifetimes, but it's a smaller lie
than before, where `SymbolStr` contained a (fake) `&'static str`!
2021-12-15 13:30:26 +11:00
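A small sketch of the resulting API shape (simplified; the real `Symbol` is backed by a global interner):

```
// Interned string handle: `as_str` now borrows &self and returns a plain
// &str, so no wrapper type and no extra `&`/`*` at the call sites.
struct Symbol(usize);

static STRINGS: [&str; 2] = ["foo", "bar"];

impl Symbol {
    fn as_str(&self) -> &str {
        STRINGS[self.0]
    }
}

fn main() {
    let sym = Symbol(1);
    assert_eq!(sym.as_str(), "bar");
}
```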
LegionMammal978
77a0c65264 Remove in_band_lifetimes from rustc_query_impl
See #91867 for more information.
2021-12-14 12:13:07 -05:00
Deadbeef
42963f4d50
Query modifier 2021-12-12 12:35:00 +08:00
est31
15de4cbc4b Remove redundant [..]s 2021-12-09 00:01:29 +01:00
Mark Rousskov
3215eeb99f
Revert "Add rustc lint, warning when iterating over hashmaps" 2021-10-28 11:01:42 -04:00
bjorn3
f5c3e83013 Avoid a branch on key being local for queries that use the same local and extern providers 2021-10-25 13:36:23 +02:00
bors
28d0e75269 Auto merge of #90210 - cjgillot:qarray2, r=Mark-Simulacrum
Build the query vtable directly.

Continuation of https://github.com/rust-lang/rust/pull/89978.

This shrinks the query interface and attempts to reduce the number of function pointer calls.
2021-10-25 01:10:50 +00:00
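A toy sketch of the idea (hypothetical field names): the per-query behaviour is packed into one plain record of function pointers and flags that is built directly.

```
// One record per query, built directly instead of going through extra
// layers of generated trait plumbing.
struct QueryVTable<K, V> {
    compute: fn(K) -> V,
    cache_on_disk: bool,
}

fn type_of(key: u32) -> u64 {
    u64::from(key) * 2 // stand-in for the real provider
}

fn main() {
    let vtable: QueryVTable<u32, u64> = QueryVTable {
        compute: type_of,
        cache_on_disk: true,
    };
    assert_eq!((vtable.compute)(21), 42);
    assert!(vtable.cache_on_disk);
}
```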
Matthias Krüger
87822b27ee
Rollup merge of #89558 - lcnr:query-stable-lint, r=estebank
Add rustc lint, warning when iterating over hashmaps

r? rust-lang/wg-incr-comp
2021-10-24 15:48:42 +02:00
Camille GILLOT
138e96b719 Do not require QueryCtxt for cache_on_disk. 2021-10-23 18:12:43 +02:00
Camille GILLOT
7c0920f5fb Build the query vtable directly. 2021-10-23 16:59:19 +02:00
Camille GILLOT
0a5666b838 Do not depend on the stored value when trying to cache on disk. 2021-10-21 20:00:45 +02:00
Camille GILLOT
bd5c107672 Build jump table at runtime. 2021-10-20 18:32:29 +02:00
Camille GILLOT
602d3cbce3 Invoke callbacks from rustc_middle. 2021-10-20 18:29:33 +02:00
Camille GILLOT
b09de95fab Merge two query callbacks arrays. 2021-10-20 18:29:27 +02:00
Camille GILLOT
aa404c24dd Make hash_result an Option. 2021-10-20 18:29:18 +02:00
Camille GILLOT
e53404cca6 Move def_path_hash_to_def_id to rustc_middle. 2021-10-20 18:28:54 +02:00
Yuki Okushi
3d95330230
Rollup merge of #87404 - rylev:artifact-size-profiling, r=wesleywiser
Add support for artifact size profiling

This adds support for profiling artifact file sizes (incremental compilation artifacts and query cache to begin with).

Eventually we want to track this in perf.rlo so we can ensure that file sizes do not change dramatically on each pull request.

This relies on support in measureme: https://github.com/rust-lang/measureme/pull/169. Once that lands we can update this PR to not point to a git dependency.

This was worked on together with `@michaelwoerister`.

r? `@wesleywiser`
2021-10-20 04:35:11 +09:00
lcnr
00e5abe9b6 allow potential_query_instability everywhere 2021-10-15 10:58:18 +02:00
Mark Rousskov
127373822e Remove built-in cache_hit tracking
This was only enabled in debug_assertions builds to begin with. Generally, it seems
like most use cases that would use this could also use the -Zself-profile flag,
which also tracks cache hits (in all builds), so the extra cfgs and such
are not really necessary.

This is largely just a small cleanup though, which primarily is intended to make
other changes easier by avoiding the need to deal with this field.
2021-10-11 16:33:49 -04:00
bors
15491d7b6b Auto merge of #89343 - Mark-Simulacrum:no-args-queries, r=cjgillot
Refactor fingerprint reconstruction

This PR replaces can_reconstruct_query_key with fingerprint_style, which returns the style of the fingerprint for that query. This allows us to avoid trying to extract a DefId (or equivalent) from keys that *are* reconstructible (because they are `()`) but not as DefIds.

This is done with the goal of fixing -Zdump-dep-graph, which seems to have broken a while ago (I didn't try to bisect). Currently it ICEs even on a `fn main() {}` file (you also need to pass -Zquery-dep-graph for it to work at all), and this patch indirectly fixes the cause of that ICE. This also adds a test to ensure it keeps working.
2021-10-09 13:13:07 +00:00
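A minimal sketch of the distinction being drawn (the variant set here is illustrative):

```
// Instead of a yes/no `can_reconstruct_query_key`, each query reports what
// kind of fingerprint its key produces, so `()` keys are not forced through
// the DefId-reconstruction path.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum FingerprintStyle {
    DefPathHash, // key can be turned back into a DefId
    Unit,        // key is (), trivially reconstructible but not a DefId
    Opaque,      // key cannot be reconstructed at all
}

impl FingerprintStyle {
    fn reconstructible(self) -> bool {
        !matches!(self, FingerprintStyle::Opaque)
    }
}

fn main() {
    let style = FingerprintStyle::Unit;
    assert!(style.reconstructible());
    assert_ne!(style, FingerprintStyle::DefPathHash);
}
```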