Commit Graph

161 Commits

Author SHA1 Message Date
Camille GILLOT
e912c8dfe0 Use a dedicated DepKind for the forever-red node. 2022-07-06 23:20:12 +02:00
Camille GILLOT
15530a1c84 Create a forever red node and use it to force side effects. 2022-07-06 23:11:44 +02:00
Camille GILLOT
43bb31b954 Allow creating definitions inside the query system. 2022-07-06 22:50:55 +02:00
Dylan DPC
707c0d9a2d
Rollup merge of #98881 - cjgillot:q-def-kind, r=fee1-dead
Only compute DefKind through the query.
2022-07-06 14:49:08 +05:30
Camille GILLOT
974b1e3e51 Clarify the behaviour from inside the query system. 2022-07-05 22:52:21 +02:00
Camille GILLOT
e3d63203a3 Only compute DefKind through the query. 2022-07-04 10:42:23 +02:00
SparrowLii
fbca21edd2 get rid of tcx in the deadlock handler when using parallel compilation 2022-06-29 10:02:30 +08:00
bors
3a8b0144c8 Auto merge of #98106 - cjgillot:split-definitions, r=michaelwoerister
Split up `Definitions` and `ResolverAstLowering`.

Split off https://github.com/rust-lang/rust/pull/95573

r? `@michaelwoerister`
2022-06-17 10:00:11 +00:00
Nicholas Nethercote
bb02cc47c4 Move finish out of the Encoder trait.
This simplifies things, but requires making `CacheEncoder` non-generic.

(This was previously merged as commit 4 in #94732 and then was reverted
in #97905 because it caused a perf regression.)
2022-06-16 16:20:32 +10:00
Yuki Okushi
97b9347c93
Rollup merge of #98083 - nnethercote:rename-Encoder, r=bjorn3
Rename rustc_serialize::opaque::Encoder as MemEncoder.

This avoids the name clash with `rustc_serialize::Encoder` (a trait),
and allows lots of qualifiers to be removed and imports to be simplified
(e.g. fewer `as` imports).

(This was previously merged as commit 5 in #94732 and then was reverted
in #97905 because of a perf regression caused by commit 4 in #94732.)

r? `@bjorn3`
2022-06-15 12:02:04 +09:00
Yuki Okushi
bb4805118a
Rollup merge of #98067 - klensy:compiler-deps2, r=Dylan-DPC
compiler: remove unused deps

Removed unused dependencies in compiler crates and moved a few `libc` dependencies under `target.cfg(unix)`.
2022-06-15 12:02:02 +09:00
Camille GILLOT
34e4d72929 Separate source_span and expn_that_defined from Definitions. 2022-06-14 22:45:51 +02:00
Nicholas Nethercote
abe45a9ffa Rename rustc_serialize::opaque::Encoder as MemEncoder.
This avoids the name clash with `rustc_serialize::Encoder` (a trait),
and allows lots of qualifiers to be removed and imports to be simplified
(e.g. fewer `as` imports).

(This was previously merged as commit 5 in #94732 and then was reverted
in #97905 because of a perf regression caused by commit 4 in #94732.)
2022-06-14 14:52:01 +10:00
klensy
4ea4e2e76d remove currently unused deps 2022-06-13 22:20:51 +03:00
Eduard-Mihai Burtescu
d76573abd1 Integrate measureme's hardware performance counter support. 2022-06-13 07:56:47 +00:00
Nicholas Nethercote
3186e311e5 Revert dc08bc51f2. 2022-06-10 11:58:29 +10:00
Nicholas Nethercote
7f51a1b976 Revert b983e42936. 2022-06-10 08:35:03 +10:00
Nicholas Nethercote
b983e42936 Rename rustc_serialize::opaque::Encoder as MemEncoder.
This avoids the name clash with `rustc_serialize::Encoder` (a trait),
and allows lots of qualifiers to be removed and imports to be simplified
(e.g. fewer `as` imports).
2022-06-08 09:50:44 +10:00
Nicholas Nethercote
dc08bc51f2 Move finish out of the Encoder trait.
This simplifies things, but requires making `CacheEncoder` non-generic.
2022-06-08 09:21:05 +10:00
Nicholas Nethercote
1acbe7573d Use delayed error handling to make Encodable and Encoder infallible.
There are two impls of the `Encoder` trait: `opaque::Encoder` and
`opaque::FileEncoder`. The former encodes into memory and is infallible, the
latter writes to file and is fallible.

Currently, standard `Result`/`?`/`unwrap` error handling is used, but this is a
bit verbose and has non-trivial cost, which is annoying given how rare failures
are (especially in the infallible `opaque::Encoder` case).

This commit changes how `Encoder` fallibility is handled. All the `emit_*`
methods are now infallible. `opaque::Encoder` requires no great changes for
this. `opaque::FileEncoder` now implements a delayed error handling strategy.
If a failure occurs, it records this via the `res` field, and all subsequent
encoding operations are skipped if `res` indicates an error has occurred. Once
encoding is complete, the new `finish` method is called, which returns a
`Result`. In other words, there is now a single `Result`-producing method
instead of many of them.

This has very little effect on how file errors are reported when
`opaque::FileEncoder` encounters a failure.

Much of this commit is boring mechanical changes, removing `Result` return
values and `?` or `unwrap` from expressions. The more interesting parts are as
follows.
- serialize.rs: The `Encoder` trait gains an `Ok` associated type. The
  `into_inner` method is changed into `finish`, which returns
  `Result<Vec<u8>, !>`.
- opaque.rs: The `FileEncoder` adopts the delayed error handling
  strategy. Its `Ok` type is a `usize`, returning the number of bytes
  written, replacing previous uses of `FileEncoder::position`.
- Various methods that take an encoder now consume it, rather than being
  passed a mutable reference, e.g. `serialize_query_result_cache`.
2022-06-08 07:01:26 +10:00
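
A minimal sketch of the delayed-error strategy described above. The `res` field and the single `finish` method follow the commit message; the struct name, the `emit_*` selection, and the byte counting are illustrative assumptions, not the actual `opaque::FileEncoder` API.

```
use std::io::{self, Write};

/// Illustrative encoder: `emit_*` calls are infallible, the first write error
/// is remembered in `res`, and `finish` reports it (or the byte count) at the end.
struct DelayedErrorEncoder<W: Write> {
    sink: W,
    /// First error encountered, if any; once set, further writes are skipped.
    res: io::Result<()>,
    written: usize,
}

impl<W: Write> DelayedErrorEncoder<W> {
    fn new(sink: W) -> Self {
        Self { sink, res: Ok(()), written: 0 }
    }

    /// Infallible from the caller's point of view: errors are recorded, not returned.
    fn emit_bytes(&mut self, bytes: &[u8]) {
        if self.res.is_err() {
            return; // an earlier write failed; skip all further encoding
        }
        match self.sink.write_all(bytes) {
            Ok(()) => self.written += bytes.len(),
            Err(e) => self.res = Err(e),
        }
    }

    fn emit_u32(&mut self, v: u32) {
        self.emit_bytes(&v.to_le_bytes());
    }

    /// The single `Result`-producing method: the stored error surfaces here.
    fn finish(self) -> io::Result<usize> {
        let written = self.written;
        self.res.map(|()| written)
    }
}

fn main() -> io::Result<()> {
    // A `Vec<u8>` sink never fails, but the flow is the same for a file.
    let mut enc = DelayedErrorEncoder::new(Vec::new());
    enc.emit_u32(42);
    enc.emit_bytes(b"hello");
    let written = enc.finish()?;
    println!("wrote {written} bytes");
    Ok(())
}
```
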
Jack Huey
410dcc9674 Fully stabilize NLL 2022-06-03 17:16:41 -04:00
bjorn3
7381ea019c Remove emit_unit
It doesn't do anything in any of the encoders
2022-06-03 17:02:14 +00:00
Nicholas Nethercote
0b81d7cdc6 Lazify SourceFile::lines.
`SourceFile::lines` is a big part of metadata. It's stored in a compressed form
(a difference list) to save disk space. Decoding it is a big fraction of
compile time for very small crates/programs.

This commit introduces a new type `SourceFileLines` which has a `Lines`
form and a `Diffs` form. The latter is used when the metadata is first
read, and it is only decoded into the `Lines` form when line data is
actually needed. This avoids the decoding cost for many files,
especially in `std`. It's a performance win of up to 15% for tiny
crates/programs where metadata decoding is a large part of compilation
costs.

A `Lock` is needed because the methods that access lines data (which can
trigger decoding) take `&self` rather than `&mut self`. To allow for this,
`SourceFile::lines` now takes a `FnMut` that operates on the lines slice rather
than returning the lines slice.
2022-06-01 10:36:39 +10:00
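
A simplified sketch of the lazy decoding described above, assuming `std::sync::Mutex` in place of the compiler's `Lock` and a plain list of deltas in place of the real compressed form; the `SourceFileLines` name follows the commit, everything else is illustrative.

```
use std::sync::Mutex;

/// Line data either still in its compact on-disk form ("diffs") or decoded
/// into absolute line-start positions.
enum SourceFileLines {
    Diffs(Vec<u32>),
    Lines(Vec<u32>),
}

struct SourceFile {
    /// A lock is needed because decoding is triggered from `&self` methods.
    lines: Mutex<SourceFileLines>,
}

impl SourceFile {
    fn from_diffs(diffs: Vec<u32>) -> Self {
        SourceFile { lines: Mutex::new(SourceFileLines::Diffs(diffs)) }
    }

    /// Decode on first use, then pass the slice to the caller's closure
    /// (returning the slice directly would outlive the lock guard).
    fn lines<R>(&self, mut f: impl FnMut(&[u32]) -> R) -> R {
        let mut guard = self.lines.lock().unwrap();
        if let SourceFileLines::Diffs(diffs) = &*guard {
            let mut pos = 0u32;
            let decoded: Vec<u32> = diffs.iter().map(|d| { pos += d; pos }).collect();
            *guard = SourceFileLines::Lines(decoded);
        }
        match &*guard {
            SourceFileLines::Lines(lines) => f(lines.as_slice()),
            SourceFileLines::Diffs(_) => unreachable!("decoded above"),
        }
    }
}

fn main() {
    // Line starts 0, 10, 25 stored as deltas 0, 10, 15; decoded only on demand.
    let file = SourceFile::from_diffs(vec![0, 10, 15]);
    file.lines(|lines| assert_eq!(lines, &[0, 10, 25]));
}
```
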
bors
0f06824013 Auto merge of #97287 - compiler-errors:type-interner, r=jackh726,oli-obk
Move things to `rustc_type_ir`

Finishes some work proposed in https://github.com/rust-lang/compiler-team/issues/341.

r? `@ghost`
2022-05-29 08:20:13 +00:00
Michael Goulet
4638915940 Make TyCtxt implement Interner, make HashStable generic and move to rustc_type_ir 2022-05-28 12:16:05 -07:00
Michael Goulet
a056a953f0 Initial fixes on top of type interner commit 2022-05-28 11:38:22 -07:00
Wilco Kusee
a7015fe816 Move things to rustc_type_ir 2022-05-28 11:38:22 -07:00
Josh Stone
ab57e36268 Update to rebased rustc-rayon 0.4 2022-05-27 20:20:41 -07:00
bors
4f372b14de Auto merge of #97239 - jhpratt:remove-crate-vis, r=joshtriplett
Remove `crate` visibility modifier

FCP to remove this syntax is just about complete in #53120. Once it completes, this should be merged ASAP to avoid merge conflicts.

The first two commits remove usage of the feature in this repository, while the last removes the feature itself.
2022-05-21 06:38:49 +00:00
Jacob Pratt
49c82f31a8
Remove crate visibility usage in compiler 2022-05-20 20:04:54 -04:00
Camille GILLOT
9900ea352b Cache more queries on disk. 2022-05-13 08:06:48 +02:00
xFrednet
2c5e85249f
Move lint expectation checking into a separate query (RFC 2383) 2022-05-08 14:37:14 +02:00
Oli Scherer
0d5a738b8b Enable tracing for all queries 2022-05-04 16:15:26 +00:00
b-naber
28af967bb9 implement (as of now still unused) query for valtree -> constvalue conversion 2022-04-21 16:37:24 +02:00
Camille GILLOT
07ee031763 Stop using CRATE_DEF_INDEX.
`CRATE_DEF_ID` and `CrateNum::as_def_id` are almost always what we want.
2022-04-17 12:14:42 +02:00
lcnr
bef6f3e895 rework implementation for inherent impls for builtin types 2022-03-30 11:23:58 +02:00
klensy
008fc79dcd Propagate the parallel_compiler feature through rustc crates. With the feature turned off, the number of built crates drops from 238 to 224. 2022-03-28 08:41:12 +03:00
bors
3b1fe7e7c9 Auto merge of #94084 - Mark-Simulacrum:drop-sharded, r=cjgillot
Avoid query cache sharding code in single-threaded mode

In non-parallel compilers, this just adds needless overhead at compilation time (since there is statically only one shard anyway). Removing it amounts to a roughly 10-second reduction in bootstrap time, with overall neutral (some wins, some losses) performance results.

Parallel compiler performance should be largely unaffected by this PR; sharding is kept there.
2022-02-27 14:04:07 +00:00
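
A rough sketch of the idea, assuming the `parallel_compiler` cfg that the compiler crates use; the map type, shard count, and method names here are illustrative, not the compiler's actual sharded cache type. With the cfg off, the map collapses to a single unsynchronized table, so no shard-selection hashing or locking happens at all.

```
use std::cell::RefCell;
use std::collections::HashMap;
use std::hash::Hash;

/// Hypothetical query cache map: several lock-protected shards when built with
/// `--cfg parallel_compiler`, a single plain map otherwise.
struct ShardedMap<K, V> {
    #[cfg(parallel_compiler)]
    shards: Vec<std::sync::Mutex<HashMap<K, V>>>,
    #[cfg(not(parallel_compiler))]
    map: RefCell<HashMap<K, V>>,
}

impl<K: Hash + Eq, V> ShardedMap<K, V> {
    #[cfg(not(parallel_compiler))]
    fn new() -> Self {
        Self { map: RefCell::new(HashMap::new()) }
    }

    #[cfg(parallel_compiler)]
    fn new() -> Self {
        Self { shards: (0..32).map(|_| std::sync::Mutex::new(HashMap::new())).collect() }
    }

    #[cfg(not(parallel_compiler))]
    fn insert(&self, key: K, value: V) {
        // Single-threaded: no extra hash to pick a shard, no lock.
        self.map.borrow_mut().insert(key, value);
    }

    #[cfg(parallel_compiler)]
    fn insert(&self, key: K, value: V) {
        // Parallel: hash the key to pick a shard, then lock only that shard.
        use std::hash::Hasher;
        let mut hasher = std::collections::hash_map::DefaultHasher::new();
        key.hash(&mut hasher);
        let shard = (hasher.finish() as usize) % self.shards.len();
        self.shards[shard].lock().unwrap().insert(key, value);
    }
}

fn main() {
    let cache: ShardedMap<u32, &str> = ShardedMap::new();
    cache.insert(1, "cached query result");
}
```
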
Mark Rousskov
22c3a71de1 Switch bootstrap cfgs 2022-02-25 08:00:52 -05:00
bors
026d8ce7f5 Auto merge of #94066 - Mark-Simulacrum:factor-out-simple-def-kind, r=davidtwco
Remove SimpleDefKind

Now that rustc_query_system depends on rustc_hir, we can just directly make use of the regular DefKind.
2022-02-21 03:36:55 +00:00
Mark Rousskov
75ef068920 Delete QueryLookup
This was largely just caching the shard value at this point, which is not
particularly useful -- at the use sites the key was being hashed nearby anyway.
2022-02-20 12:11:28 -05:00
Mark Rousskov
9deed6f74e Move Sharded maps into each QueryCache impl 2022-02-20 12:10:46 -05:00
Mark Rousskov
ddda851fd5 Remove SimpleDefKind 2022-02-17 18:08:45 -05:00
Mark Rousskov
9763486034 Move ty::print methods to Drop-based scope guards 2022-02-16 17:24:23 -05:00
Nicholas Nethercote
a95fb8b150 Overhaul Const.
Specifically, rename the `Const` struct as `ConstS` and re-introduce `Const` as
this:
```
pub struct Const<'tcx>(&'tcx Interned<ConstS>);
```
This now matches `Ty` and `Predicate` more closely, including using
pointer-based `eq` and `hash`.

Notable changes:
- `mk_const` now takes a `ConstS`.
- `Const` was `Copy`, despite being 48 bytes. Now `ConstS` is not, so we need a
  separate arena for it, because we can't use the `Dropless` one any more.
- Many `&'tcx Const<'tcx>`/`&Const<'tcx>` to `Const<'tcx>` changes
- Many `ct.ty` to `ct.ty()` and `ct.val` to `ct.val()` changes.
- Lots of tedious sigil fiddling.
2022-02-15 16:19:59 +11:00
Nicholas Nethercote
e9a0c429c5 Overhaul TyS and Ty.
Specifically, change `Ty` from this:
```
pub type Ty<'tcx> = &'tcx TyS<'tcx>;
```
to this:
```
pub struct Ty<'tcx>(Interned<'tcx, TyS<'tcx>>);
```
There are two benefits to this.
- It's now a first class type, so we can define methods on it. This
  means we can move a lot of methods away from `TyS`, leaving `TyS` as a
  barely-used type, which is appropriate given that it's not meant to
  be used directly.
- The uniqueness requirement is now explicit, via the `Interned` type.
  E.g. the pointer-based `Eq` and `Hash` comes from `Interned`, rather
  than via `TyS`, which wasn't obvious at all.

Much of this commit is boring churn. The interesting changes are in
these files:
- compiler/rustc_middle/src/arena.rs
- compiler/rustc_middle/src/mir/visit.rs
- compiler/rustc_middle/src/ty/context.rs
- compiler/rustc_middle/src/ty/mod.rs

Specifically:
- Most mentions of `TyS` are removed. It's very much a dumb struct now;
  `Ty` has all the smarts.
- `TyS` now has `crate` visibility instead of `pub`.
- `TyS::make_for_test` is removed in favour of the static `BOOL_TY`,
  which just works better with the new structure.
- The `Eq`/`Ord`/`Hash` impls are removed from `TyS`. `Interned`s impls
  of `Eq`/`Hash` now suffice. `Ord` is now partly on `Interned`
  (pointer-based, for the `Equal` case) and partly on `TyS`
  (contents-based, for the other cases).
- There are many tedious sigil adjustments, i.e. adding or removing `*`
  or `&`. They seem to be unavoidable.
2022-02-15 16:03:24 +11:00
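
The pointer-based `eq` and `hash` mentioned in the two overhaul commits above can be sketched with a hand-rolled wrapper; this illustrates the pattern only and is not the compiler's actual `Interned` type or interner.

```
use std::hash::{Hash, Hasher};

/// Wrapper around a reference to an interned value. Because the interner
/// guarantees one allocation per distinct value, equality and hashing can
/// use the pointer rather than the contents.
struct Interned<'a, T>(&'a T);

impl<'a, T> PartialEq for Interned<'a, T> {
    fn eq(&self, other: &Self) -> bool {
        // Pointer comparison: sound only because values are interned.
        std::ptr::eq(self.0, other.0)
    }
}

impl<'a, T> Eq for Interned<'a, T> {}

impl<'a, T> Hash for Interned<'a, T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Hash the address, not the (possibly large) contents.
        (self.0 as *const T).hash(state);
    }
}

/// Stand-in for the payload struct (`TyS` / `ConstS` in the compiler).
struct TyS {
    name: &'static str,
}

/// First-class newtype over the interned payload, so methods live here
/// rather than on `TyS`.
#[derive(PartialEq, Eq, Hash)]
struct Ty<'a>(Interned<'a, TyS>);

impl<'a> Ty<'a> {
    fn name(self) -> &'static str {
        let Interned(payload) = self.0;
        payload.name
    }
}

fn main() {
    // A real interner would deduplicate; a leaked box stands in for it here.
    let payload: &'static TyS = Box::leak(Box::new(TyS { name: "bool" }));
    let a = Ty(Interned(payload));
    let b = Ty(Interned(payload));
    assert!(a == b); // same allocation, so pointer-equal
    println!("{}", a.name());
}
```
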
bors
e7aca89598 Auto merge of #93741 - Mark-Simulacrum:global-job-id, r=cjgillot
Refactor query system to maintain a global job id counter

This replaces the per-shard counters with a single global counter, simplifying
the JobId struct down to just a u64 and removing the need to pipe a DepKind
generic through a bunch of code. The performance implications on non-parallel
compilers are likely minimal (this switches to `Cell<u64>` as the backing
storage over a `u64`, but the latter was already inside a `RefCell` so it's not
really a significant divergence). On parallel compilers, the cost of a single
global u64 counter may be more significant: it adds a serialization point in
theory. On the other hand, we can imagine changing the counter to have a
thread-local component if it becomes worrisome or some similar structure.

The new design is sufficiently simpler that it warrants the potential for slight
changes down the line if/when we get parallel compilation to be more of a
default.

A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid
possibly overflowing it and causing problems; it is effectively impossible that
we would overflow a u64 counter in this context.
2022-02-09 18:54:30 +00:00
bors
9747ee4755 Auto merge of #93724 - Mark-Simulacrum:drop-query-stats, r=michaelwoerister
Delete -Zquery-stats infrastructure

These statistics are computable from the self-profile data and/or ad-hoc collectable as needed, and in the meantime contribute to rustc bootstrap times -- locally, this PR shaves ~2.5% from rustc_query_impl builds in instruction counts.

If this does lose some functionality we want to keep, I think we should migrate it to self-profile (or a similar interface) rather than this ad-hoc reporting.
2022-02-09 15:53:10 +00:00
Mark Rousskov
e240783a4d Switch QueryJobId to a single global counter
This replaces the per-shard counters with a single global counter, simplifying
the JobId struct down to just a u64 and removing the need to pipe a DepKind
generic through a bunch of code. The performance implications on non-parallel
compilers are likely minimal (this switches to `Cell<u64>` as the backing
storage over a `u64`, but the latter was already inside a `RefCell` so it's not
really a significant divergence). On parallel compilers, the cost of a single
global u64 counter may be more significant: it adds a serialization point in
theory. On the other hand, we can imagine changing the counter to have a
thread-local component if it becomes worrisome or some similar structure.

The new design is sufficiently simpler that it warrants the potential for slight
changes down the line if/when we get parallel compilation to be more of a
default.

A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid
possibly overflowing it and causing problems; it is effectively impossible that
we would overflow a u64 counter in this context.
2022-02-08 18:49:55 -05:00
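
A minimal sketch of the single global counter in the non-parallel configuration; `QueryJobId` is named in the commit, but its exact definition and the counter type here are assumptions for illustration.

```
use std::cell::Cell;
use std::num::NonZeroU64;

/// Globally unique query job id: just a wrapper around a non-zero u64.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct QueryJobId(NonZeroU64);

/// Single global counter, non-parallel configuration: a `Cell<u64>` suffices
/// because there is only one thread; a parallel build would need an atomic
/// or a thread-local component instead.
struct JobIdCounter {
    next: Cell<u64>,
}

impl JobIdCounter {
    fn new() -> Self {
        // Start at 1 so ids fit in a NonZeroU64 (0 is never handed out).
        JobIdCounter { next: Cell::new(1) }
    }

    fn next_job_id(&self) -> QueryJobId {
        let id = self.next.get();
        // A u64 will not realistically overflow over a compilation session.
        self.next.set(id + 1);
        QueryJobId(NonZeroU64::new(id).unwrap())
    }
}

fn main() {
    let counter = JobIdCounter::new();
    let a = counter.next_job_id();
    let b = counter.next_job_id();
    assert_ne!(a, b);
    println!("{a:?} {b:?}");
}
```
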
klensy
eb3b29fd09 14956 -> 14952 exports 2022-02-08 00:13:31 +03:00