993 Commits

Author SHA1 Message Date
Michael Woerister
6a5e2a5a9e incr.comp.: Hash more pieces of crate metadata to detect changes there. 2017-05-08 12:31:26 +02:00
Michael Wu
cc4efd1370 Add support for Hexagon v60 HVX intrinsics 2017-05-07 15:07:36 -04:00
Niko Matsakis
25be7983e8 remove mir_passes from Session and add a FIXME 2017-05-02 16:21:58 -04:00
Niko Matsakis
afc5acd84b fix librustc_driver 2017-05-02 16:21:58 -04:00
Niko Matsakis
2fa1ba3e7e pacify the merciless tidy 2017-05-02 16:21:56 -04:00
Niko Matsakis
1dd9c3e52a support inlining by asking for optimized mir for callees
I tested this with it enabled 100% of the time, and we were able to run
mir-opt tests successfully.
2017-05-02 16:21:56 -04:00
Niko Matsakis
c253df5249 remove Pass and (temporarily) drop Inline 2017-05-02 16:21:55 -04:00
Niko Matsakis
c1ff10464d rename mir_map to queries and remove build_mir_for_crate 2017-05-02 14:01:37 -04:00
Niko Matsakis
a26e966307 convert the inline pass to use the new multi result
This involves changing various details about that system,
though the basic shape remains the same.
2017-05-02 14:01:36 -04:00
Niko Matsakis
2b32cb90c7 retool MIR passes completely
The new setup is as follows. There is a pipeline of MIR passes that each
run **per def-id** to optimize a particular function. You are intended
to request MIR at whatever stage you need it. At the moment, there is
only one stage you can request:

- `optimized_mir(def_id)`

This yields the final product. Internally, it pulls the MIR for the
given def-id through a series of steps. Right now, these are still using
an "interned ref-cell" but they are intended to "steal" from one
another:

- `mir_build` -- performs the initial construction for local MIR
- `mir_pass_set` -- performs a suite of optimizations and transformations
- `mir_pass` -- an individual optimization within a suite

So, to construct the optimized MIR, we invoke:

    mir_pass_set((MIR_OPTIMIZED, def_id))

which will build up the final MIR.
2017-05-02 14:01:01 -04:00
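
A minimal standalone sketch of the per-def-id, on-demand shape described above. This is plain Rust with hypothetical names (`Queries`, `DefId`, and `Mir` here are stand-ins, not the rustc-internal types): callers only ever ask for `optimized_mir(def_id)`, and the provider pulls that def-id's MIR through the build and pass steps internally, memoizing the result.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for compiler types, just to model the shape.
type DefId = u32;

#[derive(Clone, Debug)]
struct Mir {
    steps: Vec<String>,
}

#[derive(Default)]
struct Queries {
    optimized: HashMap<DefId, Mir>,
}

impl Queries {
    // mir_build: initial construction of local MIR for one def-id.
    fn mir_build(&self, def_id: DefId) -> Mir {
        Mir { steps: vec![format!("built {def_id}")] }
    }

    // mir_pass_set: run a suite of passes over the MIR for one def-id.
    fn mir_pass_set(&self, mut mir: Mir) -> Mir {
        for pass in ["simplify", "inline"] {
            mir.steps.push(format!("ran {pass}"));
        }
        mir
    }

    // optimized_mir: the only stage callers request; memoized per def-id.
    fn optimized_mir(&mut self, def_id: DefId) -> &Mir {
        if !self.optimized.contains_key(&def_id) {
            let mir = self.mir_pass_set(self.mir_build(def_id));
            self.optimized.insert(def_id, mir);
        }
        &self.optimized[&def_id]
    }
}

fn main() {
    let mut queries = Queries::default();
    println!("{:?}", queries.optimized_mir(7));
}
```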
Niko Matsakis
f23a7bc98a move to only def-id passes
this temporarily disables `inline`
2017-05-02 14:01:01 -04:00
Niko Matsakis
668886a6cc rewrite Passes to have sets of passes
Also, store the completed set of passes in the tcx.
2017-05-02 14:01:01 -04:00
Niko Matsakis
e9e6ccc042 introduce DefIdPass and remove all impls of Pass but Inline 2017-05-02 14:01:01 -04:00
Niko Matsakis
46b342fbc0 simplify the MirPass traits and passes dramatically
Overall goal: reduce the amount of context a mir pass needs so that it
resembles a query.

- The hooks are no longer "threaded down" to the pass, but rather run
  automatically from the top-level (we also thread down the current pass
  number, so that the files are sorted better).
  - The hook now receives a *single* callback, rather than a callback per-MIR.
- The traits no longer have lifetime parameters; these moved to the
  methods -- given that we required `for<'tcx>` objects, there wasn't
  much point to that.
- Several passes now store a `String` instead of a `&'l str` (again, no
  point).
2017-05-02 14:01:01 -04:00
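
A standalone model of the "pass resembles a query" goal described above, with hypothetical `Pass`/`Body` names rather than rustc's actual `MirPass` trait: no lifetime parameters on the trait, owned `String` data instead of borrowed strings, and all context handed in through the method.

```rust
// A "stateless" pass: nothing borrowed is stored in the pass itself.
struct Body {
    statements: Vec<String>,
}

trait Pass {
    // Any name the pass carries is owned (String), not borrowed (&'l str).
    fn name(&self) -> String;
    // All context arrives through the method, not through the trait.
    fn run_pass(&self, body: &mut Body);
}

struct RemoveNops;

impl Pass for RemoveNops {
    fn name(&self) -> String {
        "remove-nops".to_string()
    }
    fn run_pass(&self, body: &mut Body) {
        body.statements.retain(|s| s != "nop");
    }
}

fn main() {
    let mut body = Body { statements: vec!["assign".into(), "nop".into()] };
    RemoveNops.run_pass(&mut body);
    assert_eq!(body.statements, vec!["assign".to_string()]);
}
```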
Niko Matsakis
11b6b0663a rework MirPass API to be stateless and extract helper fns 2017-05-02 14:01:00 -04:00
Niko Matsakis
0e5e2f3634 introduce mir_keys()
Each MIR key is a DefId that has MIR associated with it
2017-05-02 14:01:00 -04:00
Corey Farwell
e0bfd19add Rollup merge of #41693 - est31:anon_params_removal, r=eddyb
Removal pass for anonymous parameters

Removes occurrences of anonymous parameters from the
rustc codebase, as they are to be deprecated.

See issue #41686 and RFC 1685.

r? @frewsxcv
2017-05-02 09:09:59 -04:00
est31
d290849a23 Removal pass for anonymous parameters
Removes occurrences of anonymous parameters from the
rustc codebase, as they are to be deprecated.

See issue #41686 and RFC 1685.
2017-05-02 05:55:20 +02:00
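
For reference, an anonymous parameter is a trait-method argument declared by type only; the removal pass rewrites such signatures so the parameter is named. A minimal illustration (not code from this commit; `Store`/`insert`/`key` are made-up names):

```rust
trait Store {
    // Before (allowed in Rust 2015, deprecated by RFC 1685): the parameter
    // had a type but no name:
    //     fn insert(u32);

    // After the removal pass: the parameter is named (`key` is illustrative).
    fn insert(key: u32);
}

fn main() {}
```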
Niko Matsakis
c008f0540e patch the librustc_driver unit tests 2017-05-01 20:11:36 -04:00
Taylor Cramer
73cd9bde37 introduce per-fn RegionMaps
Instead of requesting the region maps for the entire crate, request for
a given item etc. Several bits of code were modified to take
`&RegionMaps` as input (e.g., the `resolve_regions_and_report_errors()`
function). I am not totally happy with this setup -- I *think* I'd
rather have the region maps be part of typeck tables -- but at least the
`RegionMaps` works in a "parallel" way to `FreeRegionMap`, so it's not
too bad. Given that I expect a lot of this code to go away with NLL, I
didn't want to invest *too* much energy tweaking it.
2017-04-30 17:03:30 -04:00
Niko Matsakis
c7dc39dbf0 intern CodeExtents
Make a `CodeExtent<'tcx>` be something allocated in an arena
instead of an index into the `RegionMaps`.
2017-04-30 17:02:59 -04:00
Taylor Cramer
eff39b73d1 On-demandify region mapping 2017-04-30 17:02:56 -04:00
bors
2971d491b9 Auto merge of #41508 - michaelwoerister:generic-path-remapping, r=alexcrichton
Implement a file-path remapping feature in support of debuginfo and reproducible builds

This PR adds the `-Zremap-path-prefix-from`/`-Zremap-path-prefix-to` commandline option pair and is a more general implementation of #41419. As opposed to the previous attempt, this implementation should enable reproducible builds regardless of the working directory of the compiler.

This implementation of the feature is more general in the sense that the re-mapping will affect *all* paths the compiler emits, including the ones in error messages.

r? @alexcrichton
2017-04-28 12:09:37 +00:00
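
A rough standalone model of what the `from`/`to` pair does to an emitted path (illustrative only; the actual option is applied inside the compiler to every path it emits, including those in error messages):

```rust
// Replace a leading `from` prefix with `to`; paths outside the prefix are untouched.
fn remap_path_prefix(path: &str, from: &str, to: &str) -> String {
    match path.strip_prefix(from) {
        Some(rest) => format!("{to}{rest}"),
        None => path.to_string(),
    }
}

fn main() {
    // E.g. hide the local checkout directory behind a stable name so that
    // debuginfo does not differ between build machines.
    let remapped = remap_path_prefix("/home/user/project/src/lib.rs",
                                     "/home/user/project", "/build");
    assert_eq!(remapped, "/build/src/lib.rs");
}
```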
Ariel Ben-Yehuda
a517343566 cache symbol names in ty::maps
this fixes a performance regression introduced in commit
39a58c38a0b9ac9e822a1732f073abe8ddf65cfb.
2017-04-26 17:45:02 +03:00
Michael Woerister
39ffea31df Implement a file-path remapping feature in support of debuginfo and reproducible builds. 2017-04-26 15:44:02 +02:00
Ariel Ben-Yehuda
ece6c8434b cache attributes of items from foreign crates
this avoids parsing item attributes on each call to `item_attrs`, which takes
off 33% (!) of translation time and 50% (!) of trans-item collection time.
2017-04-22 21:00:50 +03:00
Ariel Ben-Yehuda
e1377a4f47 avoid calling mk_region unnecessarily
this improves typeck & trans performance by 1%. This looked hotter on
callgrind than it is on a CPU.
2017-04-22 21:00:50 +03:00
Niko Matsakis
e48a759f51 sort provide 2017-04-21 17:26:53 -04:00
Niko Matsakis
264f237f98 make reachable_set ref-counted
Once it is computed, no need to deep clone the set.
2017-04-21 17:26:53 -04:00
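
A small illustration of the change using std types (`Rc` stands in for whatever ref-counted pointer the compiler uses; the set contents are made up): the set is computed once, and later users clone the handle rather than the data.

```rust
use std::collections::HashSet;
use std::rc::Rc;

// Computed once; callers share the result by bumping a reference count.
fn compute_reachable_set() -> Rc<HashSet<u32>> {
    Rc::new([1, 2, 3].into_iter().collect())
}

fn main() {
    let reachable = compute_reachable_set();
    let another_handle = Rc::clone(&reachable); // cheap: no deep clone of the set
    assert!(another_handle.contains(&2));
}
```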
Niko Matsakis
93e10977d8 propagate other obligations that were left out
cc #32730 -- I left exactly one instance where I wasn't sure of the
right behavior.
2017-04-19 07:20:36 -04:00
Niko Matsakis
8388772f42 kill a bunch of one off tasks 2017-04-18 08:20:12 -04:00
Eduard-Mihai Burtescu
6dc21b71cf rustc: use monomorphic const_eval for cross-crate enum discriminants. 2017-04-16 01:31:37 +03:00
Eduard-Mihai Burtescu
63064ec190 rustc: expose monomorphic const_eval through on-demand. 2017-04-16 01:31:06 +03:00
Eduard-Mihai Burtescu
2a17b84cbc rustc: provide adt_sized_constraint as an on-demand query. 2017-04-15 15:40:38 +03:00
Corey Farwell
e6f6b445aa Rollup merge of #40702 - mrhota:global_asm, r=nagisa
Implement global_asm!() (RFC 1548)

This is a first attempt. ~~One (potential) problem I haven't solved is how to handle multiple usages of `global_asm!` in a module/crate. It looks like `LLVMSetModuleInlineAsm` overwrites module asm, and `LLVMAppendModuleInlineAsm` is not provided in LLVM C headers 😦~~

I can provide more detail as needed, but honestly, there's not a lot going on here.

r? @eddyb

CC @Amanieu @jackpot51

Tracking issue: #35119
2017-04-14 17:41:03 -04:00
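
For context, a minimal usage sketch of the feature as it looked around this time (nightly-only, behind `#![feature(global_asm)]`; the assembly body is an arbitrary x86-64 Linux example, not taken from the PR):

```rust
#![feature(global_asm)]

// Module-level assembly: defines a symbol outside any Rust function body.
global_asm!(r#"
    .balign 8
    .globl MY_CONSTANT
MY_CONSTANT:
    .quad 42
"#);

extern "C" {
    static MY_CONSTANT: u64;
}

fn main() {
    // Safe only because the assembly above really does define `MY_CONSTANT`.
    unsafe { assert_eq!(MY_CONSTANT, 42); }
}
```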
Niko Matsakis
c22fdf9a3a use tcx.crate_name(LOCAL_CRATE) rather than LinkMeta::crate_name 2017-04-13 18:37:47 -04:00
A.J. Gardner
da0742c070 Add global_asm tests 2017-04-12 19:12:50 -05:00
Tim Neumann
1dd9801fa5 Rollup merge of #41232 - arielb1:mir-rvalues, r=eddyb
move rvalue checking to MIR
2017-04-12 14:45:47 +02:00
Tim Neumann
1b006b78a6 Rollup merge of #41141 - michaelwoerister:direct-metadata-ich-final, r=nikomatsakis
ICH: Replace old, transitive metadata hashing with direct hashing approach.

This PR replaces the old crate metadata hashing strategy with a new one that directly (but stably) hashes all values we encode into the metadata. Previously we would track what data got accessed during metadata encoding and then hash the input nodes (HIR and upstream metadata) that were transitively reachable from the accessed data. While this strategy was sound, it had two major downsides:

1. It was susceptible to generating false positives, i.e. some input node might have changed without actually affecting the content of the metadata. That metadata entry would still show up as changed.
2. It was susceptible to quadratic blow-up when many metadata nodes shared the same input nodes, which would then get hashed over and over again.

The new method does not have these disadvantages and it's also a first step towards caching more intermediate results in the compiler.

Metadata hashing/cross-crate incremental compilation is still kept behind the `-Zincremental-cc` flag even after this PR. Once the new method has proven itself with more tests, we can remove the flag and enable cross-crate support by default again.

r? @nikomatsakis
cc @rust-lang/compiler
2017-04-12 14:45:42 +02:00
Tim Neumann
49082ae9f2 Rollup merge of #41063 - nikomatsakis:issue-40746-always-exec-loops, r=eddyb
remove unnecessary tasks

Remove various unnecessary tasks. All of these are "always execute" tasks that don't do any writes to tracked state (or else an assert would trigger, anyhow). In some cases, they issue lints or errors, but we'll deal with that -- and anyway side-effects outside of a task don't cause problems for anything that I can see.

The one non-trivial refactoring here is the borrowck conversion, which adds the requirement to go from a `DefId` to a `BodyId`. I tried to make a useful helper here.

r? @eddyb

cc #40746
cc @cramertj @michaelwoerister
2017-04-12 14:45:40 +02:00
Michael Woerister
ca2dce9b48 ICH: Replace old, transitive metadata hashing with direct hashing approach.
Instead of collecting all potential inputs to some metadata entry and
hashing those, we directly hash the values we are storing in metadata.
This is more accurate and doesn't suffer from quadratic blow-up when
many entries have the same dependencies.
2017-04-12 11:47:26 +02:00
Ariel Ben-Yehuda
0144613078 Move rvalue checking to MIR
Fixes #41139.
2017-04-11 23:53:20 +03:00
Simonas Kazlauskas
e18c59fd48 Fix some nits 2017-04-11 16:06:30 +03:00
Austin Hicks
63ebf08be5 Initial attempt at implementing optimization fuel and re-enabling struct field reordering. 2017-04-11 14:36:05 +03:00
Corey Farwell
88e97f0541 Rollup merge of #41056 - michaelwoerister:central-defpath-hashes, r=nikomatsakis
Handle DefPath hashing centrally as part of DefPathTable (+ save work during SVH calculation)

In almost all cases where we construct a `DefPath`, we just hash it and throw it away again immediately.
With this PR, the compiler will immediately compute and store the hash for each `DefPath` as it is allocated. This way we
+ can get rid of any subsequent `DefPath` hash caching (e.g. the `DefPathHashes`),
+ don't need to allocate a transient `Vec` for holding the `DefPath` (although I'm always surprised how little these small, dynamic allocations seem to hurt performance), and
+ we don't hash `DefPath` prefixes over and over again.

That last part is because we construct the hash for `prefix::foo` by hashing `(hash(prefix), foo)` instead of hashing every component of prefix.

The last commit of this PR is pretty neat, I think:
```
The SVH (Strict Version Hash) of a crate is currently computed
by hashing the ICHes (Incremental Computation Hashes) of the
crate's HIR. This is fine, except that for incr. comp. we compute
two ICH values for each HIR item, one for the complete item and
one that just includes the item's interface. The two hashes are
needed for dependency tracking but if we are compiling
non-incrementally and just need the ICH values for the SVH,
one of them is enough, giving us the opportunity to save some
work in this case.
```

r? @nikomatsakis

This PR depends on https://github.com/rust-lang/rust/pull/40878 to be merged first (you can ignore the first commit for reviewing, that's just https://github.com/rust-lang/rust/pull/40878).
2017-04-07 09:20:05 -04:00
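
A standalone sketch of the prefix-hashing point made in the entry above, using the std hasher and made-up path components: the hash of `prefix::foo` is derived from the already-computed hash of `prefix` plus the new component, so prefixes are never re-hashed.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Combine a parent path's hash with one new component; hashing `prefix::foo`
// then costs O(1) instead of re-hashing every component of the prefix.
fn child_path_hash(parent_hash: u64, component: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    parent_hash.hash(&mut hasher);
    component.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let krate = child_path_hash(0, "std");
    let module = child_path_hash(krate, "vec");
    let item = child_path_hash(module, "Vec");
    println!("{item:x}");
}
```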
Michael Woerister
edc1ac3016 ICH: Centrally compute and cache DefPath hashes as part of DefPathTable. 2017-04-07 14:36:51 +02:00
bors
4c59c92bc4 Auto merge of #40873 - cramertj:on-demandify-queries, r=nikomatsakis
On demandify reachability

cc https://github.com/rust-lang/rust/issues/40746

I tried following this guidance from #40746:
> The following tasks currently execute before a tcx is built, but they could be easily converted into queries that are requested after tcx is built. The main reason they are the way they are was to avoid a gratuitious refcell (but using the refcell map seems fine)...

but the result of moving `region_maps` out of `TyCtxt` and into a query caused a lot of churn, and seems like it could potentially result in a rather large performance hit, since it means a dep-graph lookup on every use of `region_maps` (rather than just a field access). Possibly `TyCtxt` could store a `RefCell<Option<RegionMap>>` internally and use that to prevent repeat lookups, but that feels like it's duplicating the work of the dep-graph. @nikomatsakis What did you have in mind for this?
2017-04-07 08:36:11 +00:00
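
The `RefCell<Option<RegionMaps>>` idea floated above is the usual interior-mutability memoization pattern; a minimal standalone sketch with hypothetical `Ctxt`/`RegionMaps` types (not the real `TyCtxt`):

```rust
use std::cell::RefCell;

#[derive(Clone, Debug)]
struct RegionMaps {
    entries: usize,
}

#[derive(Default)]
struct Ctxt {
    region_maps: RefCell<Option<RegionMaps>>,
}

impl Ctxt {
    // Compute on first access, cache inside the RefCell, and hand out a clone
    // so the expensive lookup is not repeated on every use.
    fn region_maps(&self) -> RegionMaps {
        self.region_maps
            .borrow_mut()
            .get_or_insert_with(|| RegionMaps { entries: 42 })
            .clone()
    }
}

fn main() {
    let cx = Ctxt::default();
    println!("{:?}", cx.region_maps());
    println!("{:?}", cx.region_maps()); // second call hits the cache
}
```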
Simonas Kazlauskas
201b1a9032 Properly adjust filenames when multiple emissions
Fixes #40993
2017-04-05 19:02:25 +03:00
Niko Matsakis
a4d7c1fec3 push borrowck into its own task 2017-04-04 12:06:35 -04:00
Taylor Cramer
aab2cca046 On-demandify reachability 2017-04-04 07:46:18 -07:00