120 Commits

Author SHA1 Message Date
Michael Woerister
5b093ebab2 Make names of types used in LLVM IR stable.
Before this PR, type names could depend on the cratenum being used
for a given crate and also on the source location of closures.
Both are undesirable for incremental compilation, where we cache
LLVM IR and don't want it to depend on formatting or on the order
in which crates are loaded.
2016-11-13 19:49:46 -05:00
Eduard-Mihai Burtescu
f19f939994 Rollup merge of #37551 - Mark-Simulacrum:upgrade-accvec, r=eddyb
Replace syntax's SmallVector with AccumulateVec

This adds a new type to data_structures, `SmallVec`, which wraps `AccumulateVec` with support for re-allocating onto the heap (`SmallVec::reserve`). `SmallVec` is then used to replace the implementation of `SmallVector` in libsyntax.

r? @eddyb

Fixes #37371. Using `SmallVec` instead of libsyntax's `SmallVector` will provide the `N = 2/4` case easily (just needs a few more `Array` impls).

cc @nnethercote, probably interested in this area
2016-11-12 10:38:38 +02:00
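For context on the `SmallVec` described in the entry above, here is a minimal, self-contained sketch of the idea; the names, the fixed inline capacity of 8, and the use of `Option` slots are inventions for illustration, not the rustc implementation. Values stay in a fixed inline buffer until it fills up, at which point they are moved onto a heap-allocated `Vec`, analogous to what `SmallVec::reserve` enables.

```
enum SketchSmallVec<T> {
    Inline { buf: [Option<T>; 8], len: usize },
    Heap(Vec<T>),
}

impl<T> SketchSmallVec<T> {
    fn new() -> Self {
        SketchSmallVec::Inline { buf: Default::default(), len: 0 }
    }

    fn push(&mut self, value: T) {
        // Spill to the heap first if the inline buffer is already full;
        // this is the "re-allocating onto the heap" step.
        if let SketchSmallVec::Inline { buf, len } = self {
            if *len == buf.len() {
                let spilled: Vec<T> = buf.iter_mut().filter_map(Option::take).collect();
                *self = SketchSmallVec::Heap(spilled);
            }
        }
        match self {
            SketchSmallVec::Inline { buf, len } => {
                buf[*len] = Some(value);
                *len += 1;
            }
            SketchSmallVec::Heap(v) => v.push(value),
        }
    }

    fn len(&self) -> usize {
        match self {
            SketchSmallVec::Inline { len, .. } => *len,
            SketchSmallVec::Heap(v) => v.len(),
        }
    }
}

fn main() {
    let mut v = SketchSmallVec::new();
    for i in 0..20 {
        v.push(i); // first 8 pushes stay inline, the rest live on the heap
    }
    assert_eq!(v.len(), 20);
}
```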
Eduard-Mihai Burtescu
c3ab57c99e Rollup merge of #37535 - Havvy:graph, r=eddyb
Graph Changes

General cleanup and adding a few methods that I want to use in Clippy.

Need somebody to bikeshed names.
2016-11-12 10:38:38 +02:00
Mark-Simulacrum
7bbebb1f54 Change implementation of syntax::util::SmallVector to use data_structures::SmallVec. 2016-11-11 07:38:48 -07:00
Nicholas Nethercote
00e48affde Replace FnvHasher use with FxHasher.
This speeds up compilation by 3-6% across most of rustc-benchmarks.
2016-11-08 15:14:59 +11:00
Nicholas Nethercote
eca1cc957f Add FxHasher, a faster alternative to FnvHasher. 2016-11-08 15:14:00 +11:00
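As a rough illustration of why an FxHasher-style hash is so much cheaper than FnvHasher or SipHash for the small keys rustc hashes, here is a self-contained hasher in the same style. The struct and alias names are invented for this sketch; the rotate-xor-multiply step and the 64-bit constant follow the published FxHash scheme, but this is not the rustc source. The trade-off is that it is neither cryptographic nor DoS-resistant, which is acceptable inside the compiler.

```
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

#[derive(Default)]
struct SketchFxHasher {
    hash: u64,
}

impl SketchFxHasher {
    // Large odd constant used by FxHash on 64-bit platforms.
    const SEED: u64 = 0x517c_c1b7_2722_0a95;

    fn add(&mut self, word: u64) {
        // One rotate, one xor, one wrapping multiply per word: far less
        // work than SipHash's rounds or FNV's byte-at-a-time loop.
        self.hash = (self.hash.rotate_left(5) ^ word).wrapping_mul(Self::SEED);
    }
}

impl Hasher for SketchFxHasher {
    fn write(&mut self, bytes: &[u8]) {
        // Simplified: a real implementation consumes usize-sized chunks.
        for &b in bytes {
            self.add(b as u64);
        }
    }
    fn write_u64(&mut self, n: u64) {
        self.add(n);
    }
    fn finish(&self) -> u64 {
        self.hash
    }
}

type SketchFxHashMap<K, V> = HashMap<K, V, BuildHasherDefault<SketchFxHasher>>;

fn main() {
    let mut map: SketchFxHashMap<u64, &str> = SketchFxHashMap::default();
    map.insert(42, "forty-two");
    assert_eq!(map.get(&42), Some(&"forty-two"));
}
```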
bors
49c36bf16f Auto merge of #36306 - nagisa:mir-local-cleanup, r=eddyb
A way to remove otherwise unused locals from MIR

There is a certain amount of desire for a pass which cleans up provably unused variables (no assignments or reads). There has been an implementation of such a pass by @scottcarr, and two more (!) by me in my own dataflow efforts.

PRs like https://github.com/rust-lang/rust/pull/35916 prove that this pass is useful even on its own, which is why I cherry-picked it out of my dataflow effort.

@nikomatsakis previously expressed concern that this pass does not seem cheap enough to run as a regular cleanup duty. It turns out that regular cleanup of local declarations is not necessary at all, at least for now, because the majority of passes simply do not (or should not) care about them. That's why it is viable to run this pass only once per function (perhaps a few more times in the future?), right before translation.

r? @eddyb or @nikomatsakis
2016-11-03 22:58:55 -07:00
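As a toy illustration of what such a pass does, here is a sketch using an invented mini-IR, not rustc's real MIR types or the PR's actual code: count how often each local is referenced, drop declarations that are never read or written, and remap the surviving indices to stay dense.

```
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Local(usize);

struct ToyBody {
    local_decls: Vec<String>,        // one declaration per local
    statements: Vec<(Local, Local)>, // toy "assign lhs = rhs" statements
}

fn remove_unused_locals(body: &mut ToyBody) {
    // 1. Mark every local that appears in any statement.
    let mut used = vec![false; body.local_decls.len()];
    for &(lhs, rhs) in &body.statements {
        used[lhs.0] = true;
        used[rhs.0] = true;
    }

    // 2. Build a remapping from old indices to new, dense indices.
    let mut map = vec![None; body.local_decls.len()];
    let mut next = 0;
    for (old, &is_used) in used.iter().enumerate() {
        if is_used {
            map[old] = Some(next);
            next += 1;
        }
    }

    // 3. Retain only the used declarations and rewrite the statements.
    let mut idx = 0;
    body.local_decls.retain(|_| { let keep = used[idx]; idx += 1; keep });
    for (lhs, rhs) in &mut body.statements {
        lhs.0 = map[lhs.0].unwrap();
        rhs.0 = map[rhs.0].unwrap();
    }
}

fn main() {
    let mut body = ToyBody {
        local_decls: vec!["_0".into(), "_1".into(), "_2".into()],
        statements: vec![(Local(0), Local(2))], // _1 is never mentioned
    };
    remove_unused_locals(&mut body);
    assert_eq!(body.local_decls, vec!["_0", "_2"]);
}
```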
Havvy
9ddbb9133c Added Graph::is_cyclic_node algorithm 2016-11-02 21:57:36 -07:00
Simonas Kazlauskas
475236770f A way to remove otherwise unused locals from MIR
Replaces the hack where a similar thing is done within trans.
2016-11-03 06:17:01 +02:00
Havvy
7d91581cca Change comment into doc comment on Graph::iterate_until_fixed_point 2016-11-02 01:48:11 -07:00
Havvy
fcf02623ee Added general iterators for graph nodes and edges
Also used those general iterators in other methods.
2016-11-02 01:47:54 -07:00
Havvy
3d1ecc50ed Normalize generic bounds in graph iterators
Use where clauses, and only where clauses, for bounds in the
iterators for Graph.

The rest of the code uses bounds on the generic declarations for
Debug, and we may want to change those too, for consistency. I did
not do that here because I don't know whether or not that's a good
idea. But the iterators were inconsistent, causing confusion, at
least for me.
2016-11-01 20:32:02 -07:00
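For readers unfamiliar with the distinction being normalized here, the two equivalent ways of writing bounds look like this. This is a generic illustration, not the Graph iterator code itself.

```
use std::fmt::Debug;

// Bounds written on the generic parameter declarations:
fn print_nodes_inline<N: Debug>(nodes: &[N]) {
    for n in nodes {
        println!("{:?}", n);
    }
}

// The same bound expressed with a `where` clause, the style the commit
// standardizes on for Graph's iterators:
fn print_nodes_where<N>(nodes: &[N])
where
    N: Debug,
{
    for n in nodes {
        println!("{:?}", n);
    }
}

fn main() {
    print_nodes_inline(&[1, 2, 3]);
    print_nodes_where(&["a", "b"]);
}
```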
Nicholas Nethercote
7b33f7e3e7 Optimize ObligationForest's NodeState handling.
This commit partially inlines two functions, `find_cycles_from_node` and
`mark_as_waiting_from`, at two call sites in order to avoid
unnecessary function calls on hot paths.

It also fully inlines and removes `is_popped`.

These changes speed up rustc-benchmarks/inflate-0.1.0 by about 2% when
doing debug builds with a stage1 compiler.
2016-11-02 13:37:10 +11:00
Michael Woerister
a2a2763e6d Replace all uses of SHA-256 with BLAKE2b. 2016-10-30 19:14:18 -04:00
bors
a6b3b01b5f Auto merge of #37270 - Mark-Simulacrum:smallvec-optimized-arenas, r=eddyb
Add ArrayVec and AccumulateVec to reduce heap allocations during interning of slices

Updates `mk_tup`, `mk_type_list`, and `mk_substs` to allow interning directly from iterators. The previous PR, #37220, changed some of the calls to pass a borrowed slice from `Vec` instead of directly passing the iterator, and these changes further optimize that to avoid the allocation entirely.

This change yields 50% fewer malloc calls in [some cases](https://pastebin.mozilla.org/8921686). It also yields decent, though not amazing, performance improvements:
```
futures-rs-test  4.091s vs  4.021s --> 1.017x faster (variance: 1.004x, 1.004x)
helloworld       0.219s vs  0.220s --> 0.993x faster (variance: 1.010x, 1.018x)
html5ever-2016-  3.805s vs  3.736s --> 1.018x faster (variance: 1.003x, 1.009x)
hyper.0.5.0      4.609s vs  4.571s --> 1.008x faster (variance: 1.015x, 1.017x)
inflate-0.1.0    3.864s vs  3.883s --> 0.995x faster (variance: 1.232x, 1.005x)
issue-32062-equ  0.309s vs  0.299s --> 1.033x faster (variance: 1.014x, 1.003x)
issue-32278-big  1.614s vs  1.594s --> 1.013x faster (variance: 1.007x, 1.004x)
jld-day15-parse  1.390s vs  1.326s --> 1.049x faster (variance: 1.006x, 1.009x)
piston-image-0. 10.930s vs 10.675s --> 1.024x faster (variance: 1.006x, 1.010x)
reddit-stress    2.302s vs  2.261s --> 1.019x faster (variance: 1.010x, 1.026x)
regex.0.1.30     2.250s vs  2.240s --> 1.005x faster (variance: 1.087x, 1.011x)
rust-encoding-0  1.895s vs  1.887s --> 1.005x faster (variance: 1.005x, 1.018x)
syntex-0.42.2   29.045s vs 28.663s --> 1.013x faster (variance: 1.004x, 1.006x)
syntex-0.42.2-i 13.925s vs 13.868s --> 1.004x faster (variance: 1.022x, 1.007x)
```

We implement a small-size-optimized vector, intended primarily for collecting iterators that are presumed to be short. This vector cannot be "upsized"/reallocated into a heap-allocated vector afterwards, since that would require (slow) branching logic, but heap allocation is possible during the initial collection from an iterator.

We make the new `AccumulateVec` and `ArrayVec` generic over implementors of the `Array` trait, of which there is currently one, `[T; 8]`. In the future, this is likely to expand to other values of N.

Huge thanks to @nnethercote for collecting the performance and other statistics mentioned above.
2016-10-26 03:47:55 -07:00
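A minimal sketch of the AccumulateVec idea described in the entry above; the names and the inline capacity are illustrative, not the rustc implementation. The decision between the inline array and a heap Vec is made once, while collecting from the iterator, and the result is never grown afterwards, so there is no per-push branching later.

```
enum SketchAccumulateVec<T> {
    Array { buf: [Option<T>; 8], len: usize },
    Heap(Vec<T>),
}

impl<T> SketchAccumulateVec<T> {
    fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Self {
        let mut iter = iter.into_iter();
        let mut buf: [Option<T>; 8] = Default::default();
        let mut len = 0;
        while let Some(item) = iter.next() {
            if len == buf.len() {
                // The iterator turned out to be longer than the inline array:
                // fall back to a heap Vec, once, and finish collecting there.
                let mut v: Vec<T> = buf.iter_mut().filter_map(Option::take).collect();
                v.push(item);
                v.extend(iter.by_ref());
                return SketchAccumulateVec::Heap(v);
            }
            buf[len] = Some(item);
            len += 1;
        }
        SketchAccumulateVec::Array { buf, len }
    }

    fn len(&self) -> usize {
        match self {
            SketchAccumulateVec::Array { len, .. } => *len,
            SketchAccumulateVec::Heap(v) => v.len(),
        }
    }
}

fn main() {
    assert_eq!(SketchAccumulateVec::from_iter(0..4).len(), 4);     // stays inline
    assert_eq!(SketchAccumulateVec::from_iter(0..100).len(), 100); // spills to the heap once
}
```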
Mark-Simulacrum
a4f7ba376e Add AccumulateVec, a potentially stack-allocated vector.
AccumulateVec is generic over the Array trait, which is currently only
implemented for [T; 8].
2016-10-25 20:06:17 -06:00
bors
4879166194 Auto merge of #37294 - nikomatsakis:issue-37154, r=nikomatsakis
remove keys w/ skolemized regions from proj cache when popping skolemized regions

This addresses #37154 (a regression). The projection cache was incorrectly caching the results for skolemized regions -- when we pop skolemized regions, we are supposed to drop cache keys for them (just as we remove those skolemized regions from the region inference graph). This is because those skolemized region numbers will be reused later with different meaning (and we have determined that the old ones don't leak out in any meaningful way).

I did a *somewhat* aggressive fix here of only removing keys that mention the skolemized regions. One could imagine just removing all keys added since we started the skolemization (as indeed I did in my initial commit). This more aggressive fix required fixing a latent bug in `TypeFlags`, as an aside.

I believe the more aggressive fix is correct; clearly there can be entries that are unrelated to the skolemized region, and it's a shame to remove them. My one concern was that I believe it *is* possible to have some region variables that are created and related to skolemized regions, and maybe some of them could end up in the cache. However, that seems harmless enough to me -- those relations will be removed, and couldn't have impacted how trait resolution proceeded anyway (in other words, the cache entry is not wrong, though it is kind of useless).

r? @pnkfelix
cc @arielb1
2016-10-22 03:30:23 -07:00
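A toy illustration of the targeted removal described above; the types and the use of plain integers for regions are inventions for this sketch, not rustc's ProjectionCache. Each cache key records which region indices it mentions, and popping a batch of skolemized regions drops exactly the entries that mention one of them, since those indices will later be reused with a different meaning.

```
use std::collections::HashMap;

#[derive(PartialEq, Eq, Hash)]
struct ToyKey {
    regions: Vec<u32>, // region indices this projection mentions
}

struct ToyProjectionCache {
    map: HashMap<ToyKey, String>,
}

impl ToyProjectionCache {
    fn pop_skolemized(&mut self, popped: &[u32]) {
        // Keep only entries whose keys mention none of the popped regions.
        self.map
            .retain(|key, _| !key.regions.iter().any(|r| popped.contains(r)));
    }
}

fn main() {
    let mut cache = ToyProjectionCache { map: HashMap::new() };
    cache.map.insert(ToyKey { regions: vec![1] }, "unrelated".into());
    cache.map.insert(ToyKey { regions: vec![7] }, "mentions skolemized".into());
    cache.pop_skolemized(&[7]);
    assert_eq!(cache.map.len(), 1); // only the unrelated entry survives
}
```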
Guillaume Gomez
59faa20156 Rollup merge of #37286 - srinivasreddy:graph, r=nrc
run rustfmt on graph folder
2016-10-22 01:21:59 +02:00
Niko Matsakis
567b11fc3a only remove keys that mention skolemized regions 2016-10-21 11:13:36 -04:00
Niko Matsakis
974817d493 when pop skol, also remove from proj cache 2016-10-21 11:13:34 -04:00
Guillaume Gomez
dd3a014ed9 Rollup merge of #37288 - srinivasreddy:snapshot_map, r=eddyb
run rustfmt on snapshot_map
2016-10-19 23:15:01 +02:00
Guillaume Gomez
655ecd89ed Rollup merge of #37287 - srinivasreddy:unify, r=eddyb
run rustfmt on unify folder
2016-10-19 23:15:01 +02:00
Srinivas Reddy Thatiparthy
4e89811690
run rustfmt on snapshot_map 2016-10-20 00:40:05 +05:30
Srinivas Reddy Thatiparthy
c2d0d4fea6
run rustfmt on unify folder 2016-10-20 00:38:55 +05:30
Srinivas Reddy Thatiparthy
20bda8d5f4
run rustfmt on graph folder 2016-10-20 00:37:24 +05:30
Srinivas Reddy Thatiparthy
f1e4ae17b1
run rustfmt on control_flow_graph folder 2016-10-20 00:25:19 +05:30
Eduard-Mihai Burtescu
7343291ac3 Rollup merge of #37233 - michaelwoerister:blake2-for-ich, r=nikomatsakis
ICH: Use 128-bit Blake2b hash instead of 64-bit SipHash for incr. comp. fingerprints

This PR makes incr. comp. hashes 128 bits wide in order to push the collision probability below the threshold where we would need to worry about it. It also replaces SipHash, which has been mentioned multiple times as not being built for fingerprinting, with the [BLAKE2b hash function](https://blake2.net/), an improved version of the BLAKE SHA-3 finalist.

I was worried that using a cryptographic hash function would make ICH computation noticeably slower, but after doing some performance tests, I'm not any more. Most of the time BLAKE2b is actually faster than using two SipHashes (in order to get 128 bits):

```
SipHash
libcore: 0.199 seconds
libstd:  0.090 seconds

BLAKE2b
libcore: 0.162 seconds
libstd:  0.078 seconds
```

If someone can prove that something like MetroHash128 provides a comparably low collision probability as BLAKE2, I'm happy to switch. But for now we are at least not taking a performance hit.

I also suggest that we throw out the SHA-256 implementation in the compiler and replace it with BLAKE2, since our SHA-256 implementation is two to three times slower than the BLAKE2 implementation in this PR (cc @alexcrichton @eddyb @brson)

r? @nikomatsakis (although there's not much incr. comp. specific in here, so feel free to re-assign)
2016-10-19 08:00:03 +03:00
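A sketch of what the widened fingerprint amounts to. The `Fingerprint` shape follows the PR's description of a 128-bit value; `wide_hash` is a placeholder standing in for a real BLAKE2b implementation and is not rustc code. One way to form such a fingerprint is to take the first 128 bits of the digest and store them as two u64s.

```
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Fingerprint(u64, u64);

// Placeholder standing in for a real BLAKE2b implementation; NOT rustc code.
fn wide_hash(_data: &[u8]) -> [u8; 64] {
    [0u8; 64]
}

impl Fingerprint {
    fn from_data(data: &[u8]) -> Fingerprint {
        let digest = wide_hash(data);
        let mut lo = [0u8; 8];
        let mut hi = [0u8; 8];
        lo.copy_from_slice(&digest[0..8]);
        hi.copy_from_slice(&digest[8..16]);
        // 128 bits total: accidental collisions only become likely after
        // on the order of 2^64 distinct inputs (birthday bound).
        Fingerprint(u64::from_le_bytes(lo), u64::from_le_bytes(hi))
    }
}

fn main() {
    let a = Fingerprint::from_data(b"fn main() {}");
    let b = Fingerprint::from_data(b"fn main() {}");
    assert_eq!(a, b);
}
```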
Jonas Schievink
0c844d2304 Set stalled=false when encountering an error 2016-10-17 21:53:27 +02:00
Michael Woerister
d07523c716 ICH: Use 128-bit Blake2b hash instead of 64-bit SipHash for incr. comp. fingerprints. 2016-10-17 12:40:25 -04:00
Jonas Schievink
88fde7f728 Don't process cycles when stalled
This improves the `inflate-0.1.0` benchmark by about 10% for me.
2016-10-17 02:28:31 +02:00
Wesley Wiser
5e91c073f2 Move IdxSetBuf and BitSlice to rustc_data_structures
Resolves a FIXME
2016-10-10 20:26:26 -04:00
Niels Sascha Reedijk
1a6fc8b7b8 Add support for the Haiku operating system on x86 and x86_64 machines
* Hand-rebased from Niels' original work on 1.9.0
2016-09-25 11:12:23 -05:00
athulappadan
49e77dbf25 Documentation of what Default does for each type 2016-09-11 17:00:09 +05:30
Michael Woerister
7310a8ffea ICH: Adapt to changes in the MetaItem AST representation. 2016-09-01 14:39:31 -04:00
Michael Woerister
794fd315ad incr.comp.: Move lock files out of directory being locked 2016-08-29 14:27:40 -04:00
Michael Woerister
3e9bed92da Implement copy-on-write scheme for managing the incremental compilation cache. 2016-08-29 14:27:40 -04:00
Michael Woerister
206e7b6fc7 Add some features to flock. 2016-08-29 14:27:40 -04:00
Michael Woerister
6ef8198406 Move flock.rs from librustdoc to librustc_data_structures. 2016-08-29 14:27:40 -04:00
Niko Matsakis
9978cbc8f4 generalize BitMatrix to be NxM and not just NxN 2016-08-09 08:26:06 -04:00
Niko Matsakis
0e97240f98 isolate predecessor computation
The new `Predecessors` type computes a set of interesting targets and
their HIR predecessors, and discards everything in between.
2016-08-09 08:26:05 -04:00
bors
1ab87b65a2 Auto merge of #34605 - arielb1:bug-in-the-jungle, r=eddyb
fail obligations that depend on erroring obligations

Fix a bug where an obligation that depends on an erroring obligation would
be regarded as successful, leading to global cache pollution and random
lossage.

Fixes #33723.
Fixes #34503.

r? @eddyb since @nikomatsakis is on vacation

beta-nominating because of the massive lossage potential (e.g. with `Copy` this could lead to random memory leaks), plus this is a regression.
2016-07-02 12:25:29 -07:00
Ariel Ben-Yehuda
201cdd33df fail obligations that depend on erroring obligations
Fix a bug where an obligation that depends on an erroring obligation would
be regarded as successful, leading to global cache pollution and random
lossage.

Fixes #33723.
Fixes #34503.
2016-07-02 02:20:45 +03:00
Ariel Ben-Yehuda
bff28ec468 refactor rustc_metadata to use CamelCase names and IndexVec 2016-06-28 23:41:09 +03:00
Scott A Carr
66d60c78b3 add control flow graph and algorithms. add dominator to mir 2016-06-23 14:00:00 -07:00
bors
6b4511755c Auto merge of #34221 - srinivasreddy:rm_redundant, r=alexcrichton
remove redundant test case in bitvector.rs

`bitvec_iter_works_2` does exactly the same as `bitvec_iter_works_1`, so I removed it.
2016-06-14 13:42:28 -07:00
Srinivas Reddy Thatiparthy
fa91c14bc0 Add additional test cases to test all arities of tuple; and remove the i32 type suffix on integers 2016-06-11 22:31:24 +05:30
Srinivas Reddy Thatiparthy
99e9f2ddc1 remove redundant test case 2016-06-11 20:59:58 +05:30
Ariel Ben-Yehuda
e3af9fa490 make the basic_blocks field private 2016-06-09 14:55:19 +03:00
Ariel Ben-Yehuda
bc1eb67721 introduce the type-safe IdxVec and use it instead of loose indexes 2016-06-09 14:26:08 +03:00
bors
22b36c70f9 Auto merge of #33999 - scottcarr:master, r=nikomatsakis
generate fewer basic blocks for variant switches

CC #33567
Adds a new field to TestKind::Switch that tracks the variants that are actually matched against.  The other candidates target a common "otherwise" block.
2016-06-05 03:12:38 -07:00