Fix for old school error issues, improvements to new school
This PR:
* Fixes some old school error issues, specifically #33559, #33543, #33366
* Improves the wording of borrowck errors involving match patterns
* De-emphasizes multi-line spans, so that we don't color a single source character when we're trying to say "span starts here"
* Rollup of #33392 (which should help fix #33390)
r? @nikomatsakis
track incr. comp. dependencies across crates
This PR refactors the compiler's incremental compilation hashing so that it can track dependencies across crates. The main bits are:
- computing a hash representing the metadata for an item we are emitting
  - we do this by making `MetaData(X)` be the current task while computing metadata for an item (a toy sketch of this pattern follows the list)
  - this naturally registers reads from any tables and other data that we consult for that purpose
  - we can then hash all the inputs to those tables
- tracking when we access metadata
  - we do this by registering a read of `MetaData(X)` for each foreign item `X` whose metadata we read
- hashing metadata from foreign items
  - we do this by loading up metadata from a file in the incr. comp. directory
  - if there is no file, we use the SVH for the entire crate
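As a rough mental model, here is a self-contained toy sketch of the task/read pattern described above. This is not rustc's actual `DepGraph` API; all names are stand-ins. The point is only that while `MetaData(X)` is the current task, every read is recorded as one of its dependencies, and those recorded inputs are what get hashed:

```rust
use std::cell::RefCell;
use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash, Debug)]
enum DepNode {
    MetaData(u32),       // stand-in for `MetaData(X)`, with X an item id
    Table(&'static str), // stand-in for a compiler table we might read
}

#[derive(Default)]
struct DepGraph {
    current: RefCell<Option<DepNode>>,
    edges: RefCell<HashMap<DepNode, Vec<DepNode>>>,
}

impl DepGraph {
    // Run `op` with `task` as the current task; any reads registered
    // while `op` runs become dependencies of `task`.
    fn in_task<R>(&self, task: DepNode, op: impl FnOnce() -> R) -> R {
        let prev = self.current.borrow_mut().replace(task);
        let result = op();
        *self.current.borrow_mut() = prev;
        result
    }

    // Record that the current task (if any) read `node`.
    fn read(&self, node: DepNode) {
        if let Some(task) = self.current.borrow().clone() {
            self.edges.borrow_mut().entry(task).or_default().push(node);
        }
    }
}

fn main() {
    let graph = DepGraph::default();
    graph.in_task(DepNode::MetaData(0), || {
        // Emitting metadata for item 0 happens to read these tables,
        // so they are registered as inputs to `MetaData(0)`:
        graph.read(DepNode::Table("types"));
        graph.read(DepNode::Table("impls"));
    });
    // The recorded edges are what would later be hashed.
    println!("{:?}", graph.edges.borrow());
}
```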
There is only one very simple test at this point. The next PR will be focused on expanding out the tests.
Note that this is based on top of https://github.com/rust-lang/rust/pull/33228
r? @michaelwoerister
This commit reorganizes how the persist code treats hashing. The idea is
that each crate saves a file containing hashes representing the metadata
for each item X. When we see a read from `MetaData(X)`, we can load this
hash up (if we don't find a file for that crate, we just use the SVH for
the entire crate).
To compute the hash for `MetaData(Y)`, where Y is some local item, we
examine all the predecessors of the `MetaData(Y)` node and hash their
hashes together.
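As a hedged sketch of that combining step (hypothetical names, with `DefaultHasher` standing in for whatever stable hasher the compiler actually uses):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// `pred_hashes` would be the saved hashes of the predecessors of the
// `MetaData(Y)` node, in some deterministic order (order matters,
// since hashers are order-sensitive).
fn hash_of_metadata(pred_hashes: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    for h in pred_hashes {
        h.hash(&mut hasher);
    }
    hasher.finish()
}
```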
For external crates, we must build up a map that goes from the
`DefKey` to the `DefIndex`. We do this by iterating over each
index that is found in the metadata and loading the associated
`DefKey`.
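An illustrative sketch of that reverse mapping, using simplified stand-ins for the real `DefKey`/`DefIndex` types:

```rust
use std::collections::HashMap;

// Simplified stand-ins; the real types live in the compiler.
type DefIndex = u32;

#[derive(Clone, PartialEq, Eq, Hash)]
struct DefKey {
    // parent index, disambiguated name, etc., elided here
    repr: String,
}

fn build_reverse_map<I>(
    indices: I,
    load_def_key: impl Fn(DefIndex) -> DefKey,
) -> HashMap<DefKey, DefIndex>
where
    I: IntoIterator<Item = DefIndex>,
{
    let mut map = HashMap::new();
    for index in indices {
        // Decode the DefKey stored in the metadata for this index and
        // record the reverse association.
        map.insert(load_def_key(index), index);
    }
    map
}
```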
introduce a specializes cache
This query is frequently used during trait selection, and caching the
result can be a reasonable performance win.
The one case I examined thus far was the mrusty package (v0.5.1), where I saw an improvement in "typeck item bodies" from ~8.3s to ~1.9s.
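A minimal sketch of the idea, with stand-in types rather than the compiler's actual ones: a map keyed on the pair of impls, consulted before doing the expensive check.

```rust
use std::cell::RefCell;
use std::collections::HashMap;

type DefId = u32; // stand-in for rustc's DefId

#[derive(Default)]
struct SpecializesCache {
    map: RefCell<HashMap<(DefId, DefId), bool>>,
}

impl SpecializesCache {
    fn check(&self, a: DefId, b: DefId) -> Option<bool> {
        self.map.borrow().get(&(a, b)).copied()
    }
    fn insert(&self, a: DefId, b: DefId, result: bool) {
        self.map.borrow_mut().insert((a, b), result);
    }
}

fn specializes(cache: &SpecializesCache, a: DefId, b: DefId) -> bool {
    // Fast path: we have already answered this exact question.
    if let Some(result) = cache.check(a, b) {
        return result;
    }
    let result = expensive_specialization_check(a, b);
    cache.insert(a, b, result);
    result
}

fn expensive_specialization_check(_a: DefId, _b: DefId) -> bool {
    // placeholder for the real trait-selection work
    false
}
```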
r? @aturon
syntax_ext: format: nest_level's are no more
Just noticed this while working on #33642; here's a quick fix that shouldn't touch anything else. It's some historic code indeed...
Warnings for issue #32330
This is an extension of the previous PR that issues warnings in more situations than before. It does not handle *all* cases of #32330, but I believe it issues warnings for all the cases I've seen in practice.
Before merging I'd like to address:
- open a good issue explaining the problem and how to fix it (I have a [draft writeup][])
- work on the error message, which I think is not as clear as it could/should be (suggestions welcome)
r? @aturon
[draft writeup]: https://gist.github.com/nikomatsakis/631ec8b4af9a18b5d062d9d9b7d3d967
Added a big-picture explanation for thread::park() & co.
As I said in https://www.reddit.com/r/rust/comments/4ihvv1/hey_rust_programmers_got_a_question_ask_here/d372s4i, the current explanation of `park()` and `unpark()` is a bit unclear. It says that they're used for blocking, but then it goes on to explain the semantics in detail, leaving the bigger picture unclear.
I added a short high-level explanation that shows how the functions are used. I also spelled out the full paths (`thread::park()` and `thread::Thread::unpark()`), because `unpark()`, being a method, is not directly visible at the module level.
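For illustration, a minimal example of the pattern the docs describe, using only the stable `std::thread` API:

```rust
use std::thread;
use std::time::Duration;

fn main() {
    // The worker parks itself; `park()` blocks the *current* thread.
    let worker = thread::spawn(|| {
        println!("worker: waiting to be unparked");
        thread::park();
        println!("worker: woken up, proceeding");
    });

    thread::sleep(Duration::from_millis(50));
    // `unpark()` is a method on `Thread`, so another thread needs a
    // handle to the target thread in order to wake it.
    worker.thread().unpark();
    worker.join().unwrap();
}
```

Note that `park()` may also return spuriously, so real code typically re-checks a shared condition in a loop rather than relying on a single park/unpark pair.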
Update i686-linux-android features to match android x86 ABI.
Based on [android's official x86 ABI info](http://developer.android.com/ndk/guides/abis.html#x86), the x86 baseline CPU can safely be updated to `pentiumpro`, with the addition of the `MMX`, `SSE`, `SSE2`, `SSE3`, and `SSSE3` features.
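For illustration, the change amounts to settings of roughly this shape. This is a simplified stand-in for the target-spec structure, not the actual file in rustc:

```rust
// Simplified stand-in for rustc's target options; field names and
// layout here are illustrative only.
struct TargetOptions {
    cpu: String,
    features: String,
}

fn i686_linux_android_options() -> TargetOptions {
    TargetOptions {
        // The Android x86 ABI guarantees at least a pentiumpro-class
        // CPU with MMX/SSE/SSE2/SSE3/SSSE3, so the baseline is raised:
        cpu: "pentiumpro".to_string(),
        features: "+mmx,+sse,+sse2,+sse3,+ssse3".to_string(),
    }
}
```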
r? @alexcrichton
Replace the obligation forest with a graph
In the presence of caching, arbitrary nodes in the obligation forest can be merged, which makes it a general graph. This PR handles it as such, using cycle-detection algorithms in the processing.
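For context, a generic sketch of the kind of cycle detection involved: depth-first search with a three-state marking, where a back edge to an in-progress node signals a cycle. This is entirely separate from the actual obligation-forest code:

```rust
// DFS-based cycle detection over an adjacency list.
fn has_cycle(adj: &[Vec<usize>]) -> bool {
    #[derive(Clone, Copy, PartialEq)]
    enum State { Unvisited, InProgress, Done }

    fn dfs(node: usize, adj: &[Vec<usize>], state: &mut [State]) -> bool {
        state[node] = State::InProgress;
        for &next in &adj[node] {
            match state[next] {
                // Back edge to a node still on the DFS stack: cycle.
                State::InProgress => return true,
                State::Unvisited if dfs(next, adj, state) => return true,
                _ => {}
            }
        }
        state[node] = State::Done;
        false
    }

    let mut state = vec![State::Unvisited; adj.len()];
    (0..adj.len()).any(|n| state[n] == State::Unvisited && dfs(n, adj, &mut state))
}

fn main() {
    // 0 -> 1 -> 2 -> 0 forms a cycle.
    let graph = vec![vec![1], vec![2], vec![0]];
    assert!(has_cycle(&graph));
}
```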
I should do performance measurements sometime.
This was pretty much written as a proof-of-concept. Please help me write this in a less-ugly way. I should also add comments explaining what is going on.
r? @nikomatsakis