[8/n] rustc: clean up lookup_item_type and remove TypeScheme.
_This is part of a series ([prev](https://github.com/rust-lang/rust/pull/37676) | [next]()) of patches designed to rework rustc into an out-of-order on-demand pipeline model for both better feature support (e.g. [MIR-based](https://github.com/solson/miri) early constant evaluation) and incremental execution of compiler passes (e.g. type-checking), with beneficial consequences to IDE support as well.
If any motivation is unclear, please ask for additional PR description clarifications or code comments._
<hr>
* `tcx.tcache` -> `tcx.item_types`
* `TypeScheme` (grouping `Ty` and `ty::Generics`) is removed
* `tcx.item_types` entries no longer duplicated in `tcx.tables.node_types`
* `tcx.lookup_item_type(def_id).ty` -> `tcx.item_type(def_id)`
* `tcx.lookup_item_type(def_id).generics` -> `tcx.item_generics(def_id)`
* `tcx.lookup_generics(def_id)` -> `tcx.item_generics(def_id)`
* `tcx.lookup_{super_,}predicates(def_id)` -> `tcx.item_{super_,}predicates(def_id)`
macros 1.1: Allow proc_macro functions to declare attributes to be marked as used
This PR allows proc macro functions to declare attribute names that should be marked as used when attached to the deriving item. There are a few questions for this PR.
- Currently this uses a separate attribute named `#[proc_macro_attributes(..)]`; is this the best choice?
- In order to make this work, the `check_attribute` function had to be modified to not error on attributes marked as used. This is a pretty large change in semantics; is there a better way to do this?
- I've got a few clones where I don't know whether I need them (like turning `item` into a `TokenStream`); can these be avoided?
- Is switching to `MultiItemDecorator` the right thing here?
Also fixes https://github.com/rust-lang/rust/issues/37563.
Replace FNV with a faster hash function.
Hash table lookups are very hot in rustc profiles and the time taken within `FnvHasher` itself is a big part of that. Although FNV is a simple hash, it processes its input one byte at a time. In contrast, Firefox has a homespun hash function that is also simple but works on multiple bytes at a time. So I tried it out and the results are compelling:
```
futures-rs-test 4.326s vs 4.212s --> 1.027x faster (variance: 1.001x, 1.007x)
helloworld 0.233s vs 0.232s --> 1.004x faster (variance: 1.037x, 1.016x)
html5ever-2016- 5.397s vs 5.210s --> 1.036x faster (variance: 1.009x, 1.006x)
hyper.0.5.0 5.018s vs 4.905s --> 1.023x faster (variance: 1.007x, 1.006x)
inflate-0.1.0 4.889s vs 4.872s --> 1.004x faster (variance: 1.012x, 1.007x)
issue-32062-equ 0.347s vs 0.335s --> 1.035x faster (variance: 1.033x, 1.019x)
issue-32278-big 1.717s vs 1.622s --> 1.059x faster (variance: 1.027x, 1.028x)
jld-day15-parse 1.537s vs 1.459s --> 1.054x faster (variance: 1.005x, 1.003x)
piston-image-0. 11.863s vs 11.482s --> 1.033x faster (variance: 1.060x, 1.002x)
regex.0.1.30 2.517s vs 2.453s --> 1.026x faster (variance: 1.011x, 1.013x)
rust-encoding-0 2.080s vs 2.047s --> 1.016x faster (variance: 1.005x, 1.005x)
syntex-0.42.2 32.268s vs 31.275s --> 1.032x faster (variance: 1.014x, 1.022x)
syntex-0.42.2-i 17.629s vs 16.559s --> 1.065x faster (variance: 1.013x, 1.021x)
```
(That's a stage1 compiler doing debug builds. Results for a stage2 compiler are similar.)
The attached commit is not in a state suitable for landing because I changed the implementation of `FnvHasher` without changing its name, as that would have required touching many lines in the compiler. Nonetheless, it is a good place to start discussions.
Profiles show very clearly that this new hash function is a lot faster to compute than FNV. The quality of the new hash function is less clear -- it seems to do better in some cases and worse in others (judging by the number of instructions executed in `Hash{Map,Set}::get`).
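For reference, here is a rough sketch of the general shape of such a word-at-a-time multiplicative hash; the constant, the chunking, and the names are illustrative, not the exact committed implementation:

```rust
use std::hash::Hasher;

// Large odd multiplier used to spread bits; treat the exact value as illustrative.
const SEED: u64 = 0x517c_c1b7_2722_0a95;

struct FastHasher {
    hash: u64,
}

impl FastHasher {
    #[inline]
    fn add_to_hash(&mut self, word: u64) {
        // Mix a whole word at a time: rotate, xor, multiply.
        self.hash = (self.hash.rotate_left(5) ^ word).wrapping_mul(SEED);
    }
}

impl Hasher for FastHasher {
    fn write(&mut self, bytes: &[u8]) {
        // Process 8 bytes per step instead of 1, zero-padding the final chunk.
        for chunk in bytes.chunks(8) {
            let mut buf = [0u8; 8];
            buf[..chunk.len()].copy_from_slice(chunk);
            self.add_to_hash(u64::from_le_bytes(buf));
        }
    }

    fn finish(&self) -> u64 {
        self.hash
    }
}

fn main() {
    let mut h = FastHasher { hash: 0 };
    h.write(b"hello world");
    println!("{:016x}", h.finish());
}
```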
CC @brson, @arthurprs
By using a second attribute `attributes(Bar)` on
proc_macro_derive, whitelist any attributes with
the name `Bar` in the deriving item. This allows
a proc_macro function to use custom attributes
without a custom attribute error or unused attribute
lint.
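For illustration, a minimal sketch of a derive using this whitelist; the derive name `Foo`, the attribute `Bar`, and the empty body are made up:

```rust
// Must be compiled as a proc-macro crate.
extern crate proc_macro;
use proc_macro::TokenStream;

// `attributes(Bar)` whitelists `#[Bar(...)]` on the item being derived, so the
// attribute is marked as used instead of triggering a custom-attribute error
// or the unused-attribute lint.
#[proc_macro_derive(Foo, attributes(Bar))]
pub fn derive_foo(item: TokenStream) -> TokenStream {
    // A real derive would inspect any `#[Bar(...)]` attributes in `item` here.
    let _ = item;
    TokenStream::new()
}
```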
Stabilize `..` in tuple (struct) patterns
I'd like to nominate `..` in tuple and tuple struct patterns for stabilization.
This feature is a relatively small extension to existing stable functionality and doesn't have known blockers.
The feature first appeared in Rust 1.10, six months ago.
An example of use: https://github.com/rust-lang/rust/pull/36203
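A minimal illustration of the syntax being stabilized:

```rust
// `..` can stand in for "the rest" of a tuple or tuple struct pattern.
struct Point3(i32, i32, i32);

fn main() {
    let p = Point3(1, 2, 3);
    let Point3(x, ..) = p;          // tuple struct pattern: ignore the rest
    let (.., last) = (10, 20, 30);  // tuple pattern: ignore the leading elements
    assert_eq!((x, last), (1, 30));
}
```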
Closes https://github.com/rust-lang/rust/issues/33627
r? @nikomatsakis
Most of the Rust community agrees that the vec! macro is clearer when
called using square brackets [] instead of parentheses (). Most of
these occurrences are from before macros allowed using different types of
brackets.
There is one left unchanged in a pretty-print test, as the pretty
printer still wants it to have parentheses.
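For illustration, both invocation forms compile; the change is purely stylistic:

```rust
fn main() {
    let with_brackets = vec![1, 2, 3]; // preferred style
    let with_parens = vec!(1, 2, 3);   // also accepted, but less idiomatic
    assert_eq!(with_brackets, with_parens);
}
```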
Move `CrateConfig` from `Crate` to `ParseSess`
This is a syntax-[breaking-change]. Most breakage can be fixed by removing a `CrateConfig` argument.
r? @eddyb
[2/n] rustc_metadata: move is_extern_item to trans.
*This is part of a series ([prev](https://github.com/rust-lang/rust/pull/37400) | [next](https://github.com/rust-lang/rust/pull/37402)) of patches designed to rework rustc into an out-of-order on-demand pipeline model for both better feature support (e.g. [MIR-based](https://github.com/solson/miri) early constant evaluation) and incremental execution of compiler passes (e.g. type-checking), with beneficial consequences to IDE support as well.
If any motivation is unclear, please ask for additional PR description clarifications or code comments.*
<hr>
Minor cleanup missed by #36551: `is_extern_item` is one of the few `CrateStore` methods, if not the only one, that takes a `TyCtxt` but doesn't produce something cached in it, and such methods are going away.
[1/n] Move the MIR map into the type context.
*This is part of a series ([prev]() | [next](https://github.com/rust-lang/rust/pull/37401)) of patches designed to rework rustc into an out-of-order on-demand pipeline model for both better feature support (e.g. [MIR-based](https://github.com/solson/miri) early constant evaluation) and incremental execution of compiler passes (e.g. type-checking), with beneficial consequences to IDE support as well.
If any motivation is unclear, please ask for additional PR description clarifications or code comments.*
<hr>
The first commit reorganizes the `rustc::mir` module to contain the MIR types directly, without an extraneous `repr` module which serves no practical purpose and is rather an eyesore.
The second commit performs the actual move of the MIR map into the type context, for the purposes of future integration with requesting analysis/lowering by-products through `TyCtxt`.
Local `Mir` bodies need to be mutated by passes (hence `RefCell`), and at least one pass (`qualify_consts`) needs simultaneous access to multiple `Mir` bodies (hence arena-allocation).
`Mir` bodies loaded from other crates appear as if immutably borrowed (by `.borrow()`-ing one `Ref` and subsequently "leaking" it) to avoid, at least dynamically, *any* possibility of their local mutation.
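A hypothetical sketch of that "leak a `Ref`" trick, with illustrative names rather than the actual rustc API:

```rust
use std::cell::RefCell;
use std::mem;

struct Mir; // stand-in for a MIR body loaded from another crate

// Borrow the cell immutably and "leak" the guard: the RefCell's shared-borrow
// count is never decremented, so any later borrow_mut() will panic, ruling out
// local mutation dynamically. Further shared borrows still work.
fn freeze<'a>(cell: &'a RefCell<Mir>) -> &'a Mir {
    let guard = cell.borrow();
    let ptr: *const Mir = &*guard;
    mem::forget(guard); // keep the borrow flag set forever
    unsafe { &*ptr }
}

fn main() {
    let cell = RefCell::new(Mir);
    let _mir = freeze(&cell);
    assert!(cell.try_borrow_mut().is_err()); // mutation is now impossible
}
```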
One caveat is that lint passes can now snoop at the MIR (helpful) or even mutate it (dangerous).
However, lints are unstable anyway and we can find a way to deal with this in due time.
Future work will result in a tighter API, potentially hiding mutation *completely* outside of MIR passes.
Add ArrayVec and AccumulateVec to reduce heap allocations during interning of slices
Updates `mk_tup`, `mk_type_list`, and `mk_substs` to allow interning directly from iterators. The previous PR, #37220, changed some of the calls to pass a borrowed slice from `Vec` instead of directly passing the iterator, and these changes further optimize that to avoid the allocation entirely.
This change yields 50% fewer malloc calls in [some cases](https://pastebin.mozilla.org/8921686). It also yields decent, though not amazing, performance improvements:
```
futures-rs-test 4.091s vs 4.021s --> 1.017x faster (variance: 1.004x, 1.004x)
helloworld 0.219s vs 0.220s --> 0.993x faster (variance: 1.010x, 1.018x)
html5ever-2016- 3.805s vs 3.736s --> 1.018x faster (variance: 1.003x, 1.009x)
hyper.0.5.0 4.609s vs 4.571s --> 1.008x faster (variance: 1.015x, 1.017x)
inflate-0.1.0 3.864s vs 3.883s --> 0.995x faster (variance: 1.232x, 1.005x)
issue-32062-equ 0.309s vs 0.299s --> 1.033x faster (variance: 1.014x, 1.003x)
issue-32278-big 1.614s vs 1.594s --> 1.013x faster (variance: 1.007x, 1.004x)
jld-day15-parse 1.390s vs 1.326s --> 1.049x faster (variance: 1.006x, 1.009x)
piston-image-0. 10.930s vs 10.675s --> 1.024x faster (variance: 1.006x, 1.010x)
reddit-stress 2.302s vs 2.261s --> 1.019x faster (variance: 1.010x, 1.026x)
regex.0.1.30 2.250s vs 2.240s --> 1.005x faster (variance: 1.087x, 1.011x)
rust-encoding-0 1.895s vs 1.887s --> 1.005x faster (variance: 1.005x, 1.018x)
syntex-0.42.2 29.045s vs 28.663s --> 1.013x faster (variance: 1.004x, 1.006x)
syntex-0.42.2-i 13.925s vs 13.868s --> 1.004x faster (variance: 1.022x, 1.007x)
```
We implement a small-size-optimized vector, intended primarily for collecting iterators that are presumed to be short. This vector cannot be "upsized/reallocated" into a heap-allocated vector, since that would require (slow) branching logic, but heap allocation is possible during the initial collection from an iterator.
We make the new `AccumulateVec` and `ArrayVec` generic over implementors of the `Array` trait, of which there is currently a single one, `[T; 8]`. In the future, this is likely to expand to other values of N.
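A much-simplified sketch of the collection strategy; the real `AccumulateVec` is generic over the `Array` trait and avoids the per-element `Option` overhead shown here:

```rust
// Short iterators land in a fixed-size inline buffer; longer ones spill to a
// heap-allocated Vec exactly once, during the initial collection.
enum AccumulateVec<T> {
    Array { buf: [Option<T>; 8], len: usize },
    Heap(Vec<T>),
}

impl<T> AccumulateVec<T> {
    fn collect<I: Iterator<Item = T>>(mut iter: I) -> Self {
        let mut buf: [Option<T>; 8] = [None, None, None, None, None, None, None, None];
        let mut len = 0;
        while let Some(item) = iter.next() {
            if len == buf.len() {
                // Too long for the inline buffer: move to the heap and finish there.
                let mut v: Vec<T> = buf.iter_mut().map(|slot| slot.take().unwrap()).collect();
                v.push(item);
                v.extend(iter);
                return AccumulateVec::Heap(v);
            }
            buf[len] = Some(item);
            len += 1;
        }
        AccumulateVec::Array { buf, len }
    }
}

fn main() {
    assert!(matches!(AccumulateVec::collect(0..3), AccumulateVec::Array { .. }));
    assert!(matches!(AccumulateVec::collect(0..100), AccumulateVec::Heap(_)));
}
```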
Huge thanks to @nnethercote for collecting the performance and other statistics mentioned above.
macros: Future proof `#[no_link]`
This PR future proofs `#[no_link]` for macro modularization (cc #35896).
First, we resolve all `#[no_link] extern crate`s. `#[no_link]` crates without `#[macro_use]` or `#[macro_reexport]` are not resolved today, so this is a [breaking-change]. For example,
```rust
// Illustrative example: this crate was previously not resolved at all, so it
// compiled even if `some_crate` could not be found; it is now resolved and
// errors in that case.
#[no_link] extern crate some_crate;
```
Any breakage can be fixed by simply removing the `#[no_link] extern crate`.
Second, `#[no_link] extern crate`s will define an empty module in the type namespace to eventually allow importing the crate's macros with `use`. This is a [breaking-change], for example:
```rust
#[no_link] extern crate syntax;
mod syntax {} //< This becomes a duplicate error.
```
r? @nrc
macros 1.1: future proofing and cleanup
This PR
- uses the macro namespace for custom derives (instead of a dedicated custom derive namespace),
- relaxes the shadowing rules for `#[macro_use]`-imported custom derives to match the shadowing rules for ordinary `#[macro_use]`-imported macros, and
- treats custom derive `extern crate`s like empty modules so that we can eventually allow, for example, `extern crate serde_derive; use serde_derive::Serialize;` backwards compatibly.
r? @alexcrichton
Clarify the positions of the lexer and parser
The lexer and parser use unclear names to indicate their positions in the
source code. I propose the following renamings.
Lexer:
```
pos -> next_pos # it's actually the next pos!
last_pos -> pos # it's actually the current pos!
curr -> ch # the current char
curr_is -> ch_is # tests the current char
col (unchanged) # the current column
```
Parser:
```
last_span -> prev_span # the previous token's span
last_token_kind -> prev_token_kind # the previous token's kind
LastTokenKind -> PrevTokenKind # ditto (but the type)
token (unchanged) # the current token
span (unchanged) # the current span
```
Things to note:
- This proposal removes all uses of "last", which is an unclear word because it
could mean (a) previous, (b) final, or (c) most recent, i.e. current.
- The "current" things (ch, col, token, span) consistently lack a prefix. The
"previous" and "next" things consistently have a prefix.
Avoid allocations in `Decoder::read_str`.
`opaque::Decoder::read_str` is very hot within `rustc` due to its use in
the reading of crate metadata, and it currently returns a `String`. This
commit changes it to instead return a `Cow<str>`, which avoids a heap
allocation.
This change reduces the number of calls to `malloc` by almost 10% in
some benchmarks.
This is a [breaking-change] to libserialize.
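For context, a hypothetical sketch of the shape of the change; the struct and signatures here are illustrative, not the actual libserialize API:

```rust
use std::borrow::Cow;
use std::str;

// A decoder that borrows the raw metadata bytes it reads from.
struct Decoder<'a> {
    data: &'a [u8],
    position: usize,
}

impl<'a> Decoder<'a> {
    // Previously the analogous method returned an owned String, copying the
    // bytes; returning Cow::Borrowed lets callers read the string without a
    // heap allocation.
    fn read_str(&mut self, len: usize) -> Cow<'a, str> {
        let bytes = &self.data[self.position..self.position + len];
        self.position += len;
        Cow::Borrowed(str::from_utf8(bytes).expect("metadata strings are valid UTF-8"))
    }
}

fn main() {
    let mut d = Decoder { data: b"rustc", position: 0 };
    assert_eq!(d.read_str(5), "rustc");
}
```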