The DirEntryExt::ino() implementation was omitted from the first
iteration of this patch, because a dependency needed to be
configured. The fix is straightforward enough.
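For context, a minimal usage sketch of the trait method in question; this shows std's Unix `DirEntryExt::ino()` being called, not the platform-specific implementation that this patch adds:

```rust
use std::fs;
use std::os::unix::fs::DirEntryExt;

fn main() -> std::io::Result<()> {
    // Print the inode number reported by `DirEntryExt::ino()` for each entry.
    for entry in fs::read_dir(".")? {
        let entry = entry?;
        println!("{:?}: inode {}", entry.file_name(), entry.ino());
    }
    Ok(())
}
```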
Use a faster `deflate` setting
In #37086 we considered various ideas for reducing the cost of LLVM bytecode compression. This PR implements the simplest of these: using a faster `deflate` setting. It reduces compression time by almost half while increasing the size of the resulting rlibs by only about 2%.
I looked at using zstd, which might be able to halve the compression time again. But integrating zstd is beyond my Rust FFI integration abilities at the moment -- it consists of a few dozen C files, has a non-trivial build system, etc. I decided it was worth getting a big chunk of the possible improvement with minimum effort.
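This is not the actual rustc change, but a minimal sketch of the trade-off using the `flate2` crate (the crate and names here are illustrative assumptions): a lower deflate compression level finishes much faster and produces slightly larger output.

```rust
use flate2::write::DeflateEncoder;
use flate2::Compression;
use std::io::Write;

// Compress `data` with the given deflate level and return the compressed bytes.
fn compress(data: &[u8], level: Compression) -> std::io::Result<Vec<u8>> {
    let mut encoder = DeflateEncoder::new(Vec::new(), level);
    encoder.write_all(data)?;
    encoder.finish()
}

fn main() -> std::io::Result<()> {
    let data: Vec<u8> = (0..1_000_000u32).map(|i| (i % 251) as u8).collect();
    // `Compression::best()` (level 9) vs `Compression::fast()` (level 1) mirrors
    // the "faster deflate setting" trade-off: much faster, slightly larger output.
    let slow_small = compress(&data, Compression::best())?;
    let fast_large = compress(&data, Compression::fast())?;
    println!("best: {} bytes, fast: {} bytes", slow_small.len(), fast_large.len());
    Ok(())
}
```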
The following table shows the before and after percentages of instructions executed during compression while doing debug builds of some of the rustc-benchmarks with a stage1 compiler.
```
html5ever-2016-08-25 1.4% -> 0.7%
hyper.0.5.0 3.8% -> 2.4%
inflate-0.1.0 1.0% -> 0.5%
piston-image-0.10.3 2.9% -> 1.8%
regex.0.1.30 3.4% -> 2.1%
rust-encoding-0.3.0 4.8% -> 2.9%
syntex-0.42.2 2.9% -> 1.8%
syntex-0.42.2-incr-clean 14.2% -> 8.9%
```
The benchmarks omitted from the table spend 0% of their time in compression.
And here are actual timings:
```
futures-rs-test 4.110s vs 4.102s --> 1.002x faster (variance: 1.017x, 1.004x)
helloworld 0.223s vs 0.226s --> 0.986x faster (variance: 1.012x, 1.022x)
html5ever-2016- 4.218s vs 4.186s --> 1.008x faster (variance: 1.008x, 1.010x)
hyper.0.5.0 4.746s vs 4.661s --> 1.018x faster (variance: 1.002x, 1.016x)
inflate-0.1.0 4.194s vs 4.143s --> 1.012x faster (variance: 1.007x, 1.006x)
issue-32062-equ 0.317s vs 0.316s --> 1.001x faster (variance: 1.013x, 1.005x)
issue-32278-big 1.811s vs 1.825s --> 0.992x faster (variance: 1.014x, 1.006x)
jld-day15-parse 1.412s vs 1.412s --> 1.001x faster (variance: 1.019x, 1.008x)
piston-image-0. 11.058s vs 10.977s --> 1.007x faster (variance: 1.008x, 1.039x)
reddit-stress 2.331s vs 2.342s --> 0.995x faster (variance: 1.019x, 1.006x)
regex.0.1.30 2.294s vs 2.276s --> 1.008x faster (variance: 1.007x, 1.007x)
rust-encoding-0 1.963s vs 1.924s --> 1.020x faster (variance: 1.009x, 1.006x)
syntex-0.42.2 29.667s vs 29.391s --> 1.009x faster (variance: 1.002x, 1.023x)
syntex-0.42.2-i 15.257s vs 14.148s --> 1.078x faster (variance: 1.018x, 1.008x)
```
r? @alexcrichton
remove keys w/ skolemized regions from proj cache when popping skolemized regions
This addresses #37154 (a regression). The projection cache was incorrectly caching the results for skolemized regions -- when we pop skolemized regions, we are supposed to drop cache keys for them (just as we remove those skolemized regions from the region inference graph). This is because those skolemized region numbers will be reused later with different meaning (and we have determined that the old ones don't leak out in any meaningful way).
I did a *somewhat* targeted fix here, removing only the keys that mention the skolemized regions. One could imagine instead removing all keys added since we started the skolemization (as indeed I did in my initial commit). This more targeted fix required fixing a latent bug in `TypeFlags`, as an aside.
I believe the more targeted fix is correct; clearly there can be entries that are unrelated to the skolemized regions, and it's a shame to remove them. My one concern was that it *is* possible, I believe, to have some region variables that are created and related to skolemized regions, and maybe some of them could end up in the cache. However, that seems harmless enough to me: those relations will be removed, and they couldn't have impacted how trait resolution proceeded anyway (in other words, the cache entry is not wrong, though it is kind of useless).
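A simplified sketch of the retention idea with stand-in types (rustc's real projection cache is more involved and keyed differently): when a skolemization snapshot is popped, keep only the entries whose key mentions none of the region numbers created during it.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for rustc's projection cache key and value types.
#[derive(PartialEq, Eq, Hash)]
struct ProjectionKey {
    regions: Vec<u32>, // region numbers mentioned by this key (assumed encoding)
}

struct ProjectionCache {
    map: HashMap<ProjectionKey, String>,
}

impl ProjectionCache {
    /// Drop entries that mention any region created during the snapshot being
    /// popped (numbers >= `first_skolemized`). Those numbers will be reused
    /// later with a different meaning, so stale entries mentioning them would
    /// be misleading; unrelated entries are kept.
    fn pop_skolemized(&mut self, first_skolemized: u32) {
        self.map
            .retain(|key, _| key.regions.iter().all(|&r| r < first_skolemized));
    }
}

fn main() {}
```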
r? @pnkfelix
cc @arielb1
TRPL: guessing game: minor clarification
The original text is correct and exact, but might confuse a non-English speaker (at least I was confused), so I made it a bit more plain (see https://github.com/rust-lang/rust/issues/37307).
I know minor wording changes like these are affected by personal style, so I'd understand if you don't find this useful.
r? @steveklabnik
Add error explanations for E0182, E0230 and E0399
This PR adds some error descriptions requested in issue https://github.com/rust-lang/rust/issues/32777.
r? @GuillaumeGomez
Specifically, this adds descriptions for:
- E0182 - unexpected binding of associated item in expression path
- E0230 - missing type parameter from `on_unimplemented` description
- E0399 - overriding a trait type without re-implementing default methods
Every codegen unit gets its own local counter for generating new symbol
names. This makes bitcode and object files reproducible at the binary
level even when incremental compilation is used.
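A minimal sketch of the scheme with made-up names (not the actual rustc code): because the counter lives in the codegen unit rather than in shared global state, the names it generates depend only on that unit's contents and come out identical on every compile, including incremental ones.

```rust
// Each codegen unit owns its counter, so symbol generation is deterministic
// per unit and independent of how many other units were processed before it.
struct LocalSymbolNamer {
    cgu_name: String,
    counter: u64, // local to this codegen unit
}

impl LocalSymbolNamer {
    fn new(cgu_name: &str) -> Self {
        LocalSymbolNamer { cgu_name: cgu_name.to_string(), counter: 0 }
    }

    /// Produce a fresh, deterministic symbol name within this codegen unit.
    fn new_symbol(&mut self, kind: &str) -> String {
        let n = self.counter;
        self.counter += 1;
        format!("{}.{}.{}", self.cgu_name, kind, n)
    }
}

fn main() {
    let mut namer = LocalSymbolNamer::new("my_crate.cgu-0");
    assert_eq!(namer.new_symbol("glue"), "my_crate.cgu-0.glue.0");
    assert_eq!(namer.new_symbol("glue"), "my_crate.cgu-0.glue.1");
}
```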
Motivation: the `selectors` crate is generic over a string type,
in order to support all of `String`, `string_cache::Atom`, and
`gecko_string_cache::Atom`. Multiple trait bounds are used
for the various operations done with these strings.
One of these operations is creating a string (as efficiently as possible,
re-using an existing memory allocation if possible) from `Cow<str>`.
The `std::convert::From` trait seems natural for this, but
the relevant implementation was missing before this PR.
To work around this I’ve added a `FromCowStr` trait in `selectors`,
but with trait coherence that means one of `selectors` or `string_cache`
needs to depend on the other to implement this trait.
Using a trait from `std` would solve this.
The `Vec<T>` implementation is just added for consistency.
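Assuming the impls land with the obvious signatures (`From<Cow<'a, str>> for String` and `From<Cow<'a, [T]>> for Vec<T> where T: Clone`), here is a quick usage sketch of the conversions this enables:

```rust
use std::borrow::Cow;

fn main() {
    let borrowed: Cow<str> = Cow::Borrowed("hello");
    let owned: Cow<str> = Cow::Owned(String::from("world"));

    // Borrowed data gets cloned; already-owned data reuses its allocation.
    let a = String::from(borrowed);
    let b: String = owned.into();
    let v: Vec<u8> = Vec::from(Cow::Borrowed(&b"bytes"[..]));

    assert_eq!(a, "hello");
    assert_eq!(b, "world");
    assert_eq!(v, b"bytes".to_vec());
}
```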
I also tried a more general
`impl<'a, O, B: ?Sized + ToOwned<Owned=O>> From<Cow<'a, B>> for O`,
but (the compiler thinks?) it conflicts with the `From<T> for T` impl
(after moving all of `collections::borrow` into `core::borrow`
to work around trait coherence).
syntax: Tweak path parsing logic
Associated paths starting with `<<` are now parsed in patterns.
Paths like `self::foo::bar` are interpreted as paths and not as `self` arguments in methods (cc @matklad).
Now, I believe, *all* paths are consistently parsed greedily in case of ambiguity.
Detection of `&'a mut self::` requires a pretty large (but still fixed) lookahead, so I had to increase the size of the parser's lookahead buffer.
Curiously, if `lookahead_distance >= lookahead_buffer_size` was used, the parser previously hung forever; I fixed this as well, so it now ICEs instead.
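A hypothetical 2015-edition snippet (the trait and module names are illustrative, not from the patch) showing the `&'a mut self::` ambiguity described above: after those tokens the parser must look further ahead to tell a `self` receiver apart from an unnamed parameter whose type path starts with `self::`.

```rust
mod bar {
    pub struct Baz;
}

trait Foo<'a> {
    // A real `self` receiver:
    fn with_receiver(&'a mut self);
    // An unnamed parameter of type `self::bar::Baz` (2015 edition allows
    // unnamed trait-method parameters), which starts with the same tokens:
    fn with_path(&'a mut self::bar::Baz);
}

fn main() {}
```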
r? @jseyfried
macros: Future proof `#[no_link]`
This PR future-proofs `#[no_link]` for macro modularization (cc #35896).
First, we resolve all `#[no_link] extern crate`s. `#[no_link]` crates without `#[macro_use]` or `#[macro_reexport]` are not resolved today, so this is a [breaking-change]. For example:
```rust
#[no_link] extern crate some_unavailable_crate; //< Previously not resolved; now an unresolved-crate error (illustrative name).
```
Any breakage can be fixed by simply removing the `#[no_link] extern crate`.
Second, `#[no_link] extern crate`s will define an empty module in the type namespace to eventually allow importing the crate's macros with `use`. This is a [breaking-change], for example:
```rust
#[no_link] extern crate syntax;
mod syntax {} //< This becomes a duplicate error.
```
r? @nrc
Enable line number debuginfo in releases
This commit enables passing the `-C debuginfo=1` argument to the compiler by default for the stable, beta, and nightly release channels. A new configure option, `--enable-debuginfo-lines`, was also added to enable this behavior in developer builds as well.
Closes #36452