Use `SmallVector` in `CombineFields::instantiate`.
This avoids 4% of malloc calls when compiling
rustc-benchmarks/issue-32278-big-array-of-strings, and 1--2% for other
benchmarks. A small win, but an easy one.
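For illustration, here is a minimal sketch of the small-vector optimization this change relies on, using the external `smallvec` crate rather than rustc's internal `SmallVector` (the function and inline capacity below are made up):

```rust
use smallvec::SmallVec;

// Up to 4 elements are stored inline on the stack; only larger inputs
// spill to a heap allocation, so the common case avoids malloc entirely.
fn collect_fields(n: u32) -> SmallVec<[u32; 4]> {
    let mut fields: SmallVec<[u32; 4]> = SmallVec::new();
    for i in 0..n {
        fields.push(i);
    }
    fields
}
```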
Fix typo that resulted in comparison-to-self
This line was introduced in commit 843db01bd9, rebased from eddyb's c5f6f84d59b853b6a3485380721b71b479ece337.
I don't know whether this fixes anything in practice, but it seems like the changed version was what was intended originally.
r? @eddyb
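The actual change isn't reproduced here; purely as an illustration of this class of bug, with made-up names:

```rust
#[derive(PartialEq)]
struct Region(u32);

fn regions_match(expected: &Region, found: &Region) -> bool {
    // The typo compared a value against itself, which is trivially true:
    //     expected == expected
    expected == found
}
```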
libcore documentation for builtin macros
Fixes: #36272
Additionally, I've edited the docstring for `include!` a bit. (related PR #36404)
Unfortunately, there seems to be no sane way to re-export empty macro definitions just for their docstrings. To avoid copying the whole documentation for the builtin macros, I've only copied the descriptions and added links to the `std` macro pages.
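As a rough illustration of that approach (the macro name and wording below are made up, not the actual libcore text): a documentation-only stub carries a short description plus a link to the full `std` page.

```rust
/// Asserts that two expressions are equal to each other.
///
/// For more details, see the documentation for
/// [`std::assert_eq!`](https://doc.rust-lang.org/std/macro.assert_eq.html).
#[macro_export]
macro_rules! assert_eq_stub {
    ($left:expr, $right:expr) => {
        /* the real expansion lives in the builtin macro */
    };
}
```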
Implement `From<Cow<str>> for String` and `From<Cow<[T]>> for Vec<T>`.
Motivation: the `selectors` crate is generic over a string type, in order to support all of `String`, `string_cache::Atom`, and `gecko_string_cache::Atom`. Multiple trait bounds are used for the various operations done with these strings. One of these operations is creating a string (as efficiently as possible, re-using an existing memory allocation if possible) from `Cow<str>`.
The `std::convert::From` trait seems natural for this, but the relevant implementation was missing before this PR. To work around this I’ve added a `FromCowStr` trait in `selectors`, but with trait coherence that means one of `selectors` or `string_cache` needs to depend on the other to implement this trait. Using a trait from `std` would solve this.
The `Vec<T>` implementation is just added for consistency. I also tried a more general `impl<'a, O, B: ?Sized + ToOwned<Owned=O>> From<Cow<'a, B>> for O`, but (the compiler thinks?) it conflicts with the `From<T> for T` impl (after moving all of `collections::borrow` into `core::borrow` to work around trait coherence).
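A usage sketch of the conversions added here (both are equivalent to calling `Cow::into_owned`, so an already-owned `Cow` gives up its allocation instead of copying):

```rust
use std::borrow::Cow;

// Borrowed data is cloned; owned data is moved out without reallocating.
fn into_string(s: Cow<str>) -> String {
    String::from(s) // same result as s.into_owned()
}

fn into_vec(v: Cow<[u8]>) -> Vec<u8> {
    Vec::from(v)
}
```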
Avoid some allocations in the macro parser
These three commits reduce the number of heap allocations done when compiling rustc-benchmarks/html5ever-2016-08-25 by 20%, from 16.5M to 13.3M. This speeds up (debug) compilation of it with a stage1 compiler by about 7%.
Fix line stepping in debugger.
Attribute drop code to the block's closing brace, instead of the line where the allocation was done.
Attribute function epilogues to the function body's closing brace, rather than to the function header.
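A hypothetical example of what the new attribution means in practice (the type and function are made up):

```rust
struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        println!("cleanup");
    }
}

fn scope_demo() {
    let _g = Guard;       // previously, stepping attributed the drop here
    println!("body");
}                         // now the drop call and the function epilogue are
                          // attributed to this closing brace
```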
Fixes #37032
r? @michaelwoerister
The DirEntryExt::ino() implementation was omitted from the first
iteration of this patch, because a dependency needed to be
configured. The fix is straightforward enough.
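A usage sketch for the accessor in question, on Unix (the directory argument is just an example):

```rust
use std::fs;
use std::io;
use std::os::unix::fs::DirEntryExt;

fn print_inodes(dir: &str) -> io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        // ino() exposes the raw inode number recorded in the directory entry.
        println!("{}\t{}", entry.ino(), entry.file_name().to_string_lossy());
    }
    Ok(())
}
```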
Use a faster `deflate` setting
In #37086 we have considered various ideas for reducing the cost of LLVM bytecode compression. This PR implements the simplest of these: use a faster `deflate` setting. It's very simple and reduces the compression time by almost half while increasing the size of the resulting rlibs by only about 2%.
I looked at using zstd, which might be able to halve the compression time again. But integrating zstd is beyond my Rust FFI integration abilities at the moment -- it consists of a few dozen C files, has a non-trivial build system, etc. I decided it was worth getting a big chunk of the possible improvement with minimum effort.
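This is not the compiler's internal compression code, just a sketch of the trade-off using the external `flate2` crate: dropping from the best compression level to a fast one roughly halves compression time for a small increase in output size.

```rust
use std::io::{self, Write};

use flate2::write::ZlibEncoder;
use flate2::Compression;

// Compress `data` with either a fast or a thorough setting.
fn compress(data: &[u8], fast: bool) -> io::Result<Vec<u8>> {
    let level = if fast { Compression::fast() } else { Compression::best() };
    let mut encoder = ZlibEncoder::new(Vec::new(), level);
    encoder.write_all(data)?;
    encoder.finish()
}
```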
The following table shows the before and after percentages of instructions executed during compression while doing debug builds of some of the rustc-benchmarks with a stage1 compiler.
```
html5ever-2016-08-25        1.4% -> 0.7%
hyper.0.5.0                 3.8% -> 2.4%
inflate-0.1.0               1.0% -> 0.5%
piston-image-0.10.3         2.9% -> 1.8%
regex.0.1.30                3.4% -> 2.1%
rust-encoding-0.3.0         4.8% -> 2.9%
syntex-0.42.2               2.9% -> 1.8%
syntex-0.42.2-incr-clean   14.2% -> 8.9%
```
The omitted benchmarks spend 0% of their time in compression.
And here are actual timings:
```
futures-rs-test 4.110s vs 4.102s --> 1.002x faster (variance: 1.017x, 1.004x)
helloworld 0.223s vs 0.226s --> 0.986x faster (variance: 1.012x, 1.022x)
html5ever-2016- 4.218s vs 4.186s --> 1.008x faster (variance: 1.008x, 1.010x)
hyper.0.5.0 4.746s vs 4.661s --> 1.018x faster (variance: 1.002x, 1.016x)
inflate-0.1.0 4.194s vs 4.143s --> 1.012x faster (variance: 1.007x, 1.006x)
issue-32062-equ 0.317s vs 0.316s --> 1.001x faster (variance: 1.013x, 1.005x)
issue-32278-big 1.811s vs 1.825s --> 0.992x faster (variance: 1.014x, 1.006x)
jld-day15-parse 1.412s vs 1.412s --> 1.001x faster (variance: 1.019x, 1.008x)
piston-image-0. 11.058s vs 10.977s --> 1.007x faster (variance: 1.008x, 1.039x)
reddit-stress 2.331s vs 2.342s --> 0.995x faster (variance: 1.019x, 1.006x)
regex.0.1.30 2.294s vs 2.276s --> 1.008x faster (variance: 1.007x, 1.007x)
rust-encoding-0 1.963s vs 1.924s --> 1.020x faster (variance: 1.009x, 1.006x)
syntex-0.42.2 29.667s vs 29.391s --> 1.009x faster (variance: 1.002x, 1.023x)
syntex-0.42.2-i 15.257s vs 14.148s --> 1.078x faster (variance: 1.018x, 1.008x)
```
r? @alexcrichton
remove keys w/ skolemized regions from proj cache when popping skolemized regions
This addresses #37154 (a regression). The projection cache was incorrectly caching the results for skolemized regions -- when we pop skolemized regions, we are supposed to drop cache keys for them (just as we remove those skolemized regions from the region inference graph). This is because those skolemized region numbers will be reused later with different meaning (and we have determined that the old ones don't leak out in any meaningful way).
I did a *somewhat* targeted fix here of only removing the keys that mention the skolemized regions. One could imagine instead removing all keys added since we started the skolemization (as indeed I did in my initial commit). As an aside, this more targeted fix required fixing a latent bug in `TypeFlags`.
I believe the more targeted fix is correct; clearly there can be entries that are unrelated to the skolemized regions, and it's a shame to remove them. My one concern was that it *is*, I believe, possible to have some region variables that are created and related to skolemized regions, and maybe some of them could end up in the cache. However, that seems harmless enough to me -- those relations will be removed, and they couldn't have impacted how trait resolution proceeded anyway (in other words, the cache entry is not wrong, though it is kind of useless).
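A minimal sketch of the targeted invalidation described above; the real projection cache, its keys, and rustc's region representation are of course more involved than this.

```rust
use std::collections::HashMap;

type RegionId = u32;

// Hypothetical cache: each key records which regions its entry mentions.
struct ProjectionCache {
    entries: HashMap<Vec<RegionId>, bool>,
}

impl ProjectionCache {
    // When a batch of skolemized regions is popped, drop only the entries
    // whose keys mention one of those regions; unrelated entries survive,
    // since their region numbers will not be reused with a different meaning.
    fn pop_skolemized(&mut self, skolemized: &[RegionId]) {
        self.entries
            .retain(|regions, _| !regions.iter().any(|r| skolemized.contains(r)));
    }
}
```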
r? @pnkfelix
cc @arielb1
TRPL: guessing game: minor clarification
The original text is correct and exact, but might confuse a non-English speaker (at least I was confused), so I made it a bit more plain (see https://github.com/rust-lang/rust/issues/37307).
I know minor wording changes like these are affected by personal style, so I'd understand if you don't find this useful.
r? @steveklabnik