Implement Iterator::fold for .chain(), .cloned(), .map() and the VecDeque iterators.
Chain can do something interesting here: it passes the fold on to
its inner iterators.
This lets the underlying iterators' custom fold() implementations be
used, and skips the regular chain logic in next().
Also implement .fold() specifically for .map() and .cloned() so that any
inner fold improvements are available through map and cloned.
In the same way, a VecDeque iterator's fold can be turned into two slice folds.
These changes lend the power of the slice iterator's loop codegen to
VecDeque, and to chains of slice iterators, and so on.
It's an improvement for .sum() and .product(), and other uses of fold.
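A minimal sketch of the delegation idea, using a hypothetical `MyChain` adapter rather than the real standard-library `Chain` (the actual implementation differs in its details):

```rust
// Illustrative sketch only: a chain-like adapter whose fold() delegates to
// the folds of both inner iterators instead of going through its own
// next() state machine.
struct MyChain<A, B> {
    first: Option<A>,
    second: Option<B>,
}

impl<A, B> Iterator for MyChain<A, B>
where
    A: Iterator,
    B: Iterator<Item = A::Item>,
{
    type Item = A::Item;

    fn next(&mut self) -> Option<Self::Item> {
        if let Some(first) = &mut self.first {
            if let Some(item) = first.next() {
                return Some(item);
            }
        }
        // The first iterator is exhausted (or absent); move on to the second.
        self.first = None;
        self.second.as_mut()?.next()
    }

    fn fold<Acc, F>(self, init: Acc, mut f: F) -> Acc
    where
        F: FnMut(Acc, Self::Item) -> Acc,
    {
        let mut acc = init;
        // Each inner fold() may itself be specialized (e.g. the slice
        // iterator's tight loop), so delegating preserves that codegen.
        if let Some(first) = self.first {
            acc = first.fold(acc, &mut f);
        }
        if let Some(second) = self.second {
            acc = second.fold(acc, &mut f);
        }
        acc
    }
}
```

VecDeque's iterator applies the same trick by folding over its two internal slices in turn, which is how the slice iterator's loop codegen carries over to VecDeque and to chains of slice iterators.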
Add ArrayVec and AccumulateVec to reduce heap allocations during interning of slices
Updates `mk_tup`, `mk_type_list`, and `mk_substs` to allow interning directly from iterators. The previous PR, #37220, changed some of the calls to pass a borrowed slice from `Vec` instead of directly passing the iterator, and these changes further optimize that to avoid the allocation entirely.
This change yields 50% fewer malloc calls in [some cases](https://pastebin.mozilla.org/8921686). It also yields decent, though not amazing, performance improvements:
```
futures-rs-test 4.091s vs 4.021s --> 1.017x faster (variance: 1.004x, 1.004x)
helloworld 0.219s vs 0.220s --> 0.993x faster (variance: 1.010x, 1.018x)
html5ever-2016- 3.805s vs 3.736s --> 1.018x faster (variance: 1.003x, 1.009x)
hyper.0.5.0 4.609s vs 4.571s --> 1.008x faster (variance: 1.015x, 1.017x)
inflate-0.1.0 3.864s vs 3.883s --> 0.995x faster (variance: 1.232x, 1.005x)
issue-32062-equ 0.309s vs 0.299s --> 1.033x faster (variance: 1.014x, 1.003x)
issue-32278-big 1.614s vs 1.594s --> 1.013x faster (variance: 1.007x, 1.004x)
jld-day15-parse 1.390s vs 1.326s --> 1.049x faster (variance: 1.006x, 1.009x)
piston-image-0. 10.930s vs 10.675s --> 1.024x faster (variance: 1.006x, 1.010x)
reddit-stress 2.302s vs 2.261s --> 1.019x faster (variance: 1.010x, 1.026x)
regex.0.1.30 2.250s vs 2.240s --> 1.005x faster (variance: 1.087x, 1.011x)
rust-encoding-0 1.895s vs 1.887s --> 1.005x faster (variance: 1.005x, 1.018x)
syntex-0.42.2 29.045s vs 28.663s --> 1.013x faster (variance: 1.004x, 1.006x)
syntex-0.42.2-i 13.925s vs 13.868s --> 1.004x faster (variance: 1.022x, 1.007x)
```
We implement a small-size optimized vector, intended primarily for collecting from iterators that are presumed to be short. This vector cannot be "upsized"/reallocated into a heap-allocated vector once built, since that would require (slow) branching logic, but heap allocation is possible during the initial collection from an iterator.
We make the new `AccumulateVec` and `ArrayVec` generic over implementors of the `Array` trait, of which there is currently one, `[T; 8]`. In the future, this is likely to expand to other values of N.
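A hedged sketch of the collection strategy (simplified names, and `Option`-based inline storage instead of the real uninitialized storage; this is not the actual rustc code):

```rust
enum SmallAccumulateVec<T> {
    // Up to 8 elements stored inline; `len` counts the initialized slots.
    Array { data: [Option<T>; 8], len: usize },
    // Fallback chosen once, during collection, if the iterator is longer.
    Heap(Vec<T>),
}

impl<T> SmallAccumulateVec<T> {
    fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Self {
        let mut iter = iter.into_iter();
        let mut data: [Option<T>; 8] = Default::default();
        let mut len = 0;
        while len < data.len() {
            match iter.next() {
                Some(item) => {
                    data[len] = Some(item);
                    len += 1;
                }
                // Everything fit inline: no heap allocation at all.
                None => return SmallAccumulateVec::Array { data, len },
            }
        }
        // The inline array is full; spill to a Vec only if more items remain.
        match iter.next() {
            None => SmallAccumulateVec::Array { data, len },
            Some(extra) => {
                let mut vec: Vec<T> =
                    data.iter_mut().map(|slot| slot.take().unwrap()).collect();
                vec.push(extra);
                vec.extend(iter);
                SmallAccumulateVec::Heap(vec)
            }
        }
    }

    fn get(&self, index: usize) -> Option<&T> {
        match self {
            SmallAccumulateVec::Array { data, len } => {
                if index < *len { data[index].as_ref() } else { None }
            }
            SmallAccumulateVec::Heap(vec) => vec.get(index),
        }
    }
}

fn main() {
    let short = SmallAccumulateVec::from_iter(0..3); // stays inline
    let long = SmallAccumulateVec::from_iter(0..100); // spills during collection
    assert_eq!(short.get(2), Some(&2));
    assert_eq!(short.get(3), None);
    assert_eq!(long.get(99), Some(&99));
}
```

The branch between inline and heap storage is decided once, during collection, which is why the result never needs to be "upsized" afterwards.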
Huge thanks to @nnethercote for collecting the performance and other statistics mentioned above.
Check target ABI support
This PR checks for each extern function / block whether the ABI / calling convention used is supported by the current target.
This was achieved by adding an `abi_blacklist` field to the target specifications, listing the calling conventions unsupported for that target.
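A purely illustrative sketch of the mechanism, with hypothetical types and list contents rather than rustc's actual internals:

```rust
// The target spec carries a list of calling conventions it cannot support,
// and each `extern "..."` function or block is checked against that list.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Abi {
    C,
    Msp430Interrupt,
}

struct TargetSpec {
    name: &'static str,
    // Mirrors the idea of the `abi_blacklist` field in the target spec.
    abi_blacklist: &'static [Abi],
}

fn check_abi(target: &TargetSpec, abi: Abi) -> Result<(), String> {
    if target.abi_blacklist.contains(&abi) {
        Err(format!(
            "ABI `{:?}` is not supported for target `{}`",
            abi, target.name
        ))
    } else {
        Ok(())
    }
}

fn main() {
    // Hypothetical example; the real blacklist contents live in the
    // per-target specification files.
    let target = TargetSpec {
        name: "x86_64-unknown-linux-gnu",
        abi_blacklist: &[Abi::Msp430Interrupt],
    };
    assert!(check_abi(&target, Abi::C).is_ok());
    assert!(check_abi(&target, Abi::Msp430Interrupt).is_err());
}
```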
Disallow Unsized Enums
Fixes #16812.
This PR is a potential fix for #16812, an issue which is reported [again](https://github.com/rust-lang/rust/issues/36801) and [again](https://github.com/rust-lang/rust/issues/36975), with over a dozen duplicates by now.
This PR is mainly meant to promote discussion about the issue and the correct way to fix it.
This is a [breaking-change] since the error is now reported during wfchecking, so that even the definition of a (potentially) unsized enum will cause an error (whereas it would previously cause an ICE at trans time if the enum was used in an unsized manner).
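For illustration, the kind of definition that is now rejected up front (the snippet intentionally does not compile):

```rust
// With this change the error is reported when checking the definition
// itself, instead of an ICE later in trans when the enum is used in an
// unsized manner.
enum UnsizedEnum {
    Sized(u8),
    Unsized([u8]), // rejected: `[u8]` has no statically known size
}

fn main() {}
```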
Link to PathBuf from the Path docs
I got stuck trying to use `Path` when `PathBuf` was what I needed. Hopefully this makes `PathBuf` and the module docs a bit easier to find for others.
r? @steveklabnik
debuginfo: Use TypeIdHasher for generating global debuginfo type IDs.
The only requirement for debuginfo type IDs is that they are globally unique. The `TypeIdHasher` (which is used for `std::intrinsics::type_id()`) provides that, so we can get rid of some redundancy by re-using it for debuginfo. Values produced by the `TypeIdHasher` are also more stable than those of the current `UniqueTypeId` generation algorithm, which incorporates `NodeId`s -- not good for incremental compilation.
@alexcrichton @eddyb: Could you take a look at the endianness adaptations that I made to the `TypeIdHasher`?
Also, are we sure that a 64 bit hash is wide enough for something that is supposed to be globally unique? For debuginfo I'm using 160 bits to make sure that we don't run into conflicts there.
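To frame the width question, a minimal back-of-the-envelope birthday-bound sketch, assuming the hash output is uniformly distributed:

```rust
// Illustrative estimate only: with an n-bit hash, collisions become likely
// once roughly 2^(n/2) distinct values have been hashed.
fn collision_probability(n_bits: i32, num_values: f64) -> f64 {
    let space = 2f64.powi(n_bits);
    // Birthday approximation: P ≈ 1 - exp(-k^2 / (2 * 2^n))
    1.0 - (-(num_values * num_values) / (2.0 * space)).exp()
}

fn main() {
    // 64-bit IDs at 4e9 hashed types: prints ~0.35.
    println!("{}", collision_probability(64, 4.0e9));
    // 160-bit IDs at the same count: prints 0 (the true value, ~5e-30,
    // is far below f64 precision).
    println!("{}", collision_probability(160, 4.0e9));
}
```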