This commit moves away from Travis's built-in caching to our own caching on S3
for docker layers between builds. Unfortunately the Travis caches have over time
had a few critical pain points:
* Caches are only updated for successful builds, meaning that if a build times
out or fails at a different point, the successfully-created docker images aren't
always cached. While this makes sense as a general rule for caches, it hurts our
use case.
* Caches are per-branch and builder which means that we don't have a separate
cache on each release channel. All our merges go through the `auto` branch
which means that they're all sharing the same cache, even those for merging to
master/beta. This means that PRs which switch between master/beta will keep
rebuilding and hitting cache misses.
* Caches have historically been invalidated somewhat more aggressively than
we'd want (I think).
* We don't always need to update the contents of the cache if the Docker image
didn't change at all, and saving off the docker layers can sometimes be quite
expensive.
For all these reasons this commit drops the usage of Travis's built-in caching
support. Instead our own caching is used by storing blobs to S3. Normally this
would be a very risky endeavour but we're basically priming a cache for a cache
(docker) so if we get this wrong the failure mode is longer builds, not stale
caches. We'll notice that pretty quickly and hopefully fix it!
The logic here is inserted directly into the `src/ci/docker/run.sh` script to
download an image based on a shasum of the `Dockerfile` and other assorted files.
This blob, if found, is loaded into docker and we record what layers were
inserted. After docker finishes the build (hopefully quickly with lots of cache
hits) we then see the sha of the final image. If it's one of the layers we
loaded then there's no need to update the cache. Otherwise we upload our layers
to the global cache, possibly overwriting what we previously just downloaded.
This is hopefully a step towards mitigating #49278, although it doesn't
completely fix it, as we'll still probably have to retry builds that bust the
cache.
dpl 1.9.5 has been released, revert #49217.
dpl 1.9.5 has been released which includes travis-ci/dpl#789, so we could move back to the standard Travis settings before that `s3-eager-autoload` branch is removed.
fix vector fmin/fmax non-fast/fast intrinsics NaN handling
This bug shows up in release mode tests of `stdsimd`: https://github.com/rust-lang-nursery/stdsimd/pull/391 . The intrinsics are thoroughly tested there for roundoff errors, NaN, and overflow behavior.
The problem was that the non-fast intrinsics were specifying `NoNaNs == true`, which meant that they don't support NaNs. This is incorrect; the non-fast intrinsics should handle NaNs properly.
Also, the "fast" intrinsics were specifying `NoNaNs == false`, which meant that they support NaNs, and then fast-math probably disables this support again. This was not intended either.
I've added a comment specifying what the boolean flags do.
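For reference, a minimal scalar sketch (using `f32::min`/`f32::max` as an analog; the vector intrinsics themselves are only reachable through `stdsimd`) of the NaN behavior the non-fast intrinsics are expected to have:
```rust
// Scalar analog of the intended non-fast semantics: when exactly one operand
// is NaN, the other operand is returned; NaN only propagates if both are NaN.
fn main() {
    let nan = std::f32::NAN;
    assert_eq!(1.0f32.min(nan), 1.0);
    assert_eq!(nan.min(1.0f32), 1.0);
    assert_eq!(2.0f32.max(nan), 2.0);
    assert!(nan.max(nan).is_nan());
}
```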
whitelist every target feature for rustdoc
When we attempted to upstream https://github.com/rust-lang-nursery/stdsimd/pull/367, it failed to document on non-x86 targets because it made every intrinsic visible, even the ones for foreign arches. This change makes it so that whenever rustdoc asks for the target feature whitelist, it gets a list of every feature known to every arch in `rustc_trans/llvm_util.rs`.
Before pushing, I temporarily updated the `stdsimd` submodule to include the `doc(cfg)` change and generated documentation for `aarch64-unknown-linux-gnu`; it completed without a problem. The generated `core::arch` docs contained complete submodules for all the main arches.
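A rough sketch of the idea (hypothetical names and feature lists, not the actual `llvm_util.rs` code): when rustdoc is the caller, the whitelist becomes the union of every per-arch table instead of just the current target's.
```rust
// Hypothetical per-arch tables; the real ones live in rustc_trans/llvm_util.rs.
const X86_WHITELIST: &[&str] = &["sse2", "avx", "avx2"];
const ARM_WHITELIST: &[&str] = &["neon", "vfp4"];
const AARCH64_WHITELIST: &[&str] = &["neon", "fp"];

fn target_feature_whitelist(arch: &str, for_rustdoc: bool) -> Vec<&'static str> {
    if for_rustdoc {
        // rustdoc documents items for every arch, so expose every known feature.
        return X86_WHITELIST
            .iter()
            .chain(ARM_WHITELIST)
            .chain(AARCH64_WHITELIST)
            .cloned()
            .collect();
    }
    match arch {
        "x86" | "x86_64" => X86_WHITELIST.to_vec(),
        "arm" => ARM_WHITELIST.to_vec(),
        "aarch64" => AARCH64_WHITELIST.to_vec(),
        _ => Vec::new(),
    }
}
```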
Don't check interpret_interner when accessing a static to fix miri mutable statics
Mutable statics don't work in my PR to fix the standalone [miri](https://github.com/solson/miri), as `init_static` didn't get called when the `interpret_interner` already contained an entry for the static, which is always immutable.
cc solson/miri#364
Implement Chalk lowering rule "Implemented-From-Env"
This extends the Chalk lowering pass with the "Implemented-From-Env" rule for generating program clauses from a trait definition as part of #49177.
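Informally, the rule has this shape (a sketch of the rule's general form; `Foo` is just an illustrative trait):
```rust
// Given a trait definition like this (illustrative only)...
trait Foo<T> {}

// ...the Implemented-From-Env rule lowers it to the program clause:
//
//     forall<Self, T> { Implemented(Self: Foo<T>) :- FromEnv(Self: Foo<T>) }
//
// i.e. if the environment assumes `Self: Foo<T>`, then `Self: Foo<T>` holds.
```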
r? @nikomatsakis
Make compiletest do exact matching on triples
This avoids the issues of the previous substring matching, ensuring `ARCH_TABLE` and `OS_TABLE` will no longer contain redundant entries. Fixes #48893.
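For illustration, a hedged sketch (hypothetical table contents and helper) of the difference between substring matching and exact component matching:
```rust
// Hypothetical OS table; the real tables live in compiletest's util.rs.
const OS_TABLE: &[(&str, &str)] = &[
    ("linux", "linux"),
    ("darwin", "macos"),
    ("haiku", "haiku"),
];

fn get_os(triple: &str) -> &'static str {
    for &(triple_os, os) in OS_TABLE {
        // before: `triple.contains(triple_os)` could match inside an unrelated
        // component; after: compare against whole `-`-separated components.
        if triple.split('-').any(|component| component == triple_os) {
            return os;
        }
    }
    panic!("Cannot determine OS from triple `{}`", triple)
}
```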
r? @rkruppe
Allow test target to pass without installing
explicitly pass -L target-lib to rustdoc
on OpenBSD, without it, several tests fail with:
```
error[E0463]: can't find crate for `std`
```
Deprecate the AsciiExt trait in favor of inherent methods
The trait and some of its methods are stable and will remain.
Some of the newer methods are unstable and can be removed later.
Fixes https://github.com/rust-lang/rust/issues/39658
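A minimal example of the inherent methods that replace the trait import:
```rust
// No `use std::ascii::AsciiExt;` needed: these are inherent methods on
// `u8`, `char`, and `str` (and `[u8]`).
fn main() {
    assert!(b'A'.is_ascii());
    assert_eq!('a'.to_ascii_uppercase(), 'A');
    assert_eq!("HeLLo".to_ascii_lowercase(), "hello");
    assert!("abc".eq_ignore_ascii_case("ABC"));
}
```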
Convert SerializedDepGraph to be a struct-of-arrays
Fixes #47326
I did not try the "`mem::swap()` to avoid copying the arrays" idea because that would leave the DepGraph in an incorrect state, and that doesn't seem like a good idea to me.
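To illustrate the shape of the change (field names here are hypothetical, not the exact ones in the PR): instead of one `Vec` of per-node structs, the serialized graph stores parallel arrays indexed by node index, plus one flat edge list.
```rust
// Before: array-of-structs, one small `Vec` allocation per node.
struct NodeAos {
    fingerprint: u64,
    edges: Vec<u32>,
}
struct SerializedDepGraphAos {
    nodes: Vec<NodeAos>,
}

// After: struct-of-arrays, a handful of large allocations for the whole graph.
struct SerializedDepGraphSoa {
    fingerprints: Vec<u64>,
    edge_list_indices: Vec<(u32, u32)>, // (start, end) range into `edge_list_data`
    edge_list_data: Vec<u32>,
}
```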
r? @michaelwoerister
rustdoc: expose #[target_feature] attributes as doc(cfg) flags
This change exposes `#[target_feature(enable = "feat")]` attributes on an item as if they were also `#[doc(cfg(target_feature = "feat"))]` attributes. This gives them a banner on their documentation listing which feature is required to use the item. It also modifies the rendering code for doc(cfg) tags to handle `target_feature` tags. I made it print just the feature name on "short" printings (as in the function listing on a module page), and use "target feature `feat`" in the full banner on the item page itself.
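As a small illustration (hypothetical function; `avx2` is an x86 feature, so this only compiles for x86 targets):
```rust
// A function gated on a target feature...
#[target_feature(enable = "avx2")]
pub unsafe fn sum_avx2(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

// ...is now documented as if it also carried
// `#[doc(cfg(target_feature = "avx2"))]`, i.e. it gets a
// "target feature `avx2`" banner on its item page and a short "avx2" tag in
// the module's function listing.
```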
This way, the function listing in `std::arch` shows which feature is required for each function.
Add warning for invalid start of code blocks in rustdoc
Follow up of #48382.
Still two things to consider:
1. Adding a test for the rustdoc output (but where? In UI or in rustdoc tests?).
2. Try to fix the span issue.
r? @QuietMisdreavus
ci: Print out how long each step takes on CI
This commit updates CI configuration to inform rustbuild that it should print
out how long each step takes on CI. This'll hopefully allow us to track the
duration of steps over time and follow regressions a bit more closely (as well
as have closer analysis of differences between two builds).
cc #48829
Detect illegal hidden lifetimes in `impl Trait`
This branch fixes #46541 -- however, it presently doesn't build because it also *breaks* a number of existing usages of impl Trait. I'm opening it as a WIP for now, just because we want to move on impl Trait, but I'll try to fix the problem in a bit.
~~(The problem is due to the fact that we apparently infer stricter lifetimes in closures than we need to; for example, if you capture a variable of type `&'a &'b u32`, we will put *precisely* those lifetimes into the closure, even if the closure would be happy with `&'a &'a u32`. This causes the present change to affect things that are not invariant.)~~ fixed
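For a sense of what the check flags (illustrative example, not taken from the PR): a hidden type that captures a lifetime which does not appear in the `impl Trait` bounds.
```rust
// The returned closure captures `x: &'a u32`, but `'a` does not appear in the
// bounds of `impl Fn() -> u32`, so the lifetime is "hidden"; the new check
// rejects this unless the return type is written as `impl Fn() -> u32 + 'a`.
fn make_reader<'a>(x: &'a u32) -> impl Fn() -> u32 {
    move || *x
}
```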
r? @cramertj
Download the GCC artifacts from the HTTP server instead of FTP server.
Try to bring back the `dist-i686-linux` and `dist-x86_64-linux alt` builders, which mysteriously lost their cache 14 hours ago and are stuck, unable to download `mpfr-2.4.2.tar.bz2` since it keeps getting
```
==> PASV ... couldn't connect to 209.132.180.131 port 10058: Connection timed out
```
Unfortunately we don't have sufficient time to both rebuild the cache *and*
distribute everything in `dist-x86_64-linux alt`; the debug assertions are
really slow.
We will re-enable them after the PR has been successfully merged, thus
updating the cache (freeing up 40 minutes) and giving us enough time to
build these tools.
We used to make the upvar types in the closure `==` but that was
stronger than we needed. Subtyping suffices, since we are copying the
upvar value into the closure field. This in turn allows us to infer
smaller lifetimes in captured values in some cases (like the example
here), avoiding errors.
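A hedged illustration of the kind of capture involved (not the exact test from the commit):
```rust
// The closure captures `x: &'a &'b u32`. With equality, the closure field must
// be exactly `&'a &'b u32`; with subtyping, `&'a &'a u32` is also acceptable,
// because `&'a &'b u32 <: &'a &'a u32` whenever `'b: 'a`. That lets the
// compiler infer the smaller (captured) lifetime and accept more programs.
fn read_through<'a, 'b: 'a>(x: &'a &'b u32) -> u32 {
    let closure = move || **x;
    closure()
}
```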