This commit disables all builders on Travis and almost all builders on
AppVeyor now that they're all running on Azure Pipelines. There is one
remaining builder on AppVeyor which hasn't been migrated yet due to a
test failure on Azure, which we'll be debugging soon. One builder is also
left on Travis: the tools builder that runs whenever a submodule is changed,
but we'll probably turn that off soon since it's only used for PRs.
The other major change in this PR is that the auto builders on Azure are
now configured with "real" prod credentials which should cause them to
publish all artifacts into the official CI buckets.
ci: Publish toolstate changes from Azure
This commit moves toolstate publishing from Travis to Azure. We've been
testing on Azure for some time now; this change works by deleting the Travis
config and updating the credentials used on Azure.
ci: Switch official `try` builds to happen on Azure
This commit switches the `try` builders to officially happen on Azure
Pipelines instead of Travis where they're currently run. This also cuts
back the number of builders to just the two we run on Travis, leaving
expansion as a possible future extension.
Since #61212 we've been timing out on OSX, and this looks to be because
we're building tools like Cargo and the RLS twice instead of once. This
turns out to be a slight bug in our configuration. CI builders using the
`RUST_CHECK_TARGET` directive execute `make all` just before their
actual target. In `make all` we're building a stage2 cargo, and
then in `make dist` we're building a stage1 cargo.
Other builders use `SCRIPT`, which provides explicit control over which
command (for example, an `x.py` invocation) is used to execute the build. This moves
almost all targets to using `SCRIPT` to ensure that we're explicitly
specifying what's being built where. Additionally this updates the logic
of `RUST_CHECK_TARGET` to remove the pre-flight tidy as well as the
pre-flight `make all`. The system LLVM builder (run on PRs) now
explicitly runs tidy first and then runs the rest of the test suite.
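For illustration, the difference between the two styles looks roughly like
this (the variable names match our CI config, but the values below are
simplified/hypothetical rather than copied from a real builder):

```sh
# Old style: a check target. The CI harness implicitly ran tidy and
# `make all` before this, which is where the duplicate Cargo build came from.
export RUST_CHECK_TARGET=dist

# New style: SCRIPT spells out the exact x.py invocations, so nothing extra
# runs. The system LLVM builder, for example, now runs tidy explicitly and
# then the rest of the test suite.
export SCRIPT="python2.7 ../x.py test src/tools/tidy && python2.7 ../x.py test"
```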
This should make it easier to identify what each job is doing when looking at the Travis or AppVeyor UI.
- Set `name` for each job in Travis.
- Move `CI_JOB_NAME` to the front in AppVeyor so that it appears first in the UI.
This commit adds opt-in support for linking the compiler against `jemalloc`.
When activated the compiler will depend on `jemalloc-sys`, instruct jemalloc
to unprefix its symbols, and then link to it. The feature is activated by
default on Linux/OSX compilers for x86_64/i686 platforms, and it's not
enabled anywhere else for now. We may be able to opt in other platforms in
the future! Also note that the opt-in only happens on CI; it's otherwise
unconditionally turned off by default.
Closes #36963
This commit moves a number of our encrypted credentials stored in
configuration files in this repository to env vars on the web UI. This
will hopefully make it easier to rotate credentials in the future as
well as quickly change them if the need arises (quicker than landing a
PR, that is).
This also updates the Travis deployment process to always use the `aws`
command line tool which we're already installing on Linux and should
enable us to avoid all `dpl` gem issues as well as have greater control
over what's going where.
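For reference, the deploy step with the `aws` CLI ends up being roughly
something like the following (the bucket name and local directory here are
placeholders, not the real deploy targets):

```sh
# Upload everything the build produced straight to S3 with the AWS CLI,
# instead of going through the dpl gem. Bucket/prefix are placeholders.
aws s3 cp --recursive --no-progress deploy/ "s3://example-rust-ci/$TRAVIS_COMMIT/"
```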
In one of my Travis builds, I was surprised to find that the gdb
pager was in use and caused Travis to time out. Adding `--batch`
to the gdb invocation will disable the pager. Note that the
`-ex q` is retained, to make sure gdb exits with status 0, just in
case `set -e` is in effect somehow.
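Concretely the change is along these lines (the debugger script and binary
here are just placeholders for whatever the test runner passes):

```sh
# Before: without --batch gdb can drop into its pager and wait for input,
# which eventually makes Travis time out.
gdb -quiet -ex "source debugger-script.gdb" -ex q ./test-binary

# After: --batch disables the pager and any interactive prompting, while
# -ex q is kept so gdb still exits with status 0 even if `set -e` is active.
gdb --batch -quiet -ex "source debugger-script.gdb" -ex q ./test-binary
```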
Commit 6c10142251 ("Update LLVM submodule") disabled the lldb build.
This patch updates the lldb and clang submodules to once again build
against the LLVM that is included in the Rust tree, and reverts the
.travis.yml changes from that patch.
This commit updates the LLVM submodule to the current trunk of LLVM itself. This
brings a few notable improvements for the wasm target:
* Support for wasm atomic instructions is greatly improved
* Renamed memory wasm intrinsics are fully supported
* LLD has fixed a quadratic execution bug with large numbers of relocations in
wasm files.
The compiler-rt submodule has been updated in tandem as well.
debug_assert to ensure that from_raw_parts is only used with properly aligned data
This does not help nearly as much as I would hope because everybody uses the distributed libstd which is compiled without debug assertions. For this reason, I am not sure if this is even worth it. OTOH, this would have caught the misalignment fixed by https://github.com/rust-lang/rust/issues/42789 *if* there had been any tests actually using ZSTs with alignment >1 (we have a CI runner which has debug assertions in libstd enabled), and it seems to currently [fail in the rg testsuite](https://ci.appveyor.com/project/rust-lang/rust/build/1.0.8403/job/v7dfdcgn8ay5j6sb). So maybe it is worth it, after all.
I have seen the attribute `#[rustc_inherit_overflow_checks]` in some places, does that make it so that the *caller's* debug status is relevant? Is there a similar attribute for `debug_assert!`? That could even subsume `rustc_inherit_overflow_checks`: Something like `rustc_inherit_debug_flag` could affect *all* places that change the generated code depending on whether we are in debug or release mode. In fact, given that we have to keep around the MIR for generic functions anyway, is there ever a reason *not* to handle the debug flag that way? I guess currently we apply debug flags like `cfg` so this is dropped early during the MIR pipeline?
EDIT: I learned from @eddyb that because of how `debug_assert!` works, this is not realistic. Well, we could still have it for the rustc CI runs and then maybe, eventually, when libstd gets compiled client-side and there is both a debug and a release build... then this will also benefit users.^^
This optionally adds lldb (and clang, which it needs) to the build.
Because Rust uses LLVM 7, and because clang 7 is not yet released, a
recent git master version of clang is used.
The lldb that is used includes the Rust plugin.
lldb is only built when asked for, or when doing a nightly build on
macOS. Only macOS is done for now due to difficulties with the Python
dependency.
Currently on CI we predominantly compile LLVM with the default system compiler
which means gcc on Linux, some version of Clang on OSX, MSVC on Windows, and
gcc on MinGW. This commit switches Linux, OSX, and Windows to all use Clang
6.0.0 to build LLVM (aka the C/C++ compiler as part of the bootstrap). This
looks to generate faster code according to #49879 which translates to a faster
rustc (as LLVM internally is faster).
The major changes here were to the containers that build Linux releases,
basically adding a new step that uses the previous gcc 4.8 compiler to compile
the next Clang 6.0.0 compiler. Otherwise the OSX and Windows scripts have been
updated to download precompiled versions of Clang 6 and configure the build to
use them.
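As a sketch (the install paths here are illustrative, not the exact ones used
on CI), the relevant scripts essentially just point the bootstrap at Clang
before building:

```sh
# Illustrative only: make the downloaded/just-built Clang 6.0.0 the C/C++
# compiler used for LLVM and everything else compiled through cc-rs.
export CC=/opt/clang-6.0.0/bin/clang
export CXX=/opt/clang-6.0.0/bin/clang++
```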
Note that `cc` was updated here to fix using `clang-cl` with `cc-rs` on MSVC,
and `sccache` was updated on Windows, which was needed to correctly work
with `clang-cl`. Finally the MinGW compiler is entirely left out here
intentionally as it's currently thought that Clang can't generate C++ code for
MinGW and we need to use gcc, but this should be verified eventually.
Ideally I'd like to soon enable sccache for rustbuild itself and some of the
stage0 tools, but for that to work we'll need better Rust support in sccache
than the pretty old version we were previously using provided!
This commit moves away from caching on Travis to our own caching on S3 for
caching docker layers between builds. Unfortunately the Travis caches have over
time had a few critical pain points:
* Caches are only updated for successful builds, meaning that if a build times
out or fails in a different location the successfully-created docker images
aren't always cached. While this makes sense as a general rule of caches it
hurts our use cases.
* Caches are per-branch and builder which means that we don't have a separate
cache on each release channel. All our merges go through the `auto` branch
which means that they're all sharing the same cache, even those for merging to
master/beta. This means that PRs which switch between master/beta will keep
rebuilding and having cache misses.
* Caches have historically been invalidated somewhat regularly, a little
more aggressively than we'd want (I think).
* We don't always need to update the contents of the cache if the Docker image
didn't change at all, and saving off the docker layers can sometimes be quite
expensive.
For all these reasons this commit drops the usage of Travis's built-in caching
support. Instead our own caching is used by storing blobs to S3. Normally this
would be a very risky endeavour but we're basically priming a cache for a cache
(docker) so if we get this wrong the failure mode is longer builds, not stale
caches. We'll notice that pretty quickly and hopefully fix it!
The logic here is inserted directly into the `src/ci/docker/run.sh` script to
download an image based on a shasum of the `Dockerfile` and other assorted files.
This blob, if found, is loaded into docker and we record what layers were
inserted. After docker finishes the build (hopefully quickly with lots of cache
hits) we then see the sha of the final image. If it's one of the layers we
loaded then there's no need to update the cache. Otherwise we upload our layers
to the global cache, possibly overwriting what we previously just downloaded.
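A heavily simplified sketch of that flow (the bucket name and file layout
here are invented for illustration; the real logic lives in
`src/ci/docker/run.sh`):

```sh
# Derive a cache key from the Dockerfile and the scripts it copies in.
key=$(sha256sum Dockerfile scripts/* | sha256sum | cut -d ' ' -f 1)
url="https://example-ci-cache.s3.amazonaws.com/docker/$key"   # placeholder bucket

# Try to seed docker's layer cache from a previously uploaded image.
loaded=""
if curl -sSf -o /tmp/image.tar.gz "$url"; then
  docker load -i /tmp/image.tar.gz
  loaded=$(docker images -q)          # remember which layers we pulled in
fi

docker build -t rust-ci .             # hopefully mostly cache hits now

# Only re-upload when the final image isn't one of the layers we just loaded.
final=$(docker images -q rust-ci | head -n 1)
if ! echo "$loaded" | grep -q "$final"; then
  docker save rust-ci | gzip > /tmp/image.tar.gz
  aws s3 cp /tmp/image.tar.gz "s3://example-ci-cache/docker/$key"
fi
```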
This is hopefully a step towards mitigating #49278 although it doesn't
completely fix it as it means we'll still probably have to retry builds that
bust the cache.