Currently on CI we predominantly compile LLVM with the default system compiler,
which means gcc on Linux, some version of Clang on OSX, MSVC on Windows, and
gcc on MinGW. This commit switches Linux, OSX, and Windows to all use Clang
6.0.0 to build LLVM (aka the C/C++ compiler used during the bootstrap). Clang
appears to generate faster code according to #49879, which translates to a
faster rustc (as LLVM internally is faster).
The major changes here were to the containers that build the Linux releases,
adding a new step that uses the previous gcc 4.8 compiler to compile the new
Clang 6.0.0 compiler. Otherwise, the OSX and Windows scripts have been updated
to download precompiled versions of Clang 6 and configure the build to use
them.
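As a rough sketch (not the literal Dockerfile contents), the new container step looks something like this: build Clang 6.0.0 from the official release tarballs with the gcc 4.8 already in the image, then point later builds at the result. The `/rustroot` prefix and exact paths are illustrative.

```sh
# Fetch the LLVM and Clang 6.0.0 release sources.
curl -L https://releases.llvm.org/6.0.0/llvm-6.0.0.src.tar.xz | tar xJf -
curl -L https://releases.llvm.org/6.0.0/cfe-6.0.0.src.tar.xz | tar xJf -
mv cfe-6.0.0.src llvm-6.0.0.src/tools/clang

# Build Clang with the preexisting gcc 4.8.
mkdir clang-build && cd clang-build
cmake ../llvm-6.0.0.src \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_INSTALL_PREFIX=/rustroot \
    -DCMAKE_C_COMPILER=gcc \
    -DCMAKE_CXX_COMPILER=g++
make -j"$(nproc)" && make install

# Later steps build LLVM with the freshly built Clang.
export CC=/rustroot/bin/clang CXX=/rustroot/bin/clang++
```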
Note that `cc` was updated here to fix using `clang-cl` with `cc-rs` on MSVC,
and `sccache` was updated on Windows as well, which was needed for it to work
correctly with `clang-cl`. Finally, the MinGW compiler is intentionally left
out here, as it's currently thought that Clang can't compile the C++ code for
MinGW and we need to use gcc, but this should be verified eventually.
Ideally I'd like to soon enable sccache for rustbuild itself and some of the
stage0 tools, but for that to work we'll need better Rust support than the
pretty old sccache version we were previously using had!
We've made headway towards splitting the test suite across two appveyor
builders, and this moves one more test suite between builders. I've taken the
longest-running test suite from the last [failed build][fail] and moved it to
the secondary builder.
cc #48844
[fail]: https://ci.appveyor.com/project/rust-lang/rust/build/1.0.6782
This commit imports the LLD project from LLVM to serve as the default linker for
the `wasm32-unknown-unknown` target. The `binaryen` submodule is consequently
removed, along with "binaryen linker" support in rustc.
Moving to LLD brings with it a number of benefits for wasm code:
* LLD is itself an actual linker, so there's no need to compile all wasm code
with LTO any more. As a result builds should be *much* speedier as LTO is no
longer forcibly enabled for all builds of the wasm target.
* LLD is quickly becoming an "official solution" for linking wasm code together.
It is, I believe, intended to be the main supported linker for native code and
wasm moving forward. Picking up support early on should let us help LLD
identify bugs and otherwise prove that it works great for all our use cases!
* Improvements to the wasm toolchain are currently primarily focused around LLVM
and LLD (from what I can tell at least), so it's in general much better to be
on this bandwagon for bugfixes and new features.
* Historical "hacks" like `wasm-gc` will soon no longer be necessary, LLD
will [natively implement][gc] `--gc-sections` (better than `wasm-gc`!) which
means a postprocessor is no longer needed to show off Rust's "small wasm
binary size".
LLD is added in a pretty standard way to rustc right now. A new rustbuild target
was defined for building LLD, and this is executed when a compiler's sysroot is
being assembled. LLD is compiled against the LLVM that we've got in tree, which
means we're currently on the `release_60` branch, but this may get upgraded in
the near future!
LLD is placed into rustc's sysroot in a `bin` directory. This is similar to
where `gcc.exe` can be found on Windows. This directory is automatically added
to `PATH` whenever rustc executes the linker, allowing us to define a `WasmLd`
linker which implements the interface that `wasm-ld`, LLD's frontend, expects.
Like Emscripten, LLD is currently only enabled for Tier 1 platforms, notably
OSX/Windows/Linux, and will need to be installed manually when compiling to
wasm on other platforms. LLD is turned off by default in rustbuild and
requires a `config.toml` option to turn it on.
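Enabling it looks something like the following; the exact key name is my assumption, so check `config.toml.example` for the real option:

```toml
# Assumed option name: build LLD alongside LLVM and ship it in the sysroot.
[rust]
lld = true
```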
Finally the unstable `#![wasm_import_memory]` attribute was also removed as LLD
has a native option for controlling this.
[gc]: https://reviews.llvm.org/D42511
This commit introduces a separately compiled backend for Emscripten, avoiding
compiling the `JSBackend` target in the main LLVM codegen backend. This builds
on the foundation provided by #47671 to create a new codegen backend dedicated
solely to Emscripten, removing the `JSBackend` of the main codegen backend in
the process.
This commit adds a new field to each target that specifies the backend to use
for translation; the default is `llvm`, the main backend that we use. The
Emscripten targets specify the `emscripten` backend instead.
There's a whole bunch of consequences of this change, but I'll try to enumerate
them here:
* A *second* LLVM submodule was added in this commit. The main LLVM submodule
will soon start to drift from the Emscripten submodule, but currently they're
both at the same revision.
* Logic was added to rustbuild to *not* build the Emscripten backend by default.
This is gated behind a `--enable-emscripten` flag to the configure script (see
the sketch after this list). By default users should neither check out the
Emscripten submodule nor compile it.
* The `init_repo.sh` script was updated to fetch the Emscripten submodule from
GitHub the same way we do the main LLVM submodule (a tarball fetch).
* The Emscripten backend, turned off by default, is still turned on for a number
of targets on CI. We'll only be shipping an Emscripten backend with Tier 1
platforms, though. All cross-compiled platforms will not be receiving an
Emscripten backend yet.
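For reference, opting in looks something like the sketch below; the `--enable-emscripten` flag is the one mentioned in the list above, and the `x.py` invocation is just illustrative.

```sh
# Opt in to the Emscripten backend (off by default); only now should the
# emscripten submodule get checked out and compiled.
./configure --enable-emscripten
./x.py build
```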
This commit means that when you download the `rustc` package in Rustup for Tier
1 platforms you'll be receiving two trans backends, one for Emscripten and one
that's the general LLVM backend. If you never compile for Emscripten you'll
never use the Emscripten backend, so we may update this one day to only download
the Emscripten backend when you add the Emscripten target. For now though it's
just an extra 10MB gzip'd.
Closes #46819
If a PR intends to update a tool but the tool's test has failed, abort the
merge regardless of the current channel. This should help tool maintainers
when an update turns out to be failing due to changes in the latest master.
ci: Upload/download from a new S3 bucket
Moving buckets from us-east-1 to us-west-1 because us-west-1 is where
rust-central-station itself runs and in general is where we have all our other
buckets.
Most of the other rust-lang buckets are in us-west-1 and I think the original
bucket was just accidentally created in the us-east-1 region. Let's consolidate
by moving it to the same location as the rest of our buckets.
Fixes #43881. Reduces AppVeyor test time back to ~2 hours on
average.
The i586 libstd was never tested before Aug 13th, so this PR
brings the situation back to the previous status quo.
Now that the final bug fixes have been merged into sccache we can start
leveraging sccache on the MSVC builders on AppVeyor instead of relying on the
ad-hoc caching strategy of trigger files and whatnot.
This commit sort of brings back #40777 by upgrading back to 6.3.0. While
investigating #40546 it was discovered that 6.3.0 does not appear to
spuriously fail in the way that 6.2.0 (which we're currently using) does. The
workaround for #40184 contained in #40777 did not work, so this commit also
contains a different workaround for the gdb issue: we now download the 6.2.0
version of gdb and use that instead of the default version that comes with
6.3.0.
I'm going to optimistically say...
Closes #40546
LLVM 4.0 Upgrade
Since nobody has done this yet, I decided to get things started:
**Todo:**
* [x] push the relevant commits to `rust-lang/llvm` and `rust-lang/compiler-rt`
* [x] cleanup `.gitmodules`
* [x] Verify if there are any other commits from `rust-lang/llvm` which need backporting
* [x] Investigate / fix debuginfo ("`<optimized out>`") failures
* [x] Use correct emscripten version in docker image
---
Closes #37609.
---
**Test results:**
Everything is green 🎉
Re-enable appveyor cache
After breaking the queue last time, I'm cautiously back with a PR to re-enable caching on appveyor.
If you look at https://ci.appveyor.com/project/rust-lang/rust/build/1.0.2623/job/46o90by4ari6gege (one of the multiple runs that started failing), there are actually two errors: one for restoring the cache, and one right at the bottom for creating a directory. I only noticed the restore error at the time, as I was a bit rushed to revert and didn't stop to wonder why the build continued - it turns out appveyor [does not abort on cache restore failure](https://github.com/appveyor/ci/issues/723).
It turns out the cause of the build failures was the cache directory already existing, plus me thinking that because mkdir on windows is [recursive by default](http://stackoverflow.com/a/905239/2352259), it ignores the error if the directory already exists. Apparently this is not true, so now we check if the directory exists before attempting to create it.
In addition, I've added some more paranoia to double check everything is sane.
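The directory-creation fix boils down to a guarded create, sketched here in shell for illustration (`CACHE_DIR` is a stand-in, not the literal variable in the appveyor script):

```sh
# Only create the cache directory when it's missing; mkdir on Windows
# errors out if the directory already exists.
[ -d "$CACHE_DIR" ] || mkdir "$CACHE_DIR"
```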
I've tracked down what I believe is the last spurious sccache failure on #40240
to behavior in mio (carllerche/mio#583), and this commit updates the binaries to
a version which has that fix incorporated.
Attempt to cache git modules
Partial resolution of #40772; appveyor remains to be done once travis looks like it's working OK.
The approach in this PR is based on the `--reference` flag to `git clone`/`git submodule update` and is a compromise based on the current limitations of the tools we're using.
The ideal would be:
1. have a cached pristine copy of rust-lang/rust master in `$HOME/rustsrc` with all submodules initialised
2. clone the PR branch with `git clone --recurse-submodules --reference $HOME/rustsrc git@github.com:rust-lang/rust.git`
This would (in the nonexistent ideal world) use the pristine copy as an object cache for the top level repo and all submodules, transferring over the network only the changes on the branch. Unfortunately, a) there is no way to manually control the initial clone with travis and b) even if there was, cloned submodules don't use the submodules of the reference as an object cache. So the steps we end up with are:
1. have a cached pristine copy of rust-lang/rust master in `$HOME/rustsrc` with all submodules initialised
2. have a cloned PR branch
3. extract the path of each submodule, and explicitly `git submodule update --init --reference $HOME/rustsrc/$module $module` (i.e. point directly to the location of the pristine submodule repo) for each one
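Step 3 boils down to something like the following sketch, where `$HOME/rustsrc` is the cached pristine checkout and the module list is read out of `.gitmodules`:

```sh
# For each submodule path listed in .gitmodules, update it using the
# matching submodule of the pristine cache as an object reference.
for module in $(git config --file .gitmodules --get-regexp path | awk '{print $2}'); do
    git submodule update --init --reference "$HOME/rustsrc/$module" "$module"
done
```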
I've also taken some care to make this forward compatible, both for adding and removing submodules.
r? @alexcrichton
It looks like the 6.3.0 MinGW comes with a gdb which has issues (#40184) that an
attempted workaround (#40777) does not actually fix (#40835). The original
motivation for upgrading MinGW was to fix build flakiness (#40546), as newer
builds did not exhibit the same bug, so let's hope that 6.2.0 isn't too far back
in time and still contains the fix we need.
Closes #40835
appveyor: Upgrade MinGW toolchains we use
In debugging #40546 I was finally able to reproduce the failure locally using
the literal toolchain that the bots were using. I reproduced the error in
maybe 4 out of 10 builds. I also have the 6.3.0 toolchain installed through
`pacman`, which has yet to have a failed build.
When attempting to reproduce the bug with the toolchain that this commit
switches to, I was unable to reproduce anything after a few builds. I have no
idea what the original problem was, but I'm hoping that it was just some random
bug that got fixed somewhere along the way.
I don't currently know of a technical reason to stick to the 4.9.2 toolchains we
were previously using. Historical 5.3.* toolchains would cause LLVM to segfault
(maybe a miscompile?), but this seems to have been fixed recently. To me, if it
passes CI then we're good.
Closes #40546
This was recently implemented (appveyor/ci#1387) in response to one of our
feature requests, so let's take advantage of it! I'm going to optimistically
say...
Closes #39074