This commit updates the LLVM submodule to current LLVM trunk. This
brings a few notable improvements for the wasm target:
* Support for wasm atomic instructions is greatly improved
* The renamed wasm memory intrinsics are fully supported
* LLD has fixed a bug causing quadratic execution time with large numbers of
  relocations in wasm files.
The compiler-rt submodule has been updated in tandem.
debug_assert to ensure that from_raw_parts is only used with properly aligned pointers
This does not help nearly as much as I would hope, because everybody uses the distributed libstd, which is compiled without debug assertions. For this reason, I am not sure this is even worth it. OTOH, this would have caught the misalignment fixed by https://github.com/rust-lang/rust/issues/42789 *if* there had been any tests actually using ZSTs with alignment >1 (we do have a CI runner with debug assertions enabled in libstd), and it currently seems to [fail in the rg testsuite](https://ci.appveyor.com/project/rust-lang/rust/build/1.0.8403/job/v7dfdcgn8ay5j6sb). So maybe it is worth it, after all.
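For illustration, here is a minimal sketch of the kind of check this adds. The wrapper name is made up; the real assertion sits inside `slice::from_raw_parts` itself:

```rust
use std::{mem, slice};

/// Illustrative wrapper (not the actual libcore code): with debug
/// assertions enabled, a null or misaligned pointer trips the assert
/// instead of silently producing an invalid slice.
unsafe fn checked_from_raw_parts<'a, T>(data: *const T, len: usize) -> &'a [T] {
    debug_assert!(
        !data.is_null() && data as usize % mem::align_of::<T>() == 0,
        "from_raw_parts called with a null or misaligned pointer"
    );
    slice::from_raw_parts(data, len)
}
```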
I have seen the attribute `#[rustc_inherit_overflow_checks]` in some places, does that make it so that the *caller's* debug status is relevant? Is there a similar attribute for `debug_assert!`? That could even subsume `rustc_inherit_overflow_checks`: Something like `rustc_inherit_debug_flag` could affect *all* places that change the generated code depending on whether we are in debug or release mode. In fact, given that we have to keep around the MIR for generic functions anyway, is there ever a reason *not* to handle the debug flag that way? I guess currently we apply debug flags like `cfg` so this is dropped early during the MIR pipeline?
EDIT: I learned from @eddyb that because of how `debug_assert!` works, this is not realistic. Well, we could still have it for the rustc CI runs and then maybe, eventually, when libstd gets compiled client-side and there is both a debug and a release build... then this will also benefit users.^^
This optionally adds lldb (and clang, which it needs) to the build.
Because Rust uses LLVM 7, and because Clang 7 is not yet released, a
recent git master version of Clang is used.
The lldb that is used includes the Rust plugin.
lldb is only built when asked for, or when doing a nightly build on
macOS. Only macOS is done for now due to difficulties with the Python
dependency.
Currently on CI we predominantly compile LLVM with the default system compiler,
which means gcc on Linux, some version of Clang on OSX, MSVC on Windows, and
gcc on MinGW. This commit switches Linux, OSX, and Windows to all use Clang
6.0.0 to build LLVM (i.e. as the C/C++ compiler used during the bootstrap). This
looks to generate faster code according to #49879, which translates to a faster
rustc (as LLVM internally is faster).
The major changes here were to the containers that build Linux releases,
basically adding a new step that uses the previous gcc 4.8 compiler to compile
the next Clang 6.0.0 compiler. Otherwise the OSX and Windows scripts have been
updated to download precompiled versions of Clang 6 and configure the build to
use them.
Note that `cc` was updated here to fix using `clang-cl` with `cc-rs` on MSVC,
and `sccache` was updated on Windows as needed to work correctly with
`clang-cl`. Finally, the MinGW compiler is intentionally left out here
entirely, as it's currently thought that Clang can't compile C++ code for
MinGW and we need to use gcc, but this should be verified eventually.
Ideally I'd like to soon enable sccache for rustbuild itself and some of the
stage0 tools, but for that to work we'll need some better Rust support than the
pretty old version we were previously using!
This commit moves away from Travis's built-in caching to our own S3-based
caching of docker layers between builds. Unfortunately the Travis caches have
over time had a few critical pain points:
* Caches are only updated for successful builds, meaning that if a build times
  out or fails in a different location, the successfully-created docker images
  aren't always cached. While this makes sense as a general rule for caches, it
  hurts our use case.
* Caches are per-branch and per-builder, which means that we don't have a
  separate cache on each release channel. All our merges go through the `auto`
  branch, which means that they're all sharing the same cache, even those for
  merging to master/beta. This means that PRs which switch between master/beta
  will keep rebuilding and having cache misses.
* Caches have historically been invalidated somewhat regularly, and a little
  more aggressively than we'd want (I think).
* We don't always need to update the contents of the cache if the Docker image
didn't change at all, and saving off the docker layers can sometimes be quite
expensive.
For all these reasons this commit drops the usage of Travis's built-in caching
support. Instead our own caching is used by storing blobs to S3. Normally this
would be a very risky endeavour but we're basically priming a cache for a cache
(docker) so if we get this wrong the failure mode is longer builds, not stale
caches. We'll notice that pretty quickly and hopefully fix it!
The logic here is inserted directly into the `src/ci/docker/run.sh` script to
download an image based on a shasum of the `Dockerfile` and other assorted files.
This blob, if found, is loaded into docker and we record what layers were
inserted. After docker finishes the build (hopefully quickly with lots of cache
hits) we then see the sha of the final image. If it's one of the layers we
loaded then there's no need to update the cache. Otherwise we upload our layers
to the global cache, possibly overwriting what we previously just downloaded.
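As a rough sketch of that flow (the real logic is shell in `run.sh`; this Rust rendering, the bucket URL, and the function name are made up for illustration):

```rust
use std::process::Command;

/// Hypothetical rendering of the caching flow in `run.sh`.
fn build_with_s3_cache(dockerfile_hash: &str, image_tag: &str) {
    let blob = format!("{}.tar", dockerfile_hash);
    let url = format!("https://example-bucket.s3.amazonaws.com/docker/{}", blob);

    // 1. Try to fetch cached layers keyed by the shasum of the
    //    Dockerfile (and other assorted files); a miss is not fatal.
    let hit = Command::new("curl")
        .args(["-sSfL", "-o", blob.as_str(), url.as_str()])
        .status()
        .map_or(false, |s| s.success());

    // 2. Load the cached layers so `docker build` can reuse them.
    if hit {
        Command::new("docker")
            .args(["load", "-i", blob.as_str()])
            .status()
            .expect("failed to run docker load");
    }

    // 3. Run the build; with a warm cache most layers should be hits.
    Command::new("docker")
        .args(["build", "-t", image_tag, "."])
        .status()
        .expect("failed to run docker build");

    // 4. If the final image's sha matches one of the layers we loaded,
    //    the cache is already current; otherwise `docker save` the
    //    layers and upload them, overwriting the previous blob
    //    (upload elided here).
}
```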
This is hopefully a step towards mitigating #49278 although it doesn't
completely fix it as it means we'll still probably have to retry builds that
bust the cache.
Some minor CI changes
1. On macOS, ensure crash log printing won't error, and that only real crash logs are printed. This may avoid the `find` process exiting abnormally and truncating the Travis log (I guess).
2. Print `/proc/cpuinfo` and `/proc/meminfo`, to determine whether there's any variation in the reported clock rate between jobs.
travis: Upgrade OSX builders
This upgrades the OSX builders to the `xcode9.3-moar` image which has 3 cores as
opposed to the 2 that our builders currently have. Should help make those OSX
builds a bit speedier!
This commit upgrades the dist builders for OSX to Travis's new `xcode9.3-moar`
image which has 3 cores available to it instead of 2. This should help us
provide speedier builds on OSX and hit timeouts less in theory!
Note that historically the dist builders for OSX have been a different version
than the ones that are running tests. I had forgotten why this was the case and
digging around brought up 307615567 where apparently Xcode 8 wasn't able to
compile LLVM with `MACOSX_DEPLOYMENT_TARGET=10.7` which we desired. On a whim I
gave this PR a spin and it [looks like][green] this has since been fixed (maybe
in LLVM?). In any case those green builds should hopefully mean that we can
safely upgrade and get faster infrastructure to boot.
This commit also includes an upgrade of OpenSSL. This is not done for security
reasons but rather for build system reasons. Originally, builds with the new
image [did not succeed][red] due to weird build failures in OpenSSL, but
upgrading seems to have made the spurious errors go away, so here's hoping
that's fixed too!
[green]: https://travis-ci.org/rust-lang/rust/builds/351353412
[red]: https://travis-ci.org/rust-lang/rust/builds/350969248
This commit imports the LLD project from LLVM to serve as the default linker for
the `wasm32-unknown-unknown` target. The `binaryen` submodule is consequently
removed, along with "binaryen linker" support in rustc.
Moving to LLD brings with it a number of benefits for wasm code:
* LLD is itself an actual linker, so there's no need to compile all wasm code
with LTO any more. As a result builds should be *much* speedier as LTO is no
longer forcibly enabled for all builds of the wasm target.
* LLD is quickly becoming an "official solution" for linking wasm code together.
This, I believe at least, is intended to be the main supported linker for
native code and wasm moving forward. Picking up support early on should help
ensure that we can help LLD identify bugs and otherwise prove that it works
great for all our use cases!
* Improvements to the wasm toolchain are currently primarily focused around LLVM
and LLD (from what I can tell at least), so it's in general much better to be
on this bandwagon for bugfixes and new features.
* Historical "hacks" like `wasm-gc` will soon no longer be necessary, LLD
will [natively implement][gc] `--gc-sections` (better than `wasm-gc`!) which
means a postprocessor is no longer needed to show off Rust's "small wasm
binary size".
LLD is added in a pretty standard way to rustc right now. A new rustbuild target
was defined for building LLD, and this is executed when a compiler's sysroot is
being assembled. LLD is compiled against the LLVM that we've got in tree, which
means we're currently on the `release_60` branch, but this may get upgraded in
the near future!
LLD is placed into rustc's sysroot in a `bin` directory. This is similar to
where `gcc.exe` can be found on Windows. This directory is automatically added
to `PATH` whenever rustc executes the linker, allowing us to define a `WasmLd`
linker which implements the interface that `wasm-ld`, LLD's frontend, expects.
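As a hedged sketch of the shape this takes (the struct, fields, and methods below are assumptions for illustration, not the actual rustc code):

```rust
use std::path::{Path, PathBuf};
use std::process::Command;

/// Illustrative shape of a `WasmLd`-style linker wrapper: rustc locates
/// `wasm-ld` in the sysroot's `bin` directory (the one added to PATH)
/// and drives it like any other linker.
struct WasmLd {
    cmd: Command,
}

impl WasmLd {
    fn new(sysroot: &Path) -> WasmLd {
        let lld: PathBuf = sysroot.join("bin").join("wasm-ld");
        WasmLd { cmd: Command::new(lld) }
    }

    fn add_object(&mut self, obj: &Path) {
        self.cmd.arg(obj);
    }

    fn gc_sections(&mut self) {
        // LLD implements --gc-sections natively for wasm, which is what
        // replaces the external `wasm-gc` postprocessing step.
        self.cmd.arg("--gc-sections");
    }
}
```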
Like Emscripten, LLD is currently only enabled for Tier 1 platforms (notably
OSX/Windows/Linux) and will need to be installed manually when compiling to
wasm on other platforms. LLD is turned off by default in rustbuild and
requires a `config.toml` option to turn it on.
Finally the unstable `#![wasm_import_memory]` attribute was also removed as LLD
has a native option for controlling this.
[gc]: https://reviews.llvm.org/D42511
This commit introduces a separately compiled backend for Emscripten, avoiding
compiling the `JSBackend` target in the main LLVM codegen backend. This builds
on the foundation provided by #47671 to create a new codegen backend dedicated
solely to Emscripten, removing the `JSBackend` of the main codegen backend in
the process.
This commit adds a new field to each target which specifies the backend to use
for translation; the default is `llvm`, the main backend that we use. The
Emscripten targets specify an `emscripten` backend instead of the main `llvm`
one.
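Roughly, the shape of the change looks like this (the struct and field names below are assumptions, not the actual rustc definitions):

```rust
/// Illustrative subset of a target specification; only the new
/// backend field is of interest here.
struct TargetOptions {
    /// Which codegen backend translates code for this target.
    codegen_backend: String,
}

impl Default for TargetOptions {
    fn default() -> TargetOptions {
        // Most targets use the main LLVM backend.
        TargetOptions { codegen_backend: "llvm".to_string() }
    }
}

fn emscripten_options() -> TargetOptions {
    // The Emscripten targets opt into the separate backend instead.
    TargetOptions { codegen_backend: "emscripten".to_string() }
}
```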
There's a whole bunch of consequences of this change, but I'll try to enumerate
them here:
* A *second* LLVM submodule was added in this commit. The main LLVM submodule
will soon start to drift from the Emscripten submodule, but currently they're
both at the same revision.
* Logic was added to rustbuild to *not* build the Emscripten backend by default.
This is gated behind a `--enable-emscripten` flag to the configure script. By
default users should neither check out the emscripten submodule nor compile
it.
* The `init_repo.sh` script was updated to fetch the Emscripten submodule from
GitHub the same way we do the main LLVM submodule (a tarball fetch).
* The Emscripten backend, turned off by default, is still turned on for a number
of targets on CI. We'll only be shipping an Emscripten backend with Tier 1
platforms, though. All cross-compiled platforms will not be receiving an
Emscripten backend yet.
This commit means that when you download the `rustc` package in Rustup for Tier
1 platforms you'll be receiving two trans backends, one for Emscripten and one
that's the general LLVM backend. If you never compile for Emscripten you'll
never use the Emscripten backend, so we may update this one day to only download
the Emscripten backend when you add the Emscripten target. For now though it's
just an extra 10MB gzip'd.
Closes #46819
If a PR intends to update a tool but its test has failed, abort the merge
regardless of the current channel. This should help the tool maintainers if the
update turns out to be failing due to changes in the latest master.
[auto-toolstate] Upload the toolstate result to an external git repository, and remove BuildExpectation
This PR consists of 3 commits.
1. (Steps 4–6) The `toolstate.json` output previously collected is now pushed to the https://github.com/rust-lang-nursery/rust-toolstate repository.
2. (Step 7) Revert commit ab018c7, thus removing all traces of `BuildExpectation` and `toolstate.toml`.
3. (Step 8) Adjust CONTRIBUTING.md for the new procedure.
These are the last steps of #45861. After this PR, the toolstate will be automatically computed and published to https://rust-lang-nursery.github.io/rust-toolstate/. There is no need to manage toolstate.toml anymore.
Closes #45861.
Validate miri against the HIR const evaluator
r? @eddyb
cc @alexcrichton @arielb1 @RalfJung
The interesting parts are the last few functions in `librustc_const_eval/eval.rs`:
* We warn if miri produces an error while HIR const eval does not.
* We warn if miri produces a value that does not match the value produced by HIR const eval.
* If miri succeeds and HIR const eval fails, nothing is emitted, but we still return the HIR error.
* If both error, nothing is emitted and the HIR const eval error is returned.
So there are no actual changes, except that miri is forced to produce the same values as the old const eval (see the sketch below).
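In code, the decision table above amounts to something like this sketch (types and names invented for illustration; the real logic lives in `librustc_const_eval/eval.rs`):

```rust
#[derive(Debug, PartialEq)]
struct ConstVal(i128);
#[derive(Debug)]
struct MiriError(String);
#[derive(Debug)]
struct HirError(String);

/// Compare miri's result against HIR const eval, warning on any
/// disagreement but always deferring to the HIR result.
fn cross_check(
    miri: Result<ConstVal, MiriError>,
    hir: Result<ConstVal, HirError>,
) -> Result<ConstVal, HirError> {
    match (miri, hir) {
        (Ok(m), Ok(h)) => {
            if m != h {
                eprintln!("warning: miri produced {:?} but HIR const eval produced {:?}", m, h);
            }
            Ok(h)
        }
        (Err(e), Ok(h)) => {
            eprintln!("warning: miri errored ({:?}) where HIR const eval succeeded", e);
            Ok(h)
        }
        // miri succeeded where HIR failed, or both errored: emit
        // nothing and return the HIR const eval error.
        (_, Err(e)) => Err(e),
    }
}
```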
* This does **not** touch the const evaluator in trans at all. That will come in a future PR.
* This does **not** cause any code to compile that didn't compile before. That will also come in the future.
It would be great if someone could start a crater run if Travis passes.
Revert #46360, re-enable macOS dist images.
This PR reverts #46360, which disabled all macOS dist images to work around travis-ci/travis-ci#8821.
This PR should be merged as soon as the Travis bug has been fixed.
Closes #46357.
cc @rust-lang/infra