Update emscripten
This updates emscripten to 1.38.15, which is based on LLVM 6.0.1 and would allow us to drop code for handling LLVM 4.
The main issue I ran into is that exporting statics through `EXPORTED_FUNCTIONS` no longer works. As far as I understand exporting non-functions doesn't really make sense under emscripten anyway, so I've modified the symbol export code to not even try.
Closes #52323.
This fixes a regression from #53031 where specifying `-C target-cpu=native` is
printing a lot of warnings from LLVM about `native` being an unknown CPU. It
turns out that `native` is indeed an unknown CPU and we have to perform a
mapping to an actual CPU name, but this mapping was only performed in one
location rather than in all the locations where we inform LLVM about the target
CPU. This commit centralizes the mapping of `native` to LLVM's name for the
native CPU, ensuring that wherever we inform LLVM of the `target-cpu` it is
never `native`.
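The mapping itself is a single call into LLVM's host CPU detection; a minimal sketch of the centralized helper (the function name here is illustrative, not the exact one in rustc):

    #include "llvm/ADT/StringRef.h"
    #include "llvm/Support/Host.h"
    #include <string>

    // Translate the user-facing "native" value into a concrete CPU name before
    // it is ever handed to LLVM; every other value is passed through unchanged.
    static std::string resolveTargetCPU(const std::string &Requested) {
      if (Requested == "native")
        return llvm::sys::getHostCPUName().str(); // e.g. "skylake"
      return Requested;
    }
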
Closes #53322
In some profiling on OSX I saw the `write` syscall as quite high up on
the profiling graph, which is definitely not good! It looks like we're
setting the output stream for an object file directly to a file
descriptor, which means that we run the risk of doing lots of little
writes rather than a few large ones.
This commit fixes this issue by adding a buffered stream on the output,
causing the `write` syscall to disappear from the profiles on OSX.
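A minimal sketch of the buffering, assuming the object file is still ultimately written through a `raw_fd_ostream` (path handling and error checks elided):

    #include "llvm/ADT/StringRef.h"
    #include "llvm/Support/FileSystem.h"
    #include "llvm/Support/raw_ostream.h"
    #include <system_error>

    void writeObjectFile(llvm::StringRef Path) {
      std::error_code EC;
      llvm::raw_fd_ostream FileStream(Path, EC, llvm::sys::fs::F_None);
      // buffer_ostream accumulates everything in memory and hands it to the
      // underlying stream in one go when it is destroyed, so the emission pass
      // no longer issues a tiny write() per chunk.
      llvm::buffer_ostream Buffered(FileStream);
      // ... point the object-emission pass at `Buffered` instead of `FileStream` ...
    }
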
Fix -Wpessimizing-move warnings in rustllvm/PassWrapper
These are producing warnings when building rustc (`warning: moving a local object in a return statement prevents copy elision [-Wpessimizing-move]`).
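An illustrative case of the warning and the fix (the function is hypothetical, only the pattern matters):

    #include <memory>
    #include <utility>

    std::unique_ptr<int> makeValue() {
      auto Result = std::make_unique<int>(42);
      // return std::move(Result);  // warns: the move defeats copy elision (NRVO)
      return Result;                // fixed: the local is moved/elided automatically
    }
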
rustc: Work around an upstream wasm ThinLTO bug
This commit implements a workaround for an [upstream LLVM bug][1] where custom
sections were accidentally duplicated amongst codegen units when ThinLTO passes
were performed. This is due to the fact that custom sections for wasm are stored
as metadata nodes which are automatically imported into modules when ThinLTO
happens. The fix here is to forcibly delete the metadata node from imported
modules before LLVM has a chance to try to copy it over.
[1]: https://bugs.llvm.org/show_bug.cgi?id=38184
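A sketch of the workaround's core step, assuming the named metadata node in question is the `wasm.custom_sections` node used to carry custom sections (all surrounding import plumbing elided):

    #include "llvm/IR/Metadata.h"
    #include "llvm/IR/Module.h"

    // Strip the custom-sections metadata from a module we are importing *from*,
    // so the sections only survive in the module that originally defined them.
    static void stripWasmCustomSections(llvm::Module &M) {
      if (llvm::NamedMDNode *Sections = M.getNamedMetadata("wasm.custom_sections"))
        Sections->eraseFromParent();
    }
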
This commit removes a hack in our ThinLTO passes which removes available
externally functions manually. The [upstream bug][1] has long since been fixed,
so we should be able to rely on LLVM natively for this now!
[1]: https://bugs.llvm.org/show_bug.cgi?id=35736
This commit upgrades the main LLVM submodule to LLVM's current master branch.
The LLD submodule is updated in tandem as well as compiler-builtins.
Along the way support was also added for LLVM 7's new features. This primarily
includes support for custom section concatenation natively in LLD, so we now
add wasm custom sections in LLVM IR rather than having custom support in rustc
itself for doing so.
Some other miscellaneous changes are:
* We now pass `--gc-sections` to `wasm-ld`
* The optimization level is now passed to `wasm-ld`
* A `--stack-first` option is passed to LLD to have stack overflow always cause
a trap instead of corrupting static data
* The wasm target for LLVM switched to `wasm32-unknown-unknown`.
* The syntax for aligned pointers has changed in LLVM IR and tests are updated
to reflect this.
* The `thumbv6m-none-eabi` target is disabled due to an [LLVM bug][llbug]
Lately we've mostly been upgrading only when there's a major release of LLVM,
but enough changes have been happening on the wasm target that there's been
growing motivation for quite some time to upgrade our version of LLD. To
upgrade LLD, however, we need to upgrade LLVM to avoid needing to build yet
another version of LLVM on the builders.
The revision of LLVM in use here is arbitrarily chosen. We will likely need to
continue to update it over time if and when we discover bugs. Once LLVM 7 is
fully released we can switch to that channel as well.
[llbug]: https://bugs.llvm.org/show_bug.cgi?id=37382
The crash that happened in #23566 no longer occurs with the LLVM mergefunc
pass enabled, and the pass hugely reduces code size (for example it shaves 10% off
the final Servo executable). This patch reenables it.
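A minimal sketch of what re-enabling amounts to (exactly where rustc hooks this into its pass setup is elided):

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/Transforms/IPO.h"

    // Add the MergeFunctions pass to the module pipeline so identical function
    // bodies are folded into one, which is where the code-size win comes from.
    void addMergeFunctionsPass(llvm::legacy::PassManager &MPM) {
      MPM.add(llvm::createMergeFunctionsPass());
    }
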
The LLVM PassManager has a PrepareForThinLTO flag, which is intended
when compilation occurs in conjunction with linking by ThinLTO. The
flag has two effects:
* The NameAnonGlobal pass is run after all other passes, which
ensures that all globals have a name.
* In optimized builds, a number of late passes (mainly related to
vectorization and unrolling) are disabled, on the rationale that
these a) will increase codesize of the intermediate artifacts
and b) will be run by ThinLTO again anyway.
This patch enables the use of PrepareForThinLTO if Thin or ThinLocal
linking is used.
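A minimal sketch of the change, with the `IsThinLTO` flag standing in for rustc's "Thin or ThinLocal" check:

    #include "llvm/Transforms/IPO/PassManagerBuilder.h"

    // Tell the PassManagerBuilder that a ThinLTO link follows: it will run
    // NameAnonGlobal at the end and, in optimized builds, skip the late
    // vectorization/unrolling passes that ThinLTO re-runs anyway.
    void configurePassBuilder(llvm::PassManagerBuilder &PMB, bool IsThinLTO) {
      if (IsThinLTO)
        PMB.PrepareForThinLTO = true;
    }
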
The background for this change is the CI failure in #49479, which
we assume to be caused by the NameAnonGlobal pass not being run.
As this changes which passes LLVM runs, this might have performance
(or other) impact, so we want to land this separately.
In `LLVMRustHasFeature()`, rather than using `MCInfo->getFeatureTable()`
that is specific to Rust's LLVM fork, we can use this in LLVM 6:
/// Check whether the subtarget features are enabled/disabled as per
/// the provided string, ignoring all other features.
bool checkFeatures(StringRef FS) const;
Now rustc built against an external LLVM can also support `target_feature`.
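Roughly what `LLVMRustHasFeature` reduces to with stock LLVM 6 (unwrapping of the C-API handle is elided):

    #include "llvm/MC/MCSubtargetInfo.h"
    #include "llvm/Target/TargetMachine.h"
    #include <string>

    // Ask the subtarget whether "+<feature>" is enabled for the current target,
    // instead of walking the feature table that only exists in Rust's LLVM fork.
    bool hasFeature(const llvm::TargetMachine *TM, const char *Feature) {
      const llvm::MCSubtargetInfo *MCInfo = TM->getMCSubtargetInfo();
      return MCInfo->checkFeatures(std::string("+") + Feature);
    }
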
LLVM has since removed the `CodeModel::Default` enum value in favor of an
`Optional` implementation throughout LLVM. Let's mirror the same change in Rust
and update the various bindings we call accordingly.
Removed in llvm-mirror/llvm@9aafb854c
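A minimal sketch of the mirrored change (the boolean "was a code model specified" input is illustrative):

    #include "llvm/ADT/Optional.h"
    #include "llvm/Support/CodeGen.h"

    // The old CodeModel::Default value is now expressed as an empty Optional,
    // which lets LLVM pick an appropriate code model for the target itself.
    llvm::Optional<llvm::CodeModel::Model>
    toLLVMCodeModel(bool HasCodeModel, llvm::CodeModel::Model Model) {
      if (!HasCodeModel)
        return llvm::None;   // previously: CodeModel::Default
      return Model;
    }
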
This commit is the next attempt to enable multiple codegen units by default in
release mode, getting some of those sweet, sweet parallelism wins by running
codegen in parallel. Performance should not be lost due to ThinLTO being on by
default as well.
Closes #45320
This commit implements a workaround for #46346 which basically just
avoids triggering the situation that LLVM's bug
https://bugs.llvm.org/show_bug.cgi?id=35562 arises. More details can be
found in the code itself but this commit is also intended to ...
Closes #46346
In #46382 the logic around linkage preservation with ThinLTO was tweaked, but the
loop that registered all otherwise exported GUID values as "don't internalize
me please" was erroneously too conservative and only asking "external" linkage
items to not be internalized. Instead we actually want the inversion of that
condition: everything *without* "local" linkage should not be internalized.
This commit updates the condition there, adds a test, and...
Closes #46543
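A sketch of the corrected check (the loop over exported summaries that calls it is elided):

    #include "llvm/IR/GlobalValue.h"
    #include "llvm/IR/ModuleSummaryIndex.h"

    // Anything that is *not* local-linkage must be kept out of internalization;
    // the previous check only preserved strictly "external" linkage items.
    bool mustPreserve(const llvm::GlobalValueSummary &Summary) {
      // before: llvm::GlobalValue::isExternalLinkage(Summary.linkage())
      return !llvm::GlobalValue::isLocalLinkage(Summary.linkage());
    }
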
Assume at least LLVM 3.9 in rustllvm and rustc_llvm
We bumped the minimum LLVM to 3.9 in #45326. This just cleans up the conditional code in the `rustllvm` C++ wrappers to assume that minimum, and similarly cleans up the `rustc_llvm` build script.
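A hypothetical example of the kind of guard that can now simply collapse to the modern branch:

    #include "llvm/IR/Module.h"

    // With 3.9 as the floor, version checks like
    //   #if LLVM_VERSION_GE(3, 9) ... #else ... #endif
    // around calls like this in the rustllvm wrappers can be dropped entirely,
    // keeping only the new-API branch.
    void setModuleTriple(llvm::Module &M, const char *Triple) {
      M.setTargetTriple(Triple);
    }
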
Previously we were too eagerly exporting almost all symbols used in ThinLTO
which can cause a whole host of problems downstream! This commit instead fixes
this error by aligning more closely with `lib/LTO/LTO.cpp` in LLVM's codebase,
which only changes the linkage of summaries that are computed as dead.
Closes #46374
This makes it more robust when assertions are disabled,
crashing instead of causing UB.
Also introduces a tidy check to enforce this rule,
which in turn necessitated making tidy run on src/rustllvm.
Fixes #44020
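A plausible illustration of the pattern, assuming the rule concerns `llvm_unreachable` (which compiles to undefined behavior when LLVM's assertions are off); the function itself is hypothetical:

    #include "llvm/Support/ErrorHandling.h"

    int kindToBits(int Kind) {
      switch (Kind) {
      case 0:
        return 32;
      case 1:
        return 64;
      default:
        // llvm_unreachable("unknown kind");      // UB in NDEBUG builds
        llvm::report_fatal_error("unknown kind"); // always aborts with a message
      }
    }
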
This commit adds a new target to the compiler: wasm32-unknown-unknown. This
target is a reimagining of what it looks like to generate WebAssembly code from
Rust. Instead of using Emscripten, which can bring with it a weighty runtime,
this target uses only the LLVM backend for WebAssembly and a
"custom linker" for now, which will hopefully one day be direct calls to lld.
Notable features of this target include:
* There is zero runtime footprint. The target assumes nothing exists other than
the wasm32 instruction set.
* There is zero toolchain footprint beyond adding the target. No custom linker
is needed, rustc contains everything.
* Very small wasm modules can be generated directly from Rust code using this
target.
* Most of the standard library is stubbed out to return an error, but anything
related to allocation works (e.g. `HashMap`, `Vec`, etc).
* Naturally, any `#[no_std]` crate should be 100% compatible with this new
target.
This target is currently somewhat janky due to how linking works. The "linking"
is currently unconditional whole program LTO (i.e. LLVM is being used as a
linker). Naturally that means compiling programs is pretty slow! Eventually
though this target should have a linker.
This target is also intended to be quite experimental. I'm hoping that this can
act as a catalyst for further experimentation in Rust with WebAssembly. Breaking
changes are very likely to land to this target, so it's not recommended to rely
on it in any critical capacity yet. We'll let you know when it's "production
ready".
---
Currently testing-wise this target is looking pretty good but isn't complete.
I've got almost the entire `run-pass` test suite working with this target (lots
of tests ignored, but many passing as well). The `core` test suite is still
getting LLVM bugs fixed to get that working and will take some time. Relatively
simple programs all seem to work though!
---
It's worth noting that you may not immediately see the "smallest possible wasm
module" for the input you feed to rustc. For various reasons it's very difficult
to get rid of the final "bloat" in vanilla rustc (again, a real linker should
fix all this). For now what you'll have to do is:
cargo install --git https://github.com/alexcrichton/wasm-gc
wasm-gc foo.wasm bar.wasm
And then `bar.wasm` should be the smallest we can get it!
---
In any case for now I'd love feedback on this, particularly on the various
integration points if you've got better ideas of how to approach them!
Enable LLVM's TrapUnreachable flag, which tells it to translate
`unreachable` instructions into hardware trap instructions, rather
than allowing control flow to "fall through" into whatever code
happens to follow it in memory.
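A minimal sketch of where the flag lives (the surrounding TargetMachine creation in rustc is elided):

    #include "llvm/Target/TargetOptions.h"

    // With TrapUnreachable set, codegen lowers `unreachable` to a trap
    // instruction instead of letting execution fall off the end of the function.
    void enableTrapUnreachable(llvm::TargetOptions &Options) {
      Options.TrapUnreachable = true;
    }
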
First the `addPreservedGUID` function forgot to take care of "alias" summaries.
I'm not 100% sure what this is but the current code now matches upstream. Next
the `computeDeadSymbols` return value wasn't actually being used, but it needed
to be used! Together these should...
Closes #45195
This commit is an implementation of LLVM's ThinLTO for consumption in rustc
itself. Currently today LTO works by merging all relevant LLVM modules into one
and then running optimization passes. "Thin" LTO operates differently by having
more sharded work and allowing parallelism opportunities between optimizing
codegen units. Further down the road Thin LTO also allows *incremental* LTO
which should enable even faster release builds without compromising on the
performance we have today.
This commit uses a `-Z thinlto` flag to gate whether ThinLTO is enabled. It then
also implements two forms of ThinLTO:
* In one mode we'll *only* perform ThinLTO over the codegen units produced in a
single compilation. That is, we won't load upstream rlibs, but we'll instead
just perform ThinLTO amongst all codegen units produced by the compiler for
the local crate. This is intended to emulate a desired end point where we have
codegen units turned on by default for all crates and ThinLTO allows us to do
this without performance loss.
* In another mode, like full LTO today, we'll optimize all upstream dependencies
in "thin" mode. Unlike today, however, this LTO step is fully parallelized so
should finish much more quickly.
There's a good bit of comments about what the implementation is doing and where
it came from, but the tl;dr is that currently most of the support here is
copied from upstream LLVM. This code duplication is done for a number of
reasons:
* Controlling parallelism means we can use the existing jobserver support to
avoid overloading machines.
* We will likely want a slightly different form of incremental caching which
integrates with our own incremental strategy, but this is yet to be
determined.
* This buys us some flexibility about when/where we run ThinLTO, as well as
having it tailored to fit our needs for the time being.
* Finally this allows us to reuse some artifacts such as our `TargetMachine`
creation, where not all the options we use today are necessarily supported by
upstream LLVM yet.
My hope is that we can get some experience with this copy/paste in tree and then
eventually upstream some work to LLVM itself to avoid the duplication while
still ensuring our needs are met. Otherwise I fear that maintaining these
bindings may be quite costly over the years with LLVM updates!
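For a concrete anchor, one of the upstream entry points the copied analysis code drives is the cross-module import computation over the merged summary index; a heavily elided sketch (only the call itself is real, all plumbing around it is omitted):

    #include "llvm/ADT/StringMap.h"
    #include "llvm/IR/ModuleSummaryIndex.h"
    #include "llvm/Transforms/IPO/FunctionImport.h"

    // The "thin link": decide, per codegen unit, which function bodies should
    // be imported from which other units, using only the lightweight summaries.
    void computeThinLTOImports(const llvm::ModuleSummaryIndex &CombinedIndex) {
      llvm::StringMap<llvm::FunctionImporter::ImportMapTy> ImportLists;
      llvm::StringMap<llvm::FunctionImporter::ExportSetTy> ExportLists;
      llvm::ComputeCrossModuleImport(CombinedIndex, ImportLists, ExportLists);
    }
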