Building for x86_64-unknown-linux-musl currently results in an executable lacking debug information for musl libc itself. If you request a backtrace in GDB while control flow is within musl (including syscalls made by musl), the result looks like:
    #0 0x0000000000434b46 in __cp_end ()
    #1 0x0000000000432dbd in __syscall_cp_c ()
    #2 0x0000000000000000 in ?? ()
i.e. not very helpful. Adding `--enable-debug` resolves this, and `--enable-optimize` re-enables the optimisations that `--enable-debug` would otherwise turn off.
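For reference, a minimal sketch of a musl build step with these flags; the prefix and parallelism here are illustrative, not the exact CI invocation:

```sh
# Build musl with debug info; --enable-optimize keeps optimisations on,
# which --enable-debug alone would otherwise turn off.
./configure --prefix=/musl-x86_64 --enable-debug --enable-optimize
make -j"$(nproc)"
make install
```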
add a dist builder to build rust-std components for the THUMB targets
the rust-std component only contains the core and compiler-builtins (+c +mem) crates
cc #49382
- I'm not entirely sure if this PR alone will produce rust-std components installable by rustup or if something else needs to be changed
- I could have done the THUMB builds in an existing builder / image; I wasn't sure if that was a good idea so I added a new image
- I could build other crates like alloc into the rust-std component but, AFAICT, that would require calling Cargo a second time (once for alloc and once for compiler-builtins), or having alloc depend on compiler-builtins (#49503 will perform that change) *and* having alloc resurface the "c" and "mem" Cargo features.
r? @alexcrichton
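As a rough illustration of what the new builder would run (the target triple and invocation here are hypothetical examples, not the exact Dockerfile contents):

```sh
# Produce a rust-std component (core + compiler-builtins) for one THUMB target.
./x.py dist --target thumbv7m-none-eabi
```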
Ideally I'd like to soon enable sccache for rustbuild itself and some of the
stage0 tools, but for that to work we'll need better Rust support than the
fairly old version we were previously using could provide!
This commit disables building documentation on cross-compiled compilers, for
example ARM/MIPS/PowerPC/etc. Currently I believe we're not getting much use out
of these documentation artifacts and they often take 10-15 minutes total to
build as it requires building rustdoc/rustbook and then also generating all the
documentation, especially for the reference and the book itself.
In an effort to cut down on the amount of work we're doing on dist CI builders
in light of recent timeouts, this was some relatively low-hanging fruit to cut.
In theory it won't have much impact on the ecosystem, in the hopes that this
documentation isn't used too heavily anyway.
While initial analysis in #48827 showed this only shaving 5 minutes off local
builds, the same 5-minute conclusion was drawn from #48826, which ended up
having nearly a half-hour impact on the bots. In that sense I'm hoping that we
can land this and see what happens on CI to find out how it affects timings.
Note that all tier 1 platforms, Windows, Mac, and Linux, will continue to
generate documentation.
ci: Don't use Travis caches for docker images
This commit moves away from caching on Travis to our own caching on S3 for
caching docker layers between builds. Unfortunately the Travis caches have over
time had a few critical pain points:
* Caches are only updated for successful builds, meaning that if a build times
out or fails in a different location the successfully created docker images
aren't always cached. While this makes sense as a general rule for caches, it
hurts our use cases.
* Caches are per-branch and per-builder, which means that we don't have a
separate cache for each release channel. All our merges go through the `auto`
branch, which means that they're all sharing the same cache, even those for
merging to master/beta. This means that PRs which switch between master/beta
will keep rebuilding and hitting cache misses.
* Caches have historically been invalidated somewhat regularly, a little more
aggressively than we'd want (I think).
* We don't always need to update the contents of the cache if the Docker image
didn't change at all, and saving off the docker layers can sometimes be quite
expensive.
For all these reasons this commit drops the usage of Travis's built-in caching
support. Instead our own caching is used by storing blobs to S3. Normally this
would be a very risky endeavour but we're basically priming a cache for a cache
(docker) so if we get this wrong the failure mode is longer builds, not stale
caches. We'll notice that pretty quickly and hopefully fix it!
The logic here is inserted directly into the `src/ci/docker/run.sh` script to
download an image based on a shasum of the `Dockerfile` and other assorted files.
This blob, if found, is loaded into docker and we record what layers were
inserted. After docker finishes the build (hopefully quickly with lots of cache
hits) we then see the sha of the final image. If it's one of the layers we
loaded then there's no need to update the cache. Otherwise we upload our layers
to the global cache, possibly overwriting what we previously just downloaded.
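A rough sketch of that flow (the bucket name, file names, and exact commands here are illustrative; the real logic lives in `src/ci/docker/run.sh`):

```sh
image=dist-x86_64-linux            # example image name
CACHE_BUCKET=rust-lang-ci-cache    # hypothetical bucket name

# Key the cache on the contents of the Dockerfile (and related files).
cksum="$(sha512sum "docker/$image/Dockerfile" | awk '{print $1}')"
url="https://$CACHE_BUCKET.s3.amazonaws.com/docker/$cksum"

# Prime docker's layer cache from S3 if a previous blob exists.
loaded=""
if curl -sSfL "$url" -o /tmp/docker-cache.tar; then
  loaded="$(docker load -i /tmp/docker-cache.tar)"
fi

docker build --rm -t rust-ci -f "docker/$image/Dockerfile" docker/

# If the final image is among the layers we just loaded, the cache is already
# up to date; otherwise save the layers and upload them, overwriting the blob.
digest="$(docker inspect --format '{{.Id}}' rust-ci)"
if ! echo "$loaded" | grep -q "$digest"; then
  docker history -q rust-ci | grep -v '<missing>' \
    | xargs docker save | aws s3 cp - "s3://$CACHE_BUCKET/docker/$cksum"
fi
```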
This is hopefully a step towards mitigating #49278 although it doesn't
completely fix it as it means we'll still probably have to retry builds that
bust the cache.
This commit adds a new attribute to the Rust compiler specific to the wasm
target (and no other targets). The `#[wasm_import_module]` attribute is used to
specify the module that a name is imported from, and is used like so:
    #[wasm_import_module = "./foo.js"]
    extern {
        fn some_js_function();
    }
Here the import of the symbol `some_js_function` is tagged with the `./foo.js`
module in the wasm output file. Wasm-the-format includes two fields on all
imports, a module and a field. The field is the symbol name (`some_js_function`
above) and the module has historically unconditionally been `"env"`. I'm not
sure if this `"env"` convention has asm.js or LLVM roots, but regardless we'd
like the ability to configure it!
The proposed ES module integration with wasm (aka a wasm module is "just another
ES module") requires that the import module of wasm imports is interpreted as an
ES module import, meaning that you'll need to encode paths, NPM packages, etc.
As a result, we'll need this to be something other than `"env"`!
Unfortunately neither our version of LLVM nor LLD supports custom import modules
(aka anything not `"env"`). My hope is that by the time LLVM 7 is released both
will have support, but in the meantime this commit adds some primitive
encoding/decoding of wasm files to the compiler. This way rustc postprocesses
the wasm module that LLVM emits to ensure it's got all the imports we'd like to
have in it.
Eventually I'd ideally like to unconditionally require this attribute to be
placed on all `extern { ... }` blocks. For now, though, it seemed prudent to add
it as an unstable attribute, so it's not yet required (as that would force usage
of a feature gate). Hopefully it doesn't take too long to "stabilize" this!
cc rust-lang-nursery/rust-wasm#29
ci: Print out how long each step takes on CI
This commit updates CI configuration to inform rustbuild that it should print
out how long each step takes on CI. This'll hopefully allow us to track the
duration of steps over time and follow regressions a bit more closely (as well
as have closer analysis of differences between two builds).
cc #48829
Unfortunately we don't have sufficient time to rebuild the cache *and*
distribute everything in `dist-x86_64-linux alt`; the debug assertions are
really slow.
We will re-enable them after the PR has been successfully merged, thus
successfully updating the cache (freeing up 40 minutes), giving us enough
time to build these tools.
Many tests run on the asmjs builder, like compile-fail, ui, parse-fail, etc.,
aren't actually specific to asm.js. Instead of running redundant test suites
this commit changes things up to only run tests that actually emit JS which we
then pass to node.
Since all tests are effectively compiled with LTO in Emscripten, this commit
disables optimizations to hopefully squeeze some more time out of the CI
builders.
Closes #48826
rustbuild: Remove ThinLTO-related configuration
This commit removes some ThinLTO/codegen unit cruft primarily only needed during
the initial phase where we were adding ThinLTO support to rustc itself. The
current bootstrap compiler knows about ThinLTO and has it enabled by default for
multi-CGU builds, which are also enabled by default. Single-CGU builds (i.e.
disabling ThinLTO) can be achieved by configuring the number of codegen units to
1 for a particular build.
This also changes the defaults for our dist builders to go back to multiple
CGUs. Unfortunately we're seriously hurting for cycle time on the bots right
now, so we need to recover any time we can.
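For a local build, opting into a single codegen unit looks roughly like this (a sketch assuming rustbuild's `rust.codegen-units` option):

```sh
# Disable ThinLTO by building rustc with one codegen unit.
cat >> config.toml <<'EOF'
[rust]
codegen-units = 1
EOF
./x.py build
```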
rustbuild: Pass `ccache` to build scripts
This is a re-attempt at #48192 hopefully this time with 100% less randomly
[blocking builds for 20 minutes][block]. To work around #48192 the sccache
server is started in the `run.sh` script very early on in the compilation
process.
[block]: https://github.com/rust-lang/rust/issues/48192
This commit imports the LLD project from LLVM to serve as the default linker for
the `wasm32-unknown-unknown` target. The `binaryen` submodule is consequently
removed along with "binaryen linker" support in rustc.
Moving to LLD brings with it a number of benefits for wasm code:
* LLD is itself an actual linker, so there's no need to compile all wasm code
with LTO any more. As a result builds should be *much* speedier as LTO is no
longer forcibly enabled for all builds of the wasm target.
* LLD is quickly becoming an "official solution" for linking wasm code together.
This, I believe at least, is intended to be the main supported linker for
native code and wasm moving forward. Picking up support early on should help
ensure that we can help LLD identify bugs and otherwise prove that it works
great for all our use cases!
* Improvements to the wasm toolchain are currently primarily focused around LLVM
and LLD (from what I can tell at least), so it's in general much better to be
on this bandwagon for bugfixes and new features.
* Historical "hacks" like `wasm-gc` will soon no longer be necessary; LLD
will [natively implement][gc] `--gc-sections` (better than `wasm-gc`!), which
means a postprocessor is no longer needed to show off Rust's "small wasm
binary size".
LLD is added in a pretty standard way to rustc right now. A new rustbuild target
was defined for building LLD, and this is executed when a compiler's sysroot is
being assembled. LLD is compiled against the LLVM that we've got in tree, which
means we're currently on the `release_60` branch, but this may get upgraded in
the near future!
LLD is placed into rustc's sysroot in a `bin` directory. This is similar to
where `gcc.exe` can be found on Windows. This directory is automatically added
to `PATH` whenever rustc executes the linker, allowing us to define a `WasmLd`
linker which implements the interface that `wasm-ld`, LLD's frontend, expects.
Like Emscripten the LLD target is currently only enabled for Tier 1 platforms,
notably OSX/Windows/Linux, and will need to be installed manually for compiling
to wasm on other platforms. LLD is by default turned off in rustbuild, and
requires a `config.toml` option to be enabled to turn it on.
Finally the unstable `#![wasm_import_memory]` attribute was also removed as LLD
has a native option for controlling this.
[gc]: https://reviews.llvm.org/D42511
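Enabling it locally looks roughly like this; the `lld` key under `[rust]` is an assumption about the config.toml option referred to above:

```sh
# Build and ship rust-lld alongside rustc (off by default).
cat >> config.toml <<'EOF'
[rust]
lld = true
EOF
./x.py build
```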
Remove --host and --target arguments to configure in Dockerfiles
These arguments are passed to the relevant x.py invocation in all cases
anyway. As such, there is no need to separately configure them; x.py
will ignore the configuration when they are passed on the command line.
r? @alexcrichton
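For illustration, the Dockerfiles already end up invoking something along these lines (the triples are examples), which is why configuring them separately is redundant:

```sh
# Triples passed on the command line override anything set at configure time.
./x.py dist --host aarch64-unknown-linux-gnu --target aarch64-unknown-linux-gnu
```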
ci: Actually bootstrap on i686 dist
Right now the `--build` option was accidentally omitted, so we're bootstrapping
from `x86_64` to `i686`. In addition to being slower (more compiles), that's not
actually bootstrapping!
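Concretely, the fix amounts to passing the build triple explicitly; the exact arguments on the builder may differ from this sketch:

```sh
# Bootstrap natively on i686 instead of cross-compiling from the x86_64 host.
./configure --build=i686-unknown-linux-gnu
```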
The following submodules have been updated for a new version of LLVM:
- `src/llvm`
- `src/libcompiler_builtins` - transitively contains compiler-rt
- `src/dlmalloc`
This also updates the docker container for dist-i686-freebsd as the old 16.04
container is no longer capable of building LLVM. The
compiler-rt/compiler-builtins and dlmalloc updates are pretty routine without
much interesting happening, but the LLVM update here is of particular note.
Unlike previous updates I haven't cherry-picked all existing patches we had on
top of our LLVM branch as we have a [huge amount][patches4] and have at this
point forgotten what most of them are for. Instead I started from the current
`release_60` branch in LLVM and only applied patches that were necessary to get
our tests working and building.
The current set of custom rustc-specific patches included in this LLVM update are:
* rust-lang/llvm@1187443 - this is how we actually implement
`cfg(target_feature)` for now and continues to not be upstreamed. While a
hazard for SIMD stabilization, this commit is otherwise keeping the status
quo of a small rustc-specific feature.
* rust-lang/llvm@013f2ec - this is a rustc-specific optimization that we haven't
upstreamed, notably teaching LLVM about our allocation-related routines (which
aren't malloc/free). Once we stabilize the global allocator routines we will
likely want to upstream this patch, but for now it seems reasonable to keep it
on our fork.
* rust-lang/llvm@a65bbfd - I found this necessary to fix compilation of LLVM in
our 32-bit linux container. I'm not really sure why it's necessary but my
guess is that it's because of the absolutely ancient glibc that we're using.
In any case it's only updating pieces we're not actually using in LLVM so I'm
hoping it'll turn out alright. This doesn't seem like something we'll want to
upstream.
* rust-lang/llvm@77ab1f0 - this is what's actually enabling LLVM to build in our
i686-freebsd container. I'm not really sure what's going on, but we for sure
probably don't want to upstream this, and otherwise it seems not too bad for
now at least.
* rust-lang/llvm@9eb9267 - we currently suffer on MSVC from an [upstream bug]
which, although diagnosed to a particular revision, isn't currently fixed
upstream (and the bug itself doesn't seem too active). This commit is a
partial revert of the suspected cause of this regression (found via a
bisection). I'm sort of hoping that this eventually gets fixed upstream with a
similar fix (which we can replace in our branch), but for now I'm also hoping
it's a relatively harmless change to have.
After applying these patches (plus one [backport] which should be [backported
upstream][llvm-back]) I believe we should have all tests working on all
platforms in our current test suite. I'm like 99% sure that we'll need some more
backports as issues are reported for LLVM 6 when this propagates through
nightlies, but that's sort of just par for the course nowadays!
In any case though some extra scrutiny of the patches here would definitely be
welcome, along with scrutiny of the "missing patches" like a [change to pass
manager order](rust-lang/llvm@2717444753), [another change to pass manager
order](rust-lang/llvm@c782febb7b), some [compile fixes for
sparc](rust-lang/llvm@1a83de63c4), and some [fixes for
solaris](rust-lang/llvm@c2bfe0abb).
[patches4]: https://github.com/rust-lang/llvm/compare/5401fdf23...rust-llvm-release-4-0-1
[backport]: https://github.com/rust-lang/llvm/commit/5c54c252db
[llvm-back]: https://bugs.llvm.org/show_bug.cgi?id=36114
[upstream bug]: https://bugs.llvm.org/show_bug.cgi?id=36096
---
The update to LLVM 6 is desirable for a number of reasons, notably:
* This'll allow us to keep up with the upstream wasm backend, picking up new
features as they start landing.
* Upstream LLVM has fixed a number of SIMD-related compilation errors,
especially around AVX-512 and such.
* There are a few assorted known bugs which are fixed in LLVM 5 and aren't fixed
in the LLVM 4 branch we're using.
* Overall it's not a great idea to stagnate with our codegen backend!
This update is mostly powered by #47730 which is allowing us to update LLVM
*independent* of the version of LLVM that Emscripten is locked to. This means
that when compiling code for Emscripten we'll still be using the old LLVM 4
backend, but when compiling code for any other target we'll be using the new
LLVM 6 target. Once Emscripten updates we may no longer need this distinction,
but we're not sure when that will happen!
Closes #43370
Closes #43418
Closes #47015
Closes #47683
Closes rust-lang-nursery/stdsimd#157
Closes rust-lang-nursery/rust-wasm#3
Dist builds should always be as fast as we can make them, and since
those run on CI we don't care quite as much about the build itself being
somewhat slower. As such, we don't automatically enable ThinLTO on builds for
the dist builders.
This commit introduces a separately compiled backend for Emscripten, avoiding
compiling the `JSBackend` target in the main LLVM codegen backend. This builds
on the foundation provided by #47671 to create a new codegen backend dedicated
solely to Emscripten, removing the `JSBackend` of the main codegen backend in
the process.
A new field was added to each target in this commit which specifies the backend
to use for translation; the default is `llvm`, the main backend that we use. The
Emscripten targets specify an `emscripten` backend instead of the main `llvm`
one.
There's a whole bunch of consequences of this change, but I'll try to enumerate
them here:
* A *second* LLVM submodule was added in this commit. The main LLVM submodule
will soon start to drift from the Emscripten submodule, but currently they're
both at the same revision.
* Logic was added to rustbuild to *not* build the Emscripten backend by default.
This is gated behind a `--enable-emscripten` flag to the configure script (see
the sketch after this list). By default users should neither check out the
emscripten submodule nor compile it.
* The `init_repo.sh` script was updated to fetch the Emscripten submodule from
GitHub the same way we do the main LLVM submodule (a tarball fetch).
* The Emscripten backend, turned off by default, is still turned on for a number
of targets on CI. We'll only be shipping an Emscripten backend with Tier 1
platforms, though. All cross-compiled platforms will not be receiving an
Emscripten backend yet.
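A minimal sketch of opting in locally, assuming the `--enable-emscripten` flag behaves as described in the list above:

```sh
# Build the Emscripten backend too; it is skipped by default.
./configure --enable-emscripten
./x.py build
```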
This commit means that when you download the `rustc` package in Rustup for Tier
1 platforms you'll be receiving two trans backends, one for Emscripten and one
that's the general LLVM backend. If you never compile for Emscripten you'll
never use the Emscripten backend, so we may update this one day to only download
the Emscripten backend when you add the Emscripten target. For now though it's
just an extra 10MB gzip'd.
Closes #46819
First round of LLVM 6.0.0 compatibility
This includes a number of commits for the first round of upgrading to LLVM 6. There are still [lingering bugs](https://github.com/rust-lang/rust/issues/47683) but I believe all of this will nonetheless be necessary!
Now that the Rust codebase depends on cc 1.0.4, there is no longer any
need to specify a compiler for CloudABI manually. Cargo will
automatically call into the right compiler executable.
This is a forward-port of:
* 9426dda83d7a928d6ced377345e14b84b0f11c21
* cbfb9858951da7aee22d82178405306fca9decb1
from the beta branch; these are used to automatically calculate the beta number
based on the number of merges to the beta branch so far.
As discussed in #47427, let's not have a separate container for doing
CloudABI builds. It's a lot faster if we integrate it into an existing
container, so there's less duplication of what's being built.
Upgrade the existing container to Ubuntu 17.10, which is required for
CloudABI builds. The version of Clang shipped with 16.04 is not recent
enough to support CloudABI properly.
Setting up a cross compilation toolchain for CloudABI is relatively
easy. It's just a matter of installing a somewhat recent version of
Clang (5.0 preferred) and installing the corresponding
${target}-cxx-runtime package, containing a set of core C/C++ libraries
(libc, libc++, libunwind, etc).
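A rough sketch of that setup on the Ubuntu 17.10 image (the apt repository configuration is omitted, and the package name simply follows the `${target}-cxx-runtime` pattern mentioned above):

```sh
# Install Clang 5.0 and the prebuilt CloudABI C/C++ runtime for one target.
apt-get install -y clang-5.0 x86_64-unknown-cloudabi-cxx-runtime
```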
Eventually it would be nice if we could also run 'x.py test'. That,
however, still requires some more work. Both libtest and compiletest
would need to be adjusted to deal with CloudABI's requirement of having
all of an application's dependencies injected. Let's settle for just
doing 'x.py dist' for now.
Update musl to 1.1.18
According to http://www.musl-libc.org/download.html:
This release corrects regressions in glob() and armv4t build failure
introduced in the previous release, and includes an important bug fix
for posix_spawnp in the presence of a large PATH environment variable.
ci: use a shared script to build musl
The dist-x86_64-musl, dist-various-1 and dist-i586-gnu-i686-musl builders had different scripts to build musl. This PR creates a unified script, which makes it easier to add new musl targets and to update musl and libunwind (used in the musl targets).
libunwind is updated from 3.7 to 3.9 for dist-x86_64-musl and dist-i586-gnu-i686-musl (dist-various-1 already used the 3.9 version).
* Bump the release version to 1.25
* Bump the bootstrap compiler to the recent beta
* Allow using unstable rustdoc features on beta - this fix has been applied to
the beta branch but needed to go to the master branch as well.
If a PR intends to update a tool but its test has failed, abort the merge
regardless of the current channel. This should help the tool maintainers if the
update turns out to be failing due to changes in the latest master.
[auto-toolstate] Upload the toolstate result to an external git repository, and removes BuildExpectation
This PR consists of 3 commits.
1. (Steps 4–6) The `toolstate.json` output previously collected is now pushed to the https://github.com/rust-lang-nursery/rust-toolstate repository.
2. (Step 7) Revert commit ab018c7, thus removing all traces of `BuildExpectation` and `toolstate.toml`.
3. (Step 8) Adjust CONTRIBUTING.md for the new procedure.
These are the last steps of #45861. After this PR, the toolstate will be automatically computed and published to https://rust-lang-nursery.github.io/rust-toolstate/. There is no need to manage toolstate.toml anymore.
Closes #45861.
The main goal here is to use FreeBSD's normal libc++, instead of
statically linking the libstdc++ packaged with GCC, because that
libstdc++ has bugs that cause rustc to deadlock inside LLVM.
But the easiest way to use libc++ is to switch the build from GCC to
Clang, and the Clang package in the Ubuntu image already knows how to
cross-compile (given a sysroot and preferably cross-binutils), so the
toolchain script now uses that instead of building a custom compiler.
This also de-duplicates the `build-toolchain.sh` script.
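As a rough illustration of why no custom compiler build is needed any more (the paths and triple here are hypothetical), the stock Clang can cross-compile once it is pointed at a FreeBSD sysroot and cross-binutils:

```sh
clang --target=x86_64-unknown-freebsd10 \
      --sysroot=/usr/local/x86_64-unknown-freebsd10 \
      -B /usr/local/x86_64-unknown-freebsd10/bin \
      -o hello hello.c
```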
Reverts https://github.com/rust-lang/rust/pull/46498
I must have made some mistake when I tested that commit and thought the
armv5te target worked, but testing it now, the produced binaries segfault
(https://github.com/rust-lang/rust/pull/46498#issuecomment-350599233).
I tried using crosstool-ng and buildroot toolchains (for armv5te) but the
produced binaries also segfault. Maybe there is an issue with the
target, but I cannot investigate it any further.
I think the best for now is not to distribute the armv5te target.
I'm sorry for what happened.
This commit allocates a builder to running wasm32 tests on Travis. Not all test
suites pass right now so this is starting out with just the run-pass and the
libcore test suites. This'll hopefully give us a pretty broad set of coverage
for integration in rustc itself as well as a somewhat broad coverage of the llvm
backend itself through integration/unit tests.
This commit alters how we compile LLVM, enabling the WebAssembly backend by
default. This then also adds the wasm32-unknown-unknown target to get compiled
on the `cross` builder and distributed through rustup. Tests are not yet enabled
for this target but that should hopefully be coming soon!
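Locally, opting into the same setup looks roughly like this; the `experimental-targets` key is an assumption based on rustbuild's config.toml and the exact defaults may differ:

```sh
# Build LLVM with the experimental WebAssembly backend and build std for the target.
cat >> config.toml <<'EOF'
[llvm]
experimental-targets = "WebAssembly"

[build]
target = ["wasm32-unknown-unknown"]
EOF
./x.py build
```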
fix linking error on i586
Try to fix this linking error on i586 in cross:
https://travis-ci.org/japaric/cross/builds/302095949#L8670
The problem is that `std` is built in Ubuntu 16.04 and `cross` uses a linker from 12.04.
Currently this fix solves the problem for `i686-musl`, making it "supercompatible"; this PR applies the fix to `i586` as well.
The cross PR is here: https://github.com/japaric/cross/pull/157
Use #!/usr/bin/env as shebang for Bash scripts
On some systems, the bash command may be available in a directory other
than /bin. As such, using an env shebang is more convenient.
This makes sense even for the docker scripts, as you can use Docker
on FreeBSD or SmartOS, for example.
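In other words, the first line of each script becomes the following, instead of a hard-coded path such as `#!/bin/bash`:

```sh
#!/usr/bin/env bash
```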
* The SDK tools are upgraded to 27.0.0.
- Refactored to use `sdkmanager`/`avdmanager` instead of the deprecated
`android` tool.
* The Java version used by Android SDK is downgraded to OpenJDK-8, in order
to download the SDK through HTTPS.
* The NDK is upgraded to r15c.
- Dropped support for android-9 (2.3 / Gingerbread); the minimum
supported version is now android-14 (4.0 / Ice Cream Sandwich).
- Changed the default Android compiler from GCC to Clang.
- For details of the changes introduced by NDK r15, see
https://github.com/android-ndk/ndk/wiki/Changelog-r15.
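For illustration only, the non-deprecated tools are driven roughly like this; the component names below are examples and not necessarily the exact set installed in the image:

```sh
# Accept licenses and install components with sdkmanager instead of the
# deprecated `android` tool.
yes | sdkmanager --licenses
sdkmanager "platform-tools" "platforms;android-14" "emulator"
```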