libcompiler-rt.a is dead, long live libcompiler-builtins.rlib
This commit moves the logic that used to build libcompiler-rt.a into a
compiler-builtins crate on top of the core crate and below the std crate.
This new crate still compiles the compiler-rt intrinsics using gcc-rs
but produces an .rlib instead of a static library.
Also, with this commit rustc no longer passes -lcompiler-rt to the
linker. This effectively makes the "no-compiler-rt" field of target
specifications a no-op. Users of `no_std` will have to explicitly add
the compiler-builtins crate to their crate dependency graph *if* they
need the compiler-rt intrinsics. Users of `std` have to do nothing
extra as the std crate depends on compiler-builtins.
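To illustrate what that looks like for a `no_std` user, here is a minimal sketch; the `compiler_builtins_lib` feature-gate name is an assumption based on the description above, not a stable reference:
``` rust
// Sketch of a no_std library crate that explicitly pulls in the intrinsics,
// now that rustc no longer passes -lcompiler-rt to the linker for it.
#![no_std]
#![feature(compiler_builtins_lib)] // assumed gate name, for illustration only

extern crate compiler_builtins; // provides the former compiler-rt intrinsics

pub fn mul64(a: u64, b: u64) -> u64 {
    // On 32-bit targets an operation like this is what lowers to an
    // intrinsic call (e.g. __muldi3) that compiler-builtins provides.
    a.wrapping_mul(b)
}
```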
Finally, this is a step towards lazy compilation of std with Cargo as the
compiler-rt intrinsics can now be built by Cargo instead of having to
be supplied by the user by some other method.
closes #34400
Allow setting --docdir
This allows setting `--docdir` during configure, which is useful because not all Linux distributions install documentation to `/usr/share/doc`. For example, in Slackware documentation is installed to `/usr/doc/$PRGNAM-$VERSION` and `/usr/share/doc` is a symlink to `/usr/doc`.
To do so, run `./configure --docdir=/usr/doc/$PRGNAM-$VERSION`.
This adds support for building the Rust compiler and standard
library for s390x-linux, allowing a full cross-bootstrap sequence
to complete. This includes:
- Makefile/configure changes to allow native s390x builds
- Full Rust compiler support for the s390x C ABI
(only the non-vector ABI is supported at this point)
- Port of the standard library to s390x
- Update the liblibc submodule to a version including s390x support
- Testsuite fixes to allow clean "make check" on s390x
Caveats:
- Resets base cpu to "z10" to bring support in sync with the default
behaviour of other compilers on the platform. (Usually, upstream
supports all older processors; a distribution build may then choose
to require a more recent base version.) (Also, using zEC12 causes
failures in the valgrind tests since valgrind doesn't fully support
this CPU yet.) See the sketch after this list for how the base CPU
and vector feature are pinned.
- z13 vector ABI is not yet supported. To ensure compatible code
generation, the -vector feature is passed to LLVM. Note that this
means that even when compiling for z13, no vector instructions
will be used. In the future, support for the vector ABI should be
added (this will require common code support for different ABIs
that need different data_layout strings on the same platform).
- Two test cases are (temporarily) ignored on s390x to allow passing
the test suite. The underlying issues still need to be fixed:
* debuginfo/simd.rs fails because of incorrect debug information.
This seems to be an LLVM bug (also seen with C code).
* run-pass/union/union-basic.rs simply seems to be incorrect for
all big-endian platforms.
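The following self-contained sketch illustrates how the two settings above fit together; it is only a simplified stand-in for a real rustc target specification, and the field names are assumptions rather than the compiler's actual types:
``` rust
// Illustrative only: a simplified stand-in for a target specification showing
// the base CPU pinned to "z10" and the vector feature disabled so that no z13
// vector instructions (and thus no vector ABI) are used.
struct TargetSketch {
    llvm_target: &'static str,
    cpu: &'static str,
    features: &'static str,
}

fn s390x_unknown_linux_gnu() -> TargetSketch {
    TargetSketch {
        llvm_target: "s390x-unknown-linux-gnu",
        cpu: "z10",          // reset base CPU to z10 (see caveat above)
        features: "-vector", // passed to LLVM to avoid the unsupported vector ABI
    }
}

fn main() {
    let t = s390x_unknown_linux_gnu();
    println!("{}: cpu={}, features={}", t.llvm_target, t.cpu, t.features);
}
```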
Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
rustc: Implement custom derive (macros 1.1)
This commit is an implementation of [RFC 1681] which adds support to the
compiler for first-class user-defined custom `#[derive]` modes with a far more
stable API than plugins have today.
[RFC 1681]: https://github.com/rust-lang/rfcs/blob/master/text/1681-macros-1.1.md
The main features added by this commit are:
* A new `rustc-macro` crate-type. This crate type represents one which will
provide custom `derive` implementations and perhaps eventually flower into the
implementation of macros 2.0 as well.
* A new `rustc_macro` crate in the standard distribution. This crate will
provide the runtime interface between macro crates and the compiler. The API
here is particularly conservative right now but has quite a bit of room to
expand into any manner of APIs required by macro authors.
* The ability to load new derive modes through the `#[macro_use]` annotations on
other crates.
All support added here is gated behind the `rustc_macro` feature gate, both for
the library support (the `rustc_macro` crate) as well as the language features.
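As a rough sketch of the shape of such a custom derive under this interface (the attribute and feature-gate names here follow the description above and RFC 1681, but are assumptions rather than a stable reference):
``` rust
// A `rustc-macro` crate sketch: receives the tokens of the item the user
// wrote and hands back tokens containing that item plus a generated impl.
#![feature(rustc_macro, rustc_macro_lib)] // assumed gate names
#![crate_type = "rustc-macro"]

extern crate rustc_macro;

use rustc_macro::TokenStream;

#[rustc_macro_derive(MyTrait)]
pub fn derive_my_trait(input: TokenStream) -> TokenStream {
    // A real derive would parse the item to find its name; this sketch just
    // assumes the annotated type is literally called `MyStruct`.
    let source = input.to_string();
    let expanded = format!("{}\nimpl MyTrait for MyStruct {{}}", source);
    expanded.parse().unwrap()
}
```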
There are a few minor differences from the implementation outlined in the RFC,
such as the `rustc_macro` crate being available as a dylib and all symbols being
`dlsym`'d directly instead of having a shim compiled. These should only affect
the implementation, however, not the public interface.
This commit also ended up touching a lot of code related to `#[derive]`, making
a few notable changes:
* Recognized derive attributes are no longer desugared to `derive_Foo`. It wasn't
clear how to keep this behavior and *not* expose it to custom derive.
* Derive attributes no longer have access to unstable features by default, they
have to opt in on a granular level.
* The `derive(Copy,Clone)` optimization is now done through another "obscure
attribute" which is just intended to ferry along in the compiler that such an
optimization is possible. The `derive(PartialEq,Eq)` optimization was also
updated to do something similar.
---
One part of this PR which needs to be improved before stabilizing is the set of
errors and exact interfaces here. The error messages are relatively poor quality
and there are surprising aspects such as `#[derive(PartialEq, Eq, MyTrait)]`
not working by default. The custom attributes added by the compiler end up
becoming unstable again when going through a custom impl.
Hopefully though this is enough to start allowing experimentation on crates.io!
test: Add a min-llvm-version directive
We've got tests which require a particular version of LLVM to run as they're
testing bug fixes. Our build system, however, supports multiple LLVM versions,
so we can't run these tests on all LLVM versions.
This adds a new `min-llvm-version` directive for tests so they can opt out of
being run on older versions of LLVM. Namely, this logic is then applied to the
`issue-36023.rs` test case and...
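For illustration, such a test might opt in with a header comment along these lines; the exact directive syntax and the version number shown are illustrative, only the directive name comes from this change:
``` rust
// min-llvm-version 3.9
// (a sketch: compiletest would skip this test when the compiler was built
// against an LLVM older than the version named above)

fn main() {
    // body exercising the LLVM bug fix under test
}
```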
Closes #36138
add mips64-gnu and mips64el-gnu targets
With this commit one can build no_core (and probably no_std as well)
Rust programs for these targets. It's not yet possible to cross compile
std for these targets because rust-lang/libc doesn't know about the
mips64 architecture.
These targets have been tested by cross compiling the "smallest hello"
program (see code below) and then running it under QEMU.
``` rust
#![feature(start)]
#![feature(lang_items)]
#![feature(no_core)]
#![no_core]
#[link(name = "c")]
extern {
    fn puts(_: *const u8);
}
#[start]
fn start(_: isize, _: *const *const u8) -> isize {
    unsafe {
        let msg = b"Hello, world!\0";
        puts(msg as *const _ as *const u8);
    }
    0
}
#[lang = "copy"]
trait Copy {}
#[lang = "sized"]
trait Sized {}
```
cc #36015
r? @alexcrichton
cc @brson
The cabi stuff is likely wrong. I just copied cabi_mips source and changed some `4`s to `8`s and `32`s to `64`s. It was enough to get libc's `puts` to work but I'd like someone familiar with this module to check it.
Implement synchronization scheme for incr. comp. directory
This PR implements a copy-on-write-based synchronization scheme for the incremental compilation cache directory. For technical details, see the documentation at the beginning of `rustc_incremental/persist/fs.rs`.
The PR contains unit tests for some functions but for testing whether the scheme properly handles races, a more elaborate test setup would be needed. It would probably involve a small tool that allows manipulating the incremental compilation directory in a controlled way and then letting a compiler instance run against directories in different states. I don't know if it's worth the trouble of adding another test category to `compiletest`, but I'd be happy to do so.
Fixes #32754. Fixes #34957.
initial support for s390x
A new target, `s390x-unknown-linux-gnu`, has been added to the compiler
and can be used to build no_core/no_std Rust programs.
Known limitations:
- librustc_trans/cabi_s390x.rs is missing. This means no support for
`extern "C" fn`.
- No support for this arch in libc. This means std can't be cross
compiled for this target.
r? @alexcrichton
This time I couldn't test running a binary cross compiled to this target under QEMU because the qemu-s390x that ships with Ubuntu 16.04 SIGABRTs with every s390x binary I run it with.
Change in binary size of `librustc_llvm.so`:
Without this commit (stage1): 41895736 bytes
With this commit (stage1): 42899016 bytes
~2.4% increase
A new target, `s390x-unknown-linux-gnu`, has been added to the compiler
and can be used to build no_core/no_std Rust programs.
Known limitations:
- librustc_trans/cabi_s390x.rs is missing. This means no support for
`extern "C" fn`.
- No support for this arch in libc. This means std can't be cross compiled
for this target.
With this commit one can build no_core (and probably no_std as well)
Rust programs for these targets. It's not yet possible to cross compile
std for these targets because rust-lang/libc doesn't know about the
mips64 architecture.
These targets have been tested by cross compiling the "smallest hello"
program (see code below) and then running it under QEMU.
``` rust
#![feature(start)]
#![feature(lang_items)]
#![feature(no_core)]
#![no_core]
#[link(name = "c")]
extern {
    fn puts(_: *const u8);
}
#[start]
fn start(_: isize, _: *const *const u8) -> isize {
    unsafe {
        let msg = b"Hello, world!\0";
        puts(msg as *const _ as *const u8);
    }
    0
}
#[lang = "copy"]
trait Copy {}
#[lang = "sized"]
trait Sized {}
```
add mips-uclibc targets
These targets cover OpenWRT 15.05 devices, which use the soft float ABI
and the uclibc library. None of the other built-in mips targets covered
those devices (mips-gnu is hard float and glibc-based, mips-musl is
musl-based).
With this commit one can now build std for these devices using these
commands:
```
$ configure --enable-rustbuild --target=mips-unknown-linux-uclibc
$ make
```
cc #35673
---
r? @alexcrichton
cc @felixalias This is the target the rust-tessel project should be using.
Note that the libc crate doesn't support the uclibc library and will have to be updated. We are lucky that uclibc and glibc are somewhat similar and one can build std and even run the libc-test test suite with the current, unmodified libc. About that last part, I tried to run libc-test and got a bunch of compile errors. I don't intend to fix them but I'll post some instructions about how to run libc-test in the rust-lang/libc issue tracker.
Kicking off libproc_macro
This PR introduces `libproc_macro`, which is currently quite bare-bones (just a few macro construction tools and an initial `quote!` macro).
This PR also introduces a few test cases for it, and an additional `shim` file (at `src/libsyntax/ext/proc_macro_shim.rs`) to allow a facsimile usage of Macros 2.0 *today*!
add GNU make files for arm-unknown-linux-musleabi
For Yocto (Embedded Linux meta distro) Rust is provided via the [meta-rust layer](https://github.com/meta-rust/meta-rust). In this project there have been patches to add `arm-unknown-linux-musleabi`. Rust recently acquired that support via #35060 but only for rustbuild. meta-rust is currently only able to build Rust support with the existing GNU Makefiles. This adds `arm-unknown-linux-musleabi` support to Rust for the GNU Makefiles until meta-rust is able to sort out why using rustbuild does not work for it.
/cc @srwalter @derekstraka @jmesmon @japaric
add -mrelax-relocations=no to i686-musl and i586-gnu
I've been experiencing #34978 with these two targets. This applies the
hack in #35178 to these targets as well.
r? @alexcrichton
The arm-unknown-linux-musleabi target used in meta-rust for Yocto didn't
explicitly set the arch to ARMv6 and soft float; that was instead done via
target spec files, and the compiler never ran on the target itself.
Add -mrelax-relocations=no hacks to fix musl build
* this is just a start, dunno if it will work, but I'll just push it out to get feedback (my rust is still building 😢)
* I don't know much about rustbuild, so I just added that flag in there. It's a total hack, don't judge me
* I suspect the places in the musl .mk files are sufficient (but we may also need it present when building std), I'm not sure, needs more testing.
LLVM upgrade
As discussed in https://internals.rust-lang.org/t/need-help-with-emscripten-port/3154/46 I'm trying to update the used LLVM checkout in Rust.
I basically took @shepmaster's code and applied it on top (though I did the commits manually; the [original commits have better descriptions](https://github.com/rust-lang/rust/compare/master...avr-rust:avr-support)).
With these changes I was able to build rustc. `make check` throws one last error on `run-pass/issue-28950.rs`. Output: https://gist.github.com/badboy/bcdd3bbde260860b6159aa49070a9052
I took the metadata changes as is and they seem to work, though it now uses the module in another step. I'm not sure if this is the best and correct way.
Things to do:
* [x] ~~Make `run-pass/issue-28950.rs` pass~~ unrelated
* [x] Find out how the `PositionIndependentExecutable` setting is now used
* [x] Is the `llvm::legacy` still the right way to do these things?
cc @brson @alexcrichton
Add ARM MUSL targets
Rebase of #33189.
I tested this by producing a std for `arm-unknown-linux-musleabi` then I cross compiled Hello world to said target. Checked that the produced binary was statically linked and verified that the binary worked under QEMU.
This depends on rust-lang/libc#341. I'll have to update this PR after that libc PR is merged.
I'm also working on generating ARM musl cross toolchain via crosstool-ng. Once I verified those work, I'll send a PR to rust-buildbot.
r? @alexcrichton
cc @timonvo
The targets are:
- `arm-unknown-linux-musleabi`
- `arm-unknown-linux-musleabihf`
- `armv7-unknown-linux-musleabihf`
These mirror the existing `gnueabi` targets.
All of these targets produce fully static binaries, similar to the
x86 MUSL targets.
For now these targets can only be used with `--rustbuild` builds, as
https://github.com/rust-lang/compiler-rt/pull/22 only made the
necessary compiler-rt changes in the CMake configs, not the plain
GNU Make configs.
I've tested these targets using GCC 5.3.0 compiled against musl-1.1.12
(downloaded from http://musl.codu.org/). An example `./configure`
invocation is:
```
./configure \
  --enable-rustbuild \
  --target=arm-unknown-linux-musleabi \
  --musl-root="$MUSL_ROOT"
```
where `MUSL_ROOT` points to the `arm-linux-musleabi` prefix.
Usually that path will be of the form
`/foobar/arm-linux-musleabi/arm-linux-musleabi`.
Usually the cross-compile toolchain will live under
`/foobar/arm-linux-musleabi/bin`. That path should either be added
to your `PATH` variable, or you should add a section to your
`config.toml` as follows:
```
[target.arm-unknown-linux-musleabi]
cc = "/foobar/arm-linux-musleabi/bin/arm-linux-musleabi-gcc"
cxx = "/foobar/arm-linux-musleabi/bin/arm-linux-musleabi-g++"
```
As a prerequisite you'll also have to put a cross-compiled static
`libunwind.a` library in `$MUSL_ROOT/lib`. This is similar to
[how the x86_64 MUSL targets are built](https://doc.rust-lang.org/book/advanced-linking.html).
Add MIR Optimization Tests
I've started working on the infrastructure for testing MIR optimizations.
The plan now is to have a set of test cases (written in Rust), compile them with -Z dump-mir, and check the MIR before and after each pass.
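As a sketch of what such a test case could look like (the expected-MIR annotations used by the infrastructure are not shown in this description, so this is purely illustrative):
``` rust
// Sketch of a test input: compile with -Z dump-mir and compare the MIR dumped
// before and after an optimization pass. A constant-propagation style pass,
// for example, could fold the addition below into a single constant.
fn main() {
    let x = 1 + 1;
    assert_eq!(x, 2);
}
```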
The compiler-rt build system has been a never ending cause of pain for Rust
unfortunately:
* The build system is very difficult to invoke and configure to only build
compiler-rt, especially across platforms.
* The standard build system doesn't actually do what we want, not working for
some of our platforms and requiring a significant number of patches on our end
which are difficult to apply when updating compiler-rt.
* Compiling compiler-rt requires LLVM to be compiled, which... is a big
dependency! This also means that over time compiler-rt is not guaranteed to
build against older versions of LLVM (or newer versions), and we often want to
work with multiple versions of LLVM simultaneously.
The makefiles and rustbuild already know how to compile C code; the code here is
far from the *only* C code we're compiling. This patch jettisons all logic to
work with compiler-rt's build system and just goes straight to the source. We
just list all files manually (copied from compiler-rt's
lib/builtins/CMakeLists.txt) and compile them into an archive.
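One way to picture that approach is a build-script sketch using the gcc crate (the gcc-rs mentioned in the compiler-builtins entry above); the file names and relative path here are an illustrative subset, not the real list:
``` rust
// build.rs sketch: compile a hand-maintained list of compiler-rt builtins
// straight from source into an archive, bypassing compiler-rt's build system.
extern crate gcc;

fn main() {
    let mut cfg = gcc::Config::new();
    // Illustrative subset only; the real list is copied from
    // compiler-rt's lib/builtins/CMakeLists.txt.
    for src in &["ashldi3.c", "muldi3.c", "udivmoddi4.c"] {
        cfg.file(format!("../compiler-rt/lib/builtins/{}", src));
    }
    cfg.compile("libcompiler-rt.a");
}
```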
It's likely that this means we'll fail to pick up new files when we upgrade
compiler-rt, but that seems like a much less significant cost to pay than what
we're currently paying.
cc #34400, first steps towards that
llvm, rt: build using the Ninja generator if available
The Ninja generator generally builds much faster than make. It may also
be used on Windows to have a vast speed improvement over the Visual
Studio generators.
Currently hidden behind an `--enable-ninja` flag because it does not
obey the top-level `-j` or `-l` flags given to `make`.
If local-rust is the same as the current version, then force a local-rebuild
In Debian, we would like the option to build/rebuild the current release from
*either* the current or previous stable release. So we use enable-local-rust
instead of enable-local-rebuild, and read the bootstrap key dynamically from
whatever is installed locally.
In general, it does not make much sense to allow enable-local-rust without also
setting the bootstrap key, since the build would fail otherwise.
(The way I detect "the bootstrap key of [the local] rustc installation" is a bit hacky, suggestions welcome.)
Soon the LLVM upgrade (#34743) will require an updated CMake installation, and
the easiest way to do this was to upgrade the Ubuntu version of the bots to
16.04. This in turn brings in a new MIPS compiler on the linux-cross builder,
which is now from the "official" ubuntu repositories. Unfortunately these
new compilers don't support compiling with the `-msoft-float` flag like we're
currently passing, causing compiles to fail.
This commit removes these flags as it's not clear *why* they're being passed, as
the mipsel targets also don't have them. At least if it's not supported by a
default Debian compiler, perhaps it's not too relevant to support?
mk: Request -march=i686 on i686 Linux
Apparently the gcc on our dist bot is so old and/or obscure that the default
`-m32` switch doesn't think it can generate i686 code (or something like that).
The compiler-rt build system probes for the `__i686__` define in GCC to compile
for an i686 (vs i386) target, so this was failing on the bots.
This instead tweaks the build to pass `-march=i686` on i686-unknown-linux-gnu to
C code to ensure that we're compiling for i686 instead of i386. This should
hopefully not actually have an impact other than maybe doing some random
optimization it wasn't able to do before. In theory this isn't making the target
less compatible as all Rust code is already compiled for i686.
Hopefully closes #34572
Currently if an LLVM build is interrupted *after* it creates the llvm-config
binary but before it's done, it puts us in an inconsistent state where we think
LLVM is compiled but it's not actually. This tweaks our logic to only consider
LLVM done building once it's actually done building.
This should hopefully alleviate problems on the bots where if we interrupt at
the wrong time it doesn't corrupt the build directory.
Try to fix the nightlies
They look to be failing right after the CMake PR landed. I've diagnosed and confirmed the first issue fixed; the second is a bit of a shot in the dark to see if it fixes things.
* Implement the clean-llvm target for those cases where makefiles are being used
* Have all cross-compiled LLVMs depend on the **host** LLVM as they'll require
the llvm-tablegen executable from there
This PR refactors the 'errors' part of libsyntax into its own crate (librustc_errors). This is the first part of a few refactorings to simplify error reporting and potentially support more output formats (like a standardized JSON output and possibly an --explain mode that can work with the user's code), though this PR stands on its own and doesn't assume further changes.
As part of separating out the errors crate, I have also refactored the code position portion of codemap into its own crate (libsyntax_pos). While it's helpful to have the common code positions in a separate crate for the new errors crate, this may also enable further simplifications in the future.
mk: Fix bootstrapping cross-hosts on beta
The beta builds are currently failing, unfortunately, due to what is presumably
some odd behavior with our makefiles. The wrong bootstrap key is being used to
generate the stage1 cross-compiled libraries, which fails the build.
Interestingly enough if the targets are directly specified as part of the build
then it works just fine! Just a bare `make` fails...
Instead of trying to understand what's happening in the makefiles instead just
tweak how we configure the bootstrap key in a way that's more likely to work.
In Linux distributions, it is often necessary to rebuild packages for
cases like applying new patches or linking against new system libraries.
In this scenario, the rustc in the distro build environment may already
match the current release that we're trying to rebuild. Thus we don't
want to use the prior release's bootstrap key, nor `--cfg stage0` for
the prior unstable features.
The new `configure --enable-local-rebuild` option specifies that we are
rebuilding from the current release. The current bootstrap key is used
for the local rustc, and current stage1 features are also assumed.
This commit is an implementation of [RFC 1513] which allows applications to
alter the behavior of panics at compile time. A new compiler flag, `-C panic`,
is added and accepts the values `unwind` or `panic`, with the default being
`unwind`. This model affects how code is generated for the local crate, skipping
generation of landing pads with `-C panic=abort`.
[RFC 1513]: https://github.com/rust-lang/rfcs/blob/master/text/1513-less-unwinding.md
Panic implementations are then provided by crates tagged with
`#![panic_runtime]` and lazily required by crates with
`#![needs_panic_runtime]`. The panic strategy (`-C panic` value) of the panic
runtime must match the final product, and if the panic strategy is not `abort`
then the entire DAG must have the same panic strategy.
With the `-C panic=abort` strategy, users can expect a stable method to disable
generation of landing pads, improving optimization in niche scenarios,
decreasing compile time, and decreasing output binary size. With the `-C
panic=unwind` strategy users can expect the existing ability to isolate failure
in Rust code from the outside world.
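To make the user-visible difference concrete, here is a small sketch using `std::panic::catch_unwind`; under the default `-C panic=unwind` the closure's panic is caught, while with `-C panic=abort` no landing pads are generated and the process aborts instead:
``` rust
use std::panic;

fn main() {
    // With -C panic=unwind (the default) this prints "caught: true".
    // With -C panic=abort the panic cannot be caught and the process aborts.
    let result = panic::catch_unwind(|| {
        panic!("boom");
    });
    println!("caught: {}", result.is_err());
}
```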
Organizationally, this commit dismantles the `sys_common::unwind` module,
moving part of it to `libpanic_unwind` and the rest into the
`panicking` module in libstd. The custom panic runtime support is pretty similar
to the custom allocator support with the only major difference being how the
panic runtime is injected (takes the `-C panic` flag into account).
Add armv7-linux-androideabi target
This PR adds `armv7-linux-androideabi` target that matches `armeabi-v7a` Android ABI, ~~downscales `arm-linux-androideabi` target to match `armeabi` Android ABI~~ (TBD later if needed).
This should allow us to get the best performance from every [Android ABI level](http://developer.android.com/ndk/guides/abis.html).
The currently existing target `arm-linux-androideabi` started gaining features outside the supported range of [android `armeabi`](http://developer.android.com/ndk/guides/abis.html). While the Android compiler does not use a different target for the later supported `armv7` architecture, it has a distinct ABI name, `armeabi-v7a`. We decided to add the Rust target `armv7-linux-androideabi` to match it.
Note that `NEON`, `VFPv3-D32`, and `ThumbEE` instruction sets are not added, because not all android devices are guaranteed to support all or some of these, and [their availability should be checked at runtime](http://developer.android.com/ndk/guides/abis.html#v7a).
~~This reduces performance of existing `arm-linux-androideabi` and may make it _much_ slower (we are talking more than an order of magnitude in some random ad-hoc fp benchmark that I did).~~
Part of #33278.
make dist: specify the archive file as stdout
If the `-f` option isn't given, GNU tar will use the environment variable
`TAPE` first, and next use the compiled-in default, which isn't
necessarily `stdout` (it is the tape device `/dev/rst0` under OpenBSD, for
example).
Implement constant support in MIR.
All of the intended features in `trans::consts` are now supported by `mir::constant`.
The implementation is considered a temporary measure until `miri` replaces it.
A `-Z orbit` bootstrap build will only translate LLVM IR from AST for `#[rustc_no_mir]` functions.
Furthermore, almost all checks of constant expressions have been moved to MIR.
In non-`const` functions, trees of temporaries are promoted, as per RFC 1414 (rvalue promotion).
Promotion before MIR borrowck would allow reasoning about promoted values' lifetimes.
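A small example of the promotion behavior described above (a sketch, not taken from the PR's test suite):
``` rust
// The temporaries created for these borrows of constant expressions are
// promoted to 'static memory per RFC 1414, so the references can outlive
// the expression that created them.
static X: &'static i32 = &5;

fn main() {
    let r: &'static i32 = &(2 + 3); // promoted: the borrow is valid for 'static
    println!("{} {}", X, r);
}
```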
The improved checking comes at the cost of four `[breaking-change]`s:
* repeat counts must contain a constant expression, e.g.:
`let arr = [0; { println!("foo"); 5 }];` used to be allowed (it behaved like `let arr = [0; 5];`)
* dereference of a reference to a `static` cannot be used in another `static`, e.g.:
`static X: [u8; 1] = [1]; static Y: u8 = (&X)[0];` was unintentionally allowed before
* the type of a `static` *must* be `Sync`, irrespective of the initializer, e.g.
`static FOO: *const T = &BAR;` worked as `&T` is `Sync`, but it shouldn't because `*const T` isn't
* a `static` cannot wrap `UnsafeCell` around a type that *may* need drop, e.g.
`static X: MakeSync<UnsafeCell<Option<String>>> = MakeSync(UnsafeCell::new(None));`
was previously allowed based on the fact `None` alone doesn't need drop, but in `UnsafeCell`
it can be later changed to `Some(String)` which *does* need dropping
The drop restrictions are relaxed by RFC 1440 (#33156), which is implemented, but feature-gated.
However, creating `UnsafeCell` from constants is unstable, so users can just enable the feature gate.
mk: Fix building with --enable-ccache
We will no longer use `ccache` in the makefiles for our local dependencies like
miniz, but they're so small anyway it doesn't really matter.
Closes#33285
Move auxiliary directories to live with the tests
This is a step toward enabling testing of cross-crate incremental compilation. The idea is that instead of having a central auxiliary directory, when you have a `// aux-build:foo.rs` annotation in the test `run-pass/bar.rs`, it will look in (e.g.) `run-pass/aux/foo.rs`. In general, it looks for an `aux` directory in the same directory as the test. We also ignore the `aux` directories when enumerating the set of tests.
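For illustration, using the names from the description above, a test under the new layout would look roughly like this (sketch only; it depends on the auxiliary crate being built by compiletest):
``` rust
// run-pass/bar.rs
// aux-build:foo.rs
// (compiletest now builds run-pass/aux/foo.rs, found next to this test,
// instead of looking in a central auxiliary directory)

extern crate foo;

fn main() {
    // exercise items exported by the auxiliary `foo` crate here
}
```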
As part of this PR, also refactor `runtest.rs` to use methods on a context, which means we can stop passing around context everywhere.
r? @alexcrichton
Looks like the real bug on nightlies is that the `llvm-pass` run-make test is
not actually getting the value of `LLVM_CXXFLAGS` correct. Namely, it's blank!
Now the only change in #33093 which actually affected this is that the argument
`$(LLVM_CXXFLAGS_$(2))` was moved up from a makefile rule into the definition of
a variable. Sounds innocuous?
Turns out the variable this was moved into is defined with `:=`, which means
that it's not recursively expanded, which basically means that it's expanded
immediately. Unfortunately part of this expansion involves running
`llvm-config`, which doesn't exist at the start of distcheck build!
This didn't show up on the bots because they run `make` *then* `make check`, and
the first step builds llvm-config so the next time `make` is loaded everything
is available. The distcheck bots, however, run just a plain `distcheck` so
`llvm-config` doesn't exist ahead of time. You can see this in action where the
distcheck bots start out with a bunch of "llvm-config not found" error messages.
This commit just changes a few variables to be defined with `=` which
essentially means they're lazily expanded. I did not run a full distcheck
locally, but this makes the initial "llvm-config not found" error messages go
away so I suspect that this is the fix.
Closes #33379
This changes the CFLAGS and related variables passed to compiletest to be passed
for the target, not the host, so we can correctly test 32-bit cross compiles on
64-bit host machines.
Hopefully fixes #33379
test: Move run-make tests into compiletest
Forcing them to be embedded in makefiles precludes being able to run them in
rustbuild, and adding them to compiletest gives us a great way to leverage
future enhancements to our "all encompassing test suite runner" as well as just
moving more things into Rust.
All tests are still Makefile-based in the sense that they rely on `make` being
available to run them, but there's no longer any Makefile-trickery to run them
and rustbuild can now run them out of the box as well.
The `--android-cross-path` option has been deprecated for some time now; we should use
`CFG_ARM_LINUX_ANDROIDEABI_NDK` instead.
Ideally this would use the right triple, but we're only testing ARM for now.
Sanity check Python on OSX for LLDB tests
Two primary changes:
* Don't get past the configure stage if `python` isn't coming from `/usr/bin`
* Call `debugger.Terminate()` to prevent segfaults on newer versions of LLDB.
Closes #32994
This uncovered a lot of bugs in compiletest and also some shortcomings
of our existing JSON output. We had to add information to the JSON
output, such as suggested text and macro backtraces. We also had to fix
various bugs in the existing tests.
Joint work with jntrnr.
Compute `target_feature` from LLVM
This is a work-in-progress fix for #31662.
The logic that computes the target features from the command line has been replaced with queries to the `TargetMachine`.
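For context on what this affects, `target_feature` is exposed to user code through `cfg`, as in the sketch below; after this change the values rustc reports come from queries to LLVM's `TargetMachine` rather than being computed from the command line:
``` rust
fn main() {
    // The target_feature cfg values checked here are the ones rustc derives
    // for the selected target and -C target-feature flags.
    if cfg!(target_feature = "sse2") {
        println!("SSE2 is enabled for this target");
    } else {
        println!("SSE2 is not enabled for this target");
    }
}
```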
This commit removes all infrastructure from the repository for our so-called
snapshots to instead bootstrap the compiler from stable releases. Bootstrapping
from a previously stable release is a long-desired feature of distros because
they're not fans of downloading binary stage0 blobs from us. Additionally, this
makes our own CI easier as we can decommission all of the snapshot builders and
start having a regular cadence to when we update the stage0 compiler.
A new `src/etc/get-stage0.py` script was added which shares some code with
`src/bootstrap/bootstrap.py` to read a new file, `src/stage0.txt`, which lists
the current stage0 compiler as well as cargo that we bootstrap from. This script
will download the relevant `rustc` package and unpack it into `$target/stage0` as
we do today.
One problem of bootstrapping from stable releases is that we're not able to
compile unstable code (e.g. all the `#![feature]` directives in libcore/libstd).
To overcome this we employ two strategies:
* The bootstrap key of the previous compiler is hardcoded into `src/stage0.txt`
(enabled as a result of #32731) and exported by the build system. This enables
nightly features in the compiler we download.
* The standard library and compiler are pinned to a specific stage0, which
doesn't change, so we're guaranteed that we'll continue compiling as we start
from a known fixed source.
The process for making a release will also need to be tweaked now to continue the
cadence of bootstrapping from the previous release. This process looks like:
1. Merge `beta` to `stable`
2. Produce a new stable compiler.
3. Change `master` to bootstrap from this new stable compiler.
4. Merge `master` to `beta`
5. Produce a new beta compiler
6. Change `master` to bootstrap from this new beta compiler.
Step 3 above should involve very few changes as `master` was previously
bootstrapping from `beta` which is the same as `stable` at that point in time.
Step 6, however, is where we benefit from removing lots of `#[cfg(stage0)]` and
get to use new features. This also shouldn't slow the release too much as steps
1-5 require little work other than waiting, and step 6 just needs to happen at
some point during a release cycle; it's not time sensitive.
Closes #29555. Closes #29557.