Add s390x support
This adds support for building the Rust compiler and standard
library for s390x-linux, allowing a full cross-bootstrap sequence
to complete. This includes:
- Makefile/configure changes to allow native s390x builds
- Full Rust compiler support for the s390x C ABI
(only the non-vector ABI is supported at this point)
- Port of the standard library to s390x
- Update the liblibc submodule to a version including s390x support
- Testsuite fixes to allow clean "make check" on s390x
Caveats:
- Resets the base CPU to "z10" to bring support in sync with the default
behaviour of other compilers on the platform. (Usually, upstream
supports all older processors; a distribution build may then choose
to require a more recent base version.) (Also, using zEC12 causes
failures in the valgrind tests since valgrind doesn't fully support
this CPU yet.)
- z13 vector ABI is not yet supported. To ensure compatible code
generation, the -vector feature is passed to LLVM (see the sketch
after this list). Note that this means that even when compiling for
z13, no vector instructions will be used. In the future, support for
the vector ABI should be added (this will require common code support
for different ABIs that need different data_layout strings on the
same platform).
- Two test cases are (temporarily) ignored on s390x to allow passing
the test suite. The underlying issues still need to be fixed:
* debuginfo/simd.rs fails because of incorrect debug information.
This seems to be an LLVM bug (also seen with C code).
* run-pass/union/union-basic.rs simply seems to be incorrect for
all big-endian platforms.
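For illustration, here is a minimal, self-contained sketch of the two codegen defaults described in the caveats above (base CPU "z10", vector facility disabled). The struct, function, and field names are placeholders, not the actual rustc target-spec code:

```rust
// Illustrative only: the real change is in rustc's target specification,
// which has many more fields; this sketch just captures the two settings
// described above.
struct CodegenDefaultsSketch {
    cpu: &'static str,
    features: &'static str,
}

fn s390x_linux_defaults() -> CodegenDefaultsSketch {
    CodegenDefaultsSketch {
        cpu: "z10",          // base CPU, matching other compilers' defaults
        features: "-vector", // stay on the non-vector ABI, even for z13
    }
}

fn main() {
    let opts = s390x_linux_defaults();
    // Equivalent command-line override one could pass to rustc by hand:
    println!("-C target-cpu={} -C target-feature={}", opts.cpu, opts.features);
}
```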
Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
The first branch of the if statement already checks whether `buf.len() >= self.buf.capacity()`, so in the remaining branch `cmp::min(buf.len(), self.buf.capacity())` is redundant: the result will always be `buf.len()`. Therefore, we can pass the `buf` slice directly into `Write::write`.
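For context, here is a rough, self-contained stand-in for the code path in question; the type name and the surrounding bookkeeping are simplified assumptions, not the actual libstd source:

```rust
use std::io::{self, Write};

/// Minimal stand-in for the BufWriter code path in question; a sketch,
/// not the actual libstd implementation.
struct SketchWriter<W: Write> {
    inner: W,
    buf: Vec<u8>,
}

impl<W: Write> SketchWriter<W> {
    fn flush_buf(&mut self) -> io::Result<()> {
        self.inner.write_all(&self.buf)?;
        self.buf.clear();
        Ok(())
    }

    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        if self.buf.len() + buf.len() > self.buf.capacity() {
            self.flush_buf()?;
        }
        if buf.len() >= self.buf.capacity() {
            // Large writes bypass the internal buffer entirely.
            self.inner.write(buf)
        } else {
            // Here buf.len() < self.buf.capacity() already holds, so
            // cmp::min(buf.len(), self.buf.capacity()) would always be
            // buf.len(); the slice can be passed through unchanged.
            Write::write(&mut self.buf, buf)
        }
    }
}
```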
Update the wording for E0063. This truncates the list of missing fields to three.
Instead of listing every field, the error will now show: missing `a`, `z`, `b`, and 1 other field.
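As a purely hypothetical illustration (the struct and its fields below are made up), the truncated message would appear on code like this:

```rust
struct Record {
    a: u32,
    z: u32,
    b: u32,
    c: u32,
}

fn main() {
    // With this change, E0063 reports something like:
    //   missing `a`, `z`, `b` and 1 other field
    // instead of naming all four fields.
    let _ = Record {};
}
```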
This is for #35218 as part of #35233
r? @jonathandturner
Fix argument to FIONBIO ioctl
The FIONBIO ioctl takes as argument a pointer to an integer, which
should be either 0 or 1 to indicate whether nonblocking mode is to
be switched off or on. The type of the pointed-to variable is "int".
However, the set_nonblocking routine in libstd/sys/unix/net.rs passes
a pointer to a libc::c_ulong variable. This doesn't matter on all
32-bit platforms and on all little-endian platforms, but it will
break on big-endian 64-bit platforms.
Found while porting Rust to s390x (a big-endian 64-bit platform).
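A sketch of the corrected call, assuming the libc crate is available (the error handling around the actual libstd routine differs slightly):

```rust
use std::io;
use std::os::unix::io::RawFd;

// FIONBIO takes a pointer to an `int`: 0 = blocking, 1 = nonblocking.
// Passing a pointer to a c_ulong happens to work on 32-bit and on
// little-endian targets, but the kernel reads the wrong half of the
// word on big-endian 64-bit targets such as s390x.
fn set_nonblocking(fd: RawFd, nonblocking: bool) -> io::Result<()> {
    let mut nonblocking = nonblocking as libc::c_int; // was libc::c_ulong
    let r = unsafe { libc::ioctl(fd, libc::FIONBIO, &mut nonblocking) };
    if r == -1 {
        Err(io::Error::last_os_error())
    } else {
        Ok(())
    }
}
```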
Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Follow target ABI sign-/zero-extension rules for enum types
While attempting to port Rust to s390x, I ran into an ABI violation
(that caused rust_eh_personality to be miscompiled, breaking unwinding).
The problem is that this function returns an enum type, which is
supposed to be sign-extended according to the s390x ABI. However,
common code would ignore target sign-/zero-extension rules for any
types that do not satisfy is_integral(), which includes enums.
For the general case of Rust enum types, which map to structure types
with a discriminant, that seems correct. However, in the special case
of simple enums that map directly to C enum types (i.e. LLVM integers),
this is incorrect; we must follow the target extension rules for those.
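To illustrate the distinction (the type and function names below are made up), only the C-like case lowers to a plain LLVM integer and is therefore subject to the target's extension rules:

```rust
// A C-like enum is represented as a plain integer, so on targets whose
// calling convention sign-/zero-extends small integer return values
// (such as s390x), the callee must extend it like any other integer.
#[repr(C)]
pub enum Disposition {
    Continue = 0,
    Handled = 1,
}

// A data-carrying enum lowers to a struct with a discriminant; the
// integer extension rules do not apply to it.
pub enum Payload {
    Nothing,
    Value(u64),
}

pub extern "C" fn disposition_stub() -> Disposition {
    Disposition::Continue
}
```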
Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Changes the definition for opaque structs to look like `pub struct Vec<T>
{ /* fields omitted */ }` to save space on the page.
Also, empty braced structs now take up only one line.
Due to missing noalias annotations for &mut T in general (issue #31681),
in larger programs extend_from_slice and extend_with_element may both
compile very poorly. What is observed is that the .set_len() calls are
not lifted out of the loop, even for `Vec<u8>`.
Use a local length variable for the Vec length instead, and use a scope
guard to write this value back to self.len when the scope ends or on
panic. Then the alias analysis is easy.
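A minimal sketch of the scope-guard idea, using a made-up container type in place of Vec's internals (the names are illustrative, not the exact libcollections code):

```rust
/// Illustrative container with a plain `len` field standing in for
/// Vec's internal length.
struct Buffer {
    data: [u8; 64],
    len: usize,
}

/// Scope guard: commits the locally tracked length back to the field
/// when it goes out of scope, including during unwinding on panic.
struct SetLenOnDrop<'a> {
    len: &'a mut usize,
    local_len: usize,
}

impl<'a> Drop for SetLenOnDrop<'a> {
    fn drop(&mut self) {
        *self.len = self.local_len;
    }
}

impl Buffer {
    fn extend_with_element(&mut self, value: u8, n: usize) {
        assert!(self.len + n <= self.data.len());
        // Split the borrows so the guard owns `len` while the loop writes `data`.
        let Buffer { ref mut data, ref mut len } = *self;
        let mut guard = SetLenOnDrop { local_len: *len, len };
        for _ in 0..n {
            data[guard.local_len] = value;
            // Only the local counter changes inside the loop; the write to
            // the length field happens exactly once, in the guard's Drop.
            guard.local_len += 1;
        }
    }
}
```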
This affects extend_from_slice, extend_with_element, the vec![x; n]
macro, Write impls for Vec<u8>, BufWriter, etc. (though the issue may
or may not have triggered, since inlining can be enough for the
compiler to get it right).
Fix soundness bug described in #29859
This is an attempt at fixing the problems described in #29859, based on an IRC conversation between @nikomatsakis and me today. I'm waiting on a full build to come back; aside from that, both tests trigger the correct error.