This commit introduces 128-bit integers. Stage 2 builds and produces a working compiler which
understands and supports 128-bit integers throughout.
The general strategy used is to have a rustc_i128 module which provides aliases for iu128, equal to
iu64 in stage0 and iu128 later. Since nowhere in rustc do we rely on large numbers being supported,
this strategy is good enough to get past the first bootstrap stages and end up with a fully working
128-bit-capable compiler.
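Below is a minimal sketch (the shape is assumed, not the actual `rustc_i128` source) of what such an aliasing module can look like: while the bootstrap compiler has no 128-bit support the "128-bit" aliases are really 64 bits wide, and they switch to the true 128-bit primitives once the compiler building rustc understands them.

``` rust
// Hypothetical sketch of the aliasing strategy described above.
#[cfg(stage0)] // built by a bootstrap compiler without 128-bit integers
mod inner {
    pub type I128 = i64;
    pub type U128 = u64;
}

#[cfg(not(stage0))] // built by a compiler that already has i128/u128
mod inner {
    pub type I128 = i128;
    pub type U128 = u128;
}

pub use self::inner::{I128, U128};
```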
In order for this strategy to work, a number of locations had to be changed to use the associated
max_value/min_value functions instead of the MAX/MIN constants, and min_value (or was it max_value?)
had to be changed to use xor instead of shift so that both 64-bit and 128-bit based consteval work
(the former not necessarily producing the right results in stage 1).
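As a hedged illustration (not the exact libcore change), the two formulations differ like this for `i64`: the shift-based form depends on the width the constant evaluator actually computes with, while the xor-based form builds the value as "all bits set" XOR "all bits except the sign bit".

``` rust
// Illustrative only; names and exact expressions are assumptions.
const MIN_VIA_SHIFT: i64 = -1i64 << 63;                  // shift-based
const MIN_VIA_XOR: i64 = !0i64 ^ ((!0u64 >> 1) as i64);  // xor-based

fn main() {
    assert_eq!(MIN_VIA_SHIFT, i64::min_value());
    assert_eq!(MIN_VIA_XOR, i64::min_value());
}
```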
This commit includes manual merge conflict resolution changes from a rebase by @est31.
Historically this was done to accommodate bugs in lints, but there hasn't been a
lint bug that the warnings were affected by since this feature was added. Let's
completely purge warnings from all our stages by denying warnings in all stages.
This will also assist in tracking down `stage0` code to be removed whenever
we're updating the bootstrap compiler.
PTX support, take 2
- You can generate PTX using `--emit=asm` and the right (custom) target, which
you can then run on an NVIDIA GPU.
- You can compile `core` to PTX. [Xargo] also works and it can compile some
other crates like `collections` (but I doubt all of those make sense on a GPU)
[Xargo]: https://github.com/japaric/xargo
- You can create "global" functions, which can be "called" by the host, using
the `"ptx-kernel"` ABI, e.g. `extern "ptx-kernel" fn kernel() { .. }`. Every
other function is a "device" function and can only be called by the GPU.
- Intrinsics like `__syncthreads()` and `blockIdx.x` are available as
`"platform-intrinsics"`. These intrinsics are *not* in the `core` crate but
any Rust user can create "bindings" to them using an `extern
"platform-intrinsics"` block. See example at the end.
- Trying to emit PTX with `-g` (debuginfo) produces an LLVM error. But I don't
think PTX can contain debuginfo anyway, so `-g` should be ignored and a warning
should be printed ("`-g` doesn't work with this target" or something).
- "Single source" support. You *can't* write a single source file that contains
both host and device code. I think it should be possible to implement that
outside the compiler using compiler plugins / build scripts.
- The equivalent of CUDA `__shared__`, which is used to declare memory that's
shared between the threads of the same block. This could be implemented using
attributes: `#[shared] static mut SCRATCH_MEMORY: [f32; 64]` but hasn't been
implemented yet.
- Built-in targets. This PR doesn't add targets to the compiler just yet but one
can create custom targets to be able to emit PTX code (see the example at the
end). The idea is to have people experiment with this feature before
committing to it (built-in targets are "insta-stable")
- All functions must be "inlined". IOW, the `.rlib` must always contain the LLVM
bitcode of all the functions of the crate it was produced from. Otherwise, you
end with "undefined references" in the final PTX code but you won't get *any*
linker error because no linker is involved. IOW, you'll hit a runtime error
when loading the PTX into the GPU. The workaround is to use `#[inline]` on
non-generic functions and to never use `#[inline(never)]` but this may not
always be possible because e.g. you could be relying on third party code.
- Should `--emit=asm` generate a `.ptx` file instead of a `.s` file?
TL;DR Use Xargo to turn a crate into a PTX module (a `.s` file). Then pass that
PTX module, as a string, to the GPU and run it.
The full code is in [this repository]. This section gives an overview of how to
run Rust code on an NVIDIA GPU.
[this repository]: https://github.com/japaric/cuda
- Create a custom target. Here's the 64-bit NVPTX target (NOTE: the comments
are not valid because this is supposed to be a JSON file; remove them before
you use this file):
``` js
// nvptx64-nvidia-cuda.json
{
    "arch": "nvptx64",  // matches LLVM
    "cpu": "sm_20",  // "oldest" compute capability supported by LLVM
    "data-layout": "e-i64:64-v16:16-v32:32-n16:32:64",
    "llvm-target": "nvptx64-nvidia-cuda",
    "max-atomic-width": 0,  // LLVM errors with any other value :-(
    "os": "cuda",  // matches LLVM
    "panic-strategy": "abort",
    "target-endian": "little",
    "target-pointer-width": "64",
    "target-vendor": "nvidia"  // matches LLVM -- not required
}
```
(There's a 32-bit target specification in the linked repository)
- Write a kernel
``` rust
extern "platform-intrinsic" {
fn nvptx_block_dim_x() -> i32;
fn nvptx_block_idx_x() -> i32;
fn nvptx_thread_idx_x() -> i32;
}
/// Copies an array of `n` floating point numbers from `src` to `dst`
pub unsafe extern "ptx-kernel" fn memcpy(dst: *mut f32,
src: *const f32,
n: usize) {
let i = (nvptx_block_dim_x() as isize)
.wrapping_mul(nvptx_block_idx_x() as isize)
.wrapping_add(nvptx_thread_idx_x() as isize);
if (i as usize) < n {
*dst.offset(i) = *src.offset(i);
}
}
```
- Emit PTX code
```
$ xargo rustc --target nvptx64-nvidia-cuda --release -- --emit=asm
Compiling core v0.0.0 (file://..)
(..)
Compiling nvptx-builtins v0.1.0 (https://github.com/japaric/nvptx-builtins)
Compiling kernel v0.1.0
$ cat target/nvptx64-nvidia-cuda/release/deps/kernel-*.s
//
// Generated by LLVM NVPTX Back-End
//
.version 3.2
.target sm_20
.address_size 64
        // .globl       memcpy
.visible .entry memcpy(
        .param .u64 memcpy_param_0,
        .param .u64 memcpy_param_1,
        .param .u64 memcpy_param_2
)
{
        .reg .pred      %p<2>;
        .reg .s32       %r<5>;
        .reg .s64       %rd<12>;

        ld.param.u64    %rd7, [memcpy_param_2];
        mov.u32         %r1, %ntid.x;
        mov.u32         %r2, %ctaid.x;
        mul.wide.s32    %rd8, %r2, %r1;
        mov.u32         %r3, %tid.x;
        cvt.s64.s32     %rd9, %r3;
        add.s64         %rd10, %rd9, %rd8;
        setp.ge.u64     %p1, %rd10, %rd7;
        @%p1 bra        LBB0_2;
        ld.param.u64    %rd3, [memcpy_param_0];
        ld.param.u64    %rd4, [memcpy_param_1];
        cvta.to.global.u64      %rd5, %rd4;
        cvta.to.global.u64      %rd6, %rd3;
        shl.b64         %rd11, %rd10, 2;
        add.s64         %rd1, %rd6, %rd11;
        add.s64         %rd2, %rd5, %rd11;
        ld.global.u32   %r4, [%rd2];
        st.global.u32   [%rd1], %r4;
LBB0_2:
        ret;
}
```
- Run it on the GPU
``` rust
// `kernel.ptx` is the `*.s` file we got in the previous step
const KERNEL: &'static str = include_str!("kernel.ptx");
driver::initialize()?;
let device = Device(0)?;
let ctx = device.create_context()?;
let module = ctx.load_module(KERNEL)?;
let kernel = module.function("memcpy")?;
let h_a: Vec<f32> = /* create some random data */;
let h_b = vec![0.; N];
let d_a = driver::allocate(bytes)?;
let d_b = driver::allocate(bytes)?;
// Copy from host to GPU
driver::copy(h_a, d_a)?;
// Run `memcpy` on the GPU
kernel.launch(d_b, d_a, N)?;
// Copy from GPU to host
driver::copy(d_b, h_b)?;
// Verify
assert_eq!(h_a, h_b);
// `d_a`, `d_b`, `h_a`, `h_b` are dropped/freed here
```
---
cc @alexcrichton @brson @rkruppe
> What has changed since #34195?
- `core` now can be compiled into PTX. Which makes it very easy to turn `no_std`
crates into "kernels" with the help of Xargo.
- There's now a way, the `"ptx-kernel"` ABI, to generate "global" functions. The
old PR required a manual step (it was a hack) to "convert" "device" functions
into "global" functions. (Only "global" functions can be launched by the host)
- Everything is unstable. There are no "insta-stable" built-in targets this
time (\*). Users have to use a custom target to experiment with this
feature. Also, PTX intrinsics, like `__syncthreads` and `blockIdx.x`, are now
implemented as `"platform-intrinsics"` so they no longer live in the `core`
crate.
(\*) I'd actually like to have in-tree targets because that makes this target
more discoverable, removes the need to lug around .json files, etc.
However, bundling a target with the compiler immediately puts it in the path
towards stabilization. Which gives us just two cycles to find and fix any
problem with the target specification. Afterwards, it becomes hard to tweak
the specification because that could be a breaking change.
A possible solution could be "unstable built-in targets". Basically, to use an
unstable target, you'll have to also pass `-Z unstable-options` to the compiler.
And unstable targets, being unstable, wouldn't be available on stable.
> Why should this be merged?
- To let people experiment with the feature out of tree. Having easy access to
the feature (in every nightly) allows this. I also think that, as it is, it
should be possible to start prototyping type-safe single source support using
build scripts, macros and/or plugins.
- It's a straightforward implementation. No different than adding support for
any other architecture.
- `--emit=asm --target=nvptx64-nvidia-cuda` can be used to turn a crate
into a PTX module (a `.s` file).
- intrinsics like `__syncthreads` and `blockIdx.x` are exposed as
`"platform-intrinsics"`.
- "cabi" has been implemented for the nvptx and nvptx64 architectures.
i.e. `extern "C"` works.
- a new ABI, `"ptx-kernel"`. That can be used to generate "global"
functions. Example: `extern "ptx-kernel" fn kernel() { .. }`. All
other functions are "device" functions.
initial SPARC support
### UPDATE
Can now compile `no_std` executables with:
```
$ cargo new --bin app && cd $_
$ edit Cargo.toml && tail -n2 $_
[dependencies]
core = { path = "/path/to/rust/src/libcore" }
$ edit src/main.rs && cat $_
#![feature(lang_items)]
#![no_std]
#![no_main]
#[no_mangle]
pub fn _start() -> ! {
    loop {}
}
#[lang = "panic_fmt"]
fn panic_fmt() -> ! {
    loop {}
}
$ edit sparc-none-elf.json && cat $_
{
    "arch": "sparc",
    "data-layout": "E-m:e-p:32:32-i64:64-f128:64-n32-S64",
    "executables": true,
    "llvm-target": "sparc",
    "os": "none",
    "panic-strategy": "abort",
    "target-endian": "big",
    "target-pointer-width": "32"
}
$ cargo rustc --target sparc-none-elf -- -C linker=sparc-unknown-elf-gcc -C link-args=-nostartfiles
$ file target/sparc-none-elf/debug/app
app: ELF 32-bit MSB executable, SPARC, version 1 (SYSV), statically linked, not stripped
$ sparc-unknown-elf-readelf -h target/sparc-none-elf/debug/app
ELF Header:
Magic: 7f 45 4c 46 01 02 01 00 00 00 00 00 00 00 00 00
Class: ELF32
Data: 2's complement, big endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: Sparc
Version: 0x1
Entry point address: 0x10074
Start of program headers: 52 (bytes into file)
Start of section headers: 1188 (bytes into file)
Flags: 0x0
Size of this header: 52 (bytes)
Size of program headers: 32 (bytes)
Number of program headers: 2
Size of section headers: 40 (bytes)
Number of section headers: 14
Section header string table index: 11
$ sparc-unknown-elf-objdump -Cd target/sparc-none-elf/debug/app
target/sparc-none-elf/debug/app: file format elf32-sparc
Disassembly of section .text:
00010074 <_start>:
10074: 9d e3 bf 98 save %sp, -104, %sp
10078: 10 80 00 02 b 10080 <_start+0xc>
1007c: 01 00 00 00 nop
10080: 10 80 00 02 b 10088 <_start+0x14>
10084: 01 00 00 00 nop
10088: 10 80 00 00 b 10088 <_start+0x14>
1008c: 01 00 00 00 nop
```
---
Someone wants to attempt launching some Rust [into space](https://www.reddit.com/r/rust/comments/5h76oa/c_interop/) but their platform is based on the SPARCv8 architecture. Let's unblock them by enabling LLVM's SPARC backend.
Something very important that they'll also need is the "cabi" stuff as they'll be embedding some Rust code into a bigger C application (i.e. heavy use of `extern "C"`). The question there is what name(s) should we use for "target_arch" as the "cabi" implementation [varies according to that parameter](https://github.com/rust-lang/rust/blob/1.13.0/src/librustc_trans/abi.rs#L498-L523).
AFAICT, SPARCv8 is a 32-bit architecture and SPARCv9 is a 64-bit architecture. And, LLVM uses `sparc`, `sparcv9` and `sparcel` for [the architecture triple](ac1c94226e/include/llvm/ADT/Triple.h (L67-L69)) so perhaps we should use `target_arch = "sparc"` (32-bit) and `target_arch = "sparcv9"` (64-bit) as well.
r? @alexcrichton This PR only enables this LLVM backend when rustbuild is used. Do I also need to implement this for the old Makefile-based build system? Or are all our nightlies now being generated using rustbuild?
cc @brson
stdc++ is from base, and is an old library (GCC 4.2)
estdc++ is from ports, and is a recent library (GCC 4.9 currently)
as LLVM requires the newer version, use it when on OpenBSD.
Alignment was removed from createBasicType and moved to
- createGlobalVariable
- createAutoVariable
- createStaticMemberType (unused in Rust)
- createTempGlobalVariableFwdDecl (unused in Rust)
e69c459a6e
Add new #[target_feature = "..."] attribute.
This commit adds a new attribute that instructs the compiler to emit
target specific code for a single function. For example, the following
function is permitted to use instructions that are part of SSE 4.2:
#[target_feature = "+sse4.2"]
fn foo() { ... }
In particular, use of this attribute does not require setting the
-C target-feature or -C target-cpu options on rustc.
This attribute does not have any protections built into it. For example,
nothing stops one from calling the above `foo` function on hosts without
SSE 4.2 support. Doing so may result in a SIGILL.
I've also expanded the x86 target feature whitelist.
In LLVM 4.0, this enum becomes an actual type-safe enum, which breaks
all of the interfaces. Introduce our own copy of the bitflags that we
can then safely convert to the LLVM one.
[LLVM 4.0] Don't assume llvm::StringRef is null terminated
StringRefs have a length and their contents are not usually null-terminated. The solution is to either copy the string data (in `rustc_llvm::diagnostic`) or take the size into account (in LLVMRustPrintPasses).
I couldn't trigger a bug caused by this (apparently all the strings returned in practice are actually null-terminated) but this is more correct and more future-proof.
cc #37609
This commit adds a new attribute that instructs the compiler to emit
target specific code for a single function. For example, the following
function is permitted to use instructions that are part of SSE 4.2:
#[target_feature = "+sse4.2"]
fn foo() { ... }
In particular, use of this attribute does not require setting the
-C target-feature or -C target-cpu options on rustc.
This attribute does not have any protections built into it. For example,
nothing stops one from calling the above `foo` function on hosts without
SSE 4.2 support. Doing so may result in a SIGILL.
This commit also expands the target feature whitelist to include lzcnt,
popcnt and sse4a. Namely, lzcnt and popcnt have their own CPUID bits,
but were introduced with SSE4.
[LLVM 4.0] Use llvm::Attribute APIs instead of "raw value" APIs
The latter will be removed in LLVM 4.0 (see 4a6fc8bacf).
The librustc_llvm API remains mostly unchanged, except that llvm::Attribute is no longer a bitflag but represents only a *single* attribute.
The ability to store many attributes in a small number of bits and modify them without interacting with LLVM is only used in rustc_trans::abi and closely related modules, and only attributes for function arguments are considered there.
Thus rustc_trans::abi now has its own bit-packed representation of argument attributes, which are translated to rustc_llvm::Attribute when applying the attributes.
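A self-contained sketch (names are hypothetical, not the actual rustc_trans/rustc_llvm items) of the bit-packed-then-translate approach described above:

``` rust
// Stand-in for the type-safe attribute enum exposed by the LLVM bindings.
#[derive(Clone, Copy, Debug, PartialEq)]
enum LlvmAttribute {
    NoAlias,
    NonNull,
    SExt,
}

// Our own bit-packed set of argument attributes: cheap to copy and modify
// without talking to LLVM.
#[derive(Clone, Copy, Default)]
struct ArgAttributes(u16);

const NO_ALIAS: u16 = 1 << 0;
const NON_NULL: u16 = 1 << 1;
const SEXT: u16 = 1 << 2;

impl ArgAttributes {
    fn set(mut self, bit: u16) -> Self {
        self.0 |= bit;
        self
    }

    // Only when applying do we translate each set bit into a single LLVM attribute.
    fn for_each_attr<F: FnMut(LlvmAttribute)>(&self, mut f: F) {
        if self.0 & NO_ALIAS != 0 { f(LlvmAttribute::NoAlias); }
        if self.0 & NON_NULL != 0 { f(LlvmAttribute::NonNull); }
        if self.0 & SEXT != 0 { f(LlvmAttribute::SExt); }
    }
}

fn main() {
    let attrs = ArgAttributes::default().set(NO_ALIAS).set(NON_NULL);
    let mut applied = Vec::new();
    attrs.for_each_attr(|a| applied.push(a));
    assert_eq!(applied, [LlvmAttribute::NoAlias, LlvmAttribute::NonNull]);
}
```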
cc #37609
rustbuild: allow dynamically linking LLVM
The makefiles and `mklldeps.py` called `llvm-config --shared-mode` to
find out if LLVM defaulted to shared or static libraries, and just went
with that. But under rustbuild, `librustc_llvm/build.rs` was assuming
that LLVM should be static, and even forcing `--link-static` for 3.9+.
Now that build script also uses `--shared-mode` to learn the default,
which should work better for pre-3.9 configured for dynamic linking, as
it wasn't possible back then to choose differently via `llvm-config`.
Further, the configure script now has a new `--enable-llvm-link-shared`
option, which allows one to manually override `--link-shared` on 3.9+
instead of forcing static.
Update: There are now four static/shared scenarios that can happen
for the supported LLVM versions:
- 3.9+: By default use `llvm-config --link-static`
- 3.9+ and `--enable-llvm-link-shared`: Use `--link-shared` instead.
- 3.8: Use `llvm-config --shared-mode` and go with its answer.
- 3.7: Just assume static, maintaining the status quo.
to actually use the AAPCS calling convention
closes #37810
This is technically a [breaking-change] because it changes the ABI of
`extern "aapcs"` functions that (a) involve `f32`/`f64` arguments/return
values and (b) are compiled for arm-eabihf targets from
"aapcs-vfp" (wrong) to "aapcs" (correct).
Appendix:
What do these ABIs mean?
- In the "aapcs-vfp" ABI or "hard float" calling convention: Floating
point values are passed/returned through FPU registers (s0, s1, d0, etc.)
- Whereas, in the "aapcs" ABI or "soft float" calling convention:
Floating point values are passed/returned through general purpose
registers (r0, r1, etc.)
Mixing these ABIs can cause problems if the caller assumes that the
routine is using one of these ABIs but it's actually using the other
one.
rustc: Flag all builtins functions as hidden
When compiling compiler-rt you typically compile with `-fvisibility=hidden`
to ensure that all symbols are hidden in shared objects and don't show up
in symbol tables. This is important for these intrinsics being linked in every
crate to ensure that we're not unnecessarily bloating the public ABI of Rust
crates.
This should help the compiler-builtins project, with its Rust-defined builtins,
start landing in-tree as well.
to let people experiment with this target out of tree.
The MSP430 architecture is used in 16-bit microcontrollers commonly used
in Digital Signal Processing applications.
The `Linkage` enum in librustc_llvm got out of sync with the version in LLVM and it caused two variants of the #[linkage=""] attribute to break.
This adds the functions `LLVMRustGetLinkage` and `LLVMRustSetLinkage` which convert between the Rust Linkage enum and the LLVM one, which should stop this from breaking every time LLVM changes it.
Fixes #33992
A new target, `s390x-unknown-linux-gnu`, has been added to the compiler
and can be used to build no_core/no_std Rust programs.
Known limitations:
- librustc_trans/cabi_s390x.rs is missing. This means no support for
`extern "C" fn`.
- No support for this arch in libc. This means std cannot be cross compiled
for this target.
Macro expansions produce code tagged with debug locations that are completely different from the surrounding expressions. This wreaks havoc on the debugger's ability to step over source lines.
In order to have a good line stepping behavior in debugger, we overwrite debug locations of macro expansions with that of the outermost expansion site.
LLVM upgrade
As discussed in https://internals.rust-lang.org/t/need-help-with-emscripten-port/3154/46 I'm trying to update the used LLVM checkout in Rust.
I basically took @shepmaster's code and applied it on top (though I did the commits manually; the [original commits have better descriptions](https://github.com/rust-lang/rust/compare/master...avr-rust:avr-support)).
With these changes I was able to build rustc. `make check` throws one last error on `run-pass/issue-28950.rs`. Output: https://gist.github.com/badboy/bcdd3bbde260860b6159aa49070a9052
I took the metadata changes as is and they seem to work, though it now uses the module in another step. I'm not sure if this is the best and correct way.
Things to do:
* [x] ~~Make `run-pass/issue-28950.rs` pass~~ unrelated
* [x] Find out how the `PositionIndependentExecutable` setting is now used
* [x] Is the `llvm::legacy` still the right way to do these things?
cc @brson @alexcrichton
This is to pull in changes to support ARM MUSL targets.
This change also commits a couple of other cargo-generated changes
to other dependencies in the various Cargo.toml files.
This is a spiritual successor to #34268/8531d581, in which we replaced a
number of matches of None to the unit value with `if let` conditionals
where it was judged that this made for clearer/simpler code (as would be
recommended by Manishearth/rust-clippy's `single_match` lint). The same
rationale applies to matches of None to the empty block.
Compute `target_feature` from LLVM
This is a work-in-progress fix for #31662.
The logic that computes the target features from the command line has been replaced with queries to the `TargetMachine`.
When reusing a definition across codegen units, we obviously cannot use
internal linkage, but using external linkage means that we can end up
with multiple conflicting definitions of a single symbol across
multiple crates. Since the definitions should all be equal
semantically, we can use weak_odr linkage to resolve the situation.
Fixes #32518
We use a 64bit integer to pass the set of attributes that is to be
removed, but the called C function expects a 32bit integer. On most
platforms this doesn't cause any problems other than being unable to
unset some attributes, but on ARM even the lower 32bit aren't handled
correctly because the 64bit value is passed in different registers, so
the C function actually sees random garbage.
So we need to fix the relevant functions to use 32bit integers instead.
Additionally we need an implementation that actually accepts 64bit
integers because some attributes can only be unset that way.
Fixes #32360
`fast`, a.k.a. UnsafeAlgebra, is the flag for enabling all "unsafe"
(according to LLVM) float optimizations.
See LangRef for more information http://llvm.org/docs/LangRef.html#fast-math-flags
Providing these operations with less precise associativity rules (for
example) is useful to numerical applications.
For example, the summation loop:
    let mut sum = 0.;
    for element in data {
        sum += *element;
    }

Using the default floating point semantics, this loop expresses that the floats
must be added in a sequence, one after another. This constraint is usually
completely unintended, and it means that no autovectorization is possible.
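A hedged sketch of how this can be exercised from Rust (nightly only, via the unstable float fast-math intrinsics such as `fadd_fast`): each addition is emitted as an `fadd` carrying the `fast` flag, which lets LLVM reassociate the reduction and vectorize it.

``` rust
#![feature(core_intrinsics)]

fn sum_fast(data: &[f32]) -> f32 {
    let mut sum = 0.0_f32;
    for &x in data {
        // `fadd_fast` is the "unsafe algebra" addition; reassociation is allowed.
        sum = unsafe { std::intrinsics::fadd_fast(sum, x) };
    }
    sum
}

fn main() {
    let data = vec![0.5_f32; 1024];
    println!("{}", sum_fast(&data));
}
```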
Unfortunately on i686-pc-windows-gnu LLVM's answer to `--host-target` is
`x86_64-pc-windows-gnu` even though we're building in a 32-bit shell as well as
compiling 32-bit libraries. For now use Cargo's `HOST` environment variable to
determine whether we're doing a cross compilation or not.
Zeroing on-drop seems to work fine. Still thinking about the best way to approach zeroing on-move.
(based on top of the other drop PR; only the last 2 commits are relevant)
Currently all multi-host builds assume that the build platform can run the
`llvm-config` binary generated for each host platform we're creating a compiler
for. Unfortunately this assumption isn't always true when cross compiling, so we
need to handle this case.
This commit alters the build script of `rustc_llvm` to understand when the
`llvm-config` it's given was built for a platform other than the one we're running the build on.
Hopefully the author caught all the cases. For the mir_dynamic_drops_3 test case the ratio of
memsets to other instructions is 12%. On the other hand, we no longer double drop in MIR, at
least for the provided test cases.
The standard library doesn't depend on rustc_bitflags, so move it to explicit
dependencies on all other crates. Additionally, the arena/fmt_macros deps could
be dropped from libsyntax.
Have all Cargo-built crates pass `--cfg cargobuild` and then add appropriate
`#[cfg]` definitions to all crates to avoid linking anything if this is passed.
This should help allow libstd to compile with both the makefiles and with Cargo.
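A minimal sketch of the pattern (library name illustrative): when `--cfg cargobuild` is passed the Cargo build script is responsible for the native bits, so the crate itself only emits the link directive under the makefiles.

``` rust
#[cfg(not(cargobuild))]
#[link(name = "miniz", kind = "static")]
extern "C" {}
```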
This commits adds build scripts to the necessary Rust crates for all the native
dependencies. This is currently a duplication of the support found in mk/rt.mk
and is my best effort at representing the logic twice, but there may be some
unfortunate-and-inevitable divergence.
As a summary:
* alloc_jemalloc - build script to compile jemalloc
* flate - build script to compile miniz.c
* rustc_llvm - build script to run llvm-config and learn about how to link it.
Note that this crucially does not (and will not ever) compile LLVM as that would take
far too long.
* rustdoc - build script to compile hoedown
* std - script to determine lots of libraries/linkages as well as compile
libbacktrace
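As a rough illustration of what one of these scripts does (a hypothetical build.rs, written against today's `cc` crate rather than the `gcc` crate of the time), the `flate` script boils down to compiling miniz.c into a static library and letting Cargo link it:

``` rust
// build.rs (sketch); requires `cc` in [build-dependencies].
fn main() {
    cc::Build::new()
        .file("../rt/miniz.c") // path is illustrative
        .compile("miniz");
}
```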
This commit transitions the compiler to using the new exception handling
instructions in LLVM for implementing unwinding for MSVC. This affects both 32
and 64-bit MSVC as they're both now using SEH-based strategies. In terms of
standard library support, lots more details about how SEH unwinding is
implemented can be found in the commits.
In terms of trans, this change necessitated a few modifications:
* Branches were added to `trans::cleanup` to detect whether the old landingpad
instruction or the new cleanuppad instruction should be used.
* The return value from `cleanuppad` is not stored in an `alloca` (because it
cannot be).
* Each block in trans now has an `Option<LandingPad>` instead of `is_lpad: bool`
for indicating whether it's in a landing pad or not. The new exception
handling intrinsics require that on MSVC each `call` inside of a landing pad
is annotated with which landing pad that it's in. This change to the basic
block means that whenever a `call` or `invoke` instruction is generated we
know whether to annotate it as part of a cleanuppad or not.
* Lots of modifications were made to the instruction builders to construct the
new instructions as well as pass the tagging information for the call/invoke
instructions.
* The translation of the `try` intrinsics for MSVC has been overhauled to use
the new `catchpad` instruction. The filter function is now also a
rustc-generated function instead of a purely libstd-defined function. The
libstd definition still exists, it just has a stable ABI across architectures
and leaves some of the really weird implementation details to the compiler
(e.g. the `localescape` and `localrecover` intrinsics).
This brings some routine upgrades to the bundled LLVM that we're using, the most
notable of which is a bug fix to the way we handle range asserts when loading
the discriminant of an enum. This fix ended up being very similar to f9d4149c
where we basically can't have a range assert when loading a discriminant due to
filling drop, and appropriate flags were added to communicate this to
`trans::adt`.
This commit removes the `-D warnings` flag being passed through the makefiles to
all crates to instead be a crate attribute. We want these attributes always
applied for all our standard builds, and this is more amenable to Cargo-based
builds as well.
Note that all `deny(warnings)` attributes are gated with a `cfg(stage0)`
attribute currently to match the same semantics we have today.
LLVM was upgraded to a new version in this commit:
f9d4149c29
which was part of this pull request:
https://github.com/rust-lang/rust/issues/26025
Consider the following two lines from that commit:
f9d4149c29 (diff-a3b24dbe2ea7c1981f9ac79f9745f40aL462)
f9d4149c29 (diff-a3b24dbe2ea7c1981f9ac79f9745f40aL469)
The purpose of these lines is to register LLVM passes. Prior to that
commit, the passes being handled were assumed to be ModulePasses (a
specific type of LLVM pass) since they were being added to a ModulePass
manager. After that commit, both lines were refactored (presumably in an
attempt to DRY out the code), but the ModulePasses were changed to be
registered to a FunctionPass manager. This change resulted in
ModulePasses being run, but a Function object was being passed as a
parameter to the pass instead of a Module, which resulted in
segmentation faults.
In this commit, I changed relevant sections of the code to check the
type of the passes being added and register them to the appropriate pass
manager.
Closes https://github.com/rust-lang/rust/issues/31067
LLVM doesn't really support reusing the same module to emit more than
one file. One bug this causes is that the IR is invalidated by the stack
coloring pass when emitting the first file, and then the IR verifier
complains by the time we try to emit the second file. Also, we get
different binaries with --emit=asm,link than with just --emit=link. In
some cases leading to segfaults.
Unfortunately, it seems that at this point in time, the most sensible
option to circumvent this problem is to just clone the whole llvm module
for the asm output if we need both, asm and obj file output.
Fixes #24876. Fixes #26235.
This handles cases where the LLVM used isn't configured with the 'usual'
targets. Cases where LLVM is shared are also handled (i.e. with
`LD_LIBRARY_PATH` etc).
This commit is the standard API stabilization commit for the 1.6 release cycle.
The list of issues and APIs below have all been through their cycle-long FCP and
the libs team decisions are listed below
Stabilized APIs
* `Read::read_exact`
* `ErrorKind::UnexpectedEof` (renamed from `UnexpectedEOF`)
* libcore -- this was a bit of a nuanced stabilization, the crate itself is now
marked as `#[stable]` and the methods appearing via traits for primitives like
`char` and `str` are now also marked as stable. Note that the extension traits
themselves are marked as unstable as they're imported via the prelude. The
`try!` macro was also moved from the standard library into libcore to have the
same interface. Otherwise the functions all have copied stability from the
standard library now.
* The `#![no_std]` attribute
* `fs::DirBuilder`
* `fs::DirBuilder::new`
* `fs::DirBuilder::recursive`
* `fs::DirBuilder::create`
* `os::unix::fs::DirBuilderExt`
* `os::unix::fs::DirBuilderExt::mode`
* `vec::Drain`
* `vec::Vec::drain`
* `string::Drain`
* `string::String::drain`
* `vec_deque::Drain`
* `vec_deque::VecDeque::drain`
* `collections::hash_map::Drain`
* `collections::hash_map::HashMap::drain`
* `collections::hash_set::Drain`
* `collections::hash_set::HashSet::drain`
* `collections::binary_heap::Drain`
* `collections::binary_heap::BinaryHeap::drain`
* `Vec::extend_from_slice` (renamed from `push_all`)
* `Mutex::get_mut`
* `Mutex::into_inner`
* `RwLock::get_mut`
* `RwLock::into_inner`
* `Iterator::min_by_key` (renamed from `min_by`)
* `Iterator::max_by_key` (renamed from `max_by`)
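For reference, a tiny example exercising a few of the APIs stabilized above:

``` rust
fn main() {
    let mut v = vec![3, 1, 2];
    v.extend_from_slice(&[5, 4]);                       // renamed from push_all
    assert_eq!(v.iter().min_by_key(|&&x| x), Some(&1)); // renamed from min_by
    let first_two: Vec<i32> = v.drain(..2).collect();   // Vec::drain
    assert_eq!(first_two, [3, 1]);
    assert_eq!(v, [2, 5, 4]);
}
```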
Deprecated APIs
* `ErrorKind::UnexpectedEOF` (renamed to `UnexpectedEof`)
* `OsString::from_bytes`
* `OsStr::to_cstring`
* `OsStr::to_bytes`
* `fs::walk_dir` and `fs::WalkDir`
* `path::Components::peek`
* `slice::bytes::MutableByteVector`
* `slice::bytes::copy_memory`
* `Vec::push_all` (renamed to `extend_from_slice`)
* `Duration::span`
* `IpAddr`
* `SocketAddr::ip`
* `Read::tee`
* `io::Tee`
* `Write::broadcast`
* `io::Broadcast`
* `Iterator::min_by` (renamed to `min_by_key`)
* `Iterator::max_by` (renamed to `max_by_key`)
* `net::lookup_addr`
New APIs (still unstable)
* `<[T]>::sort_by_key` (added to mirror `min_by_key`)
Closes #27585, #27704, #27707, #27710, #27711, #27727, #27740, #27744, #27799, #27801
cc #27801 (doesn't close as `Chars` is still unstable)
Closes #28968
* Delete `sys::unix::{c, sync}` as these are now all folded into libc itself
* Update all references to use `libc` as a result.
* Update all references to the new flat namespace.
* Moves all windows bindings into sys::c
Rust's current compilation model makes it impossible on Windows to generate one
object file with a complete and final set of dllexport annotations. This is
because when an object is generated the compiler doesn't actually know if it
will later be included in a dynamic library or not. The compiler works around
this today by flagging *everything* as dllexport, but this has the drawback of
exposing too much.
Thankfully there are alternate methods of specifying the exported surface area
of a dll on Windows, one of which is passing a `*.def` file to the linker which
lists all public symbols of the dynamic library. This commit removes all
locations that add `dllexport` to LLVM variables and instead dynamically
generates a `*.def` file which is passed to the linker. This file will include
all the public symbols of the current object file as well as all upstream
libraries, and the crucial aspect is that it's only used when generating a
dynamic library. When generating an executable this file isn't generated, so
the symbols aren't exported from an executable.
To ensure that statically included native libraries are reexported correctly,
the previously added support for the `#[linked_from]` attribute is used to
determine the set of FFI symbols that are exported from a dynamic library, and
this is required to get the compiler to link correctly.
Makes the lint a bit more accurate, and improves the quality of the diagnostic
messages by explicitly returning an error message.
The new lint is also a little more aggressive: specifically, it now
rejects tuples, and it recurses into function pointers.
This commit moves the IR files in the distribution, rust_try.ll,
rust_try_msvc_64.ll, and rust_try_msvc_32.ll into the compiler from the main
distribution. There are a few reasons for this change:
* LLVM changes its IR syntax from time to time, so it's very difficult to
have these files build across many LLVM versions simultaneously. We'll likely
want to retain this ability for quite some time into the future.
* The implementation of these files is closely tied to the compiler and runtime
itself, so it makes sense to fold it into a location which can do more
platform-specific checks for various implementation details (such as MSVC 32
vs 64-bit).
* This removes LLVM as a build-time dependency of the standard library. This may
end up becoming very useful if we move towards building the standard library
with Cargo.
In the immediate future, however, this commit should restore compatibility with
LLVM 3.5 and 3.6.
Turns out for OSX our data layout was subtly wrong and the LLVM update must have
exposed this. Instead of fixing this I've removed all data layouts from the
compiler to just use the defaults that LLVM provides for all targets. All data
layouts (and a number of dead modules) are removed from the compiler here.
Custom target specifications can still provide a custom data layout, but it is
now an optional key as the default will be used if one isn't specified.
Updates our LLVM bindings to be able to write out multiple kinds of archives.
This commit also enables using LLVM instead of the system ar on all current
targets.
The C API of this function changed so it no longer takes a personality function.
A shim was introduced to call the right LLVM function (depending on which
version we're compiled against) to set the personality function on the outer
function.
The compiler only ever sets one personality function for all generated
functions, so this should be equivalent.
We have previously always relied upon an external tool, `ar`, to modify archives
that the compiler produces (staticlibs, rlibs, etc). This approach, however, has
a number of downsides:
* Spawning a process is relatively expensive for small compilations
* Encoding arguments across process boundaries often incurs unnecessary overhead
or lossiness. For example `ar` has a tough time dealing with files that have
the same name in archives, and the compiler copies many files around to ensure
they can be passed to `ar` in a reasonable fashion.
* Most `ar` programs found do **not** have the ability to target arbitrary
platforms, so this is an extra tool which needs to be found/specified when
cross compiling.
The LLVM project has had a tool called `llvm-ar` for quite some time now, but it
wasn't available in the standard LLVM libraries (it was just a standalone
program). Recently, however, in LLVM 3.7, this functionality has been moved to a
library and is now accessible by consumers of LLVM via the `writeArchive`
function.
This commit migrates our archive bindings to no longer invoke `ar` by default
but instead make a library call to LLVM to do various operations. This solves
all of the downsides listed above:
* Archive management is now much faster, for example creating a "hello world"
staticlib is now 6x faster (50ms => 8ms). Linking dynamic libraries also
recently started requiring modification of rlibs, and linking a hello world
dynamic library is now 2x faster.
* The compiler is now one step closer to "hassle free" cross compilation because
no external tool is needed for managing archives, LLVM does the right thing!
This commit does not remove support for calling a system `ar` utility currently.
We will continue to maintain compatibility with LLVM 3.5 and 3.6 looking forward
(so the system LLVM can be used wherever possible), and in these cases we must
shell out to a system utility. All nightly builds of Rust, however, will stop
needing a system `ar`.
This commit shards the all-encompassing `core`, `std_misc`, `collections`, and `alloc` features into finer-grained components that are much more easily opted into and tracked. This reflects the effort to push forward current unstable APIs to either stabilization or removal. Keeping track of unstable features on a much more fine-grained basis will enable the library subteam to quickly analyze a feature and help prioritize internally about what APIs should be stabilized.
A few assorted APIs were deprecated along the way, but otherwise this change is just changing the feature name associated with each API. Soon we will have a dashboard for keeping track of all the unstable APIs in the standard library, and I'll also start making issues for each unstable API after performing a first-pass for stabilization.
This commit updates the LLVM submodule in use to the current HEAD of the LLVM
repository. This is primarily being done to start picking up unwinding support
for MSVC, which is currently unimplemented in the revision of LLVM we are using.
Along the way a few changes had to be made:
* As usual, lots of C++ debuginfo bindings in LLVM changed, so there were some
significant changes to our RustWrapper.cpp
* As usual, some pass management changed in LLVM, so clang was re-scrutinized to
ensure that we're doing the same thing as clang.
* Some optimization options are now passed directly into the
`PassManagerBuilder` instead of through CLI switches to LLVM.
* The `NoFramePointerElim` option was removed from LLVM in favor of the
`no-frame-pointer-elim` function attribute.
Additionally, LLVM has picked up some new optimizations which required fixing an
existing soundness hole in the IR we generate. It appears that the current LLVM
we use does not expose this hole. When an enum is moved, the previous slot in
memory is overwritten with a bit pattern corresponding to "dropped". When the
drop glue for this slot is run, however, the switch on the discriminant can
often start executing the `unreachable` block of the switch due to the
discriminant now being outside the normal range. This was patched over locally
for now by having the `unreachable` block just change to a `ret void`.
* Removes `RustJITMemoryManager` from public API.
This was really sort of an implementation detail to begin with.
* `__morestack` is linked to C++ wrapper code and this pointer
is used when resolving the symbol for `ExecutionEngine` code.
* `__morestack_addr` is also resolved for `ExecutionEngine` code.
This function is sometimes referenced in LLVM-generated code,
but was not able to be resolved on Mac OS systems.
* Added Windows support to `ExecutionEngine` API.
* Added a test for basic `ExecutionEngine` functionality.
This commit introduces a third parameter for compatible_ifn!, as new
intrinsics are being added in recent LLVM releases and there is no
need to hardcode a specific case.
Signed-off-by: Luca Bruno <lucab@debian.org>
Special thanks to @retep998 for the [excellent writeup](https://github.com/rust-lang/rfcs/issues/1061) of tasks to be done and @ricky26 for initially blazing the trail here!
# MSVC Support
This goal of this series of commits is to add MSVC support to the Rust compiler
and build system, allowing it more easily interoperate with Visual Studio
installations and native libraries compiled outside of MinGW.
The tl;dr; of this change is that there is a new target of the compiler,
`x86_64-pc-windows-msvc`, which will not interact with the MinGW toolchain at
all and will instead use `link.exe` to assemble output artifacts.
## Why try to use MSVC?
With today's Rust distribution, when you install a compiler on Windows you also
install `gcc.exe` and a number of supporting libraries by default (this can be
opted out of). This allows installations to remain independent of MinGW
installations, but it still generally requires native code to be linked with
MinGW instead of MSVC. Some more background can also be found in #1768 about the
incompatibilities between MinGW and MSVC.
Overall the current installation strategy is quite nice so long as you don't
interact with native code, but once you do the usage of a MinGW-based `gcc.exe`
starts to get quite painful.
Relying on a nonstandard Windows toolchain has also been a long-standing "code
smell" of Rust and has been slated for remedy for quite some time now. Using a
standard toolchain is a great motivational factor for improving the
interoperability of Rust code with the native system.
## What does it mean to use MSVC?
"Using MSVC" can be a bit of a nebulous concept, but this PR defines it as:
* The build system for Rust will build as much code as possible with the MSVC
compiler, `cl.exe`.
* The build system will use native MSVC tools for managing archives.
* The compiler will link all output with `link.exe` instead of `gcc.exe`.
None of these are currently implemented today, but all are required for the
compiler to fluently interoperate with MSVC.
## How does this all work?
At the highest level, this PR adds a new target triple to the Rust compiler:
x86_64-pc-windows-msvc
All logic for using MSVC or not is scoped within this triple and code can
conditionally build for MSVC or MinGW via:
#[cfg(target_env = "msvc")]
It is expected that auto builders will be set up for MSVC-based compiles in
addition to the existing MinGW-based compiles, and we will likely soon start
shipping MSVC nightlies where `x86_64-pc-windows-msvc` is the host target triple
of the compiler.
# Summary of changes
Here I'll explain at a high level what many of the changes made were targeted
at, but many more details can be found in the commits themselves. Many thanks to
@retep998 for the excellent writeup in rust-lang/rfcs#1061 and @ricky26 for a lot
of the initial proof-of-concept work!
## Build system changes
As is probably expected, a large chunk of this PR is changes to Rust's build
system to build with MSVC. At a high level **it is an explicit non goal** to
enable building outside of a MinGW shell, instead all Makefile infrastructure we
have today is retrofitted with support to use MSVC instead of the standard MinGW
toolchain. Some of the high-level changes are:
* The configure script now detects when MSVC is being targeted and adds a number
of additional requirements about the build environment:
* The `--msvc-root` option must be specified or `cl.exe` must be in PATH to
discover where MSVC is installed. The compiler in use is also required to
target x86_64.
* Once the MSVC root is known, the INCLUDE/LIB environment variables are
scraped so they can be reexported by the build system.
* CMake is required to build LLVM with MSVC (and LLVM is also configured with
CMake instead of the normal configure script).
* jemalloc is currently unconditionally disabled for MSVC targets as jemalloc
isn't a hard requirement and I don't know how to build it with MSVC.
* Invocations of a C and/or C++ compiler are now abstracted behind macros to
appropriately call the underlying compiler with the correct format of
arguments, for example there is now a macro for "assemble an archive from
objects" instead of hard-coded invocations of `$(AR) crus liboutput.a ...`
* The output filenames for standard libraries such as morestack/compiler-rt are
now "more correct" on windows as they are shipped as `foo.lib` instead of
`libfoo.a`.
* Rust targets can now depend on native tools provided by LLVM, and as you'll
see in the commits the entire MSVC target depends on `llvm-ar.exe`.
* Support for custom arbitrary makefile dependencies of Rust targets has been
added. The MSVC target for `rustc_llvm` currently requires a custom `.DEF`
file to be passed to the linker to get further linkages to complete.
## Compiler changes
The modifications made to the compiler have so far largely been minor tweaks
here and there, mostly just adding a layer of abstraction over whether MSVC or a
GNU-like linker is being used. At a high-level these changes are:
* The section name for metadata storage in dynamic libraries is called `.rustc`
for MSVC-based platforms as section names cannot contain more than 8
characters.
* The implementation of `rustc_back::Archive` was refactored, but the
functionality has remained the same.
* Targets can now specify the default `ar` utility to use, and for MSVC this
defaults to `llvm-ar.exe`
* The building of the linker command in `rustc_trans::back::link` has been
abstracted behind a trait for the same code path to be used between GNU and
MSVC linkers.
## Standard library changes
Only a few small changes were required to the standard library itself, and only
for minor differences between the C runtime of msvcrt.dll and MinGW's libc.a
* Some function names for floating point functions have leading underscores, and
some are not present at all.
* Linkage to the `advapi32` library for crypto-related functions is now
explicit.
* Some small bits of C code here and there were fixed for compatibility with
MSVC's cl.exe compiler.
# Future Work
This commit is not yet a 100% complete port to using MSVC as there are still
some key components missing as well as some unimplemented optimizations. This PR
is already getting large enough that I wanted to draw the line here, but here's
a list of what is not implemented in this PR, on purpose:
## Unwinding
The revision of our LLVM submodule [does not seem to support][llvm] lowering
SEH exception handling on the Windows MSVC targets, so
unwinding support is not currently implemented for the standard library (it's
lowered to an abort).
[llvm]: https://github.com/rust-lang/llvm/blob/rust-llvm-2015-02-19/lib/CodeGen/Passes.cpp#L454-L461
It looks like, however, that upstream LLVM has quite a bit more support for SEH
unwinding and landing pads than the current revision we have, so adding support
will likely just involve updating LLVM and then adding some shims of our own
here and there.
## dllimport and dllexport
An interesting part of Windows which MSVC forces our hand on (and apparently
MinGW didn't) is the usage of `dllimport` and `dllexport` attributes in LLVM IR
as well as native dependencies (in C these correspond to
`__declspec(dllimport)`).
Whenever a dynamic library is built by MSVC it must have its public interface
specified by functions tagged with `dllexport` or otherwise they're not
available to be linked against. This poses a few problems for the compiler, some
of which are somewhat fundamental, but this commit alters the compiler to attach
the `dllexport` attribute to all LLVM functions that are reachable (e.g. they're
already tagged with external linkage). This is suboptimal for a few reasons:
* If an object file will never be included in a dynamic library, there's no need
to attach the dllexport attribute. Most object files in Rust are not destined
to become part of a dll as binaries are statically linked by default.
* If the compiler is emitting both an rlib and a dylib, the same source object
file is currently used but with MSVC this may be less feasible. The compiler
may be able to get around this, but it may involve some invasive changes to
deal with this.
The flipside of this situation is that whenever you link to a dll and you import
a function from it, the import should be tagged with `dllimport`. At this time,
however, the compiler does not emit `dllimport` for any declarations other than
constants (where it is required), which is again suboptimal for even more
reasons!
* Calling a function imported from another dll without using `dllimport` causes
the linker/compiler to have extra overhead (one `jmp` instruction on x86) when
calling the function.
* The same object file may be used in different circumstances, so a function may
be imported from a dll if the object is linked into a dll, but it may be
just linked against if linked into an rlib.
* The compiler has no knowledge about whether native functions should be tagged
dllimport or not.
For now the compiler takes the perf hit (I do not have any numbers to this
effect) by marking very little as `dllimport` and praying the linker will take
care of everything. Fixing this problem will likely require adding a few
attributes to Rust itself (feature gated at the start) and then strongly
recommending static linkage on Windows! This may also involve shipping a
statically linked compiler on Windows instead of a dynamically linked compiler,
but these sorts of changes are pretty invasive and aren't part of this PR.
## CI integration
Thankfully we don't need to set up a new snapshot bot for the changes made here as our snapshots are freestanding already, we should be able to use the same snapshot to bootstrap both MinGW and MSVC compilers (once a new snapshot is made from these changes).
I plan on setting up a new suite of auto bots which are testing MSVC configurations for now as well, for now they'll just be bootstrapping and not running tests, but once unwinding is implemented they'll start running all tests as well and we'll eventually start gating on them as well.
---
I'd love as many eyes on this as we've got as this was one of my first interactions with MSVC and Visual Studio, so there may be glaring holes that I'm missing here and there!
cc @retep998, @ricky26, @vadimcn, @klutzy
r? @brson
For imports of constants across DLLs to work on Windows it *requires* that the
import be marked with `dllimport` (unlike functions where the marker is
optional, but strongly recommended). This currently isn't working for importing
FFI constants across boundaries, however, so the one constant exported from
`rustc_llvm.dll` is now a function to be called instead.
These new intrinsics are comparable to `atomic_signal_fence` in C++,
ensuring the compiler will not reorder memory accesses across the
barrier, nor will it emit any machine instructions for it.
Closes #24118, implementing RFC 888.
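A hedged illustration of the intended use: the single-thread fence intrinsics added here are the machinery behind what std exposes today as `std::sync::atomic::compiler_fence`. The fence stops the compiler from reordering the two stores (useful e.g. relative to a signal handler running on the same thread) while emitting no machine instruction.

``` rust
use std::sync::atomic::{compiler_fence, AtomicBool, AtomicUsize, Ordering};

static DATA: AtomicUsize = AtomicUsize::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn publish() {
    DATA.store(42, Ordering::Relaxed);
    compiler_fence(Ordering::Release); // compiler barrier only, no CPU fence emitted
    READY.store(true, Ordering::Relaxed);
}

fn main() {
    publish();
    assert_eq!(DATA.load(Ordering::Relaxed), 42);
    assert!(READY.load(Ordering::Relaxed));
}
```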
When linking an archive statically to an rlib, the compiler will extract all
contents of the archive and add them all to the rlib being generated. The
current method of extraction is to run `ar x`, dumping all files into a
temporary directory. Object archives, however, are allowed to have multiple
entries with the same file name, so there is no method for them to extract their
contents into a directory in a lossless fashion.
This commit adds iterator support to the `ArchiveRO` structure which hooks into
LLVM's support for reading object archives. This iterator is then used to
inspect each object in turn and extract it to a unique location for later
assembly.
const_eval : add overflow-checking for {`+`, `-`, `*`, `/`, `<<`, `>>`}.
One tricky detail here: There is some duplication of labor between `rustc::middle::const_eval` and `rustc_trans::trans::consts`. It might be good to explore ways to try to factor out the common structure to the two passes (by abstracting over the particular value-representation used in the compile-time interpreter).
----
Update: Rebased atop #23841. Fix #22531, Fix #23030, Fix #23221, Fix #23235.
Add option-returning variants to `const_to_int`/`const_to_uint` that
never assert fail. (These will be used for overflow checking from
rustc_trans::trans::consts.)
This commit cleans out a large amount of deprecated APIs from the standard
library and some of the facade crates as well, updating all users in the
compiler and in tests as it goes along.
This permits all coercions to be performed in casts, but adds lints to warn in those cases.
Part of this patch moves cast checking to a later stage of type checking. We acquire obligations to check casts as part of type checking where we previously checked them. Once we have type checked a function or module, then we check any cast obligations which have been acquired. That means we have more type information available to check casts (this was crucial to making coercions work properly in place of some casts), but it means that casts cannot feed input into type inference.
[breaking change]
* Adds two new lints for trivial casts and trivial numeric casts, these are warn by default, but can cause errors if you build with warnings as errors. Previously, trivial numeric casts and casts to trait objects were allowed.
* The unused casts lint has gone.
* Interactions between casting and type inference have changed in subtle ways. Two ways this might manifest are:
- You may need to 'direct' casts more with extra type information, for example, in some cases where `foo as _ as T` succeeded, you may now need to specify the type for `_`
- Casts do not influence inference of integer types. E.g., the following used to type check:
```
let x = 42;
let y = &x as *const u32;
```
Because the cast would inform inference that `x` must have type `u32`. This no longer applies and the compiler will fallback to `i32` for `x` and thus there will be a type error in the cast. The solution is to add more type information:
```
let x: u32 = 42;
let y = &x as *const u32;
```
at least that's what the docs say: http://doc.rust-lang.org/std/slice/fn.from_raw_parts.html
A few situations got prettier. In some situations the mutability of the resulting and source pointers differed (and was cast away by transmute); the mutability matches now.
This commit stabilizes essentially all of the new `std::path` API. The
API itself is changed in a couple of ways (which brings it in closer
alignment with the RFC):
* `.` components are now normalized away, unless they appear at the
start of a path. This in turn affects the semantics of e.g. asking for
the file name of `foo/` or `foo/.`, both of which yield `Some("foo")`
now. These are the semantics the original RFC specified, and are also
desirable given early experience rolling out the new API.
* The `parent` function now succeeds if, and only if, the path has at
least one non-root/prefix component. This change affects `pop` as
well.
* The `Prefix` component now involves a separate `PrefixComponent`
struct, to better allow for keeping both parsed and unparsed prefix data.
In addition, the `old_path` module is now deprecated.
Closes #23264
[breaking-change]
This commit deprecates the majority of std::old_io::fs in favor of std::fs and
its new functionality. Some functions remain non-deprecated but are now behind a
feature gate called `old_fs`. These functions will be deprecated once
suitable replacements have been implemented.
The compiler has been migrated to new `std::fs` and `std::path` APIs where
appropriate as part of this change.
This commit is an implementation of [RFC 592][r592] and [RFC 840][r840]. These
two RFCs tweak the behavior of `CString` and add a new `CStr` unsized slice type
to the module.
[r592]: https://github.com/rust-lang/rfcs/blob/master/text/0592-c-str-deref.md
[r840]: https://github.com/rust-lang/rfcs/blob/master/text/0840-no-panic-in-c-string.md
The new `CStr` type is only constructible via two methods:
1. By `deref`'ing from a `CString`
2. Unsafely via `CStr::from_ptr`
The purpose of `CStr` is to be an unsized type which is a thin pointer to a
`libc::c_char` (currently it is a fat pointer slice due to implementation
limitations). Strings from C can be safely represented with a `CStr` and an
appropriate lifetime as well. Consumers of `&CString` should now consume `&CStr`
instead to allow producers to pass in C-originating strings instead of just
Rust-allocated strings.
A new constructor was added to `CString`, `new`, which takes `T: IntoBytes`
instead of separate `from_slice` and `from_vec` methods (both have been
deprecated in favor of `new`). The `new` method returns a `Result` instead of
panicking. The error variant contains the relevant information about where the
error happened and bytes (if present). Conversions are provided to the
`io::Error` and `old_io::IoError` types via the `FromError` trait which
translate to `InvalidInput`.
This is a breaking change due to the modification of existing `#[unstable]` APIs
and new deprecation, and more detailed information can be found in the two RFCs.
Notable breakage includes:
* All construction of `CString` now needs to use `new` and handle the outgoing
`Result`.
* Usage of `CString` as a byte slice now explicitly needs a `.as_bytes()` call.
* The `as_slice*` methods have been removed in favor of just having the
`as_bytes*` methods.
Closes #22469. Closes #22470.
[breaking-change]
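A short sketch of the new surface described above:

``` rust
use std::ffi::{CStr, CString};

// Consumers take &CStr, so both Rust-allocated and C-originating strings work.
fn byte_len(s: &CStr) -> usize {
    s.to_bytes().len()
}

fn main() {
    // `new` returns a Result instead of panicking on interior NULs.
    let owned = CString::new("hello").unwrap();
    assert!(CString::new("he\0llo").is_err());

    // &CString derefs to &CStr.
    assert_eq!(byte_len(&owned), 5);
}
```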
This commit is an implementation of [RFC 823][rfc] which is another pass over
the `std::hash` module for stabilization. The contents of the module were not
entirely marked stable, but some portions which remained quite similar to the
previous incarnation are now marked `#[stable]`. Specifically:
[rfc]: https://github.com/rust-lang/rfcs/blob/master/text/0823-hash-simplification.md
* `std::hash` is now stable (the name)
* `Hash` is now stable
* `Hash::hash` is now stable
* `Hasher` is now stable
* `SipHasher` is now stable
* `SipHasher::new` and `new_with_keys` are now stable
* `Hasher for SipHasher` is now stable
* Many `Hash` implementations are now stable
All other portions of the `hash` module remain `#[unstable]` as they are less
commonly used and were recently redesigned.
This commit is a breaking change due to the modifications to the `std::hash` API
and more details can be found on the [RFC][rfc].
Closes #22467
[breaking-change]
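A tiny sketch of the now-stable pieces (note that `SipHasher` has since been deprecated in favor of `DefaultHasher`, so current compilers warn here):

``` rust
#![allow(deprecated)]

use std::hash::{Hash, Hasher, SipHasher};

#[derive(Hash)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let mut hasher = SipHasher::new_with_keys(0, 0);
    Point { x: 1, y: 2 }.hash(&mut hasher);
    println!("{:x}", hasher.finish());
}
```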
This commit renames the features for the `std::old_io` and `std::old_path`
modules to `old_io` and `old_path` to help facilitate migration to the new APIs.
This is a breaking change as crates which mention the old feature names now need
to be renamed to use the new feature names.
[breaking-change]
This was particularly helpful in the time just after OIBIT's
implementation to make sure things that were supposed to be Copy
continued to be, but it now creates a lot of noise for types that
intentionally don't want to be Copy.
r? @alexcrichton