windows: Copy libwinpthread-1.dll into libdir bin
Recently we switched from the win32 MinGW toolchain to the pthreads-based
toolchain. We ship `gcc.exe` from this toolchain with the `rust-mingw` package
in the standard distribution, but the pthreads version of `gcc.exe` depends on
`libwinpthread-1.dll`. While we're shipping this DLL for the compiler to depend
on, we're not shipping it for gcc itself. As a workaround, just copy the DLL to
gcc.exe's location and don't attempt to share it for now.
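For reference, the workaround amounts to something like this (a minimal sketch; the function name and both path parameters are placeholders, not the actual dist-step code):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Place libwinpthread-1.dll next to the shipped gcc.exe so the
// pthreads-based toolchain can run standalone.
fn copy_winpthread(mingw_bin: &Path, dist_bin: &Path) -> io::Result<()> {
    fs::copy(
        mingw_bin.join("libwinpthread-1.dll"),
        dist_bin.join("libwinpthread-1.dll"),
    )?;
    Ok(())
}
```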
cc https://github.com/rust-lang/rust/issues/31840#issuecomment-297478538
query for describe_def
Resolves the `fn describe_def(&self, def: DefId) -> Option<Def>;` item of #41417.
r? @nikomatsakis. I would greatly appreciate a review; I hope I covered everything described in the PR.
Currently our slowest test suite on Android, run-pass, takes over 5 times longer
than the x86_64 component (~400s -> ~2200s). QEMU emulation does indeed add
overhead, but typically not 5x for this kind of workload. One of the slowest
parts of the Android process is that *compilation* happens serially. Tests
themselves need to run single-threaded on the emulator (due to how the test
harness works), and this forces the compiles themselves to be single-threaded.
Now Travis gives us more than one core per machine, so it'd be much better if we
could take advantage of them! The emulator itself is still fundamentally
single-threaded, but we should see a nice speedup from being able to feed it
binaries to run much more quickly.
It turns out that we've already got all the tools to do this in-tree. The
qemu-test-{server,client} that are in use for the ARM Linux testing are a
perfect match for the Android emulator. This commit migrates the custom adb
management code in compiletest/rustbuild to the same qemu-test-{server,client}
implementation that ARM Linux uses.
This allows us to lift the parallelism restriction on the compiletest test
suites, namely run-pass. Consequently, although we'll still basically run the
tests themselves in single-threaded mode, we'll be able to compile all of them
in parallel, keeping the pipeline much more full and using more cores for the
work at hand. Additionally, the architecture here should be a bit speedier, as
it has less overhead than adb, which is a whole new process on both the host
and the emulator!
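To make the mechanism concrete, here's a heavily simplified sketch of the server side of the idea (the port, paths, and length-prefixed framing are assumptions for illustration, not the in-tree protocol): the client pushes a test binary over TCP, and the server writes it out, runs it, and reports the exit status back.

```rust
use std::io::{Read, Write};
use std::net::TcpListener;
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Hypothetical port; the emulator forwards it to the host.
    let listener = TcpListener::bind("0.0.0.0:12345")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // Receive the test binary (length-prefixed for simplicity).
        let mut len_buf = [0u8; 8];
        stream.read_exact(&mut len_buf)?;
        let len = u64::from_le_bytes(len_buf) as usize;
        let mut binary = vec![0u8; len];
        stream.read_exact(&mut binary)?;
        let path = "/data/local/tmp/test"; // hypothetical location
        std::fs::write(path, &binary)?;
        #[cfg(unix)]
        {
            use std::os::unix::fs::PermissionsExt;
            std::fs::set_permissions(path, std::fs::Permissions::from_mode(0o755))?;
        }
        // Run the test and send its exit code back to the client.
        let status = Command::new(path).status()?;
        stream.write_all(&[status.code().unwrap_or(1) as u8])?;
    }
    Ok(())
}
```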
Locally on an 8-core machine I've seen the run-pass test suite speed up from
taking nearly an hour to only taking 6 minutes. I don't think we'll see quite as
drastic a speedup on Travis, but I'm hoping this change can place the Android
tests well below 2 hours instead of just above 2 hours.
Because the client/server here are now repurposed for more than just QEMU,
they've been renamed to `remote-test-{server,client}`.
Note that this PR does not currently modify how debuginfo tests are executed on
Android. While parallelizable, it wouldn't be quite as easy, so that's left for
another day. Thankfully that test suite is much smaller than the run-pass test
suite.
As a final fix, I discovered that the ARM and Android test suites were actually
running all library unit tests (e.g. stdtest, coretest, etc.) twice. I've
corrected that to only run the tests once, which should also give a nice boost
to overall cycle time here.
typeck: resolve type vars before calling `try_index_step`
`try_index_step` does not resolve type variables by itself and would
fail otherwise. Also harden the failure path in `confirm` to cause less
confusing errors.
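For illustration, a hedged guess at the shape of code involved (not the exact program from the issue): the indexed expression's type starts out as an inference variable, and typeck has to resolve it before `try_index_step` can see the `Vec` underneath.

```rust
fn main() {
    let mut v = Vec::new(); // v: Vec<_>, an unresolved type variable
    v.push(1u8);            // constrains the variable to u8
    let first = v[0];       // indexing must resolve the variable to find Vec<u8>
    println!("{}", first);
}
```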
r? @eddyb
Fixes #41498.
Beta-nominating because this is a regression (caused by #41279).
Adding links and examples for various mpsc pages #29377
Adding links and copying examples for the various iterator types; adding some extra documentation to `Sender`/`SyncSender`/`Receiver`.
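For context, a typical round-trip of the kind these examples cover; `Receiver` also works as an iterator over received values:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        tx.send(42).expect("receiver should still be alive");
    });
    // Receiver implements IntoIterator, yielding values until every
    // Sender has been dropped.
    for value in rx {
        println!("got {}", value);
    }
}
```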
appveyor: Use Ninja/sccache on MSVC
Now that the final bug fixes have been merged into sccache we can start
leveraging sccache on the MSVC builders on AppVeyor instead of relying on the
ad-hoc caching strategy of trigger files and whatnot.
#37653 support `default impl` for specialization
This commit implements the first step of the `default impl` feature:
> all items in a `default impl` are (implicitly) `default` and hence
> specializable.
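As an illustrative sketch of what this enables (my own example on nightly with the specialization feature, not one of the copied tests):

```rust
#![feature(specialization)]

trait Greet {
    fn greet(&self) -> String;
}

// Every item in a `default impl` is implicitly `default`,
// so more specific impls may override it.
default impl<T> Greet for T {
    fn greet(&self) -> String {
        String::from("hello")
    }
}

struct Loud;

impl Greet for Loud {
    fn greet(&self) -> String {
        String::from("HELLO")
    }
}

fn main() {
    assert_eq!(Loud.greet(), "HELLO");
}
```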
In order to test this feature, I've copied all the tests provided for the
`default` method implementation (in run-pass/specialization and
compile-fail/specialization directories) and moved the `default` keyword
from the item to the impl.
See the [referenced](https://github.com/rust-lang/rust/issues/37653) issue for further info.
r? @aturon
Shrink the rust-src component
Before this change, the installable rust-src component had essentially the same contents as the rustc-src dist tarball, just additionally wrapped in a rust-installer. As discussed on [internals], rust-src is only meant to support uses involving the standard library, so it doesn't really need the rest of the compiler sources.
Now rust-src only contains libstd and its path dependencies, which roughly matches the set of crates that have rust-analysis data. The result is **significantly** smaller, from 36MB to 1.3MB compressed, and from 247MB to 8.5MB uncompressed.
[internals]: https://internals.rust-lang.org/t/minimizing-the-rust-src-component/5117
Add Hexagon support
This requires an updated LLVM with https://reviews.llvm.org/D31999 and https://reviews.llvm.org/D32000 to build libcore.
A basic hello world builds and runs successfully on the Hexagon simulator. libcore is fine with the LLVM fixes, but libstd requires a lot more work, since there's a custom RTOS running on most Hexagon cores. Running Linux sounds possible though, so maybe getting Linux + musl going would be easier.
Here's the target file I've been using for testing:
```
{
    "arch": "hexagon",
    "llvm-target": "hexagon-unknown-elf",
    "os": "none",
    "target-endian": "little",
    "target-pointer-width": "32",
    "data-layout": "e-m:e-p:32:32:32-a:0-n16:32-i64:64:64-i32:32:32-i16:16:16-i1:8:8-f32:32:32-f64:64:64-v32:32:32-v64:64:64-v512:512:512-v1024:1024:1024-v2048:2048:2048",
    "linker": "hexagon-clang",
    "linker-flavor": "gcc",
    "executables": true,
    "cpu": "hexagonv60"
}
```
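Assuming the spec above is saved as `hexagon.json` (the filename is arbitrary), a core-only crate is enough to exercise it, e.g. built with `rustc --crate-type=lib --target hexagon.json lib.rs`. A minimal sketch:

```rust
// No libstd on this target yet, so stick to libcore.
#![no_std]

pub fn sum(xs: &[i32]) -> i32 {
    xs.iter().fold(0, |acc, &x| acc.wrapping_add(x))
}
```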
Address platform-specific behavior in TcpStream::shutdown
Fixes #25164
r? @rust-lang/libs. From the GitHub thread, it seems like documenting this behavior is okay, but I want to make sure that's what you want.
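For reference, the call whose cross-platform behavior is being documented; this sketch just closes the write half of a connection:

```rust
use std::io;
use std::net::{Shutdown, TcpStream};

fn close_write(stream: &TcpStream) -> io::Result<()> {
    // What repeated or early shutdown() calls return differs by
    // platform, which is the behavior the docs now call out.
    stream.shutdown(Shutdown::Write)
}
```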
Make sure openssl compiles with only one core
This is (hopefully) a fix for the OSX OpenSSL spurious failure: #40417.
The intermittent failures, failing in different ways each time, made me think of a race condition. But programs are parallel-make safe, right? [Not OpenSSL](https://github.com/openssl/openssl/issues/298). But we don't do a parallel make on OpenSSL, [do we](8c4f2c64c6/src/bootstrap/native.rs (L309))? This confused me, except that "Waiting for unfinished jobs" is present in the logs... which is evidence of a parallel make!
It turns out that when we invoke the top-level target [in run.sh](036983201d/src/ci/run.sh (L75-L77)), make will [pass the flags downwards](https://www.gnu.org/software/make/manual/html_node/Options_002fRecursion.html) in order to take advantage of parallelism in sub-makes. Of course, we don't want this for OpenSSL! Override this by explicitly disabling parallelism on the command line.
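The idea of the fix, sketched in bootstrap-flavored Rust (the function name and path handling are placeholders, not the actual native.rs code): an explicit `-j1` on the sub-make's command line overrides the inherited MAKEFLAGS, forcing a serial OpenSSL build.

```rust
use std::path::Path;
use std::process::Command;

fn build_openssl(src_dir: &Path) {
    let status = Command::new("make")
        .arg("-j1") // overrides parallelism passed down via MAKEFLAGS
        .current_dir(src_dir)
        .status()
        .expect("failed to spawn make");
    assert!(status.success(), "openssl build failed");
}
```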
I don't know why this hasn't happened on anything except OSX. Maybe Linux binutils check whether the file is in use?
r? @alexcrichton
appveyor: Upgrade to gcc 6.3.0 for mingw
This commit sort of brings back #40777 by upgrading back to 6.3.0. While
investigating #40546 it was discovered that 6.3.0 appears not to spuriously
fail in the same way that 6.2.0 does (which we're currently using). The
workaround for #40184 contained in #40777 did not work, so this commit also
contains a different workaround for the gdb issue: we will now download the
6.2.0 version of gdb and use that instead of the default version that comes
with 6.3.0.
I'm going to optimistically say...
Closes #40546