Require +thumb-mode to generate thumb2 code for Android/armv7-a
I am investigating Rust's code generation in Gecko for https://bugzilla.mozilla.org/show_bug.cgi?id=1399337.
The armv7-linux-androideabi target uses `+v7,+thumb2,+vfp3,+d16,-neon` as its target features, but `+thumb2` alone doesn't generate Thumb-2 code. Generating Thumb-2 code also requires `+thumb-mode`, so we should add it for armv7-linux-androideabi.
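For concreteness, a minimal sketch of the idea (a hypothetical helper shown for illustration; the real change lives in rustc's armv7-linux-androideabi target spec, and the position of `+thumb-mode` in the list is assumed):

```rust
// Hypothetical helper for illustration: the feature string the target
// would carry after this change.
fn armv7_android_features() -> String {
    // `+thumb-mode` is what actually makes LLVM emit Thumb-2 code;
    // `+thumb2` alone only enables the Thumb-2 instruction set.
    "+v7,+thumb-mode,+thumb2,+vfp3,+d16,-neon".to_string()
}

fn main() {
    // The same string could be passed manually via `-C target-feature=...`.
    println!("-C target-feature={}", armv7_android_features());
}
```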
r? @alexcrichton
rustc: Preallocate when building the dep graph
This commit alters the `query` function in the dep graph module to preallocate
memory using `with_capacity` instead of relying on automatic growth. In #44576
it was found that for the syntex_syntax clean incremental benchmark, peak
memory usage occurred while the dep graph was being saved, particularly in the
`DepGraphQuery` data structure itself. PRs like #44142 which add more queries
just make this even larger!
I didn't see an immediately obvious way to reduce the size of the
`DepGraphQuery` object, but it turns out that `with_capacity` helps quite a bit!
Locally 831 MB was used [before] this commit, and 770 MB is in use at the peak
of the compiler [after] this commit. That's a nice 7.5% improvement! This won't
quite make up for the losses in #44142 but I figured it's a good start.
[before]: https://gist.github.com/alexcrichton/2d2b9c7a65503761925c5a0bcfeb0d1e
[after]: https://gist.github.com/alexcrichton/6da51f2a6184bfb81694cc44f06deb5b
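To illustrate the technique itself (a generic sketch, not the actual `DepGraphQuery` code): when the final element count is known up front, `with_capacity` makes a single exact allocation instead of the repeated doubling that `push` can trigger, each step of which briefly holds both the old and the new buffer.

```rust
// Generic sketch: preallocating when the size is known avoids growth
// reallocations and the associated peak-memory spikes.
fn collect_edges(edges: &[(u32, u32)]) -> Vec<(u32, u32)> {
    let mut out = Vec::with_capacity(edges.len()); // one exact allocation
    for &edge in edges {
        out.push(edge); // never reallocates: capacity was reserved up front
    }
    out
}
```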
Customize `<FlatMap as Iterator>::fold`
`FlatMap` can use internal iteration for its `fold`, which shows a
performance advantage in the new benchmarks:
test iter::bench_flat_map_chain_ref_sum ... bench: 4,354,111 ns/iter (+/- 108,871)
test iter::bench_flat_map_chain_sum ... bench: 468,167 ns/iter (+/- 2,274)
test iter::bench_flat_map_ref_sum ... bench: 449,616 ns/iter (+/- 6,257)
test iter::bench_flat_map_sum ... bench: 348,010 ns/iter (+/- 1,227)
... where the "ref" benches use `by_ref()`, which isn't optimized.
So this change shows a decent advantage on its own, but much more when
combined with a `chain` iterator that also optimizes `fold`.
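The core of the technique looks roughly like this (a simplified sketch; the real `FlatMap` also has to account for partially-consumed front/back inner iterators):

```rust
// Simplified sketch of internal iteration: instead of pulling every element
// through `next()`, hand the accumulator to each inner iterator's own `fold`.
fn flat_map_fold<I, U, F, Acc, G>(iter: I, mut f: F, init: Acc, mut g: G) -> Acc
where
    I: Iterator,
    U: IntoIterator,
    F: FnMut(I::Item) -> U,
    G: FnMut(Acc, U::Item) -> Acc,
{
    iter.fold(init, |acc, item| f(item).into_iter().fold(acc, &mut g))
}

fn main() {
    // Sum all the bytes of all the words in one pass.
    let words = ["alpha", "beta"];
    let total = flat_map_fold(words.iter(), |w| w.bytes(), 0usize, |acc, b| {
        acc + b as usize
    });
    println!("{}", total);
}
```

With this shape the optimizer sees each inner loop whole, which is where the wins above come from.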
travis: Move sccache to the us-west-1 region
Most of the other rust-lang buckets are in us-west-1 and I think the original
bucket was just accidentally created in the us-east-1 region. Let's consolidate
by moving it to the same location as the rest of our buckets.
bring TyCtxt into scope
I got comments from both @eddyb and @nikomatsakis (via https://github.com/rust-lang/rust/pull/44505) that we should always bring `TyCtxt` into scope.
Should I go ahead and import it in the rest of the codebase, or should we just keep making small improvements?
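An illustrative sketch of the convention in question, with a mock module standing in for rustc's `ty` module (the real import would be `use rustc::ty::TyCtxt;`):

```rust
mod ty {
    pub struct TyCtxt; // stand-in for the real, lifetime-parameterized type
}

// Bring the type into scope once at the top of the module...
use self::ty::TyCtxt;

// ...so signatures can say `TyCtxt` directly instead of `ty::TyCtxt`.
fn check_item(_tcx: TyCtxt) {}

fn main() {
    check_item(TyCtxt);
}
```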
rustc: Spawn `cmd /c emcc.bat` explicitly
In #42436 the behavior for spawning processes on Windows was tweaked slightly to
fix various bugs, but this caused #42791 as a regression: batch scripts now need
to be spawned explicitly via `cmd /c`. This updates the compiler to handle that
case explicitly for Emscripten.
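The workaround amounts to something like this (a hypothetical helper for illustration, not the compiler's exact code):

```rust
use std::process::Command;

// On Windows, `.bat` scripts can't be spawned directly through
// CreateProcess, so wrap them in an explicit `cmd /c` invocation.
fn tool_command(tool: &str) -> Command {
    if cfg!(windows) && tool.ends_with(".bat") {
        let mut cmd = Command::new("cmd");
        cmd.arg("/c").arg(tool); // e.g. `cmd /c emcc.bat`
        cmd
    } else {
        Command::new(tool)
    }
}
```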
Closes #42791
bump gcc for bootstrap
On Windows, the gcc crate would send /Wall to MSVC, which caused builds to get
flooded with warnings, exploding compile times from one hour to more than 72!
Version 0.3.54 of the gcc crate sends /W4 instead, which greatly cuts down on
cl.exe flooding the command prompt window with warnings.
Ipv4Addr and Ipv6Addr convenience constructors.
Introduce convenience constructors for common types.
This introduces the following constructors:
* Ipv6Addr::localhost()
* Ipv6Addr::unspecified()
* Ipv4Addr::localhost()
* Ipv4Addr::unspecified()
The recently added `From` implementations were nice for avoiding the fallibility of conversions from strings like `"127.0.0.1".parse().unwrap()` and `"::1".parse().unwrap()`. But while the IPv4 version is roughly comparable in verbosity, the IPv6 version lacks zero-segment elision, which makes it significantly more awkward: `[0, 0, 0, 0, 0, 0, 0, 0].into()`. While there isn't a clear way to introduce zero elision for a type that can be infallibly converted into an IPv6 address, this PR resolves the problem for the two most commonly used addresses, which, incidentally, are the ones that suffer most from the lack of zero-segment elision.
This change is dead simple, and introduces no backwards incompatibility.
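For concreteness, what the constructors boil down to, spelled out with the existing `new` constructors (a sketch, assuming the names proposed above):

```rust
use std::net::{Ipv4Addr, Ipv6Addr};

fn main() {
    let lo4 = Ipv4Addr::new(127, 0, 0, 1);           // Ipv4Addr::localhost()
    let un4 = Ipv4Addr::new(0, 0, 0, 0);             // Ipv4Addr::unspecified()
    let lo6 = Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1); // Ipv6Addr::localhost()
    let un6 = Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0); // Ipv6Addr::unspecified()
    assert_eq!(lo4.to_string(), "127.0.0.1");
    assert_eq!(un4.to_string(), "0.0.0.0");
    assert_eq!(lo6.to_string(), "::1"); // note the zero-segment elision
    assert_eq!(un6.to_string(), "::");
}
```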
See also [this topic on the internals board](https://internals.rust-lang.org/t/pre-rfc-convenience-ip-address-constructors/5878).
Implement <Rc<Any>>::downcast
* Implement `<Rc<Any>>::downcast::<T>`
* New unstable method. Works just like `Box<Any>`'s `downcast`, but for `Rc` (usage sketch below).
* `Any` has two cases for its methods: `Any` and `Any + Send`; `Rc` is never `Send`, so that case is skipped for `Rc`.
* The motivation for making this a method taking `self` is to match `Box`; there is no user-supplied type, since the inner type is `Any`, and `downcast` does not conflict with any method of `Any`.
* `Arc` was skipped because `Any` itself has no downcast for the case that makes the most sense there: `Any + Send + Sync`.
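A usage sketch, assuming the signature mirrors `Box<Any>::downcast` (i.e. it returns `Result<Rc<T>, Rc<Any>>`, handing the original back on failure) and that the feature gate is named `rc_downcast`:

```rust
#![feature(rc_downcast)] // gate name assumed for this sketch

use std::any::Any;
use std::rc::Rc;

fn main() {
    let value: Rc<Any> = Rc::new(42usize);
    match value.downcast::<usize>() {
        Ok(n) => println!("it was a usize: {}", n),
        Err(_original) => println!("not a usize; the Rc comes back intact"),
    }
}
```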
implement unsafe pointer methods
I also cleaned up some of the existing documentation here and there, since I was auditing so much of it. Most notably, I significantly rewrote the `offset` docs to clarify safety (`*const` and `*mut`'s `offset` docs had actually diverged).
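The safety point the rewritten `offset` docs stress, in miniature: both the starting and resulting pointers must stay within (or one past the end of) the same allocated object, even if the result is never dereferenced.

```rust
fn main() {
    let data = [10u8, 20, 30, 40];
    let p = data.as_ptr();
    unsafe {
        // In bounds: fine to compute and dereference.
        assert_eq!(*p.offset(2), 30);
        // One past the end: fine to compute, UB to dereference.
        let _end = p.offset(4);
        // `p.offset(5)` would be undefined behavior even without a
        // dereference, because the offset itself leaves the object.
    }
}
```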
Individualize feature gates for const fn invocation
This PR changes the meaning of `#![feature(const_fn)]` so it is only required to declare a const fn but not to call one. Based on discussion at #24111. I was hoping we could have an FCP here in order to move that conversation forward.
This sets the stage for future stabilization of the constness of several functions in the standard library (listed below), so could someone please tag the lang team for review?
- `std::cell`
- `Cell::new`
- `RefCell::new`
- `UnsafeCell::new`
- `std::mem`
- `size_of`
- `align_of`
- `std::ptr`
- `null`
- `null_mut`
- `std::sync`
- `atomic`
- `Atomic{Bool,Ptr,Isize,Usize}::new`
- `once`
- `Once::new`
- primitives
- `{integer}::min_value`
- `{integer}::max_value`
Some other functions are const but are also unstable or hidden, e.g. `Unique::new`, so they don't have to be considered at this time.
After this stabilization, the following `*_INIT` constants in the standard library can be deprecated (see the sketch after the list). I wasn't sure whether to include those deprecations in the current PR.
- `std::sync`
- `atomic`
- `ATOMIC_{BOOL,ISIZE,USIZE}_INIT`
- `once`
- `ONCE_INIT`
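Concretely, the split means something like this (a nightly sketch):

```rust
// Declaring a const fn still requires the feature gate...
#![feature(const_fn)]

use std::sync::atomic::{AtomicUsize, Ordering};

const fn double(x: usize) -> usize {
    x * 2
}

// ...but merely *calling* a const fn (including, once stabilized, the std
// functions listed above) would not:
const EIGHT: usize = double(4);

// That in turn is what makes the `*_INIT` constants redundant, e.g.:
static COUNTER: AtomicUsize = AtomicUsize::new(0); // vs. ATOMIC_USIZE_INIT

fn main() {
    COUNTER.fetch_add(EIGHT, Ordering::SeqCst);
    assert_eq!(COUNTER.load(Ordering::SeqCst), 8);
}
```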
rustbuild: Compile the error-index in stage 2
Right now we compile rustdoc in stage 2 and the error index in stage 0, which
ends up compiling rustdoc twice! To avoid compiling rustdoc twice (which takes
a while), let's just compile it once in stage 2.
travis: Disable LLVM assertions on OSX
Our OSX builders are routinely and significantly over our 2-hour "soft limit"
for testing PRs. I *think* that a big portion of this time comes from the fact
that LLVM and debug assertions are enabled. In an effort to speed up these
builders and reduce cycle time, this commit disables LLVM assertions on OSX for
all builders.
My thinking is that we'll let this bake for a bit after it's merged to see what
the effect is on Travis timing. If it doesn't actually help much we can turn
the assertions back on, and if it doesn't help enough we can disable Rust debug
assertions as well.
Compile fail stable
Since #30726, we never made the `compile_fail` flag or the error code check stable. I think it's time to change that.
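For reference, the annotations in question look like this (error code E0308 chosen here purely for illustration):

````rust
/// A doctest asserted to fail compilation, with the error code checked:
///
/// ```compile_fail,E0308
/// let x: u32 = "not a number"; // mismatched types
/// ```
pub fn annotated_example() {}
````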
r? @alexcrichton
Fix example in transmute; add safety requirement to Vec::from_raw_parts
This fixes the second bullet point on #44281 and also removes some incorrect information.
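A sketch of the documented contract: `Vec::from_raw_parts` may only be fed a pointer/length/capacity triple that actually came from a `Vec` (same allocator and layout), demonstrated here by round-tripping one:

```rust
use std::mem;

fn main() {
    let mut v = vec![1u32, 2, 3];
    let (ptr, len, cap) = (v.as_mut_ptr(), v.len(), v.capacity());
    mem::forget(v); // ownership moves to the raw parts; avoids a double free
    // Safe only because ptr/len/cap came from a real Vec<u32> above.
    let rebuilt = unsafe { Vec::from_raw_parts(ptr, len, cap) };
    assert_eq!(rebuilt, [1, 2, 3]);
}
```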
Fix drain_filter doctest.
Fixes #44499.
Also change some of the hidden logic in the doctest as a regression test; two bugs in the original would now cause test failure.
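For context, a usage sketch of the feature whose doctest this fixes (`drain_filter` is unstable behind the gate of the same name):

```rust
#![feature(drain_filter)]

fn main() {
    let mut v = vec![1, 2, 3, 4, 5, 6];
    // Matching elements are removed and yielded; the rest stay in `v`.
    let evens: Vec<_> = v.drain_filter(|x| *x % 2 == 0).collect();
    assert_eq!(evens, [2, 4, 6]);
    assert_eq!(v, [1, 3, 5]);
}
```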