rustc: Try again to disable NEON on armv7 linux
This is a follow-up to #35814 which apparently didn't disable it hard enough. It
looks like LLVM's default armv7 target enables NEON so we'd otherwise have to
pass `-neon`, but we're already enabling armv7 with `+v7` supposedly, so let's
try just telling LLVM that the armv7 target is arm and then enable features
selectively.
Closes #36913
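As a rough illustration of the approach described above (the struct and feature strings here are hypothetical sketches, not the actual target-spec code in rustc):

```rust
// Hypothetical sketch only; the field names do not match rustc's real
// TargetOptions. The idea is the shape of the fix: describe the target to
// LLVM as plain "arm" and enable features one by one, so NEON is never
// implied by an armv7 default.
struct TargetSketch {
    llvm_target: &'static str,
    features: &'static str,
}

fn armv7_linux_sketch() -> TargetSketch {
    TargetSketch {
        llvm_target: "arm-unknown-linux-gnueabihf",
        features: "+v7,+vfp3,+d16,+thumb2,-neon",
    }
}

fn main() {
    let t = armv7_linux_sketch();
    println!("{} -> {}", t.llvm_target, t.features);
}
```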
Add Thumb targets to the compiler
This commit adds four new target definitions to the compiler for easier
cross-compilation to ARM Cortex-M devices.
- `thumbv6m-none-eabi`
  - For the Cortex-M0, Cortex-M0+ and Cortex-M1.
  - This architecture doesn't have hardware support (instructions) for
    atomics. Hence, the `Atomic*` structs are not available for this
    target.
- `thumbv7m-none-eabi`
  - For the Cortex-M3.
- `thumbv7em-none-eabi`
  - For the FPU-less variants of the Cortex-M4 and Cortex-M7.
  - On this target, all floating point operations will be lowered to
    software routines (intrinsics).
- `thumbv7em-none-eabihf`
  - For the variants of the Cortex-M4 and Cortex-M7 that do have an FPU.
  - On this target, all floating point operations will be lowered to
    hardware instructions.
No binary releases of standard crates, like `core`, are planned for
these targets because Cargo, in the future, will compile e.g. the `core`
crate on the fly as part of the `cargo build` process. In the meantime,
you'll have to compile the `core` crate yourself. [Xargo] is the easiest
way to do that, as it handles the compilation of `core` automatically and
can be used just like Cargo: `xargo build --target thumbv6m-none-eabi`
is all that's needed.
[Xargo]: https://crates.io/crates/xargo
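For anyone trying these targets out before binary releases of `core` exist, here is a minimal sketch of a `#![no_std]` library crate that should build for them; the crate contents are just an example:

```rust
// A minimal #![no_std] library: it depends only on `core`, so it can be
// built for the Cortex-M targets above once `core` is available, e.g. with
// `xargo build --target thumbv6m-none-eabi`.
#![no_std]

/// Saturating add implemented with only `core` functionality.
pub fn saturating_add(a: u32, b: u32) -> u32 {
    a.checked_add(b).unwrap_or(u32::max_value())
}
```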
---
cc @brson @alexcrichton
Two lexer tweaks
19 days later, I haven't received a review of my commits in #36470. In an attempt to make some progress, I'm going to split up the changes. Here are the ones that don't relate to renaming things.
Speed up `plug_leaks`
Profiling shows that `plug_leaks` and the functions it calls are hot on some benchmarks. It's very common that `skol_map` is empty in this function, and we can specialize `plug_leaks` in that case for some big speed-ups.
The PR has two commits. I'm fairly confident that the first one is correct -- I traced through the code to confirm that the `fold_regions` and `pop_skolemized` calls are no-ops when `skol_map` is empty, and I also temporarily added an assertion to check that `result` ends up having the same value as `value` in that case. This commit is responsible for most of the improvement.
I'm less confident about the second commit. The call to `resolve_type_vars_if_possible` can change `value` even when `skol_map` is empty... but testing suggests that it doesn't matter if the call is omitted.
So, please check both patches carefully, especially the second one!
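For reference, the shape of the first commit's fast path looks roughly like the following sketch, with hypothetical types standing in for the real inference-context machinery:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real skolemization map and foldable value;
// the point is only the empty-map fast path described above.
type SkolemizationMap = HashMap<u32, u32>;

fn plug_leaks<T: Clone>(skol_map: &SkolemizationMap, value: &T) -> T {
    if skol_map.is_empty() {
        // With nothing skolemized, the fold_regions / pop_skolemized work
        // would be a no-op, so return the value unchanged.
        return value.clone();
    }
    // ... the existing, more expensive region-folding path ...
    value.clone()
}

fn main() {
    let empty: SkolemizationMap = HashMap::new();
    let v = plug_leaks(&empty, &"value");
    println!("{}", v);
}
```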
Here are the speed-ups for the first commit alone.
stage1 compiler (built with old rustc, using glibc malloc), doing debug builds:
```
futures-rs-test 4.710s vs 4.538s --> 1.038x faster (variance: 1.009x, 1.005x)
issue-32062-equ 0.415s vs 0.368s --> 1.129x faster (variance: 1.009x, 1.010x)
issue-32278-big 1.884s vs 1.808s --> 1.042x faster (variance: 1.020x, 1.017x)
jld-day15-parse 1.907s vs 1.668s --> 1.143x faster (variance: 1.011x, 1.007x)
piston-image-0. 13.024s vs 12.421s --> 1.049x faster (variance: 1.004x, 1.012x)
rust-encoding-0 3.335s vs 3.276s --> 1.018x faster (variance: 1.021x, 1.028x)
```
stage2 compiler (built with new rustc, using jemalloc), doing debug builds:
```
futures-rs-test 4.167s vs 4.065s --> 1.025x faster (variance: 1.006x, 1.018x)
issue-32062-equ 0.383s vs 0.343s --> 1.118x faster (variance: 1.012x, 1.016x)
issue-32278-big 1.680s vs 1.621s --> 1.036x faster (variance: 1.007x, 1.007x)
jld-day15-parse 1.671s vs 1.478s --> 1.131x faster (variance: 1.016x, 1.004x)
piston-image-0. 11.336s vs 10.852s --> 1.045x faster (variance: 1.003x, 1.006x)
rust-encoding-0 3.036s vs 2.971s --> 1.022x faster (variance: 1.030x, 1.032x)
```
I've omitted the benchmarks for which the change was negligible.
And here are the speed-ups for the first and second commit in combination.
stage1 compiler (built with old rustc, using glibc malloc), doing debug
builds:
```
futures-rs-test 4.684s vs 4.498s --> 1.041x faster (variance: 1.012x, 1.012x)
issue-32062-equ 0.413s vs 0.355s --> 1.162x faster (variance: 1.019x, 1.006x)
issue-32278-big 1.869s vs 1.763s --> 1.060x faster (variance: 1.013x, 1.018x)
jld-day15-parse 1.900s vs 1.602s --> 1.186x faster (variance: 1.010x, 1.003x)
piston-image-0. 12.907s vs 12.352s --> 1.045x faster (variance: 1.005x, 1.006x)
rust-encoding-0 3.254s vs 3.248s --> 1.002x faster (variance: 1.063x, 1.045x)
```
stage2 compiler (built with new rustc, using jemalloc), doing debug builds:
```
futures-rs-test 4.183s vs 4.046s --> 1.034x faster (variance: 1.007x, 1.004x)
issue-32062-equ 0.380s vs 0.340s --> 1.117x faster (variance: 1.020x, 1.003x)
issue-32278-big 1.671s vs 1.616s --> 1.034x faster (variance: 1.031x, 1.012x)
jld-day15-parse 1.661s vs 1.417s --> 1.172x faster (variance: 1.013x, 1.005x)
piston-image-0. 11.347s vs 10.841s --> 1.047x faster (variance: 1.007x, 1.010x)
rust-encoding-0 3.050s vs 3.000s --> 1.017x faster (variance: 1.016x, 1.012x)
```
@eddyb: `git blame` suggests that you should review this. Thanks!
Avoid introducing `run` twice in the Rust book
As it stands, getting-started.md and guessing-game.md both introduce `run` as a new command. I changed it so that the second refers back to the first introduction, rather than re-introducing the command.
(First ever FOSS PR, sorry if I screwed up anything obvious :) )
r? @steveklabnik
Improve error message and snippet for "did you mean `x`"
- Fixes #36164
- Part of #35233
Based on the standalone example https://is.gd/8STXMd posted by @nikomatsakis, this uses the third formatting option mentioned in #36164, as agreed with @jonathandturner.
Note, however, that this does not address the question of [how to handle an empty or unknown suggestion](https://github.com/rust-lang/rust/issues/36164#issuecomment-244460024). @nikomatsakis, any suggestions on how best to address that part?
loosen assertion against proj in collector
The collector was asserting a total absence of projections, but some projections are expected, even in trans: in particular, projections containing higher-ranked regions, which we don't currently normalize.
r? @pnkfelix
Fixes #36381
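For context, here is a small, self-contained example of the kind of code that involves a projection under a higher-ranked binder; it is illustrative only and not taken from the issue:

```rust
// The `for<'a> Fn(&'a str) -> String` bound involves the projection
// `<F as FnOnce<(&'a str,)>>::Output` with `'a` bound by the binder,
// i.e. a projection containing a higher-ranked region.
fn call_with<F>(f: F) -> String
where
    F: for<'a> Fn(&'a str) -> String,
{
    f("higher-ranked")
}

fn main() {
    println!("{}", call_with(|s| s.to_owned()));
}
```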
std: Stabilize and deprecate APIs for 1.13
This commit is intended to be backported to the 1.13 branch, and works with the
following APIs:
Stabilized
* `i32::checked_abs`
* `i32::wrapping_abs`
* `i32::overflowing_abs`
* `RefCell::try_borrow`
* `RefCell::try_borrow_mut`
* `DefaultHasher`
* `DefaultHasher::new`
* `DefaultHasher::default`
Deprecated
* `BinaryHeap::push_pop`
* `BinaryHeap::replace`
* `SipHash13`
* `SipHash24`
* `SipHasher` - use `DefaultHasher` instead in the `std::collections::hash_map`
module
Closes #28147, closes #34767, closes #35057, closes #35070
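A quick, illustrative example of the newly stabilized items in use:

```rust
use std::cell::RefCell;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn main() {
    // Checked/wrapping absolute value on integers.
    assert_eq!((-5i32).checked_abs(), Some(5));
    assert_eq!(i32::min_value().checked_abs(), None);
    assert_eq!(i32::min_value().wrapping_abs(), i32::min_value());

    // Non-panicking RefCell borrows.
    let cell = RefCell::new(0u32);
    let guard = cell.borrow_mut();
    assert!(cell.try_borrow().is_err());
    drop(guard);
    assert!(cell.try_borrow_mut().is_ok());

    // DefaultHasher is the replacement for the deprecated SipHasher.
    let mut hasher = DefaultHasher::new();
    42u32.hash(&mut hasher);
    let _digest = hasher.finish();
}
```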
rustdoc: Fix documenting rustc-macro crates
This commit adds a "hack" to the session to track whether we're a rustdoc
session or not. If we're rustdoc, we skip the expansion step that adds the
rustc-macro infrastructure.
Closes #36820
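A rough sketch of that kind of flag, using made-up names rather than the actual `Session` fields or expansion entry points:

```rust
// Hypothetical names; the real rustc Session and expansion code differ.
// The pattern is just: record "this is rustdoc" once, then branch on it
// when deciding whether to inject the rustc-macro infrastructure.
struct Session {
    actually_rustdoc: bool,
}

fn maybe_inject_rustc_macro(sess: &Session) {
    if sess.actually_rustdoc {
        // Documenting, not compiling: skip the registrar expansion.
        return;
    }
    // ... generate the rustc-macro registrar as usual ...
}

fn main() {
    maybe_inject_rustc_macro(&Session { actually_rustdoc: true });
}
```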
Clarify HashMap's capacity handling.
HashMap has two notions of "capacity":
- "Usable capacity": the number of elements a hash map can hold without
resizing. This is the meaning of "capacity" used in HashMap's API,
e.g. the `with_capacity()` function.
- "Internal capacity": the number of allocated slots. Except for the
zero case, it is always larger than the usable capacity (because some
slots must be left empty) and is always a power of two.
HashMap's code is confusing because it does a poor job of
distinguishing these two meanings. I propose using two different terms
for these two concepts. Because "capacity" is already used in HashMap's
API to mean "usable capacity", I will use a different word for "internal
capacity". I propose "span", though I'm happy to consider other names.