Update aarch64-linux-android target to match the Android ABI.
- Changed `target_env` from "gnu" to empty "" for all Android targets because it does not matter for Android.
- PR #33048 added "max_atomic_width" for arm-android but missed the recently added armv7-android. Added it there too.
- Added the features `+neon,+fp-armv8` because they [must exist on `aarch64` Android](http://developer.android.com/ndk/guides/cpu-features.html).
- Updated libc to include https://github.com/rust-lang/libc/pull/282 so that Rust's std works on Android's aarch64 (the main issue there was incorrect structure alignment on 64-bit ARM).
r? @alexcrichton
Fix fast path of float parsing on x87
The fast path of the float parser relies on the rounding to happen
exactly and directly to the correct number of bits. On x87, instead,
double rounding would occur as the FPU stack defaults to 80 bits of
precision.
This can be fixed by setting the precision of the FPU stack before
performing the int to float conversion. This can be achieved by
changing the value of the x87 control word. This is a somewhat common
operation that is in fact performed whenever a float needs to be
truncated to an integer, but it is undesirable to add its overhead for
code that does not rely on x87 for computations (i.e. on non-x86
architectures, or x86 architectures which perform FPU computations
using SSE).
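Roughly, the fix looks like the sketch below (an illustration of the technique, not the exact code added to libcore; the function name and the test value are made up):
```rust
// Sketch: force the x87 FPU to 53-bit (double) precision around an
// integer-to-float conversion by rewriting the precision control field
// (bits 8-9) of the x87 control word. Only meaningful on 32-bit x86.
#[cfg(target_arch = "x86")]
fn u64_to_f64_single_rounding(value: u64) -> f64 {
    use std::arch::asm;

    let mut saved_cw: u16 = 0;
    unsafe {
        // Save the current control word.
        asm!("fnstcw word ptr [{}]", in(reg) &mut saved_cw as *mut u16, options(nostack));
        // Precision control value 0b10 selects 53 significant bits.
        let new_cw = (saved_cw & !0x0300) | 0x0200;
        asm!("fldcw word ptr [{}]", in(reg) &new_cw as *const u16, options(nostack));
    }

    // The conversion now rounds once, directly to f64 precision, instead of
    // rounding to 80 bits first and then again to 64 bits.
    let result = value as f64;

    unsafe {
        // Restore the original control word.
        asm!("fldcw word ptr [{}]", in(reg) &saved_cw as *const u16, options(nostack));
    }
    result
}

#[cfg(target_arch = "x86")]
fn main() {
    println!("{}", u64_to_f64_single_rounding(0x1234_5678_9ABC_DEF0));
}

#[cfg(not(target_arch = "x86"))]
fn main() {}
```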
Fixes `num::dec2flt::fast_path_correct` (on x87).
Remove irrelevant information (and instead provide a pointer to the
reference documentation), replace the ASCII-art table with the
corresponding Markdown one, and apply minor fixes.
test: explicitly check the number of spawned threads in tcp-stress
System limits may restrict the number of threads effectively spawned by this test (e.g. systemd recently introduced a default maximum of 512 tasks per unit).
Now this test explicitly asserts on the expected number of threads, making failures due to system limits easier to spot.
More details at https://bugs.debian.org/822325
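A rough sketch of the shape of that check (the thread count and the closure body here are placeholders, not the real test's values):
```rust
use std::thread;

fn main() {
    // Placeholder count; the actual test uses its own constant.
    const EXPECTED: usize = 1000;

    let mut handles = Vec::new();
    for _ in 0..EXPECTED {
        // Builder::spawn returns a Result, so a hit system limit shows up
        // as a short handle count rather than an unexplained failure.
        if let Ok(handle) = thread::Builder::new().spawn(|| {
            // placeholder for the connect/read/write work of the real test
        }) {
            handles.push(handle);
        }
    }

    assert_eq!(
        handles.len(),
        EXPECTED,
        "a system limit prevented spawning all test threads"
    );

    for handle in handles {
        handle.join().unwrap();
    }
}
```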
Remove ExplicitSelf from HIR
The `self` argument is already kept in the argument list and can be retrieved from there if necessary, so there's no need for the duplication.
The same changes can be applied to AST, I'll make them in the next breaking batch.
The first commit also improves parsing of method declarations and fixes https://github.com/rust-lang/rust/issues/33413.
r? @eddyb
Batch of improvements to errors for new error format
This is a batch of improvements to existing errors to help get the most out of the new error format.
* Added labels to primary spans (^^^) for a set of errors that didn't previously have them
* Highlight the source in blue under the secondary notes for better readability
* Move some of the "Note:" text into secondary spans+labels
* Fix `span_label` to take `&mut` instead, which makes it work the same as the other methods in that set
Add error description for E0455
r? @GuillaumeGomez.
There isn't much to explain about this error; the short description says enough to understand it. Feel free to review.
typeck: if a private field exists, also check for a public method
For example, `Vec::len` is both a field and a method, and usually encountering `vec.len` just means that the parens were forgotten.
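For illustration (a hand-written example of the situation, not taken from the PR's tests):
```rust
fn main() {
    let v = vec![1, 2, 3];

    // Writing `v.len` (without parentheses) names Vec's private `len` field;
    // previously the error only said that the field is private. With this
    // change the compiler also checks for a public `len()` method, since
    // forgetting the parens is usually the real mistake.
    // let n = v.len;   // error: the `len` field is private
    let n = v.len();
    println!("{}", n);
}
```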
Fixes: #26472
NOTE: I added the parameter `allow_private` to `method::exists` since I don't want to suggest inaccessible methods. For the second case, where only the method exists, I think it would make sense to set it to `false` as well, but I wanted to preserve compatibility for this case.
gdb Pretty Print: generic encoded was failing on reference/pointer types
If you debug this program using **gdb**
```rust
fn main() {
    let x = 10;
    let y = Some(&x);
    // additional code
}
```
and you try to print **y**'s value from the debugger, you get the following:
```
(gdb) print y
Python Exception <class 'gdb.error'> Cannot convert value to int.:
$1 = {RUST$ENCODED$ENUM$0$None = Some = {0x7fff5fbff97c}}
```
What happens is that inside **debugger_pretty_printers_common.py** the method `is_null_variant` doesn't have any special handling for pointer values so it ends up calling `.as_integer()` on `discriminant_val` (which holds a pointer) and fails.
Considering it needs to handle pointers and return _true_ when the pointer is _null_, I modified the `.as_integer()` method in **gdb_rust_pretty_printing.py** to take pointers into consideration.
After this modification **gdb** prints **y** like this:
```
(gdb) print y
$1 = Some = {0x7fff5fbff97c}
```
Now, it would be nice to print something useful (instead of a pointer address) but the pretty printer doesn't currently handle references/pointers so that's a completely different subject.
Some simple improvements to MIR pretty printing
In short, this PR changes the MIR printer so that it:
* places an empty line between the MIR for each item
* does *not* write an empty line before the first BB when there are no
var decls
* aligns the "// Scope" comments 50 chars in (makes the output more
readable)
* prints the scope comments as "// scope N at ..." instead of "//
Scope(N) at ..."
* prints a prettier scope tree:
* no more unbalanced delimiters!
* no more "Parent" entry (these convey no useful information)
* drop the "Scope()" and just print scope IDs
* no braces when the scope is empty
In action: https://gist.github.com/jonas-schievink/1c11226cbb112892a9470ce0f9870b65
Improve derived implementations for enums with lots of fieldless variants
A number of trait methods like PartialEq::eq or Hash::hash don't
actually need a distinct arm for each variant, because the code within
the arm only depends on the number and types of the fields in the
variants. We can easily exploit this fact to generate less, and better,
code for enums with multiple variants that have no fields at all, the
extreme case being C-like enums.
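As a hand-written illustration of the idea (not the code the derive actually emits), equality for a fieldless enum can boil down to a single discriminant comparison rather than one match arm per variant:
```rust
use std::mem;

// A C-like enum: every variant is fieldless.
enum Color {
    Red,
    Green,
    Blue,
}

// What a derived PartialEq can effectively collapse to when no variant
// carries fields: compare the discriminants once, with no per-variant arms.
impl PartialEq for Color {
    fn eq(&self, other: &Color) -> bool {
        mem::discriminant(self) == mem::discriminant(other)
    }
}

fn main() {
    assert!(Color::Red == Color::Red);
    assert!(Color::Green != Color::Blue);
}
```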
For nickel.rs and its by now infamous 800-variant enum, this reduces
optimized compile times by 25% and non-optimized compile times by 40%.
Peak memory usage is also down by almost 40% (310 MB down to 190 MB).
To be fair, most other crates don't benefit nearly as much, because
they don't have such huge enums. The crates in the Rust distribution that
I measured saw basically no change in compile times (I only tried
optimized builds) and only 1-2% reduction in peak memory usage.