Fix collapse toggle insertions on impl with docs
Just went through this one randomly... When an impl has docs, the collapse toggle isn't generated. This fixes it.
r? @QuietMisdreavus
Enable target_feature on any LLVM 6+
In `LLVMRustHasFeature()`, rather than using `MCInfo->getFeatureTable()`,
which is specific to Rust's LLVM fork, we can use this in LLVM 6:
```cpp
/// Check whether the subtarget features are enabled/disabled as per
/// the provided string, ignoring all other features.
bool checkFeatures(StringRef FS) const;
```
Now rustc using external LLVM can also have `target_feature`.
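As a rough illustration of what this enables (hedged: the available features depend on the target and any `-C target-feature`/`-C target-cpu` flags), `cfg!(target_feature = ...)` now reflects what the external LLVM reports:

```rust
fn main() {
    // Illustrative only: the feature name and the outcome depend on the target.
    if cfg!(target_feature = "sse2") {
        println!("compiled with SSE2 enabled");
    } else {
        println!("compiled without SSE2");
    }
}
```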
r? @alexcrichton
[incremental] Don't panic if decoding the cache fails
If the cached data can't be loaded from disk, just issue a warning to
the user so they know why compilation is taking longer than usual, but
don't fail the entire compilation, since we can recover by ignoring the
on-disk cache.
In the same way, if the disk cache can't be deserialized (because it has
been corrupted for some reason), report the issue as a warning and
continue without failing the compilation. `Decodable::decode()` tends to
panic with various errors like "entered unreachable code" or "index out
of range" if the input data is corrupted. Work around this by catching
panics from the `decode()` calls and continuing without the cached data.
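A minimal sketch of that recovery strategy (the names here are illustrative, not the actual rustc internals):

```rust
use std::panic;

// Illustrative only: if decoding the on-disk cache panics (corrupted data),
// warn and fall back to an empty cache instead of aborting compilation.
fn load_cache_or_default<T: Default>(
    decode: impl FnOnce() -> T + panic::UnwindSafe,
) -> T {
    match panic::catch_unwind(decode) {
        Ok(cache) => cache,
        Err(_) => {
            eprintln!(
                "warning: the incremental compilation cache appears to be \
                 corrupted; ignoring it (compilation may be slower than usual)"
            );
            T::default()
        }
    }
}
```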
Fixes #48847
Introduce a TargetTriple enum to support absolute target paths
This PR replaces target triple strings with a `TargetTriple` enum, which represents either a target triple or a path to a JSON target file. The path variant is used if the `--target` argument has a `.json` extension, else the target triple variant is used.
The motivation of this PR is support for absolute target paths to avoid the need for setting the `RUST_TARGET_PATH` environment variable (see rust-lang/cargo#4905 for more information). For places where some kind of triple is needed (e.g. in the sysroot folder), we use the file name (without extension).
For compatibility, we keep the old behavior of searching for a file named `$(target_triple).json` in `RUST_TARGET_PATH` for non-official target triples.
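A hedged sketch of the shape of that enum (names and handling are illustrative, not the exact rustc definitions):

```rust
use std::path::{Path, PathBuf};

// Illustrative only: either a known target triple or a path to a JSON spec.
enum TargetTriple {
    Triple(String), // e.g. "x86_64-unknown-linux-gnu"
    Path(PathBuf),  // e.g. "/abs/path/to/my-target.json"
}

impl TargetTriple {
    fn from_arg(arg: &str) -> Self {
        // The path variant is used when `--target` ends in `.json`.
        if Path::new(arg).extension().map_or(false, |ext| ext == "json") {
            TargetTriple::Path(PathBuf::from(arg))
        } else {
            TargetTriple::Triple(arg.to_string())
        }
    }

    // Where a triple-like name is needed (e.g. the sysroot directory),
    // use the triple itself or the JSON file name without its extension.
    fn triple_name(&self) -> String {
        match self {
            TargetTriple::Triple(t) => t.clone(),
            TargetTriple::Path(p) => p
                .file_stem()
                .map(|stem| stem.to_string_lossy().into_owned())
                .unwrap_or_default(),
        }
    }
}
```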
Stabilize TryFrom / TryInto, and tweak impls for integers
Fixes https://github.com/rust-lang/rust/issues/33417 (tracking issue)
----
This adds:
* `impl From<u16> for usize`
* `impl From<i16> for isize`
* `impl From<u8> for isize`
… replacing corresponding `TryFrom<Error=!>` impls. (`TryFrom` still applies through the generic `impl<T, U> TryFrom<U> for T where T: From<U>`.) Their infallibility is supported by the C99 standard which (indirectly) requires pointers to be at least 16 bits.
The remaining `TryFrom` impls that define `type Error = !` all involve `usize` or `isize`. This PR changes them to use `TryFromIntError` instead, since having a return type change based on the target is a portability hazard.
Note: if we make similar assumptions about the *maximum* bit size of pointers (for all targets Rust will ever run on in the future), we could have similar `From` impls converting pointer-sized integers to large fixed-size integers. RISC-V considers the possibility of a 128-bit address space (RV128), which would leave only `impl From<usize> for u128` and `impl From<isize> for i128`. I [found](https://www.cl.cam.ac.uk/research/security/ctsrd/pdfs/20171017a-cheri-poster.pdf) some [things](http://www.csl.sri.com/users/neumann/2012resolve-cheri.pdf) about 256-bit “capabilities”, but I don’t know how relevant that would be to Rust’s `usize` and `isize` types.
I conservatively chose to make no assumption about the future there; users can make their own portability decisions and use something like `.try_into().unwrap()`.
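For reference, a small usage sketch of the fallible and infallible directions (hedged: the exact set of impls is the one listed above):

```rust
use std::convert::{TryFrom, TryInto};

fn main() {
    // Infallible: covered by `From`/`Into` (and the blanket `TryFrom` impl).
    let n: usize = usize::from(42u16);

    // Fallible: returns `Result<_, TryFromIntError>` on every target.
    let big: u64 = 1 << 40;
    match usize::try_from(big) {
        Ok(v) => println!("fits in usize: {}", v),
        Err(e) => println!("out of range: {}", e),
    }

    // Or opt into a panic once the range assumption has been checked.
    let small: u8 = 200u32.try_into().unwrap();
    let _ = (n, small);
}
```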
----
Since this feature already went through FCP in the tracking issue https://github.com/rust-lang/rust/issues/33417, this PR also proposes to **stabilize** the following items:
* The `convert::TryFrom` trait
* The `convert::TryInto` trait
* `impl<T> TryFrom<&[T]> for &[T; $N]` (for `$N` up to 32)
* `impl<T> TryFrom<&mut [T]> for &mut [T; $N]` (for `$N` up to 32)
* The `array::TryFromSliceError` struct, with impls of `Debug`, `Copy`, `Clone`, and `Error`
* `impl TryFrom<u32> for char`
* The `char::CharTryFromError` struct, with impls of `Copy`, `Clone`, `Debug`, `PartialEq`, `Eq`, `Display`, and `Error`
* Impls of `TryFrom` for all (?) combinations of primitive integer types where `From` isn’t implemented.
* The `num::TryFromIntError` struct, with impls of `Debug`, `Copy`, `Clone`, `Display`, `From<!>`, and `Error`
Some minor remaining questions that I hope can be resolved in this PR:
* Should the impls for error types be unified?
* ~Should `TryFrom` and `TryInto` be in the prelude? `From` and `Into` are.~ (Yes.)
libsyntax: Remove obsolete.rs
This little piece of infra is obsolete (ha-ha) and is unlikely to be used in the future, even if new obsolete syntax appears.
Add is_whitespace and is_alphanumeric to str.
The other methods from `UnicodeStr` are already stable inherent
methods on str, but these have not been included.
r? @SimonSapin
Add slice::sort_by_cached_key as a memoised sort_by_key
At present, `slice::sort_by_key` calls its key function twice for each comparison that is made. When the key function is expensive (which can often be the case when `sort_by_key` is chosen over `sort_by`), this can lead to very suboptimal behaviour.
To address this, I've introduced a new slice method, `sort_by_cached_key`, which has identical semantic behaviour to `sort_by_key`, except that it guarantees the key function will only be called once per element.
Where there are `n` elements and the key function is `O(m)`:
- `slice::sort_by_cached_key` time complexity is `O(m n + n log n)`, compared to `slice::sort_by_key`'s `O(m n log n)`.
- `slice::sort_by_cached_key` space complexity remains at `O(n + m)`. (Technically, it now reserves a slice of size `n`, whereas before it reserved a slice of size `n/2`.)
`slice::sort_unstable_by_key` has not been given an analogue, as it is important that unstable sorts are in-place, which is not a property that is guaranteed here. However, this also means that `slice::sort_unstable_by_key` is likely to be slower than `slice::sort_by_cached_key` when the key function does not have negligible complexity. We might want to explore this trade-off further in the future.
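Usage is identical to `sort_by_key`; a minimal example using the lexicographic key from the benchmarks below:

```rust
fn main() {
    let mut v = vec![32, -5, 12, -7, 0];

    // The key is computed once per element and cached, and the cached keys
    // drive a stable sort; `sort_by_key` would call it on every comparison.
    v.sort_by_cached_key(|x| x.to_string());

    // Lexicographic order of the string keys: "-5" < "-7" < "0" < "12" < "32".
    assert_eq!(v, [-5, -7, 0, 12, 32]);
}
```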
Benchmarks (for a vector of 100 `i32`s):
```
# Lexicographic: `|x| x.to_string()`
test bench_sort_by_key ... bench: 112,638 ns/iter (+/- 19,563)
test bench_sort_by_cached_key ... bench: 15,038 ns/iter (+/- 4,814)
# Identity: `|x| *x`
test bench_sort_by_key ... bench: 1,346 ns/iter (+/- 238)
test bench_sort_by_cached_key ... bench: 1,839 ns/iter (+/- 765)
# Power: `|x| x.pow(31)`
test bench_sort_by_key ... bench: 3,624 ns/iter (+/- 738)
test bench_sort_by_cached_key ... bench: 1,997 ns/iter (+/- 311)
# Abs: `|x| x.abs()`
test bench_sort_by_key ... bench: 1,546 ns/iter (+/- 174)
test bench_sort_by_cached_key ... bench: 1,668 ns/iter (+/- 790)
```
(So it seems functions that are single operations do perform slightly worse with this method, but for pretty much any more complex key, you're better off with this optimisation.)
I've definitely found myself using expensive keys in the past and wishing this optimisation was made (e.g. for https://github.com/rust-lang/rust/pull/47415). This feels like both desirable and expected behaviour, at the small cost of slightly more stack allocation and minute degradation in performance for extremely trivial keys.
Resolves #34447.
Fix implicit closure return type generation for libsyntax
The `lambda` function for constructing closures in libsyntax was explicitly setting the return type to `_`, which corresponds to invalid syntax (`|| -> _ x` is not valid without the enclosing braces). This meant the generated code, when printed, was invalid.
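For reference, the closure grammar that makes the old output invalid (plain Rust, nothing libsyntax-specific):

```rust
fn main() {
    let ok_inferred = |x: i32| x + 1;            // no return type: expression body is fine
    let ok_explicit = |x: i32| -> i32 { x + 1 }; // explicit return type: braces are required
    // `|x: i32| -> i32 x + 1` does not parse, which is the shape the old
    // `lambda` helper produced once its `-> _` annotation was printed.
    assert_eq!(ok_inferred(1), ok_explicit(1));
}
```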
I also took the opportunity to slightly improve the generated code for the `RustcEncodable::encode` method for unit structs.
Fixes #42213.
implement minmax intrinsics
This adds the `simd_{fmin,fmax}` intrinsics, which do a vertical (lane-wise) `min`/`max` for floating point vectors that's equivalent to Rust's `min`/`max` for `f32`/`f64`.
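A hedged sketch of the lane-wise semantics, with plain arrays standing in for SIMD vectors (illustrative only, not the intrinsic itself):

```rust
// What `simd_fmin` computes per lane, spelled out with `f32::min`
// (a non-NaN operand is returned when the other lane is NaN).
fn fmin4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    let mut out = [0.0f32; 4];
    for i in 0..4 {
        out[i] = a[i].min(b[i]);
    }
    out
}

fn main() {
    let a = [1.0, f32::NAN, 3.0, -4.0];
    let b = [2.0, 5.0, f32::NAN, -1.0];
    let r = fmin4(a, b);
    assert_eq!(r[0], 1.0);
    assert_eq!(r[1], 5.0); // NaN lane: the non-NaN operand wins
    assert_eq!(r[3], -4.0);
}
```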
It might make sense to make `{f32,f64}::{min,max}` use the `minnum` and `maxnum` intrinsics as well.
---
~~HELP: I need some help with these. Either I should go to sleep or there must be something I'm missing. AFAICT I am calling the `maxnum` builder correctly, yet rustc/LLVM seem to insert a call to `llvm.minnum` there instead...~~ EDIT: Rust's LLVM version is too old :/