This makes it clearer that you can't rely on the layout of these,
which seems worth doing given that the names vaguely suggest that you can
(and the docs only clarify that you can't for Mask, not for the maskNxM aliases).
Within core, `use self::` does not work to import these items.
And because core is not core_simd, neither does the existing `use`.
So, use this quirky hack instead, switching the import on a feature.
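A minimal sketch of what such a feature-switched import can look like (the feature name and paths below are illustrative, not necessarily the ones used here):

```rust
// Illustrative only: choose the import path depending on whether we are built
// as the standalone core_simd crate or embedded inside core.
#[cfg(feature = "as_crate")]
use core_simd::simd::Simd;
#[cfg(not(feature = "as_crate"))]
use core::simd::Simd;
```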
It looks like the last bump had left some remaining cfgs behind -- which had made me
think that the stage0 bump was actually successful. This brings us to a released 1.62
beta, though.
Working through giving example documentation to every Simd function.
The major change in this patch is using doc macros to generate
type-specific examples for each function, using a visually apparent type
constructor. This makes it feel nicer to have twelve separate
documentation entries for reduce_product(), for example.
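A hedged sketch of the general idea (the macro and names below are illustrative, not the actual core::simd doc macros): a macro splices the concrete vector type into the generated documentation, so each alias gets a type-specific example.

```rust
// Illustrative doc macro: stamp the concrete type name into the generated
// #[doc] attributes (real examples would also wrap the snippet in a doctest
// code fence plus hidden setup lines).
macro_rules! doc_with_example {
    { $vec:ident, $example:literal } => {
        #[doc = concat!("Returns the product of all lanes of a `", stringify!($vec), "`.")]
        #[doc = "# Example"]
        #[doc = $example]
        pub fn reduce_product_placeholder() {}
    };
}

doc_with_example! {
    u32x4,
    "let v = u32x4::from_array([1, 2, 3, 4]); assert_eq!(v.reduce_product(), 24);"
}
```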
A simpler variant of rust-lang/portable-simd#206.
* Comparisons are moved to `SimdPartialEq`, `SimdPartialOrd`, and `SimdOrd`. The function names are prefixed with `simd_` to disambiguate them from the regular `PartialEq` etc. functions: with the functions on traits instead of on `Simd` directly, simply shadowing the standard names doesn't work very well (see the sketch after this list).
* Floating point `Ord`-like functions are put into a `SimdFloat` trait. The intention is that eventually (some time after this PR) all floating point functions will be moved from `Simd` to `SimdFloat`, and the same goes for future `SimdInt`/`SimdUint` traits.
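A minimal usage sketch of the trait-based comparisons described above, assuming the `SimdPartialOrd` trait and the `simd_lt` method named in this PR (exact module paths on nightly may differ):

```rust
#![feature(portable_simd)]
use core::simd::{Mask, Simd, SimdPartialOrd};

fn lanes_less_than(a: Simd<i32, 4>, b: Simd<i32, 4>) -> Mask<i32, 4> {
    // `simd_lt` comes from the SimdPartialOrd trait rather than from Simd
    // itself, and the `simd_` prefix keeps it from clashing with PartialOrd::lt.
    a.simd_lt(b)
}
```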
Now that we are thoroughly embedded in libcore, we don't need these on by default.
Indeed, their presence may produce confusing results during integration attempts.
Another approach that fixes rust-lang/portable-simd#223, as an alternative to rust-lang/portable-simd#238.
This adds the `ToBitMask` trait, which is implemented on a vector for each bitmask type it supports. This includes all unsigned integers with enough bits to contain it. The byte array variant has been separated out for now into rust-lang/portable-simd#246 and still requires `generic_const_exprs`, but the integer variants no longer require it and can make it to nightly.
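A usage sketch of the trait as described here (the API has continued to evolve on nightly, so the exact shape may differ):

```rust
#![feature(portable_simd)]
use core::simd::{Mask, ToBitMask};

fn mask_bits(m: Mask<i8, 8>) -> u8 {
    // Each lane becomes one bit of an unsigned integer with enough bits to
    // hold the mask; here 8 lanes fit exactly in a u8.
    m.to_bitmask()
}
```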
* Explain unsafe contracts of core::simd
This permeates the module with remarks on safety for pub methods,
layout of the Simd type, correct use of intrinsics, et cetera.
This is mostly to help others curious about how core::simd works,
including other Rust contributors, `unsafe` library authors,
and eventually ourselves.
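As an illustration of the kind of remark this adds (the function below is illustrative, not actual core::simd code), an unsafe operation can be justified in terms of the documented layout of `Simd`:

```rust
#![feature(portable_simd)]
use core::simd::Simd;

fn load_first_four(slice: &[u32]) -> Simd<u32, 4> {
    assert!(slice.len() >= 4);
    // SAFETY: `Simd<u32, 4>` has the same size as `[u32; 4]`, and we just
    // checked that at least four elements are present, so the read stays in
    // bounds. `read_unaligned` is used because `Simd` may be more aligned
    // than the slice guarantees.
    unsafe { core::ptr::read_unaligned(slice.as_ptr().cast::<Simd<u32, 4>>()) }
}
```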
The way the macro expands, type inference may sometimes conclude
"this is a uint, but it doesn't impl Neg???"
I had also made the "wrong path for intrinsics" error.
These fixes allow integration into libcore.
impl std::simd::StdFloat
This introduces an extension trait to allow use of floating point methods
that need runtime support. It is *excessively* documented because its mere
existence is quite vexing, as the entire thing constitutes a leakage of
implementation details into user observable space. Eventually the entire
thing will ideally be folded into core and restructured to match the rest
of the library, whatever that structure might look like at the time. This
is preferred over the "lang item" path because any energy the lang items
would require (and it would be significant, by Simulacrum's estimation) is
better spent on implementing our libmvec.
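A usage sketch, assuming `StdFloat` exposes the runtime-supported float methods such as `sqrt`:

```rust
#![feature(portable_simd)]
use std::simd::{f32x4, StdFloat};

fn roots(v: f32x4) -> f32x4 {
    // `sqrt` lives on the StdFloat extension trait (in std, not core)
    // precisely because it may lower to a call into the runtime/libm.
    v.sqrt()
}
```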
Refactor ops.rs with wrapping shifts
This approaches reducing macro nesting in a slightly different way. Instead of just flattening details, make one macro apply another. This allows specifying all details up-front in the first macro invocation, making it easier to audit and refactor in the future.
This refactor also has some functional changes. Only one is a true behavior change, however:
- The visible one is that SIMD shifts are now wrapping, not panicking on overflow
- `core::simd` now has a lot more instances of `#[must_use]`, which merely lints
- div/rem now perform their checks as SIMD operations but otherwise behave as before, which should improve performance without being observable
For all other operators, we use wrapping logic where applicable.
This is another case where it applies. Per rust-lang/rust#91237, we may
wish to specify this as the natural behavior of `simd_{shl,shr}`.
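A sketch of the wrapping-shift behavior described above (lane type and values are illustrative):

```rust
#![feature(portable_simd)]
use core::simd::Simd;

fn wrapping_shift_demo() {
    let ones = Simd::<u32, 4>::splat(1);
    let by = Simd::<u32, 4>::from_array([1, 31, 32, 33]);
    // With wrapping shifts the amount is taken modulo the lane width:
    // 32 wraps to 0 and 33 wraps to 1, instead of panicking on overflow.
    assert_eq!((ones << by).to_array(), [2, 1 << 31, 1, 2]);
}
```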
Generic `core::ops` for `Simd<T, _>`
In order to maintain type soundness, we need to be sure we only implement an operation for `Simd<T, _> where T: SimdElement`... and where `T` is also valid for that operation in general. While we could do this purely parametrically, it is more sound to implement the operators directly for the base scalar type arguments and then use type parameters to extend the operators to the "higher order" operations.
This implements that strategy and cleans up `simd::ops` into a few submodules:
- assign.rs: `core::ops::*Assign`
- deref.rs: `core::ops` impls which "deref" borrowed versions of the arguments
- unary.rs: encloses the logic for unary operators on `Simd`, as unary ops are much simpler
This is possible since everything need not be nested in a single maze of macros anymore. The result simplifies the logic and allows reasoning about what operators are valid based on the expressed trait bounds, and also reduces the size of the trait implementation output in rustdoc, for a huge win of 4 MB off the size of `struct.Simd.html`! This addresses a common user complaint, as the original was over 5.5 MB and capable of crashing browsers!
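One practical consequence of expressing validity through trait bounds is that generic code over `Simd<T, _>` can bound on the operator trait directly; a hedged sketch:

```rust
#![feature(portable_simd)]
use core::ops::Add;
use core::simd::{LaneCount, Simd, SimdElement, SupportedLaneCount};

// `+` is available exactly where the element type's vector form implements
// Add, which the last bound states directly.
fn sum<T, const N: usize>(a: Simd<T, N>, b: Simd<T, N>) -> Simd<T, N>
where
    T: SimdElement,
    LaneCount<N>: SupportedLaneCount,
    Simd<T, N>: Add<Output = Simd<T, N>>,
{
    a + b
}
```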
This also carries a fix for a type-inference-related breakage, by removing the autosplatting (vector + scalar binop) impls, as unfortunately the presence of autosplatting was capable of busting type inference. We will likely need to see results from a Crater run before we can understand how to re-land autosplatting.
Unfortunately, splatting impls currently break several crates.
Rust needs more time to review possible mitigations, so
drop the impls for the `impl Add<T> for Simd<T, _>` pattern, for now.
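With those impls dropped, mixed vector/scalar arithmetic needs an explicit splat; a minimal sketch:

```rust
#![feature(portable_simd)]
use core::simd::Simd;

fn add_scalar(v: Simd<f32, 4>, s: f32) -> Simd<f32, 4> {
    // Previously `v + s` could work via the vector + scalar impls; with them
    // removed, the scalar is splatted explicitly.
    v + Simd::splat(s)
}
```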