Commit Graph

632 Commits

Author SHA1 Message Date
bors
18bfe5d8a9 Auto merge of #92048 - Urgau:num-midpoint, r=scottmcm
Add midpoint function for all integers and floating-point numbers

This pull-request adds the `midpoint` function to `{u,i}{8,16,32,64,128,size}`, `NonZeroU{8,16,32,64,size}` and `f{32,64}`.

This new function is analogous to the [C++ midpoint](https://en.cppreference.com/w/cpp/numeric/midpoint) function, and basically computes `(a + b) / 2` with rounding towards ~~`a`~~ negative infinity in the case of integers. Or, simply put: `midpoint(a, b)` is `(a + b) >> 1` as if it were performed in a sufficiently-large signed integral type.

Note that unlike the C++ function, this pull-request does not implement this function on pointers (`*const T` or `*mut T`). This could be implemented in a future pull-request if desired.
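
As a quick illustration of the intended behaviour, here is a hedged usage sketch (the `num_midpoint` feature name is assumed from this PR; values are chosen so the rounding direction is unambiguous):

```rust
// Assumes a nightly toolchain with the (assumed) `num_midpoint` feature enabled.
#![feature(num_midpoint)]

fn main() {
    assert_eq!(u8::midpoint(0, 255), 127);    // (0 + 255) / 2, rounded down
    assert_eq!(u32::midpoint(1, 4), 2);       // (1 + 4) / 2, rounded down
    assert_eq!(f32::midpoint(1.0, 2.0), 1.5); // exact for floats
}
```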

### Implementation

For `f32` and `f64` the implementation is based on the `libcxx` [one](18ab892ff7/libcxx/include/__numeric/midpoint.h (L65-L77)). I originally tried many different approaches, but all of them either failed or left me with a poorer version of the `libcxx` one. Note that `libstdc++` has a very similar one; the Microsoft STL implementation is also basically the same as `libcxx`. Unfortunately, it doesn't seem like a better way exists.

For unsigned integers I created the macro `midpoint_impl!`; this macro has two branches (see the sketch after this list):
 - The first one takes `$SelfT` and is used when there is no unsigned integer with at least double the bits. The code simply uses the formula `a + (b - a) / 2` with the arguments in the correct order and signs to get the right rounding.
 - The second branch is used when a `$WideT` (with at least double the bits of `$SelfT`) is provided; using a wider type means that no overflow can occur, which greatly improves the codegen (no branch and fewer instructions).
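
Roughly, the two branches correspond to the following (plain functions standing in for the macro expansion; illustrative only):

```rust
// Wide-type branch: compute in u64 so `a + b` cannot overflow, then halve.
fn midpoint_u32_wide(a: u32, b: u32) -> u32 {
    ((a as u64 + b as u64) / 2) as u32
}

// Fallback branch (no wider unsigned type available): `a + (b - a) / 2`,
// with the operands ordered so the subtraction cannot underflow.
fn midpoint_u128_narrow(a: u128, b: u128) -> u128 {
    if a <= b { a + (b - a) / 2 } else { b + (a - b) / 2 }
}
```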

For signed integers the code basically forwards the signed numbers to the unsigned version of midpoint by mapping the signed range onto the unsigned one (e.g. `i8`: `[-128; 127]` to `[0; 255]`) and back.
I originally created a version that worked directly on the signed numbers, but the code was "ugly" and hard to understand. Despite this mapping "overhead", the codegen is better than my most optimized version that operated directly on signed integers.
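
The mapping can be sketched like this (an illustrative stand-in for `i8`, not the actual implementation; it follows the rounding described above):

```rust
// Map i8 onto u8 by flipping the sign bit (-128 -> 0, 0 -> 128, 127 -> 255),
// take the unsigned midpoint in a wider type, then map back.
fn midpoint_i8(a: i8, b: i8) -> i8 {
    let ua = (a as u8) ^ 0x80;
    let ub = (b as u8) ^ 0x80;
    let mid = ((ua as u16 + ub as u16) / 2) as u8;
    (mid ^ 0x80) as i8
}
```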

~~Note that in the case of unsigned numbers I tried to be smart and used `#[cfg(target_pointer_width = "64")]` to determine whether using the wide version was better or not by looking at the assembly on godbolt. This was applied to `u32`, `u64` and `usize`, and doesn't change the behavior, only the assembly code generated.~~
2023-05-14 19:33:02 +00:00
bors
81c2459af6 Stabilize const_ptr_read 2023-05-05 20:36:21 +02:00
Deadbeef
e92806704b fix rustdoc and core test 2023-04-29 08:50:56 +00:00
Matthias Krüger
9babe98562
Rollup merge of #110419 - jsoref:spelling-library, r=jyn514
Spelling library

Split per https://github.com/rust-lang/rust/pull/110392

I can squash once people are happy w/ the changes. It's really uncommon for large sets of changes to be perfectly acceptable w/o at least some changes.

I probably won't have time to respond until tomorrow or the next day
2023-04-26 18:51:41 +02:00
Loïc BRANSTETT
bf73234d92 Implement midpoint for all floating point f32 and f64 2023-04-26 10:18:53 +02:00
Loïc BRANSTETT
1a72d7c7c4 Implement midpoint for all signed and unsigned integers 2023-04-26 10:18:53 +02:00
Josh Soref
9cb9346005 Spelling library/
* advance
* aligned
* borrowed
* calculate
* debugable
* debuggable
* declarations
* desugaring
* documentation
* enclave
* ignorable
* initialized
* iterator
* kaboom
* monomorphization
* nonexistent
* optimizer
* panicking
* process
* reentrant
* rustonomicon
* the
* uninitialized

Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2023-04-26 02:10:22 -04:00
DrMeepster
a642563d49 major test improvements 2023-04-21 02:45:48 -07:00
DrMeepster
2bcb018253 fmt 2023-04-21 02:14:03 -07:00
DrMeepster
b92c2f792c fix incorrect param env in dead code lint 2023-04-21 02:14:03 -07:00
DrMeepster
b95852b93c test improvements 2023-04-21 02:14:03 -07:00
DrMeepster
511e457c4b offset_of 2023-04-21 02:14:02 -07:00
John Millikin
4e2797dd76 Implement Neg for signed non-zero integers.
Negating a non-zero integer currently requires unpacking to a
primitive and re-wrapping. Since negation of non-zero signed
integers always produces a non-zero result, it is safe to
implement `Neg` for `NonZeroI{N}`.

The new `impl` is marked as stable because trait implementations
for two stable types can't be marked unstable.
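
A small usage sketch of what the new `impl` enables (illustrative only):

```rust
use std::num::NonZeroI32;

fn main() {
    let x = NonZeroI32::new(5).unwrap();
    // With `Neg` implemented, no unpacking to i32 and re-wrapping is needed.
    let y = -x;
    assert_eq!(y.get(), -5);
}
```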
2023-04-20 14:27:29 +09:00
bors
3a5c8e91f0 Auto merge of #110393 - fee1-dead-contrib:rm-const-traits, r=oli-obk
Rm const traits in libcore

See [zulip thread](https://rust-lang.zulipchat.com/#narrow/stream/146212-t-compiler.2Fconst-eval/topic/.60const.20Trait.60.20removal.20or.20rework)

* [x] Bless ui tests
* [ ] Re constify some unstable functions with workarounds if they are needed
2023-04-19 13:03:40 +00:00
Deadbeef
4c6ddc036b fix library and rustdoc tests 2023-04-16 11:38:52 +00:00
Deadbeef
76dbe29104 rm const traits in libcore 2023-04-16 06:49:27 +00:00
est31
77821b2eb9 Remove unused unused_macros
The macro is always used
2023-04-16 08:35:39 +02:00
Tobias Decking
65c9c79d3f
remove obsolete test 2023-04-10 21:57:45 +02:00
Tobias Decking
0f96c71792 Improve the floating point parser in dec2flt.
* Remove all remaining traces of unsafe.
* Put `parse_8digits` inside a loop (see the sketch below).
* Rework parsing of inf/NaN values.
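
As a rough illustration of the chunked, loop-based digit parsing mentioned above (a plain-Rust sketch under assumed simplifications, not the actual dec2flt code):

```rust
// Accumulate decimal digits, handling 8-digit chunks inside a loop
// instead of as unrolled special cases.
fn parse_8digits(chunk: &[u8; 8]) -> u64 {
    chunk.iter().fold(0u64, |acc, &b| acc * 10 + (b - b'0') as u64)
}

fn parse_digits(mut s: &[u8]) -> u64 {
    let mut value = 0u64;
    while s.len() >= 8 {
        let chunk: [u8; 8] = s[..8].try_into().unwrap();
        value = value * 100_000_000 + parse_8digits(&chunk);
        s = &s[8..];
    }
    for &b in s {
        value = value * 10 + (b - b'0') as u64;
    }
    value
}
```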
2023-04-10 00:47:08 +02:00
Deadbeef
04a5d61161 Revert "Mark DoubleEndedIterator as #[const_trait] using rustc_do_not_const_check, implement const Iterator and DoubleEndedIterator for Range."
This reverts commit 8a9d6bf4fd.
2023-04-08 08:18:29 +00:00
Trevor Gross
dc4ba57566 Stabilize a portion of 'once_cell'
Move items not part of this stabilization to 'lazy_cell' or 'once_cell_try'
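
For context, a minimal usage sketch of the stabilized surface (assuming `std::sync::OnceLock::get_or_init` is among the stabilized items):

```rust
use std::sync::OnceLock;

static GREETING: OnceLock<String> = OnceLock::new();

fn greeting() -> &'static str {
    // Initialized at most once, on first access.
    GREETING.get_or_init(|| "hello".to_string())
}

fn main() {
    assert_eq!(greeting(), "hello");
}
```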
2023-03-29 18:04:44 -04:00
bors
cdbbce0a9e Auto merge of #108095 - soc:drop-contains, r=Amanieu
Drop unstable `Option::contains`, `Result::contains`, `Result::contains_err`

This is a proposal to drop the three functions `Option::contains`, `Result::contains` and `Result::contains_err`.

The discovery of `Option::is_some_with`/`Result::is_ok_with`/`Result::is_err_with` in https://github.com/rust-lang/rust/pull/93051 obviates the need for these methods (non-stabilization tracked in https://github.com/rust-lang/rust/issues/62358).
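
For illustration, roughly how such call sites can be written with existing combinators (hypothetical snippets, not taken from the PR):

```rust
fn main() {
    let opt: Option<i32> = Some(5);
    // instead of the unstable `opt.contains(&5)`:
    assert!(opt == Some(5));
    assert!(opt.map_or(false, |v| v == 5));

    let res: Result<i32, &str> = Ok(5);
    // instead of the unstable `res.contains(&5)`:
    assert!(res.map_or(false, |v| v == 5));
}
```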

An additional benefit of this change is that it avoids spurious error messages in IDEs when `contains` is supplied by a third-party library:
![option-result-unstable](https://user-images.githubusercontent.com/42493/219127961-13cb559e-6ee8-4449-8dc9-d28d07270ad5.png)
2023-03-28 23:20:30 +00:00
The 8472
e29b27b4a4 replace advance_by returning usize with Result<(), NonZeroUsize> 2023-03-27 16:03:14 +02:00
The 8472
69db91b8b2 Change advance(_back)_by to return usize instead of Result<(), usize>
A successful advance is now signalled by returning `0`, and other values now represent the remaining number
of steps that couldn't be advanced, as opposed to the number of steps that were advanced during a partial advance_by.

This simplifies adapters a bit, replacing some `match`/`if` with arithmetic. Whether this is beneficial overall depends
on whether `advance_by` is mostly used as a building-block for other iterator methods and adapters or whether
we also see uses by users where `Result` might be more useful.
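
A sketch of the semantics described above, with a free function standing in for the trait method (illustrative only):

```rust
// Advance `iter` by up to `n` steps; return 0 on success, otherwise the
// number of steps that could not be taken.
fn advance_by_sketch<I: Iterator>(iter: &mut I, n: usize) -> usize {
    for remaining in (1..=n).rev() {
        if iter.next().is_none() {
            return remaining;
        }
    }
    0
}

fn main() {
    let mut it = 0..3;
    assert_eq!(advance_by_sketch(&mut it, 2), 0); // fully advanced
    assert_eq!(it.next(), Some(2));
    assert_eq!(advance_by_sketch(&mut it, 5), 5); // nothing left to advance
}
```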
2023-03-27 14:11:49 +02:00
onestacked
8a9d6bf4fd Mark DoubleEndedIterator as #[const_trait] using rustc_do_not_const_check, implement const Iterator and DoubleEndedIterator for Range. 2023-03-18 09:17:37 +01:00
Mara Bos
96d252160e Update format_args!() test to account for inlining. 2023-03-16 11:21:50 +01:00
est31
999405059c Match unmatched backticks in library/ 2023-03-03 03:03:29 +01:00
Ralf Jung
229aef1f7d add missing feature in core/tests 2023-02-28 10:07:57 +01:00
Linus Färnstrand
6cb34492a6 Move IpAddr and SocketAddr to core 2023-02-26 13:50:08 +01:00
soc
3aa9f76a3a
Remove #![feature(option_result_contains)] from library/core/tests/lib.rs 2023-02-15 19:30:02 +00:00
bors
2d91939bb7 Auto merge of #107634 - scottmcm:array-drain, r=thomcc
Improve the `array::map` codegen

The `map` method on arrays [is documented as sometimes performing poorly](https://doc.rust-lang.org/std/primitive.array.html#note-on-performance-and-stack-usage), and after [a question on URLO](https://users.rust-lang.org/t/try-trait-residual-o-trait-and-try-collect-into-array/88510?u=scottmcm) prompted me to take another look at the core [`try_collect_into_array`](7c46fb2111/library/core/src/array/mod.rs (L865-L912)) function, I had some ideas that ended up working better than I'd expected.

There are three main ideas in here, split over three commits:
1. Don't use `array::IntoIter` when we can avoid it, since that seems to not get SRoA'd, meaning that every step writes things like loop counters into the stack unnecessarily
2. Don't return arrays in `Result`s unnecessarily, as that doesn't seem to optimize away even with `unwrap_unchecked` (perhaps because it needs to get moved into a new LLVM type to account for the discriminant)
3. Don't distract LLVM with all the `Option` dances when we know for sure we have enough items (like in `map` and `zip`).  This one's a larger commit as to do it I ended up adding a new `pub(crate)` trait, but hopefully those changes are still straight-forward.

(No libs-api changes; everything should be completely implementation-detail-internal.)

It's still not completely fixed -- I think it needs pcwalton's `memcpy` optimizations still (#103830) to get further -- but this seems to go much better than before.  And the remaining `memcpy`s are just `transmute`-equivalent (`[T; N] -> ManuallyDrop<[T; N]>` and `[MaybeUninit<T>; N] -> [T; N]`), so hopefully those will be easier to remove with LLVM16 than the previous subobject copies 🤞

r? `@thomcc`

As a simple example, this test
```rust
pub fn long_integer_map(x: [u32; 64]) -> [u32; 64] {
    x.map(|x| 13 * x + 7)
}
```
On nightly <https://rust.godbolt.org/z/xK7548TGj> takes `sub rsp, 808`
```llvm
start:
  %array.i.i.i.i = alloca [64 x i32], align 4
  %_3.sroa.5.i.i.i = alloca [65 x i32], align 4
  %_5.i = alloca %"core::iter::adapters::map::Map<core::array::iter::IntoIter<u32, 64>, [closure@/app/example.rs:2:11: 2:14]>", align 8
```
(and yes, that's a 6**5**-element array `alloca` despite 6**4**-element input and output)

But with this PR it's only `sub rsp, 520`
```llvm
start:
  %array.i.i.i.i.i.i = alloca [64 x i32], align 4
  %array1.i.i.i = alloca %"core::mem::manually_drop::ManuallyDrop<[u32; 64]>", align 4
```

Similarly, the loop it emits on nightly is scalar-only and horrifying
```nasm
.LBB0_1:
        mov     esi, 64
        mov     edi, 0
        cmp     rdx, 64
        je      .LBB0_3
        lea     rsi, [rdx + 1]
        mov     qword ptr [rsp + 784], rsi
        mov     r8d, dword ptr [rsp + 4*rdx + 528]
        mov     edi, 1
        lea     edx, [r8 + 2*r8]
        lea     r8d, [r8 + 4*rdx]
        add     r8d, 7
.LBB0_3:
        test    edi, edi
        je      .LBB0_11
        mov     dword ptr [rsp + 4*rcx + 272], r8d
        cmp     rsi, 64
        jne     .LBB0_6
        xor     r8d, r8d
        mov     edx, 64
        test    r8d, r8d
        jne     .LBB0_8
        jmp     .LBB0_11
.LBB0_6:
        lea     rdx, [rsi + 1]
        mov     qword ptr [rsp + 784], rdx
        mov     edi, dword ptr [rsp + 4*rsi + 528]
        mov     r8d, 1
        lea     esi, [rdi + 2*rdi]
        lea     edi, [rdi + 4*rsi]
        add     edi, 7
        test    r8d, r8d
        je      .LBB0_11
.LBB0_8:
        mov     dword ptr [rsp + 4*rcx + 276], edi
        add     rcx, 2
        cmp     rcx, 64
        jne     .LBB0_1
```

whereas with this PR it's unrolled and vectorized
```nasm
	vpmulld	ymm1, ymm0, ymmword ptr [rsp + 64]
	vpaddd	ymm1, ymm1, ymm2
	vmovdqu	ymmword ptr [rsp + 328], ymm1
	vpmulld	ymm1, ymm0, ymmword ptr [rsp + 96]
	vpaddd	ymm1, ymm1, ymm2
	vmovdqu	ymmword ptr [rsp + 360], ymm1
```
(though sadly still stack-to-stack)
2023-02-13 10:18:48 +00:00
Matthias Krüger
454ae9fb8b
Rollup merge of #107954 - RalfJung:tree-borrows-fix, r=m-ou-se
avoid mixing accesses of ptrs derived from a mutable ref and parent ptrs

``@Vanille-N`` is working on a successor for Stacked Borrows. It will mostly accept strictly more code than Stacked Borrows did, with one exception: the following pattern no longer works.
```rust
let mut root = 6u8;
let mref = &mut root;
let ptr = mref as *mut u8;
*ptr = 0; // Write
assert_eq!(root, 0); // Parent Read
*ptr = 0; // Attempted Write
```
This worked in Stacked Borrows kind of by accident: when doing the "parent read", under SB we Disable `mref`, but the raw ptrs derived from it remain usable. The fact that we can still use the "children" of a reference that is no longer usable is quite nasty and leads to some undesirable effects (in particular it is the major blocker for resolving https://github.com/rust-lang/unsafe-code-guidelines/issues/257). So in Tree Borrows we no longer do that; instead, reading from `root` makes `mref` and all its children read-only.

Due to other improvements in Tree Borrows, the entire Miri test suite still passes with this new behavior, and even the entire libcore and liballoc test suite, except for these 2 cases this PR fixes. Both of these involve code where the programmer wrote `&mut` but then used pointers derived from that reference in ways that alias with the parent pointer, which arguably is violating uniqueness. They are fixed by properly using raw pointers throughout.
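
As an illustration, one way to express that pattern while staying within the new rules (a sketch, not the exact change made in this PR) is to derive and use raw pointers throughout:

```rust
fn main() {
    let mut root = 6u8;
    // Derive a raw pointer without keeping a long-lived `&mut` around.
    let ptr = std::ptr::addr_of_mut!(root);
    unsafe { *ptr = 0 };            // write
    assert_eq!(unsafe { *ptr }, 0); // read through the same pointer, not the parent
    unsafe { *ptr = 0 };            // second write is still allowed
}
```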
2023-02-12 22:29:49 +01:00
Ralf Jung
c3a2e7a809 avoid mixing accesses of ptrs derived from a mutable ref and parent ptrs 2023-02-12 15:16:27 +01:00
bors
adb4bfd25d Auto merge of #105671 - lukas-code:depreciate-char, r=scottmcm
Use associated items of `char` instead of freestanding items in `core::char`

The associated functions and constants on `char` have been stable since 1.52 and the freestanding items have been soft-deprecated since 1.62 (https://github.com/rust-lang/rust/pull/95566). This PR ~~marks them as "deprecated in future", similar to the integer and floating point modules (`core::{i32, f32}` etc)~~ replaces all uses of `core::char::*` with `char::*` to prepare for future deprecation of `core::char::*`.
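
For illustration, the kind of mechanical replacement this performs (a hedged sketch; `from_u32` is just one example item):

```rust
fn main() {
    // Before: freestanding item in the core::char module.
    let a = core::char::from_u32(0x2764);
    // After: associated function on the char primitive itself.
    let b = char::from_u32(0x2764);
    assert_eq!(a, b);
}
```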
2023-02-12 11:09:06 +00:00
Scott McMurray
5bc328fdef Allow canonicalizing the array::map loop in trusted cases 2023-02-04 16:44:51 -08:00
Scott McMurray
5a7342c3dd Stop using into_iter in array::map 2023-02-04 16:41:35 -08:00
Nikolai Vazquez
734a91358b Remove unnecessary &format!
These were likely from before the `PartialEq<str>` impl for `&String`.
2023-01-21 22:06:42 -05:00
André Vennberg
0b35f448f8 Remove various double spaces in source comments. 2023-01-14 17:22:04 +01:00
Lukas Markeffsky
76e216f29b Use associated items of char instead of freestanding items in core::char 2023-01-14 11:58:41 +01:00
Daniel Henry-Mantilla
48b7e2a5b9
Stabilize ::{core,std}::pin::pin! 2023-01-11 14:09:14 -08:00
nils
c962b07ed3
Rollup merge of #106570 - Xaeroxe:div-duration-tests, r=JohnTitor
add tests for div_duration_* functions

Per https://github.com/rust-lang/rust/issues/63139#issuecomment-817070719

this adds unit tests for the functions that will hopefully effectively demonstrate that `div_duration` is ready to be stabilized.
2023-01-11 17:30:54 +01:00
Jacob Kiesel
9fd744b3e3 add tests for div_duration_* functions 2023-01-07 11:05:33 -07:00
Thom Chiovoloni
a4bf36e87b
Update rand in the stdlib tests, and remove the getrandom feature from it 2023-01-04 14:52:41 -08:00
David Tolnay
257e766c0c
Remove test of static Context
Context is no longer Sync so this doesn't work.

    error[E0277]: `*mut ()` cannot be shared between threads safely
      --> library/core/tests/task.rs:24:21
       |
    24 |     static CONTEXT: Context<'static> = Context::from_waker(&WAKER);
       |                     ^^^^^^^^^^^^^^^^ `*mut ()` cannot be shared between threads safely
       |
       = help: within `Context<'static>`, the trait `Sync` is not implemented for `*mut ()`
       = note: required because it appears within the type `PhantomData<*mut ()>`
       = note: required because it appears within the type `Context<'static>`
       = note: shared static variables must have a type that implements `Sync`
2023-01-02 10:33:23 -08:00
jonathanCogan
78691e3589 Update paths in comments. 2022-12-30 14:00:42 +01:00
jonathanCogan
db47071df2 Replace libstd, libcore, liballoc in line comments. 2022-12-30 14:00:42 +01:00
Pietro Albini
11191279b7 Update bootstrap cfg 2022-12-28 09:18:43 -05:00
Michael Goulet
4b668a1fee
Rollup merge of #103718 - matklad:infer-lazy, r=dtolnay
More inference-friendly API for lazy

The signature for new was

```
fn new<F>(f: F) -> Lazy<T, F>
```

Notably, with `F` unconstrained, `T` can be literally anything, and just `let _ = Lazy::new(|| 92)` would not typecheck.

This historically was a necessity -- `new` is a `const` function, and it couldn't have any bounds. Today, though, we can move `new` under the `F: FnOnce() -> T` bound, which gives the compiler enough data to infer the type of `T` from the closure.
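
A minimal sketch of why the bound helps inference (simplified stand-in types, not the actual `Lazy` code):

```rust
struct Lazy<T, F> {
    init: Option<F>,
    value: Option<T>,
}

impl<T, F: FnOnce() -> T> Lazy<T, F> {
    // With `F: FnOnce() -> T` on the impl, `T` is determined by the
    // closure's return type, so `Lazy::new(|| 92)` infers `T = i32`.
    const fn new(init: F) -> Self {
        Lazy { init: Some(init), value: None }
    }
}

fn main() {
    let _ = Lazy::new(|| 92); // typechecks: T inferred from the closure
}
```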
2022-12-27 12:33:33 -08:00
Michal Nazarewicz
28162ad970 char: µoptimise UTF-16 surrogates decoding
According to Godbolt¹, on x86_64 using a binary AND produces slightly
better code than using subtraction.  Readability of both is pretty
much equivalent, so we might as well just use the shorter option.

¹ https://rust.godbolt.org/z/9jM3ejbMx
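
For reference, the two equivalent formulations look roughly like this (an illustrative sketch, not the actual library code):

```rust
// Decode a UTF-16 surrogate pair into a scalar value.
// Subtraction-based: strip the surrogate bases explicitly.
fn decode_sub(high: u16, low: u16) -> u32 {
    0x10000 + (((high as u32 - 0xD800) << 10) | (low as u32 - 0xDC00))
}

// AND-based: for valid surrogates the low 10 bits carry the same payload.
fn decode_and(high: u16, low: u16) -> u32 {
    0x10000 + ((((high as u32) & 0x3FF) << 10) | ((low as u32) & 0x3FF))
}

fn main() {
    // U+1F600 encodes as the surrogate pair (0xD83D, 0xDE00).
    assert_eq!(decode_sub(0xD83D, 0xDE00), 0x1F600);
    assert_eq!(decode_and(0xD83D, 0xDE00), 0x1F600);
}
```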
2022-12-23 14:15:33 +01:00
Scott McMurray
9d68a1a74c Tune RepeatWith::try_fold and Take::for_each and Vec::extend_trusted 2022-11-24 19:14:19 -08:00