Add a `try_collect()` helper method to `Iterator`
Implement `Iterator::try_collect()` as a helper around `Iterator::collect()` as discussed [here](https://internals.rust-lang.org/t/idea-fallible-iterator-mapping-with-try-map/15715/5?u=a.lafrance).
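For context, a hedged sketch of the intended call pattern (the feature-gate name used below is an assumption):

```rust
// Minimal sketch, assuming the gate is named `iterator_try_collect`.
#![feature(iterator_try_collect)]

fn main() {
    // Each item is a Result, and try_collect short-circuits on the first Err.
    let parsed: Result<Vec<i32>, _> = ["1", "2", "x"]
        .into_iter()
        .map(str::parse::<i32>)
        .try_collect();
    assert!(parsed.is_err());
}
```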
First time contributor so definitely open to any feedback about my implementation! Specifically wondering if I should open a tracking issue for the unstable feature I introduced.
As the main participant in the internals discussion: r? `@scottmcm`
Add documentation to more `From::from` implementations.
For users looking at documentation through IDE popups, this gives them relevant information rather than the generic trait documentation wording “Performs the conversion”. For users reading the documentation for a specific type for any reason, this informs them when the conversion may allocate or copy significant memory versus when it is always a move or cheap copy.
Notes on specific cases:
* The new documentation for `From<T> for T` explains that it is not a conversion at all.
* Also documented `impl<T, U> Into<U> for T where U: From<T>`, the other central blanket implementation of conversion.
* The new documentation for construction of maps and sets from arrays of keys mentions the handling of duplicates (see the sketch after this list). Future work could be to do this for *all* code paths that convert an iterable to a map or set.
* I did not add documentation to conversions of a specific error type to a more general error type.
* I did not add documentation to unstable code.
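As a hedged illustration of the duplicate handling mentioned above (when building a map from an array, later entries overwrite earlier ones):

```rust
use std::collections::HashMap;

fn main() {
    // Both entries share the key 1; the later one wins.
    let map = HashMap::from([(1, "first"), (1, "second")]);
    assert_eq!(map[&1], "second");
}
```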
This change was prepared by searching for the text "From<... for" and so may have missed some cases that for whatever reason did not match. I also looked for `Into` impls but did not find any worth documenting by the above criteria.
Destabilize cfg(target_has_atomic_load_store = ...)
This was not intended to be stabilized yet.
This keeps the cfg_target_has_atomic feature gate name since compiler-builtins otherwise depends on it, and I'd rather not try to manage a bump across a crates.io published repository given the time-sensitivity here (we need to land this quickly to avoid a beta backport).
Closes https://github.com/rust-lang/rust/issues/32976
r? `@Amanieu`
Make [u8]::cmp implementation branchless
The current implementation generates rather ugly assembly code, branching when the common parts are equal. By performing the comparison of the lengths upfront using a subtraction, the assembly gets much prettier: https://godbolt.org/z/4e5fnEKGd.
This will probably not impact speed too much, as the expensive part is in most cases the `memcmp`, but it sure looks better (I'm porting a sorting algorithm currently, and that branch just bothered me).
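For readers who want the gist without the godbolt link, a minimal sketch of the idea (not the actual libcore code; the slice comparison below stands in for the `memcmp` call):

```rust
use std::cmp::Ordering;

fn cmp_u8_slices(a: &[u8], b: &[u8]) -> Ordering {
    // The length ordering is computed up front; it is only consulted when the
    // common prefix compares equal, so there is no separate "lengths equal" branch.
    let by_len = a.len().cmp(&b.len());
    let common = a.len().min(b.len());
    match a[..common].cmp(&b[..common]) {
        Ordering::Equal => by_len,
        prefix => prefix,
    }
}

fn main() {
    assert_eq!(cmp_u8_slices(b"abc", b"abcd"), Ordering::Less);
    assert_eq!(cmp_u8_slices(b"abd", b"abc"), Ordering::Greater);
}
```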
Since `decl_macro`s and/or `Span::def_site()` is deemed quite unstable,
no public-facing macro that relies on it can hope to be, itself, stabilized.
We circumvent the issue by no longer relying on field privacy for safety and,
instead, relying on an unstable feature-gate to act as the gatekeeper for
non-users of the macro (thanks to `allow_internal_unstable`).
This is technically not correct, since a `nightly` user could still enable the
feature and cause unsoundness with it. In other words, the feature-gate that
gates access to the field acts, technically unsoundly but in practice, as an
`unsafe` marker. Hence the `unsafe` in its name.
Back to the macro: we go back to `macro_rules!` / `mixed_site()`-span rules by
declaring the `decl_macro` as `semitransparent`, which is a hack to basically get
`pub macro_rules!`.
Co-Authored-By: Mara Bos <m-ou.se@m-ou.se>
Stabilise inherent_ascii_escape (FCP in #77174)
Implements #77174, which completed its FCP.
This does *not* deprecate any existing methods or structs, as that is tracked in #93887. That said, people should prefer using `u8::escape_ascii` to `std::ascii::escape_default`.
More practical examples for `Option::and_then` & `Result::and_then`
To be blatantly honest, I think the current example given for `Option::and_then` is objectively terrible. (No offence to whoever wrote them initially.)
```rust
fn sq(x: u32) -> Option<u32> { Some(x * x) }
fn nope(_: u32) -> Option<u32> { None }
assert_eq!(Some(2).and_then(sq).and_then(sq), Some(16));
assert_eq!(Some(2).and_then(sq).and_then(nope), None);
assert_eq!(Some(2).and_then(nope).and_then(sq), None);
assert_eq!(None.and_then(sq).and_then(sq), None);
```
Current example:
- does not demonstrate that `and_then` converts `Option<T>` to `Option<U>`
- is far removed from any realistic code
- generally just causes more confusion than it helps
So I replaced them with two blocks:
- the first one shows basic usage (including the type conversion)
- the second one shows an example of typical usage
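For illustration, a hedged sketch in the spirit of those two blocks (not necessarily the exact examples that landed):

```rust
fn main() {
    // Basic usage: the closure returns an Option of a different type.
    let len: Option<usize> = Some("hello").and_then(|s| s.len().checked_sub(2));
    assert_eq!(len, Some(3));

    // Typical usage: chaining fallible lookups.
    let arr_2d = [["A0", "A1"], ["B0", "B1"]];
    assert_eq!(arr_2d.get(0).and_then(|row| row.get(1)), Some(&"A1"));
    assert_eq!(arr_2d.get(3).and_then(|row| row.get(0)), None);
}
```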
Same thing with `Result::and_then`.
Hopefully this helps with clarity.
Change `ResultShunt` to be generic over `Try`
Just a refactor (and rename) for now, so it's not `Result`-specific.
This could be used for a future `Iterator::try_collect`, or similar, but anything like that is left for a future PR.
Add {floor,ceil}_char_boundary methods to str
This is technically already used internally by the standard library in the form of `truncate_to_char_boundary`.
Essentially these are two building blocks to allow for approximate string truncation, where you want to cut off the string at "approximately" a given length in bytes but don't know exactly where the character boundaries lie. It's also a good candidate for the standard library as it can easily be done naively, but would be difficult to properly optimise. Although the existing code that's done in error messages is done naively, this code will explicitly only check a window of 4 bytes since we know that a boundary must lie in that range, and because it will make it possible to vectorise.
Although these methods don't take into account graphemes or other properties, they would still be a required building block for splitting that takes those into account. For example, if you wanted to split at a grapheme boundary, you could take your approximate splitting point and then determine the graphemes immediately following and preceding the split. If you then notice that these two graphemes could be merged, you can decide to either include the whole grapheme or exclude it depending on whether you decide splitting should shrink or expand the string.
This takes the most conservative approach and just offers the raw indices to the user, and they can decide how to use them. That way, the methods are as useful as possible despite having as few methods as possible.
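As a hedged, naive sketch of the described semantics (the real methods only need to inspect a small window of bytes, and are unstable):

```rust
fn floor_char_boundary(s: &str, index: usize) -> usize {
    if index >= s.len() {
        s.len()
    } else {
        // Scan backwards; a UTF-8 boundary must occur within at most 3 bytes.
        (0..=index).rev().find(|&i| s.is_char_boundary(i)).unwrap()
    }
}

fn main() {
    let s = "héllo";
    // 'é' spans bytes 1..3, so byte index 2 floors down to 1.
    assert_eq!(floor_char_boundary(s, 2), 1);
    assert_eq!(floor_char_boundary(s, 3), 3);
}
```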
(Note: I'll add some tests and a tracking issue if it's decided that this is worth including.)
Impl {Add,Sub,Mul,Div,Rem,BitXor,BitOr,BitAnd}Assign<$t> for Wrapping<$t> for rust 1.60.0
Tracking issue #93204
This is about adding basic integer operations to the `Wrapping` type:
```rust
let mut value = Wrapping(2u8);
value += 3u8;
value -= 1u8;
value *= 2u8;
value /= 2u8;
value %= 2u8;
value ^= 255u8;
value |= 123u8;
value &= 2u8;
```
Because this adds stable impls on a stable type, it runs into the following issue if an `#[unstable(...)]` attribute is used:
```
an `#[unstable]` annotation here has no effect
note: see issue #55436 <https://github.com/rust-lang/rust/issues/55436> for more information
```
This means - if I understood this correctly - the new impls have to be stabilized instantly.
Which in turn means this PR has to kick off an FCP on the tracking issue as well?
This impl is analogous to 1c0dc1810d (#92356) for the `Saturating` type. ``@dtolnay`` ``@Mark-Simulacrum``
Fix invalid special casing of the unreachable! macro
This pull request fixes an invalid special casing of the `unreachable!` macro in the same way the `panic!` macro was fixed: by adding two new, edition-dependent, internal-only macros `unreachable_2015` and `unreachable_2021`, and by turning `unreachable!` into a built-in macro that dispatches between them. This logic is stolen from the `panic!` macro.
~~This pull-request also adds an internal feature `format_args_capture_non_literal` that allows capturing arguments from formatted string that expanded from macros. The original RFC #2795 mentioned this as a future possibility. This feature is [required](https://github.com/rust-lang/rust/issues/92137#issuecomment-1018630522) because of concatenation that needs to be done inside the macro:~~
```rust
$crate::concat!("internal error: entered unreachable code: ", $fmt)
```
**In summary** the new behavior for the `unreachable!` macro with this pr is:
Edition 2021:
```rust
let x = 5;
unreachable!("x is {x}");
```
```
internal error: entered unreachable code: x is 5
```
Edition <= 2018:
```rust
let x = 5;
unreachable!("x is {x}");
```
```
internal error: entered unreachable code: x is {x}
```
Also note that the changes in this PR are **insta-stable** and **breaking changes**, but this is considered a [bug](https://github.com/rust-lang/rust/issues/92137#issuecomment-998441613).
If someone could start a perf run and then a crater run this would be appreciated.
Fixes https://github.com/rust-lang/rust/issues/92137
Rollup of 2 pull requests
Successful merges:
- #90998 (Require const stability attribute on all stable functions that are `const`)
- #93489 (Mark the panic_no_unwind lang item as nounwind)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Mark the panic_no_unwind lang item as nounwind
This has 2 effects:
- It helps LLVM when inlining since it doesn't need to generate landing pads for `panic_no_unwind`.
- It makes it sound for a panic handler to unwind even if `PanicInfo::can_unwind` returns false. This will simply cause another panic once the unwind tries to go past the `panic_no_unwind` lang item. Eventually this will cause a stack overflow, which is safe.
Require const stability attribute on all stable functions that are `const`
This PR requires all stable functions (of all kinds) that are `const fn` to have a `#[rustc_const_stable]` or `#[rustc_const_unstable]` attribute. Stability was previously implied if omitted; a follow-up PR is planned to change the fallback to be unstable.
Optimize `core::str::Chars::count`
I wrote this a while ago after seeing this function as a bottleneck in a profile, but never got around to contributing it. I saw it again, and so here it is. The implementation is fairly complex, but I tried to explain what's happening at both a high level (in the header comment for the file), and in line comments in the impl. Hopefully it's clear enough.
This implementation (`case00_cur_libcore` in the benchmarks below) is somewhat consistently around 4x to 5x faster than the old implementation (`case01_old_libcore` in the benchmarks below), for a wide variety of workloads, without regressing performance on any of the workload sizes I've tried.
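To give a rough idea of the approach, here is a hedged, simplified sketch (not the code in this PR): the char count of a UTF-8 string equals the number of non-continuation bytes, and those can be counted a word at a time.

```rust
fn char_count_swar(s: &str) -> usize {
    const WORD_BYTES: usize = core::mem::size_of::<usize>();
    const LSB: usize = usize::from_ne_bytes([0x01; WORD_BYTES]);

    let bytes = s.as_bytes();
    // SAFETY: reinterpreting initialised u8s as usize words is always valid.
    let (prefix, words, suffix) = unsafe { bytes.align_to::<usize>() };
    let is_non_cont = |&&b: &&u8| (b as i8) >= -0x40; // not a 0b10xxxxxx byte

    let mut count = prefix.iter().filter(is_non_cont).count()
        + suffix.iter().filter(is_non_cont).count();
    for &w in words {
        // Per byte: bit7 & !bit6 marks a continuation byte; count those marks
        // for the whole word at once and subtract from the byte count.
        let cont = (w >> 7) & !(w >> 6) & LSB;
        count += WORD_BYTES - cont.count_ones() as usize;
    }
    count
}

fn main() {
    assert_eq!(char_count_swar("héllo, wörld"), "héllo, wörld".chars().count());
}
```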
I also improved the benchmarks for this code, so that they explicitly check text in different languages and of different sizes (err, the cross product of language x size). The results of the benchmarks are here:
<details>
<summary>Benchmark results</summary>
<pre>
test str::char_count::emoji_huge::case00_cur_libcore ... bench: 20,216 ns/iter (+/- 3,673) = 17931 MB/s
test str::char_count::emoji_huge::case01_old_libcore ... bench: 108,851 ns/iter (+/- 12,777) = 3330 MB/s
test str::char_count::emoji_huge::case02_iter_increment ... bench: 329,502 ns/iter (+/- 4,163) = 1100 MB/s
test str::char_count::emoji_huge::case03_manual_char_len ... bench: 223,333 ns/iter (+/- 14,167) = 1623 MB/s
test str::char_count::emoji_large::case00_cur_libcore ... bench: 293 ns/iter (+/- 6) = 19331 MB/s
test str::char_count::emoji_large::case01_old_libcore ... bench: 1,681 ns/iter (+/- 28) = 3369 MB/s
test str::char_count::emoji_large::case02_iter_increment ... bench: 5,166 ns/iter (+/- 85) = 1096 MB/s
test str::char_count::emoji_large::case03_manual_char_len ... bench: 3,476 ns/iter (+/- 62) = 1629 MB/s
test str::char_count::emoji_medium::case00_cur_libcore ... bench: 48 ns/iter (+/- 0) = 14750 MB/s
test str::char_count::emoji_medium::case01_old_libcore ... bench: 217 ns/iter (+/- 4) = 3262 MB/s
test str::char_count::emoji_medium::case02_iter_increment ... bench: 642 ns/iter (+/- 7) = 1102 MB/s
test str::char_count::emoji_medium::case03_manual_char_len ... bench: 445 ns/iter (+/- 3) = 1591 MB/s
test str::char_count::emoji_small::case00_cur_libcore ... bench: 18 ns/iter (+/- 0) = 3777 MB/s
test str::char_count::emoji_small::case01_old_libcore ... bench: 23 ns/iter (+/- 0) = 2956 MB/s
test str::char_count::emoji_small::case02_iter_increment ... bench: 66 ns/iter (+/- 2) = 1030 MB/s
test str::char_count::emoji_small::case03_manual_char_len ... bench: 29 ns/iter (+/- 1) = 2344 MB/s
test str::char_count::en_huge::case00_cur_libcore ... bench: 25,909 ns/iter (+/- 39,260) = 13299 MB/s
test str::char_count::en_huge::case01_old_libcore ... bench: 102,887 ns/iter (+/- 3,257) = 3349 MB/s
test str::char_count::en_huge::case02_iter_increment ... bench: 166,370 ns/iter (+/- 12,439) = 2071 MB/s
test str::char_count::en_huge::case03_manual_char_len ... bench: 166,332 ns/iter (+/- 4,262) = 2071 MB/s
test str::char_count::en_large::case00_cur_libcore ... bench: 281 ns/iter (+/- 6) = 19160 MB/s
test str::char_count::en_large::case01_old_libcore ... bench: 1,598 ns/iter (+/- 19) = 3369 MB/s
test str::char_count::en_large::case02_iter_increment ... bench: 2,598 ns/iter (+/- 167) = 2072 MB/s
test str::char_count::en_large::case03_manual_char_len ... bench: 2,578 ns/iter (+/- 55) = 2088 MB/s
test str::char_count::en_medium::case00_cur_libcore ... bench: 44 ns/iter (+/- 1) = 15295 MB/s
test str::char_count::en_medium::case01_old_libcore ... bench: 201 ns/iter (+/- 51) = 3348 MB/s
test str::char_count::en_medium::case02_iter_increment ... bench: 322 ns/iter (+/- 40) = 2090 MB/s
test str::char_count::en_medium::case03_manual_char_len ... bench: 319 ns/iter (+/- 5) = 2109 MB/s
test str::char_count::en_small::case00_cur_libcore ... bench: 15 ns/iter (+/- 0) = 2333 MB/s
test str::char_count::en_small::case01_old_libcore ... bench: 14 ns/iter (+/- 0) = 2500 MB/s
test str::char_count::en_small::case02_iter_increment ... bench: 30 ns/iter (+/- 1) = 1166 MB/s
test str::char_count::en_small::case03_manual_char_len ... bench: 30 ns/iter (+/- 1) = 1166 MB/s
test str::char_count::ru_huge::case00_cur_libcore ... bench: 16,439 ns/iter (+/- 3,105) = 19777 MB/s
test str::char_count::ru_huge::case01_old_libcore ... bench: 89,480 ns/iter (+/- 2,555) = 3633 MB/s
test str::char_count::ru_huge::case02_iter_increment ... bench: 217,703 ns/iter (+/- 22,185) = 1493 MB/s
test str::char_count::ru_huge::case03_manual_char_len ... bench: 157,330 ns/iter (+/- 19,188) = 2066 MB/s
test str::char_count::ru_large::case00_cur_libcore ... bench: 243 ns/iter (+/- 6) = 20905 MB/s
test str::char_count::ru_large::case01_old_libcore ... bench: 1,384 ns/iter (+/- 51) = 3670 MB/s
test str::char_count::ru_large::case02_iter_increment ... bench: 3,381 ns/iter (+/- 543) = 1502 MB/s
test str::char_count::ru_large::case03_manual_char_len ... bench: 2,423 ns/iter (+/- 429) = 2096 MB/s
test str::char_count::ru_medium::case00_cur_libcore ... bench: 42 ns/iter (+/- 1) = 15119 MB/s
test str::char_count::ru_medium::case01_old_libcore ... bench: 180 ns/iter (+/- 4) = 3527 MB/s
test str::char_count::ru_medium::case02_iter_increment ... bench: 402 ns/iter (+/- 45) = 1579 MB/s
test str::char_count::ru_medium::case03_manual_char_len ... bench: 280 ns/iter (+/- 29) = 2267 MB/s
test str::char_count::ru_small::case00_cur_libcore ... bench: 12 ns/iter (+/- 0) = 2666 MB/s
test str::char_count::ru_small::case01_old_libcore ... bench: 12 ns/iter (+/- 0) = 2666 MB/s
test str::char_count::ru_small::case02_iter_increment ... bench: 19 ns/iter (+/- 0) = 1684 MB/s
test str::char_count::ru_small::case03_manual_char_len ... bench: 14 ns/iter (+/- 1) = 2285 MB/s
test str::char_count::zh_huge::case00_cur_libcore ... bench: 15,053 ns/iter (+/- 2,640) = 20067 MB/s
test str::char_count::zh_huge::case01_old_libcore ... bench: 82,622 ns/iter (+/- 3,602) = 3656 MB/s
test str::char_count::zh_huge::case02_iter_increment ... bench: 230,456 ns/iter (+/- 7,246) = 1310 MB/s
test str::char_count::zh_huge::case03_manual_char_len ... bench: 220,595 ns/iter (+/- 11,624) = 1369 MB/s
test str::char_count::zh_large::case00_cur_libcore ... bench: 227 ns/iter (+/- 65) = 20792 MB/s
test str::char_count::zh_large::case01_old_libcore ... bench: 1,136 ns/iter (+/- 144) = 4154 MB/s
test str::char_count::zh_large::case02_iter_increment ... bench: 3,147 ns/iter (+/- 253) = 1499 MB/s
test str::char_count::zh_large::case03_manual_char_len ... bench: 2,993 ns/iter (+/- 400) = 1577 MB/s
test str::char_count::zh_medium::case00_cur_libcore ... bench: 36 ns/iter (+/- 5) = 16388 MB/s
test str::char_count::zh_medium::case01_old_libcore ... bench: 142 ns/iter (+/- 18) = 4154 MB/s
test str::char_count::zh_medium::case02_iter_increment ... bench: 379 ns/iter (+/- 37) = 1556 MB/s
test str::char_count::zh_medium::case03_manual_char_len ... bench: 364 ns/iter (+/- 51) = 1620 MB/s
test str::char_count::zh_small::case00_cur_libcore ... bench: 11 ns/iter (+/- 1) = 3000 MB/s
test str::char_count::zh_small::case01_old_libcore ... bench: 11 ns/iter (+/- 1) = 3000 MB/s
test str::char_count::zh_small::case02_iter_increment ... bench: 20 ns/iter (+/- 3) = 1650 MB/s
</pre>
</details>
I also added fairly thorough tests for different sizes and alignments. This completes on my machine in 0.02s, which is surprising given how thorough they are, but they do seem to catch bugs in the implementation. (I haven't run the tests on a 32-bit machine since I reworked the code a little, though, so... hopefully I'm not about to embarrass myself.)
This uses similar SWAR-style techniques to the `is_ascii` impl I contributed in https://github.com/rust-lang/rust/pull/74066, so I'm going to request review from the same person who reviewed that one. That said, I'm not particularly picky, and might not have the correct syntax for requesting a review from someone (so it goes).
r? `@nagisa`
Document valid values of the char type
As discussed at #93392, the current documentation on what constitutes a valid char isn't very detailed and is partly on the MAX constant rather than the type itself.
This PR expands on that information, stating the actual numerical range, giving examples of what won't work, and also mentions how a `char` might be a valid USV but still not be a defined character (terminology checked against [Unicode 14.0, table 2-3](https://www.unicode.org/versions/Unicode14.0.0/ch02.pdf#M9.61673.TableTitle.Table.22.Types.of.Code.Points)).
Implement `RawWaker` and `Waker` getters for underlying pointers
implement #87021
New APIs:
- `RawWaker::data(&self) -> *const ()`
- `RawWaker::vtable(&self) -> &'static RawWakerVTable`
- ~`Waker::as_raw_waker(&self) -> &RawWaker`~ `Waker::as_raw(&self) -> &RawWaker`
This third one is an auxiliary function to make the two APIs above more useful. Since we can only get `&Waker` in `Future::poll`, without this, we need to `transmute` it into `&RawWaker` (relying on `repr(transparent)`) in order to access its data/vtable pointers.
~Not sure if it should be named `as_raw` or `as_raw_waker`. Seems we always use `as_<something-raw>` instead of just `as_raw`. But `as_raw_waker` seems not quite consistent with `Waker::from_raw`.~ As suggested in https://github.com/rust-lang/rust/pull/91828#discussion_r770729837, use `as_raw`.
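A hedged usage sketch of the accessors listed above (the feature-gate name below is an assumption):

```rust
#![feature(waker_getters)]
use core::task::{RawWaker, RawWakerVTable, Waker};

fn inspect(waker: &Waker) -> (*const (), &'static RawWakerVTable) {
    // No transmute needed: go through &RawWaker and read its parts.
    let raw: &RawWaker = waker.as_raw();
    (raw.data(), raw.vtable())
}

fn main() {}
```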
Carefully remove bounds checks from some chunk iterator functions
So, I was writing code that requires the equivalent of `rchunks(N).rev()` (which isn't the same as forward `chunks(N)` — in particular, if the buffer length is not a multiple of `N`, I must handle the "remainder" first).
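For reference, a hedged stand-in for the kind of loop described (not the actual code in question):

```rust
fn sums_of_chunks(buf: &[u8]) -> Vec<u32> {
    // rchunks(N).rev() yields the short "remainder" chunk first,
    // unlike forward chunks(N).
    buf.rchunks(4)
        .rev()
        .map(|chunk| chunk.iter().map(|&b| u32::from(b)).sum())
        .collect()
}

fn main() {
    // 6 bytes: the 2-byte remainder comes first, then the full 4-byte chunk.
    assert_eq!(sums_of_chunks(&[1, 2, 3, 4, 5, 6]), vec![3, 18]);
}
```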
I happened to look at the codegen output of the function (I was actually interested in whether or not a nested loop was being unrolled — it was), and noticed that in the outer `rchunks(n).rev()` loop, LLVM seemed to be unable to remove the bounds checks from the iteration: https://rust.godbolt.org/z/Tnz4MYY8f (this panic was from the split_at in `RChunks::next_back`).
After doing some experimentation, it seems all of the `next_back` in the non-exact chunk iterators have the issue: (`Chunks::next_back`, `RChunks::next_back`, `ChunksMut::next_back`, and `RChunksMut::next_back`)...
Even worse, the forward `rchunks` iterators sometimes have the issue as well (... but only sometimes). For example https://rust.godbolt.org/z/oGhbqv53r has bounds checks, but if I uncomment the loop body, it manages to remove the check (which is bizarre, since I'd expect the opposite...). I suspect it's highly dependent on the surrounding code, so I decided to remove the bounds checks from them anyway. Overall, this change includes:
- All `next_back` functions on the non-`Exact` iterators (e.g. `R?Chunks(Mut)?`).
- All `next` functions on the non-exact rchunks iterators (e.g. `RChunks(Mut)?`).
I wasn't able to catch any of the other chunk iterators failing to remove the bounds checks (I checked iterations over `r?chunks(_exact)?(_mut)?` with constant chunk sizes under `-O3`, `-Os`, and `-Oz`), which makes sense, since these were the cases where it was harder to prove the bounds check correct to remove...
In fact, it took quite a bit of thinking to convince myself that using unchecked_ here was valid — so I'm not really surprised that LLVM had trouble (although compilers are slightly better at this sort of reasoning than humans). A consequence of that is the fact that the `// SAFETY` comment for these are... kinda long...
---
I didn't do this for, or even think about it for, any of the other iteration methods; just `next` and `next_back` (where it mattered). If this PR is accepted, I'll file a follow up for someone (possibly me) to look at the others later (in particular, `nth`/`nth_back` looked like they had similar logic), but I wanted to do this now, as IMO `next`/`next_back` are the most important here, since they're what gets used by the iteration protocol.
---
Note: While I don't expect this to impact performance directly, the panic is a side effect, which would otherwise not exist in these loops. That is, this could prevent the compiler from being able to move/remove/otherwise rework a loop over these iterators (as an example, it could not delete the code for a loop whose body computes a value which doesn't get used).
Also, some like to be able to have confidence this code has no panicking branches in the optimized code, and "no bounds checks" is kinda part of the selling point of Rust's iterators anyway.
Remove deprecated and unstable slice_partition_at_index functions
They have been deprecated since commit 01ac5a97c9,
which was part of the 1.49.0 release, so from the perspective of nightly,
11 releases ago.
review the total_cmp documentation
The documentation has been restructured to split a brief summary
paragraph out from the following elaborating paragraphs.
I also attempted my hand at wording improvements and adding articles
where I felt they were missing, but as a non-native English speaker,
these may need more thorough review.
cc https://github.com/rust-lang/rust/issues/72599
Clarify documentation on char::MAX
As mentioned in https://github.com/rust-lang/rust/issues/91836#issuecomment-994106874, the documentation on `char::MAX` is not quite correct – USVs are not "only ones within a certain range", they are code points _outside_ a certain range. I have corrected this and given the actual numbers as there is no reason to hide them.
Make `char::DecodeUtf16::size_hint` more precise
New implementation takes into account contents of `self.buf` and rounds lower bound up instead of down.
Fixes #88762
Revival of #88763
Create `core::fmt::ArgumentV1` with generics instead of fn pointer
Split from (and prerequisite of) #90488, as this seems to have perf implication.
`@rustbot` label: +T-libs
Add `intrinsics::const_deallocate`
Tracking issue: #79597
Related: #91884
This allows deallocation of memory allocated by `intrinsics::const_allocate`. At the moment, this can only be used to reduce memory usage, but in the future it may be useful for detecting memory leaks (if allocated memory remains after evaluation, raise an error...?).
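A minimal hedged sketch of the pairing (nightly-only intrinsics; the feature-gate names below are assumptions):

```rust
#![feature(core_intrinsics, const_heap)]
use std::intrinsics;

const _: () = unsafe {
    // Allocate 4 bytes with 4-byte alignment during const evaluation...
    let ptr = intrinsics::const_allocate(4, 4);
    // ...and release them again with the new intrinsic.
    intrinsics::const_deallocate(ptr, 4, 4);
};

fn main() {}
```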
Unimpl {Add,Sub,Mul,Div,Rem,BitXor,BitOr,BitAnd}<$t> for Saturating<$t>
Tracking issue #92354
Analogous to 9648b313cc (#93208), reduce `saturating_int_assign_impl` to:
```rust
let mut value = Saturating(2u8);
value += 3u8;
value -= 1u8;
value *= 2u8;
value /= 2u8;
value %= 2u8;
value ^= 255u8;
value |= 123u8;
value &= 2u8;
```
See https://github.com/rust-lang/rust/pull/93208#issuecomment-1022564429
Add links to the reference and rust by example for asm! docs and lints
These were previously removed in #91728 due to broken links.
cc ``@ehuss`` since this updates the rust-by-example submodule
This makes `PartialOrd` consistent with the other three traits in this
module, which all include links to their respective mathematical concepts
on Wikipedia.
impl Not for !
The lack of this impl caused trouble for me in some degenerate cases of macro-generated code of the form `if !$cond {...}`, even without `feature(never_type)` on a stable compiler. Namely if `$cond` contains a `return` or `break` or similar diverging expression, which would otherwise be perfectly legal in boolean position, the code previously failed to compile with:
```console
error[E0600]: cannot apply unary operator `!` to type `!`
--> library/core/tests/ops.rs:239:8
|
239 | if !return () {}
| ^^^^^^^^^^ cannot apply unary operator `!`
```
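For concreteness, a hedged reduction of the macro-generated pattern described above; with this impl, the diverging `$cond` no longer trips E0600:

```rust
macro_rules! check {
    ($cond:expr) => {
        if !$cond {
            println!("condition failed");
        }
    };
}

fn early_out() {
    // `$cond` diverges here (`return` has type `!`), so `!$cond` applies `!` to `!`.
    check!(return ());
}

fn main() {
    early_out();
}
```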
Print a helpful message if unwinding aborts when it reaches a nounwind function
This is implemented by routing `TerminatorKind::Abort` back through the panic handler, but with a special flag in the `PanicInfo` which indicates that the panic handler should *not* attempt to unwind the stack and should instead abort immediately.
This is useful for the planned change in https://github.com/rust-lang/lang-team/issues/97 which would make `Drop` impls `nounwind` by default.
### Code
```rust
#![feature(c_unwind)]
fn panic() {
panic!()
}
extern "C" fn nounwind() {
panic();
}
fn main() {
nounwind();
}
```
### Before
```
$ ./test
thread 'main' panicked at 'explicit panic', test.rs:4:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Illegal instruction (core dumped)
```
### After
```
$ ./test
thread 'main' panicked at 'explicit panic', test.rs:4:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'main' panicked at 'panic in a function that cannot unwind', test.rs:7:1
stack backtrace:
0: 0x556f8f86ec9b - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hdccefe11a6ac4396
1: 0x556f8f88ac6c - core::fmt::write::he152b28c41466ebb
2: 0x556f8f85d6e2 - std::io::Write::write_fmt::h0c261480ab86f3d3
3: 0x556f8f8654fa - std::panicking::default_hook::{{closure}}::h5d7346f3ff7f6c1b
4: 0x556f8f86512b - std::panicking::default_hook::hd85803a1376cac7f
5: 0x556f8f865a91 - std::panicking::rust_panic_with_hook::h4dc1c5a3036257ac
6: 0x556f8f86f079 - std::panicking::begin_panic_handler::{{closure}}::hdda1d83c7a9d34d2
7: 0x556f8f86edc4 - std::sys_common::backtrace::__rust_end_short_backtrace::h5b70ed0cce71e95f
8: 0x556f8f865592 - rust_begin_unwind
9: 0x556f8f85a764 - core::panicking::panic_no_unwind::h2606ab3d78c87899
10: 0x556f8f85b910 - test::nounwind::hade6c7ee65050347
11: 0x556f8f85b936 - test::main::hdc6e02cb36343525
12: 0x556f8f85b7e3 - core::ops::function::FnOnce::call_once::h4d02663acfc7597f
13: 0x556f8f85b739 - std::sys_common::backtrace::__rust_begin_short_backtrace::h071d40135adb0101
14: 0x556f8f85c149 - std::rt::lang_start::{{closure}}::h70dbfbf38b685e93
15: 0x556f8f85c791 - std::rt::lang_start_internal::h798f1c0268d525aa
16: 0x556f8f85c131 - std::rt::lang_start::h476a7ee0a0bb663f
17: 0x556f8f85b963 - main
18: 0x7f64c0822b25 - __libc_start_main
19: 0x556f8f85ae8e - _start
20: 0x0 - <unknown>
thread panicked while panicking. aborting.
Aborted (core dumped)
```
Add MaybeUninit::(slice_)as_bytes(_mut)
This adds methods to convert between `MaybeUninit<T>` and a slice of `MaybeUninit<u8>`. This is safe since `MaybeUninit<u8>` can correctly handle padding bytes in any `T`.
These methods are added:
```rust
impl<T> MaybeUninit<T> {
pub fn as_bytes(&self) -> &[MaybeUninit<u8>];
pub fn as_bytes_mut(&mut self) -> &mut [MaybeUninit<u8>];
pub fn slice_as_bytes(this: &[MaybeUninit<T>]) -> &[MaybeUninit<u8>];
pub fn slice_as_bytes_mut(this: &mut [MaybeUninit<T>]) -> &mut [MaybeUninit<u8>];
}
```
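A hedged usage sketch of these methods (the feature-gate name below is an assumption; the signatures are as listed above):

```rust
#![feature(maybe_uninit_as_bytes)]
use std::mem::MaybeUninit;

fn main() {
    let mut v: MaybeUninit<u32> = MaybeUninit::uninit();
    // Initialise the value one byte at a time through the MaybeUninit<u8> view.
    for byte in v.as_bytes_mut() {
        byte.write(0);
    }
    // Every byte of the u32 is now initialised, so the whole value is.
    let x = unsafe { v.assume_init() };
    assert_eq!(x, 0);
}
```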
Change PhantomData type for `BuildHasherDefault` (and more)
Changes `PhantomData<H>` to `PhantomData<fn() -> H>` for `BuildHasherDefault`. This preserves the covariance of `H`, while it lifts the currently inferred unnecessary bounds like [`H: Send` for `BuildHasherDefault<H>: Send`](https://doc.rust-lang.org/1.57.0/std/hash/struct.BuildHasherDefault.html#impl-Send), etc.
_Edit:_ Also does a similar change for `iter::Empty` and `future::Pending`.
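A hedged illustration of the pattern on a stand-in type (not the actual `BuildHasherDefault` code): `PhantomData<fn() -> H>` keeps `H` covariant without making the wrapper's auto traits depend on `H`.

```rust
use std::marker::PhantomData;

#[allow(dead_code)]
struct MyBuildHasher<H> {
    _marker: PhantomData<fn() -> H>,
}

fn assert_send<T: Send>() {}

fn main() {
    // Raw pointers are !Send, yet the wrapper is still Send,
    // because `fn() -> H` is always Send + Sync.
    #[allow(dead_code)]
    struct NotSend(*const u8);
    assert_send::<MyBuildHasher<NotSend>>();
}
```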
add `rustc_diagnostic_item` attribute to `AtomicBool` type
I wanted to use this in clippy and found that it didn't work. So hopefully this addition will fix it.
Use `carrying_{mul|add}` in `num::bignum`
Now that we have (unstable) methods for this, we don't need the bespoke trait methods for it in the `bignum` implementation.
cc #85532
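For reference, a hedged sketch of what these primitives do (unstable `bigint_helper_methods` at the time of writing):

```rust
#![feature(bigint_helper_methods)]

fn main() {
    // Add two 64-bit values represented as two 32-bit limbs each.
    let (a_lo, a_hi) = (u32::MAX, 1u32); // 0x1_FFFF_FFFF
    let (b_lo, b_hi) = (1u32, 2u32);     // 0x2_0000_0001
    let (lo, carry) = a_lo.carrying_add(b_lo, false);
    let (hi, _) = a_hi.carrying_add(b_hi, carry);
    assert_eq!((lo, hi), (0, 4));        // 0x4_0000_0000
}
```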
Add `log2` and `log10` to `NonZeroU*`
This version is nice in that it doesn't need to worry about zeros, and thus doesn't have any error cases.
cc `int_log` tracking issue #70887
(I didn't add them to `NonZeroI*` despite it being on `i*` since allowing negatives brings back the error cases again.)
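A hedged usage sketch with the method names used in this PR (unstable `int_log` feature at the time; the names may differ after stabilisation):

```rust
#![feature(int_log)]
use std::num::NonZeroU32;

fn main() {
    let n = NonZeroU32::new(1000).unwrap();
    assert_eq!(n.log2(), 9);  // floor(log2(1000))
    assert_eq!(n.log10(), 3); // floor(log10(1000))
}
```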
Remove deprecated LLVM-style inline assembly
The `llvm_asm!` macro was deprecated back in #87590 (1.56.0), with the intention to remove
it once `asm!` was stabilized, which already happened in #91728 (1.59.0). Now it
is time to remove `llvm_asm!` to avoid continued maintenance cost.
Closes #70173.
Closes #92794.
Closes #87612.
Closes #82065.
cc `@rust-lang/wg-inline-asm`
r? `@Amanieu`
fix const_ptr_offset_from tracking issue
The old tracking issue #41079 was for exposing those functions at all, and got closed when they were stabilized. We had nothing tracking their `const`ness so I opened a new tracking issue: https://github.com/rust-lang/rust/issues/92980.
Copy an example to PartialOrd as well
In https://github.com/rust-lang/rust/pull/88202 I added an example for deriving PartialOrd on enums, but only later did I realize that I actually put the example on Ord.
This copies the example to PartialOrd as well, which is where I intended for it to be.
We could also delete the example on Ord, but I see there's already some highly similar examples shared between Ord and PartialOrd, so I figured we could leave it.
I also changed some type annotations in an example from `x : T` to the more common style (in Rust) of `x: T`.
Add diagnostic items for macros
For use in Clippy, it adds diagnostic items to all the stable public macros
Clippy has lints that look for almost all of these (currently by name or path), but there are a few that aren't currently part of any lint. I could remove those if it's preferred to add them as needed rather than ahead of time.
Simplification of BigNum::bit_length
As indicated in the comment, the BigNum::bit_length function could be
optimized by using CLZ, which is often a single instruction instead of a
loop.
I think the code is also simpler now without the loop.
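The idea, as a hedged sketch on a single digit (not the actual `BigNum` code):

```rust
// Bit length via leading_zeros (a single CLZ instruction on most targets)
// instead of a shift loop.
fn bit_length_u32(x: u32) -> u32 {
    32 - x.leading_zeros()
}

fn main() {
    assert_eq!(bit_length_u32(0), 0);
    assert_eq!(bit_length_u32(5), 3); // 0b101
}
```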
I added some additional tests for Big8x3 and Big32x40 to ensure that
there were no regressions.
Extend const_convert to rest of blanket core::convert impls
This adds constness to all the blanket impls in `core::convert` under the existing `const_convert` feature, tracked by #88674.
Existing impls under that feature:
```rust
impl<T> const From<T> for T;
impl<T, U> const Into<U> for T where U: ~const From<T>;
impl<T> const ops::Try for Option<T>;
impl<T> const ops::FromResidual for Option<T>;
impl<T, E> const ops::Try for Result<T, E>;
impl<T, E, F> const ops::FromResidual<Result<convert::Infallible, E>> for Result<T, F> where F: ~const From<E>;
```
Additional impls:
```rust
impl<T: ?Sized, U: ?Sized> const AsRef<U> for &T where T: ~const AsRef<U>;
impl<T: ?Sized, U: ?Sized> const AsRef<U> for &mut T where T: ~const AsRef<U>;
impl<T: ?Sized, U: ?Sized> const AsMut<U> for &mut T where T: ~const AsMut<U>;
impl<T, U> const TryInto<U> for T where U: ~const TryFrom<T>;
impl<T, U> const TryFrom<U> for T where U: ~const Into<T>;
```
Partially stabilize `maybe_uninit_extra`
This covers:
```rust
impl<T> MaybeUninit<T> {
pub unsafe fn assume_init_read(&self) -> T { ... }
pub unsafe fn assume_init_drop(&mut self) { ... }
}
```
It does not cover the const-ness of `write` under `const_maybe_uninit_write` nor the const-ness of `assume_init_read` (this commit adds `const_maybe_uninit_assume_init_read` for that).
FCP: https://github.com/rust-lang/rust/issues/63567#issuecomment-958590287.
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Replace usages of vec![].into_iter with [].into_iter
`[].into_iter` is idiomatic over `vec![].into_iter` because it's simpler and faster (unless the vec is optimized away, in which case it would be the same)
So we should change all the implementation, documentation and tests to use it.
I skipped:
* `src/tools` - Those are copied in from upstream
* `src/test/ui` - Hard to tell if `vec![].into_iter` was used intentionally or not here and not much benefit to changing it.
* any case where `vec![].into_iter` was used because we specifically needed a `Vec::IntoIter<T>`
* any case where it looked like we were intentionally using `vec![].into_iter` to test it.
Make DefId to AccessLevel map in resolve for export
hir_id to accesslevel in resolve and applied in privacy
using local def id
removing tracing probes
making function not recursive and adding comments
Move most of Exported/Public res to rustc_resolve
moving public/export res to resolve
fix missing stability attributes in core, std and alloc
move code to access_levels.rs
return for some kinds instead of going through them
Export correctness, macro changes, comments
add comment for import binding
add comment for import binding
rename to access level visitor, remove comments, move fn as closure, remove new_key
fmt
fix rebase
fix rebase
fmt
fmt
fix: move macro def to rustc_resolve
fix: reachable AccessLevel for enum variants
fmt
fix: missing stability attributes for other architectures
allow unreachable pub in rustfmt
fix: missing impl access level + renaming export to reexport
Missing impl access level was found thanks to a test in clippy
Make `Atomic*::from_mut` return `&mut Atomic*`
```rust
impl Atomic* {
pub fn from_mut(v: &mut bool) -> &mut Self;
// ^^^^---- previously was just a &
}
```
This PR makes `from_mut` atomic methods tracked in #76314 return unique references to atomic types, instead of shared ones. This makes `from_mut` and `get_mut` inverses of each other, allowing either of them to be undone by the other.
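A hedged sketch of that round trip (unstable `atomic_from_mut` feature at the time of this PR):

```rust
#![feature(atomic_from_mut)]
use std::sync::atomic::AtomicBool;

fn main() {
    let mut flag = false;
    // With this PR, from_mut returns &mut AtomicBool...
    let atomic: &mut AtomicBool = AtomicBool::from_mut(&mut flag);
    // ...so get_mut can undo it again.
    *atomic.get_mut() = true;
    assert!(flag);
}
```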
r? `@RalfJung`
(as Ralf was [concerned](https://github.com/rust-lang/rust/issues/76314#issuecomment-955062593) about this)
Implemented const casts of raw pointers
This adds an `as_mut()` method for `*const T` and an `as_const()` method for `*mut T`,
which are intended to make constness casts safer. This was discussed
in the [internals discussion][discussion].
Given that this is a simple change and multiple people agreed to it including `@RalfJung` I decided to go ahead and open the PR.
[discussion]: https://internals.rust-lang.org/t/casting-constness-can-be-risky-heres-a-simple-fix/15933
Add some missing `#[must_use]` to some `f{32,64}` operations
This PR adds `#[must_use]` to the following methods:
- `f32::recip`
- `f32::max`
- `f32::min`
- `f32::maximum`
- `f32::minimum`
and their equivalents in `f64`.
These methods all produce a new value without modifying the original and so are pointless to call without using the result.
Add note about non_exhaustive to variant_count
Since `variant_count` isn't returning something opaque, I thought it makes sense to explicitly call out that its return value may change for some enums.
cc #73662
Previously suggested in https://github.com/rust-lang/rfcs/issues/2854.
It makes sense to have this since `char` implements `From<u8>`. Likewise
`u32`, `u64`, and `u128` (since #79502) implement `From<char>`.
Rollup of 7 pull requests
Successful merges:
- #92092 (Drop guards in slice sorting derive src pointers from &mut T, which is invalidated by interior mutation in comparison)
- #92388 (Fix a minor mistake in `String::try_reserve_exact` examples)
- #92442 (Add negative `impl` for `Ord`, `PartialOrd` on `LocalDefId`)
- #92483 (Stabilize `result_cloned` and `result_copied`)
- #92574 (Add RISC-V detection macro and more architecture instructions)
- #92575 (ast: Always keep a `NodeId` in `ast::Crate`)
- #92583 (⬆️ rust-analyzer)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Add RISC-V detection macro and more architecture instructions
This pull request includes:
- Update `stdarch` dependency to include ratified RISC-V supervisor and hypervisor instruction intrinsics, which are useful in Rust kernel development
- Add macro `is_riscv_feature_detected!`
- Modify impl of `core::hint::spin_loop` to comply with latest version of `core::arch`
After this update, users may now develop RISC-V kernels and user applications more freely.
r? `@Amanieu`
Drop guards in slice sorting derive src pointers from &mut T, which is invalidated by interior mutation in comparison
I tried to run https://github.com/rust-lang/miri-test-libstd on `alloc` with `-Zmiri-track-raw-pointers`, and got a failure on the test `slice::panic_safe`. The test failure has nothing to do with panic safety; it comes from how the test checks for panic safety.
I minimized the test failure into this very silly program:
```rust
use std::cell::Cell;
use std::cmp::Ordering;

#[derive(Clone)]
struct Evil(Cell<usize>);

fn main() {
    let mut input = vec![Evil(Cell::new(0)); 3];

    // Hits the bug pattern via CopyOnDrop in core
    input.sort_unstable_by(|a, _b| {
        a.0.set(0);
        Ordering::Less
    });

    // Hits the bug pattern via InsertionHole in alloc
    input.sort_by(|_a, b| {
        b.0.set(0);
        Ordering::Less
    });
}
```
To fix this, I'm just removing the mutability/uniqueness where it wasn't required.
Rollup of 7 pull requests
Successful merges:
- #91587 (core::ops::unsize: improve docs for DispatchFromDyn)
- #91907 (Allow `_` as the length of array types and repeat expressions)
- #92515 (RustWrapper: adapt for an LLVM API change)
- #92516 (Do not use deprecated -Zsymbol-mangling-version in bootstrap)
- #92530 (Move `contains` method of Option and Result lower in docs)
- #92546 (Update books)
- #92551 (rename StackPopClean::None to Root)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Move `contains` method of Option and Result lower in docs
Follow-up to #92444 trying to get the `Option` and `Result` rustdocs in better shape.
This addresses the request in https://github.com/rust-lang/rust/issues/62358#issuecomment-645676285. The `contains` methods were previously too high up in the docs on both `Option` and `Result` — stuff like `ok` and `map` and `and_then` should all be featured higher than `contains`. All of those are more ubiquitously useful than `contains`.
- Do not `#[doc(hidden)]` the `#[derive]` macro attribute
- Add a link to the reference section to `derive`'s inherent docs
- Do the same for `#[test]` and `#[global_allocator]`
- Fix `GlobalAlloc` link (why is it on `core` and not `alloc`?)
- Try `no_inline`-ing the `std` reexports from `core`
- Revert "Try `no_inline`-ing the `std` reexports from `core`"
- Address PR review
- Also document the unstable macros
Consolidate Result's and Option's methods into fewer impl blocks
`Result`'s and `Option`'s methods have historically been separated up into `impl` blocks based on their trait bounds, with the bounds specified on type parameters of the impl block. I find this unhelpful because closely related methods, like `unwrap_or` and `unwrap_or_default`, end up disproportionately far apart in source code and rustdocs:
<pre>
impl<T> Option<T> {
    pub fn unwrap_or(self, default: T) -> T {
        ...
    }

    <img alt="one eternity later" src="https://user-images.githubusercontent.com/1940490/147780325-ad4e01a4-c971-436e-bdf4-e755f2d35f15.jpg" width="750">
}

impl<T: Default> Option<T> {
    pub fn unwrap_or_default(self) -> T {
        ...
    }
}
</pre>
I'd prefer for methods to be in as few impl blocks as possible, with the most logical grouping within each impl block. Any bounds needed can be written as `where` clauses on the method instead:
```rust
impl<T> Option<T> {
    pub fn unwrap_or(self, default: T) -> T {
        ...
    }

    pub fn unwrap_or_default(self) -> T
    where
        T: Default,
    {
        ...
    }
}
```
*Warning: the end-to-end diff of this PR is computed confusingly by git / rendered confusingly by GitHub; it's practically impossible to review that way. I've broken the PR into commits that move small groups of methods for which git behaves better — these each should be easily individually reviewable.*
Track caller of slice split and swap
Improves error location for `slice.split_at*()` and `slice.swap()`.
These are generic inline functions, so the `#[track_caller]` on them is free — it only changes the value of an argument already passed to the panicking code.