rustdoc: skip `// some variants omitted` if enum is `#[non_exhaustive]`
Fixes #108925
I've never touched rustdoc before, so this is probably not the best code.
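For illustration, a hypothetical enum of the kind this affects: the `#[non_exhaustive]` attribute already tells readers the variant list is incomplete, so an extra `// some variants omitted` note for the hidden variant is redundant.
```rust
/// An error type whose unstable variants are hidden from the docs.
#[non_exhaustive]
pub enum Error {
    NotFound,
    PermissionDenied,
    // Hidden from rustdoc; previously this caused an extra
    // `// some variants omitted` note even though the enum is
    // already marked non-exhaustive.
    #[doc(hidden)]
    Unstable,
}
```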
cc `@dtolnay`
Permit the MIR inliner to inline diverging functions
The existing heuristic against inlining diverging functions prevents inlining of `hint::unreachable_unchecked`, which in turn makes `Option::unwrap_unchecked` and `Result::unwrap_unchecked` poor inlining candidates. I looked through the changes to `core`, `alloc`, `std`, and `hashbrown` by hand and they all seem reasonable. Let's see how this looks in perf...
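For context, `unwrap_unchecked` bottoms out in a diverging call. A simplified sketch (written as a free function, not the exact library source):
```rust
use core::hint;

// Simplified sketch of what `Option::unwrap_unchecked` boils down to: the `None`
// arm diverges via `unreachable_unchecked`, so whether the whole thing inlines
// well depends on the MIR inliner being willing to inline a diverging callee.
pub unsafe fn unwrap_unchecked<T>(opt: Option<T>) -> T {
    match opt {
        Some(val) => val,
        // SAFETY: the caller guarantees the value is `Some`.
        None => unsafe { hint::unreachable_unchecked() },
    }
}

fn main() {
    let x = unsafe { unwrap_unchecked(Some(42)) };
    assert_eq!(x, 42);
}
```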
---
Based on rustc-perf it looks like this regresses ctfe-stress, and the cachegrind diff indicates that this regression is in `InterpCx::statement`. I don't know how to do any deeper analysis because that function is _enormous_ in the try toolchain, which has no debuginfo in it. And a local build produces significantly different codegen for that function, even with LTO.
Since a struct's data always lives in variant `VariantIdx(0)`, there are a bunch of files where the only reason `VariantIdx` or `vec::Idx` was imported at all was to get the first variant.
So this uses a constant for that, and adds some doc-comments to `VariantIdx` while I'm there, since it doesn't have any today.
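Conceptually the constant is something along these lines (exact name and location are an assumption here; see the PR for the real definition):
```rust
// Sketch: give the index of the first variant a name, so callers don't need to
// import the `Idx`/`VariantIdx` machinery just to write "variant 0" (which, for
// structs, is the only variant).
pub const FIRST_VARIANT: VariantIdx = VariantIdx::from_u32(0);

// Illustrative call site, before and after:
// let field_ty = adt.variant(VariantIdx::new(0)).fields[idx].ty(tcx, substs);
// let field_ty = adt.variant(FIRST_VARIANT).fields[idx].ty(tcx, substs);
```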
Clarify that copied allocators must behave the same
Currently, the safety documentation for `Allocator` says that a cloned or moved allocator must behave the same as the original. However, it does not specify that a copied allocator must behave the same, and it's possible to construct an allocator that permits being moved or cloned, but sometimes produces a new allocator when copied.
<details>
<summary>Contrived example which results in a Miri error</summary>
```rust
#![feature(allocator_api, once_cell, strict_provenance)]

use std::{
    alloc::{AllocError, Allocator, Global, Layout},
    collections::HashMap,
    hint,
    marker::PhantomPinned,
    num::NonZeroUsize,
    pin::Pin,
    ptr::{addr_of, NonNull},
    sync::{LazyLock, Mutex},
};

mod source_allocator {
    use super::*;

    // `SourceAllocator` has 3 states:
    // - invalid value: is_cloned == false, source != self.addr()
    // - source value: is_cloned == false, source == self.addr()
    // - cloned value: is_cloned == true
    pub struct SourceAllocator {
        is_cloned: bool,
        source: usize,
        _pin: PhantomPinned,
    }

    impl SourceAllocator {
        // Returns a pinned source value (pointing to itself).
        pub fn new_source() -> Pin<Box<Self>> {
            let mut b = Box::new(Self {
                is_cloned: false,
                source: 0,
                _pin: PhantomPinned,
            });
            b.source = b.addr();
            Box::into_pin(b)
        }

        fn addr(&self) -> usize {
            addr_of!(*self).addr()
        }

        // Invalid values point to source 0.
        // Source values point to themselves.
        // Cloned values point to their corresponding source.
        fn source(&self) -> usize {
            if self.is_cloned || self.addr() == self.source {
                self.source
            } else {
                0
            }
        }
    }

    // Copying an invalid value produces an invalid value.
    // Copying a source value produces an invalid value.
    // Copying a cloned value produces a cloned value with the same source.
    impl Copy for SourceAllocator {}

    // Cloning an invalid value produces an invalid value.
    // Cloning a source value produces a cloned value with that source.
    // Cloning a cloned value produces a cloned value with the same source.
    impl Clone for SourceAllocator {
        fn clone(&self) -> Self {
            if self.is_cloned || self.addr() != self.source {
                *self
            } else {
                Self {
                    is_cloned: true,
                    source: self.source,
                    _pin: PhantomPinned,
                }
            }
        }
    }

    static SOURCE_MAP: LazyLock<Mutex<HashMap<NonZeroUsize, usize>>> =
        LazyLock::new(Default::default);

    // SAFETY: Wraps `Global`'s methods with additional tracking.
    // All invalid values share blocks with each other.
    // Each source value shares blocks with all cloned values pointing to it.
    // Cloning an allocator always produces a compatible allocator:
    // - Cloning an invalid value produces another invalid value.
    // - Cloning a source value produces a cloned value pointing to it.
    // - Cloning a cloned value produces another cloned value with the same source.
    // Moving an allocator always produces a compatible allocator:
    // - Invalid values remain invalid when moved.
    // - Source values cannot be moved, since they are always pinned to the heap.
    // - Cloned values keep the same source when moved.
    unsafe impl Allocator for SourceAllocator {
        fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
            let mut map = SOURCE_MAP.lock().unwrap();
            let block = Global.allocate(layout)?;
            let block_addr = block.cast::<u8>().addr();
            map.insert(block_addr, self.source());
            Ok(block)
        }

        unsafe fn deallocate(&self, block: NonNull<u8>, layout: Layout) {
            let mut map = SOURCE_MAP.lock().unwrap();
            let block_addr = block.addr();
            // SAFETY: `block` came from an allocator that shares blocks with this allocator.
            if map.remove(&block_addr) != Some(self.source()) {
                hint::unreachable_unchecked()
            }
            Global.deallocate(block, layout)
        }
    }
}
use source_allocator::SourceAllocator;

// SAFETY: `alloc1` and `alloc2` must share blocks.
unsafe fn test_same(alloc1: &SourceAllocator, alloc2: &SourceAllocator) {
    let ptr = alloc1.allocate(Layout::new::<i32>()).unwrap();
    alloc2.deallocate(ptr.cast(), Layout::new::<i32>());
}

fn main() {
    let orig = &*SourceAllocator::new_source();
    let orig_cloned1 = &orig.clone();
    let orig_cloned2 = &orig.clone();
    let copied = &{ *orig };
    let copied_cloned1 = &copied.clone();
    let copied_cloned2 = &copied.clone();
    unsafe {
        test_same(orig, orig_cloned1);
        test_same(orig_cloned1, orig_cloned2);
        test_same(copied, copied_cloned1);
        test_same(copied_cloned1, copied_cloned2);
        test_same(orig, copied); // error
    }
}
```
</details>
This could result in issues in the future for algorithms that specialize on `Copy` types. Right now, nothing in the standard library that depends on `Allocator + Clone` is susceptible to this issue, but I still think it would make sense to specify that copying an allocator is always as valid as cloning it.
Upgrade to LLVM 16, again
Relative to the previous attempt in https://github.com/rust-lang/rust/pull/107224:
* Update to GCC 8.5 on dist-x86_64-linux, to avoid std::optional ABI-incompatibility between libstdc++ 7 and 8.
* Cherry-pick 96df79af02.
* Cherry-pick 6fc670e5e3.
r? `@cuviper`
Refactor `try_execute_query`
This merges `JobOwner::try_start` into `try_execute_query`, removing `TryGetJob` in the process. Three new functions are extracted from `try_execute_query`: `execute_job`, `cycle_error`, and `wait_for_query`. This makes the control flow a bit clearer and improves performance.
Based on https://github.com/rust-lang/rust/pull/109046.
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check</td><td align="right">1.7134s</td><td align="right">1.7061s</td><td align="right"> -0.43%</td></tr><tr><td>🟣 <b>hyper</b>:check</td><td align="right">0.2519s</td><td align="right">0.2510s</td><td align="right"> -0.35%</td></tr><tr><td>🟣 <b>regex</b>:check</td><td align="right">0.9517s</td><td align="right">0.9481s</td><td align="right"> -0.38%</td></tr><tr><td>🟣 <b>syn</b>:check</td><td align="right">1.5389s</td><td align="right">1.5338s</td><td align="right"> -0.33%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check</td><td align="right">5.9488s</td><td align="right">5.9258s</td><td align="right"> -0.39%</td></tr><tr><td>Total</td><td align="right">10.4048s</td><td align="right">10.3647s</td><td align="right"> -0.38%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">0.9962s</td><td align="right"> -0.38%</td></tr></table>
r? `@cjgillot`
Optimize `incremental_verify_ich`
This optimizes `incremental_verify_ich` by operating on `SerializedDepNodeIndex`, saving 2 hashmap lookups. The panic paths are also changed to get a `TyCtxt` reference using TLS.
Implement Default for some alloc/core iterators
Add `Default` impls to the following collection iterators:
* slice::{Iter, IterMut}
* binary_heap::IntoIter
* btree::map::{Iter, IterMut, Keys, Values, Range, IntoIter, IntoKeys, IntoValues}
* btree::set::{Iter, IntoIter, Range}
* linked_list::IntoIter
* vec::IntoIter
and these adapters:
* adapters::{Chain, Cloned, Copied, Enumerate, Flatten, Fuse, Rev}
For iterators that are generic over allocators, the impl is only provided for the global allocator, because we can't conjure an allocator from nothing and would otherwise have to turn the allocator field into an `Option` just for this change.
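Assuming the impls listed above, a small usage sketch (the defaulted iterators are simply empty):
```rust
fn main() {
    // An empty `slice::Iter` without needing a slice to borrow from.
    let empty: std::slice::Iter<'_, i32> = Default::default();
    assert_eq!(empty.len(), 0);

    // Owning iterators default to empty as well (global allocator only).
    let none: std::vec::IntoIter<String> = Default::default();
    assert_eq!(none.count(), 0);

    // Adapters gain `Default` when their inner iterator has it.
    let rev: std::iter::Rev<std::slice::Iter<'_, u8>> = Default::default();
    assert_eq!(rev.count(), 0);
}
```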
These changes will be insta-stable.
ACP: https://github.com/rust-lang/libs-team/issues/77
Add #[inline] to the Into for From impl
I was skimming through the standard library MIR and I noticed a handful of very suspicious `Into::into` calls in `alloc`. ~~Since this is a trivial wrapper function, `#[inline(always)]` seems appropriate.~~ `#[inline]` works too and is a lot less spooky.
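For reference, the blanket impl in question is roughly the following (a simplified excerpt; this impl lives in `core::convert` and cannot be written outside `core`), with the change being the `#[inline]` attribute:
```rust
// Roughly the blanket `Into` impl in `core::convert` (stability attributes omitted).
impl<T, U> Into<U> for T
where
    U: From<T>,
{
    #[inline]
    fn into(self) -> U {
        U::from(self)
    }
}
```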
r? `@thomcc`
Bugfix: avoid panic on invalid json output from libtest
#108659 introduced a custom test display implementation. It does so by having libtest output JSON; the stdout is then read and parsed. The code trims each line it reads and checks whether it starts with a `{` and ends with a `}`; if so, it concludes that the line must be a JSON-encoded `Message`. Unfortunately, this does not work in all cases:
- This assumes that tests running with `--nocapture` will never emit lines that start with `{` and end with `}`
- Output is generated by issuing multiple `write_message` [calls](https://github.com/rust-lang/rust/blob/master/library/test/src/formatters/json.rs#L33-L60), where only the last one emits a `\n`. This likely results in a race condition, as we see multiple JSON outputs on the same line when running tests for the `x86_64-fortanix-unknown-sgx` target:
```
10:21:04 [0m[0m[1m[32m Running[0m tests/run-time-detect.rs (build/x86_64-unknown-linux-gnu/stage1-std/x86_64-fortanix-unknown-sgx/release/deps/run_time_detect-8c66026bd4b1871a)
10:21:04
10:21:04 running 1 tests
10:21:04 test x86_all ... ok
10:21:04 [0m[0m[1m[32m Running[0m tests/thread.rs (build/x86_64-unknown-linux-gnu/stage1-std/x86_64-fortanix-unknown-sgx/release/deps/thread-ed5456a7d80a6193)
10:21:04 thread 'main' panicked at 'failed to parse libtest json output; error: trailing characters at line 1 column 135, line: "{ \"type\": \"suite\", \"event\": \"ok\", \"passed\": 1, \"failed\": 0, \"ignored\": 0, \"measured\": 0, \"filtered_out\": 0, \"exec_time\": 0.000725911 }{ \"type\": \"suite\", \"event\": \"started\", \"test_count\": 1 }\n"', render_tests.rs:108:25
```
This PR implements a partial fix by being much more conservative about what it accepts as a valid JSON-encoded `Message`. This prevents panics, but still does not resolve the race condition. A discussion is needed about where exactly this race condition comes from and how it can best be avoided.
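A minimal sketch of what "more conservative" could look like (not the exact code in the PR): instead of pattern-matching on braces, only treat a line as a libtest message if it actually deserializes into the expected shape, and pass it through verbatim otherwise.
```rust
use serde_json::Value;

/// Returns `Some(value)` only if `line` parses as a JSON object that looks like a
/// libtest message (i.e. it has a string `"type"` field); anything else is treated
/// as ordinary test output and passed through unchanged.
fn parse_libtest_message(line: &str) -> Option<Value> {
    let trimmed = line.trim();
    let value: Value = serde_json::from_str(trimmed).ok()?;
    match value.get("type") {
        Some(Value::String(_)) => Some(value),
        _ => None,
    }
}

fn main() {
    assert!(parse_libtest_message(r#"{ "type": "suite", "event": "started" }"#).is_some());
    // A line that merely starts with `{` and ends with `}` is no longer assumed to be a message.
    assert!(parse_libtest_message("{ not json at all }").is_none());
    // Two concatenated messages on one line fail to parse and are passed through instead of panicking.
    assert!(parse_libtest_message(r#"{"type":"suite"}{"type":"suite"}"#).is_none());
}
```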
cc: `@jethrogb` `@pietroalbini`
Without this, users trying to run `x.py dist` under a sufficiently long
path run into problems when we build the resulting tarballs due to
length limits in the original tar spec. The error looks like:
```
Finished release [optimized + debuginfo] target(s) in 0.34s
Copying stage0 std from stage0 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-musl)
Building stage0 tool rust-installer (x86_64-unknown-linux-gnu)
Finished release [optimized] target(s) in 0.35s
Dist rust-std-1.67.1-x86_64-unknown-linux-musl
Error: failed to generate installer
Caused by:
0: failed to tar file '/home/AAAAAAAAAAAAAA/BBBBBB/CCCC/DDD/EEEEE/FFFFFFFFFFFF/GGGGGGGGGGGGGGGG/HHHHHHHHHH/IIIIIIIIIIIIIII/JJJJJ/KKKKKKK/src/build/tmp/tarball/rust-std/x86_64-unknown-linux-musl/rust-std-1.67.1-x86_64-unknown-linux-musl/rust-std-x86_64-unknown-linux-musl/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/libc.a'
1: provided value is too long when setting link name for
Build completed unsuccessfully in 0:00:03
```
The fix is to make use of the widely-supported GNU tar extensions which
lift this restriction. Switching to [`tar::Builder::append_link`] takes
care of that for us. See also alexcrichton/tar-rs#273.
[`tar::Builder::append_link`]: https://docs.rs/tar/0.4.38/tar/struct.Builder.html#method.append_link
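A minimal sketch of the `append_link` usage this switches to (paths and the in-memory writer here are made up for illustration): `append_link` lets the `tar` crate emit a GNU long-link extension entry when the target exceeds the 100-byte link-name field of the classic header, instead of erroring.
```rust
use std::io;
use tar::{Builder, EntryType, Header};

fn main() -> io::Result<()> {
    let mut builder = Builder::new(Vec::new());

    // A symlink target longer than the 100-byte link-name field of the classic tar header.
    let long_target = "a/".repeat(80) + "libc.a";

    let mut header = Header::new_gnu();
    header.set_entry_type(EntryType::Symlink);
    header.set_size(0);

    // `append_link` writes a GNU long-link extension entry for us when needed;
    // setting the link name directly on a classic header would instead fail with
    // "provided value is too long when setting link name".
    builder.append_link(&mut header, "rust-std/lib/self-contained/libc.a", long_target)?;

    builder.into_inner()?;
    Ok(())
}
```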
rustdoc: Optimize impl sorting during rendering
This should fix the perf regression on [bitmaps-3.1.0](https://github.com/rust-lang/rustc-perf/tree/master/collector/compile-benchmarks/bitmaps-3.1.0) from https://github.com/rust-lang/rust/pull/107765.
The bitmaps crate has a lot of impls:
```rust
impl Bits for BitsImpl<1> { ... }
impl Bits for BitsImpl<2> { ... }
// ...
impl Bits for BitsImpl<1023> { ... }
impl Bits for BitsImpl<1024> { ... }
```
and the logic in `fn print_item` sorts them in natural order.
Before https://github.com/rust-lang/rust/pull/107765, the impls came in source order, which happened to already be sorted in the necessary way, so the comparison function was called relatively few times.
After https://github.com/rust-lang/rust/pull/107765, the impls come in "stable" order (based on def path hash), so the comparison function has to be called more times to sort them.
The comparison function was terribly inefficient, so this caused a large perf regression.
This PR attempts to make it more efficient by using cached keys during sorting.
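The general technique is the one `slice::sort_by_cached_key` provides: compute each sort key once per element instead of once per comparison. A small self-contained illustration (not the rustdoc code itself):
```rust
fn main() {
    // Pretend these are impl headers like "BitsImpl<1>" .. "BitsImpl<1024>", in arbitrary order.
    let mut impls: Vec<String> = (1..=1024).rev().map(|n| format!("BitsImpl<{n}>")).collect();

    // A relatively expensive "natural order" key: split off the numeric part so that
    // "BitsImpl<2>" sorts before "BitsImpl<10>".
    fn natural_key(s: &str) -> (String, u64) {
        let digits: String = s.chars().filter(|c| c.is_ascii_digit()).collect();
        let non_digits: String = s.chars().filter(|c| !c.is_ascii_digit()).collect();
        (non_digits, digits.parse().unwrap_or(0))
    }

    // `sort_by`/`sort_by_key` would recompute the key on every comparison
    // (roughly O(n log n) key computations); caching computes it once per element (O(n)).
    impls.sort_by_cached_key(|s| natural_key(s));

    assert_eq!(impls[0], "BitsImpl<1>");
    assert_eq!(impls[1], "BitsImpl<2>");
    assert_eq!(impls[1023], "BitsImpl<1024>");
}
```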
Use poison instead of undef
In cases where it is legal, we should prefer poison values over undef values.
This replaces undef with poison for aggregate construction and for uninhabited types. There are more places where we can likely use poison, but I wanted to stay conservative to start with.
In particular the aggregate case is important for newer LLVM versions, which are not able to handle an undef base value during early optimization due to poison-propagation concerns.
r? `@cuviper`