Fix "Directly go to item in search if there is only one result" setting
Part of #66181.
The setting was actually broken, so I fixed it when I added the GUI test.
r? `@notriddle`
Stabilize `nonnull_slice_from_raw_parts`
FCP is done: https://github.com/rust-lang/rust/issues/71941#issuecomment-1100910416
Note that this doesn't const-stabilize `NonNull::slice_from_raw_parts`, as `slice_from_raw_parts_mut` isn't const-stabilized yet. Given #67456 and #57349, that isn't likely to happen soon; in the meantime, stabilizing just the non-const feature still makes sense, I think.
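For reference, a minimal usage sketch of the API being stabilized here (illustrative, not part of this PR's diff):

```rust
use std::ptr::NonNull;

fn main() {
    let mut data = [1u8, 2, 3];
    let first = NonNull::new(data.as_mut_ptr()).unwrap();

    // Build a `NonNull<[u8]>` from a thin pointer plus a length.
    let slice_ptr: NonNull<[u8]> = NonNull::slice_from_raw_parts(first, data.len());

    // SAFETY: `slice_ptr` points into `data`, which is live and valid for 3 elements.
    let slice: &[u8] = unsafe { slice_ptr.as_ref() };
    assert_eq!(slice, &[1, 2, 3]);
}
```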
Closes #71941
Cleanup `codegen_fn_attrs`
The `match` control flow construct has been stable since 1.0, we should use it here.
Sorry for the hard-to-review diff; I did try to at least split it into two commits. But looking at before and after side by side (instead of whatever GitHub is doing) is probably the easiest way to make sure that I didn't forget about anything.
This is on top of #109088, so you may want to wait for that to land first.
Limit to one link job on mingw builders
This is another attempt to work around
https://github.com/rust-lang/rust/issues/108227.
By limiting to one link job, we should be able to avoid file name clashes in mkstemp().
Add #[inline] to as_deref
While working on https://github.com/rust-lang/rust/pull/109247 I found an `as_deref` call in the compiler that should have been inlined. This fixes the missing inlining (but doesn't address the perf issues I was chasing).
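For illustration, the shape of method this affects, sketched on a standalone wrapper type rather than the actual `Option` source: small delegating methods like this benefit from an explicit `#[inline]` hint when the optimizer doesn't inline them on its own.

```rust
use std::ops::Deref;

struct Wrapper<T>(Option<T>);

impl<T: Deref> Wrapper<T> {
    // A thin delegating method; `#[inline]` hints that calls should be inlined.
    #[inline]
    fn as_deref(&self) -> Option<&T::Target> {
        self.0.as_deref()
    }
}

fn main() {
    let w = Wrapper(Some(String::from("hi")));
    assert_eq!(w.as_deref(), Some("hi"));
}
```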
r? `@thomcc`
Refactor: `VariantIdx::from_u32(0)` -> `FIRST_VARIANT`
Since structs are always `VariantIdx(0)`, there's a bunch of files where the only reason they had `VariantIdx` or `vec::Idx` imported at all was to get the first variant.
So this uses a constant for that, and adds some doc-comments to `VariantIdx` while I'm there, since [it doesn't have any today](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_target/abi/struct.VariantIdx.html).
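A self-contained toy analogue of the refactor (the real `VariantIdx` and `FIRST_VARIANT` live in the compiler's `rustc_target::abi`/`rustc_index` machinery, so this only shows the shape of the change):

```rust
// Toy stand-ins for the compiler's index types, for illustration only.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct VariantIdx(u32);

impl VariantIdx {
    const fn from_u32(n: u32) -> Self {
        VariantIdx(n)
    }
}

/// The first variant: the one a struct's single "variant" always is.
const FIRST_VARIANT: VariantIdx = VariantIdx::from_u32(0);

fn main() {
    // Before: every call site spelled out the raw constructor.
    let old = VariantIdx::from_u32(0);
    // After: the named constant documents the intent.
    assert_eq!(old, FIRST_VARIANT);
}
```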
Still-further-specializable projections are ambiguous in new solver
Fixes https://github.com/rust-lang/rust/pull/108896/files#r1148450781
r? ``@BoxyUwU`` (though feel free to re-roll)
---
This can be used to create an unsound transmute function with the new solver:
```rust
#![feature(specialization)]
trait Default {
    type Id;

    fn intu(&self) -> &Self::Id;
}

impl<T> Default for T {
    default type Id = T;

    fn intu(&self) -> &Self::Id {
        self
    }
}

fn transmute<T: Default<Id = U>, U: Copy>(t: T) -> U {
    *t.intu()
}

use std::num::NonZeroU8;

fn main() {
    let s = transmute::<u8, Option<NonZeroU8>>(0);
    assert_eq!(s, None);
}
```
Improve "Auto-hide trait implementation documentation" GUI test
Part of #66181.
I'll start working on the `include` command for `browser-ui-test` so we can greatly reduce the duplicated code between setting tests.
r? ``@notriddle``
rustdoc: skip `// some variants omitted` if enum is `#[non_exhaustive]`
Fixes #108925
I've never touched rustdoc before, so this is probably not the best code.
cc `@dtolnay`
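For context, a hedged sketch of the kind of enum affected (a `#[doc(hidden)]` variant is the usual trigger for the omitted-variants note):

```rust
// A hypothetical enum: the hidden variant is what used to make rustdoc print
// "// some variants omitted" in the rendered signature, even though
// #[non_exhaustive] already signals that the list is incomplete.
#[non_exhaustive]
#[derive(Debug)]
pub enum Error {
    NotFound,
    PermissionDenied,
    #[doc(hidden)]
    Internal,
}

fn main() {
    println!("{:?} {:?}", Error::NotFound, Error::PermissionDenied);
}
```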
Permit the MIR inliner to inline diverging functions
This heuristic prevents inlining of `hint::unreachable_unchecked`, which in turn makes `Option/Result::unwrap_unchecked` a bad inlining candidate. I looked through the changes to `core`, `alloc`, `std`, and `hashbrown` by hand and they all seem reasonable. Let's see how this looks in perf...
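For illustration, a standalone sketch (not the actual `core` source) of the pattern this unblocks: helpers like `unwrap_unchecked` contain a call to the diverging `hint::unreachable_unchecked`, which previously disqualified them as MIR inlining candidates.

```rust
use std::hint;

// A hand-rolled stand-in for Option::unwrap_unchecked.
unsafe fn my_unwrap_unchecked<T>(opt: Option<T>) -> T {
    match opt {
        Some(value) => value,
        // Diverging call: the caller promises this arm is never reached.
        None => hint::unreachable_unchecked(),
    }
}

fn main() {
    // SAFETY: the value is `Some`.
    let x = unsafe { my_unwrap_unchecked(Some(5)) };
    assert_eq!(x, 5);
}
```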
---
Based on rustc-perf it looks like this regresses ctfe-stress, and the cachegrind diff indicates that this regression is in `InterpCx::statement`. I don't know how to do any deeper analysis because that function is _enormous_ in the try toolchain, which has no debuginfo in it. And a local build produces significantly different codegen for that function, even with LTO.
Clarify that copied allocators must behave the same
Currently, the safety documentation for `Allocator` says that a cloned or moved allocator must behave the same as the original. However, it does not specify that a copied allocator must behave the same, and it's possible to construct an allocator that permits being moved or cloned, but sometimes produces a new allocator when copied.
<details>
<summary>Contrived example which results in a Miri error</summary>
```rust
#![feature(allocator_api, once_cell, strict_provenance)]
use std::{
    alloc::{AllocError, Allocator, Global, Layout},
    collections::HashMap,
    hint,
    marker::PhantomPinned,
    num::NonZeroUsize,
    pin::Pin,
    ptr::{addr_of, NonNull},
    sync::{LazyLock, Mutex},
};

mod source_allocator {
    use super::*;

    // `SourceAllocator` has 3 states:
    // - invalid value: is_cloned == false, source != self.addr()
    // - source value: is_cloned == false, source == self.addr()
    // - cloned value: is_cloned == true
    pub struct SourceAllocator {
        is_cloned: bool,
        source: usize,
        _pin: PhantomPinned,
    }

    impl SourceAllocator {
        // Returns a pinned source value (pointing to itself).
        pub fn new_source() -> Pin<Box<Self>> {
            let mut b = Box::new(Self {
                is_cloned: false,
                source: 0,
                _pin: PhantomPinned,
            });
            b.source = b.addr();
            Box::into_pin(b)
        }

        fn addr(&self) -> usize {
            addr_of!(*self).addr()
        }

        // Invalid values point to source 0.
        // Source values point to themselves.
        // Cloned values point to their corresponding source.
        fn source(&self) -> usize {
            if self.is_cloned || self.addr() == self.source {
                self.source
            } else {
                0
            }
        }
    }

    // Copying an invalid value produces an invalid value.
    // Copying a source value produces an invalid value.
    // Copying a cloned value produces a cloned value with the same source.
    impl Copy for SourceAllocator {}

    // Cloning an invalid value produces an invalid value.
    // Cloning a source value produces a cloned value with that source.
    // Cloning a cloned value produces a cloned value with the same source.
    impl Clone for SourceAllocator {
        fn clone(&self) -> Self {
            if self.is_cloned || self.addr() != self.source {
                *self
            } else {
                Self {
                    is_cloned: true,
                    source: self.source,
                    _pin: PhantomPinned,
                }
            }
        }
    }

    static SOURCE_MAP: LazyLock<Mutex<HashMap<NonZeroUsize, usize>>> =
        LazyLock::new(Default::default);

    // SAFETY: Wraps `Global`'s methods with additional tracking.
    // All invalid values share blocks with each other.
    // Each source value shares blocks with all cloned values pointing to it.
    // Cloning an allocator always produces a compatible allocator:
    // - Cloning an invalid value produces another invalid value.
    // - Cloning a source value produces a cloned value pointing to it.
    // - Cloning a cloned value produces another cloned value with the same source.
    // Moving an allocator always produces a compatible allocator:
    // - Invalid values remain invalid when moved.
    // - Source values cannot be moved, since they are always pinned to the heap.
    // - Cloned values keep the same source when moved.
    unsafe impl Allocator for SourceAllocator {
        fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
            let mut map = SOURCE_MAP.lock().unwrap();
            let block = Global.allocate(layout)?;
            let block_addr = block.cast::<u8>().addr();
            map.insert(block_addr, self.source());
            Ok(block)
        }

        unsafe fn deallocate(&self, block: NonNull<u8>, layout: Layout) {
            let mut map = SOURCE_MAP.lock().unwrap();
            let block_addr = block.addr();
            // SAFETY: `block` came from an allocator that shares blocks with this allocator.
            if map.remove(&block_addr) != Some(self.source()) {
                hint::unreachable_unchecked()
            }
            Global.deallocate(block, layout)
        }
    }
}

use source_allocator::SourceAllocator;

// SAFETY: `alloc1` and `alloc2` must share blocks.
unsafe fn test_same(alloc1: &SourceAllocator, alloc2: &SourceAllocator) {
    let ptr = alloc1.allocate(Layout::new::<i32>()).unwrap();
    alloc2.deallocate(ptr.cast(), Layout::new::<i32>());
}

fn main() {
    let orig = &*SourceAllocator::new_source();
    let orig_cloned1 = &orig.clone();
    let orig_cloned2 = &orig.clone();
    let copied = &{ *orig };
    let copied_cloned1 = &copied.clone();
    let copied_cloned2 = &copied.clone();
    unsafe {
        test_same(orig, orig_cloned1);
        test_same(orig_cloned1, orig_cloned2);
        test_same(copied, copied_cloned1);
        test_same(copied_cloned1, copied_cloned2);
        test_same(orig, copied); // error
    }
}
```
</details>
This could result in issues in the future for algorithms that specialize on `Copy` types. Right now, nothing in the standard library that depends on `Allocator + Clone` is susceptible to this issue, but I still think it would make sense to specify that copying an allocator is always as valid as cloning it.
Upgrade to LLVM 16, again
Relative to the previous attempt in https://github.com/rust-lang/rust/pull/107224:
* Update to GCC 8.5 on dist-x86_64-linux, to avoid std::optional ABI-incompatibility between libstdc++ 7 and 8.
* Cherry-pick 96df79af02.
* Cherry-pick 6fc670e5e3.
r? `@cuviper`
Refactor `try_execute_query`
This merges `JobOwner::try_start` into `try_execute_query`, removing `TryGetJob` in the process. Three new functions are extracted from `try_execute_query`: `execute_job`, `cycle_error`, and `wait_for_query`. This makes the control flow a bit clearer and improves performance.
Based on https://github.com/rust-lang/rust/pull/109046.
| Benchmark | Before (time) | After (time) | % |
|---|---:|---:|---:|
| 🟣 **clap**:check | 1.7134s | 1.7061s | -0.43% |
| 🟣 **hyper**:check | 0.2519s | 0.2510s | -0.35% |
| 🟣 **regex**:check | 0.9517s | 0.9481s | -0.38% |
| 🟣 **syn**:check | 1.5389s | 1.5338s | -0.33% |
| 🟣 **syntex_syntax**:check | 5.9488s | 5.9258s | -0.39% |
| Total | 10.4048s | 10.3647s | -0.38% |
| Summary | 1.0000s | 0.9962s | -0.38% |
r? `@cjgillot`
Optimize `incremental_verify_ich`
This optimizes `incremental_verify_ich` by operating on `SerializedDepNodeIndex`, saving 2 hashmap lookups. The panic paths are also changed to get a `TyCtxt` reference using TLS.
Implement Default for some alloc/core iterators
Add `Default` impls to the following collection iterators:
* slice::{Iter, IterMut}
* binary_heap::IntoIter
* btree::map::{Iter, IterMut, Keys, Values, Range, IntoIter, IntoKeys, IntoValues}
* btree::set::{Iter, IntoIter, Range}
* linked_list::IntoIter
* vec::IntoIter
and these adapters:
* adapters::{Chain, Cloned, Copied, Enumerate, Flatten, Fuse, Rev}
For iterators that are generic over an allocator, the impl is only provided for the global allocator, since we can't conjure an allocator out of nothing and would otherwise have to turn the allocator field into an `Option` just for this change.
These changes will be insta-stable.
ACP: https://github.com/rust-lang/libs-team/issues/77
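A quick sketch of what these impls allow, assuming the list above lands as written: constructing an empty iterator without building a collection first.

```rust
fn main() {
    // An empty vec::IntoIter without allocating a Vec.
    let empty_vec_iter: std::vec::IntoIter<i32> = Default::default();
    assert_eq!(empty_vec_iter.len(), 0);

    // An empty slice iterator without a backing slice.
    let empty_slice_iter: std::slice::Iter<'_, String> = Default::default();
    assert_eq!(empty_slice_iter.count(), 0);
}
```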