Mark some more types as having insignificant dtor
These were caught by https://github.com/rust-lang/rust/pull/129864#issuecomment-2376658407, which is implementing a lint for some changes in drop order for temporaries in tail expressions.
Specifically, the destructors of `CString` and the bitpacked repr for `std::io::Error` are insignificant insofar as they don't have side-effects on things like locking or synchronization; they just free memory.
See the discussion on #89144 for what makes a drop impl "significant".
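For illustration, a hedged sketch of the distinction; the types and functions below are made up, not the ones touched by this commit:

```rust
use std::sync::Mutex;

// "Insignificant": dropping this only frees memory and has no other observable
// side effect, so drop-order changes for temporaries are harmless.
struct OwnedBytes(Vec<u8>);

// "Significant": dropping a `MutexGuard` releases the lock, so *when* it is
// dropped is observable to other threads.
fn with_lock(m: &Mutex<i32>) -> i32 {
    let guard = m.lock().unwrap();
    *guard
    // `guard` is dropped here; the unlock is a side effect that matters.
}

fn main() {
    let bytes = OwnedBytes(vec![1, 2, 3]);
    assert_eq!(bytes.0.len(), 3);
    assert_eq!(with_lock(&Mutex::new(7)), 7);
}
```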
This commit is a followup to https://github.com/rust-lang/rust/pull/124032. It
replaces the tests that test the various sort functions in the standard library
with a test-suite developed as part of
https://github.com/Voultapher/sort-research-rs. The current tests suffer a
couple of problems:
- They don't cover important real world patterns that the implementations take
advantage of and execute special code for.
- The input lengths tested miss out on code paths. For example, important safety
property tests never reach the quicksort part of the implementation.
- The miri side is often limited to `len <= 20` which means it very thoroughly
tests the insertion sort, which accounts for 19 out of 1.5k LoC.
- They are split between core and alloc, causing code duplication and uneven
coverage.
- The randomness is not repeatable, as it relies on
`std::hash::RandomState::new().build_hasher()`.
Most of these issues existed before
https://github.com/rust-lang/rust/pull/124032, but they are intensified by it.
One thing that is new and requires additional testing is that the new sort
implementations specialize based on type properties. For example, `Freeze` and
non-`Freeze` types execute different code paths.
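As a small illustration of that dimension (a sketch, not from the test-suite): `i32` is `Freeze` while `Cell<i32>` is not, so the two element types exercise different specializations:

```rust
use std::cell::Cell;

fn main() {
    // `i32` has no interior mutability and is `Freeze`.
    let mut plain = [3, 1, 2];
    plain.sort_unstable();
    assert_eq!(plain, [1, 2, 3]);

    // `Cell<i32>` has interior mutability and is not `Freeze`, so sorting it
    // goes through a different code path in the new implementations.
    let mut interior = [Cell::new(3), Cell::new(1), Cell::new(2)];
    interior.sort_unstable_by_key(|c| c.get());
    assert_eq!(interior.map(Cell::into_inner), [1, 2, 3]);
}
```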
Effectively there are three dimensions that matter:
- Input type
- Input length
- Input pattern
The ported test-suite tests various properties along all three dimensions,
greatly improving test coverage. It side-steps the miri issue by preferring
sampled approaches. For example, the test that checks whether the set of
elements is still the original one after a panic does not do so for every
single possible panic opportunity; instead it picks one at random and performs
this check across a range of input lengths, which varies the panic point
between them. This allows regular execution to easily test inputs of length
10k, and miri execution up to 100, which covers significantly more code. The
randomness used is tied to a fixed, but random per process execution, seed.
This allows for fully repeatable tests and fuzzer-like exploration across
multiple runs.
Structure-wise, the tests were previously found in the core integration tests
for `sort_unstable` and in the alloc unit tests for `sort`. The new test-suite
was developed as a purely black-box approach, which makes integration testing
the better place because it can't accidentally rely on internal access. Because
unwinding support is required, the tests can't be in core, even if the
implementation is, so they are now part of the alloc integration tests. Are
there architectures that can only build and test core and not alloc? If so, do
such platforms require sort testing? For what it's worth, the current
implementation passes miri with `--target mips64-unknown-linux-gnuabi64`, which
is big endian.
The test-suite also contains tests for properties that are provided by both
the current and previous implementations, are likely relied upon by users, but
weren't tested. For example, `self_cmp` tests that the two parameters `a` and
`b` passed into the comparison function are never references to the same
object; if they were, sorting for example a `&mut [Mutex<i32>]` could lead to a
deadlock.
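A hedged sketch of why that guarantee matters (not taken from the test-suite):

```rust
use std::sync::Mutex;

// If the comparison closure locks both elements and the sort ever passed the
// *same* element as both `a` and `b`, the second `lock()` would try to lock an
// already-held mutex on the same thread and deadlock (or panic). The
// `self_cmp` test asserts the sort implementations never do that.
fn sort_mutexes(v: &mut [Mutex<i32>]) {
    v.sort_by(|a, b| a.lock().unwrap().cmp(&*b.lock().unwrap()));
}

fn main() {
    let mut v = [Mutex::new(3), Mutex::new(1), Mutex::new(2)];
    sort_mutexes(&mut v);
    let sorted: Vec<i32> = v.iter().map(|m| *m.lock().unwrap()).collect();
    assert_eq!(sorted, [1, 2, 3]);
}
```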
Instead of using the hashed caller location as the random seed, it uses
seconds since the unix epoch divided by 10, which, given the timestamps in CI
logs, should be reasonably easy to reproduce while still allowing fuzzer-like
space exploration.
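For illustration, a minimal sketch of how such a seed could be derived (not the exact code in the test-suite):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Derive a per-process seed from coarse wall-clock time, so CI logs (which
/// include timestamps) make failures reproducible while different runs still
/// explore different inputs.
fn seed_from_time() -> u64 {
    let secs = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before unix epoch")
        .as_secs();
    secs / 10
}

fn main() {
    println!("test seed: {}", seed_from_time());
}
```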
Improve autovectorization of to_lowercase / to_uppercase functions
Refactor the code in the `convert_while_ascii` helper function to make it more suitable for auto-vectorization and also process the full ascii prefix of the string. The generic case conversion logic will only be invoked starting from the first non-ascii character.
The runtime on a microbenchmark with a small ascii-only input decreases from ~55ns to ~18ns per iteration. The new implementation also reduces the amount of unsafe code and encapsulates all unsafe inside the helper function.
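As a rough illustration of the approach -- not the actual `convert_while_ascii` helper, which converts in place and keeps the unsafe encapsulated -- a safe sketch of splitting work at the first non-ascii byte:

```rust
/// Lowercase the leading ASCII run with a simple per-byte loop that the
/// compiler can auto-vectorize, and hand back the remainder so the caller can
/// fall through to the generic Unicode case-conversion logic.
fn lowercase_ascii_prefix(s: &str) -> (String, &str) {
    let bytes = s.as_bytes();
    let ascii_len = bytes.iter().take_while(|b| b.is_ascii()).count();
    let lowered: String = bytes[..ascii_len]
        .iter()
        .map(|b| b.to_ascii_lowercase() as char)
        .collect();
    // `ascii_len` is always a char boundary, since the next byte is non-ASCII.
    (lowered, &s[ascii_len..])
}

fn main() {
    let (prefix, rest) = lowercase_ascii_prefix("Hello, Wörld");
    assert_eq!(prefix, "hello, w");
    assert_eq!(rest, "örld");
}
```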
Fixes #123712
update `compiler-builtins` to 0.1.126
This requires the addition of a bootstrap variant of the new `naked_asm!` macro.
r? `@tgross35`
extracted from https://github.com/rust-lang/rust/pull/128651
Since the stabilization in #127679 has reached stage0, 1.82-beta, we can
start using `&raw` freely, and even the soft-deprecated `ptr::addr_of!`
and `ptr::addr_of_mut!` can stop allowing the unstable feature.
I intentionally did not change any documentation or tests, but the rest
of those macro uses are all now using `&raw const` or `&raw mut` in the
standard library.
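For illustration, a minimal sketch of the stable syntax replacing the macro calls:

```rust
fn main() {
    let mut x = 0u32;
    // `&raw mut` creates a raw pointer without going through a reference,
    // which is what `ptr::addr_of_mut!(x)` expanded to.
    let q: *mut u32 = &raw mut x;
    unsafe { *q += 1 };
    assert_eq!(x, 1);

    // The const form works the same way, replacing `ptr::addr_of!(x)`.
    let p: *const u32 = &raw const x;
    assert_eq!(unsafe { *p }, 1);
}
```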
Refactor the code in the `convert_while_ascii` helper function to make
it more suitable for auto-vectorization and also process the full ascii
prefix of the string. The generic case conversion logic will only be
invoked starting from the first non-ascii character.
The runtime on microbenchmarks with ascii-only inputs improves between
1.5x for short and 4x for long inputs on x86_64 and aarch64.
The new implementation also encapsulates all unsafe inside the
`convert_while_ascii` function.
Fixes #123712
Add str.as_str() for easy Deref to string slices
Working with `Box<str>` is cumbersome, because in places like `iter.filter()` it can end up being `&Box<str>` or even `&&Box<str>`, and such a type doesn't always get auto-dereferenced as expected.
Dereferencing such a box to `&str` requires ugly syntax like `&**boxed_str` or `&***boxed_str`, with the exact number of `*`s depending on the level of indirection.
`Box<str>` is [not easily comparable with other string types](https://github.com/rust-lang/rust/pull/129852) via `PartialEq`. `Box<str>` won't work for lookups in types like `HashSet<String>`, because `Borrow<String>` won't take types like `&Box<str>`. OTOH `set.contains(s.as_str())` works nicely regardless of levels of indirection.
`String` has a simple solution for this: the `as_str()` method, and `Box<str>` should too.
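A small sketch of the ergonomic difference (the `as_str()` call is shown as a comment, since it is the method being added here and is unstable until then):

```rust
fn main() {
    let boxed: Box<str> = "alpha".into();
    let by_ref: &Box<str> = &boxed;

    // Today: the exact number of `*`s depends on how many references deep you are.
    let a: &str = &**by_ref;

    // With `str::as_str` (added here), auto-deref would handle the indirection:
    // let a: &str = by_ref.as_str();

    assert_eq!(a, "alpha");
}
```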
Avoid re-validating UTF-8 in `FromUtf8Error::into_utf8_lossy`
Part of the unstable feature `string_from_utf8_lossy_owned` - #129436
Refactor `FromUtf8Error::into_utf8_lossy` to copy valid UTF-8 bytes into the buffer, avoiding double validation of bytes.
Add tests that mirror the `String::from_utf8_lossy` tests.
Refactor `into_utf8_lossy` to copy valid UTF-8 bytes into the buffer,
avoiding double validation of bytes.
Add tests that mirror the `String::from_utf8_lossy` tests
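A nightly-only sketch of the behavior being preserved by the refactor (feature `string_from_utf8_lossy_owned`):

```rust
#![feature(string_from_utf8_lossy_owned)]

fn main() {
    // 0x80 is a lone continuation byte, so this is invalid UTF-8.
    let err = String::from_utf8(vec![b'a', 0x80, b'b']).unwrap_err();
    // Only the invalid byte is replaced with U+FFFD; the bytes already known
    // to be valid are copied without being validated a second time.
    assert_eq!(err.into_utf8_lossy(), "a\u{FFFD}b");
}
```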
[Clippy] Get rid of most `std` `match_def_path` usage, swap to diagnostic items.
Part of https://github.com/rust-lang/rust-clippy/issues/5393.
This was going to remove all `std` paths, but `SeekFrom` has issues being cleanly replaced with a diagnostic item as the paths are for variants, which currently cannot be diagnostic items.
This also, as a last step, categorizes the paths to help with future path removals.
Add new_cyclic_in for Rc and Arc
Currently, `new_cyclic_in` does not exist for `Rc` and `Arc`. This is an oversight, according to https://github.com/rust-lang/wg-allocators/issues/132.
This PR adds `new_cyclic_in` for `Rc` and `Arc`. The implementation is almost exactly the same as `new_cyclic`, with some small differences to make it allocator-specific. `new_cyclic`'s implementation has been replaced with a call to `new_cyclic_in(data_fn, Global)`.
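For reference, a minimal sketch using the existing stable `new_cyclic`; per this PR, `new_cyclic_in` takes the same closure plus an explicit allocator argument:

```rust
use std::rc::{Rc, Weak};

struct Node {
    // The node holds a weak pointer back to itself.
    me: Weak<Node>,
}

fn main() {
    // The closure receives a `Weak` to the allocation being constructed.
    let node = Rc::new_cyclic(|me| Node { me: me.clone() });
    assert!(node.me.upgrade().is_some());
}
```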
Remaining questions:
* ~~Is requiring Allocator to be Clone OK? According to https://github.com/rust-lang/wg-allocators/issues/88, Allocators should be cheap to clone. I'm just hesitant to add unnecessary constraints, though I don't see an obvious workaround for this function since many called functions in new_cyclic_in expect an owned Allocator. I see Allocator.by_ref() as an option, but that doesn't work on when creating Weak { ptr: init_ptr, alloc: alloc.clone() }, because the type of Weak then becomes Weak<T, &A> which is incompatible.~~ Fixed, thank you `@zakarumych!` This PR no longer requires the allocator to be Clone.
* Currently, new_cyclic_in's documentation is almost entirely copy-pasted from new_cyclic, with minor tweaks to make it more accurate (e.g. Rc<T> -> Rc<T, A>). The example section is removed to mitigate redundancy and instead redirects to cyclic_in. Is this appropriate?
* ~~The comments in new_cyclic_in (and much of the implementation) are also copy-pasted from new_cyclic. Would it be better to make a helper method new_cyclic_in_internal that both functions call, with either Global or the custom allocator? I'm not sure if that's even possible, since the internal method would have to return Arc<T, Global> and I don't know if it's possible to "downcast" that to an Arc<T>. Maybe transmute would work here?~~ Done, thanks `@zakarumych`
* Arc::new_cyclic is #[inline], but Rc::new_cyclic is not. Which is preferred?
* nit: does it matter where in the impl block new_cyclic_in is defined?
Implement feature `string_from_utf8_lossy_owned` for lossy conversion from `Vec<u8>` to `String` methods
Accepted ACP: https://github.com/rust-lang/libs-team/issues/116
Tracking issue: #129436
Implement feature for lossily converting from `Vec<u8>` to `String`
- Add `String::from_utf8_lossy_owned`
- Add `FromUtf8Error::into_utf8_lossy`
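A nightly-only sketch of the intended usage (feature `string_from_utf8_lossy_owned`):

```rust
#![feature(string_from_utf8_lossy_owned)]

fn main() {
    // 0xFF can never appear in valid UTF-8.
    let bytes = vec![b'h', b'i', 0xFF];
    // Unlike `String::from_utf8_lossy(&bytes)`, this consumes the Vec and is
    // intended to reuse its allocation when the bytes are already valid UTF-8.
    let s: String = String::from_utf8_lossy_owned(bytes);
    assert_eq!(s, "hi\u{FFFD}");
}
```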
---
Related to #64727, but unsure whether to mark it "fixed" by this PR.
That issue partly asks for in-place replacement of the original allocation. We fulfill the other half of that request with these functions.
Closes #64727
Stabilize `&mut` (and `*mut`) as well as `&Cell` (and `*const Cell`) in const
This stabilizes `const_mut_refs` and `const_refs_to_cell`. That allows a bunch of new things in const contexts (a small example follows this list):
- Mentioning `&mut` types
- Creating `&mut` and `*mut` values
- Creating `&T` and `*const T` values where `T` contains interior mutability
- Dereferencing `&mut` and `*mut` values (both for reads and writes)
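For illustration, a small sketch of code that is accepted once this lands:

```rust
// Now allowed: creating an `&mut`, writing through it, and reading the result,
// all during const evaluation.
const _: () = {
    let mut x = 5;
    let r = &mut x;
    *r += 1;
    assert!(x == 6);
};
```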
The same rules as at runtime apply: mutating immutable data is UB. This includes mutation through pointers derived from shared references; the following is diagnosed with a hard error:
```rust
#[allow(invalid_reference_casting)]
const _: () = {
    let mut val = 15;
    let ptr = &val as *const i32 as *mut i32;
    unsafe { *ptr = 16; }
};
```
The main limitation that is enforced is that the final value of a const (or non-`mut` static) may not contain `&mut` values nor interior mutable `&` values. This is necessary because the memory those references point to becomes *read-only* when the constant is done computing, so (interior) mutable references to such memory would be pretty dangerous. We take a multi-layered approach here to ensuring no mutable references escape the initializer expression:
- A static analysis rejects (interior) mutable references when the referee looks like it may outlive the current MIR body.
- To be extra sure, this static check is complemented by a "safety net" of dynamic checks. ("Dynamic" in the sense of "running during/after const-evaluation, e.g. at runtime of this code" -- in contrast to "static" which works entirely by looking at the MIR without evaluating it.)
- After the final value is computed, we do a type-driven traversal of the entire value, and if we find any `&mut` or interior-mutable `&` we error out.
- However, the type-driven traversal cannot traverse `union` or raw pointers, so there is a second dynamic check where if the final value of the const contains any pointer that was not derived from a shared reference, we complain. This is currently a future-compat lint, but will become an ICE in #128543. On the off-chance that it's actually possible to trigger this lint on stable, I'd prefer if we could make it an ICE before stabilizing const_mut_refs, but it's not a hard blocker. This part of the "safety net" is only active for mutable references since with shared references, it has false positives.
Altogether this should prevent people from leaking (interior) mutable references out of the const initializer.
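Conversely, a minimal sketch of what the final-value check still rejects (exact diagnostics may differ):

```rust
// Still rejected: the final value of a const may not contain an `&mut`,
// because the memory it points to becomes read-only after evaluation.
const BAD: &mut i32 = &mut 42;
```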
While updating the tests I learned that surprisingly, this code gets rejected:
```rust
const _: Vec<i32> = {
    let mut x = Vec::<i32>::new(); //~ ERROR destructor of `Vec<i32>` cannot be evaluated at compile-time
    let r = &mut x;
    let y = x;
    y
};
```
The analysis that rejects destructors in `const` is very conservative when it sees an `&mut` being created to `x`, and then considers `x` to be always live. See [here](https://github.com/rust-lang/rust/issues/65394#issuecomment-541499219) for a longer explanation. `const_precise_live_drops` will solve this, so I consider this problem to be tracked by https://github.com/rust-lang/rust/issues/73255.
Cc `@rust-lang/wg-const-eval` `@rust-lang/lang`
Cc https://github.com/rust-lang/rust/issues/57349
Cc https://github.com/rust-lang/rust/issues/80384
Add `NonNull` convenience methods to `Box` and `Vec`
Implements the ACP: https://github.com/rust-lang/libs-team/issues/418.
The docs for the added methods are mostly copied from the existing methods that use raw pointers instead of `NonNull`.
I'm new to this "contributing to rustc" thing, so I'm sorry if I did something wrong. In particular, I don't know what the process is for creating a new unstable feature. Please advise me if I should do something. Thank you.
some const cleanup: remove unnecessary attributes, add const-hack indications
I learned that we use `FIXME(const-hack)` on top of the "const-hack" label. That seems much better since it marks the right place in the code and moves around with the code. So I went through the PRs with that label and added appropriate FIXMEs in the code. IMO this means we can then remove the label -- Cc ``@rust-lang/wg-const-eval``.
I also noticed some const stability attributes that don't do anything useful, and removed them.
r? ``@fee1-dead``
Also emit `missing_docs` lint with `--test` to fulfil expectations
This PR removes the "test harness" suppression of the `missing_docs` lint so that `#[expect]` expectations can be fulfilled now that the lint is "relevant".
I think the goal was to avoid false positives when linting public items under `#[cfg(test)]`, but with effective visibility we should no longer have any false positives.
Another possibility would be to query the lint level and only emit the lint if it's of expect level, but that is even more hacky.
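A minimal sketch of an expectation this change makes fulfillable under `--test` (crate root of a library; names made up):

```rust
// With the suppression removed, `missing_docs` also runs in --test builds, so
// this expectation is fulfilled instead of triggering
// `unfulfilled_lint_expectations`.
#![expect(missing_docs)]

pub fn undocumented() {}
```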
Fixes https://github.com/rust-lang/rust/issues/130021
try-job: x86_64-gnu-aux
enable -Zrandomize-layout in debug CI builds
This builds rustc/libs/tools with `-Zrandomize-layout` on *-debug CI runners.
Only a handful of tests and asserts break with that enabled, which is promising. One test was fixable; the rest are dealt with by disabling them through new cargo features or compiletest directives.
The config.toml flag `rust.randomize-layout` defaults to false, so it has to be explicitly enabled for now.
Rollup of 9 pull requests
Successful merges:
- #127474 (doc: Make block of inline Deref methods foldable)
- #129678 (Deny imports of `rustc_type_ir::inherent` outside of type ir + new trait solver)
- #129738 (`rustc_mir_transform` cleanups)
- #129793 (add extra linebreaks so rustdoc can identify the first sentence)
- #129804 (Fixed some typos in the standard library documentation/comments)
- #129837 (Actually parse stdout json, instead of using hacky contains logic.)
- #129842 (Fix LLVM ABI NAME for riscv64imac-unknown-nuttx-elf)
- #129843 (Mark myself as on vacation for triagebot)
- #129858 (Replace walk with visit so we dont skip outermost expr kind in def collector)
Failed merges:
- #129777 (Add `unreachable_pub`, round 4)
- #129868 (Remove kobzol vacation status)
r? `@ghost`
`@rustbot` modify labels: rollup
Apply size optimizations to panic machinery and some cold functions
* std dependencies gimli and addr2line are now built with opt-level=s
* various panic-related methods and `#[cold]` methods are now marked `#[optimize(size)]`
Panics should be cold enough that it doesn't make sense to optimize them for speed. The only tradeoff is that if someone captures and prints a lot of backtraces (without panics), the opt-level change might impact their perf.
Seems to be the first use of the optimize attribute. Tracking issue #54882
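A nightly-only sketch of the attribute usage (the function and condition below are made up):

```rust
#![feature(optimize_attribute)] // tracking issue #54882

#[cold]
#[optimize(size)] // trade speed for size on this cold path
fn cold_error_path(msg: &str) -> ! {
    panic!("{msg}");
}

fn main() {
    if std::env::args().count() > 100 {
        cold_error_path("too many arguments");
    }
}
```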
Re-enable android tests/benches in alloc/core
This is basically a revert of https://github.com/rust-lang/rust/pull/73729. These tests should work on android now; it's been 4 years and we don't use dlmalloc on that target anymore.
And I've validated that they should pass now with a try-build :)
Add fmt::Debug to sync::Weak<T, A>
Currently, `sync::Weak<T>` implements `Debug`, but `sync::Weak<T, A>` does not. This appears to be an oversight, as `rc::Weak<T, A>` implements `Debug`. (Note: `sync::Weak` is the weak for `Arc`, and `rc::Weak` is the weak for `Rc`.)
This PR adds the Debug trait for `sync::Weak<T, A>`. The issue was initially brought up here: https://github.com/rust-lang/wg-allocators/issues/131
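For context, a small sketch with the default allocator; the PR gives the allocator-generic `sync::Weak<T, A>` the same `Debug` output:

```rust
use std::sync::{Arc, Weak};

fn main() {
    let strong = Arc::new(5);
    let weak: Weak<i32> = Arc::downgrade(&strong);
    // Prints "(Weak)": the impl deliberately doesn't try to read the value,
    // which may already be gone.
    println!("{weak:?}");
}
```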
A partial stabilization that only affects:
- `AllocType<T>::new_uninit`
- `AllocType<T>::assume_init`
- `AllocType<[T]>::new_uninit_slice`
- `AllocType<[T]>::assume_init`
where `AllocType` is `Box`, `Rc`, or `Arc`.
library: Move unstable API of new_uninit to new features
- `new_zeroed` variants move to `new_zeroed_alloc`
- the `write` fn moves to `box_uninit_write`
The remainder will be stabilized in upcoming patches, as it was decided to only stabilize `uninit*` and `assume_init`.
- `new_zeroed` variants move to `new_zeroed_alloc`
- the `write` fn moves to `box_uninit_write`
The remainder will be stabilized in upcoming patches, as
it was decided to only stabilize `uninit*` and `assume_init`.