Hashing does not have to use the whole `Components` parsing machinery because it only needs to match the normalizations that `Components` performs:
* stripping redundant separators -> skipping separators
* stripping redundant `.` directories -> skipping a `.` that follows a separator
That's all it takes.
And instead of hashing an individual slice for each component, we feed the bytes directly into the hasher, which avoids hashing each component's length in addition to its contents.
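Roughly, the skipping logic looks like the sketch below: a simplified, Unix-only illustration with a hypothetical `hash_path_bytes` helper, not the exact std code (for example, it also skips a leading `.`, which `Components` preserves; that only costs an extra hash collision).

```rust
use std::hash::Hasher;

// Simplified sketch: hash a Unix-style path byte-by-byte, skipping the
// separators and `.` components that `Components` would normalize away.
fn hash_path_bytes<H: Hasher>(bytes: &[u8], state: &mut H) {
    let mut component_start = true;
    let mut i = 0;
    while i < bytes.len() {
        let b = bytes[i];
        if b == b'/' {
            // Runs of separators collapse; the separator itself is not hashed.
            component_start = true;
            i += 1;
        } else if component_start
            && b == b'.'
            && (i + 1 == bytes.len() || bytes[i + 1] == b'/')
        {
            // A lone `.` component directly after a separator is skipped too.
            i += 1;
        } else {
            // Feed the byte straight into the hasher; no per-component length
            // is written.
            component_start = false;
            state.write_u8(b);
            i += 1;
        }
    }
}
```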
Make `std::thread::available_concurrency` support a process-limited number of CPUs
Use `libc::sched_getaffinity` and count the number of CPUs in the returned mask. This handles cases where the process doesn't have access to all CPUs, such as when limited via `taskset` or similar.
This also covers cgroup cpusets.
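As a rough sketch of the approach (assuming the `libc` crate as a dependency; the hypothetical `available_cpus` below is not the exact std code), we query the affinity mask of the current process and count the CPUs set in it:

```rust
#[cfg(target_os = "linux")]
fn available_cpus() -> Option<usize> {
    unsafe {
        // Query the affinity mask of the calling process (pid 0 = self).
        let mut set: libc::cpu_set_t = std::mem::zeroed();
        let r = libc::sched_getaffinity(
            0,
            std::mem::size_of::<libc::cpu_set_t>(),
            &mut set,
        );
        if r == 0 {
            // Count only the CPUs the process is actually allowed to run on.
            let count = libc::CPU_COUNT(&set);
            if count > 0 {
                return Some(count as usize);
            }
        }
        // Fall back to whatever other means of counting CPUs the caller has.
        None
    }
}
```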
This stabilizes dereferencing immutable raw pointers in const contexts.
It does not stabilize `*mut T` dereferencing, which remains behind the
`const_raw_mut_ptr_deref` feature gate.
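For illustration, a minimal example of what this stabilization permits (the const names are just examples):

```rust
const VALUE: i32 = 42;
const PTR: *const i32 = &VALUE;
// Dereferencing a `*const T` in a const context is allowed once this is
// stabilized; it still requires `unsafe`. Dereferencing a `*mut T` here
// would still need the `const_raw_mut_ptr_deref` feature gate.
const DEREFED: i32 = unsafe { *PTR };

fn main() {
    assert_eq!(DEREFED, 42);
}
```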
In #89522 we learned that `clone3` is interacting poorly with Gentoo's
`sandbox` tool. We only need `clone3` for the unstable pidfd extensions, so
otherwise we avoid it and use a normal `fork`.
Replace `std::os::raw::c_ssize_t` with `std::os::raw::c_ptrdiff_t`
The discussions in #88345 brought up that `ssize_t` is not actually the signed index type defined in `stddef.h`; that's `ptrdiff_t`. It seems pretty clear that the use of `ssize_t` here was a mistake on my part, and that if we're going to bother having an `isize`-alike for FFI in `std::os::raw`, it should be `ptrdiff_t` and not `ssize_t`.
Anyway, both this and `c_size_t` are dubious in the face of the discussion in https://internals.rust-lang.org/t/pre-rfc-usize-is-not-size-t/15369, and any RFC/project-group/etc that handles those issues there should contend with these types in some manner, but that doesn't mean we shouldn't fix something wrong like this, even if it is unstable.
All that said, `size_t` is *vastly* more common in function signatures than either `ssize_t` or `ptrdiff_t`, so I'm going to update the tracking issue's list of unresolved questions to note that perhaps we only want `c_size_t` — I mostly added the signed version for symmetry, rather than to meet a need. (Given this, I'm also fine with modifying this patch to instead remove `c_ssize_t` without a replacement)
CC `@magicant` (who brought the issue up)
CC `@chorman0773` (who has a significantly firmer grasp on the minutiae of the C standard than I do)
r? `@joshtriplett` (original reviewer, active in the discussions around this)
Windows thread-local keyless drop
`#[thread_local]` allows us to maintain a per-thread list of destructors. This also avoids the need to synchronize global data (which is particularly tricky within the TLS callback function).
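A minimal sketch of the idea (not the actual std implementation; the `register_dtor`/`run_dtors` names are illustrative, and the unstable `thread_local` feature is assumed):

```rust
// Nightly-only: #[thread_local] statics are unstable.
#![feature(thread_local)]

type Dtor = unsafe extern "C" fn(*mut u8);

// Each thread gets its own destructor list, so registering a destructor
// never touches shared global state.
#[thread_local]
static mut DESTRUCTORS: Vec<(*mut u8, Dtor)> = Vec::new();

/// Register a destructor for a TLS value on the current thread.
pub unsafe fn register_dtor(ptr: *mut u8, dtor: Dtor) {
    DESTRUCTORS.push((ptr, dtor));
}

/// Run and drain the current thread's destructors; called at thread exit,
/// e.g. from the TLS callback on Windows.
pub unsafe fn run_dtors() {
    // Pop one entry at a time so a destructor may register new destructors.
    while let Some((ptr, dtor)) = DESTRUCTORS.pop() {
        dtor(ptr);
    }
}
```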
r? `@alexcrichton`
Add JoinHandle::is_running.
This adds:
```rust
impl<T> JoinHandle<T> {
    /// Checks if the associated thread is still running its main function.
    ///
    /// This might return `false` for a brief moment after the thread's main
    /// function has returned, but before the thread itself has stopped running.
    pub fn is_running(&self) -> bool;
}
```
The usual way to check whether a background thread is still running is to have it set some atomic flag at the end of its main function. We already do that, in the form of dropping an `Arc` and thereby decreasing its reference count, so we might as well expose that information.
This is useful in applications with a main loop (e.g. a game, GUI, or control system) where you spawn a background task and check every frame/iteration whether it has finished, so you can `.join()` it in that frame/iteration while keeping the program responsive.
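As a usage sketch (assuming the unstable `is_running` method proposed above), a main loop can poll the handle and only join once the worker has finished:

```rust
use std::thread;
use std::time::Duration;

fn main() {
    let handle = thread::spawn(|| {
        // Some background task.
        thread::sleep(Duration::from_millis(50));
        42
    });

    // Main loop: keep doing per-frame work while the task runs.
    while handle.is_running() {
        // ... one frame/iteration of work ...
        thread::sleep(Duration::from_millis(5));
    }

    // The thread has finished (or is just about to), so this join is quick.
    let result = handle.join().unwrap();
    assert_eq!(result, 42);
}
```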
Add #[must_use] to remaining std functions (O-Z)
I've run out of compelling reasons to group functions together across crates so I'm just going to go module-by-module. This is half of the remaining items from the `std` crate, from O-Z.
`panicking::take_hook` has a side effect: it unregisters the current panic hook, returning it. I almost ignored it, but the documentation example shows `let _ = panic::take_hook();`, so following suit I went ahead and added a `#[must_use]`.
```rust
std::panicking fn take_hook() -> Box<dyn Fn(&PanicInfo<'_>) + 'static + Sync + Send>;
```
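For illustration, the deliberate-discard pattern looks like this (a small sketch; the custom hook is just an example):

```rust
use std::panic;

fn main() {
    panic::set_hook(Box::new(|info| eprintln!("custom hook: {info}")));

    // `take_hook` unregisters the current hook and returns it. Discarding the
    // result is sometimes intentional (here: restoring the default hook), and
    // the explicit `let _ =` makes that intent visible while satisfying
    // #[must_use].
    let _ = panic::take_hook();
}
```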
I added these functions that clippy did not flag:
```rust
std::path::Path fn starts_with<P: AsRef<Path>>(&self, base: P) -> bool;
std::path::Path fn ends_with<P: AsRef<Path>>(&self, child: P) -> bool;
std::path::Path fn with_file_name<S: AsRef<OsStr>>(&self, file_name: S) -> PathBuf;
std::path::Path fn with_extension<S: AsRef<OsStr>>(&self, extension: S) -> PathBuf;
```
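These all return a new value rather than modifying the path in place, which is exactly the mistake `#[must_use]` catches; a small illustration:

```rust
use std::path::Path;

fn main() {
    let path = Path::new("report.md");

    // Easy to misread as an in-place change: the new PathBuf is built and
    // immediately dropped. With #[must_use] this now warns.
    path.with_extension("txt");

    // Intended usage keeps the returned value.
    let html = path.with_extension("html");
    assert_eq!(html, Path::new("report.html"));
}
```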
Parent issue: #89692
r? `@joshtriplett`
Add #[must_use] to expensive computations
The unifying theme for this commit is weak, admittedly. I put together a list of "expensive" functions when I originally proposed this whole effort, but nobody's cared about that criterion. Still, it's a decent way to bite off a not-too-big chunk of work.
Given the grab bag nature of this commit, the messages I used vary quite a bit. I'm open to wording changes.
For some reason clippy flagged four `BTreeSet` methods but didn't say boo about the equivalent ones on `HashSet`. I stared at them for a while but couldn't figure out the difference, so I added the `HashSet` ones in.
```rust
// Flagged by clippy.
alloc::collections::btree_set::BTreeSet<T> fn difference<'a>(&'a self, other: &'a BTreeSet<T>) -> Difference<'a, T>;
alloc::collections::btree_set::BTreeSet<T> fn symmetric_difference<'a>(&'a self, other: &'a BTreeSet<T>) -> SymmetricDifference<'a, T>;
alloc::collections::btree_set::BTreeSet<T> fn intersection<'a>(&'a self, other: &'a BTreeSet<T>) -> Intersection<'a, T>;
alloc::collections::btree_set::BTreeSet<T> fn union<'a>(&'a self, other: &'a BTreeSet<T>) -> Union<'a, T>;
// Ignored by clippy, but not by me.
std::collections::HashSet<T, S> fn difference<'a>(&'a self, other: &'a HashSet<T, S>) -> Difference<'a, T, S>;
std::collections::HashSet<T, S> fn symmetric_difference<'a>(&'a self, other: &'a HashSet<T, S>) -> SymmetricDifference<'a, T, S>;
std::collections::HashSet<T, S> fn intersection<'a>(&'a self, other: &'a HashSet<T, S>) -> Intersection<'a, T, S>;
std::collections::HashSet<T, S> fn union<'a>(&'a self, other: &'a HashSet<T, S>) -> Union<'a, T, S>;
```
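For context, these set operations return lazy iterators, so a discarded call does no work at all; a small illustration:

```rust
use std::collections::HashSet;

fn main() {
    let a = HashSet::from([1, 2, 3]);
    let b = HashSet::from([2, 3, 4]);

    // Does nothing: the Union iterator is built and immediately dropped.
    // With #[must_use] this now warns.
    a.union(&b);

    // Intended usage consumes the iterator.
    let both: HashSet<i32> = a.union(&b).copied().collect();
    assert_eq!(both.len(), 4);
}
```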
Parent issue: #89692
r? `@joshtriplett`
Stabilize `is_symlink()` for `Metadata` and `Path`
I'm not fully sure about the `since` version; correct me if I'm wrong.
Needs update after stabilization: `cargo-test-support` (`crates/cargo-test-support/src/paths.rs`, L202, at commit 8063672238)
Linked issue: #85748
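A brief usage sketch of the two stabilized methods (the path name is just an example):

```rust
use std::fs;
use std::path::Path;

fn main() -> std::io::Result<()> {
    let path = Path::new("some_link");

    // Path::is_symlink checks the path itself, without following the link.
    println!("Path::is_symlink: {}", path.is_symlink());

    // Metadata::is_symlink is meaningful on symlink_metadata, which also
    // does not follow links.
    let meta = fs::symlink_metadata(path)?;
    println!("Metadata::is_symlink: {}", meta.is_symlink());

    Ok(())
}
```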
Automatically convert paths to verbatim for filesystem operations that support it
This allows using longer paths without the user needing to `canonicalize` or manually prefix paths. If the path is already verbatim then this has no effect.
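As a rough sketch of what "verbatim" means here (a simplified, hypothetical `to_verbatim`, not the std implementation; the real conversion also has to handle UNC shares, device paths, relative paths, and so on):

```rust
use std::path::{Path, PathBuf};

// Simplified: prefix an absolute drive path with `\\?\` so Windows passes it
// through verbatim, lifting the legacy MAX_PATH limit. Already-verbatim paths
// are returned unchanged.
fn to_verbatim(path: &Path) -> PathBuf {
    let s = path.as_os_str().to_string_lossy();
    if s.starts_with(r"\\?\") {
        path.to_path_buf()
    } else {
        PathBuf::from(format!(r"\\?\{}", s))
    }
}

fn main() {
    assert_eq!(
        to_verbatim(Path::new(r"C:\very\long\path")),
        Path::new(r"\\?\C:\very\long\path")
    );
}
```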
Fixes: #32689