Remove `#[rustc_deprecated]`
This removes `#[rustc_deprecated]` and introduces diagnostics to point users in the right direction (that being `#[deprecated]`). All uses of `#[rustc_deprecated]` have been converted. CI is expected to fail initially; this requires #95958, which includes converting `stdarch`.
I plan on following up in a short while (maybe a bootstrap cycle?) to remove the diagnostics, as they're only intended to be short-term.
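For reference, the stable attribute users should migrate to looks like this (a minimal sketch; the function names are purely illustrative):

```rust
// The stable `#[deprecated]` attribute that user code should use instead of
// the removed internal `#[rustc_deprecated]`.
#[deprecated(since = "1.2.3", note = "use `new_function` instead")]
pub fn old_function() {}

pub fn new_function() {}
```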
Further elaborate the lack of guarantees from `Hasher`
I realized that I got too excited in #94598 by adding new methods, and forgot to update the documentation to really answer the core question in #94026.
This PR just has that doc update.
r? `@Amanieu`
Add more diagnostic items
This just adds a handful of diagnostic items I noticed were missing.
Would it be worth doing this for all of the remaining types? I'm willing to do it if it'd be helpful.
Create clippy lint against unexpectedly late drop for temporaries in match scrutinee expressions
A new clippy lint for issue 93883 (https://github.com/rust-lang/rust/issues/93883). It relies on a new trait in `marker` (called `SignificantDrop`) to enable the linting, which is why this PR is for the rust-lang repo and not the clippy repo.
changelog: new lint [`significant_drop_in_scrutinee`]
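To illustrate the problem the lint targets, here is a hedged sketch (not taken from the lint's test suite): a temporary with a significant `Drop` (here a `MutexGuard`) created in the match scrutinee lives until the end of the whole `match`, so locking again in an arm would deadlock.

```rust
use std::sync::Mutex;

fn main() {
    let mutex = Mutex::new(1_i32);

    // The temporary `MutexGuard` produced by `lock()` in the scrutinee is not
    // dropped until the end of the entire `match` expression.
    match *mutex.lock().unwrap() {
        1 => {
            // Calling `mutex.lock()` here would deadlock: the scrutinee's
            // guard is still alive at this point.
            println!("one");
        }
        _ => println!("something else"),
    }
}
```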
Remove hard links from `env::current_exe` security example
The security example shows that `env::current_exe` will return the path used when the program was started. This is not really surprising considering how hard links work: after `ln foo bar`, the two files are _equivalent_. It is _not_ the case that `bar` is a “link” to `foo`, nor is `foo` a link to `bar`. They are simply two names for the same underlying data.
The security vulnerability linked to seems to be different: there, an attacker would start a SUID binary from a directory under the control of the attacker. The binary would respawn itself by executing the program found at `/proc/self/exe` (which the attacker can control). This is a real problem. In my opinion, the example given here doesn’t really show the same problem; it just shows a misunderstanding of what hard links are.
I looked through the history a bit and found that the example was introduced in https://github.com/rust-lang/rust/pull/33526. That PR actually has two commits, and the first (8478d48dad) explains the race condition at the root of the linked security vulnerability. The second commit proceeds to replace the explanation with the example we have today.
This commit reverts most of the second commit from https://github.com/rust-lang/rust/pull/33526.
Add aliases for std::fs::canonicalize
The aliases are `realpath` and `GetFinalPathNameByHandle` which are explicitly mentioned in `canonicalize`'s documentation.
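As a rough sketch of the mechanism (not the actual std sources; the wrapper function is purely illustrative), a doc alias is just an attribute on the item, which makes it show up in rustdoc's search under the alias:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Hypothetical wrapper used only to show `#[doc(alias = ...)]`; in this PR
/// the attributes are added to `std::fs::canonicalize` itself.
#[doc(alias = "realpath")]
#[doc(alias = "GetFinalPathNameByHandle")]
pub fn canonical_path(path: &Path) -> io::Result<PathBuf> {
    fs::canonicalize(path)
}
```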
Link to correct `as_mut` in docs for `pointer::as_ref`
It previously linked to the unstable const-mut-cast method instead of
the `mut` counterpart for `as_ref`.
Closes #96327
Use 64-bit time on 32-bit linux-gnu
The standard library suffered the [Year 2038 problem][Y2038] in two main places on targets with 32-bit `time_t`:
- In `std::time::SystemTime`, we stored a `timespec` that has `time_t` seconds. This is now changed to directly store 64-bit seconds and nanoseconds, and on 32-bit linux-gnu we try to use `__clock_gettime64` (glibc 2.34+) to get the larger timestamp.
- In `std::fs::Metadata`, we store a `stat64`, which has a 64-bit `off_t` but still a 32-bit `time_t`, and unfortunately that is baked into the API by the (deprecated) `MetadataExt::as_raw_stat()`. However, we can use `statx` to obtain 64-bit `statx_timestamp` values and store them in addition to the `stat64`, as we already do to support creation time, and the rest of the `MetadataExt` methods can return those full values. Note that some filesystems may still be limited in their actual timestamp support, but that's not something Rust can change.
There remain a few places that need `timespec` for system call timeouts -- I leave that to future work.
[Y2038]: https://en.wikipedia.org/wiki/Year_2038_problem
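As a rough illustration of the `SystemTime` part (a hedged sketch, not the actual std internals), the idea is to store seconds as a fixed 64-bit integer rather than the platform's possibly 32-bit `time_t`:

```rust
// Sketch only: the real code has more invariants and the target-specific
// plumbing for `__clock_gettime64`.
#[allow(dead_code)]
struct Timespec64 {
    tv_sec: i64,  // 64-bit seconds survive past 2038 even when time_t is 32-bit
    tv_nsec: u32, // nanoseconds in 0..1_000_000_000
}
```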
Add a dedicated length-prefixing method to `Hasher`
This accomplishes two main goals:
- Make it clear who is responsible for prefix-freedom, including how they should do it
- Make it feasible for a `Hasher` that *doesn't* care about Hash-DoS resistance to get better performance by not hashing lengths
This does not change rustc-hash, since that's in an external crate, but that could potentially use it in future.
Fixes #94026
r? rust-lang/libs
---
The core of this change is the following two new methods on `Hasher`:
```rust
pub trait Hasher {
    /// Writes a length prefix into this hasher, as part of being prefix-free.
    ///
    /// If you're implementing [`Hash`] for a custom collection, call this before
    /// writing its contents to this `Hasher`. That way
    /// `(collection![1, 2, 3], collection![4, 5])` and
    /// `(collection![1, 2], collection![3, 4, 5])` will provide different
    /// sequences of values to the `Hasher`
    ///
    /// The `impl<T> Hash for [T]` includes a call to this method, so if you're
    /// hashing a slice (or array or vector) via its `Hash::hash` method,
    /// you should **not** call this yourself.
    ///
    /// This method is only for providing domain separation. If you want to
    /// hash a `usize` that represents part of the *data*, then it's important
    /// that you pass it to [`Hasher::write_usize`] instead of to this method.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(hasher_prefixfree_extras)]
    /// # // Stubs to make the `impl` below pass the compiler
    /// # struct MyCollection<T>(Option<T>);
    /// # impl<T> MyCollection<T> {
    /// #     fn len(&self) -> usize { todo!() }
    /// # }
    /// # impl<'a, T> IntoIterator for &'a MyCollection<T> {
    /// #     type Item = T;
    /// #     type IntoIter = std::iter::Empty<T>;
    /// #     fn into_iter(self) -> Self::IntoIter { todo!() }
    /// # }
    ///
    /// use std::hash::{Hash, Hasher};
    /// impl<T: Hash> Hash for MyCollection<T> {
    ///     fn hash<H: Hasher>(&self, state: &mut H) {
    ///         state.write_length_prefix(self.len());
    ///         for elt in self {
    ///             elt.hash(state);
    ///         }
    ///     }
    /// }
    /// ```
    ///
    /// # Note to Implementers
    ///
    /// If you've decided that your `Hasher` is willing to be susceptible to
    /// Hash-DoS attacks, then you might consider skipping hashing some or all
    /// of the `len` provided in the name of increased performance.
    #[inline]
    #[unstable(feature = "hasher_prefixfree_extras", issue = "88888888")]
    fn write_length_prefix(&mut self, len: usize) {
        self.write_usize(len);
    }

    /// Writes a single `str` into this hasher.
    ///
    /// If you're implementing [`Hash`], you generally do not need to call this,
    /// as the `impl Hash for str` does, so you can just use that.
    ///
    /// This includes the domain separator for prefix-freedom, so you should
    /// **not** call `Self::write_length_prefix` before calling this.
    ///
    /// # Note to Implementers
    ///
    /// The default implementation of this method includes a call to
    /// [`Self::write_length_prefix`], so if your implementation of `Hasher`
    /// doesn't care about prefix-freedom and you've thus overridden
    /// that method to do nothing, there's no need to override this one.
    ///
    /// This method is available to be overridden separately from the others
    /// as `str` being UTF-8 means that it never contains `0xFF` bytes, which
    /// can be used to provide prefix-freedom cheaper than hashing a length.
    ///
    /// For example, if your `Hasher` works byte-by-byte (perhaps by accumulating
    /// them into a buffer), then you can hash the bytes of the `str` followed
    /// by a single `0xFF` byte.
    ///
    /// If your `Hasher` works in chunks, you can also do this by being careful
    /// about how you pad partial chunks. If the chunks are padded with `0x00`
    /// bytes then just hashing an extra `0xFF` byte doesn't necessarily
    /// provide prefix-freedom, as `"ab"` and `"ab\u{0}"` would likely hash
    /// the same sequence of chunks. But if you pad with `0xFF` bytes instead,
    /// ensuring at least one padding byte, then it can often provide
    /// prefix-freedom cheaper than hashing the length would.
    #[inline]
    #[unstable(feature = "hasher_prefixfree_extras", issue = "88888888")]
    fn write_str(&mut self, s: &str) {
        self.write_length_prefix(s.len());
        self.write(s.as_bytes());
    }
}
```
The `Hash` implementations for slices and containers are updated to call `write_length_prefix` instead of `write_usize`.
`write_str` defaults to using `write_length_prefix` since, as was pointed out in the issue, the `write_u8(0xFF)` approach is insufficient for hashers that work in chunks, as those would hash `"a\u{0}"` and `"a"` to the same thing. But since `SipHash` works byte-wise (there's an internal buffer to accumulate bytes until a full chunk is available) it overrides `write_str` to continue to use the add-non-UTF-8-byte approach.
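To make that concrete, here is a hedged sketch (nightly-only, and deliberately not SipHash itself) of a byte-at-a-time hasher overriding `write_str` to use the trailing `0xFF` byte instead of a length prefix:

```rust
#![feature(hasher_prefixfree_extras)]
use std::hash::Hasher;

// Toy FNV-1a-style hasher, purely for illustration.
struct ByteWiseHasher {
    state: u64,
}

impl Hasher for ByteWiseHasher {
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.state = (self.state ^ u64::from(b)).wrapping_mul(0x100000001b3);
        }
    }

    fn finish(&self) -> u64 {
        self.state
    }

    // `str` is UTF-8, so it never contains 0xFF; appending one such byte gives
    // prefix-freedom more cheaply than hashing the length.
    fn write_str(&mut self, s: &str) {
        self.write(s.as_bytes());
        self.write_u8(0xFF);
    }
}

fn main() {
    let mut h = ByteWiseHasher { state: 0xcbf29ce484222325 }; // FNV offset basis
    h.write_str("ab");
    println!("{:x}", h.finish());
}
```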
---
Compatibility:
Because the default implementation of `write_length_prefix` calls `write_usize`, the changed hash implementation for slices will do the same thing the old one did on existing `Hasher`s.
Use futex-based locks and thread parker on {Free, Open, DragonFly}BSD.
This switches *BSD to our futex-based locks and thread parker.
Tracking issue: https://github.com/rust-lang/rust/issues/93740
This is a draft, because this still needs a new version of the `libc` crate to be published that includes https://github.com/rust-lang/libc/pull/2770.
r? `@Amanieu`
We might want to change the default before stabilizing (or maybe even after), but for getting in the new unstable methods, leave it as-is for now. That way it won't break cargo and such.
Implement [OsStr]::join
Implements join for `OsStr` and `OsString` slices:
```rust
let strings = [OsStr::new("hello"), OsStr::new("dear"), OsStr::new("world")];
assert_eq!("hello dear world", strings.join(OsStr::new(" ")));
```
This saves one from converting to strings and back, or from implementing it manually.
Fix typo in `offset_from` documentation
Small fix for what I think is a typo in the `offset_from` documentation.
Someone reading this may take it to mean that the distance in bytes is obtained by dividing the distance by `mem::size_of::<T>()`, when the intent is simply to define "units of T" in terms of bytes (i.e., units of T == bytes / `mem::size_of::<T>()`).
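For context, a small example of the intended meaning: `offset_from` counts in units of `T`, i.e. the byte distance divided by `mem::size_of::<T>()`.

```rust
fn main() {
    let a: [u32; 4] = [0; 4];
    let p0: *const u32 = &a[0];
    let p3: *const u32 = &a[3];
    // The pointers are 12 bytes apart, i.e. 3 units of u32
    // (12 / mem::size_of::<u32>() == 3).
    assert_eq!(unsafe { p3.offset_from(p0) }, 3);
}
```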
Update `int_roundings` methods from feedback
This updates `#![feature(int_roundings)]` (#88581) from feedback. All methods now take `NonZeroX`. The documentation makes clear that they panic in debug mode and wrap in release mode.
r? `@joshtriplett`
`@rustbot` label +T-libs +T-libs-api +S-waiting-on-review
Include nonexported macro_rules! macros in the doctest target
Fixes #88038
This PR aims to include nonexported `macro_rules!` macros in the doctest target. For more details, please see the above issue.
Avoid using `rand::thread_rng` in the stdlib benchmarks.
This is kind of an anti-pattern because it introduces extra nondeterminism for no real reason. In `thread_rng`'s case this comes both from the random seed and from the reseeding operations it does, which occasionally perform syscalls (adding further nondeterminism). The impact of this would be pretty small in most cases, but it's a good practice to avoid (particularly because avoiding it was not hard).
Several of our benchmarks already did the right thing here, so the change was pretty easy and mostly a matter of applying it more universally. That said, the stdlib benchmarks aren't particularly stable (nor is our benchmark framework particularly great), so arguably this doesn't matter that much in practice.
~~Anyway, this also bumps the `rand` dev-dependency to 0.8, since it had fallen somewhat out of date.~~ Nevermind, too much of a headache.
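As a hedged sketch of the general practice (not necessarily what each benchmark in this PR does; assumes the `rand` crate's 0.8-style `StdRng`/`SeedableRng` API), a fixed seed keeps the generated input deterministic across runs:

```rust
use rand::rngs::StdRng;
use rand::{RngCore, SeedableRng};

// Deterministic benchmark input: a fixed seed means no OS entropy and no
// reseeding syscalls, so every run generates the same data.
fn bench_data(len: usize) -> Vec<u64> {
    let mut rng = StdRng::seed_from_u64(0xDEAD_BEEF);
    (0..len).map(|_| rng.next_u64()).collect()
}
```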
Relax memory ordering used in SameMutexCheck
`SameMutexCheck` only requires atomicity for `self.addr`, but does not need ordering of other memory accesses in either the success or failure case. Using `Relaxed`, the code still correctly handles the case when two threads race to store an address.
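A hedged sketch of the pattern (not the actual std source): only the stored address needs atomicity, and a relaxed `compare_exchange` still handles two threads racing to register it.

```rust
use std::sync::atomic::{AtomicUsize, Ordering::Relaxed};

pub struct SameMutexCheck {
    // 0 means "no address registered yet".
    addr: AtomicUsize,
}

impl SameMutexCheck {
    pub const fn new() -> Self {
        Self { addr: AtomicUsize::new(0) }
    }

    /// Returns true if `mutex_addr` matches the first address ever registered.
    pub fn verify(&self, mutex_addr: usize) -> bool {
        match self.addr.compare_exchange(0, mutex_addr, Relaxed, Relaxed) {
            Ok(_) => true,                   // we won the race and registered it
            Err(prev) => prev == mutex_addr, // someone else registered it first
        }
    }
}
```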
Relax memory ordering used in `min_stack`
`min_stack` does not provide any synchronization guarantees to its callers, and only requires atomicity for `MIN` itself, so relaxed memory ordering is sufficient.
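Similarly, a hedged sketch of the caching pattern (not the real std code; `read_min_stack_from_env` is a hypothetical helper): the cached `usize` is the only shared state, so relaxed loads and stores suffice, and a race at worst recomputes the value.

```rust
use std::sync::atomic::{AtomicUsize, Ordering::Relaxed};

static MIN: AtomicUsize = AtomicUsize::new(0); // 0 means "not computed yet"

pub fn min_stack() -> usize {
    match MIN.load(Relaxed) {
        0 => {
            let amt = read_min_stack_from_env();
            MIN.store(amt, Relaxed); // racing threads may both store; that's fine
            amt
        }
        amt => amt,
    }
}

// Hypothetical helper: always returns a nonzero value so the cache gets filled.
fn read_min_stack_from_env() -> usize {
    std::env::var("RUST_MIN_STACK")
        .ok()
        .and_then(|s| s.parse().ok())
        .filter(|&n| n != 0)
        .unwrap_or(2 * 1024 * 1024)
}
```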
library/core: Fix the implementation of c_uint, c_long, c_ulong
Fixes: aa67016624 ("make memcmp return a value of c_int_width instead of i32")
Introduce c_num_definition to make the cfg_if logic easier to maintain
Add newlines for easier code reading
Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>