Constify MaybeUninit
I was playing around trying to make `[T; N]::zip()` in #79451 be `const fn`. One of the things I bumped into was `MaybeUninit::assume_init`. Is there any reason why the intrinsic `assert_inhabited<T>()`, and therefore `MaybeUninit::assume_init`, is not `const`?
---
I have tried, as best I could, to follow the instructions in [library/core/src/intrinsics.rs](https://github.com/rust-lang/rust/blob/master/library/core/src/intrinsics.rs#L11). I am not entirely sure what I am doing, but it seems to compile after some slight changes to the copied code. Is this anywhere near how it should be done?
Also, any ideas for the name of the feature gate? I guess `const_maybe_assume_init` is quite misleading since I have added some more methods. Should I add tests? If so, what should be tested?
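For context, here is a minimal sketch (not taken from the PR; the constant and value are made up) of the kind of code that constifying `assume_init` is meant to allow, modulo whatever feature gate it ends up behind:

```rust
use std::mem::MaybeUninit;

// Illustrative only: the goal is for `assume_init` (and the underlying
// `assert_inhabited` intrinsic) to be callable in a const context like this.
const ANSWER: i32 = {
    let value = MaybeUninit::new(42);
    // SAFETY: `value` was just initialized with a valid i32.
    unsafe { value.assume_init() }
};
```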
Implement better availability probing for copy_file_range
Follow-up to https://github.com/rust-lang/rust/pull/75428#discussion_r469616547
Previously, syscall detection was overly pessimistic: any attempt to copy to an immutable file (EPERM) would disable copy_file_range support for the whole process.
This change probes by calling copy_file_range on invalid file descriptors, which can never run into the immutable-file case, so syscall availability can be distinguished unambiguously.
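A rough sketch of that probing idea (assuming Linux and the `libc` crate; the function name is made up and this is not the exact std implementation):

```rust
use std::io;
use std::ptr;

/// Returns true if the running kernel implements copy_file_range.
fn copy_file_range_available() -> bool {
    // Invoke the syscall with deliberately invalid fds (-1). A kernel that
    // implements it fails with EBADF; a kernel that does not fails with
    // ENOSYS. Neither path can produce EPERM from an immutable destination
    // file, so the probe cannot be misled.
    let ret = unsafe {
        libc::syscall(
            libc::SYS_copy_file_range,
            -1i32,                           // fd_in: invalid on purpose
            ptr::null_mut::<libc::loff_t>(), // off_in
            -1i32,                           // fd_out: invalid on purpose
            ptr::null_mut::<libc::loff_t>(), // off_out
            1usize,                          // len
            0u32,                            // flags
        )
    };
    debug_assert_eq!(ret, -1);
    io::Error::last_os_error().raw_os_error() != Some(libc::ENOSYS)
}
```

The result can then be cached (for example in an atomic flag) so the probe runs at most once per process.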
Rollup of 12 pull requests
Successful merges:
- #79732 (minor stylistic clippy cleanups)
- #79750 (Fix trimming of lint docs)
- #79777 (Remove `first_merge` from liveness debug logs)
- #79795 (Privatize some of libcore unicode_internals)
- #79803 (Update xsv to prevent random CI failures)
- #79810 (Account for gaps in def path table during decoding)
- #79818 (Fixes to Rust coverage)
- #79824 (Strip prefix instead of replacing it with empty string)
- #79826 (Simplify visit_{foreign,trait}_item)
- #79844 (Move RWUTable to a separate module)
- #79861 (Update LLVM submodule)
- #79862 (Remove tab-lock and replace it with ctrl+up/down arrows to switch between search result tabs)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Privatize some of libcore unicode_internals
My understanding is that these APIs are permanently unstable, so it doesn't make sense to pollute docs & IDE completion[1] with them.
[1]: https://github.com/rust-analyzer/rust-analyzer/issues/6738
ext/ucred: Support PID in peer creds on macOS
This is a follow-up to https://github.com/rust-lang/rust/pull/75148 (RFC: https://github.com/rust-lang/rust/issues/42839).
The original PR used `getpeereid` on macOS and the BSDs, since they don't (generally) support the `SO_PEERCRED` mechanism that Linux supplies.
This PR splits the macOS/iOS implementation of `peer_cred()` from that of the BSDs, since macOS supplies the `LOCAL_PEERPID` sockopt as a source of the missing PID. It also adds a `cfg`-gated test that ensures that platforms which support PIDs in `UCred` receive the expected data.
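As an illustration of the macOS path (a sketch only, not the actual std code; the two constants are taken from macOS `<sys/un.h>` and the helper name is made up):

```rust
use std::io;
use std::mem;
use std::os::unix::io::AsRawFd;
use std::os::unix::net::UnixStream;

// Assumed values from macOS <sys/un.h>.
const SOL_LOCAL: libc::c_int = 0;
const LOCAL_PEERPID: libc::c_int = 0x002;

/// Queries the PID of the process on the other end of an AF_UNIX socket.
fn peer_pid(socket: &UnixStream) -> io::Result<libc::pid_t> {
    let mut pid: libc::pid_t = 0;
    let mut len = mem::size_of::<libc::pid_t>() as libc::socklen_t;
    let ret = unsafe {
        libc::getsockopt(
            socket.as_raw_fd(),
            SOL_LOCAL,
            LOCAL_PEERPID,
            &mut pid as *mut _ as *mut libc::c_void,
            &mut len,
        )
    };
    if ret == 0 { Ok(pid) } else { Err(io::Error::last_os_error()) }
}
```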
Use is_write_vectored to optimize the write_vectored implementation for BufWriter
When the underlying writer does not have an efficient implementation of `write_vectored`, the current implementation of `write_vectored` for `BufWriter` may still forward vectored writes directly to the writer, depending on the total length of the data. This forfeits the advantage of buffering, as the slice actually written may be small.
Provide an alternative code path for the non-vectored case, where the slices passed to `BufWriter` are coalesced in the buffer before being flushed to the underlying writer with plain `write` calls. The buffer is bypassed only if an individual slice is at least as large as the buffer itself.
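A simplified sketch of that coalescing strategy (free-standing function with made-up names, not the actual `BufWriter` internals):

```rust
use std::io::{self, IoSlice, Write};

// Small slices are appended to the internal buffer; a slice at least as large
// as the buffer's capacity bypasses it and goes straight to the inner writer.
fn write_vectored_coalescing<W: Write>(
    inner: &mut W,
    buf: &mut Vec<u8>, // the writer's internal buffer, capacity fixed
    bufs: &[IoSlice<'_>],
) -> io::Result<usize> {
    let capacity = buf.capacity();
    let mut written = 0;
    for slice in bufs {
        if slice.len() >= capacity {
            // Large slice: flush what is buffered, then write it directly.
            inner.write_all(buf)?;
            buf.clear();
            written += inner.write(slice)?;
        } else {
            // Small slice: coalesce into the buffer, flushing when full.
            if buf.len() + slice.len() > capacity {
                inner.write_all(buf)?;
                buf.clear();
            }
            buf.extend_from_slice(slice);
            written += slice.len();
        }
    }
    Ok(written)
}
```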
Remove a FIXME comment referring to #72919, as that issue has been closed with an explanation.
Before this change, the code in io::stdio misused the ReentrantMutexes by calling init() on them and then moving them. Now that ReentrantMutex requires Pin for init(), this mistake is no longer easy to make.
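A generic illustration of why taking `Pin<&mut Self>` in `init()` rules out that misuse (hypothetical type, not std's internal `ReentrantMutex` API):

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

struct AddressSensitive {
    initialized: bool,
    // Opting out of Unpin makes the Pin guarantee meaningful.
    _pin: PhantomPinned,
}

impl AddressSensitive {
    // Because `init` takes `Pin<&mut Self>`, the value must be pinned first
    // and callers promise never to move it again, so the "init, then move"
    // pattern is impossible without unsafe code.
    fn init(self: Pin<&mut Self>) {
        // SAFETY: we only mutate a plain field and never move out of `self`.
        unsafe { self.get_unchecked_mut().initialized = true; }
    }
}
```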
We also change the `TrustedLen` specialization of `SpecFromIterNested::from_iter` to use `Vec::with_capacity` instead of `Vec::new` when the iterator has a proper size hint. This avoids calls to `grow_*`, and thus `finish_grow`, in some fully inlinable cases that would otherwise regress with this change.
Fixes #78471.
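The idea behind that part of the change, sketched as a plain free function (not the actual `SpecFromIterNested` code):

```rust
// When an iterator reports an exact size (as TrustedLen iterators do),
// allocate the whole Vec up front so collecting needs no grow_*/finish_grow
// calls; otherwise fall back to starting empty and growing.
fn collect_with_hint<I: Iterator>(iter: I) -> Vec<I::Item> {
    let mut vec = match iter.size_hint() {
        (low, Some(high)) if low == high => Vec::with_capacity(high),
        _ => Vec::new(),
    };
    vec.extend(iter);
    vec
}
```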
Fix SGX CI, take 3
Broken in #79038
r? `@Mark-Simulacrum`
I actually ran `./x.py test --target x86_64-fortanix-unknown-sgx` on the commit before submitting it this time.
Fix incorrect io::Take's limit resulting from io::copy specialization
The specialization introduced in #75272 fails to update `io::Take` wrappers after performing the copy syscalls that bypass those wrappers. Flushing the buffer before the copy does update them correctly, but the bytes copied after the initial flush were not subtracted.
The fix is to subtract the bytes copied from each `Take` in the chain of wrappers, even when an error occurs during the syscall loop. To do so, the `CopyResult` enum now has to carry the number of bytes copied so far in its error case.
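A sketch of the shape described here (variant names and the helper are assumptions, not the exact std internals):

```rust
use std::io;

// The error variant carries the number of bytes already copied so the
// caller can still account for them.
enum CopyResult {
    Ended(u64),
    Error(io::Error, u64),
}

// Subtract the bytes that bypassed a `Take` wrapper via the copy syscalls,
// in both the success and the error case.
fn adjust_take<R: io::Read>(take: &mut io::Take<R>, result: &CopyResult) {
    let copied = match result {
        CopyResult::Ended(n) => *n,
        CopyResult::Error(_, n) => *n,
    };
    take.set_limit(take.limit() - copied);
}
```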