Don't use __gxx_personality_v0 in panic_unwind on emscripten target
This resolves #85821. See also the discussion here:
https://github.com/emscripten-core/emscripten/issues/17128
The consensus seems to be that rust_eh_personality is never invoked.
I patched __gxx_personality_v0 to log invocations and then ran
various panic tests and it was never called, so this analysis matches
what seems to happen in practice. This replaces the definition with
an abort, modeled on the structured exception handling implementation.
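For illustration, a minimal sketch of the idea (the function name, signature, and use of `std::process::abort` are assumptions made for this example, not the exact code in `panic_unwind`):

```rust
// Sketch only: since the personality routine is never reached on emscripten,
// it can simply abort instead of delegating to __gxx_personality_v0, mirroring
// the SEH-based implementation. Name and signature are illustrative.
#[allow(dead_code)]
unsafe extern "C" fn eh_personality_stub(
    _version: i32,
    _actions: i32,
    _exception_class: u64,
    _exception_object: *mut core::ffi::c_void,
    _context: *mut core::ffi::c_void,
) -> i32 {
    std::process::abort()
}
```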
update docs for `std::future::IntoFuture`
Ref https://github.com/rust-lang/rust/issues/67644.
This updates the docs for `IntoFuture`, providing a bit more guidance on how to use it. Thanks!
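For context, a minimal sketch of the kind of usage the docs describe (the `Multiply` type is hypothetical, made up for this example; at the time of writing the trait was still behind a feature gate):

```rust
use std::future::{IntoFuture, Ready, ready};

// Hypothetical builder type used only for illustration.
struct Multiply {
    lhs: u32,
    rhs: u32,
}

impl IntoFuture for Multiply {
    type Output = u32;
    type IntoFuture = Ready<u32>;

    fn into_future(self) -> Self::IntoFuture {
        ready(self.lhs * self.rhs)
    }
}

async fn demo() -> u32 {
    // `.await` calls `into_future()` on the value before polling it.
    Multiply { lhs: 6, rhs: 7 }.await
}
```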
Fix `delayed_good_path_bug` ice for expected diagnostics (RFC 2383)
Fixes a small ICE with the `delayed_good_path_bug` check.
---
r? ``@wesleywiser``
cc: ``@eddyb`` this might be interesting, since you've added a `FIXME` comment above the modified check which kind of discusses a case like this
closes: https://github.com/rust-lang/rust/issues/95540
cc: https://github.com/rust-lang/rust/issues/85549
std::io: Modify some ReadBuf method signatures to return `&mut Self`
This allows using `ReadBuf` in a builder-like style, making it possible to set up a `ReadBuf` and
pass it to `read_buf` in a single expression, e.g.,
```rust
// With this PR:
reader.read_buf(ReadBuf::uninit(buf).assume_init(init_len))?;
// Previously:
let mut buf = ReadBuf::uninit(buf);
buf.assume_init(init_len);
reader.read_buf(&mut buf)?;
```
r? `@sfackler`
cc https://github.com/rust-lang/rust/issues/78485, https://github.com/rust-lang/rust/issues/94741
Add the Provider api to core::any
This is an implementation of [RFC 3192](https://github.com/rust-lang/rfcs/pull/3192) ~~(which is yet to be merged, thus why this is a draft PR)~~. It adds an API for type-driven requests and provision of data from trait objects. A primary use case is for the `Error` trait, though that is not implemented in this PR. The only major difference to the RFC is that the functionality is added to the `any` module, rather than being in a sibling `provide_any` module (as discussed in the RFC thread).
~~Still todo: improve documentation on items, including adding examples.~~
cc `@yaahc`
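For a rough idea of the shape, a sketch based on the RFC (names and exact signatures here follow the RFC text and are assumptions; the API is unstable, nightly-only, and may differ from what finally landed):

```rust
// Sketch of the RFC 3192 shape (requires nightly with the feature enabled;
// exact names and signatures are assumptions based on the RFC).
use core::any::{Demand, Provider, request_ref};

struct Thing {
    name: String,
}

impl Provider for Thing {
    fn provide<'a>(&'a self, demand: &mut Demand<'a>) {
        // Offer a &str view of this value to callers that request one by type.
        demand.provide_ref::<str>(&self.name);
    }
}

fn print_name(provider: &dyn Provider) {
    // Type-driven request: the caller names the type it wants back.
    if let Some(name) = request_ref::<str>(provider) {
        println!("{name}");
    }
}
```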
The unused_macro_rules lint had a bug where it would regard all rules of
a macro as unused if one rule were malformed. This bug doesn't exist
with the unused_macros lint. To ensure it doesn't appear in the future,
we add a test for it.
Prior to this commit, if a macro had any malformed rules, all rules would
be reported as unused, regardless of whether they were used or not.
So we just turn off unused rule checking completely for macros with
malformed rules.
The very point of compile_error! is to never be reached, and one of
the use cases of the macro, currently also listed as examples in the
documentation of compile_error, is to create nicer errors for wrong
macro invocations. Thus, we should never warn about unused macro arms
that contain invocations of compile_error.
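A small example of the pattern in question (the macro is made up for illustration): the fallback arm exists only to produce a better error message and is intentionally never matched in a successful compilation, so `unused_macro_rules` should not fire on it.

```rust
macro_rules! expect_ident {
    ($i:ident) => {
        stringify!($i)
    };
    // Never used in a successful build; it only improves the error message.
    ($($other:tt)*) => {
        compile_error!("expect_ident! takes a single identifier")
    };
}

fn main() {
    // Only the first rule is ever matched when compilation succeeds.
    assert_eq!(expect_ident!(hello), "hello");
}
```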
This is a debug setting; we should only apply it when the user requests a debug
build, but currently it is inserted even in release builds.
Furthermore, it would be better to insert these settings via --pre-link-args,
because then it would be possible to override them when appropriate. Because
they are currently inserted at the end, it is necessary to patch emscripten to
remove them.
Revert "remove num_cpus dependency" in rustc and update cargo
Fixes #97549. This PR reverts #94524 and does a Cargo update to pull in rust-lang/cargo#10737.
Rust 1.61.0 has a regression in which it misidentifies the number of available CPUs in some environments, leading to enormously increased memory usage and failing builds. In between Rust 1.60 and 1.61 both rustc and cargo replaced some uses of `num_cpus` with `available_parallelism`, which eliminated support for cgroupv1, still apparently in common use. This PR switches both rustc and cargo back to using `num_cpus` in order to support environments where the available parallelism is controlled by cgroupv1. Both can use `available_parallelism` again once it handles cgroupv1 (if ever).
I have confirmed that the rustc part of this PR fixes the memory usage regression in my non-Cargo environment, and others have confirmed in #97549 that the Cargo regression was at fault for the memory usage regression in their environments.
Relax mipsel-sony-psp's linker script
Previously, the linker script forcefully kept all `.lib.stub` sections, unnecessarily bloating the binary. Now, the script is LTO and `--gc-sections` friendly.
`--nmagic` was also added to the linker, because page alignment is not required on the PSP. This further reduces binary size.
Accompanying changes for the `psp` crate are found in: https://github.com/overdrivenpotato/rust-psp/pull/118
hexagon: adapt test for upstream output changes
The output of IR formatting changed slightly in upstream rev
a0bc67e555f404d0e7ddb2e78cb891d96eaf913d
(https://reviews.llvm.org/D123096). I'm not actually sure what any of
that means, as I don't even know what hexagon is in this context, but
this change allows the test to pass on both old and new LLVMs.
r? ``@nikic``
impl Read and Write for VecDeque<u8>
Implementing `Read` and `Write` for `VecDeque<u8>` fills in the `VecDeque` API surface where `Vec<u8>` and `Cursor<Vec<u8>>` already implement `Read` and `Write`. Beyond completeness, `VecDeque` in particular is a very handy mock interface for something like a TCP echo service, but until now it lacked `Read`/`Write` support.
Since this PR is just a trait impl, I don't think there is a way to limit it behind a feature flag, so it's "insta-stable". Please correct me if I'm wrong here, not trying to rush stability.
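A minimal sketch of the intended usage (assuming the impls behave as a FIFO: writes push to the back, reads drain from the front):

```rust
use std::collections::VecDeque;
use std::io::{Read, Write};

fn main() -> std::io::Result<()> {
    // With the new impls, a VecDeque<u8> works as a simple in-memory byte pipe.
    let mut pipe: VecDeque<u8> = VecDeque::new();
    pipe.write_all(b"hello")?;

    let mut out = [0u8; 5];
    pipe.read_exact(&mut out)?;
    assert_eq!(&out, b"hello");
    Ok(())
}
```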
BTreeSet: avoid intermediate sorting when collecting sorted iterators
As [pointed out by droundy](https://users.rust-lang.org/t/question-about-btreeset-implementation/76427), an obvious optimization is to skip the first step introduced by #88448 (creation of a vector and sorting) and it's easy to do so for btree's own iterators. Also, exploit `from` in the examples.
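For illustration, the kind of case this targets (a sketch; whether a particular iterator hits the optimized path is an internal detail):

```rust
use std::collections::BTreeSet;

fn main() {
    let set = BTreeSet::from([3, 1, 4, 1, 5]);
    // `set.into_iter()` already yields elements in sorted order, so the
    // intermediate "collect into a Vec and sort" step introduced by #88448
    // can be skipped when collecting it into another BTreeSet.
    let copy: BTreeSet<i32> = set.into_iter().collect();
    assert_eq!(copy, BTreeSet::from([1, 3, 4, 5]));
}
```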
optimize `superset` method of `IntervalSet`
Given that intervals in the `IntervalSet` are sorted and strictly separated (i.e. the `end` of the previous interval is never equal to the `start` of the next interval), we can reduce the complexity of the `superset` method from O(NM log N) to O(2N), where N is the number of intervals and M is the length of each interval.
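To illustrate the idea, a standalone sketch over plain tuples (not the actual `rustc_index` `IntervalSet` API): because both interval lists are sorted and strictly separated, a single forward scan over each list suffices.

```rust
/// Sketch only: checks whether every (closed) interval in `inner` is contained
/// in some interval of `outer`, assuming both slices are sorted and strictly
/// separated. Each list is traversed at most once.
fn is_superset(outer: &[(u32, u32)], inner: &[(u32, u32)]) -> bool {
    let mut outer_iter = outer.iter();
    let mut cur = outer_iter.next();
    'next_inner: for &(start, end) in inner {
        while let Some(&(o_start, o_end)) = cur {
            if o_end >= end {
                if o_start <= start {
                    // Contained; the same outer interval may also cover the
                    // next inner interval, so don't advance yet.
                    continue 'next_inner;
                }
                // Any later outer interval starts even further right, so this
                // inner interval can never be covered.
                return false;
            }
            // This outer interval ends too early; it can't be needed again
            // because later inner intervals start even further right.
            cur = outer_iter.next();
        }
        return false; // ran out of outer intervals
    }
    true
}
```

For example, `is_superset(&[(0, 10)], &[(1, 2), (5, 9)])` is true, while `is_superset(&[(0, 10)], &[(11, 12)])` is false.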