Backstory: I somehow missed the fact that I needed to register a lint pass in order for it to run, and I spent some time confused until I figured it out. So I wanted to make it clear that a missing `register_(early|late)_pass` call is a likely cause of a lint not running.
Move `slice::check_range` to `RangeBounds`
Since this method doesn't take a slice anymore (#76662), it makes more sense to define it on `RangeBounds`.
Questions:
- Should the new method be `assert_len` or `assert_length`?
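For illustration, here is a minimal standalone sketch of what such a method does when defined in terms of `RangeBounds` (the function name, panic messages, and exact bounds handling below are illustrative, not the std implementation):

```rust
use std::ops::{Bound, Range, RangeBounds};

// Standalone sketch: resolve any `RangeBounds<usize>` against a length,
// panicking if the range is out of bounds.
fn assert_len(range: impl RangeBounds<usize>, len: usize) -> Range<usize> {
    let start = match range.start_bound() {
        Bound::Included(&s) => s,
        Bound::Excluded(&s) => s.checked_add(1).expect("range start overflows usize"),
        Bound::Unbounded => 0,
    };
    let end = match range.end_bound() {
        Bound::Included(&e) => e.checked_add(1).expect("range end overflows usize"),
        Bound::Excluded(&e) => e,
        Bound::Unbounded => len,
    };
    assert!(start <= end, "range start {} is greater than range end {}", start, end);
    assert!(end <= len, "range end {} is out of bounds for length {}", end, len);
    start..end
}

fn main() {
    assert_eq!(assert_len(1..3, 5), 1..3);
    assert_eq!(assert_len(..4, 5), 0..4);
    assert_eq!(assert_len(.., 5), 0..5);
}
```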
normalize substs while inlining
fixes #68347, or more precisely, this fixes the same ICE in rust-analyzer: veloren is pinned to a specific nightly and had an error with the current one.
I didn't look into creating an MVCE here, as that seems fairly annoying; I spent a few minutes trying anyway, without success.
r? `@eddyb` cc `@bjorn3`
Make sure arenas don't allocate bigger than HUGE_PAGE
Right now, arenas allocate based on the size of the last chunk. It is possible for a `grow` call to allocate a chunk that is not a multiple of `PAGE`, and this size is doubled for each subsequent allocation. This means the largest possible chunk is not actually bounded by `HUGE_PAGE`; it is effectively unknown.
This change fixes that, and also removes an unnecessary checked multiplication. It is still possible to allocate chunks bigger than `HUGE_PAGE`, but this will only happen when a single allocation genuinely requires it.
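A minimal sketch of the intended growth policy (the constants and helper below are illustrative, not the actual arena code): double the previous chunk size, clamp the result to `HUGE_PAGE`, and only exceed it when a single allocation needs more.

```rust
// Illustrative constants, not the real arena configuration.
const PAGE: usize = 4096;
const HUGE_PAGE: usize = 2 * 1024 * 1024;

// Pick the size of the next chunk: double the last one, but keep the
// steady-state maximum at HUGE_PAGE; only go past HUGE_PAGE when a single
// allocation genuinely needs more space.
fn next_chunk_size(last_chunk_size: usize, needed_bytes: usize) -> usize {
    let doubled = last_chunk_size.saturating_mul(2);
    doubled.clamp(PAGE, HUGE_PAGE).max(needed_bytes)
}

fn main() {
    assert_eq!(next_chunk_size(PAGE, 16), 2 * PAGE);
    assert_eq!(next_chunk_size(HUGE_PAGE, 16), HUGE_PAGE); // capped
    assert_eq!(next_chunk_size(HUGE_PAGE, 4 * HUGE_PAGE), 4 * HUGE_PAGE); // only when needed
}
```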
Make set_span take `&mut self`
This was a mistake in https://github.com/rust-lang/rust/pull/77614
It's not a _huge_ deal, because backends can always implement this with interior mutability, but it's nice to avoid that when possible. For context, the `set_source_location` method, called alongside `set_span`, also takes `&mut self`.
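As a rough illustration (the trait and type names below are made up, not the actual `rustc_codegen_ssa` API), taking `&mut self` lets a backend store the span in an ordinary field instead of wrapping it in a `RefCell`:

```rust
// Illustrative only: invented names, not the real codegen traits.
#[derive(Debug)]
struct Span(u32);

trait DebugInfoBuilder {
    // Taking `&mut self` allows plain mutation in the implementor.
    fn set_span(&mut self, span: Span);
}

struct MyBuilder {
    current_span: Option<Span>,
}

impl DebugInfoBuilder for MyBuilder {
    fn set_span(&mut self, span: Span) {
        // Direct field update; no RefCell/Cell needed.
        self.current_span = Some(span);
    }
}

fn main() {
    let mut builder = MyBuilder { current_span: None };
    builder.set_span(Span(42));
    println!("current span: {:?}", builder.current_span);
}
```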
r? `@eddyb`
For example, if you had this code:
```rust
fn foo(x: i32, y: f32) -> f32 {
    x * y
}
```
You would get this error:
```
error[E0277]: cannot multiply `f32` to `i32`
 --> src/lib.rs:2:7
  |
2 |     x * y
  |       ^ no implementation for `i32 * f32`
  |
  = help: the trait `Mul<f32>` is not implemented for `i32`
```
However, that's not usually how people describe multiplication. People usually describe multiplication the same way the division error is worded:
```
error[E0277]: cannot divide `i32` by `f32`
 --> src/lib.rs:2:7
  |
2 |     x / y
  |       ^ no implementation for `i32 / f32`
  |
  = help: the trait `Div<f32>` is not implemented for `i32`
```
So that's what this change does. It changes this:
```
error[E0277]: cannot multiply `f32` to `i32`
 --> src/lib.rs:2:7
  |
2 |     x * y
  |       ^ no implementation for `i32 * f32`
  |
  = help: the trait `Mul<f32>` is not implemented for `i32`
```
To this:
```
error[E0277]: cannot multiply `i32` by `f32`
 --> src/lib.rs:2:7
  |
2 |     x * y
  |       ^ no implementation for `i32 * f32`
  |
  = help: the trait `Mul<f32>` is not implemented for `i32`
```
Add std::thread::available_concurrency
This PR adds a counterpart to [C++'s `std::thread::hardware_concurrency`](https://en.cppreference.com/w/cpp/thread/thread/hardware_concurrency) to Rust, tracking issue https://github.com/rust-lang/rust/issues/74479.
cc/ `@rust-lang/libs`
## Motivation
Being able to know how many hardware threads a platform supports is a core part of building multi-threaded code. In C++11 this became available through the [`std::thread::hardware_concurrency`](https://en.cppreference.com/w/cpp/thread/thread/hardware_concurrency) API. Currently in Rust most of the ecosystem depends on the [`num_cpus` crate](https://docs.rs/num_cpus/1.13.0/num_cpus/) ([no. 35 in the top 500 crates](https://docs.google.com/spreadsheets/d/1wwahRMHG3buvnfHjmPQFU4Kyfq15oTwbfsuZpwHUKc4/edit#gid=1253069234)) to provide this functionality. This PR proposes an API to provide access to the number of hardware threads available on a given platform.
__edit (2020-07-24):__ The purpose of this PR is to provide a hint for how many threads to spawn to saturate the processor. There's value in introducing APIs for NUMA and Windows processor groups, but those are intentionally out of scope for this PR. See: https://github.com/rust-lang/rust/pull/74480#issuecomment-662116186.
## Naming
Discussing the naming of the API on Zulip surfaced two options:
- `std::thread::hardware_concurrency`
- `std::thread::hardware_threads`
Both options seemed acceptable, but overall people seem to gravitate the most towards `hardware_threads`. Additionally `@jonas-schievink` pointed out that the "hardware threads" terminology is well-established and is used in, among others, the [RISC-V specification](https://riscv.org/specifications/isa-spec-pdf/) (page 20):
> A component is termed a core if it contains an independent instruction fetch unit. A RISC-V-compatible core might support multiple RISC-V-compatible __hardware threads__, or harts, through multithreading.
It's also worth noting that [the original paper introducing C++'s `std::thread` submodule](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2320.html) unfortunately doesn't feature any discussion on the naming of `hardware_concurrency`, so we can't use that to help inform our decision here.
## Return type
An important consideration `@joshtriplett` brought up is that we don't want to default to `1` on platforms where the number of available threads cannot be retrieved. Instead we want to inform users that we don't know and allow them to handle that case. That is why this PR uses `Option<NonZeroUsize>` as its return type, where `None` is returned on platforms where the number of hardware threads cannot be determined.
The reasoning for `NonZeroUsize` over `usize` is that if the number of threads for a platform is known, it will always be at least 1. As evidenced by the example, the `NonZero*` family of APIs may currently not be the most ergonomic to use, but improving their ergonomics is something I think we can address separately.
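A hypothetical usage sketch under the proposed `Option<NonZeroUsize>` signature (the `available_concurrency` stub below stands in for the real platform query, which is not part of this example):

```rust
use std::num::NonZeroUsize;

// Stub standing in for the proposed `std::thread::available_concurrency`;
// the real function would query the OS and may legitimately return `None`.
fn available_concurrency() -> Option<NonZeroUsize> {
    NonZeroUsize::new(8) // pretend the platform reported 8 hardware threads
}

fn main() {
    // The caller, not the standard library, decides what to do when the
    // count is unknown; here we explicitly fall back to a single worker.
    let workers = available_concurrency().map(NonZeroUsize::get).unwrap_or(1);
    println!("spawning {} worker threads", workers);
}
```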
## Implementation
`@Mark-Simulacrum` pointed out that most of the code we wanted to expose here was already available under `libtest`. So this PR mostly moves the internal code of libtest into a public API.