Commit Graph

1611 Commits

Author SHA1 Message Date
Josh Stone
9202fbdbdb Check for exhaustion in SliceIndex for RangeInclusive 2020-10-20 17:18:08 -07:00
Josh Stone
9fd79a3904 make exhausted RangeInclusive::end_bound return Excluded(end) 2020-10-19 13:46:30 -07:00
Josh Stone
b62b352f47 Check for exhaustion in RangeInclusive::contains
When a range has finished iteration, `is_empty` returns true, so it
should also be the case that `contains` returns false.
2020-10-19 10:02:51 -07:00
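A minimal sketch (not from the PR itself) of the behavior this commit establishes: once a `RangeInclusive` has been fully iterated, `is_empty` reports true, so `contains` now reports false as well.

```
fn main() {
    let mut r = 1..=3;
    assert!(r.contains(&3));

    // Exhaust the range by iterating it to the end.
    r.by_ref().for_each(drop);

    // After exhaustion the range is empty, so contains() is false.
    assert!(r.is_empty());
    assert!(!r.contains(&3));
}
```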
bors
e42cbe8edc Auto merge of #77874 - camelid:range-docs-readability, r=scottmcm
Improve range docs

* Improve code formatting and legibility
* Various other readability improvements
2020-10-19 00:11:08 +00:00
Camelid
a885c5008c Improve range docs
* Mention that `RangeFull` is a ZST and thus a singleton
* Improve code formatting and legibility
* Various other readability improvements
2020-10-18 16:02:08 -07:00
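As context for the `RangeFull` note in the commit above, a small sketch: `RangeFull` is a unit struct, so it occupies zero bytes and `..` is its only value.

```
use std::ops::RangeFull;

fn main() {
    // RangeFull has no fields: zero-sized, with `..` as its single value.
    assert_eq!(std::mem::size_of::<RangeFull>(), 0);
    let _full: RangeFull = ..;
}
```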
bors
b1496c6e60 Auto merge of #78075 - est31:remove_redundant_static, r=jonas-schievink
Remove redundant 'static
2020-10-18 21:02:05 +00:00
bors
187b8771dc Auto merge of #76885 - dylni:move-slice-check-range-to-range-bounds, r=KodrAus
Move `slice::check_range` to `RangeBounds`

Since this method doesn't take a slice anymore (#76662), it makes more sense to define it on `RangeBounds`.

Questions:
- Should the new method be `assert_len` or `assert_length`?
2020-10-18 18:50:43 +00:00
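For illustration only, a free-function sketch of the bounds check being moved (the name `assert_range_len` and the panic messages are hypothetical; the PR's open question about `assert_len` vs `assert_length` is left untouched):

```
use std::ops::{Bound, Range, RangeBounds};

// Hypothetical stand-alone version of the check: resolve a RangeBounds<usize>
// against a length, panicking if it is out of bounds.
fn assert_range_len(range: impl RangeBounds<usize>, len: usize) -> Range<usize> {
    let start = match range.start_bound() {
        Bound::Included(&s) => s,
        Bound::Excluded(&s) => s.checked_add(1).expect("range start overflows usize"),
        Bound::Unbounded => 0,
    };
    let end = match range.end_bound() {
        Bound::Included(&e) => e.checked_add(1).expect("range end overflows usize"),
        Bound::Excluded(&e) => e,
        Bound::Unbounded => len,
    };
    assert!(start <= end, "range start is greater than range end");
    assert!(end <= len, "range end is out of bounds");
    start..end
}

fn main() {
    assert_eq!(assert_range_len(1..3, 5), 1..3);
    assert_eq!(assert_range_len(.., 5), 0..5);
}
```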
est31
a687420d17 Remove redundant 'static from library crates 2020-10-18 17:25:51 +02:00
bors
c38ddb8040 Auto merge of #74480 - yoshuawuyts:hardware_threads, r=dtolnay
Add std::thread::available_concurrency

This PR adds a counterpart to [C++'s `std::thread::hardware_concurrency`](https://en.cppreference.com/w/cpp/thread/thread/hardware_concurrency) to Rust, tracking issue https://github.com/rust-lang/rust/issues/74479.

cc/ `@rust-lang/libs`

## Motivation

Being able to know how many hardware threads a platform supports is a core part of building multi-threaded code. In C++11 this has become available through the [`std::thread::hardware_concurrency`](https://en.cppreference.com/w/cpp/thread/thread/hardware_concurrency) API. Currently in Rust most of the ecosystem depends on the [`num_cpus` crate](https://docs.rs/num_cpus/1.13.0/num_cpus/) ([no. 35 in the top 500 crates](https://docs.google.com/spreadsheets/d/1wwahRMHG3buvnfHjmPQFU4Kyfq15oTwbfsuZpwHUKc4/edit#gid=1253069234)) to provide this functionality. This PR proposes an API to provide access to the number of hardware threads available on a given platform.

__edit (2020-07-24):__ The purpose of this PR is to provide a hint for how many threads to spawn to saturate the processor. There's value in introducing APIs for NUMA and Windows processor groups, but those are intentionally out of scope for this PR. See: https://github.com/rust-lang/rust/pull/74480#issuecomment-662116186.

## Naming

Discussing the naming of the API on Zulip surfaced two options:

- `std::thread::hardware_concurrency`
- `std::thread::hardware_threads`

Both options seemed acceptable, but overall people gravitated the most towards `hardware_threads`. Additionally `@jonas-schievink` pointed out that the "hardware threads" terminology is well established and is used in, among others, the [RISC-V specification](https://riscv.org/specifications/isa-spec-pdf/) (page 20):

> A component is termed a core if it contains an independent instruction fetch unit. A RISC-V-compatible core might support multiple RISC-V-compatible __hardware threads__, or harts, through multithreading.

It's also worth noting that [the original paper introducing C++'s `std::thread` submodule](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2320.html) unfortunately doesn't feature any discussion on the naming of `hardware_concurrency`, so we can't use that to help inform our decision here.

## Return type

An important consideration `@joshtriplett` brought up is that we don't want to default to `1` on platforms where the number of available threads cannot be retrieved. Instead we want to inform users that we don't know, and let them handle that case. That is why this PR uses `Option<NonZeroUsize>` as its return type, where `None` is returned on platforms where we don't know the number of hardware threads available.

The reasoning for `NonZeroUsize` vs `usize` is that if the number of threads for a platform is known, it will always be at least 1. As the example shows, the `NonZero*` family of APIs may not currently be the most ergonomic to use, but improving their ergonomics is something I think we can address separately.
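
A rough sketch of handling such a return type from the caller's side (the helper below is a hypothetical stand-in, not the proposed API itself):

```
use std::num::NonZeroUsize;

// Hypothetical stand-in: returns None when the platform cannot report
// how many hardware threads it has.
fn available_threads() -> Option<NonZeroUsize> {
    NonZeroUsize::new(8) // pretend the platform reported 8
}

fn main() {
    // The caller decides what to do when the count is unknown,
    // instead of the library silently defaulting to 1.
    let threads = available_threads().map(NonZeroUsize::get).unwrap_or(1);
    println!("spawning {} worker threads", threads);
}
```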

## Implementation

`@Mark-Simulacrum` pointed out that most of the code we wanted to expose here was already available under `libtest`. So this PR mostly moves the internal code of libtest into a public API.
2020-10-18 02:28:21 +00:00
Yuki Okushi
a0242e73bb
Rollup merge of #77851 - exrook:split-btreemap, r=dtolnay
BTreeMap: refactor Entry out of map.rs into its own file

btree/map.rs is approaching the 3000 line mark; splitting out the entry
code buys about 500 lines of headroom.

I've created this PR because the changes I've made in #77438 will push `map.rs` over the 3000 line limit and cause tidy to complain.

I picked `Entry` to factor out because it feels less tightly coupled to the rest of `BTreeMap` than the various iterator implementations.

Related: #60302
2020-10-18 04:11:07 +09:00
bors
4cb7ef0f94 Auto merge of #77455 - asm89:faster-spawn, r=kennytm
Use posix_spawn() on unix if program is a path

Previously `Command::spawn` would fall back to the non-posix_spawn-based
implementation if the `PATH` environment variable might have been changed.
On systems with a modern (g)libc `posix_spawn()` can be significantly
faster. If the program is itself a path, the `PATH` environment variable
is not used for the lookup, and it should be safe to use the
`posix_spawnp()` method. [1]

We found this because we have a CLI application that effectively runs a
lot of subprocesses. It would sometimes noticeably hang while printing
output. Profiling showed that the process was spending the majority of
time in the kernel's `copy_page_range` function while spawning
subprocesses. During this time the process is completely blocked from
running, which explains why users were reporting the CLI app hanging.

Through this we discovered that `std::process::Command` has a fast and
slow path for process execution. The fast path is backed by
`posix_spawnp()` and the slow path by fork/exec syscalls being called
explicitly. Using fork for process creation is supposed to be fast, but
it slows down as your process uses more memory.  It's not because the
kernel copies the actual memory from the parent, but it does need to
copy the references to it (see `copy_page_range` above!).  We ended up
using the slow path, because the command spawn implementation falls
back to the slow path if it suspects the PATH environment variable was
changed.

Here is a smallish program demonstrating the slowdown before this code
change:

```
use std::process::Command;
use std::time::Instant;

fn main() {
    let mut args = std::env::args().skip(1);
    if let Some(size) = args.next() {
        // Allocate some memory
        let _xs: Vec<_> = std::iter::repeat(0)
            .take(size.parse().expect("valid number"))
            .collect();

        let mut command = Command::new("/bin/sh");
        command
            .arg("-c")
            .arg("echo hello");

        if args.next().is_some() {
            println!("Overriding PATH");
            command.env("PATH", std::env::var("PATH").expect("PATH env var"));
        }

        let now = Instant::now();
        let child = command
            .spawn()
            .expect("failed to execute process");

        println!("Spawn took: {:?}", now.elapsed());

        let output = child.wait_with_output().expect("failed to wait on process");
        println!("Output: {:?}", output);
    } else {
        eprintln!("Usage: prog [size]");
        std::process::exit(1);
    }
    ()
}
```

Running it while allocating different amounts of memory shows that the
time taken for `spawn()` can differ quite significantly. In the latter
case the `posix_spawnp()` implementation is 30x
faster:

```
$ cargo run --release 10000000
...
Spawn took: 324.275µs
hello
$ cargo run --release 10000000 changepath
...
Overriding PATH
Spawn took: 2.346809ms
hello
$ cargo run --release 100000000
...
Spawn took: 387.842µs
hello
$ cargo run --release 100000000 changepath
...
Overriding PATH
Spawn took: 13.434677ms
hello
```

[1]: 5f72f9800b/posix/execvpe.c (L81)
2020-10-17 06:16:00 +00:00
Dylan DPC
16b878fd0f
Rollup merge of #77932 - ssomers:btree_cleanup_gdb, r=Mark-Simulacrum
BTreeMap: improve gdb introspection of BTreeMap with ZST keys or values

I accidentally pushed an earlier revision in #77788: it changes the index of tuples for BTreeSet from "[{}]".format(i) to "key{}".format(i), which doesn't seem to make the slightest difference on my Linux box nor on CI. In fact, gdb doesn't make any distinction between "key{}" and "val{}" for a BTreeMap either, leading to confusing output if you test more. But easy to improve.

r? @Mark-Simulacrum
2020-10-17 03:27:18 +02:00
Dylan DPC
f40ecff964
Rollup merge of #77751 - vojtechkral:vecdeque-binary-search, r=scottmcm,dtolnay
liballoc: VecDeque: Add binary search functions

I am submitting rust-lang/rfcs#2997 as a PR as suggested by @scottmcm

I haven't yet created a tracking issue - if there's favorable feedback I'll create one and update the issue links in the unstable attribs.
2020-10-17 03:27:15 +02:00
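A brief usage sketch of the kind of API added here (assuming the slice-like `binary_search` signature; the functions were unstable when this landed):

```
use std::collections::VecDeque;

fn main() {
    // On a sorted VecDeque, binary_search behaves like the slice version:
    // Ok(index) when the element is found, Err(insertion_index) otherwise.
    let deque: VecDeque<u32> = vec![1, 3, 5, 7].into();

    assert_eq!(deque.binary_search(&5), Ok(2));
    assert_eq!(deque.binary_search(&4), Err(2));
}
```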
bors
f1b97ee7f8 Auto merge of #77997 - fusion-engineering-forks:to-string-no-shrink, r=joshtriplett
Remove shrink_to_fit from default ToString::to_string implementation.

As suggested by `@scottmcm` on Zulip. shrink_to_fit() seems like the wrong thing to do here in most use cases of to_string(). It would be interesting to see if it makes any difference in a timer run.

r? `@joshtriplett`
2020-10-16 22:53:50 +00:00
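Roughly, the default `to_string` formats the value through `Display` into a fresh `String`; this change drops the trailing `shrink_to_fit()` call. A simplified stand-alone sketch (not the exact library code):

```
use std::fmt::{self, Write};

// Simplified stand-in for the default Display-to-String conversion.
fn display_to_string<T: fmt::Display>(value: &T) -> String {
    let mut buf = String::new();
    write!(buf, "{}", value).expect("a Display implementation returned an error unexpectedly");
    // The removed step: buf.shrink_to_fit();
    // Any spare capacity in the buffer is now simply kept.
    buf
}

fn main() {
    assert_eq!(display_to_string(&42), "42");
}
```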
Yoshua Wuyts
3717646366 Add std::thread::available_concurrency 2020-10-16 23:36:15 +02:00
Yuki Okushi
b4282b37f1
Rollup merge of #77991 - Aaron1011:bump-backtrace-again, r=Mark-Simulacrum
Bump backtrace-rs

This pulls in https://github.com/rust-lang/backtrace-rs/pull/376, which
fixes Miri support for `std::backtrace::Backtrace`.
2020-10-17 05:36:50 +09:00
Yuki Okushi
050eb4d7e4
Rollup merge of #77971 - jyn514:broken-intra-doc-links, r=mark-simulacrum
Deny broken intra-doc links in linkchecker

Since rustdoc isn't warning about these links, check for them manually.

This also fixes the broken links that popped up from the lint.
2020-10-17 05:36:49 +09:00
Yuki Okushi
9abf81afa8
Rollup merge of #77900 - Thomasdezeeuw:fdatasync, r=dtolnay
Use fdatasync for File::sync_data on more OSes

Add support for the following OSes:
 * Android
 * FreeBSD: https://www.freebsd.org/cgi/man.cgi?query=fdatasync&sektion=2
 * OpenBSD: https://man.openbsd.org/OpenBSD-5.8/fsync.2
 * NetBSD: https://man.netbsd.org/fdatasync.2
 * illumos: https://illumos.org/man/3c/fdatasync
2020-10-17 05:36:45 +09:00
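From the caller's side, the affected API (a minimal usage sketch): `sync_data` only needs the file contents to reach disk, which is what `fdatasync` provides on the platforms listed above, while `sync_all` also flushes metadata.

```
use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let mut file = File::create("example.log")?;
    file.write_all(b"important data\n")?;

    // Contents only (fdatasync-style); use sync_all() when metadata such
    // as the modification time must also be durable.
    file.sync_data()?;
    Ok(())
}
```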
Yuki Okushi
3356ad7c26
Rollup merge of #77547 - RalfJung:stable-union-drop, r=matthewjasper
stabilize union with 'ManuallyDrop' fields and 'impl Drop for Union'

As [discussed by @SimonSapin and @withoutboats](https://github.com/rust-lang/rust/issues/55149#issuecomment-634692020), this PR proposes to stabilize parts of the `untagged_union` feature gate:

* It will be possible to have a union with field type `ManuallyDrop<T>` for any `T`.
* While at it I propose we also stabilize `impl Drop for Union`; to my knowledge, there are no open concerns around this feature.

In the RFC discussion, we also talked about allowing `&mut T` as another non-`Copy` non-dropping type, but that felt to me like an overly specific exception so I figured we'd wait if there is actually any use for such a special case.

Some things remain unstable and still require the `untagged_union` feature gate:
* Union with fields that do not drop, are not `Copy`, and are not `ManuallyDrop<_>`. The reason to not stabilize this is to avoid semver concerns around libraries adding `Drop` implementations later. (This is already not fully semver compatible as, to my knowledge, the borrow checker will exploit the non-dropping nature of any type, but it seems prudent to avoid further increasing the amount of trouble adding an `impl Drop` can cause.)

Due to this, quite a few tests still need the `untagged_union` feature, but I think the ones where I could remove the feature flag provide good test coverage for the stable part.

Cc @rust-lang/lang
2020-10-17 05:36:38 +09:00
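A small illustrative sketch of what the stabilized subset allows: a union whose non-`Copy` field is wrapped in `ManuallyDrop`, plus a `Drop` impl on the union itself.

```
use std::mem::ManuallyDrop;

union NameOrId {
    name: ManuallyDrop<String>,
    id: u32,
}

impl Drop for NameOrId {
    fn drop(&mut self) {
        // The union decides what, if anything, to drop. This sketch assumes
        // only `id` is ever stored, so there is nothing to clean up.
    }
}

fn main() {
    let v = NameOrId { id: 7 };
    // Reading a union field is still an unsafe operation.
    unsafe { println!("id = {}", v.id) };
}
```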
Vojtech Kral
c7a787a327 liballoc: VecDeque: Simplify binary_search_by() 2020-10-16 20:49:19 +02:00
Vojtech Kral
e0506d1e9a liballoc: VecDeque: Add tracking issue for binary search fns 2020-10-16 18:17:55 +02:00
bors
8850893d96 Auto merge of #77850 - kornelski:resizedefault, r=dtolnay
Remove deprecated unstable Vec::resize_default

It's [been deprecated](https://github.com/rust-lang/rust/pull/57656) for 15 releases.
2020-10-16 12:11:32 +00:00
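For callers of the removed method, the stable replacements (a quick sketch):

```
fn main() {
    let mut v = vec![1, 2, 3];

    // resize_with + Default::default covers the old resize_default use case...
    v.resize_with(5, Default::default);
    assert_eq!(v, [1, 2, 3, 0, 0]);

    // ...and resize works when a plain value to clone is enough.
    v.resize(7, 0);
    assert_eq!(v, [1, 2, 3, 0, 0, 0, 0]);
}
```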
Ralf Jung
defcd7ff47 stop relying on feature(untagged_unions) in stdlib 2020-10-16 11:33:35 +02:00
Mara Bos
0b062887db Remove shrink_to_fit from default ToString::to_string implementation.
Co-authored-by: scottmcm <scottmcm@users.noreply.github.com>
2020-10-16 11:15:12 +02:00
Mara Bos
0f0257be10 Take some of sys/vxworks/process/* from sys/unix instead. 2020-10-16 06:22:05 +02:00
Mara Bos
408db0da85 Take sys/vxworks/{os,path,pipe} from sys/unix instead. 2020-10-16 06:22:00 +02:00
Mara Bos
71bb1dc2a0 Take sys/vxworks/{fd,fs,io} from sys/unix instead. 2020-10-16 06:19:00 +02:00
Mara Bos
3f196dc137 Take sys/vxworks/cmath from sys/unix instead. 2020-10-16 06:19:00 +02:00
Mara Bos
ba483c51df Take sys/vxworks/args from sys/unix instead. 2020-10-16 06:19:00 +02:00
Mara Bos
08bcaac091 Take sys/vxworks/memchar from sys/unix instead. 2020-10-16 06:19:00 +02:00
Mara Bos
dce405ae3d Take sys/vxworks/net from sys/unix instead. 2020-10-16 06:19:00 +02:00
Mara Bos
a489c33beb Take sys/vxworks/ext/* from sys/unix instead. 2020-10-16 06:19:00 +02:00
Mara Bos
c909ff9577 Add weak macro to vxworks. 2020-10-16 06:19:00 +02:00
Mara Bos
66c9b04e94 Take sys/vxworks/alloc from sys/unix instead. 2020-10-16 06:19:00 +02:00
Mara Bos
678d078950 Take sys/vxworks/thread_local_key from sys/unix instead. 2020-10-16 06:19:00 +02:00
Mara Bos
4853a6e78e Take sys/vxworks/stdio from sys/unix instead. 2020-10-16 06:19:00 +02:00
Mara Bos
5d526f6eee Take sys/vxworks/thread from sys/unix instead. 2020-10-16 06:19:00 +02:00
Mara Bos
c8628f43bf Take sys/vxworks/stack_overflow from sys/unix instead. 2020-10-16 06:18:59 +02:00
Mara Bos
d1947628b5 Take sys/vxworks/time from sys/unix instead. 2020-10-16 06:18:59 +02:00
Mara Bos
f875c8be5d Take sys/vxworks/rwlock from sys/unix instead. 2020-10-16 06:18:59 +02:00
Mara Bos
f3f30c7132 Take sys/vxworks/condvar from sys/unix instead. 2020-10-16 06:18:59 +02:00
Mara Bos
b8dcd2fbce Take sys/vxworks/mutex from sys/unix instead. 2020-10-16 06:18:59 +02:00
Joshua Nelson
65835d1059 Deny broken intra-doc links in linkchecker
Since rustdoc isn't warning about these links, check for them manually.
2020-10-15 20:22:16 -04:00
Dylan DPC
e688b4d51c
Rollup merge of #77980 - Manishearth:needs-drop-intra, r=jyn514
Fix intra doc link for needs_drop

It currently links to itself. Oops.

r? @jyn514
2020-10-16 02:10:25 +02:00
Dylan DPC
b64b5fac40
Rollup merge of #77935 - ssomers:btree_cleanup_1, r=Mark-Simulacrum
BTreeMap: make PartialCmp/PartialEq explicit and tested

Follow-up on a topic raised in #77612

r? @Mark-Simulacrum
2020-10-16 02:10:24 +02:00
Dylan DPC
9b8c0eb107
Rollup merge of #77657 - fusion-engineering-forks:cleanup-cloudabi-sync, r=dtolnay
Cleanup cloudabi mutexes and condvars

This gets rid of lots of unnecessary unsafety.

All the AtomicU32s were wrapped in UnsafeCell or UnsafeCell<MaybeUninit>, and raw pointers were used to get to the AtomicU32 inside. This change cleans that up by using AtomicU32 directly.

Also replaces an UnsafeCell<u32> with a safer Cell<u32>.

@rustbot modify labels: +C-cleanup
2020-10-16 02:10:17 +02:00
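A generic illustration of the pattern this cleanup moves to (not the cloudabi code itself): atomics already provide interior mutability, so they can be stored and used directly through shared references, with no `UnsafeCell` wrapper or raw-pointer plumbing.

```
use std::sync::atomic::{AtomicU32, Ordering};

struct SpinFlag {
    state: AtomicU32, // stored directly, no UnsafeCell needed
}

impl SpinFlag {
    const fn new() -> Self {
        SpinFlag { state: AtomicU32::new(0) }
    }

    // &self is enough: atomic operations mutate through a shared reference.
    fn try_acquire(&self) -> bool {
        self.state
            .compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
    }

    fn release(&self) {
        self.state.store(0, Ordering::Release);
    }
}

fn main() {
    let flag = SpinFlag::new();
    assert!(flag.try_acquire());
    assert!(!flag.try_acquire());
    flag.release();
}
```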
Dylan DPC
b183ef2068
Rollup merge of #77648 - fusion-engineering-forks:static-mutex, r=dtolnay
Static mutex is static

StaticMutex is only ever used as a static (as the name already suggests), so it doesn't have to be generic over a lifetime, but can simply assume 'static.

This 'static lifetime guarantees the object is never moved, so this is no longer a manually checked requirement for unsafe calls to lock().

@rustbot modify labels: +T-libs +A-concurrency +C-cleanup
2020-10-16 02:10:15 +02:00
Dylan DPC
085399f481
Rollup merge of #77646 - fusion-engineering-forks:use-static-mutex, r=dtolnay
For backtrace, use StaticMutex instead of a raw sys Mutex.

The code used the very unsafe `sys::mutex::Mutex` directly, and built its own unlock-on-drop wrapper around it. The StaticMutex wrapper already provides that and is easier to use safely.

@rustbot modify labels: +T-libs +C-cleanup
2020-10-16 02:10:13 +02:00
Dylan DPC
dcf972a2be
Rollup merge of #77619 - fusion-engineering-forks:wasm-parker, r=dtolnay
Use futex-based thread-parker for Wasm32.

This uses the existing `sys_common/thread_parker/futex.rs` futex-based thread parker (that was already used for Linux) for wasm32 as well (if the wasm32 atomics target feature is enabled, which is not the case by default).

Wasm32 provides the basic futex operations as instructions: https://webassembly.github.io/threads/syntax/instructions.html

These are now exposed from `sys::futex::{futex_wait, futex_wake}`, just like on Linux. So, `thread_parker/futex.rs` stays completely unmodified.
2020-10-16 02:10:11 +02:00
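A sketch of the futex-based parking scheme being reused here, with the platform futex calls stubbed out (the real ones live in `sys::futex` and block/wake on the atomic's address; the constants and stub signatures below are illustrative, not the actual std internals):

```
use std::sync::atomic::{AtomicI32, Ordering};

const PARKED: i32 = -1;
const EMPTY: i32 = 0;
const NOTIFIED: i32 = 1;

// Stubs standing in for the platform futex operations.
fn futex_wait(_futex: &AtomicI32, _expected: i32) { /* blocks until woken */ }
fn futex_wake(_futex: &AtomicI32) { /* wakes one waiter */ }

struct Parker {
    state: AtomicI32,
}

impl Parker {
    fn park(&self) {
        // NOTIFIED -> EMPTY (consume a pending token), or EMPTY -> PARKED.
        if self.state.fetch_sub(1, Ordering::Acquire) == NOTIFIED {
            return;
        }
        loop {
            // Sleep while the state is still PARKED; wake-ups may be spurious.
            futex_wait(&self.state, PARKED);
            if self
                .state
                .compare_exchange(NOTIFIED, EMPTY, Ordering::Acquire, Ordering::Acquire)
                .is_ok()
            {
                return;
            }
        }
    }

    fn unpark(&self) {
        if self.state.swap(NOTIFIED, Ordering::Release) == PARKED {
            futex_wake(&self.state);
        }
    }
}

fn main() {
    let p = Parker { state: AtomicI32::new(EMPTY) };
    p.unpark(); // leave a token
    p.park();   // consumes it and returns immediately
}
```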
Dylan DPC
5acb7f198f
Rollup merge of #76084 - Lucretiel:split-buffered, r=dtolnay
Refactor io/buffered.rs into submodules

This pull request splits `BufWriter`, `BufReader`, `LineWriter`, and `LineWriterShim` (along with their associated tests) into separate submodules. It contains no functional changes. This change is being made in anticipation of adding another type of buffered writer which can be switched between line- and block-buffering mode.

Part of a series of pull requests resolving #60673.
2020-10-16 02:10:04 +02:00