% The Rust Tasks and Communication Guide

# Introduction

Rust provides safe concurrency through a combination of lightweight,
memory-isolated tasks and message passing. This guide will describe the
concurrency model in Rust, how it relates to the Rust type system, and
introduce the fundamental library abstractions for constructing concurrent
programs.

Rust tasks are not the same as traditional threads: rather, they are
considered _green threads_, lightweight units of execution that the Rust
runtime schedules cooperatively onto a small number of operating system
threads. On a multi-core system Rust tasks will be scheduled in parallel by
default. Because tasks are significantly cheaper to create than traditional
threads, Rust can create hundreds of thousands of concurrent tasks on a
typical 32-bit system. In general, all Rust code executes inside a task,
including the `main` function.

In order to make efficient use of memory, Rust tasks have dynamically sized
stacks. A task begins its life with a small amount of stack space (currently
in the low thousands of bytes, depending on platform), and acquires more
stack as needed. Unlike in languages such as C, a Rust task cannot
accidentally write to memory beyond the end of the stack, causing crashes or
worse.

Tasks provide failure isolation and recovery. When a fatal error occurs in
Rust code as a result of an explicit call to `fail!()`, an assertion failure,
or another invalid operation, the runtime system destroys the entire task.
Unlike in languages such as Java and C++, there is no way to `catch` an
exception. Instead, tasks may monitor each other for failure.

Tasks use Rust's type system to provide strong memory safety guarantees. In
particular, the type system guarantees that tasks cannot share mutable state
with each other. Tasks communicate with each other by transferring _owned_
data through the global _exchange heap_.

## A note about the libraries

While Rust's type system provides the building blocks needed for safe and
efficient tasks, all of the task functionality itself is implemented in the
standard and sync libraries, which are still under development and do not
always present a consistent or complete interface.

For your reference, these are the standard modules involved in Rust
concurrency as of this writing:

* [`std::task`] - All code relating to tasks and task scheduling,
* [`std::comm`] - The message passing interface,
* [`sync::DuplexStream`] - An extension of `pipes::stream` that allows both sending and receiving,
* [`sync::SyncSender`] - An extension of `pipes::stream` that provides synchronous message sending,
* [`sync::SyncReceiver`] - An extension of `pipes::stream` that acknowledges each message received,
* [`sync::rendezvous`] - Creates a stream whose channel, upon sending a message, blocks until the
  message is received,
* [`sync::Arc`] - The Arc (atomically reference counted) type, for safely sharing immutable data,
* [`sync::RWArc`] - A dual-mode Arc protected by a reader-writer lock,
* [`sync::MutexArc`] - An Arc with mutable data protected by a blocking mutex,
* [`sync::Semaphore`] - A counting, blocking, bounded-waiting semaphore,
* [`sync::Mutex`] - A blocking, bounded-waiting, mutual exclusion lock with an associated
  FIFO condition variable,
* [`sync::RWLock`] - A blocking, no-starvation, reader-writer lock with an associated condvar,
* [`sync::Barrier`] - A barrier enables multiple tasks to synchronize the beginning
  of some computation,
* [`sync::TaskPool`] - A task pool abstraction,
* [`sync::Future`] - A type encapsulating the result of a computation which may not be complete,
* [`sync::one`] - A "once initialization" primitive,
* [`sync::mutex`] - A proper mutex implementation regardless of the "flavor of task" which is
  acquiring the lock.

[`std::task`]: std/task/index.html
[`std::comm`]: std/comm/index.html
[`sync::DuplexStream`]: sync/struct.DuplexStream.html
[`sync::SyncSender`]: sync/struct.SyncSender.html
[`sync::SyncReceiver`]: sync/struct.SyncReceiver.html
[`sync::rendezvous`]: sync/fn.rendezvous.html
[`sync::Arc`]: sync/struct.Arc.html
[`sync::RWArc`]: sync/struct.RWArc.html
[`sync::MutexArc`]: sync/struct.MutexArc.html
[`sync::Semaphore`]: sync/struct.Semaphore.html
[`sync::Mutex`]: sync/struct.Mutex.html
[`sync::RWLock`]: sync/struct.RWLock.html
[`sync::Barrier`]: sync/struct.Barrier.html
[`sync::TaskPool`]: sync/struct.TaskPool.html
[`sync::Future`]: sync/struct.Future.html
[`sync::one`]: sync/one/index.html
[`sync::mutex`]: sync/mutex/index.html

# Basics

The programming interface for creating and managing tasks lives in the
`task` module of the `std` library, and is thus available to all Rust code
by default. At its simplest, creating a task is a matter of calling the
`spawn` function with a closure argument. `spawn` executes the closure in
the new task.

~~~~
# use std::task::spawn;

// Print something profound in a different task using a named function
fn print_message() { println!("I am running in a different task!"); }
spawn(print_message);

// Print something more profound in a different task using a lambda expression
spawn(proc() println!("I am also running in a different task!") );
~~~~
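
The `proc()` closures and `std::task::spawn` shown above were later removed
from the language; purely for comparison, a rough modern-`std` analogue (a
sketch assuming today's `std::thread`, not the API described in this guide)
looks like this:

```rust
use std::thread;

// Sketch: spawn a unit of work, analogous to `spawn` above.
// `thread::spawn` returns a JoinHandle; joining retrieves the closure's result.
fn run_in_thread() -> String {
    let handle = thread::spawn(|| {
        String::from("I am running in a different thread!")
    });
    handle.join().unwrap()
}

fn main() {
    println!("{}", run_in_thread());
}
```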

In Rust, there is nothing special about creating tasks: a task is not a
concept that appears in the language semantics. Instead, Rust's type system
provides all the tools necessary to implement safe concurrency: particularly,
_owned types_. The language leaves the implementation details to the standard
library.

The `spawn` function has a very simple type signature: `fn spawn(f: proc())`.
Because it accepts only owned closures, and owned closures contain only owned
data, `spawn` can safely move the entire closure and all its associated state
into an entirely different task for execution. Like any closure, the function
passed to `spawn` may capture an environment that it carries across tasks.

~~~
# use std::task::spawn;
# fn generate_task_number() -> int { 0 }
// Generate some state locally
let child_task_number = generate_task_number();

spawn(proc() {
    // Capture it in the remote task
    println!("I am child number {}", child_task_number);
});
~~~

## Communication

Now that we have spawned a new task, it would be nice if we could
communicate with it. Recall that Rust does not have shared mutable
state, so one task may not manipulate variables owned by another task.
Instead we use *pipes*.

A pipe is simply a pair of endpoints: one for sending messages and another for
receiving messages. Pipes are low-level communication building-blocks and so
come in a variety of forms, each one appropriate for a different use case. In
what follows, we cover the most commonly used varieties.

The simplest way to create a pipe is to use the `channel` function to create
a `(Sender, Receiver)` pair. In Rust parlance, a *sender* is a sending
endpoint of a pipe, and a *receiver* is the receiving endpoint. Consider the
following example of calculating two results concurrently:

~~~~
# use std::task::spawn;

let (tx, rx): (Sender<int>, Receiver<int>) = channel();

spawn(proc() {
    let result = some_expensive_computation();
    tx.send(result);
});

some_other_expensive_computation();
let result = rx.recv();
# fn some_expensive_computation() -> int { 42 }
# fn some_other_expensive_computation() {}
~~~~
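
For comparison only, the same pattern in today's standard library uses
`std::sync::mpsc` channels and `std::thread` (a sketch of the modern
replacement, not the `std::comm` interface this guide documents):

```rust
use std::sync::mpsc;
use std::thread;

// Sketch: compute one result in a child thread while the parent does its
// own work, then receive the child's result over the channel.
fn two_results() -> (i32, i32) {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        tx.send(21 * 2).unwrap(); // stands in for some_expensive_computation()
    });
    let local = 10 - 3; // stands in for some_other_expensive_computation()
    (local, rx.recv().unwrap())
}

fn main() {
    let (local, remote) = two_results();
    println!("{} {}", local, remote);
}
```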

Let's examine this example in detail. First, the `let` statement creates a
channel for sending and receiving integers (the left-hand side of the `let`,
`(tx, rx)`, is an example of a *destructuring let*: the pattern separates
a tuple into its component parts).

~~~~
let (tx, rx): (Sender<int>, Receiver<int>) = channel();
~~~~

The child task will use the sender to send data to the parent task,
which will wait to receive the data on the receiver. The next statement
spawns the child task.

~~~~
# use std::task::spawn;
# fn some_expensive_computation() -> int { 42 }
# let (tx, rx) = channel();
spawn(proc() {
    let result = some_expensive_computation();
    tx.send(result);
});
~~~~

Notice that the creation of the task closure transfers `tx` to the child
task implicitly: the closure captures `tx` in its environment. Both `Sender`
and `Receiver` are sendable types and may be captured into tasks or otherwise
transferred between them. In the example, the child task runs an expensive
computation, then sends the result over the captured channel.

Finally, the parent continues with some other expensive computation, then
waits for the child's result to arrive on the receiver:

~~~~
# fn some_other_expensive_computation() {}
# let (tx, rx) = channel::<int>();
# tx.send(0);
some_other_expensive_computation();
let result = rx.recv();
~~~~

The `Sender` and `Receiver` pair created by `channel` enables efficient
communication between a single sender and a single receiver, but multiple
senders cannot use a single `Sender` value, and multiple receivers cannot use a
single `Receiver` value. What if our example needed to compute multiple
results across a number of tasks? The following program is ill-typed:

~~~ {.ignore}
# fn some_expensive_computation() -> int { 42 }
let (tx, rx) = channel();

spawn(proc() {
    tx.send(some_expensive_computation());
});

// ERROR! The previous spawn statement already owns the sender,
// so the compiler will not allow it to be captured again
spawn(proc() {
    tx.send(some_expensive_computation());
});
~~~

Instead we can clone the `tx`, which allows for multiple senders.

~~~
let (tx, rx) = channel();

for init_val in range(0u, 3) {
    // Create a new channel handle to distribute to the child task
    let child_tx = tx.clone();
    spawn(proc() {
        child_tx.send(some_expensive_computation(init_val));
    });
}

let result = rx.recv() + rx.recv() + rx.recv();
# fn some_expensive_computation(_i: uint) -> int { 42 }
~~~
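
The clone-per-child pattern survives unchanged in modern Rust; for comparison,
a sketch using `mpsc::Sender::clone` (an assumption about today's `std`, not
this guide's API):

```rust
use std::sync::mpsc;
use std::thread;

// Sketch: three child threads share one channel by each receiving a clone
// of the sender; the single receiver collects all three results.
fn sum_of_three() -> u32 {
    let (tx, rx) = mpsc::channel();
    for init_val in 0..3u32 {
        let child_tx = tx.clone();
        thread::spawn(move || {
            child_tx.send(init_val * 10).unwrap();
        });
    }
    rx.recv().unwrap() + rx.recv().unwrap() + rx.recv().unwrap()
}

fn main() {
    println!("{}", sum_of_three());
}
```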

Cloning a `Sender` produces a new handle to the same channel, allowing multiple
tasks to send data to a single receiver. It upgrades the channel internally in
order to allow this functionality, which means that channels that are not
cloned can avoid the overhead required to handle multiple senders. But this
fact has no bearing on the channel's usage: the upgrade is transparent.

Note that the above cloning example is somewhat contrived since
you could also simply use three `Sender`/`Receiver` pairs, but it serves to
illustrate the point. For reference, written with multiple streams, it
might look like the example below.

~~~
# use std::task::spawn;
# use std::slice;

// Create a vector of receivers, one for each child task
let rxs = slice::from_fn(3, |init_val| {
    let (tx, rx) = channel();
    spawn(proc() {
        tx.send(some_expensive_computation(init_val));
    });
    rx
});

// Wait on each receiver, accumulating the results
let result = rxs.iter().fold(0, |accum, rx| accum + rx.recv() );
# fn some_expensive_computation(_i: uint) -> int { 42 }
~~~

## Backgrounding computations: Futures

With `sync::Future`, Rust has a mechanism for requesting a computation and
getting the result later. The basic example below illustrates this.

~~~
extern crate sync;

# fn main() {
# fn make_a_sandwich() {};
fn fib(n: u64) -> u64 {
    // lengthy computation returning a u64
    12586269025
}

let mut delayed_fib = sync::Future::spawn(proc() fib(50));
make_a_sandwich();
println!("fib(50) = {:?}", delayed_fib.get())
# }
~~~

The call to `Future::spawn` immediately returns a `future` object, regardless
of how long it takes to run `fib(50)`. You can then make yourself a sandwich
while the computation of `fib` is running. The result of the execution of the
method is obtained by calling `get` on the future. This call will block until
the value is available (*i.e.* the computation is complete). Note that the
future needs to be mutable so that it can save the result for the next time
`get` is called.
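
Today's standard library has no `sync::Future`, but a `JoinHandle` plays much
the same role; purely as a comparison sketch (assuming modern `std::thread`,
not this guide's API):

```rust
use std::thread;

// Sketch: a JoinHandle behaves like a simple future -- spawn returns
// immediately, and join() blocks until the result is available.
fn delayed_fib() -> u64 {
    let handle = thread::spawn(|| 12586269025u64); // stands in for fib(50)
    // ... make a sandwich here ...
    handle.join().unwrap()
}

fn main() {
    println!("fib(50) = {}", delayed_fib());
}
```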

Here is another example showing how futures allow you to background
computations. The workload will be distributed on the available cores.

~~~
# extern crate sync;
# use std::slice;
fn partial_sum(start: uint) -> f64 {
    let mut local_sum = 0f64;
    for num in range(start*100000, (start+1)*100000) {
        local_sum += (num as f64 + 1.0).powf(&-2.0);
    }
    local_sum
}

fn main() {
    let mut futures = slice::from_fn(1000, |ind| sync::Future::spawn( proc() { partial_sum(ind) }));

    let mut final_res = 0f64;
    for ft in futures.mut_iter() {
        final_res += ft.get();
    }
    println!("π^2/6 is not far from: {}", final_res);
}
~~~

## Sharing immutable data without copy: Arc

To share immutable data between tasks, a first approach would be to only use
pipes as we have seen previously. A copy of the data to share would then be
made for each task. In some cases, this would add up to a significant amount
of wasted memory and would require copying the same data more than necessary.

To tackle this issue, one can use an Atomically Reference Counted wrapper
(`Arc`) as implemented in the `sync` library of Rust. With an Arc, the data
will no longer be copied for each task. The Arc acts as a reference to the
shared data and only this reference is shared and cloned.

Here is a small example showing how to use Arcs. We wish to run concurrently
several computations on a single large vector of floats. Each task needs the
full vector to perform its duty.

~~~
extern crate rand;
extern crate sync;

use std::slice;
use sync::Arc;

fn pnorm(nums: &~[f64], p: uint) -> f64 {
    nums.iter().fold(0.0, |a,b| a+(*b).powf(&(p as f64)) ).powf(&(1.0 / (p as f64)))
}

fn main() {
    let numbers = slice::from_fn(1000000, |_| rand::random::<f64>());
    let numbers_arc = Arc::new(numbers);

    for num in range(1u, 10) {
        let (tx, rx) = channel();
        tx.send(numbers_arc.clone());

        spawn(proc() {
            let local_arc : Arc<~[f64]> = rx.recv();
            let task_numbers = &*local_arc;
            println!("{}-norm = {}", num, pnorm(task_numbers, num));
        });
    }
}
~~~
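
`Arc` survives in today's standard library as `std::sync::Arc`, and the shape
of the pattern is the same; a comparison sketch under that assumption (modern
`std`, not the `sync` crate described here):

```rust
use std::sync::Arc;
use std::thread;

// Sketch: share one read-only vector across several threads without copying
// the data -- each clone copies only the Arc handle.
fn sum_in_threads() -> f64 {
    let numbers = Arc::new(vec![1.0f64, 2.0, 3.0]);
    let mut handles = Vec::new();
    for _ in 0..3 {
        let local = numbers.clone(); // clones the handle, not the vector
        handles.push(thread::spawn(move || local.iter().sum::<f64>()));
    }
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    println!("{}", sum_in_threads());
}
```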

The function `pnorm` performs a simple computation on the vector (it computes
the sum of its items raised to the power given as argument, and then takes
the inverse power of this value). The Arc on the vector is created by the line

~~~
# extern crate sync;
# extern crate rand;
# use sync::Arc;
# use std::slice;
# fn main() {
# let numbers = slice::from_fn(1000000, |_| rand::random::<f64>());
let numbers_arc = Arc::new(numbers);
# }
~~~

and a clone of it is sent to each task

~~~
# extern crate sync;
# extern crate rand;
# use sync::Arc;
# use std::slice;
# fn main() {
# let numbers = slice::from_fn(1000000, |_| rand::random::<f64>());
# let numbers_arc = Arc::new(numbers);
# let (tx, rx) = channel();
tx.send(numbers_arc.clone());
# }
~~~

copying only the wrapper and not its contents.

Each task recovers the underlying data by

~~~
# extern crate sync;
# extern crate rand;
# use sync::Arc;
# use std::slice;
# fn main() {
# let numbers = slice::from_fn(1000000, |_| rand::random::<f64>());
# let numbers_arc = Arc::new(numbers);
# let (tx, rx) = channel();
# tx.send(numbers_arc.clone());
# let local_arc : Arc<~[f64]> = rx.recv();
let task_numbers = &*local_arc;
# }
~~~

and can use it as if it were local.

The `sync` library also implements Arcs around mutable data that are not
covered here.

# Handling task failure

Rust has a built-in mechanism for raising exceptions. The `fail!()` macro
(which can also be written with an error string as an argument: `fail!(~reason)`)
and the `assert!` construct (which effectively calls `fail!()` if a boolean
expression is false) are both ways to raise exceptions. When a task raises an
exception, the task unwinds its stack---running destructors and freeing memory
along the way---and then exits. Unlike exceptions in C++, exceptions in Rust
are unrecoverable within a single task: once a task fails, there is no way to
"catch" the exception.

While it isn't possible for a task to recover from failure, tasks may notify
each other of failure. The simplest way of handling task failure is with the
`try` function, which is similar to `spawn`, but immediately blocks waiting
for the child task to finish. `try` returns a value of type `Result<T, ()>`.
`Result` is an `enum` type with two variants: `Ok` and `Err`. In this case,
because the type arguments to `Result` are `int` and `()`, callers can
pattern-match on a result to check whether it's an `Ok` result with an `int`
field (representing a successful result) or an `Err` result (representing
termination with an error).

~~~{.ignore .linked-failure}
# use std::task;
# fn some_condition() -> bool { false }
# fn calculate_result() -> int { 0 }
let result: Result<int, ()> = task::try(proc() {
    if some_condition() {
        calculate_result()
    } else {
        fail!("oops!");
    }
});
assert!(result.is_err());
~~~

Unlike `spawn`, the function spawned using `try` may return a value, which
`try` will dutifully propagate back to the caller in a [`Result`] enum. If
the child task terminates successfully, `try` will return an `Ok` result; if
the child task fails, `try` will return an `Err` result.

[`Result`]: std/result/index.html
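
The `try` function itself was later removed, but modern `std::thread` keeps
the same shape: `join()` on a handle returns `Err` if the child panicked. A
comparison sketch under that assumption (not the `task::try` API above):

```rust
use std::thread;

// Sketch: join() returns Ok(value) if the child ran to completion, or
// Err(..) if the child panicked -- the analogue of `try` above.
fn try_compute(should_fail: bool) -> Result<i32, ()> {
    thread::spawn(move || {
        if should_fail {
            panic!("oops!");
        }
        42
    })
    .join()
    .map_err(|_| ())
}

fn main() {
    println!("{:?}", try_compute(false));
    println!("{:?}", try_compute(true));
}
```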

> ***Note:*** A failed task does not currently produce a useful error
> value (`try` always returns `Err(())`). In the
> future, it may be possible for tasks to intercept the value passed to
> `fail!()`.

TODO: Need discussion of `future_result` in order to make failure
modes useful.

But not all failures are created equal. In some cases you might need to
abort the entire program (perhaps you're writing an assert which, if
it trips, indicates an unrecoverable logic error); in other cases you
might want to contain the failure at a certain boundary (perhaps a
small piece of input from the outside world, which you happen to be
processing in parallel, is malformed and its processing task can't
proceed).

## Creating a task with a bi-directional communication path

A very common thing to do is to spawn a child task where the parent
and child both need to exchange messages with each other. The
function `sync::duplex` supports this pattern. We'll
look briefly at how to use it.

To see how `duplex` works, we will create a child task
that repeatedly receives a `uint` message, converts it to a string, and sends
the string in response. The child terminates when it receives `0`.
Here is the function that implements the child task:

~~~
extern crate sync;
# fn main() {
fn stringifier(channel: &sync::DuplexStream<~str, uint>) {
    let mut value: uint;
    loop {
        value = channel.recv();
        channel.send(value.to_str());
        if value == 0 { break; }
    }
}
# }
~~~

The implementation of `DuplexStream` supports both sending and
receiving. The `stringifier` function takes a `DuplexStream` that can
send strings (the first type parameter) and receive `uint` messages
(the second type parameter). The body itself simply loops, reading
from the channel and then sending its response back. The actual
response itself is simply the stringified version of the received value,
`value.to_str()`.

Here is the code for the parent task:

~~~
extern crate sync;
# use std::task::spawn;
# use sync::DuplexStream;
# fn stringifier(channel: &sync::DuplexStream<~str, uint>) {
# let mut value: uint;
# loop {
# value = channel.recv();
# channel.send(value.to_str());
# if value == 0u { break; }
# }
# }
# fn main() {

let (from_child, to_child) = sync::duplex();

spawn(proc() {
    stringifier(&to_child);
});

from_child.send(22);
assert!(from_child.recv() == ~"22");

from_child.send(23);
from_child.send(0);

assert!(from_child.recv() == ~"23");
assert!(from_child.recv() == ~"0");

# }
~~~

The parent task first calls `duplex` to create a pair of bidirectional
endpoints. It then uses `task::spawn` to create the child task, which captures
one end of the communication channel. As a result, both parent and child can
send and receive data to and from the other.