Specifically, `F` is the closure that we pass to execute in the new thread. It
has two restrictions. First, it must be a `FnOnce` from `()` to `T`; using
`FnOnce` allows the closure to take ownership of any data it mentions from the
parent thread. The other restriction is that `F` must be `Send`: we aren't
allowed to transfer this ownership to another thread unless the type thinks
that's okay.
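As a quick illustration, here's a sketch (using the same `Thread::spawn` API as
the rest of this chapter, with a made-up `message` string) of a closure taking
ownership of data from the parent thread:

```
use std::thread::Thread;

fn main() {
    let message = "hello from the parent thread".to_string();

    // `move` forces the closure to take ownership of `message`.
    // Because `String` is `Send`, transferring that ownership into
    // the new thread is allowed.
    Thread::spawn(move || {
        println!("{}", message);
    });

    // The new thread is detached here, so it may or may not get to
    // run before `main` exits.
}
```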
Many languages have the ability to execute threads, but it's wildly unsafe.
There are entire books about how to prevent errors that occur from shared
mutable state. Rust helps out with its type system here as well, by preventing
data races at compile time. Let's talk about how you actually share things
between threads.
## Safe Shared Mutable State
Due to Rust's type system, we have a concept that sounds like a lie: "safe
shared mutable state." Many programmers agree that shared mutable state is
very, very bad.
Someone once said this:
> Shared mutable state is the root of all evil. Most languages attempt to deal
> with this problem through the 'mutable' part, but Rust deals with it by
> solving the 'shared' part.
The same [ownership system](ownership.html) that helps prevent using pointers
incorrectly also helps rule out data races, one of the worst kinds of
concurrency bugs.
As an example, here is a Rust program that would have a data race in many
languages. It will not compile:
```ignore
use std::thread::Thread;
use std::old_io::timer;
use std::time::Duration;

fn main() {
    let mut data = vec![1u32, 2, 3];

    for i in 0 .. 2 {
        Thread::spawn(move || {
            data[i] += 1;
        });
    }

    timer::sleep(Duration::milliseconds(50));
}
```
This gives us an error:
```text
12:17 error: capture of moved value: `data`
        data[i] += 1;
        ^~~~
```
In this case, we know that our code _should_ be safe, but Rust isn't sure. And
it's actually not safe: if we had a reference to `data` in each thread, and the
thread takes ownership of the reference, we have three owners! That's bad. We
can fix this by using the `Arc<T>` type, which is an atomic reference counted
pointer. The 'atomic' part means that it's safe to share across threads.
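As a tiny sketch of `Arc<T>` on its own (the vector contents here are
arbitrary): cloning an `Arc` gives another handle to the same allocation, and
the count is adjusted atomically, so read-only sharing across threads just
works:

```
use std::sync::Arc;
use std::thread::Thread;

fn main() {
    let numbers = Arc::new(vec![1u32, 2, 3]);

    for _ in 0 .. 2 {
        // Each clone is a new handle to the same shared vector;
        // the reference count is bumped atomically.
        let numbers = numbers.clone();

        Thread::spawn(move || {
            // Reading shared data is fine; mutation is what needs a `Mutex`.
            println!("first element: {}", numbers[0]);
        });
    }
}
```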
`Arc<T>` assumes one more property about its contents to ensure that it is safe
to share across threads: it assumes its contents are `Sync`. But in our
case, we want to be able to mutate the value. We need a type that can ensure
only one person at a time can mutate what's inside. For that, we can use the
`Mutex<T>` type. Here's the second version of our code. It still doesn't work,
but for a different reason:
```ignore
use std::thread::Thread;
use std::old_io::timer;
use std::time::Duration;
use std::sync::Mutex;

fn main() {
    let mut data = Mutex::new(vec![1u32, 2, 3]);

    for i in 0 .. 2 {
        let data = data.lock().unwrap();

        Thread::spawn(move || {
            data[i] += 1;
        });
    }

    timer::sleep(Duration::milliseconds(50));
}
```
Here's the error:
```text
<anon>:11:9: 11:22 error: the trait `core::marker::Send` is not implemented for the type `std::sync::mutex::MutexGuard<'_, collections::vec::Vec<u32>>` [E0277]
<anon>:11 Thread::spawn(move || {
^~~~~~~~~~~~~
<anon>:11:9: 11:22 note: `std::sync::mutex::MutexGuard<'_, collections::vec::Vec<u32>>` cannot be sent between threads safely
<anon>:11 Thread::spawn(move || {
^~~~~~~~~~~~~
```
You see, [`Mutex`](std/sync/struct.Mutex.html) has a
[`lock`](std/sync/struct.Mutex.html#method.lock) method, which returns a guard,
`MutexGuard<T>`, and it's this guard that the closure is trying to move into
the new thread.
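Roughly, its signature looks like this (a sketch; see the standard library
documentation for the exact form):

```ignore
fn lock(&self) -> LockResult<MutexGuard<T>>
```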
If we [look at the code for MutexGuard](https://github.com/rust-lang/rust/blob/ca4b9674c26c1de07a2042cb68e6a062d7184cef/src/libstd/sync/mutex.rs#L172), we'll see
this:
```ignore
__marker: marker::NoSend,
```
Because our guard is `NoSend`, it's not `Send`, which means we can't actually
transfer the guard across thread boundaries. That's what gives us our error.
We can fix this by taking the lock inside each thread instead, and using
`Arc<T>` to share the `Mutex` between the threads. Here's the working version:
```
use std::sync::{Arc, Mutex};
use std::thread::Thread;
use std::old_io::timer;
use std::time::Duration;

fn main() {
    let data = Arc::new(Mutex::new(vec![1u32, 2, 3]));

    for i in (0us..2) {
        let data = data.clone();

        Thread::spawn(move || {
            let mut data = data.lock().unwrap();
            data[i] += 1;
        });
    }

    timer::sleep(Duration::milliseconds(50));
}
```
We now call `clone()` on our `Arc`, which increases the internal count. This
handle is then moved into the new thread. Let's examine the body of the
thread more closely:
```
# use std::sync::{Arc, Mutex};
# use std::thread::Thread;
# use std::old_io::timer;
# use std::time::Duration;
# fn main() {
# let data = Arc::new(Mutex::new(vec![1u32, 2, 3]));
# for i in (0us..2) {
# let data = data.clone();
Thread::spawn(move || {
    let mut data = data.lock().unwrap();
    data[i] += 1;
});
# }
# }
```
First, we call `lock()`, which acquires the mutex's lock. Because this may fail,
it returns a `Result<T, E>`, and because this is just an example, we `unwrap()`
it to get a reference to the data; real code would have more robust error
handling here. We're then free to mutate it, since we have the lock.
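For instance, here's a sketch of more careful handling, matching on the
`Result` rather than calling `unwrap()` (the printed message is our own, not
anything from the standard library):

```
use std::sync::Mutex;

fn main() {
    let data = Mutex::new(vec![1u32, 2, 3]);

    // `lock()` fails if another thread panicked while holding the lock,
    // "poisoning" the mutex. Matching lets us decide what to do instead
    // of panicking ourselves.
    match data.lock() {
        Ok(mut guard) => guard[0] += 1,
        Err(_) => println!("the mutex was poisoned by a panicking thread"),
    }
}
```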
This timer bit is awkward, however. We have to pick a reasonable amount of time
to wait, but it's entirely possible that we've picked too much, and the program
waits longer than it needs to. It's also possible that we've picked too little,
and the threads don't actually finish the computation before the program exits.
Rust's standard library provides a few more mechanisms for two threads to
synchronize with each other. Let's talk about one: channels.
## Channels
Here's a version of our code that uses channels for synchronization, rather
than waiting for a specific time:
```
use std::sync::{Arc, Mutex};
use std::thread::Thread;
use std::sync::mpsc;

fn main() {
    let data = Arc::new(Mutex::new(0u32));

    let (tx, rx) = mpsc::channel();

    for _ in (0..10) {
        let (data, tx) = (data.clone(), tx.clone());

        Thread::spawn(move || {
            let mut data = data.lock().unwrap();
            *data += 1;

            tx.send(());
        });
    }

    for _ in 0 .. 10 {
        rx.recv();
    }
}
```
We use the `mpsc::channel()` function to construct a new channel. We just `send`
a simple `()` down the channel, and then wait for ten of them to come back.
While this channel is just sending a generic signal, we can send any data that
is `Send` over the channel!
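For example, here's a sketch (using the same APIs as above, with a placeholder
`answer` value standing in for a real computation) that sends a computed `u32`
from each thread instead of `()`:

```
use std::thread::Thread;
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel();

    for _ in 0..10 {
        let tx = tx.clone();

        Thread::spawn(move || {
            // Pretend this is an expensive computation.
            let answer = 42u32;

            // `u32` is `Send`, so we can ship the value itself back.
            tx.send(answer);
        });
    }

    for _ in 0..10 {
        let answer = rx.recv().unwrap();
        println!("got {}", answer);
    }
}
```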