r? @pcwalton
* Move `SharedMutableState`, `LittleLock`, and `Exclusive` from `core::unstable` to `core::unstable::sync`
* Modernize the `SharedMutableState` interface with methods
* Rename `SharedMutableState` to `UnsafeAtomicRcBox` to match `RcBox`.
This pull request adds 4 atomic intrinsics to the compiler, in preparation for #5042.
* `atomic_load(src: &int) -> int` performs an atomic sequentially consistent load.
* `atomic_load_acq(src: &int) -> int` performs an atomic acquiring load.
* `atomic_store(dst: &mut int, val: int)` performs an atomic sequentially consistent store.
* `atomic_store_rel(dst: &mut int, val: int)` performs an atomic releasing store.
For more information about acquire/release semantics, see http://llvm.org/docs/Atomics.html
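For illustration, here is a minimal sketch of the same acquire/release pairing written against today's `std::sync::atomic` API rather than the new intrinsics (the code below is a demonstration of the concept, not code from this PR):

```rust
use std::sync::atomic::{AtomicIsize, Ordering};

static FLAG: AtomicIsize = AtomicIsize::new(0);
static DATA: AtomicIsize = AtomicIsize::new(0);

fn publisher() {
    DATA.store(42, Ordering::Relaxed);
    // Releasing store: every write made before it becomes visible to any
    // thread whose acquiring load observes the value 1.
    FLAG.store(1, Ordering::Release);
}

fn consumer() -> Option<isize> {
    // Acquiring load: if it sees 1, the earlier write to DATA is also visible.
    if FLAG.load(Ordering::Acquire) == 1 {
        Some(DATA.load(Ordering::Relaxed))
    } else {
        None
    }
}
```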
r?
This commit adds a useful utility type, named `Void`, that encapsulates the doable but annoying job of creating an uninhabited type. It also adds a function on that type, named `absurd`, which is useful for eliminating the impossible result of matching on that type. No unit tests were added because it is impossible to create an instance of the type to exercise it with.
This type is useful because, like `NonCopyable`, it can be used to create a type with special characteristics without extra bloat. For instance, instead of writing `pub struct PhantomType { priv contents: () }` for each void type one wants, one can simply write `pub struct PhantomType(Void);`. This makes such special cases much easier to write.
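As a rough sketch of the idea (the exact definitions in the commit may differ), an uninhabited type and its eliminator can be written like this:

```rust
// An enum with no variants has no values, so it can never be constructed.
pub enum Void {}

// `absurd`: since no `Void` value can exist, this function can never actually
// be called, so it may claim to return any type. It is handy for discharging
// a match arm that is provably unreachable.
pub fn absurd<T>(v: Void) -> T {
    match v {}
}

// Wrapping `Void` in a tuple struct yields a type that outside code cannot
// construct, as described above.
pub struct PhantomType(Void);
```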
The default versions (atomic_load and atomic_store) are sequentially consistent.
The atomic_load_acq intrinsic acquires as described in [1].
The atomic_store_rel intrinsic releases as described in [1].
[1]: http://llvm.org/docs/Atomics.html
There may be a more efficient implementation of `core::util::swap_ptr`. The issue mentioned using `move_val_init`, but I couldn't figure out what that did, so I just used `copy_memory` a few times instead.
I'm not the best at reading the LLVM IR generated by rustc, but the swap does appear to be optimized away as expected (when possible).
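For illustration, here is a minimal sketch of that copy-based strategy, written with today's `std::ptr` functions in place of the old `copy_memory` (the details below are an assumption, not the PR's actual code):

```rust
use std::mem::MaybeUninit;
use std::ptr;

/// Swap the values behind two raw pointers by copying through a temporary,
/// i.e. the "use `copy_memory` a few times" approach described above.
pub unsafe fn swap_ptr<T>(x: *mut T, y: *mut T) {
    let mut tmp = MaybeUninit::<T>::uninit();
    unsafe {
        // tmp <- *x
        ptr::copy_nonoverlapping(x, tmp.as_mut_ptr(), 1);
        // *x <- *y (plain `copy` tolerates overlapping pointers)
        ptr::copy(y, x, 1);
        // *y <- tmp
        ptr::copy_nonoverlapping(tmp.as_ptr(), y, 1);
    }
}
```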
Closes #6183.
The first commit changes how the compiler treats a `for` loop, and all the remaining commits just deal with the fallout.
The biggest fallout was the `IterBytes` trait, although it's really a whole lot nicer now because all of the `iter_bytes_XX` methods are just and-ed together. Sadly there was a huge amount of stuff that's `cfg(stage0)` gated, but whoever lands the next snapshot is going to have a lot of fun deleting all this code!
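To illustrate what "and-ed together" means, here is a hypothetical, simplified version of the trait (the real `IterBytes` signatures differ): each call feeds bytes to a callback and returns `true` to keep going or `false` to stop early, so a composite type's implementation is just the conjunction of its fields' calls.

```rust
trait IterBytes {
    fn iter_bytes(&self, f: &mut dyn FnMut(&[u8]) -> bool) -> bool;
}

impl IterBytes for u32 {
    fn iter_bytes(&self, f: &mut dyn FnMut(&[u8]) -> bool) -> bool {
        f(&self.to_le_bytes())
    }
}

impl IterBytes for u64 {
    fn iter_bytes(&self, f: &mut dyn FnMut(&[u8]) -> bool) -> bool {
        f(&self.to_le_bytes())
    }
}

struct Pair { a: u32, b: u64 }

impl IterBytes for Pair {
    fn iter_bytes(&self, f: &mut dyn FnMut(&[u8]) -> bool) -> bool {
        // Keep hashing only while every field's callback says "continue".
        self.a.iter_bytes(f) && self.b.iter_bytes(f)
    }
}
```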
I changed `RED_ZONE_SIZE` to `RZ_MAC_32` because of a stack canary failure.
Here is an LLVM patch for MIPS segmented stacks: http://people.cs.nctu.edu.tw/~jyyou/rust/mips-segstk.patch
Current test results:
```
failures:
rand::tests::test_rng_seeded_custom_seed2
run::tests::test_forced_destroy_actually_kills
run::tests::test_unforced_destroy_actually_kills
time::tests::run_tests
uv_ll::test::test_uv_ll_struct_size_addrinfo
uv_ll::test::test_uv_ll_struct_size_uv_timer_t
segfaults:
rt::io::option::test::test_option_writer_error
rt::local_services::test::unwind
rt::sched::test_swap_tasks_then
stackwalk::test_simple
stackwalk::test_simple_deep
```
Adds an `uninit` intrinsic.
It's just an empty function, so LLVM optimizes it down to nothing.
I changed all of the `init` intrinsic usages to `uninit` where it seemed appropriate.
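As a rough illustration of the difference (using today's library equivalents rather than the intrinsics themselves): `init` hands back zeroed memory, while `uninit` merely reserves space and writes nothing, which is why it costs nothing after optimization.

```rust
use std::mem::MaybeUninit;

fn demo() -> u64 {
    // Roughly what the `init` intrinsic provides: memory filled with zeros.
    let zeroed: u64 = unsafe { std::mem::zeroed() };

    // Roughly what the new `uninit` intrinsic provides: space that has not
    // been written yet and must be fully initialized before it is read.
    let mut slot = MaybeUninit::<u64>::uninit();
    slot.write(zeroed + 1);
    unsafe { slot.assume_init() }
}
```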