This moves the locking/waiting methods to returning an RAII struct instead of
relying on closures. Additionally, this changes the methods to all take
`&mut self` to discourage recursive locking. To block, one now calls `wait` on
the returned RAII structure instead of calling it on the lock itself (this
enforces that the lock is held while waiting).
At the same time, this improves the Mutex interface a bit by allowing
destruction of non-initialized members and by allowing construction of an empty
mutex (nothing initialized inside).
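A minimal sketch of the API shape being described (names here are illustrative, not the actual module's interface):

```rust
struct Mutex { /* native handle elided */ }

struct Guard<'a> {
    lock: &'a mut Mutex,
}

impl Mutex {
    // &mut self discourages recursive locking at compile time
    fn lock(&mut self) -> Guard<'_> {
        // acquire the underlying OS lock here
        Guard { lock: self }
    }
}

impl<'a> Guard<'a> {
    // blocking is only possible through the guard, so the lock is
    // provably held while waiting
    fn wait(&mut self) {
        // wait on the condition variable here
    }
}

impl<'a> Drop for Guard<'a> {
    fn drop(&mut self) {
        // release the underlying OS lock
    }
}
```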
This patchset fixes some parts broken on Win64.
This also adds the `--disable-pthreads` flag to llvm on mingw-w64 archs (both 32-bit and 64-bit, but not mingw) due to bad performance. See #8996 for discussion.
This is useful both for performance (otherwise logging is unbuffered) and for
correctness. Because we can't block a task being destroyed while it waits for
its logger to close, loggers are opened with a 'CloseAsynchronously'
specification. This causes libuv to defer the call to close() until the next
turn of the event loop.
If you spin in a tight loop around printing, you never yield control back to the
libuv event loop, meaning that you simply enqueue a large number of close
requests but nothing is actually closed. The queue never gets drained, meaning
that if you keep trying to create handles one will eventually fail; the runtime
then attempts to print that failure, causing mass destruction.
Caching will provide better performance as well as prevent creation of too many
handles.
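The caching idea, very roughly (a generic sketch only; `Logger`, `open_logger`, and `with_logger` are hypothetical stand-ins, not the runtime's real types):

```rust
use std::cell::RefCell;

struct Logger; // stand-in for the real uv handle type (hypothetical)

fn open_logger() -> Logger { // hypothetical open routine
    Logger
}

thread_local! {
    // one cached handle per task, reused across prints
    static LOGGER: RefCell<Option<Logger>> = RefCell::new(None);
}

fn with_logger<F: FnOnce(&mut Logger)>(f: F) {
    LOGGER.with(|slot| {
        let mut slot = slot.borrow_mut();
        // open on first use, then reuse; only one close is ever queued
        let logger = slot.get_or_insert_with(open_logger);
        f(logger);
    });
}
```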
Closes #10626
The reasons for doing this are:
* The model on which linked failure is based is inherently complex
* The implementation is also very complex, and there are few remaining who
fully understand the implementation
* There are existing race conditions in the core context switching function of
the scheduler, and possibly others.
* It's unclear whether this model of linked failure maps well to a 1:1 threading
model
Linked failure is often a desired aspect of tasks, but we would like to take a
much more conservative approach when re-implementing it, if we do so at all.
Closes #8674, closes #8318, closes #8863
Make TrieMap/TrieSet's `find_mut` check the key for external nodes.
Without this check, `find_mut` sometimes returns a reference to another key's
value when querying for a non-present key.
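A minimal sketch of the fix on a simplified node type (not the actual TrieMap internals):

```rust
enum Node<V> {
    External(usize, V), // leaf storing its full key and value
    Nothing,            // empty slot
}

fn find_mut<V>(node: &mut Node<V>, key: usize) -> Option<&mut V> {
    match node {
        // The fix: compare the stored key before returning. Without the
        // `*stored == key` guard, a lookup for an absent key that lands
        // on this leaf would hand back a reference to another key's value.
        Node::External(stored, value) if *stored == key => Some(value),
        _ => None,
    }
}
```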
This is based off of @blake2-ppc's work on #9429. That PR bitrotted and I haven't been able to contact the original author so I decided to take up the cause.
Overview
======
`Mut` encapsulates a mutable, non-nullable slot. The `Cell` type is currently used to do this, but `Cell` is much more commonly used as a workaround for the inability to move values into non-once functions. `Mut` provides a more robust API.
`Mut` duplicates the semantics of borrowed pointers with enforcement at runtime instead of compile time.
```rust
let x = Mut::new(0);
{
    // make some immutable borrows
    let p = x.borrow();
    let y = *p.get() + 10;
    // multiple immutable borrows are allowed simultaneously
    let p2 = x.borrow();
    // this would throw a runtime failure
    // let p_mut = x.borrow_mut();
}
// now we can mutably borrow
let p = x.borrow_mut();
*p.get() = 10;
```
`borrow` returns a `Ref` type and `borrow_mut` returns a `RefMut` type, both of which are simple smart pointer types with a single method, `get`, which returns a reference to the wrapped data.
This also allows `RcMut<T>` to be deleted, as it can be replaced with `Rc<Mut<T>>`.
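As a rough sketch of that replacement (using the proposal's API; whether `Rc` lets you call through to the inner `Mut` like this is an assumption):

```rust
let a = Rc::new(Mut::new(0));
let b = a.clone();                 // a second owner of the same slot
*a.borrow_mut().get() = 5;         // mutate through one handle
assert_eq!(*b.borrow().get(), 5);  // the change is visible via the other
```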
Changes
======
I've done things a little bit differently than the original proposal.
* I've added `try_borrow` and `try_borrow_mut` methods that return `Option<Ref<T>>` and `Option<RefMut<T>>` respectively instead of failing on a borrow check failure. I'm not totally sure when that'd be useful, but I don't see any reason not to include them, and @cmr requested them (see the sketch after this list).
* `ReadPtr` and `WritePtr` have been renamed to `Ref` and `RefMut` respectively, as `Ref` is to `ref foo` and `RefMut` is to `ref mut foo` as `Mut` is to `mut foo`.
* `get` on `RefMut` now takes `&self` instead of `&mut self` for consistency with `&mut`. As @alexcrichton pointed out, this violates soundness by allowing aliasing `&mut` references.
* `Cell` is being left as is. It solves a different problem than `Mut` is designed to solve.
* There are no longer methods implemented for `Mut<Option<T>>`. Since `Cell` isn't going away, there's less of a need for these, and I didn't feel like they provided a huge benefit, especially as that kind of `impl` is very uncommon in the standard library.
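A short sketch of the `try_borrow` behavior described in the first bullet above (using only the names from this list):

```rust
let x = Mut::new(0);
let p = x.borrow();                     // an immutable borrow is live
assert!(x.try_borrow().is_some());      // more shared borrows succeed
assert!(x.try_borrow_mut().is_none());  // a mutable borrow is refused
```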
Open Questions
============
* `Cell` should now be used exclusively for movement into closures. Should this be enforced by reducing its API to `new` and `take`? It seems like this use case will be completely going away once the transition to `proc` and co. finishes.
* Should there be `try_map` and `try_map_mut` methods along with `map` and `map_mut`?
I cannot tell whether the original comment was unsure about the
arithmetic calculations, or if it was unsure about the assumptions
being made about the alignment of the current allocation pointer.
The arithmetic calculation looks fine to me, though. This technique
is documented e.g. in Henry Warren's "Hacker's Delight" (section 3-1).
(I am sure one can find it elsewhere too; it's not an obscure
property.)
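For reference, the technique in question, sketched in Rust (my own illustration, not the code under discussion):

```rust
// Round `p` up to the next multiple of `align` (a power of two), per
// Hacker's Delight section 3-1: add align - 1, then clear the low bits.
fn round_up(p: usize, align: usize) -> usize {
    assert!(align != 0 && (align & (align - 1)) == 0); // power of two
    (p + align - 1) & !(align - 1)
}

fn main() {
    assert_eq!(round_up(13, 8), 16);
    assert_eq!(round_up(16, 8), 16); // already-aligned values are unchanged
    assert_eq!(round_up(1, 16), 16);
}
```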
New benchmark tests in vec.rs:
`push`, `starts_with_same_vector`, `starts_with_single_element`,
`starts_with_diff_one_element_end`, `ends_with_same_vector`,
`ends_with_single_element`, `ends_with_diff_one_element_beginning` and
`contains_last_element`
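For context, a sketch of the shape one of these benchmarks might take (the vector contents and harness details here are assumptions, not the PR's actual code):

```rust
#![feature(test)] // the bench harness requires a nightly toolchain
extern crate test;
use test::Bencher;

#[bench]
fn starts_with_single_element(b: &mut Bencher) {
    let v = vec![0u8; 100];
    // measure the prefix check against a one-element needle
    b.iter(|| v.starts_with(&[0u8]));
}
```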
This isn't very useful yet, but it does replace most functionality of `@T`. The `Mut<T>` type will make it unnecessary to have a `GcMut<T>`, so I haven't included one. Obviously it doesn't work for trait objects, but that needs to be figured out for `Rc<T>` too.