This test also had a race condition in its use of the cvar/lock, so I fixed that up
as well. The race originated from one half trying to destroy the lock while the
other half was still using it.
These functions are all unnecessary now, and they only have meaning in the M:N
context. Removing these functions uncovered a bug in the librustuv timer
bindings, but it was fairly easy to cover (and the test is already committed).
These cannot be completely removed just yet due to their usage in the WaitQueue
of extra::sync, and until the mutex in libextra is rewritten it will not be
possible to remove the deferred sends for channels.
The rmake tests should depend on the target libraries (for linking), not just
the host libraries (for running). The host file dependencies are also correct
now because HLIBRUSTC_DEFAULT doesn't actually exist.
The uv loop was being destroyed before the async handle, so closing the async
handle caused a use-after-free in the uv loop. This was fixed by moving
destruction of the queue's async handle to an earlier location and then actually
freeing it once the loop has been dropped.
Spurious wakeups are a very real problem with cvars on normal systems, and
channels will not work if they are accepted. The problem is solved with a
synchronized flag (accessed under the cvar's lock) indicating whether a signal()
actually happened or whether the wakeup was spurious.
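A minimal sketch of the flag-under-the-lock pattern, written with today's `std::sync` primitives rather than the runtime internals this commit touches; the `Signal` type and its fields are hypothetical names for illustration.

```rust
use std::sync::{Condvar, Mutex};

// Illustrative only: the boolean lives inside the mutex that the cvar
// waits on, so every wakeup can be checked for legitimacy.
struct Signal {
    signaled: Mutex<bool>,
    cvar: Condvar,
}

impl Signal {
    fn wait(&self) {
        let mut flag = self.signaled.lock().unwrap();
        // A spurious wakeup leaves the flag false, so simply wait again.
        while !*flag {
            flag = self.cvar.wait(flag).unwrap();
        }
        *flag = false;
    }

    fn signal(&self) {
        *self.signaled.lock().unwrap() = true;
        self.cvar.notify_one();
    }
}
```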
The queue has an async handle which must be destroyed before the loop is
destroyed, so use a little bit of an option dance to get around this
requirement.
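A hedged sketch of the option dance in question; `AsyncHandle` and the field names are stand-ins so the example is self-contained, not the actual librustuv types.

```rust
// Stand-in for a uv async handle, only here to make the sketch compile.
struct AsyncHandle;
impl AsyncHandle {
    fn close(self) { /* would close the underlying uv handle here */ }
}

struct Queue {
    // Held in an Option so the handle can be moved out and closed while
    // the event loop is still alive, leaving None behind.
    async_handle: Option<AsyncHandle>,
}

impl Queue {
    fn destroy(&mut self) {
        if let Some(handle) = self.async_handle.take() {
            handle.close();
        }
        // ... the loop itself can now be torn down safely ...
    }
}
```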
There was a race in the code previously where schedulers would *immediately*
shut down after spawning the main task (because the global task count would
still be 0). This fixes the logic by having the sched pool task block receiving on
a port instead of spawning a task into the pool to receive on the port.
The necessary modification was to have a "simple task" running by the time the
code is executing, but this is simple enough to implement and I foresee it being
necessary in the future anyway.
Note that this removes a number of run-pass tests which are exercising behavior
of the old runtime. This functionality no longer exists and is thoroughly tested
inside of libgreen and libnative. There isn't really the notion of "starting the
runtime" any more. The major notion now is "bootstrapping the initial task".
This measure is simply to allow programs to continue compiling as they once did.
In the future, this needs a more robust solution to choose how to start with
libgreen or libnative.
This allows creation of different sched pools with different io factories.
Namely, this will be used to test the basic I/O loop in the green crate. This
can also be used to override the global default.
This will prevent a deadlock when a task spins in a try_recv; channel
communication routines are a clear location for M:N scheduling to happen.
Use the previous commit's new scheduler pool abstraction in libgreen to write
some homing tests which force an I/O handle to be homed from one event loop to
another.
The scheduler pool now has a much more simplified interface. There is now a
clear distinction between creating the pool and interacting with it. When a pool
is created, none of its schedulers are active; activity only occurs later, once a
spawn is done.
There are four operations that you can do on a pool:
1. Create a new pool. The only argument to this function is the configuration
for the scheduler pool. Currently the only configuration parameter is the
number of threads to initially spawn.
2. Spawn a task into this pool. This takes a procedure and task configuration
options and spawns a new task into the pool of schedulers.
3. Spawn a new scheduler into the pool. This will return a handle with which to
communicate with the scheduler in order to do something like a pinned task.
4. Shut down the scheduler pool. This will consume the scheduler pool, request
all of the schedulers to shut down, and then wait on all the scheduler
threads. Currently this will block the invoking OS thread, but I plan on
making `Thread::join` not a thread-blocking call.
These operations can be used to encode all current usage of M:N schedulers, as
well as providing a simple interface through which a pool can be modified. There
is currently no way to remove a scheduler from a pool of schedulers, as there's
no way to guarantee that a scheduler has exited. This may be added in the
future, however (as necessary).
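A hedged skeleton of the surface area described above, written against today's Rust; every name and signature here is an assumption drawn from the four operations listed, not the crate's verbatim API.

```rust
/// Configuration handed to a new pool (only the thread count for now).
pub struct PoolConfig {
    pub threads: usize,
}

pub struct SchedPool { /* schedulers, handles to them, ... */ }
pub struct SchedHandle { /* channel to one scheduler, e.g. for pinned tasks */ }

impl SchedPool {
    /// 1. Create a new pool; no scheduler does any work until a spawn happens.
    pub fn new(_config: PoolConfig) -> SchedPool {
        SchedPool {}
    }

    /// 2. Spawn a task (a procedure plus its task options) into the pool.
    pub fn spawn<F: FnOnce() + Send + 'static>(&mut self, _task: F) {
        // hand the procedure to one of the pool's schedulers
    }

    /// 3. Add a scheduler, returning a handle used to communicate with it.
    pub fn spawn_sched(&mut self) -> SchedHandle {
        SchedHandle {}
    }

    /// 4. Consume the pool, asking every scheduler to shut down and then
    ///    joining the scheduler threads.
    pub fn shutdown(self) {}
}
```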
All tests except for the homing tests are now working again with the
librustuv/libgreen refactoring. The homing-related tests are currently commented
out and now placed in the rustuv::homing module.
I plan on refactoring scheduler pool spawning in order to enable more homing
tests in a future commit.
This adds a few smoke tests associated with libnative tasks (not much code to
test here anyway), and cleans up the entry points a little bit to be a little
more like libgreen.
The I/O code doesn't need much testing because that's all tested in libstd (with
the iotest! macro).
In the compiled version of local_ptr (the one with #[thread_local]), the take()
function didn't zero out the previous pointer, allowing for multiple takes (with
fewer runtime assertions being tripped).
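A hedged sketch of the fix's shape using a modern `thread_local!` slot; the names are hypothetical and this is not the original local_ptr code.

```rust
use std::cell::Cell;
use std::ptr;

struct Task; // stand-in for the runtime's task structure

thread_local! {
    static LOCAL_PTR: Cell<*mut Task> = Cell::new(ptr::null_mut());
}

unsafe fn take() -> Box<Task> {
    LOCAL_PTR.with(|slot| {
        let ptr = slot.get();
        assert!(!ptr.is_null(), "no local pointer to take");
        // Zero the slot so a second take() trips the assertion above
        // instead of handing out an alias to the same box.
        slot.set(ptr::null_mut());
        Box::from_raw(ptr)
    })
}
```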
This extracts everything related to green scheduling from libstd and introduces
a new libgreen crate. This mostly involves deleting most of std::rt and moving
it to libgreen.
Along with the movement of code, this commit rearchitects many functions in the
scheduler in order to adapt to the fact that Local::take now *only* works on a
Task, not a scheduler. This mostly just involved threading the current green
task through in a few locations, but there were one or two spots where things
got hairy.
There are a few repercussions of this commit:
* tube/rc have been removed (the runtime implementation of rc)
* There is no longer a "single threaded" spawning mode for tasks. This is now
encompassed by 1:1 scheduling + communication. Convenience methods have been
introduced that are specific to libgreen to assist in the spawning of pools of
schedulers.
This commit introduces a new crate called "native" which will be the crate that
implements the 1:1 runtime of rust. This currently entails having an
implementation of std::rt::Runtime inside of libnative as well as moving all of
the native I/O implementations to libnative.
The current snag is that the start lang item must currently be defined in
libnative in order to start running, but this will change in the future.
Cool fact about this crate: there are no extra features enabled.
Note that this commit does not include any makefile support necessary for
building libnative, that's all coming in a later commit.
Like the librustuv refactoring, this refactors std::comm to sever all ties with
the scheduler. This means that the entire `comm::imp` module can be deleted in
favor of implementations outside of libstd.
This reimplements librustuv without using the interfaces provided by the
scheduler in libstd. This solely uses the new Runtime trait in order to
interface with the local task and perform the necessary scheduling operations.
The largest snag in this refactoring is reimplementing homing. The new runtime
trait exposes no concept of "homing" a task or forcibly sending a task to a
remote scheduler (there is no concept of a scheduler). In order to reimplement
homing, the transference of tasks is now done at the librustuv level instead of
the scheduler level. This means that all I/O loops now have a concurrent queue
which receives homing messages and requests.
This allows the entire implementation of librustuv to be only dependent on the
runtime trait, severing all dependence of librustuv on the scheduler and related
green-thread functions.
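A hedged sketch of the idea, using `std::sync::mpsc` as a stand-in for the concurrent queue and omitting the async handle that actually wakes the loop; all names here are illustrative, not librustuv's.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

struct GreenTask; // stand-in for a task that must run on its home loop

enum HomingMsg {
    Wakeup(GreenTask),
}

// Each I/O loop owns the receiving end; remote schedulers push a message
// and then fire the loop's async handle (not shown) to get it drained.
struct HomingQueue {
    tx: Sender<HomingMsg>,
    rx: Receiver<HomingMsg>,
}

impl HomingQueue {
    fn new() -> HomingQueue {
        let (tx, rx) = channel();
        HomingQueue { tx, rx }
    }

    /// Called from a foreign scheduler to send a task to this loop.
    fn send_home(&self, task: GreenTask) {
        self.tx.send(HomingMsg::Wakeup(task)).unwrap();
        // would also wake the event loop here
    }

    /// Called on the loop's own thread to reschedule pending tasks.
    fn drain(&self) {
        while let Ok(HomingMsg::Wakeup(_task)) = self.rx.try_recv() {
            // hand _task back to the home scheduler
        }
    }
}
```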
This is all in preparation of the introduction of libgreen and libnative.
At the same time, I also took the liberty of removing all glob imports from
librustuv.
This commit fixes the logging function to be safely implemented, as well as
forcibly requiring a task to be present to use logging macros. This is safely
implemented by transferring ownership of the logger from the task to the local
stack frame in order to perform the print. This means that if a logger does more
logging while logging, a new logger will be initialized and then overwritten once
the initial logging function returns.
Without a scheme such as this, it is possible to unsafely alias two loggers by
logging twice (unsafely borrows from the task twice).
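A hedged sketch of the ownership transfer; the `Task` and `Logger` types and the field name are stand-ins, not the runtime's actual structures.

```rust
struct Logger; // stand-in for the task's logger
impl Logger {
    fn new() -> Logger { Logger }
    fn log(&mut self, msg: &str) { let _ = msg; /* format and print */ }
}

struct Task {
    logger: Option<Logger>,
}

fn log_message(task: &mut Task, msg: &str) {
    // Move the logger onto this stack frame, leaving None in the task.
    let mut logger = task.logger.take().unwrap_or_else(Logger::new);
    // Any logging done *while* logging sees no logger in the task and
    // lazily builds a fresh one instead of aliasing this borrow.
    logger.log(msg);
    // Putting the outer logger back overwrites whatever a nested log made.
    task.logger = Some(logger);
}
```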
Printing is an incredibly useful debugging utility, and it's not much help if
your debugging prints just trigger an obscure abort when you need them most. In
order to handle this case, forcibly fall back to a libc::write implementation of
printing whenever a local task is not available.
Note that this is *not* a 1:1 fallback. All 1:1 rust tasks will still have a
local Task that they can go through (and stdio will be created through the local
IO factory); this is only a fallback for "no context" rust code (such as code
setting up the context).
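A hedged sketch of what such a fallback looks like (assuming the `libc` crate); the real code lives inside the printing path, this only shows the shape of the idea.

```rust
// Raw, task-less fallback: write the message straight to stderr with
// write(2) so debugging prints never require a local Task.
fn fallback_print(msg: &str) {
    unsafe {
        libc::write(
            libc::STDERR_FILENO,
            msg.as_ptr() as *const libc::c_void,
            msg.len(),
        );
    }
}
```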
It is not the case that all programs will always be able to acquire an instance
of the LocalIo borrow, so this commit exposes this limitation by returning
Option<LocalIo> from LocalIo::borrow().
At the same time, a helper method LocalIo::maybe_raise() has been added in order
to encapsulate the functionality of raising on io_error when there is no local
I/O available.
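A hedged sketch of the helper's shape, with stubbed-out stand-ins for `LocalIo`, `borrow`, and the io_error condition; the real signatures may differ.

```rust
struct LocalIo; // stand-in for the borrowed local I/O factory

impl LocalIo {
    // Not every task can supply an I/O implementation, hence the Option.
    fn borrow() -> Option<LocalIo> {
        None // stub
    }

    // Run `f` with local I/O if it can be borrowed, otherwise raise on the
    // io_error condition (stubbed below) and hand the caller back None.
    fn maybe_raise<T, F>(f: F) -> Option<T>
    where
        F: FnOnce(&mut LocalIo) -> T,
    {
        match LocalIo::borrow() {
            Some(mut io) => Some(f(&mut io)),
            None => {
                raise_io_error();
                None
            }
        }
    }
}

fn raise_io_error() {
    // stand-in for raising on the io_error condition
}
```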
For now, this moves the following modules to std::sync
* UnsafeArc (also removed unwrap method)
* mpsc_queue
* spsc_queue
* atomics
* mpmc_bounded_queue
* deque
We may want to remove some of the queues, but for now this moves things out of
std::rt into std::sync
This module contains many M:N specific concepts. This will no longer be
available with libgreen, and most functions aren't really that necessary today
anyway. New testing primitives will be introduced as they become available for
1:1 and M:N.
A new io::test module is introduced with the new ip4/ip6 address helpers to
continue usage in io tests.
This trait is used to abstract the differences between 1:1 and M:N scheduling
and is the sole dispatch point for the differences between these two scheduling
modes.
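A hedged skeleton of what such a dispatch trait might look like; the method names are illustrative guesses at the kinds of operations the two modes must implement differently, not the actual `Runtime` signatures.

```rust
// Each scheduling mode (1:1 in libnative, M:N in libgreen) supplies its
// own implementation of these operations for the task it is driving.
pub trait Runtime {
    /// Cooperatively give up the CPU.
    fn yield_now(self: Box<Self>);

    /// Block the current task, handing it to `f` to be rescheduled later.
    fn deschedule(self: Box<Self>, f: Box<dyn FnOnce() + Send>);

    /// Spawn a sibling task under the same scheduling mode.
    fn spawn_sibling(self: Box<Self>, body: Box<dyn FnOnce() + Send>);
}
```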
This, and the following series of commits, is not intended to compile. Only
after the entire transition is complete are programs expected to compile.
So that `Uuid` can be used as the key in a `HashMap` or in a `HashSet`, etc.
The only question I have about this is: is endianness an issue here? If so, what's the correct way to proceed?
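A quick usage sketch assuming the external `uuid` crate (with its v4 feature enabled): once `Uuid` implements `Hash`, it can key a `HashMap` directly.

```rust
use std::collections::HashMap;
use uuid::Uuid;

fn main() {
    let mut names: HashMap<Uuid, &str> = HashMap::new();
    // Any Uuid now works as a key because Hash (and Eq) are implemented.
    names.insert(Uuid::new_v4(), "example");
    assert_eq!(names.len(), 1);
}
```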
Could prevent callers from catching the situation and lead to e.g. early
iterator terminations (cf. `Reader::read_byte`), since `None` is only to be
returned on EOF.
This commit adds a `--test` flag to rustdoc to expose the ability to test code examples in doc strings. This works by using sundown's `lang` attribute to figure out how a code block should be tested. The format for this is:
```
1. ```rust
2. ```rust,ignore
3. ```rust,notest
4. ```rust,should_fail
```
Where `rust` means that rustdoc will attempt to test it, `ignore` means that it will compile the test but not execute it, `notest` means that rustdoc completely ignores it, and `should_fail` means that the test should fail. This commit also leverages `extra::test` for the testing harness in order to allow parallelization and whatnot.
I have fixed all existing code examples in libstd and libextra, but I have not made a pass through the crates in order to change code blocks to testable code blocks.
It may also be a questionable decision to require opting-in to a testable code block.
Finally, I kept our sugar in the doc suite to omit lines starting with `#` in documentation but still process them during tests.
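A hypothetical doc comment illustrating this markup, including the `#`-prefixed hidden lines mentioned above; the attribute semantics are as described in this commit and may differ from later rustdoc versions.

````rust
/// Adds one to its argument.
///
/// ```rust
/// # fn add_one(x: i32) -> i32 { x + 1 }
/// assert_eq!(add_one(1), 2);
/// ```
///
/// ```rust,notest
/// rustdoc skips this block entirely, so it need not even be valid Rust
/// ```
pub fn add_one(x: i32) -> i32 {
    x + 1
}
````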
Closes #2925