This just moves the responsibility for joining with scheduler threads
off to a worker thread. This will be needed when we allow tasks to be
scheduled on the main thread.
This makes the kernel join every scheduler thread before exiting, ensuring
that all threads have completely terminated before the process exits. On
my machine, exiting without these joins was causing regular valgrind errors
on 32-bit targets.
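A minimal sketch of the shutdown join, using std::thread and invented names
(the runtime's real types differ): a worker thread, rather than the main
thread, joins every scheduler thread so their stacks are fully reclaimed
before the process exits.

    #include <functional>
    #include <thread>
    #include <vector>

    // Hypothetical sketch: a worker thread joins every scheduler thread,
    // guaranteeing complete termination before the process exits. All
    // names here are illustrative, not the runtime's real API.
    void join_schedulers(std::vector<std::thread> scheds,
                         std::function<void()> on_all_joined) {
        std::thread worker([scheds = std::move(scheds),
                            on_all_joined]() mutable {
            for (std::thread &t : scheds)
                t.join();         // blocks until the thread is fully gone
            on_all_joined();      // e.g. let the kernel finish exiting
        });
        worker.detach();
    }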
This is in preparation for giving schedulers their own life cycle separate
from the kernel.
Tasks must be deleted before their scheduler thread, so we can't let the
scheduler exit before all of its tasks have been cleaned up. In this scheme,
the scheduler unregisters a task with the kernel when it is reaped, then
drops its ref on the task (there may still be other refs). When the task's
ref count hits zero, the task requests to be unregistered from the
scheduler, which is responsible for deleting the task.
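A sketch of that ordering, with simplified stand-ins for the real types
(synchronization omitted; the real ref count would be atomic):

    // Simplified stand-ins for rust_kernel / rust_scheduler / rust_task;
    // the real interfaces differ.
    struct rust_task;

    struct rust_kernel {
        void unregister_task(rust_task *t) { /* drop kernel table entry */ }
    };

    struct rust_scheduler {
        void unregister_task(rust_task *t); // only the scheduler deletes
    };

    struct rust_task {
        int ref_count = 1;
        rust_scheduler *sched;   // set at spawn

        void ref()   { ++ref_count; }
        void deref() {
            if (--ref_count == 0)
                // Last ref gone: ask the owning scheduler to unregister
                // and delete us, so no task outlives its scheduler.
                sched->unregister_task(this);
        }
    };

    void rust_scheduler::unregister_task(rust_task *t) { delete t; }

    // Reaping path: unregister with the kernel first, then drop the
    // scheduler's ref (other refs may keep the task alive for a while).
    void reap(rust_kernel *k, rust_task *t) {
        k->unregister_task(t);
        t->deref();
    }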
Instead of having the kernel tell the scheduler to exit, let the scheduler
decide when to exit. For now it will exit when all of its tasks are
unregistered.
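Sketched as a loop (again with invented stand-ins), the scheduler simply
falls out of its run loop once its last task is unregistered, rather than
waiting for an exit command from the kernel:

    // Illustrative stand-ins; the real scheduler does far more per pass.
    static int num_registered_tasks = 3;
    static void run_pending_tasks() { /* schedule whatever is runnable */ }
    static void reap_dead_tasks()   { --num_registered_tasks; }

    void scheduler_loop() {
        while (num_registered_tasks > 0) {
            run_pending_tasks();
            reap_dead_tasks();   // may unregister tasks
        }
        // Falls off the end: the scheduler thread exits on its own.
    }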
Instead of joining on the scheduler threads, keep a count of active
schedulers. When there are no more schedulers, raise a signal for the main
thread to continue.
This will be required once schedulers can be added and removed from the
running kernel.
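A sketch of the counting scheme using std:: primitives (the runtime's own
locks and signals would differ): each exiting scheduler decrements the
count, and the last one signals the waiting main thread.

    #include <condition_variable>
    #include <mutex>

    static std::mutex lock;
    static std::condition_variable all_done;
    static int live_schedulers = 0;

    void scheduler_started() {
        std::lock_guard<std::mutex> g(lock);
        ++live_schedulers;
    }

    void scheduler_exited() {
        std::lock_guard<std::mutex> g(lock);
        if (--live_schedulers == 0)
            all_done.notify_one();   // wake the main thread
    }

    // The main thread waits on the count instead of join()ing each
    // thread, which keeps working as schedulers come and go at runtime.
    void wait_for_schedulers() {
        std::unique_lock<std::mutex> g(lock);
        all_done.wait(g, [] { return live_schedulers == 0; });
    }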
At the moment there's not really any reason to be raising this signal,
since the schedulers wake up periodically anyway, but once we remove
the timer this will be how the schedulers know to exit.
When the kernel fails, kill all tasks and wait for the schedulers to stop
instead of just exiting. I'm sure there are tons of lurking issues here,
but this is enough to fail without leaking (at least in the absence of
cleanups).
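A rough sketch of the failure path (all names invented; wait_for_schedulers
is as in the counting sketch above):

    #include <vector>

    struct rust_task {
        void kill() { /* mark the task killed; its scheduler reaps it */ }
    };

    void wait_for_schedulers();  // blocks until the scheduler count hits 0

    // On kernel failure, kill every task and let the schedulers wind
    // down through the normal reap/unregister path instead of exiting
    // with live tasks still allocated.
    void kernel_fail(std::vector<rust_task *> &all_tasks) {
        for (rust_task *t : all_tasks)
            t->kill();
        wait_for_schedulers();
    }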
This is the new way to refer to tasks in rust-land. Currently all they
do is serve as a key to look up the old rust_task structure. Ideally
they won't be ref counted, but baby steps.
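A sketch of the idea, with invented names: rust-land holds an opaque id,
and the runtime maps it back to the underlying rust_task.

    #include <cstdint>
    #include <map>

    struct rust_task;                      // the old structure, unchanged
    typedef std::uintptr_t rust_task_id;   // what rust-land actually holds

    static std::map<rust_task_id, rust_task *> task_table;

    // Look an id back up; returns null if the task is already gone.
    rust_task *get_task_by_id(rust_task_id id) {
        std::map<rust_task_id, rust_task *>::iterator it =
            task_table.find(id);
        return it == task_table.end() ? nullptr : it->second;
    }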
Previously we were locking the spawning task's scheduler, but I couldn't
see that this was protecting anything. The newborn_task list in the new
task's scheduler, though, was unprotected from concurrent access, so now
we lock the new task's scheduler.
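In sketch form (simplified types, invented names), the point is to take
the lock of the scheduler that owns the newborn_task list, i.e. the new
task's scheduler, not the spawner's:

    #include <mutex>
    #include <vector>

    struct rust_task;

    struct rust_scheduler {
        std::mutex lock;                         // simplified stand-in
        std::vector<rust_task *> newborn_tasks;  // racy without the lock
    };

    void register_newborn(rust_scheduler *new_tasks_sched, rust_task *t) {
        // Lock the scheduler whose list we mutate, not the spawner's.
        std::lock_guard<std::mutex> g(new_tasks_sched->lock);
        new_tasks_sched->newborn_tasks.push_back(t);
    }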