rust/src/libnative/bookkeeping.rs

// Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! 1:1 Task bookkeeping
//!
//! This module keeps track of the number of running 1:1 tasks so that entry
//! points into libnative know when it's possible to exit the program (once
//! all tasks have exited).
//!
//! The green counterpart for this is bookkeeping on sched pools.
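//!
//! A rough sketch of the intended flow (the surrounding runtime plumbing is
//! assumed here, not part of this module):
//!
//! ```rust,ignore
//! bookkeeping::increment();             // a new 1:1 task is spawned
//! // ... the task runs and calls bookkeeping::decrement() as it exits ...
//! bookkeeping::wait_for_other_tasks();  // the entry point blocks until the
//!                                       // count of other tasks reaches zero
//! ```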

use std::sync::atomics;
use std::unstable::mutex::{Mutex, MUTEX_INIT};

static mut TASK_COUNT: atomics::AtomicUint = atomics::INIT_ATOMIC_UINT;
static mut TASK_LOCK: Mutex = MUTEX_INIT;
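
/// Increment the count of live 1:1 tasks; called when a native task is
/// spawned.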
pub fn increment() {
    let _ = unsafe { TASK_COUNT.fetch_add(1, atomics::SeqCst) };
}
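
/// Decrement the count of live 1:1 tasks. The last task to exit takes the
/// lock before signaling, so a concurrent `wait_for_other_tasks` cannot miss
/// the wakeup between observing a nonzero count and calling `wait`.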
pub fn decrement() {
    unsafe {
        if TASK_COUNT.fetch_sub(1, atomics::SeqCst) == 1 {
            TASK_LOCK.lock();
            TASK_LOCK.signal();
            TASK_LOCK.unlock();
        }
    }
}

/// Waits for all other native tasks in the system to exit. This is only used
/// by the entry points of native programs.
pub fn wait_for_other_tasks() {
    unsafe {
        TASK_LOCK.lock();
        while TASK_COUNT.load(atomics::SeqCst) > 0 {
            TASK_LOCK.wait();
        }
        TASK_LOCK.unlock();
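        // All tasks have exited and the program is about to return from its
        // entry point, so the lock is no longer needed and can be torn down.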
        TASK_LOCK.destroy();
    }
}