rust/src/libstd/io/test.rs

// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

/*! Various utility functions useful for writing I/O tests */

#![macro_escape]

use libc;
use os;
use prelude::*;
use std::io::net::ip::*;
use sync::atomics::{AtomicUint, INIT_ATOMIC_UINT, Relaxed};
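
// The `iotest!` macro below wraps a single test body so that it is run twice:
// once directly in the test task (the generated `green()` test) and once on a
// freshly spawned native OS thread (the generated `native()` test). Any
// trailing attributes such as `#[ignore]` are applied to both generated tests.
// A sketch of an invocation (the test name and body here are purely
// illustrative, not part of this file):
//
//     iotest!(fn smoke_test() {
//         let addr = next_test_ip4();
//         // ... exercise some I/O API against `addr` ...
//     })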
macro_rules! iotest (
    { fn $name:ident() $b:block $(#[$a:meta])* } => (
        mod $name {
            #![allow(unused_imports)]

            use super::super::*;
            use super::*;
            use io;
            use prelude::*;
            use io::*;
            use io::fs::*;
            use io::test::*;
            use io::net::tcp::*;
            use io::net::ip::*;
            use io::net::udp::*;
            #[cfg(unix)]
            use io::net::unix::*;
            use io::timer::*;
            use io::process::*;
            use rt::running_on_valgrind;
            use str;

            fn f() $b

            $(#[$a])* #[test] fn green() { f() }
            $(#[$a])* #[test] fn native() {
                use native;
                let (tx, rx) = channel();
                native::task::spawn(proc() { tx.send(f()) });
                rx.recv();
            }
        }
    )
)

/// Get a port number, starting at 9600, for use in tests
pub fn next_test_port() -> u16 {
    static mut next_offset: AtomicUint = INIT_ATOMIC_UINT;
    unsafe {
        base_port() + next_offset.fetch_add(1, Relaxed) as u16
    }
}

/// Get a temporary path which could be the location of a unix socket
pub fn next_test_unix() -> Path {
    static mut COUNT: AtomicUint = INIT_ATOMIC_UINT;
    // base port and pid are an attempt to be unique between multiple
    // test-runners of different configurations running on one
    // buildbot, the count is to be unique within this executable.
    let string = format!("rust-test-unix-path-{}-{}-{}",
                         base_port(),
                         unsafe {libc::getpid()},
                         unsafe {COUNT.fetch_add(1, Relaxed)});
    if cfg!(unix) {
        os::tmpdir().join(string)
    } else {
        Path::new(format!("{}{}", r"\\.\pipe\", string))
    }
}

/// Get a unique IPv4 localhost:port pair starting at 9600
pub fn next_test_ip4() -> SocketAddr {
    SocketAddr { ip: Ipv4Addr(127, 0, 0, 1), port: next_test_port() }
}

/// Get a unique IPv6 localhost:port pair starting at 9600
pub fn next_test_ip6() -> SocketAddr {
    SocketAddr { ip: Ipv6Addr(0, 0, 0, 0, 0, 0, 0, 1), port: next_test_port() }
}
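
// A sketch of how the helpers above are typically combined (the listener types
// and binding calls mentioned here are illustrative assumptions, not defined
// in this file):
//
//     let tcp_addr = next_test_ip4();   // unique 127.0.0.1:<port> per call
//     let pipe_path = next_test_unix(); // unique socket/pipe path per call
//     // bind a TCP listener to `tcp_addr`, or a unix/named-pipe listener to
//     // `pipe_path`, before spawning the client half of the test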

/*
XXX: Welcome to MegaHack City.

The bots run multiple builds at the same time, and these builds
all want to use ports. This function figures out which workspace
it is running in and assigns a port range based on it.
*/
fn base_port() -> u16 {
    let base = 9600u16;
    let range = 1000u16;

    let bases = [
        ("32-opt", base + range * 1),
("32-nopt", base + range * 2),
("64-opt", base + range * 3),
("64-nopt", base + range * 4),
("64-opt-vg", base + range * 5),
("all-opt", base + range * 6),
("snap3", base + range * 7),
("dist", base + range * 8)
];
// FIXME (#9639): This needs to handle non-utf8 paths
let path = os::getcwd();
let path_s = path.as_str().unwrap();
let mut final_base = base;
for &(dir, base) in bases.iter() {
if path_s.contains(dir) {
final_base = base;
break;
}
}
return final_base;
}
/// Raises the file descriptor limit when running tests if necessary
pub fn raise_fd_limit() {
unsafe { darwin_fd_limit::raise_fd_limit() }
}

#[cfg(target_os="macos")]
#[allow(non_camel_case_types)]
mod darwin_fd_limit {
    /*!
     * darwin_fd_limit exists to work around an issue where launchctl on Mac OS X defaults the
     * rlimit maxfiles to 256/unlimited. The default soft limit of 256 ends up being far too low
     * for our multithreaded scheduler testing, depending on the number of cores available.
     *
     * This fixes issue #7772.
     */
    use libc;
    type rlim_t = libc::uint64_t;
    struct rlimit {
        rlim_cur: rlim_t,
        rlim_max: rlim_t
    }
    extern {
        // name probably doesn't need to be mut, but the C function doesn't specify const
        fn sysctl(name: *mut libc::c_int, namelen: libc::c_uint,
                  oldp: *mut libc::c_void, oldlenp: *mut libc::size_t,
                  newp: *mut libc::c_void, newlen: libc::size_t) -> libc::c_int;
        fn getrlimit(resource: libc::c_int, rlp: *mut rlimit) -> libc::c_int;
        fn setrlimit(resource: libc::c_int, rlp: *const rlimit) -> libc::c_int;
    }
    static CTL_KERN: libc::c_int = 1;
    static KERN_MAXFILESPERPROC: libc::c_int = 29;
    static RLIMIT_NOFILE: libc::c_int = 8;

    pub unsafe fn raise_fd_limit() {
        // The strategy here is to fetch the current resource limits, read the kern.maxfilesperproc
        // sysctl value, and bump the soft resource limit for maxfiles up to the sysctl value.
        use ptr::mut_null;
        use mem::size_of_val;
        use os::last_os_error;

        // Fetch the kern.maxfilesperproc value
        let mut mib: [libc::c_int, ..2] = [CTL_KERN, KERN_MAXFILESPERPROC];
        let mut maxfiles: libc::c_int = 0;
        let mut size: libc::size_t = size_of_val(&maxfiles) as libc::size_t;
        if sysctl(&mut mib[0], 2, &mut maxfiles as *mut libc::c_int as *mut libc::c_void, &mut size,
                  mut_null(), 0) != 0 {
            let err = last_os_error();
fail!("raise_fd_limit: error calling sysctl: {}", err);
}
// Fetch the current resource limits
let mut rlim = rlimit{rlim_cur: 0, rlim_max: 0};
        if getrlimit(RLIMIT_NOFILE, &mut rlim) != 0 {
            let err = last_os_error();
fail!("raise_fd_limit: error calling getrlimit: {}", err);
}
// Bump the soft limit to the smaller of kern.maxfilesperproc and the hard limit
rlim.rlim_cur = ::cmp::min(maxfiles as rlim_t, rlim.rlim_max);
// Set our newly-increased resource limit
        if setrlimit(RLIMIT_NOFILE, &rlim) != 0 {
            let err = last_os_error();
fail!("raise_fd_limit: error calling setrlimit: {}", err);
}
}
}
#[cfg(not(target_os="macos"))]
mod darwin_fd_limit {
pub unsafe fn raise_fd_limit() {}
}