// rust/src/librustc/util/common.rs

// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![allow(non_camel_case_types)]
use std::cell::RefCell;
use std::collections::HashMap;
use std::fmt::Show;
use std::hash::{Hash, Hasher};
use std::time::Duration;
use syntax::ast;
use syntax::visit;
use syntax::visit::Visitor;
// An error has already been reported to the user, so no need to continue checking.
#[deriving(Clone,Show)]
pub struct ErrorReported;
pub fn time<T, U>(do_it: bool, what: &str, u: U, f: |U| -> T) -> T {
    local_data_key!(depth: uint);
    if !do_it { return f(u); }

    let old = depth.get().map(|d| *d).unwrap_or(0);
    depth.replace(Some(old + 1));

    let mut u = Some(u);
    let mut rv = None;
    let dur = Duration::span(|| {
        rv = Some(f(u.take().unwrap()))
    });
    let rv = rv.unwrap();

    println!("{}time: {}.{:03} \t{}", " ".repeat(old),
             dur.num_seconds(), dur.num_milliseconds() % 1000, what);
    depth.replace(Some(old));

    rv
}
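
// A minimal sketch of how `time` might be called (illustrative only; the
// `time_passes` flag, `run_pass`, and `krate` are placeholder names, not
// items defined in this module):
//
//     let krate = time(time_passes, "example pass", krate,
//                      |krate| run_pass(krate));
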
pub fn indent<R: Show>(op: || -> R) -> R {
    // Use in conjunction with the log post-processor like `src/etc/indenter`
    // to make debug output more readable.
    debug!(">>");
    let r = op();
    debug!("<< (Result = {})", r);
    r
}
pub struct Indenter {
    _cannot_construct_outside_of_this_module: ()
}
impl Drop for Indenter {
    fn drop(&mut self) { debug!("<<"); }
}
pub fn indenter() -> Indenter {
    debug!(">>");
    Indenter { _cannot_construct_outside_of_this_module: () }
}
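
// Hypothetical use of the `Indenter` guard above (illustrative only;
// `do_some_work` is a placeholder): the closing "<<" is logged from `drop`
// when `_i` goes out of scope, even on early return.
//
//     let _i = indenter();
//     do_some_work();
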
struct LoopQueryVisitor<'a> {
    p: |&ast::Expr_|: 'a -> bool,
    flag: bool,
}
impl<'a, 'v> Visitor<'v> for LoopQueryVisitor<'a> {
    fn visit_expr(&mut self, e: &ast::Expr) {
        self.flag |= (self.p)(&e.node);
        match e.node {
            // Skip inner loops, since a break in the inner loop isn't a
            // break inside the outer loop
            ast::ExprLoop(..) | ast::ExprWhile(..) | ast::ExprForLoop(..) => {}
            _ => visit::walk_expr(self, e)
        }
    }
}
// Takes a predicate `p`; returns true iff `p` is true for any subexpression
// of `b` -- skipping any inner loops (`loop`, `while`, `for`).
pub fn loop_query(b: &ast::Block, p: |&ast::Expr_| -> bool) -> bool {
    let mut v = LoopQueryVisitor {
        p: p,
        flag: false,
    };
    visit::walk_block(&mut v, b);
    return v.flag;
}
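
// Sketch of a typical query (illustrative only): does the body of a loop
// contain a `break` anywhere outside of nested loops? Here `body` is assumed
// to be the loop's `&ast::Block`.
//
//     let has_break = loop_query(body, |e| {
//         match *e {
//             ast::ExprBreak(_) => true,
//             _ => false
//         }
//     });
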
struct BlockQueryVisitor<'a> {
    p: |&ast::Expr|: 'a -> bool,
    flag: bool,
}
impl<'a, 'v> Visitor<'v> for BlockQueryVisitor<'a> {
    fn visit_expr(&mut self, e: &ast::Expr) {
        self.flag |= (self.p)(e);
        visit::walk_expr(self, e)
    }
}
// Takes a predicate `p`; returns true iff `p` is true for any subexpression
// of `b`. Unlike `loop_query`, inner loops are not skipped.
pub fn block_query(b: &ast::Block, p: |&ast::Expr| -> bool) -> bool {
    let mut v = BlockQueryVisitor {
        p: p,
        flag: false,
    };
    visit::walk_block(&mut v, &*b);
    return v.flag;
}
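
// Sketch of a typical query (illustrative only): does block `b` contain an
// explicit `return` expression anywhere inside it?
//
//     let has_ret = block_query(b, |e| {
//         match e.node {
//             ast::ExprRet(_) => true,
//             _ => false
//         }
//     });
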
// T: Eq + Clone + Hash<S> is the node type; H: Hasher<S> is the map's hasher.
pub fn can_reach<S, H: Hasher<S>, T: Eq + Clone + Hash<S>>(
    edges_map: &HashMap<T, Vec<T>, H>,
    source: T,
    destination: T)
    -> bool
{
    /*!
     * Determines whether there exists a path from `source` to
     * `destination`. The graph is defined by the `edges_map`, which
     * maps each node `T` to a list of its adjacent nodes (also of
     * type `T`).
     *
     * Efficiency note: This is implemented in an inefficient way
     * because it is typically invoked on very small graphs. If the graphs
     * become larger, a more efficient graph representation and algorithm
     * would probably be advised.
     */
    if source == destination {
        return true;
    }

    // Do a little breadth-first search here. The `queue` list
    // doubles as a way to detect if we've seen a particular node
    // before. Note that we expect this graph to be an *extremely
    // shallow* tree.
    let mut queue = vec!(source);
    let mut i = 0;
    while i < queue.len() {
        match edges_map.get(&queue[i]) {
            Some(edges) => {
                for target in edges.iter() {
                    if *target == destination {
                        return true;
                    }
                    if !queue.iter().any(|x| x == target) {
                        queue.push((*target).clone());
                    }
                }
            }
            None => {}
        }
        i += 1;
    }
    return false;
}
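
// Minimal sketch of a call (illustrative only), using a three-node graph
// 1 -> 2 -> 3: node 3 is reachable from 1, but 1 is not reachable from 3.
//
//     let mut edges: HashMap<uint, Vec<uint>> = HashMap::new();
//     edges.insert(1u, vec!(2u));
//     edges.insert(2u, vec!(3u));
//     assert!(can_reach(&edges, 1u, 3u));
//     assert!(!can_reach(&edges, 3u, 1u));
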
/// Memoizes a one-argument closure using the given RefCell containing
/// a type implementing MutableMap to serve as a cache.
///
/// In the future the signature of this function is expected to be:
/// ```
/// pub fn memoized<T: Clone, U: Clone, M: MutableMap<T, U>>(
///     cache: &RefCell<M>,
///     f: &|&: T| -> U
/// ) -> impl |&: T| -> U {
/// ```
/// but currently it is not possible.
///
/// # Example
/// ```
/// use std::cell::RefCell;
/// use std::collections::HashMap;
///
/// struct Context {
///     cache: RefCell<HashMap<uint, uint>>
/// }
///
/// fn fibonacci(ctxt: &Context, n: uint) -> uint {
///     memoized(&ctxt.cache, n, |n| match n {
///         0 | 1 => n,
///         _ => fibonacci(ctxt, n - 2) + fibonacci(ctxt, n - 1)
///     })
/// }
/// ```
#[inline(always)]
pub fn memoized<T: Clone + Hash<S> + Eq, U: Clone, S, H: Hasher<S>>(
    cache: &RefCell<HashMap<T, U, H>>,
    arg: T,
    f: |T| -> U
) -> U {
    let key = arg.clone();
    let result = cache.borrow().get(&key).map(|result| result.clone());
    match result {
        Some(result) => result,
        None => {
            let result = f(arg);
            cache.borrow_mut().insert(key, result.clone());
            result
        }
    }
}