// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! Partitioning Codegen Units for Incremental Compilation
//! ======================================================
//!
//! The task of this module is to take the complete set of translation items of
//! a crate and produce a set of codegen units from it, where a codegen unit
//! is a named set of (translation-item, linkage) pairs. That is, this module
//! decides which translation item appears in which codegen unit and with which
//! linkage. The following paragraphs describe some of the background on the
//! partitioning scheme.
//!
//! The most important opportunity for saving on compilation time with
//! incremental compilation is to avoid re-translating and re-optimizing code.
//! Since the unit of translation and optimization for LLVM is a "module" or,
//! as we call it, a "codegen unit", the particulars of how much time can be
//! saved by incremental compilation are tightly linked to how the output
//! program is partitioned into these codegen units prior to passing it to
//! LLVM -- especially because we have to treat codegen units as opaque
//! entities once they are created: There is no way for us to incrementally
//! update an existing LLVM module, so we have to build any such module from
//! scratch if it was affected by some change in the source code.
//!
//! From that point of view it would make sense to maximize the number of
//! codegen units by, for example, putting each function into its own module.
//! That way only those modules would have to be re-compiled that were actually
//! affected by some change, minimizing the number of functions that could have
//! been re-used but just happened to be located in a module that is
//! re-compiled.
//!
//! However, since LLVM optimization does not work across module boundaries,
//! using such a highly granular partitioning would lead to very slow runtime
//! code since it would effectively prohibit inlining and other inter-procedural
//! optimizations. We want to avoid that as much as possible.
//!
//! Thus we end up with a trade-off: The bigger the codegen units, the better
//! LLVM's optimizer can do its work, but also the smaller the compilation time
//! reduction we get from incremental compilation.
//!
//! Ideally, we would create a partitioning such that there are few big codegen
//! units with few interdependencies between them. For now though, we use the
//! following heuristic to determine the partitioning:
//!
//! - There are two codegen units for every source-level module:
//!   - One for "stable", that is non-generic, code
//!   - One for more "volatile" code, i.e. monomorphized instances of functions
//!     defined in that module
//!
//! In order to see why this heuristic makes sense, let's take a look at when a
//! codegen unit can get invalidated:
//!
//! 1. The most straightforward case is when the BODY of a function or global
//! changes. Then any codegen unit containing the code for that item has to be
//! re-compiled. Note that this includes all codegen units where the function
//! has been inlined.
//!
//! 2. The next case is when the SIGNATURE of a function or global changes. In
//! this case, all codegen units containing a REFERENCE to that item have to be
//! re-compiled. This is a superset of case 1.
//!
//! 3. The final and most subtle case is when a REFERENCE to a generic function
//! is added or removed somewhere. Even though the definition of the function
//! might be unchanged, a new REFERENCE might introduce a new monomorphized
//! instance of this function which has to be placed and compiled somewhere.
//! Conversely, when removing a REFERENCE, it might have been the last one with
//! that particular set of generic arguments and thus we have to remove it.
//!
//! From the above we see that just using one codegen unit per source-level
//! module is not such a good idea, since just adding a REFERENCE to some
//! generic item somewhere else would invalidate everything within the module
//! containing the generic item. The heuristic above reduces this detrimental
//! side-effect of references a little by at least not touching the non-generic
//! code of the module.
//!
//! A Note on Inlining
//! ------------------
//! As briefly mentioned above, in order for LLVM to be able to inline a
//! function call, the body of the function has to be available in the LLVM
//! module where the call is made. This has a few consequences for partitioning:
//!
//! - The partitioning algorithm has to take care of placing functions into all
//!   codegen units where they should be available for inlining. It also has to
//!   decide on the correct linkage for these functions.
//!
//! - The partitioning algorithm has to know which functions are likely to get
//!   inlined, so it can distribute function instantiations accordingly. Since
//!   there is no way of knowing for sure which functions LLVM will decide to
//!   inline in the end, we apply a heuristic here: Only functions marked with
//!   #[inline] are considered for inlining by the partitioner. The current
//!   implementation will not try to determine if a function is likely to be
//!   inlined by looking at the function's definition.
//!
//! Note though that as a side-effect of creating one codegen unit per
//! source-level module, functions from the same module will be available for
//! inlining, even when they are not marked #[inline].
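The stable/volatile split described above can be illustrated with a small standalone sketch. All names here (`Item`, `assign_cgus`, the `.volatile` suffix convention) are simplified stand-ins for the real compiler types, not the actual rustc API:

```rust
use std::collections::BTreeMap;

// Hypothetical item: its defining module path and whether it is a
// monomorphized instance of a generic function. Illustrative only.
struct Item {
    module: &'static str,
    is_generic_instance: bool,
}

// Assign each item index to "<module>" or "<module>.volatile", mirroring
// the per-module stable/volatile heuristic from the doc comment above.
fn assign_cgus(items: &[Item]) -> BTreeMap<String, Vec<usize>> {
    let mut cgus: BTreeMap<String, Vec<usize>> = BTreeMap::new();
    for (idx, item) in items.iter().enumerate() {
        let name = if item.is_generic_instance {
            format!("{}.volatile", item.module)
        } else {
            item.module.to_string()
        };
        cgus.entry(name).or_insert_with(Vec::new).push(idx);
    }
    cgus
}

fn main() {
    let items = [
        Item { module: "mycrate-foo", is_generic_instance: false },
        Item { module: "mycrate-foo", is_generic_instance: true },
        Item { module: "mycrate-bar", is_generic_instance: false },
    ];
    let cgus = assign_cgus(&items);
    // Two CGUs for `foo` (stable + volatile), one for `bar`.
    assert_eq!(cgus.len(), 3);
    assert!(cgus.contains_key("mycrate-foo.volatile"));
    println!("{:?}", cgus.keys().collect::<Vec<_>>());
}
```

Changing a generic reference then only invalidates the small `.volatile` unit, leaving the stable unit of the same module untouched.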

use collector::InliningMap;
use common;
use rustc::dep_graph::{DepNode, WorkProductId};
use rustc::hir::def_id::DefId;
use rustc::hir::map::DefPathData;
use rustc::middle::trans::{Linkage, Visibility};
use rustc::ty::{self, TyCtxt, InstanceDef};
use rustc::ty::item_path::characteristic_def_id_of_type;
use rustc::util::nodemap::{FxHashMap, FxHashSet};
use std::collections::hash_map::Entry;
use syntax::ast::NodeId;
use syntax::symbol::{Symbol, InternedString};
use trans_item::{TransItem, TransItemExt, InstantiationMode};

pub use rustc::middle::trans::CodegenUnit;

pub enum PartitioningStrategy {
    /// Generate one codegen unit per source-level module.
    PerModule,

    /// Partition the whole crate into a fixed number of codegen units.
    FixedUnitCount(usize)
}

pub trait CodegenUnitExt<'tcx> {
    fn as_codegen_unit(&self) -> &CodegenUnit<'tcx>;

    fn contains_item(&self, item: &TransItem<'tcx>) -> bool {
        self.items().contains_key(item)
    }

    fn name<'a>(&'a self) -> &'a InternedString
        where 'tcx: 'a,
    {
        &self.as_codegen_unit().name()
    }

    fn items(&self) -> &FxHashMap<TransItem<'tcx>, (Linkage, Visibility)> {
        &self.as_codegen_unit().items()
    }

    fn work_product_id(&self) -> WorkProductId {
        WorkProductId::from_cgu_name(self.name())
    }

    fn work_product_dep_node(&self) -> DepNode {
        self.work_product_id().to_dep_node()
    }

    fn items_in_deterministic_order<'a>(&self,
                                        tcx: TyCtxt<'a, 'tcx, 'tcx>)
                                        -> Vec<(TransItem<'tcx>,
                                                (Linkage, Visibility))> {
        // The codegen tests rely on items being processed in the same order as
        // they appear in the file, so for local items, we sort by node_id first
        #[derive(PartialEq, Eq, PartialOrd, Ord)]
        pub struct ItemSortKey(Option<NodeId>, ty::SymbolName);

        fn item_sort_key<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                                   item: TransItem<'tcx>) -> ItemSortKey {
            ItemSortKey(match item {
                TransItem::Fn(ref instance) => {
                    match instance.def {
                        // We only want to take NodeIds of user-defined
                        // instances into account. The others don't matter for
                        // the codegen tests and can even make item order
                        // unstable.
                        InstanceDef::Item(def_id) => {
                            tcx.hir.as_local_node_id(def_id)
                        }
                        InstanceDef::Intrinsic(..) |
                        InstanceDef::FnPtrShim(..) |
                        InstanceDef::Virtual(..) |
                        InstanceDef::ClosureOnceShim { .. } |
                        InstanceDef::DropGlue(..) |
                        InstanceDef::CloneShim(..) => {
                            None
                        }
                    }
                }
                TransItem::Static(node_id) |
                TransItem::GlobalAsm(node_id) => {
                    Some(node_id)
                }
            }, item.symbol_name(tcx))
        }

        let items: Vec<_> = self.items().iter().map(|(&i, &l)| (i, l)).collect();
        let mut items: Vec<_> = items.iter()
            .map(|il| (il, item_sort_key(tcx, il.0))).collect();
        items.sort_by(|&(_, ref key1), &(_, ref key2)| key1.cmp(key2));
        items.into_iter().map(|(&item_linkage, _)| item_linkage).collect()
    }
}

impl<'tcx> CodegenUnitExt<'tcx> for CodegenUnit<'tcx> {
    fn as_codegen_unit(&self) -> &CodegenUnit<'tcx> {
        self
    }
}

// Anything we can't find a proper codegen unit for goes into this.
const FALLBACK_CODEGEN_UNIT: &'static str = "__rustc_fallback_codegen_unit";

pub fn partition<'a, 'tcx, I>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                              trans_items: I,
                              strategy: PartitioningStrategy,
                              inlining_map: &InliningMap<'tcx>)
                              -> Vec<CodegenUnit<'tcx>>
    where I: Iterator<Item = TransItem<'tcx>>
{
    // In the first step, we place all regular translation items into their
    // respective 'home' codegen unit. Regular translation items are all
    // functions and statics defined in the local crate.
    let mut initial_partitioning = place_root_translation_items(tcx,
                                                                trans_items);

    debug_dump(tcx, "INITIAL PARTITIONING:", initial_partitioning.codegen_units.iter());

    // If the partitioning should produce a fixed count of codegen units, merge
    // until that count is reached.
    if let PartitioningStrategy::FixedUnitCount(count) = strategy {
        merge_codegen_units(&mut initial_partitioning, count, &tcx.crate_name.as_str());

        debug_dump(tcx, "POST MERGING:", initial_partitioning.codegen_units.iter());
    }

    // In the next step, we use the inlining map to determine which additional
    // translation items have to go into each codegen unit. These additional
    // translation items can be drop-glue, functions from external crates, and
    // local functions the definition of which is marked with #[inline].
    let mut post_inlining = place_inlined_translation_items(initial_partitioning,
                                                            inlining_map);

    debug_dump(tcx, "POST INLINING:", post_inlining.codegen_units.iter());

    // Next we try to make as many symbols "internal" as possible, so LLVM has
    // more freedom to optimize.
    internalize_symbols(tcx, &mut post_inlining, inlining_map);

    // Finally, sort by codegen unit name, so that we get deterministic results.
    let PostInliningPartitioning {
        codegen_units: mut result,
        trans_item_placements: _,
        internalization_candidates: _,
    } = post_inlining;

    result.sort_by(|cgu1, cgu2| {
        cgu1.name().cmp(cgu2.name())
    });

    if tcx.sess.opts.enable_dep_node_debug_strs() {
        for cgu in &result {
            let dep_node = cgu.work_product_dep_node();
            tcx.dep_graph.register_dep_node_debug_str(dep_node,
                                                      || cgu.name().to_string());
        }
    }

    result
}

struct PreInliningPartitioning<'tcx> {
    codegen_units: Vec<CodegenUnit<'tcx>>,
    roots: FxHashSet<TransItem<'tcx>>,
    internalization_candidates: FxHashSet<TransItem<'tcx>>,
}

/// For symbol internalization, we need to know whether a symbol/trans-item is
/// accessed from outside the codegen unit it is defined in. This type is used
/// to keep track of that.
#[derive(Clone, PartialEq, Eq, Debug)]
enum TransItemPlacement {
    SingleCgu { cgu_name: InternedString },
    MultipleCgus,
}

struct PostInliningPartitioning<'tcx> {
    codegen_units: Vec<CodegenUnit<'tcx>>,
    trans_item_placements: FxHashMap<TransItem<'tcx>, TransItemPlacement>,
    internalization_candidates: FxHashSet<TransItem<'tcx>>,
}

fn place_root_translation_items<'a, 'tcx, I>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                                             trans_items: I)
                                             -> PreInliningPartitioning<'tcx>
    where I: Iterator<Item = TransItem<'tcx>>
{
    let mut roots = FxHashSet();
    let mut codegen_units = FxHashMap();
    let is_incremental_build = tcx.sess.opts.incremental.is_some();
    let mut internalization_candidates = FxHashSet();

    for trans_item in trans_items {
        match trans_item.instantiation_mode(tcx) {
            InstantiationMode::GloballyShared { .. } => {}
            InstantiationMode::LocalCopy => continue,
        }

        let characteristic_def_id = characteristic_def_id_of_trans_item(tcx, trans_item);
        let is_volatile = is_incremental_build &&
                          trans_item.is_generic_fn();

        let codegen_unit_name = match characteristic_def_id {
            Some(def_id) => compute_codegen_unit_name(tcx, def_id, is_volatile),
            None => Symbol::intern(FALLBACK_CODEGEN_UNIT).as_str(),
        };

        let make_codegen_unit = || {
            CodegenUnit::new(codegen_unit_name.clone())
        };

        let codegen_unit = codegen_units.entry(codegen_unit_name.clone())
                                        .or_insert_with(make_codegen_unit);

        let (linkage, visibility) = match trans_item.explicit_linkage(tcx) {
            Some(explicit_linkage) => (explicit_linkage, Visibility::Default),
            None => {
                match trans_item {
                    TransItem::Fn(ref instance) => {
                        let visibility = match instance.def {
                            InstanceDef::Item(def_id) => {
                                if def_id.is_local() {
                                    if tcx.is_exported_symbol(def_id) {
                                        Visibility::Default
                                    } else {
                                        Visibility::Hidden
                                    }
                                } else {
                                    Visibility::Hidden
                                }
                            }
                            InstanceDef::FnPtrShim(..) |
                            InstanceDef::Virtual(..) |
                            InstanceDef::Intrinsic(..) |
                            InstanceDef::ClosureOnceShim { .. } |
                            InstanceDef::DropGlue(..) |
                            InstanceDef::CloneShim(..) => {
                                Visibility::Hidden
                            }
                        };
                        (Linkage::External, visibility)
                    }
                    TransItem::Static(node_id) |
                    TransItem::GlobalAsm(node_id) => {
                        let def_id = tcx.hir.local_def_id(node_id);
                        let visibility = if tcx.is_exported_symbol(def_id) {
                            Visibility::Default
                        } else {
                            Visibility::Hidden
                        };
                        (Linkage::External, visibility)
                    }
                }
            }
        };
        if visibility == Visibility::Hidden {
            internalization_candidates.insert(trans_item);
        }

        codegen_unit.items_mut().insert(trans_item, (linkage, visibility));
        roots.insert(trans_item);
    }

    // Always ensure we have at least one CGU; otherwise, if we have a
    // crate with just types (for example), we could wind up with no CGU.
    if codegen_units.is_empty() {
        let codegen_unit_name = Symbol::intern(FALLBACK_CODEGEN_UNIT).as_str();
        codegen_units.insert(codegen_unit_name.clone(),
                             CodegenUnit::new(codegen_unit_name.clone()));
    }

    PreInliningPartitioning {
        codegen_units: codegen_units.into_iter()
                                    .map(|(_, codegen_unit)| codegen_unit)
                                    .collect(),
        roots,
        internalization_candidates,
    }
}

fn merge_codegen_units<'tcx>(initial_partitioning: &mut PreInliningPartitioning<'tcx>,
                             target_cgu_count: usize,
                             crate_name: &str) {
    assert!(target_cgu_count >= 1);
    let codegen_units = &mut initial_partitioning.codegen_units;

    // Merge the two smallest codegen units until the target count is reached.
    // Note that "size" is estimated here rather inaccurately as the number of
    // translation items in a given unit. This could be improved on.
    while codegen_units.len() > target_cgu_count {
        // Sort small cgus to the back
        codegen_units.sort_by_key(|cgu| -(cgu.items().len() as i64));
        let mut smallest = codegen_units.pop().unwrap();
        let second_smallest = codegen_units.last_mut().unwrap();

        for (k, v) in smallest.items_mut().drain() {
            second_smallest.items_mut().insert(k, v);
        }
    }

    for (index, cgu) in codegen_units.iter_mut().enumerate() {
        cgu.set_name(numbered_codegen_unit_name(crate_name, index));
    }
}
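The merge loop above can be modeled with plain integers, where each codegen unit is reduced to its item count. The function name `merge_smallest` and the size-only model are illustrative, not part of the compiler:

```rust
// Toy model of merge_codegen_units: each CGU is just a size. Repeatedly
// fold the smallest unit into the second-smallest until `target` remain.
fn merge_smallest(mut sizes: Vec<usize>, target: usize) -> Vec<usize> {
    assert!(target >= 1);
    while sizes.len() > target {
        // Sort big-to-small so the two smallest units sit at the back.
        sizes.sort_by_key(|&s| std::cmp::Reverse(s));
        let smallest = sizes.pop().unwrap();
        *sizes.last_mut().unwrap() += smallest;
    }
    sizes
}

fn main() {
    // Four units of sizes 3, 1, 2, 5 merged down to two units:
    // 1 merges into 2 (-> 3), then that 3 merges into the other 3 (-> 6).
    let merged = merge_smallest(vec![3, 1, 2, 5], 2);
    assert_eq!(merged, vec![5, 6]);
    println!("{:?}", merged);
}
```

Merging the two smallest units keeps the size distribution as even as this greedy strategy allows, which matters because the largest unit bounds re-compilation time.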

fn place_inlined_translation_items<'tcx>(initial_partitioning: PreInliningPartitioning<'tcx>,
                                         inlining_map: &InliningMap<'tcx>)
                                         -> PostInliningPartitioning<'tcx> {
    let mut new_partitioning = Vec::new();
    let mut trans_item_placements = FxHashMap();

    let PreInliningPartitioning {
        codegen_units: initial_cgus,
        roots,
        internalization_candidates,
    } = initial_partitioning;

    let single_codegen_unit = initial_cgus.len() == 1;

    for old_codegen_unit in initial_cgus {
        // Collect all items that need to be available in this codegen unit
        let mut reachable = FxHashSet();
        for root in old_codegen_unit.items().keys() {
            follow_inlining(*root, inlining_map, &mut reachable);
        }

        let mut new_codegen_unit = CodegenUnit::new(old_codegen_unit.name().clone());

        // Add all translation items that are not already there
        for trans_item in reachable {
            if let Some(linkage) = old_codegen_unit.items().get(&trans_item) {
                // This is a root, just copy it over
                new_codegen_unit.items_mut().insert(trans_item, *linkage);
            } else {
                if roots.contains(&trans_item) {
                    bug!("GloballyShared trans-item inlined into other CGU: \
                          {:?}", trans_item);
                }

                // This is a cgu-private copy
                new_codegen_unit.items_mut().insert(
                    trans_item,
                    (Linkage::Internal, Visibility::Default),
                );
            }

            if !single_codegen_unit {
                // If there is more than one codegen unit, we need to keep
                // track of in which codegen units each translation item is
                // placed:
                match trans_item_placements.entry(trans_item) {
                    Entry::Occupied(e) => {
                        let placement = e.into_mut();
                        debug_assert!(match *placement {
                            TransItemPlacement::SingleCgu { ref cgu_name } => {
                                *cgu_name != *new_codegen_unit.name()
                            }
                            TransItemPlacement::MultipleCgus => true,
                        });
                        *placement = TransItemPlacement::MultipleCgus;
                    }
                    Entry::Vacant(e) => {
                        e.insert(TransItemPlacement::SingleCgu {
                            cgu_name: new_codegen_unit.name().clone()
                        });
                    }
                }
            }
        }

        new_partitioning.push(new_codegen_unit);
    }

    return PostInliningPartitioning {
        codegen_units: new_partitioning,
        trans_item_placements,
        internalization_candidates,
    };

    fn follow_inlining<'tcx>(trans_item: TransItem<'tcx>,
                             inlining_map: &InliningMap<'tcx>,
                             visited: &mut FxHashSet<TransItem<'tcx>>) {
        if !visited.insert(trans_item) {
            return;
        }

        inlining_map.with_inlining_candidates(trans_item, |target| {
            follow_inlining(target, inlining_map, visited);
        });
    }
}
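The `follow_inlining` helper is a depth-first traversal computing the transitive closure of inlining candidates. A self-contained sketch, with item ids as plain `u32` and the inlining map as simple adjacency lists (both stand-ins for the real `TransItem`/`InliningMap` types):

```rust
use std::collections::{HashMap, HashSet};

// Collect everything transitively reachable from `root` through the
// inlining map, mirroring the recursion in `follow_inlining`.
fn reachable_from(root: u32, inlining_map: &HashMap<u32, Vec<u32>>) -> HashSet<u32> {
    fn follow(item: u32, map: &HashMap<u32, Vec<u32>>, visited: &mut HashSet<u32>) {
        if !visited.insert(item) {
            return; // already seen; this also terminates cycles
        }
        if let Some(targets) = map.get(&item) {
            for &target in targets {
                follow(target, map, visited);
            }
        }
    }
    let mut visited = HashSet::new();
    follow(root, inlining_map, &mut visited);
    visited
}

fn main() {
    // 0 inlines 1, 1 inlines 2, 2 inlines 0 (a cycle); 3 is unrelated.
    let mut map = HashMap::new();
    map.insert(0, vec![1]);
    map.insert(1, vec![2]);
    map.insert(2, vec![0]);
    let reach = reachable_from(0, &map);
    assert_eq!(reach.len(), 3);
    assert!(!reach.contains(&3));
    println!("{:?}", reach);
}
```

The `visited` set doubles as the result and as cycle protection, just as `reachable` does in the function above.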

fn internalize_symbols<'a, 'tcx>(_tcx: TyCtxt<'a, 'tcx, 'tcx>,
                                 partitioning: &mut PostInliningPartitioning<'tcx>,
                                 inlining_map: &InliningMap<'tcx>) {
    if partitioning.codegen_units.len() == 1 {
        // Fast path for when there is only one codegen unit. In this case we
        // can internalize all candidates, since there is nowhere else they
        // could be accessed from.
        for cgu in &mut partitioning.codegen_units {
            for candidate in &partitioning.internalization_candidates {
                cgu.items_mut().insert(*candidate,
                                       (Linkage::Internal, Visibility::Default));
            }
        }

        return;
    }

    // Build a map from every translation item to all the translation items
    // that reference it.
    let mut accessor_map: FxHashMap<TransItem<'tcx>, Vec<TransItem<'tcx>>> = FxHashMap();
    inlining_map.iter_accesses(|accessor, accessees| {
        for accessee in accessees {
            accessor_map.entry(*accessee)
                        .or_insert(Vec::new())
                        .push(accessor);
        }
    });

    let trans_item_placements = &partitioning.trans_item_placements;

    // For each internalization candidate in each codegen unit, check if it is
    // accessed from outside its defining codegen unit.
    for cgu in &mut partitioning.codegen_units {
        let home_cgu = TransItemPlacement::SingleCgu {
            cgu_name: cgu.name().clone()
        };

        for (accessee, linkage_and_visibility) in cgu.items_mut() {
            if !partitioning.internalization_candidates.contains(accessee) {
                // This item is not a candidate for internalizing, so skip it.
                continue
            }
            debug_assert_eq!(trans_item_placements[accessee], home_cgu);

            if let Some(accessors) = accessor_map.get(accessee) {
                if accessors.iter()
                            .filter_map(|accessor| {
                                // Some accessors might not have been
                                // instantiated. We can safely ignore those.
                                trans_item_placements.get(accessor)
                            })
                            .any(|placement| *placement != home_cgu) {
                    // Found an accessor from another CGU, so skip to the next
                    // item without marking this one as internal.
                    continue
                }
            }

            // If we got here, we did not find any accesses from other CGUs,
            // so it's fine to make this translation item internal.
            *linkage_and_visibility = (Linkage::Internal, Visibility::Default);
        }
    }
}

fn characteristic_def_id_of_trans_item<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                                                 trans_item: TransItem<'tcx>)
                                                 -> Option<DefId> {
    match trans_item {
        TransItem::Fn(instance) => {
            let def_id = match instance.def {
                ty::InstanceDef::Item(def_id) => def_id,
                ty::InstanceDef::FnPtrShim(..) |
                ty::InstanceDef::ClosureOnceShim { .. } |
                ty::InstanceDef::Intrinsic(..) |
                ty::InstanceDef::DropGlue(..) |
                ty::InstanceDef::Virtual(..) |
                ty::InstanceDef::CloneShim(..) => return None
            };

            // If this is a method, we want to put it into the same module as
            // its self-type. If the self-type does not provide a characteristic
            // DefId, we use the location of the impl after all.

            if tcx.trait_of_item(def_id).is_some() {
                let self_ty = instance.substs.type_at(0);
                // This is an implementation of a trait method.
                return characteristic_def_id_of_type(self_ty).or(Some(def_id));
            }

            if let Some(impl_def_id) = tcx.impl_of_method(def_id) {
                // This is a method within an inherent impl, find out what the
                // self-type is:
                let impl_self_ty = common::def_ty(tcx, impl_def_id, instance.substs);
                if let Some(def_id) = characteristic_def_id_of_type(impl_self_ty) {
                    return Some(def_id);
                }
            }

            Some(def_id)
        }
        TransItem::Static(node_id) |
        TransItem::GlobalAsm(node_id) => Some(tcx.hir.local_def_id(node_id)),
    }
}

fn compute_codegen_unit_name<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                                       def_id: DefId,
                                       volatile: bool)
                                       -> InternedString {
    // Unfortunately we cannot just use the `ty::item_path` infrastructure here
    // because we need paths to modules and the DefIds of those are not
    // available anymore for external items.
    let mut mod_path = String::with_capacity(64);

    let def_path = tcx.def_path(def_id);
    mod_path.push_str(&tcx.crate_name(def_path.krate).as_str());

    for part in tcx.def_path(def_id)
                   .data
                   .iter()
                   .take_while(|part| {
                        match part.data {
                            DefPathData::Module(..) => true,
                            _ => false,
                        }
                    }) {
        mod_path.push_str("-");
        mod_path.push_str(&part.data.as_interned_str());
    }

    if volatile {
        mod_path.push_str(".volatile");
    }

    Symbol::intern(&mod_path[..]).as_str()
}
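The string assembly in `compute_codegen_unit_name` can be shown in isolation. The helper name `cgu_name` and its flattened-path signature are hypothetical simplifications of the `DefPath` walk above:

```rust
// Build "<crate>-<mod>-<mod>[.volatile]" from pre-extracted module path
// segments, the same shape compute_codegen_unit_name produces.
fn cgu_name(krate: &str, modules: &[&str], volatile: bool) -> String {
    let mut name = String::from(krate);
    for module in modules {
        name.push_str("-");
        name.push_str(module);
    }
    if volatile {
        // Volatile (monomorphized, generic) code gets its own suffixed unit.
        name.push_str(".volatile");
    }
    name
}

fn main() {
    assert_eq!(cgu_name("mycrate", &["foo", "bar"], true),
               "mycrate-foo-bar.volatile");
    assert_eq!(cgu_name("mycrate", &[], false), "mycrate");
    println!("{}", cgu_name("mycrate", &["foo"], false));
}
```

Because the name is derived only from the module path and the volatile flag, it is stable across compilation sessions, which is what lets incremental compilation reuse a unit's object file.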

fn numbered_codegen_unit_name(crate_name: &str, index: usize) -> InternedString {
    Symbol::intern(&format!("{}{}", crate_name, index)).as_str()
}

fn debug_dump<'a, 'b, 'tcx, I>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                               label: &str,
                               cgus: I)
    where I: Iterator<Item=&'b CodegenUnit<'tcx>>,
          'tcx: 'a + 'b
{
    if cfg!(debug_assertions) {
        debug!("{}", label);
        for cgu in cgus {
            debug!("CodegenUnit {}:", cgu.name());

            for (trans_item, linkage) in cgu.items() {
                let symbol_name = trans_item.symbol_name(tcx);
                let symbol_hash_start = symbol_name.rfind('h');
                let symbol_hash = symbol_hash_start.map(|i| &symbol_name[i ..])
                                                   .unwrap_or("<no hash>");

                debug!(" - {} [{:?}] [{}]",
                       trans_item.to_string(tcx),
                       linkage,
                       symbol_hash);
            }

            debug!("");
        }
    }
}