// Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! ## The Cleanup module
//!
//! The cleanup module tracks what values need to be cleaned up as scopes
//! are exited, either via panic or just normal control flow. The basic
//! idea is that the function context maintains a stack of cleanup scopes
//! that are pushed/popped as we traverse the AST tree. There is typically
//! at least one cleanup scope per AST node; some AST nodes may introduce
//! additional temporary scopes.
//!
//! Cleanup items can be scheduled into any of the scopes on the stack.
//! Typically, when a scope is popped, we will also generate the code for
//! each of its cleanups at that time. This corresponds to a normal exit
//! from a block (for example, an expression completing evaluation
//! successfully without panic). However, it is also possible to pop a
//! block *without* executing its cleanups; this is typically used to
//! guard intermediate values that must be cleaned up on panic, but not
//! if everything goes right. See the section on custom scopes below for
//! more details.
//!
//! Cleanup scopes come in three kinds:
//!
//! - **AST scopes:** each AST node in a function body has a corresponding
//!   AST scope. We push the AST scope when we start generating code for an
//!   AST node and pop it once the AST node has been fully generated.
//! - **Loop scopes:** loops have an additional cleanup scope. Cleanups are
//!   never scheduled into loop scopes; instead, they are used to record the
//!   basic blocks that we should branch to when a `continue` or `break`
//!   statement is encountered.
//! - **Custom scopes:** custom scopes are typically used to ensure cleanup
//!   of intermediate values.
//!
//! ### When to schedule cleanup
//!
//! Although the cleanup system is intended to *feel* fairly declarative,
//! it's still important to time calls to `schedule_clean()` correctly.
//! Basically, you should not schedule cleanup for memory until it has
//! been initialized, because if an unwind should occur before the memory
//! is fully initialized, then the cleanup will run and try to free or
//! drop uninitialized memory. If the initialization itself produces
//! byproducts that need to be freed, then you should use temporary custom
//! scopes to ensure that those byproducts will get freed on unwind. For
//! example, an expression like `box foo()` will first allocate a box on the
//! heap and then call `foo()` -- if `foo()` should panic, this box needs
//! to be *shallowly* freed.
//!
//! ### Long-distance jumps
//!
//! In addition to popping a scope, which corresponds to normal control
//! flow exiting the scope, we may also *jump out* of a scope into some
//! earlier scope on the stack. This can occur in response to a `return`,
//! `break`, or `continue` statement, but also in response to panic. In
//! any of these cases, we will generate a series of cleanup blocks for
//! each of the scopes that is exited. So, if the stack contains scopes A
//! ... Z, and we break out of a loop whose corresponding cleanup scope is
//! X, we would generate cleanup blocks for the cleanups in X, Y, and Z.
//! After cleanup is done we would branch to the exit point for scope X.
//! But if panic should occur, we would generate cleanups for all the
//! scopes from A to Z and then resume the unwind process afterwards.
//!
//! To avoid generating tons of code, we cache the cleanup blocks that we
//! create for breaks, returns, unwinds, and other jumps. Whenever a new
//! cleanup is scheduled, though, we must clear these cached blocks. A
//! possible improvement would be to keep the cached blocks but simply
//! generate a new block which performs the additional cleanup and then
//! branches to the existing cached blocks.
//!
//! ### AST and loop cleanup scopes
//!
//! AST cleanup scopes are pushed when we begin processing an AST node and
//! popped once the node has been fully processed. They are used to house
//! cleanups related to rvalue temporaries that get referenced (e.g., due
//! to an expression like `&Foo()`). Whenever an AST scope is popped, we
//! always trans all the cleanups, adding the cleanup code after the
//! postdominator of the AST node.
//!
//! AST nodes that represent breakable loops also push a loop scope; the
//! loop scope never has any actual cleanups; it's just used to point to
//! the basic blocks where control should flow after a "continue" or
//! "break" statement. Popping a loop scope never generates code.
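//!
//! For example, translating a `break` can be sketched (in terms of the
//! methods defined below; `build::Br` stands in for emitting the actual
//! branch instruction) as:
//!
//! ```ignore
//! let loop_id = fcx.top_loop_scope();
//! // Get a block that runs any intervening cleanups and then jumps to
//! // the loop's break target; cached if we have generated it before.
//! let target = fcx.normal_exit_block(loop_id, EXIT_BREAK);
//! build::Br(bcx, target, DebugLoc::None);
//! ```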
//!
//! ### Custom cleanup scopes
//!
//! Custom cleanup scopes are used for a variety of purposes. The most
//! common, though, is to handle temporary byproducts, where cleanup only
//! needs to occur on panic. The general strategy is to push a custom
//! cleanup scope, schedule *shallow* cleanups into the custom scope, and
//! then pop the custom scope (without transing the cleanups) when
//! execution succeeds normally. This way the cleanups are only trans'd on
//! unwind, and only up until the point where execution succeeded, at
//! which time the complete value should be stored in an lvalue or some
//! other place where normal cleanup applies.
//!
//! To spell it out, here is an example. Imagine an expression `box expr`.
//! We would basically:
//!
//! 1. Push a custom cleanup scope C.
//! 2. Allocate the box.
//! 3. Schedule a shallow free in the scope C.
//! 4. Trans `expr` into the box.
//! 5. Pop the scope C.
//! 6. Return the box as an rvalue.
//!
//! This way, if a panic occurs while transing `expr`, the custom
//! cleanup scope C is still on the stack, and hence the box will be
//! freed. The trans code for `expr` itself is responsible for freeing
//! any other byproducts that may be in play.
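//!
//! In terms of the methods defined below, the steps above can be sketched
//! roughly as follows (`alloc_box`, `schedule_free_value`, and `trans_into`
//! are illustrative names, not necessarily the exact helpers involved):
//!
//! ```ignore
//! let custom = fcx.push_custom_cleanup_scope();       // 1. push scope C
//! let ptr = alloc_box(bcx, contents_ty);              // 2. allocate the box
//! fcx.schedule_free_value(CustomScope(custom), ptr);  // 3. shallow free in C
//! let bcx = trans_into(bcx, expr, ptr);               // 4. trans `expr` into it
//! fcx.pop_custom_cleanup_scope(custom);               // 5. pop C, skipping cleanups
//! ```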

pub use self::ScopeId::*;
pub use self::CleanupScopeKind::*;
pub use self::EarlyExitLabel::*;
pub use self::Heap::*;

use llvm::{BasicBlockRef, ValueRef};
use trans::base;
use trans::build;
use trans::callee;
use trans::common;
use trans::common::{Block, FunctionContext, ExprId, NodeIdAndSpan};
use trans::debuginfo::{DebugLoc, ToDebugLoc};
use trans::glue;
use middle::region;
use trans::type_::Type;
use middle::ty::{self, Ty};
use std::fmt;
use syntax::ast;
use util::ppaux::Repr;
pub struct CleanupScope<'blk, 'tcx: 'blk> {
    // The id of this cleanup scope. If the id is None,
    // this is a *temporary scope* that is pushed during trans to
    // cleanup miscellaneous garbage that trans may generate whose
    // lifetime is a subset of some expression. See module doc for
    // more details.
    kind: CleanupScopeKind<'blk, 'tcx>,

    // Cleanups to run upon scope exit.
    cleanups: Vec<CleanupObj<'tcx>>,

    // The debug location any drop calls generated for this scope will be
    // associated with.
    debug_loc: DebugLoc,

    cached_early_exits: Vec<CachedEarlyExit>,
    cached_landing_pad: Option<BasicBlockRef>,
}

#[derive(Copy, Debug)]
pub struct CustomScopeIndex {
    index: uint
}

pub const EXIT_BREAK: uint = 0;
pub const EXIT_LOOP: uint = 1;
pub const EXIT_MAX: uint = 2;

pub enum CleanupScopeKind<'blk, 'tcx: 'blk> {
    CustomScopeKind,
    AstScopeKind(ast::NodeId),
    LoopScopeKind(ast::NodeId, [Block<'blk, 'tcx>; EXIT_MAX])
}

impl<'blk, 'tcx: 'blk> fmt::Debug for CleanupScopeKind<'blk, 'tcx> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match *self {
            CustomScopeKind => write!(f, "CustomScopeKind"),
            AstScopeKind(nid) => write!(f, "AstScopeKind({})", nid),
            LoopScopeKind(nid, ref blks) => {
                try!(write!(f, "LoopScopeKind({}, [", nid));
                for blk in blks {
                    try!(write!(f, "{:p}, ", blk));
                }
                write!(f, "])")
            }
        }
    }
}

#[derive(Copy, PartialEq, Debug)]
pub enum EarlyExitLabel {
    UnwindExit,
    ReturnExit,
    LoopExit(ast::NodeId, uint)
}

#[derive(Copy)]
pub struct CachedEarlyExit {
    label: EarlyExitLabel,
    cleanup_block: BasicBlockRef,
}

pub trait Cleanup<'tcx> {
    fn must_unwind(&self) -> bool;
    fn clean_on_unwind(&self) -> bool;
    fn is_lifetime_end(&self) -> bool;
    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx>;
}

pub type CleanupObj<'tcx> = Box<Cleanup<'tcx>+'tcx>;

#[derive(Copy, Debug)]
pub enum ScopeId {
    AstScope(ast::NodeId),
    CustomScope(CustomScopeIndex)
}

impl<'blk, 'tcx> CleanupMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
    /// Invoked when we start to trans the code contained within a new cleanup scope.
    fn push_ast_cleanup_scope(&self, debug_loc: NodeIdAndSpan) {
        debug!("push_ast_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(debug_loc.id));

        // FIXME(#2202) -- currently closure bodies have a parent
        // region, which messes up the assertion below, since there
        // are no cleanup scopes on the stack at the start of
        // trans'ing a closure body. I think though that this should
        // eventually be fixed by closure bodies not having a parent
        // region, though that's a touch unclear, and it might also be
        // better just to narrow this assertion more (i.e., by
        // excluding id's that correspond to closure bodies only). For
        // now we just say that if there is already an AST scope on the stack,
        // this new AST scope had better be its immediate child.
        let top_scope = self.top_ast_scope();
        if top_scope.is_some() {
            assert!((self.ccx
                     .tcx()
                     .region_maps
                     .opt_encl_scope(region::CodeExtent::from_node_id(debug_loc.id))
                     .map(|s| s.node_id()) == top_scope)
                    ||
                    (self.ccx
                     .tcx()
                     .region_maps
                     .opt_encl_scope(region::CodeExtent::DestructionScope(debug_loc.id))
                     .map(|s| s.node_id()) == top_scope));
        }

        self.push_scope(CleanupScope::new(AstScopeKind(debug_loc.id),
                                          debug_loc.debug_loc()));
    }

    fn push_loop_cleanup_scope(&self,
                               id: ast::NodeId,
                               exits: [Block<'blk, 'tcx>; EXIT_MAX]) {
        debug!("push_loop_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(id));
        assert_eq!(Some(id), self.top_ast_scope());

        // Just copy the debuginfo source location from the enclosing scope
        let debug_loc = self.scopes
                            .borrow()
                            .last()
                            .unwrap()
                            .debug_loc;

        self.push_scope(CleanupScope::new(LoopScopeKind(id, exits), debug_loc));
    }

    fn push_custom_cleanup_scope(&self) -> CustomScopeIndex {
        let index = self.scopes_len();
        debug!("push_custom_cleanup_scope(): {}", index);

        // Just copy the debuginfo source location from the enclosing scope
        let debug_loc = self.scopes
                            .borrow()
                            .last()
                            .map(|opt_scope| opt_scope.debug_loc)
                            .unwrap_or(DebugLoc::None);

        self.push_scope(CleanupScope::new(CustomScopeKind, debug_loc));
        CustomScopeIndex { index: index }
    }

    fn push_custom_cleanup_scope_with_debug_loc(&self,
                                                debug_loc: NodeIdAndSpan)
                                                -> CustomScopeIndex {
        let index = self.scopes_len();
        debug!("push_custom_cleanup_scope_with_debug_loc(): {}", index);

        self.push_scope(CleanupScope::new(CustomScopeKind,
                                          debug_loc.debug_loc()));
        CustomScopeIndex { index: index }
    }

    /// Removes the cleanup scope for id `cleanup_scope`, which must be at the top of the cleanup
    /// stack, and generates the code to do its cleanups for normal exit.
    fn pop_and_trans_ast_cleanup_scope(&self,
                                       bcx: Block<'blk, 'tcx>,
                                       cleanup_scope: ast::NodeId)
                                       -> Block<'blk, 'tcx> {
        debug!("pop_and_trans_ast_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(cleanup_scope));

        assert!(self.top_scope(|s| s.kind.is_ast_with_id(cleanup_scope)));

        let scope = self.pop_scope();
        self.trans_scope_cleanups(bcx, &scope)
    }

    /// Removes the loop cleanup scope for id `cleanup_scope`, which must be at the top of the
    /// cleanup stack. Does not generate any cleanup code, since loop scopes should exit by
    /// branching to a block generated by `normal_exit_block`.
    fn pop_loop_cleanup_scope(&self,
                              cleanup_scope: ast::NodeId) {
        debug!("pop_loop_cleanup_scope({})",
               self.ccx.tcx().map.node_to_string(cleanup_scope));

        assert!(self.top_scope(|s| s.kind.is_loop_with_id(cleanup_scope)));

        let _ = self.pop_scope();
    }

    /// Removes the top cleanup scope from the stack without executing its cleanups. The top
    /// cleanup scope must be the temporary scope `custom_scope`.
    fn pop_custom_cleanup_scope(&self,
                                custom_scope: CustomScopeIndex) {
        debug!("pop_custom_cleanup_scope({})", custom_scope.index);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));
        let _ = self.pop_scope();
    }

    /// Removes the top cleanup scope from the stack, which must be a temporary scope, and
    /// generates the code to do its cleanups for normal exit.
    fn pop_and_trans_custom_cleanup_scope(&self,
                                          bcx: Block<'blk, 'tcx>,
                                          custom_scope: CustomScopeIndex)
                                          -> Block<'blk, 'tcx> {
        debug!("pop_and_trans_custom_cleanup_scope({:?})", custom_scope);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));

        let scope = self.pop_scope();
        self.trans_scope_cleanups(bcx, &scope)
    }
|
|
|
|
|
2014-11-25 20:17:11 -06:00
|
|
|
/// Returns the id of the top-most loop scope
|
2014-01-15 13:39:08 -06:00
|
|
|
fn top_loop_scope(&self) -> ast::NodeId {
|
2014-03-20 21:49:20 -05:00
|
|
|
for scope in self.scopes.borrow().iter().rev() {
|
2014-11-29 15:41:21 -06:00
|
|
|
if let LoopScopeKind(id, _) = scope.kind {
|
|
|
|
return id;
|
2014-01-15 13:39:08 -06:00
|
|
|
}
|
|
|
|
}
|
2014-03-15 15:29:34 -05:00
|
|
|
self.ccx.sess().bug("no loop scope found");
|
2014-01-15 13:39:08 -06:00
|
|
|
}
|
|
|
|
|
2014-11-25 20:17:11 -06:00
|
|
|
/// Returns a block to branch to which will perform all pending cleanups and then
|
|
|
|
/// break/continue (depending on `exit`) out of the loop with id `cleanup_scope`
|
2014-09-06 11:13:04 -05:00
|
|
|
fn normal_exit_block(&'blk self,
|
2014-01-15 13:39:08 -06:00
|
|
|
cleanup_scope: ast::NodeId,
|
|
|
|
exit: uint) -> BasicBlockRef {
|
|
|
|
self.trans_cleanups_to_exit_scope(LoopExit(cleanup_scope, exit))
|
|
|
|
}
|
|
|
|
|
2014-11-25 20:17:11 -06:00
|
|
|
/// Returns a block to branch to which will perform all pending cleanups and then return from
|
|
|
|
/// this function
|
2014-09-06 11:13:04 -05:00
|
|
|
fn return_exit_block(&'blk self) -> BasicBlockRef {
|
2014-01-15 13:39:08 -06:00
|
|
|
self.trans_cleanups_to_exit_scope(ReturnExit)
|
|
|
|
}
|
|
|
|
|
Emit LLVM lifetime intrinsics to improve stack usage and codegen in general
Lifetime intrinsics help to reduce stack usage, because LLVM can apply
stack coloring to reuse the stack slots of dead allocas for new ones.
For example these functions now both use the same amount of stack, while
previous `bar()` used five times as much as `foo()`:
````rust
fn foo() {
println("{}", 5);
}
fn bar() {
println("{}", 5);
println("{}", 5);
println("{}", 5);
println("{}", 5);
println("{}", 5);
}
````
On top of that, LLVM can also optimize out certain operations when it
knows that memory is dead after a certain point. For example, it can
sometimes remove the zeroing used to cancel the drop glue. This is
possible when the glue drop itself was already removed because the
zeroing dominated the drop glue call. For example in:
````rust
pub fn bar(x: (Box<int>, int)) -> (Box<int>, int) {
x
}
````
With optimizations, this currently results in:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.memset.p0i8.i64(i8* %2, i8 0, i64 16, i32 8, i1 false)
ret void
}
````
But with lifetime intrinsics we get:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.lifetime.end(i64 16, i8* %2)
ret void
}
````
Fixes #15665
2014-05-01 12:32:07 -05:00
|
|
|
fn schedule_lifetime_end(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef) {
|
|
|
|
let drop = box LifetimeEnd {
|
|
|
|
ptr: val,
|
|
|
|
};
|
|
|
|
|
2014-12-20 02:09:35 -06:00
|
|
|
debug!("schedule_lifetime_end({:?}, val={})",
|
Emit LLVM lifetime intrinsics to improve stack usage and codegen in general
Lifetime intrinsics help to reduce stack usage, because LLVM can apply
stack coloring to reuse the stack slots of dead allocas for new ones.
For example these functions now both use the same amount of stack, while
previous `bar()` used five times as much as `foo()`:
````rust
fn foo() {
println("{}", 5);
}
fn bar() {
println("{}", 5);
println("{}", 5);
println("{}", 5);
println("{}", 5);
println("{}", 5);
}
````
On top of that, LLVM can also optimize out certain operations when it
knows that memory is dead after a certain point. For example, it can
sometimes remove the zeroing used to cancel the drop glue. This is
possible when the glue drop itself was already removed because the
zeroing dominated the drop glue call. For example in:
````rust
pub fn bar(x: (Box<int>, int)) -> (Box<int>, int) {
x
}
````
With optimizations, this currently results in:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.memset.p0i8.i64(i8* %2, i8 0, i64 16, i32 8, i1 false)
ret void
}
````
But with lifetime intrinsics we get:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.lifetime.end(i64 16, i8* %2)
ret void
}
````
Fixes #15665
2014-05-01 12:32:07 -05:00
|
|
|
cleanup_scope,
|
2014-09-05 11:18:53 -05:00
|
|
|
self.ccx.tn().val_to_string(val));
|
Emit LLVM lifetime intrinsics to improve stack usage and codegen in general
Lifetime intrinsics help to reduce stack usage, because LLVM can apply
stack coloring to reuse the stack slots of dead allocas for new ones.
For example these functions now both use the same amount of stack, while
previous `bar()` used five times as much as `foo()`:
````rust
fn foo() {
println("{}", 5);
}
fn bar() {
println("{}", 5);
println("{}", 5);
println("{}", 5);
println("{}", 5);
println("{}", 5);
}
````
On top of that, LLVM can also optimize out certain operations when it
knows that memory is dead after a certain point. For example, it can
sometimes remove the zeroing used to cancel the drop glue. This is
possible when the glue drop itself was already removed because the
zeroing dominated the drop glue call. For example in:
````rust
pub fn bar(x: (Box<int>, int)) -> (Box<int>, int) {
x
}
````
With optimizations, this currently results in:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.memset.p0i8.i64(i8* %2, i8 0, i64 16, i32 8, i1 false)
ret void
}
````
But with lifetime intrinsics we get:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.lifetime.end(i64 16, i8* %2)
ret void
}
````
Fixes #15665
2014-05-01 12:32:07 -05:00
|
|
|
|
2014-08-27 20:46:52 -05:00
|
|
|
self.schedule_clean(cleanup_scope, drop as CleanupObj);
|
Emit LLVM lifetime intrinsics to improve stack usage and codegen in general
Lifetime intrinsics help to reduce stack usage, because LLVM can apply
stack coloring to reuse the stack slots of dead allocas for new ones.
For example these functions now both use the same amount of stack, while
previous `bar()` used five times as much as `foo()`:
````rust
fn foo() {
println("{}", 5);
}
fn bar() {
println("{}", 5);
println("{}", 5);
println("{}", 5);
println("{}", 5);
println("{}", 5);
}
````
On top of that, LLVM can also optimize out certain operations when it
knows that memory is dead after a certain point. For example, it can
sometimes remove the zeroing used to cancel the drop glue. This is
possible when the glue drop itself was already removed because the
zeroing dominated the drop glue call. For example in:
````rust
pub fn bar(x: (Box<int>, int)) -> (Box<int>, int) {
x
}
````
With optimizations, this currently results in:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.memset.p0i8.i64(i8* %2, i8 0, i64 16, i32 8, i1 false)
ret void
}
````
But with lifetime intrinsics we get:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.lifetime.end(i64 16, i8* %2)
ret void
}
````
Fixes #15665
2014-05-01 12:32:07 -05:00
|
|
|
}
|
|
|
|
|
2014-11-25 20:17:11 -06:00
|
|
|
/// Schedules a (deep) drop of `val`, which is a pointer to an instance of `ty`
|
2014-01-15 13:39:08 -06:00
|
|
|
fn schedule_drop_mem(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef,
|
2014-09-29 14:11:30 -05:00
|
|
|
ty: Ty<'tcx>) {
|
2014-12-16 14:00:05 -06:00
|
|
|
if !common::type_needs_drop(self.ccx.tcx(), ty) { return; }
|
2014-04-25 03:08:02 -05:00
|
|
|
let drop = box DropValue {
|
2014-01-15 13:39:08 -06:00
|
|
|
is_immediate: false,
|
2014-12-16 14:00:05 -06:00
|
|
|
must_unwind: common::type_needs_unwind_cleanup(self.ccx, ty),
|
2014-01-15 13:39:08 -06:00
|
|
|
val: val,
|
2014-07-04 19:55:51 -05:00
|
|
|
ty: ty,
|
|
|
|
zero: false
|
2014-01-15 13:39:08 -06:00
|
|
|
};
|
|
|
|
|
2014-12-20 02:09:35 -06:00
|
|
|
debug!("schedule_drop_mem({:?}, val={}, ty={})",
|
2014-01-15 13:39:08 -06:00
|
|
|
cleanup_scope,
|
2014-09-05 11:18:53 -05:00
|
|
|
self.ccx.tn().val_to_string(val),
|
2014-03-15 15:29:34 -05:00
|
|
|
ty.repr(self.ccx.tcx()));
|
2014-01-15 13:39:08 -06:00
|
|
|
|
2014-08-27 20:46:52 -05:00
|
|
|
self.schedule_clean(cleanup_scope, drop as CleanupObj);
|
2014-01-15 13:39:08 -06:00
|
|
|
}
|
|
|
|
|
2014-11-25 20:17:11 -06:00
|
|
|
/// Schedules a (deep) drop and zero-ing of `val`, which is a pointer to an instance of `ty`
|
2014-07-04 19:55:51 -05:00
|
|
|
fn schedule_drop_and_zero_mem(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef,
|
2014-09-29 14:11:30 -05:00
|
|
|
ty: Ty<'tcx>) {
|
2014-12-16 14:00:05 -06:00
|
|
|
if !common::type_needs_drop(self.ccx.tcx(), ty) { return; }
|
2014-07-04 19:55:51 -05:00
|
|
|
let drop = box DropValue {
|
|
|
|
is_immediate: false,
|
2014-12-16 14:00:05 -06:00
|
|
|
must_unwind: common::type_needs_unwind_cleanup(self.ccx, ty),
|
2014-07-04 19:55:51 -05:00
|
|
|
val: val,
|
|
|
|
ty: ty,
|
|
|
|
zero: true
|
|
|
|
};
|
|
|
|
|
2014-12-20 02:09:35 -06:00
|
|
|
debug!("schedule_drop_and_zero_mem({:?}, val={}, ty={}, zero={})",
|
2014-07-04 19:55:51 -05:00
|
|
|
cleanup_scope,
|
2014-09-05 11:18:53 -05:00
|
|
|
self.ccx.tn().val_to_string(val),
|
2014-07-04 19:55:51 -05:00
|
|
|
ty.repr(self.ccx.tcx()),
|
|
|
|
true);
|
|
|
|
|
2014-08-27 20:46:52 -05:00
|
|
|
self.schedule_clean(cleanup_scope, drop as CleanupObj);
|
2014-07-04 19:55:51 -05:00
|
|
|
}
|
|
|
|
|
2014-11-25 20:17:11 -06:00
|
|
|
/// Schedules a (deep) drop of `val`, which is an instance of `ty`
|
2014-01-15 13:39:08 -06:00
|
|
|
fn schedule_drop_immediate(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef,
|
2014-09-29 14:11:30 -05:00
|
|
|
ty: Ty<'tcx>) {
|
2014-01-15 13:39:08 -06:00
|
|
|
|
2014-12-16 14:00:05 -06:00
|
|
|
if !common::type_needs_drop(self.ccx.tcx(), ty) { return; }
|
2014-04-25 03:08:02 -05:00
|
|
|
let drop = box DropValue {
|
2014-01-15 13:39:08 -06:00
|
|
|
is_immediate: true,
|
2014-12-16 14:00:05 -06:00
|
|
|
must_unwind: common::type_needs_unwind_cleanup(self.ccx, ty),
|
2014-01-15 13:39:08 -06:00
|
|
|
val: val,
|
2014-07-04 19:55:51 -05:00
|
|
|
ty: ty,
|
|
|
|
zero: false
|
2014-01-15 13:39:08 -06:00
|
|
|
};
|
|
|
|
|
2014-12-20 02:09:35 -06:00
|
|
|
debug!("schedule_drop_immediate({:?}, val={}, ty={:?})",
|
2014-01-15 13:39:08 -06:00
|
|
|
cleanup_scope,
|
2014-09-05 11:18:53 -05:00
|
|
|
self.ccx.tn().val_to_string(val),
|
2014-03-15 15:29:34 -05:00
|
|
|
ty.repr(self.ccx.tcx()));
|
2014-01-15 13:39:08 -06:00
|
|
|
|
2014-08-27 20:46:52 -05:00
|
|
|
self.schedule_clean(cleanup_scope, drop as CleanupObj);
|
2014-01-15 13:39:08 -06:00
|
|
|
}
|
|
|
|
|
2014-11-25 20:17:11 -06:00
|
|
|
/// Schedules a call to `free(val)`. Note that this is a shallow operation.
|
2014-01-15 13:39:08 -06:00
|
|
|
fn schedule_free_value(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef,
|
2014-05-20 23:18:10 -05:00
|
|
|
heap: Heap,
|
2014-09-29 14:11:30 -05:00
|
|
|
content_ty: Ty<'tcx>) {
|
2014-05-20 23:18:10 -05:00
|
|
|
let drop = box FreeValue { ptr: val, heap: heap, content_ty: content_ty };
|
2014-01-15 13:39:08 -06:00
|
|
|
|
2014-12-20 02:09:35 -06:00
|
|
|
debug!("schedule_free_value({:?}, val={}, heap={:?})",
|
2014-01-15 13:39:08 -06:00
|
|
|
cleanup_scope,
|
2014-09-05 11:18:53 -05:00
|
|
|
self.ccx.tn().val_to_string(val),
|
2014-01-15 13:39:08 -06:00
|
|
|
heap);
|
|
|
|
|
2014-08-27 20:46:52 -05:00
|
|
|
self.schedule_clean(cleanup_scope, drop as CleanupObj);
|
2014-01-15 13:39:08 -06:00
|
|
|
}
|
|
|
|
|
2014-11-25 20:17:11 -06:00
|
|
|
/// Schedules a call to `free(val)`. Note that this is a shallow operation.
|
2014-09-05 02:39:15 -05:00
|
|
|
fn schedule_free_slice(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef,
|
|
|
|
size: ValueRef,
|
|
|
|
align: ValueRef,
|
|
|
|
heap: Heap) {
|
|
|
|
let drop = box FreeSlice { ptr: val, size: size, align: align, heap: heap };
|
|
|
|
|
2014-12-20 02:09:35 -06:00
|
|
|
debug!("schedule_free_slice({:?}, val={}, heap={:?})",
|
2014-09-05 02:39:15 -05:00
|
|
|
cleanup_scope,
|
|
|
|
self.ccx.tn().val_to_string(val),
|
|
|
|
heap);
|
|
|
|
|
|
|
|
self.schedule_clean(cleanup_scope, drop as CleanupObj);
|
|
|
|
}
|
|
|
|
|
2014-01-15 13:39:08 -06:00
|
|
|
fn schedule_clean(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
2014-09-29 14:11:30 -05:00
|
|
|
cleanup: CleanupObj<'tcx>) {
|
2014-01-15 13:39:08 -06:00
|
|
|
match cleanup_scope {
|
|
|
|
AstScope(id) => self.schedule_clean_in_ast_scope(id, cleanup),
|
|
|
|
CustomScope(id) => self.schedule_clean_in_custom_scope(id, cleanup),
|
|
|
|
}
|
|
|
|
}
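
As a rough model of this dispatch, the sketch below (modern Rust; `Scope` and the stand-in function names are invented for illustration, not rustc's actual types) shows how an `AstScope` id is resolved by searching the stack from the top while a `CustomScope` index addresses a slot directly:

```rust
// A ScopeId is either an AST scope (found by searching the stack for its
// node id) or a custom scope (addressed directly by its stack index).
#[derive(Clone, Copy)]
enum ScopeId {
    AstScope(u32),
    CustomScope(usize),
}

struct Scope {
    ast_id: Option<u32>,          // Some(id) for AST scopes, None for custom ones
    cleanups: Vec<&'static str>,  // stand-in for boxed Cleanup trait objects
}

fn schedule_clean(scopes: &mut Vec<Scope>, target: ScopeId, cleanup: &'static str) {
    match target {
        // AST scopes are looked up from the top of the stack downward.
        ScopeId::AstScope(id) => {
            let s = scopes.iter_mut().rev()
                .find(|s| s.ast_id == Some(id))
                .expect("no cleanup scope found");
            s.cleanups.push(cleanup);
        }
        // Custom scopes are addressed by index directly.
        ScopeId::CustomScope(ix) => scopes[ix].cleanups.push(cleanup),
    }
}

fn main() {
    let mut scopes = vec![
        Scope { ast_id: Some(22), cleanups: vec![] },
        Scope { ast_id: None, cleanups: vec![] },
    ];
    schedule_clean(&mut scopes, ScopeId::AstScope(22), "drop x");
    schedule_clean(&mut scopes, ScopeId::CustomScope(1), "lifetime end");
    println!("{} {}", scopes[0].cleanups.len(), scopes[1].cleanups.len());
}
```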

    /// Schedules a cleanup to occur upon exit from the AST scope with id `cleanup_scope`, which
    /// must be one of the scopes on the stack.
    fn schedule_clean_in_ast_scope(&self,
                                   cleanup_scope: ast::NodeId,
                                   cleanup: CleanupObj<'tcx>) {
        debug!("schedule_clean_in_ast_scope(cleanup_scope={})",
               cleanup_scope);

        for scope in self.scopes.borrow_mut().iter_mut().rev() {
            if scope.kind.is_ast_with_id(cleanup_scope) {
                scope.cleanups.push(cleanup);
                scope.clear_cached_exits();
                return;
            } else {
                // will be adding a cleanup to some enclosing scope
                scope.clear_cached_exits();
            }
        }

        self.ccx.sess().bug(
            &format!("no cleanup scope {} found",
                     self.ccx.tcx().map.node_to_string(cleanup_scope))[]);
    }
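
The key subtlety in the walk above is that every scope passed over must invalidate its cached exit blocks, since those blocks were generated without the cleanup now being added to an enclosing scope. A small stand-alone sketch of that rule (illustrative names, not rustc's actual types):

```rust
// Sketch of schedule_clean_in_ast_scope's invalidation rule: any scope
// younger than the target had its cached exit blocks generated without the
// new cleanup, so those caches are cleared as the walk passes them.
struct Scope {
    id: u32,
    cleanups: Vec<&'static str>,
    cached_exits: Vec<&'static str>, // stand-in for cached exit basic blocks
}

fn schedule_in_ast_scope(scopes: &mut [Scope], target: u32, cleanup: &'static str) {
    for scope in scopes.iter_mut().rev() {
        if scope.id == target {
            scope.cleanups.push(cleanup);
            scope.cached_exits.clear(); // this scope's own exits are stale too
            return;
        }
        // Passing over a younger scope: its cached exits no longer account
        // for the cleanup being added to an enclosing scope.
        scope.cached_exits.clear();
    }
    panic!("no cleanup scope {} found", target);
}

fn main() {
    let mut scopes = vec![
        Scope { id: 22, cleanups: vec![], cached_exits: vec![] },
        Scope { id: 23, cleanups: vec![], cached_exits: vec!["break_blk"] },
        Scope { id: 24, cleanups: vec![], cached_exits: vec!["ret_blk"] },
    ];
    schedule_in_ast_scope(&mut scopes, 23, "drop tmp");
    // Scope 24 (younger) lost its cache; scope 22 (older) was never visited.
    println!("{} {} {}",
             scopes[2].cached_exits.len(),
             scopes[1].cleanups.len(),
             scopes[1].cached_exits.len());
}
```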

    /// Schedules a cleanup to occur in the top-most scope, which must be a temporary scope.
    fn schedule_clean_in_custom_scope(&self,
                                      custom_scope: CustomScopeIndex,
                                      cleanup: CleanupObj<'tcx>) {
        debug!("schedule_clean_in_custom_scope(custom_scope={})",
               custom_scope.index);

        assert!(self.is_valid_custom_scope(custom_scope));

        let mut scopes = self.scopes.borrow_mut();
        let scope = &mut (*scopes)[custom_scope.index];
        scope.cleanups.push(cleanup);
        scope.clear_cached_exits();
    }

    /// Returns true if there are pending cleanups that should execute on panic.
    fn needs_invoke(&self) -> bool {
        self.scopes.borrow().iter().rev().any(|s| s.needs_invoke())
    }

    /// Returns a basic block to branch to in the event of a panic. This block will run the panic
    /// cleanups and eventually invoke the LLVM `Resume` instruction.
    fn get_landing_pad(&'blk self) -> BasicBlockRef {
        let _icx = base::push_ctxt("get_landing_pad");

        debug!("get_landing_pad");

        let orig_scopes_len = self.scopes_len();
        assert!(orig_scopes_len > 0);

        // Remove any scopes that do not have cleanups on panic:
        let mut popped_scopes = vec!();
        while !self.top_scope(|s| s.needs_invoke()) {
            debug!("top scope does not need invoke");
            popped_scopes.push(self.pop_scope());
        }

        // Check for an existing landing pad in the new topmost scope:
        let llbb = self.get_or_create_landing_pad();

        // Push the scopes we removed back on:
        loop {
            match popped_scopes.pop() {
                Some(scope) => self.push_scope(scope),
                None => break
            }
        }

        assert_eq!(self.scopes_len(), orig_scopes_len);

        return llbb;
    }
}

impl<'blk, 'tcx> CleanupHelperMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
    /// Returns the id of the current top-most AST scope, if any.
    fn top_ast_scope(&self) -> Option<ast::NodeId> {
        for scope in self.scopes.borrow().iter().rev() {
            match scope.kind {
                CustomScopeKind | LoopScopeKind(..) => {}
                AstScopeKind(i) => {
                    return Some(i);
                }
            }
        }
        None
    }

    fn top_nonempty_cleanup_scope(&self) -> Option<uint> {
        self.scopes.borrow().iter().rev().position(|s| !s.cleanups.is_empty())
    }

    fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        self.is_valid_custom_scope(custom_scope) &&
            custom_scope.index == self.scopes.borrow().len() - 1
    }

    fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        let scopes = self.scopes.borrow();
        custom_scope.index < scopes.len() &&
            (*scopes)[custom_scope.index].kind.is_temp()
    }

    /// Generates the cleanups for `scope` into `bcx`
    fn trans_scope_cleanups(&self, // cannot borrow self, will recurse
                            bcx: Block<'blk, 'tcx>,
                            scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx> {
        let mut bcx = bcx;
        if !bcx.unreachable.get() {
            for cleanup in scope.cleanups.iter().rev() {
                bcx = cleanup.trans(bcx, scope.debug_loc);
            }
        }
        bcx
    }

    fn scopes_len(&self) -> uint {
        self.scopes.borrow().len()
    }

    fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>) {
        self.scopes.borrow_mut().push(scope)
    }

    fn pop_scope(&self) -> CleanupScope<'blk, 'tcx> {
        debug!("popping cleanup scope {}, {} scopes remaining",
               self.top_scope(|s| s.block_name("")),
               self.scopes_len() - 1);

        self.scopes.borrow_mut().pop().unwrap()
    }

    fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R {
        f(self.scopes.borrow().last().unwrap())
    }

    /// Used when the caller wishes to jump to an early exit, such as a return, break, continue,
    /// or unwind. This function will generate all cleanups between the top of the stack and the
    /// exit `label` and return a basic block that the caller can branch to.
    ///
    /// For example, if the current stack of cleanups were as follows:
    ///
    ///      AST 22
    ///      Custom 1
    ///      AST 23
    ///      Loop 23
    ///      Custom 2
    ///      AST 24
    ///
    /// and the `label` specifies a break from `Loop 23`, then this function would generate a
    /// series of basic blocks as follows:
    ///
    ///      Cleanup(AST 24) -> Cleanup(Custom 2) -> break_blk
    ///
    /// where `break_blk` is the block specified in `Loop 23` as the target for breaks. The
    /// return value would be the first basic block in that sequence (`Cleanup(AST 24)`). The
    /// caller could then branch to `Cleanup(AST 24)` and it will perform all cleanups and
    /// finally branch to the `break_blk`.
    fn trans_cleanups_to_exit_scope(&'blk self,
                                    label: EarlyExitLabel)
                                    -> BasicBlockRef {
        debug!("trans_cleanups_to_exit_scope label={:?} scopes={}",
               label, self.scopes_len());

        let orig_scopes_len = self.scopes_len();
        let mut prev_llbb;
        let mut popped_scopes = vec!();

        // First we pop off all the cleanup stacks that are
        // traversed until the exit is reached, pushing them
        // onto the side vector `popped_scopes`. No code is
        // generated at this time.
        //
        // So, continuing the example from above, we would wind up
        // with a `popped_scopes` vector of `[AST 24, Custom 2]`.
        // (Presuming that there are no cached exits)
        loop {
            if self.scopes_len() == 0 {
                match label {
                    UnwindExit => {
                        // Generate a block that will `Resume`.
                        let prev_bcx = self.new_block(true, "resume", None);
                        let personality = self.personality.get().expect(
                            "create_landing_pad() should have set this");
                        build::Resume(prev_bcx,
                                      build::Load(prev_bcx, personality));
                        prev_llbb = prev_bcx.llbb;
                        break;
                    }

                    ReturnExit => {
                        prev_llbb = self.get_llreturn();
                        break;
                    }

                    LoopExit(id, _) => {
                        self.ccx.sess().bug(&format!(
                            "cannot exit from scope {}, \
                             not in scope", id)[]);
                    }
                }
            }

            // Check if we have already cached the unwinding of this
            // scope for this label. If so, we can stop popping scopes
            // and branch to the cached label, since it contains the
            // cleanups for any subsequent scopes.
            match self.top_scope(|s| s.cached_early_exit(label)) {
                Some(cleanup_block) => {
                    prev_llbb = cleanup_block;
                    break;
                }
                None => { }
            }

            // Pop off the scope, since we will be generating
            // unwinding code for it. If we are searching for a loop exit,
            // and this scope is that loop, then stop popping and set
            // `prev_llbb` to the appropriate exit block from the loop.
            popped_scopes.push(self.pop_scope());
            let scope = popped_scopes.last().unwrap();
            match label {
                UnwindExit | ReturnExit => { }
                LoopExit(id, exit) => {
                    match scope.kind.early_exit_block(id, exit) {
                        Some(exitllbb) => {
                            prev_llbb = exitllbb;
                            break;
                        }

                        None => { }
                    }
                }
            }
        }

        debug!("trans_cleanups_to_exit_scope: popped {} scopes",
               popped_scopes.len());

        // Now push the popped scopes back on. As we go,
        // we track in `prev_llbb` the exit to which this scope
        // should branch when it's done.
        //
        // So, continuing with our example, we will start out with
        // `prev_llbb` being set to `break_blk` (or possibly a cached
        // early exit). We will then pop the scopes from `popped_scopes`
        // and generate a basic block for each one, prepending it in the
        // series and updating `prev_llbb`. So we begin by popping `Custom 2`
        // and generating `Cleanup(Custom 2)`. We make `Cleanup(Custom 2)`
        // branch to `prev_llbb == break_blk`, giving us a sequence like:
        //
        //     Cleanup(Custom 2) -> prev_llbb
        //
        // We then pop `AST 24` and repeat the process, giving us the sequence:
        //
        //     Cleanup(AST 24) -> Cleanup(Custom 2) -> prev_llbb
        //
        // At this point, `popped_scopes` is empty, and so the final block
        // that we return to the user is `Cleanup(AST 24)`.
        while !popped_scopes.is_empty() {
            let mut scope = popped_scopes.pop().unwrap();

            if scope.cleanups.iter().any(|c| cleanup_is_suitable_for(&**c, label))
            {
                let name = scope.block_name("clean");
                debug!("generating cleanups for {}", name);
                let bcx_in = self.new_block(label.is_unwind(),
                                            &name[],
                                            None);
                let mut bcx_out = bcx_in;
                for cleanup in scope.cleanups.iter().rev() {
                    if cleanup_is_suitable_for(&**cleanup, label) {
                        bcx_out = cleanup.trans(bcx_out,
                                                scope.debug_loc);
                    }
                }
                build::Br(bcx_out, prev_llbb, DebugLoc::None);
                prev_llbb = bcx_in.llbb;
            } else {
                debug!("no suitable cleanups in {}",
                       scope.block_name("clean"));
            }

            scope.add_cached_early_exit(label, prev_llbb);
            self.push_scope(scope);
        }

        debug!("trans_cleanups_to_exit_scope: prev_llbb={:?}", prev_llbb);

        assert_eq!(self.scopes_len(), orig_scopes_len);
        prev_llbb
    }
|
|
|
|
|
2014-11-25 20:17:11 -06:00
|
|
|
    /// Creates a landing pad for the top scope, if one does not exist. The
    /// landing pad will perform all cleanups necessary for an unwind and then
    /// `resume` to continue error propagation:
    ///
    ///     landing_pad -> ... cleanups ... -> [resume]
    ///
    /// (The cleanups and resume instruction are created by
    /// `trans_cleanups_to_exit_scope()`, not in this function itself.)
    fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef {
        let pad_bcx;

        debug!("get_or_create_landing_pad");

        // Check if a landing pad block exists; if not, create one.
        {
            let mut scopes = self.scopes.borrow_mut();
            let last_scope = scopes.last_mut().unwrap();
            match last_scope.cached_landing_pad {
                Some(llbb) => { return llbb; }
                None => {
                    let name = last_scope.block_name("unwind");
                    pad_bcx = self.new_block(true, &name[], None);
                    last_scope.cached_landing_pad = Some(pad_bcx.llbb);
                }
            }
        }

        // The landing pad return type (the type being propagated): a pair of
        // the exception object pointer (i8*) and a selector value (i32). The
        // exact meaning is determined by the personality function; this
        // layout matches the one used in the LLVM EH proposal examples.
        let llretty = Type::struct_(self.ccx,
                                    &[Type::i8p(self.ccx), Type::i32(self.ccx)],
                                    false);

        // The exception handling personality function.
        //
        // If our compilation unit has the `eh_personality` lang item somewhere
        // within it, then we just need to translate that. Otherwise, we're
        // building an rlib which will depend on some upstream implementation of
        // this function, so we just codegen a generic reference to it. We don't
        // specify any of the types for the function, we just make it a symbol
        // that LLVM can later use.
        let llpersonality = match pad_bcx.tcx().lang_items.eh_personality() {
            Some(def_id) => {
                callee::trans_fn_ref(pad_bcx.ccx(), def_id, ExprId(0),
                                     pad_bcx.fcx.param_substs).val
            }
            None => {
                let mut personality = self.ccx.eh_personality().borrow_mut();
                match *personality {
                    Some(llpersonality) => llpersonality,
                    None => {
                        let fty = Type::variadic_func(&[], &Type::i32(self.ccx));
                        let f = base::decl_cdecl_fn(self.ccx,
                                                    "rust_eh_personality",
                                                    fty,
                                                    self.ccx.tcx().types.i32);
                        *personality = Some(f);
                        f
                    }
                }
            }
        };

        // The only landing pad clause will be 'cleanup'
        let llretval = build::LandingPad(pad_bcx, llretty, llpersonality, 1);

        // The landing pad block is a cleanup
        build::SetCleanup(pad_bcx, llretval);

        // We store the retval in a function-central alloca, so that calls to
        // Resume can find it.
        match self.personality.get() {
            Some(addr) => {
                build::Store(pad_bcx, llretval, addr);
            }
            None => {
                let addr = base::alloca(pad_bcx, common::val_ty(llretval), "");
                self.personality.set(Some(addr));
                build::Store(pad_bcx, llretval, addr);
            }
        }

        // Generate the cleanup block and branch to it.
        let cleanup_llbb = self.trans_cleanups_to_exit_scope(UnwindExit);
        build::Br(pad_bcx, cleanup_llbb, DebugLoc::None);

        return pad_bcx.llbb;
    }
}

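// Illustrative sketch (not part of the original source): the IR produced by
// `get_or_create_landing_pad` has roughly this shape, assuming a hypothetical
// block named "unwind_ast_7_" and a personality slot alloca:
//
//     unwind_ast_7_:
//         %val = landingpad { i8*, i32 } personality ... cleanup
//         store { i8*, i32 } %val, { i8*, i32 }* %personality_slot
//         br label %cleanup_blocks   ; built by trans_cleanups_to_exit_scope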
impl<'blk, 'tcx> CleanupScope<'blk, 'tcx> {
    fn new(kind: CleanupScopeKind<'blk, 'tcx>,
           debug_loc: DebugLoc)
           -> CleanupScope<'blk, 'tcx> {
        CleanupScope {
            kind: kind,
            debug_loc: debug_loc,
            cleanups: vec!(),
            cached_early_exits: vec!(),
            cached_landing_pad: None,
        }
    }

    fn clear_cached_exits(&mut self) {
        self.cached_early_exits = vec!();
        self.cached_landing_pad = None;
    }

    fn cached_early_exit(&self,
                         label: EarlyExitLabel)
                         -> Option<BasicBlockRef> {
        self.cached_early_exits.iter().
            find(|e| e.label == label).
            map(|e| e.cleanup_block)
    }

    fn add_cached_early_exit(&mut self,
                             label: EarlyExitLabel,
                             blk: BasicBlockRef) {
        self.cached_early_exits.push(
            CachedEarlyExit { label: label,
                              cleanup_block: blk });
    }

    /// True if this scope has cleanups that need unwinding
    fn needs_invoke(&self) -> bool {
        self.cached_landing_pad.is_some() ||
            self.cleanups.iter().any(|c| c.must_unwind())
    }

    /// Returns a suitable name to use for the basic block that handles this
    /// cleanup scope
    fn block_name(&self, prefix: &str) -> String {
        match self.kind {
            CustomScopeKind => format!("{}_custom_", prefix),
            AstScopeKind(id) => format!("{}_ast_{}_", prefix, id),
            LoopScopeKind(id, _) => format!("{}_loop_{}_", prefix, id),
        }
    }

    pub fn drop_non_lifetime_clean(&mut self) {
        self.cleanups.retain(|c| c.is_lifetime_end());
    }
}

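// Example (illustrative, not part of the original source): for an AST scope
// with node id 42, `block_name("clean")` yields "clean_ast_42_", which then
// serves as the label of the basic block that runs that scope's cleanups.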
impl<'blk, 'tcx> CleanupScopeKind<'blk, 'tcx> {
    fn is_temp(&self) -> bool {
        match *self {
            CustomScopeKind => true,
            LoopScopeKind(..) | AstScopeKind(..) => false,
        }
    }

    fn is_ast_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | LoopScopeKind(..) => false,
            AstScopeKind(i) => i == id
        }
    }

    fn is_loop_with_id(&self, id: ast::NodeId) -> bool {
        match *self {
            CustomScopeKind | AstScopeKind(..) => false,
            LoopScopeKind(i, _) => i == id
        }
    }

    /// If this is a loop scope with id `id`, return the early exit block
    /// `exit`, else `None`
    fn early_exit_block(&self,
                        id: ast::NodeId,
                        exit: uint) -> Option<BasicBlockRef> {
        match *self {
            LoopScopeKind(i, ref exits) if id == i => Some(exits[exit].llbb),
            _ => None,
        }
    }
}

impl EarlyExitLabel {
    fn is_unwind(&self) -> bool {
        match *self {
            UnwindExit => true,
            _ => false
        }
    }
}

///////////////////////////////////////////////////////////////////////////
// Cleanup types

#[derive(Copy)]
pub struct DropValue<'tcx> {
    is_immediate: bool,
    must_unwind: bool,
    val: ValueRef,
    ty: Ty<'tcx>,
    zero: bool
}

impl<'tcx> Cleanup<'tcx> for DropValue<'tcx> {
    fn must_unwind(&self) -> bool {
        self.must_unwind
    }

    fn clean_on_unwind(&self) -> bool {
        self.must_unwind
    }

    fn is_lifetime_end(&self) -> bool {
        false
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        let bcx = if self.is_immediate {
            glue::drop_ty_immediate(bcx, self.val, self.ty, debug_loc)
        } else {
            glue::drop_ty(bcx, self.val, self.ty, debug_loc)
        };
        if self.zero {
            base::zero_mem(bcx, self.val, self.ty);
        }
        bcx
    }
}

#[derive(Copy, Debug)]
pub enum Heap {
    HeapExchange
}

#[derive(Copy)]
pub struct FreeValue<'tcx> {
    ptr: ValueRef,
    heap: Heap,
    content_ty: Ty<'tcx>
}

impl<'tcx> Cleanup<'tcx> for FreeValue<'tcx> {
    fn must_unwind(&self) -> bool {
        true
    }

    fn clean_on_unwind(&self) -> bool {
        true
    }

    fn is_lifetime_end(&self) -> bool {
        false
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        match self.heap {
            HeapExchange => {
                glue::trans_exchange_free_ty(bcx,
                                             self.ptr,
                                             self.content_ty,
                                             debug_loc)
            }
        }
    }
}

#[derive(Copy)]
pub struct FreeSlice {
    ptr: ValueRef,
    size: ValueRef,
    align: ValueRef,
    heap: Heap,
}

impl<'tcx> Cleanup<'tcx> for FreeSlice {
    fn must_unwind(&self) -> bool {
        true
    }

    fn clean_on_unwind(&self) -> bool {
        true
    }

    fn is_lifetime_end(&self) -> bool {
        false
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        match self.heap {
            HeapExchange => {
                glue::trans_exchange_free_dyn(bcx,
                                              self.ptr,
                                              self.size,
                                              self.align,
                                              debug_loc)
            }
        }
    }
}

#[derive(Copy)]
pub struct LifetimeEnd {
    ptr: ValueRef,
}

impl<'tcx> Cleanup<'tcx> for LifetimeEnd {
    fn must_unwind(&self) -> bool {
        false
    }

    fn clean_on_unwind(&self) -> bool {
        true
    }

    fn is_lifetime_end(&self) -> bool {
        true
    }

    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        debug_loc.apply(bcx.fcx);
        base::call_lifetime_end(bcx, self.ptr);
        bcx
    }
}

pub fn temporary_scope(tcx: &ty::ctxt,
                       id: ast::NodeId)
                       -> ScopeId {
    match tcx.region_maps.temporary_scope(id) {
        Some(scope) => {
            let r = AstScope(scope.node_id());
            debug!("temporary_scope({}) = {:?}", id, r);
            r
        }
        None => {
            tcx.sess.bug(&format!("no temporary scope available for expr {}",
                                  id)[])
        }
    }
}

pub fn var_scope(tcx: &ty::ctxt,
                 id: ast::NodeId)
                 -> ScopeId {
    let r = AstScope(tcx.region_maps.var_scope(id).node_id());
    debug!("var_scope({}) = {:?}", id, r);
    r
}

fn cleanup_is_suitable_for(c: &Cleanup,
                           label: EarlyExitLabel) -> bool {
    !label.is_unwind() || c.clean_on_unwind()
}

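// Truth table for `cleanup_is_suitable_for` (illustrative, not part of the
// original source): a cleanup is skipped only when we are taking an unwind
// exit and the cleanup does not run during unwinding.
//
//     label.is_unwind() | c.clean_on_unwind() | suitable?
//     ------------------+---------------------+----------
//     false             | false               | true
//     false             | true                | true
//     true              | false               | false
//     true              | true                | true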
///////////////////////////////////////////////////////////////////////////
// These traits just exist to put the methods into this file.

pub trait CleanupMethods<'blk, 'tcx> {
|
2014-12-11 06:53:30 -06:00
|
|
|
fn push_ast_cleanup_scope(&self, id: NodeIdAndSpan);
|
2014-01-15 13:39:08 -06:00
|
|
|
fn push_loop_cleanup_scope(&self,
|
debuginfo: Make sure that all calls to drop glue are associated with debug locations.
This commit makes rustc emit debug locations for all call
and invoke statements in LLVM IR, if they are contained
within a function that debuginfo is enabled for. This is
important because LLVM does not handle the case where a
function body containing debuginfo is inlined into another
function with debuginfo, but the inlined call statement
does not have a debug location. In this case, LLVM will
not know where (in terms of source code coordinates) the
function was inlined to and we end up with some statements
still linked to the source locations in there original,
non-inlined function without any indication that they are
indeed an inline-copy. Later, when generating DWARF from
the IR, LLVM will interpret this as corrupt IR and abort.
Unfortunately, the undesirable case described above can
still occur when using LTO. If there is a crate compiled
without debuginfo calling into a crate compiled with
debuginfo, we again end up with the conditions triggering
the error. This is why some LTO tests still fail with the
dreaded assertion, if the standard library was built with
debuginfo enabled.
That is, `RUSTFLAGS_STAGE2=-g make rustc-stage2` will
succeed but `RUSTFLAGS_STAGE2=-g make check` will still
fail after this commit has been merged. This is a problem
that has to be dealt with separately.
Fixes #17201
Fixes #15816
Fixes #15156
2014-09-24 01:49:38 -05:00
|
|
|
id: ast::NodeId,
|
2014-12-31 22:40:24 -06:00
|
|
|
exits: [Block<'blk, 'tcx>; EXIT_MAX]);
|
2014-01-15 13:39:08 -06:00
|
|
|
fn push_custom_cleanup_scope(&self) -> CustomScopeIndex;
|
debuginfo: Make sure that all calls to drop glue are associated with debug locations.
This commit makes rustc emit debug locations for all call
and invoke statements in LLVM IR, if they are contained
within a function that debuginfo is enabled for. This is
important because LLVM does not handle the case where a
function body containing debuginfo is inlined into another
function with debuginfo, but the inlined call statement
does not have a debug location. In this case, LLVM will
not know where (in terms of source code coordinates) the
function was inlined to and we end up with some statements
still linked to the source locations in there original,
non-inlined function without any indication that they are
indeed an inline-copy. Later, when generating DWARF from
the IR, LLVM will interpret this as corrupt IR and abort.
Unfortunately, the undesirable case described above can
still occur when using LTO. If there is a crate compiled
without debuginfo calling into a crate compiled with
debuginfo, we again end up with the conditions triggering
the error. This is why some LTO tests still fail with the
dreaded assertion, if the standard library was built with
debuginfo enabled.
That is, `RUSTFLAGS_STAGE2=-g make rustc-stage2` will
succeed but `RUSTFLAGS_STAGE2=-g make check` will still
fail after this commit has been merged. This is a problem
that has to be dealt with separately.
Fixes #17201
Fixes #15816
Fixes #15156
2014-09-24 01:49:38 -05:00
|
|
|
fn push_custom_cleanup_scope_with_debug_loc(&self,
|
2014-12-11 06:53:30 -06:00
|
|
|
debug_loc: NodeIdAndSpan)
|
debuginfo: Make sure that all calls to drop glue are associated with debug locations.
This commit makes rustc emit debug locations for all call
and invoke statements in LLVM IR, if they are contained
within a function that debuginfo is enabled for. This is
important because LLVM does not handle the case where a
function body containing debuginfo is inlined into another
function with debuginfo, but the inlined call statement
does not have a debug location. In this case, LLVM will
not know where (in terms of source code coordinates) the
function was inlined to and we end up with some statements
still linked to the source locations in there original,
non-inlined function without any indication that they are
indeed an inline-copy. Later, when generating DWARF from
the IR, LLVM will interpret this as corrupt IR and abort.
Unfortunately, the undesirable case described above can
still occur when using LTO. If there is a crate compiled
without debuginfo calling into a crate compiled with
debuginfo, we again end up with the conditions triggering
the error. This is why some LTO tests still fail with the
dreaded assertion, if the standard library was built with
debuginfo enabled.
That is, `RUSTFLAGS_STAGE2=-g make rustc-stage2` will
succeed but `RUSTFLAGS_STAGE2=-g make check` will still
fail after this commit has been merged. This is a problem
that has to be dealt with separately.
Fixes #17201
Fixes #15816
Fixes #15156
2014-09-24 01:49:38 -05:00
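The failure mode the commit describes can be illustrated with a small IR fragment (a hedged sketch in modern LLVM syntax; the function names and metadata numbers are made up, and the 2014-era metadata encoding differed in detail):

````llvm
; A call inside a debuginfo-enabled function should carry a location:
define void @caller() {
  call void @callee(), !dbg !7      ; !7 records line/column + scope
  ret void
}
; When @callee (itself compiled with debuginfo) is inlined here, the
; !dbg attachment on the call lets LLVM build the "inlinedAt" chain.
; Without it, the inlined instructions keep @callee's own source
; locations with no link back to @caller, which the DWARF backend
; treats as malformed IR.
````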
|
|
|
-> CustomScopeIndex;
|
2014-01-15 13:39:08 -06:00
|
|
|
fn pop_and_trans_ast_cleanup_scope(&self,
|
2014-12-11 06:53:30 -06:00
|
|
|
bcx: Block<'blk, 'tcx>,
|
|
|
|
cleanup_scope: ast::NodeId)
|
|
|
|
-> Block<'blk, 'tcx>;
|
2014-01-15 13:39:08 -06:00
|
|
|
fn pop_loop_cleanup_scope(&self,
|
|
|
|
cleanup_scope: ast::NodeId);
|
|
|
|
fn pop_custom_cleanup_scope(&self,
|
|
|
|
custom_scope: CustomScopeIndex);
|
|
|
|
fn pop_and_trans_custom_cleanup_scope(&self,
|
2014-09-06 11:13:04 -05:00
|
|
|
bcx: Block<'blk, 'tcx>,
|
2014-01-15 13:39:08 -06:00
|
|
|
custom_scope: CustomScopeIndex)
|
2014-09-06 11:13:04 -05:00
|
|
|
-> Block<'blk, 'tcx>;
|
2014-01-15 13:39:08 -06:00
|
|
|
fn top_loop_scope(&self) -> ast::NodeId;
|
2014-09-06 11:13:04 -05:00
|
|
|
fn normal_exit_block(&'blk self,
|
2014-01-15 13:39:08 -06:00
|
|
|
cleanup_scope: ast::NodeId,
|
|
|
|
exit: uint) -> BasicBlockRef;
|
2014-09-06 11:13:04 -05:00
|
|
|
fn return_exit_block(&'blk self) -> BasicBlockRef;
|
Emit LLVM lifetime intrinsics to improve stack usage and codegen in general
Lifetime intrinsics help to reduce stack usage, because LLVM can apply
stack coloring to reuse the stack slots of dead allocas for new ones.
For example, these functions now both use the same amount of stack, while
previously `bar()` used five times as much as `foo()`:
````rust
fn foo() {
    println!("{}", 5);
}

fn bar() {
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
}
````
On top of that, LLVM can also optimize out certain operations when it
knows that memory is dead after a certain point. For example, it can
sometimes remove the zeroing used to cancel the drop glue. This is
possible when the drop glue call itself was already removed because
the zeroing dominated it. For example, in:
````rust
pub fn bar(x: (Box<int>, int)) -> (Box<int>, int) {
    x
}
````
With optimizations, this currently results in:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
  %2 = bitcast { i64*, i64 }* %1 to i8*
  %3 = bitcast { i64*, i64 }* %0 to i8*
  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
  tail call void @llvm.memset.p0i8.i64(i8* %2, i8 0, i64 16, i32 8, i1 false)
  ret void
}
````
But with lifetime intrinsics we get:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
  %2 = bitcast { i64*, i64 }* %1 to i8*
  %3 = bitcast { i64*, i64 }* %0 to i8*
  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
  tail call void @llvm.lifetime.end(i64 16, i8* %2)
  ret void
}
````
Fixes #15665
2014-05-01 12:32:07 -05:00
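The stack-coloring effect can be sketched with a minimal fragment (illustrative only; value names are invented, using the intrinsic signatures shown in the commit message above):

````llvm
; Two allocas whose live ranges do not overlap:
%a = alloca [64 x i8]
%b = alloca [64 x i8]
%pa = bitcast [64 x i8]* %a to i8*
call void @llvm.lifetime.start(i64 64, i8* %pa)
; ... %a is live here ...
call void @llvm.lifetime.end(i64 64, i8* %pa)
%pb = bitcast [64 x i8]* %b to i8*
call void @llvm.lifetime.start(i64 64, i8* %pb)
; ... %b is live here ...
call void @llvm.lifetime.end(i64 64, i8* %pb)
; Because %a is marked dead before %b becomes live, stack coloring
; may place both in the same 64-byte slot.
````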
|
|
|
fn schedule_lifetime_end(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef);
|
2014-01-15 13:39:08 -06:00
|
|
|
fn schedule_drop_mem(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef,
|
2014-09-29 14:11:30 -05:00
|
|
|
ty: Ty<'tcx>);
|
2014-07-04 19:55:51 -05:00
|
|
|
fn schedule_drop_and_zero_mem(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef,
|
2014-09-29 14:11:30 -05:00
|
|
|
ty: Ty<'tcx>);
|
2014-01-15 13:39:08 -06:00
|
|
|
fn schedule_drop_immediate(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef,
|
2014-09-29 14:11:30 -05:00
|
|
|
ty: Ty<'tcx>);
|
2014-01-15 13:39:08 -06:00
|
|
|
fn schedule_free_value(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef,
|
2014-05-20 23:18:10 -05:00
|
|
|
heap: Heap,
|
2014-09-29 14:11:30 -05:00
|
|
|
content_ty: Ty<'tcx>);
|
2014-09-05 02:39:15 -05:00
|
|
|
fn schedule_free_slice(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
|
|
|
val: ValueRef,
|
|
|
|
size: ValueRef,
|
|
|
|
align: ValueRef,
|
|
|
|
heap: Heap);
|
2014-01-15 13:39:08 -06:00
|
|
|
fn schedule_clean(&self,
|
|
|
|
cleanup_scope: ScopeId,
|
2014-09-29 14:11:30 -05:00
|
|
|
cleanup: CleanupObj<'tcx>);
|
2014-01-15 13:39:08 -06:00
|
|
|
fn schedule_clean_in_ast_scope(&self,
|
|
|
|
cleanup_scope: ast::NodeId,
|
2014-09-29 14:11:30 -05:00
|
|
|
cleanup: CleanupObj<'tcx>);
|
2014-01-15 13:39:08 -06:00
|
|
|
fn schedule_clean_in_custom_scope(&self,
|
|
|
|
custom_scope: CustomScopeIndex,
|
2014-09-29 14:11:30 -05:00
|
|
|
cleanup: CleanupObj<'tcx>);
|
2014-01-15 13:39:08 -06:00
|
|
|
fn needs_invoke(&self) -> bool;
|
2014-09-06 11:13:04 -05:00
|
|
|
fn get_landing_pad(&'blk self) -> BasicBlockRef;
|
2014-01-15 13:39:08 -06:00
|
|
|
}
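The push/schedule/pop discipline behind the methods above can be modeled with a toy scope stack. This is a hedged sketch in plain Rust: the type and method names (`Ctxt`, `push_custom_scope`, `schedule_clean`, `pop_and_run`, `pop_only`) are hypothetical stand-ins, while the real trait operates on LLVM basic blocks through rustc's interior-mutable function context. `pop_and_run` mirrors `pop_and_trans_custom_cleanup_scope` (normal exit: cleanups execute, newest first), and `pop_only` mirrors `pop_custom_cleanup_scope` (the value was consumed successfully, so its panic guard is discarded without running).

````rust
use std::cell::RefCell;

struct Scope {
    cleanups: Vec<&'static str>,
}

struct Ctxt {
    scopes: RefCell<Vec<Scope>>,
}

impl Ctxt {
    // Analogue of push_custom_cleanup_scope: returns an index token.
    fn push_custom_scope(&self) -> usize {
        let mut s = self.scopes.borrow_mut();
        s.push(Scope { cleanups: Vec::new() });
        s.len() - 1
    }
    // Analogue of schedule_clean_in_custom_scope.
    fn schedule_clean(&self, idx: usize, c: &'static str) {
        self.scopes.borrow_mut()[idx].cleanups.push(c);
    }
    // Pop *with* running cleanups (normal exit), newest-first.
    fn pop_and_run(&self, idx: usize, log: &mut Vec<&'static str>) {
        let mut scope = self.scopes.borrow_mut().remove(idx);
        while let Some(c) = scope.cleanups.pop() {
            log.push(c);
        }
    }
    // Pop *without* running cleanups: the guarded value was consumed.
    fn pop_only(&self, idx: usize) {
        self.scopes.borrow_mut().remove(idx);
    }
}

fn demo() -> Vec<&'static str> {
    let cx = Ctxt { scopes: RefCell::new(Vec::new()) };
    let outer = cx.push_custom_scope();
    cx.schedule_clean(outer, "drop outer temp");
    let inner = cx.push_custom_scope();
    cx.schedule_clean(inner, "free guarded value");
    cx.pop_only(inner); // success path: skip the panic guard
    let mut log = Vec::new();
    cx.pop_and_run(outer, &mut log);
    log
}

fn main() {
    let log = demo();
    assert_eq!(log, vec!["drop outer temp"]);
    println!("{:?}", log);
}
````

The index returned by `push_custom_scope` plays the role of `CustomScopeIndex`: the caller must pop scopes in LIFO order, which the real code enforces with `is_valid_to_pop_custom_scope`.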
|
|
|
|
|
2014-09-06 11:13:04 -05:00
|
|
|
trait CleanupHelperMethods<'blk, 'tcx> {
|
2014-01-15 13:39:08 -06:00
|
|
|
fn top_ast_scope(&self) -> Option<ast::NodeId>;
|
|
|
|
fn top_nonempty_cleanup_scope(&self) -> Option<uint>;
|
|
|
|
fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
|
|
|
|
fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool;
|
|
|
|
fn trans_scope_cleanups(&self,
|
2014-09-06 11:13:04 -05:00
|
|
|
bcx: Block<'blk, 'tcx>,
|
|
|
|
scope: &CleanupScope<'blk, 'tcx>) -> Block<'blk, 'tcx>;
|
|
|
|
fn trans_cleanups_to_exit_scope(&'blk self,
|
2014-01-15 13:39:08 -06:00
|
|
|
label: EarlyExitLabel)
|
|
|
|
-> BasicBlockRef;
|
2014-09-06 11:13:04 -05:00
|
|
|
fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef;
|
2014-01-15 13:39:08 -06:00
|
|
|
fn scopes_len(&self) -> uint;
|
2014-09-06 11:13:04 -05:00
|
|
|
fn push_scope(&self, scope: CleanupScope<'blk, 'tcx>);
|
|
|
|
fn pop_scope(&self) -> CleanupScope<'blk, 'tcx>;
|
2014-12-09 12:44:51 -06:00
|
|
|
fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'blk, 'tcx>) -> R;
|
2014-01-15 13:39:08 -06:00
|
|
|
}
|