// Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! ## The Cleanup module
//!
//! The cleanup module tracks what values need to be cleaned up as scopes
//! are exited, either via panic or just normal control flow. The basic
//! idea is that the function context maintains a stack of cleanup scopes
//! that are pushed/popped as we traverse the AST tree. There is typically
//! at least one cleanup scope per AST node; some AST nodes may introduce
//! additional temporary scopes.
//!
//! Cleanup items can be scheduled into any of the scopes on the stack.
//! Typically, when a scope is popped, we will also generate the code for
//! each of its cleanups at that time. This corresponds to a normal exit
//! from a block (for example, an expression completing evaluation
//! successfully without panic). However, it is also possible to pop a
//! block *without* executing its cleanups; this is typically used to
//! guard intermediate values that must be cleaned up on panic, but not
//! if everything goes right. See the section on custom scopes below for
//! more details.
//!
//! Cleanup scopes come in three kinds:
//!
//! - **AST scopes:** each AST node in a function body has a corresponding
//!   AST scope. We push the AST scope when we start generating code for an
//!   AST node and pop it once the AST node has been fully generated.
//! - **Loop scopes:** loops have an additional cleanup scope. Cleanups are
//!   never scheduled into loop scopes; instead, they are used to record the
//!   basic blocks that we should branch to when a `continue` or `break`
//!   statement is encountered.
//! - **Custom scopes:** custom scopes are typically used to ensure cleanup
//!   of intermediate values.
//!
//! ### When to schedule cleanup
//!
//! Although the cleanup system is intended to *feel* fairly declarative,
//! it's still important to time calls to `schedule_clean()` correctly.
//! Basically, you should not schedule cleanup for memory until it has
//! been initialized, because if an unwind should occur before the memory
//! is fully initialized, then the cleanup will run and try to free or
//! drop uninitialized memory. If the initialization itself produces
//! byproducts that need to be freed, then you should use temporary custom
//! scopes to ensure that those byproducts will get freed on unwind. For
//! example, an expression like `box foo()` will first allocate a box in the
//! heap and then call `foo()` -- if `foo()` should panic, this box needs
//! to be *shallowly* freed.
//!
//! ### Long-distance jumps
//!
//! In addition to popping a scope, which corresponds to normal control
//! flow exiting the scope, we may also *jump out* of a scope into some
//! earlier scope on the stack. This can occur in response to a `return`,
//! `break`, or `continue` statement, but also in response to panic. In
//! any of these cases, we will generate a series of cleanup blocks for
//! each of the scopes that is exited. So, if the stack contains scopes A
//! ... Z, and we break out of a loop whose corresponding cleanup scope is
//! X, we would generate cleanup blocks for the cleanups in X, Y, and Z.
//! After cleanup is done we would branch to the exit point for scope X.
//! But if panic should occur, we would generate cleanups for all the
//! scopes from A to Z and then resume the unwind process afterwards.
//!
//! To avoid generating tons of code, we cache the cleanup blocks that we
//! create for breaks, returns, unwinds, and other jumps. Whenever a new
//! cleanup is scheduled, though, we must clear these cached blocks. A
//! possible improvement would be to keep the cached blocks but simply
//! generate a new block which performs the additional cleanup and then
//! branches to the existing cached blocks.
//!
//! ### AST and loop cleanup scopes
//!
//! An AST cleanup scope is pushed when we begin processing an AST node
//! and popped when we finish. These scopes are used to house cleanups
//! related to rvalue temporaries that get referenced (e.g., due to an
//! expression like `&Foo()`). Whenever an AST scope is popped, we always
//! trans all the cleanups, adding the cleanup code after the
//! postdominator of the AST node.
//!
//! AST nodes that represent breakable loops also push a loop scope; the
//! loop scope never has any actual cleanups, it's just used to point to
//! the basic blocks where control should flow after a "continue" or
//! "break" statement. Popping a loop scope never generates code.
//!
//! ### Custom cleanup scopes
//!
//! Custom cleanup scopes are used for a variety of purposes. The most
//! common, though, is to handle temporary byproducts, where cleanup only
//! needs to occur on panic. The general strategy is to push a custom
//! cleanup scope, schedule *shallow* cleanups into the custom scope, and
//! then pop the custom scope (without transing the cleanups) when
//! execution succeeds normally. This way the cleanups are only trans'd on
//! unwind, and only up until the point where execution succeeded, at
//! which time the complete value should be stored in an lvalue or some
//! other place where normal cleanup applies.
//!
//! To spell it out, here is an example. Imagine an expression `box expr`.
//! We would basically:
//!
//! 1. Push a custom cleanup scope C.
//! 2. Allocate the box.
//! 3. Schedule a shallow free in the scope C.
//! 4. Trans `expr` into the box.
//! 5. Pop the scope C.
//! 6. Return the box as an rvalue.
//!
//! This way, if a panic occurs while transing `expr`, the cleanups
//! scheduled in scope C will be executed, and hence the box will be
//! freed. The trans code for `expr` itself is responsible for freeing
//! any other byproducts that may be in play.
pub use self::EarlyExitLabel::*;

use llvm::{BasicBlockRef, ValueRef};

use base;
use build;
use common;
use common::{Block, FunctionContext, LandingPad};
use debuginfo::DebugLoc;
use glue;
use type_::Type;
use value::Value;

use rustc::ty::Ty;

pub struct CleanupScope<'tcx> {
    // Cleanups to run upon scope exit.
    cleanups: Vec<DropValue<'tcx>>,

    // The debug location any drop calls generated for this scope will be
    // associated with.
    debug_loc: DebugLoc,

    cached_early_exits: Vec<CachedEarlyExit>,
    cached_landing_pad: Option<BasicBlockRef>,
}

#[derive(Copy, Clone, Debug)]
pub struct CustomScopeIndex {
    index: usize
}

#[derive(Copy, Clone, PartialEq, Debug)]
pub enum EarlyExitLabel {
    UnwindExit(UnwindKind),
}

#[derive(Copy, Clone, Debug)]
pub enum UnwindKind {
    LandingPad,
    CleanupPad(ValueRef),
}

#[derive(Copy, Clone)]
pub struct CachedEarlyExit {
    label: EarlyExitLabel,
    cleanup_block: BasicBlockRef,
    last_cleanup: usize,
}

impl<'blk, 'tcx> FunctionContext<'blk, 'tcx> {
    pub fn push_custom_cleanup_scope(&self) -> CustomScopeIndex {
        let index = self.scopes_len();
        debug!("push_custom_cleanup_scope(): {}", index);

        // Just copy the debuginfo source location from the enclosing scope.
        let debug_loc = self.scopes
                            .borrow()
                            .last()
                            .map(|opt_scope| opt_scope.debug_loc)
                            .unwrap_or(DebugLoc::None);

        self.push_scope(CleanupScope::new(debug_loc));
        CustomScopeIndex { index: index }
    }

    /// Removes the top cleanup scope from the stack without executing its
    /// cleanups. The top cleanup scope must be the temporary scope
    /// `custom_scope`.
    pub fn pop_custom_cleanup_scope(&self,
                                    custom_scope: CustomScopeIndex) {
        debug!("pop_custom_cleanup_scope({})", custom_scope.index);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));
        let _ = self.pop_scope();
    }

    /// Removes the top cleanup scope from the stack, which must be a
    /// temporary scope, and generates the code to do its cleanups for
    /// normal exit.
    pub fn pop_and_trans_custom_cleanup_scope(&self,
                                              bcx: Block<'blk, 'tcx>,
                                              custom_scope: CustomScopeIndex)
                                              -> Block<'blk, 'tcx> {
        debug!("pop_and_trans_custom_cleanup_scope({:?})", custom_scope);
        assert!(self.is_valid_to_pop_custom_scope(custom_scope));

        let scope = self.pop_scope();
        self.trans_scope_cleanups(bcx, &scope)
    }

    /// Schedules a (deep) drop of `val`, which is a pointer to an instance
    /// of `ty`.
    pub fn schedule_drop_mem(&self,
                             cleanup_scope: CustomScopeIndex,
                             val: ValueRef,
                             ty: Ty<'tcx>) {
        if !self.type_needs_drop(ty) { return; }
        let drop = DropValue {
            is_immediate: false,
            val: val,
            ty: ty,
            skip_dtor: false,
        };

        debug!("schedule_drop_mem({:?}, val={:?}, ty={:?}) skip_dtor={}",
               cleanup_scope,
               Value(val),
               ty,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop);
    }

    /// Issue #23611: Schedules a (deep) drop of the contents of
    /// `val`, which is a pointer to an instance of struct/enum type
    /// `ty`. The scheduled code handles extracting the discriminant
    /// and dropping the contents associated with that variant
    /// *without* executing any associated drop implementation.
    pub fn schedule_drop_adt_contents(&self,
                                      cleanup_scope: CustomScopeIndex,
                                      val: ValueRef,
                                      ty: Ty<'tcx>) {
        // The `if` below could test `!contents_needs_drop`; skipping the
        // drop is just an optimization, so it is sound to be conservative.
        if !self.type_needs_drop(ty) { return; }

        let drop = DropValue {
            is_immediate: false,
            val: val,
            ty: ty,
            skip_dtor: true,
        };

        debug!("schedule_drop_adt_contents({:?}, val={:?}, ty={:?}) skip_dtor={}",
               cleanup_scope,
               Value(val),
               ty,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop);
    }

    /// Schedules a (deep) drop of `val`, which is an instance of `ty`.
    pub fn schedule_drop_immediate(&self,
                                   cleanup_scope: CustomScopeIndex,
                                   val: ValueRef,
                                   ty: Ty<'tcx>) {
        if !self.type_needs_drop(ty) { return; }
        let drop = DropValue {
            is_immediate: true,
            val: val,
            ty: ty,
            skip_dtor: false,
        };

        debug!("schedule_drop_immediate({:?}, val={:?}, ty={:?}) skip_dtor={}",
               cleanup_scope,
               Value(val),
               ty,
               drop.skip_dtor);

        self.schedule_clean(cleanup_scope, drop);
    }

    /// Schedules a cleanup to occur in the top-most scope, which must be
    /// a temporary scope.
    fn schedule_clean(&self, custom_scope: CustomScopeIndex, cleanup: DropValue<'tcx>) {
        debug!("schedule_clean(custom_scope={})", custom_scope.index);

        assert!(self.is_valid_custom_scope(custom_scope));

        let mut scopes = self.scopes.borrow_mut();
        let scope = &mut (*scopes)[custom_scope.index];
        scope.cleanups.push(cleanup);
        // A new cleanup invalidates any cached landing pad for this scope.
        scope.cached_landing_pad = None;
    }

    /// Returns true if there are pending cleanups that should execute on
    /// panic.
    pub fn needs_invoke(&self) -> bool {
        self.scopes.borrow().iter().rev().any(|s| s.needs_invoke())
    }

    /// Returns a basic block to branch to in the event of a panic. This
    /// block will run the panic cleanups and eventually resume the
    /// exception that caused the landing pad to be run.
    pub fn get_landing_pad(&'blk self) -> BasicBlockRef {
        let _icx = base::push_ctxt("get_landing_pad");

        debug!("get_landing_pad");

        let orig_scopes_len = self.scopes_len();
        assert!(orig_scopes_len > 0);

        // Remove any scopes that do not have cleanups on panic:
        let mut popped_scopes = vec!();
        while !self.top_scope(|s| s.needs_invoke()) {
            debug!("top scope does not need invoke");
            popped_scopes.push(self.pop_scope());
        }

        // Check for an existing landing pad in the new topmost scope:
        let llbb = self.get_or_create_landing_pad();

        // Push the scopes we removed back on:
        while let Some(scope) = popped_scopes.pop() {
            self.push_scope(scope);
        }

        assert_eq!(self.scopes_len(), orig_scopes_len);

        llbb
    }
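The pop-then-restore discipline of `get_landing_pad` can be sketched in isolation. The types below (`Scope`, `get_landing_pad_scope`) are hypothetical stand-ins, not the compiler API; the sketch only shows that scopes with no panic cleanups are set aside, the landing pad is attributed to the new topmost scope, and the stack is then restored unchanged:

```rust
#[derive(Clone, PartialEq, Debug)]
struct Scope {
    name: &'static str,
    needs_invoke: bool, // does this scope have cleanups to run on panic?
}

/// Returns the name of the scope that owns the landing pad, leaving the
/// stack exactly as it was on entry.
fn get_landing_pad_scope(stack: &mut Vec<Scope>) -> &'static str {
    let orig_len = stack.len();
    let mut popped = Vec::new();
    // Remove scopes that have nothing to clean up on panic.
    while !stack.last().expect("stack must be non-empty").needs_invoke {
        popped.push(stack.pop().unwrap());
    }
    // The landing pad belongs to the new topmost scope.
    let pad_owner = stack.last().unwrap().name;
    // Push the removed scopes back on, restoring the original order.
    while let Some(scope) = popped.pop() {
        stack.push(scope);
    }
    assert_eq!(stack.len(), orig_len);
    pad_owner
}

fn main() {
    let mut stack = vec![
        Scope { name: "AST 23", needs_invoke: true },
        Scope { name: "Custom 2", needs_invoke: false },
        Scope { name: "AST 24", needs_invoke: false },
    ];
    let before = stack.clone();
    // "AST 23" is the innermost scope with panic cleanups.
    assert_eq!(get_landing_pad_scope(&mut stack), "AST 23");
    assert_eq!(stack, before); // the stack is left unchanged
}
```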

    fn is_valid_to_pop_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        self.is_valid_custom_scope(custom_scope) &&
            custom_scope.index == self.scopes.borrow().len() - 1
    }

    fn is_valid_custom_scope(&self, custom_scope: CustomScopeIndex) -> bool {
        let scopes = self.scopes.borrow();
        custom_scope.index < scopes.len()
    }

    /// Generates the cleanups for `scope` into `bcx`.
    fn trans_scope_cleanups(&self, // cannot borrow self, will recurse
                            bcx: Block<'blk, 'tcx>,
                            scope: &CleanupScope<'tcx>) -> Block<'blk, 'tcx> {
        let mut bcx = bcx;
        if !bcx.unreachable.get() {
            // Cleanups run in the reverse of the order they were scheduled.
            for cleanup in scope.cleanups.iter().rev() {
                bcx = cleanup.trans(bcx, scope.debug_loc);
            }
        }
        bcx
    }

    fn scopes_len(&self) -> usize {
        self.scopes.borrow().len()
    }

    fn push_scope(&self, scope: CleanupScope<'tcx>) {
        self.scopes.borrow_mut().push(scope)
    }

    fn pop_scope(&self) -> CleanupScope<'tcx> {
        debug!("popping cleanup scope {}, {} scopes remaining",
               self.top_scope(|s| s.block_name("")),
               self.scopes_len() - 1);

        self.scopes.borrow_mut().pop().unwrap()
    }

    fn top_scope<R, F>(&self, f: F) -> R where F: FnOnce(&CleanupScope<'tcx>) -> R {
        f(self.scopes.borrow().last().unwrap())
    }

    /// Used when the caller wishes to jump to an early exit, such as a
    /// return, break, continue, or unwind. This function will generate all
    /// cleanups between the top of the stack and the exit `label` and
    /// return a basic block that the caller can branch to.
    ///
    /// For example, if the current stack of cleanups were as follows:
    ///
    ///      AST 22
    ///      Custom 1
    ///      AST 23
    ///      Loop 23
    ///      Custom 2
    ///      AST 24
    ///
    /// and the `label` specifies a break from `Loop 23`, then this function
    /// would generate a series of basic blocks as follows:
    ///
    ///      Cleanup(AST 24) -> Cleanup(Custom 2) -> break_blk
    ///
    /// where `break_blk` is the block specified in `Loop 23` as the target
    /// for breaks. The return value would be the first basic block in that
    /// sequence (`Cleanup(AST 24)`). The caller could then branch to
    /// `Cleanup(AST 24)` and it will perform all cleanups and finally
    /// branch to the `break_blk`.
    fn trans_cleanups_to_exit_scope(&'blk self,
                                    label: EarlyExitLabel)
                                    -> BasicBlockRef {
        debug!("trans_cleanups_to_exit_scope label={:?} scopes={}",
               label, self.scopes_len());

        let orig_scopes_len = self.scopes_len();
        let mut prev_llbb;
        let mut popped_scopes = vec!();
        let mut skip = 0;

        // First we pop off all the cleanup stacks that are
        // traversed until the exit is reached, pushing them
        // onto the side vector `popped_scopes`. No code is
        // generated at this time.
        //
        // So, continuing the example from above, we would wind up
        // with a `popped_scopes` vector of `[AST 24, Custom 2]`.
        // (Presuming that there are no cached exits.)
        loop {
            if self.scopes_len() == 0 {
                match label {
                    UnwindExit(val) => {
                        // Generate a block that will resume unwinding to
                        // the calling function.
                        let bcx = self.new_block("resume");
                        match val {
                            UnwindKind::LandingPad => {
                                let addr = self.landingpad_alloca.get()
                                               .unwrap();
                                let lp = build::Load(bcx, addr);
                                base::call_lifetime_end(bcx, addr);
                                base::trans_unwind_resume(bcx, lp);
                            }
                            UnwindKind::CleanupPad(_) => {
                                let pad = build::CleanupPad(bcx, None, &[]);
                                build::CleanupRet(bcx, pad, None);
                            }
                        }
                        prev_llbb = bcx.llbb;
                        break;
                    }
                }
            }

            // Pop off the scope, since we may be generating
            // unwinding code for it.
            let top_scope = self.pop_scope();
            let cached_exit = top_scope.cached_early_exit(label);
            popped_scopes.push(top_scope);

            // Check if we have already cached the unwinding of this
            // scope for this label. If so, we can stop popping scopes
            // and branch to the cached label, since it contains the
            // cleanups for any subsequent scopes.
            if let Some((exit, last_cleanup)) = cached_exit {
                prev_llbb = exit;
                skip = last_cleanup;
                break;
            }
        }
|
|
|
|
|
|
|
|
debug!("trans_cleanups_to_exit_scope: popped {} scopes",
|
|
|
|
popped_scopes.len());
|
|
|
|
|
|
|
|
// Now push the popped scopes back on. As we go,
|
|
|
|
// we track in `prev_llbb` the exit to which this scope
|
|
|
|
// should branch when it's done.
|
|
|
|
//
|
|
|
|
// So, continuing with our example, we will start out with
|
|
|
|
// `prev_llbb` being set to `break_blk` (or possibly a cached
|
|
|
|
// early exit). We will then pop the scopes from `popped_scopes`
|
|
|
|
// and generate a basic block for each one, prepending it in the
|
|
|
|
// series and updating `prev_llbb`. So we begin by popping `Custom 2`
|
|
|
|
// and generating `Cleanup(Custom 2)`. We make `Cleanup(Custom 2)`
|
|
|
|
// branch to `prev_llbb == break_blk`, giving us a sequence like:
|
|
|
|
//
|
|
|
|
// Cleanup(Custom 2) -> prev_llbb
|
|
|
|
//
|
|
|
|
// We then pop `AST 24` and repeat the process, giving us the sequence:
|
|
|
|
//
|
|
|
|
// Cleanup(AST 24) -> Cleanup(Custom 2) -> prev_llbb
|
|
|
|
//
|
|
|
|
// At this point, `popped_scopes` is empty, and so the final block
|
|
|
|
// that we return to the user is `Cleanup(AST 24)`.
|
2015-06-29 18:31:07 -05:00
|
|
|
while let Some(mut scope) = popped_scopes.pop() {
            if !scope.cleanups.is_empty() {
                let name = scope.block_name("clean");
                debug!("generating cleanups for {}", name);

                let bcx_in = self.new_block(&name[..]);
                let exit_label = label.start(bcx_in);
                let mut bcx_out = bcx_in;
                let len = scope.cleanups.len();
                // Any cleanups already covered by a cached early exit are
                // skipped; the remainder run innermost (last-scheduled) first.
                for cleanup in scope.cleanups.iter().rev().take(len - skip) {
                    bcx_out = cleanup.trans(bcx_out, scope.debug_loc);
                }
                skip = 0;
                exit_label.branch(bcx_out, prev_llbb);
                prev_llbb = bcx_in.llbb;

                scope.add_cached_early_exit(exit_label, prev_llbb, len);
            }
            self.push_scope(scope);
        }
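The `skip`/`take(len - skip)` interaction above can be illustrated with a minimal, self-contained sketch (hypothetical string "cleanups" standing in for the real `DropValue` cleanups, which cannot be built outside the compiler):

```rust
// Mirrors `scope.cleanups.iter().rev().take(len - skip)`: the first
// `skip` scheduled cleanups are assumed covered by a cached early exit,
// so only the later ones are generated, last-scheduled first.
fn cleanups_to_run<'a>(cleanups: &[&'a str], skip: usize) -> Vec<&'a str> {
    let len = cleanups.len();
    cleanups.iter().rev().take(len - skip).cloned().collect()
}

fn main() {
    let scheduled = ["drop a", "drop b", "drop c"];
    // A cached exit already handles "drop a"; the later two run here.
    assert_eq!(cleanups_to_run(&scheduled, 1), ["drop c", "drop b"]);
    // With nothing cached, every cleanup runs in reverse schedule order.
    assert_eq!(cleanups_to_run(&scheduled, 0), ["drop c", "drop b", "drop a"]);
}
```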

        debug!("trans_cleanups_to_exit_scope: prev_llbb={:?}", prev_llbb);

        assert_eq!(self.scopes_len(), orig_scopes_len);
        prev_llbb
    }

    /// Creates a landing pad for the top scope, if one does not exist. The
    /// landing pad will perform all cleanups necessary for an unwind and then
    /// `resume` to continue error propagation:
    ///
    ///     landing_pad -> ... cleanups ... -> [resume]
    ///
    /// (The cleanups and resume instruction are created by
    /// `trans_cleanups_to_exit_scope()`, not in this function itself.)
    fn get_or_create_landing_pad(&'blk self) -> BasicBlockRef {
        let pad_bcx;

        debug!("get_or_create_landing_pad");

        // Check if a landing pad block exists; if not, create one.
        {
            let mut scopes = self.scopes.borrow_mut();
            let last_scope = scopes.last_mut().unwrap();
            match last_scope.cached_landing_pad {
                Some(llbb) => return llbb,
                None => {
                    let name = last_scope.block_name("unwind");
                    pad_bcx = self.new_block(&name[..]);
                    last_scope.cached_landing_pad = Some(pad_bcx.llbb);
                }
            }
        };

        let llpersonality = pad_bcx.fcx.eh_personality();

        let val = if base::wants_msvc_seh(self.ccx.sess()) {
            // A cleanup pad requires a personality function to be specified,
            // so we do that here explicitly (in the non-MSVC case below it
            // happens implicitly, through creation of the landingpad
            // instruction). We then create a cleanuppad instruction with no
            // filters, so that it runs cleanups on all exceptions.
            build::SetPersonalityFn(pad_bcx, llpersonality);
            let llretval = build::CleanupPad(pad_bcx, None, &[]);
            UnwindKind::CleanupPad(llretval)
        } else {
            // The landing pad return type (the type being propagated). Not sure
            // what this represents but it's determined by the personality
            // function and this is what the EH proposal example uses.
            let llretty = Type::struct_(self.ccx,
                                        &[Type::i8p(self.ccx), Type::i32(self.ccx)],
                                        false);

            // The only landing pad clause will be 'cleanup'
            let llretval = build::LandingPad(pad_bcx, llretty, llpersonality, 1);

            // The landing pad block is a cleanup
            build::SetCleanup(pad_bcx, llretval);

            let addr = match self.landingpad_alloca.get() {
                Some(addr) => addr,
                None => {
                    let addr = base::alloca(pad_bcx, common::val_ty(llretval), "");
                    base::call_lifetime_start(pad_bcx, addr);
                    self.landingpad_alloca.set(Some(addr));
                    addr
                }
            };
            build::Store(pad_bcx, llretval, addr);
            UnwindKind::LandingPad
        };

        // Generate the cleanup block and branch to it.
        let label = UnwindExit(val);
        let cleanup_llbb = self.trans_cleanups_to_exit_scope(label);
        label.branch(pad_bcx, cleanup_llbb);

        return pad_bcx.llbb;
    }
}

impl<'tcx> CleanupScope<'tcx> {
    fn new(debug_loc: DebugLoc) -> CleanupScope<'tcx> {
        CleanupScope {
            debug_loc: debug_loc,
            cleanups: vec![],
            cached_early_exits: vec![],
            cached_landing_pad: None,
        }
    }

    fn cached_early_exit(&self,
                         label: EarlyExitLabel)
                         -> Option<(BasicBlockRef, usize)> {
        self.cached_early_exits.iter().rev()
            .find(|e| e.label == label)
            .map(|e| (e.cleanup_block, e.last_cleanup))
    }

    fn add_cached_early_exit(&mut self,
                             label: EarlyExitLabel,
                             blk: BasicBlockRef,
                             last_cleanup: usize) {
        self.cached_early_exits.push(
            CachedEarlyExit { label: label,
                              cleanup_block: blk,
                              last_cleanup: last_cleanup });
    }

    /// True if this scope has cleanups that need unwinding
    fn needs_invoke(&self) -> bool {
        self.cached_landing_pad.is_some() || !self.cleanups.is_empty()
    }

    /// Returns a suitable name to use for the basic block that handles this cleanup scope
    fn block_name(&self, prefix: &str) -> String {
        format!("{}_custom_", prefix)
    }
}

impl EarlyExitLabel {
    /// Generates a branch going from `from_bcx` to `to_llbb` where `self` is
    /// the exit label attached to the start of `from_bcx`.
    ///
    /// Transitions from an exit label to other exit labels depend on the type
    /// of label. For example with MSVC exceptions unwind exit labels will use
    /// the `cleanupret` instruction instead of the `br` instruction.
    fn branch(&self, from_bcx: Block, to_llbb: BasicBlockRef) {
        if let UnwindExit(UnwindKind::CleanupPad(pad)) = *self {
            build::CleanupRet(from_bcx, pad, Some(to_llbb));
        } else {
            build::Br(from_bcx, to_llbb, DebugLoc::None);
        }
    }

    /// Generates the necessary instructions at the start of `bcx` to prepare
    /// for the same kind of early exit label that `self` is.
    ///
    /// This function will appropriately configure `bcx` based on the kind of
    /// label this is. For UnwindExit labels, the `lpad` field of the block will
    /// be set to `Some`, and for MSVC exceptions this function will generate a
    /// `cleanuppad` instruction at the start of the block so it may be jumped
    /// to in the future (e.g. so this block can be cached as an early exit).
    ///
    /// Returns a new label which can be used to cache `bcx` in the list of
    /// early exits.
    fn start(&self, bcx: Block) -> EarlyExitLabel {
        match *self {
            UnwindExit(UnwindKind::CleanupPad(..)) => {
                let pad = build::CleanupPad(bcx, None, &[]);
                bcx.lpad.set(Some(bcx.fcx.lpad_arena.alloc(LandingPad::msvc(pad))));
                UnwindExit(UnwindKind::CleanupPad(pad))
            }
            UnwindExit(UnwindKind::LandingPad) => {
                bcx.lpad.set(Some(bcx.fcx.lpad_arena.alloc(LandingPad::gnu())));
                *self
            }
        }
    }
}

impl PartialEq for UnwindKind {
    fn eq(&self, val: &UnwindKind) -> bool {
        match (*self, *val) {
            (UnwindKind::LandingPad, UnwindKind::LandingPad) |
            (UnwindKind::CleanupPad(..), UnwindKind::CleanupPad(..)) => true,
            _ => false,
        }
    }
}

///////////////////////////////////////////////////////////////////////////
// Cleanup types

#[derive(Copy, Clone)]
pub struct DropValue<'tcx> {
    is_immediate: bool,
    val: ValueRef,
    ty: Ty<'tcx>,
    skip_dtor: bool,
}

impl<'tcx> DropValue<'tcx> {
    fn trans<'blk>(&self,
                   bcx: Block<'blk, 'tcx>,
                   debug_loc: DebugLoc)
                   -> Block<'blk, 'tcx> {
        let skip_dtor = self.skip_dtor;
        let _icx = if skip_dtor {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=true")
        } else {
            base::push_ctxt("<DropValue as Cleanup>::trans skip_dtor=false")
        };
        let bcx = if self.is_immediate {
            glue::drop_ty_immediate(bcx, self.val, self.ty, debug_loc, self.skip_dtor)
        } else {
            glue::drop_ty_core(bcx, self.val, self.ty, debug_loc, self.skip_dtor)
        };
        bcx
    }
}