The purpose here is to get rid of compile_upto, which pretty much always requires the user to read the source to figure out what it does. It's replaced by a sequence of obviously-named functions:
- phase_1_parse_input(sess, cfg, input);
- phase_2_configure_and_expand(sess, cfg, crate);
- phase_3_run_analysis_passes(sess, expanded_crate);
- phase_4_translate_to_llvm(sess, expanded_crate, &analysis, outputs);
- phase_5_run_llvm_passes(sess, &trans, outputs);
- phase_6_link_output(sess, &trans, outputs);
Each of which takes what it takes and returns what it returns, with as little variation as possible in behaviour: no "pairs of options" and "pairs of control flags". You can tell if you missed a phase because you will be missing a `phase_N` call to some `N` between 1 and 6.
It does mean that people invoking librustc from outside need to write more function calls. The benefit is that they can _figure out what they're doing_ much more easily, and stop at any point, rather than further overloading the tangled logic of `compile_upto`.
As the title says, valid debug info is now generated for any kind of pattern-based bindings like an example from the automated tests:
```rust
let ((u, v), ((w, (x, Struct { a: y, b: z})), Struct { a: ae, b: oe }), ue) =
((25, 26), ((27, (28, Struct { a: 29, b: 30})), Struct { a: 31, b: 32 }), 33);
```
(Not that you would necessarily want to do a thing like that :P )
Fixes #2533
Previously, having optional lang_items caused an assertion failure at
compile-time, and once that was fixed there was a segfault at runtime from
using a NULL crate-map (crates with no_std).
Hi,
As noted in #6804, a pattern that contains `NaN` will never match because `NaN != NaN`. This adds a warning for such a case. The first commit handles the basic case and the second one generalizes it to more complex patterns using `walk_pat`.
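For reference, a minimal standalone illustration of why such an arm is dead code (written in present-day syntax, not the actual test case):
```rust
// NaN compares unequal to everything, including itself, so a pattern that
// requires equality with NaN can never match.
fn main() {
    let nan = 0.0_f64 / 0.0;
    assert!(nan != nan);
    assert!(!(nan == nan));
}
```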
Until now, we only optimized away impossible branches when there is a
literal true/false in the code. But since the LLVM IR builder already does
constant folding for us, we can trivially expand that to work with
constants as well.
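As a rough illustration (hypothetical constant name, present-day syntax), the dead branch below can now be dropped just like one guarded by a literal `false`:
```rust
// The condition is a constant, so the branch is statically impossible and the
// IR builder's constant folding lets us omit it entirely.
static ENABLE_TRACING: bool = false;

fn main() {
    if ENABLE_TRACING {
        println!("tracing enabled"); // never taken
    }
    println!("done");
}
```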
Refs #7834
Infers type of constants used as discriminants and ensures they are
integral, instead of forcing them to be a signed integer.
Also, stores discriminant values as uint instead of int internally and
deals with related fallout.
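A sketch of the kind of code this affects (present-day syntax, hypothetical names): the discriminant expressions are constants whose type follows the enum's representation rather than being forced to a signed integer.
```rust
const OFFSET: u16 = 0x100;

#[repr(u16)]
#[allow(dead_code)]
enum Opcode {
    Nop = 0,
    Load = OFFSET,
    Store = OFFSET + 1, // constant expression, checked to be integral
}

fn main() {
    assert_eq!(Opcode::Store as u16, 0x101);
}
```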
Fixes issue #7994
This is a cleanup pull request that does:
* removes `os::as_c_charp`
* moves `str::as_buf` and `str::as_c_str` into `StrSlice`
* converts some functions from `StrSlice::as_buf` to `StrSlice::as_c_str`
* renames `StrSlice::as_buf` to `StrSlice::as_imm_buf` (and adds `StrSlice::as_mut_buf` to match `vec.rs`).
* renames `UniqueStr::as_bytes_with_null_consume` to `UniqueStr::to_bytes`
* and other misc cleanups and minor optimizations
The code to build the transmute intrinsic currently makes the invalid
assumption that if the in-type is non-immediate, the out-type is
non-immediate as well. But this is wrong, for example when transmuting
[int, ..1] to int. So we need to handle this fourth case as well.
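In present-day syntax, the problematic combination looks roughly like this:
```rust
// The in-type ([i64; 1]) is not an immediate, but the out-type (i64) is,
// which is the case that was not handled before.
fn main() {
    let x: [i64; 1] = [42];
    let y: i64 = unsafe { std::mem::transmute(x) };
    assert_eq!(y, 42);
}
```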
Fixes #7988
This allows for control over the section placement of static, static
mut, and fn items. One caveat is that if a static and a static mut are
placed in the same section, the static is declared first, and the static
mut is assigned to, the generated program crashes. For example:
```rust
#[link_section=".boot"]
static foo : uint = 0xdeadbeef;
#[link_section=".boot"]
static mut bar : uint = 0xcafebabe;
```
Declaring bar first would mark .bootdata as writable, preventing the
crash when bar is written to.
Improve vtable resolution in a handful of ways. First, if we don't
find a vtable for a self/param type, do a regular vtable search. This
could find impls of the form "impl for A". Second, we don't require
that types be fully resolved before looking up subtables, and we
process tables in reverse order. This allows us to gain more
information about early type parameters based on how they are used by
the impls used to resolve later params.
Closes #6967, I believe.
Continuation of https://github.com/mozilla/rust/pull/7826.
AST spanned<T> refactoring, AST type renamings:
`crate => Crate`
`local => Local`
`blk => Block`
`crate_num => CrateNum`
`crate_cfg => CrateConfig`
`field => Field`
Also, Crate, Field and Local are not wrapped in spanned<T> anymore.
These changes remove unnecessary basic blocks and the associated branches from
the LLVM IR that we emit. Together, they reduce the time for unoptimized builds
in stage2 by about 10% on my box.
These blocks were required because previously we could only insert
instructions at the end of blocks, but we wanted to have all allocas in
one place, so they can be collapsed. But now we have "direct" access to
the LLVM IR builder and can position it freely. This allows us to use
the same trick that clang uses, which means that we insert a dummy
"marker" instruction to identify the spot at which we want to insert
allocas. We can then later position the IR builder at that spot and
insert the alloca instruction, without any dedicated block.
The block for loading the closure environment can now also go away,
because the function context now provides the toplevel block, and the
translation of the loading happens first, so that's good enough.
Makes the LLVM IR a bit more readable, saving a bunch of branches in the
unoptimized code, which benefits unoptimized builds.
Currently, the helper functions in the "build" module can only append
at the end of a block. For certain things we'll want to be able to
insert code at arbitrary locations inside a block though. Although we
can do that by directly calling the LLVM functions, that is rather ugly
and means that some things need to be implemented twice: once in terms
of the helper functions and once in terms of low-level LLVM functions.
Instead of doing that, we should provide a Builder type that gives
low-level access to the builder and can be used both by the helper
functions in the "build" module and by larger units of abstraction
that combine several LLVM instructions.
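A self-contained toy of that design (not the real rustc/LLVM types): one low-level positioning/insertion primitive, with helpers layered on top of it instead of implemented separately.
```rust
#[derive(Debug)]
struct Block {
    insns: Vec<String>,
}

struct Builder<'a> {
    block: &'a mut Block,
    pos: usize, // insertion point inside the block
}

impl<'a> Builder<'a> {
    fn at_end(block: &'a mut Block) -> Builder<'a> {
        let pos = block.insns.len();
        Builder { block, pos }
    }

    // Low-level access: position the builder anywhere in the block.
    fn position(&mut self, pos: usize) {
        self.pos = pos;
    }

    // Low-level access: insert one instruction at the current position.
    fn insert(&mut self, insn: &str) {
        self.block.insns.insert(self.pos, insn.to_string());
        self.pos += 1;
    }

    // A helper like those in the "build" module, built on the same primitive.
    fn alloca(&mut self, name: &str) {
        self.insert(&format!("{} = alloca", name));
    }
}

fn main() {
    let mut block = Block {
        insns: vec!["; allocas marker".to_string(), "ret void".to_string()],
    };
    let mut b = Builder::at_end(&mut block);
    b.position(1);    // jump back to just after the marker
    b.alloca("%tmp"); // helpers work at arbitrary positions too
    println!("{:?}", block.insns);
}
```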
Currently, all closures have an llenv block to load values from the
captured environment, but for closures that don't actually capture
anything, that block is useless and can be skipped.
This does a number of things, but especially dramatically reduce the
number of allocations performed for operations involving attributes/
meta items:
- Converts ast::meta_item & ast::attribute and other associated enums
to CamelCase.
- Converts several standalone functions in syntax::attr into methods,
defined on two traits AttrMetaMethods & AttributeMethods. The former
is common to both MetaItem and Attribute since the latter is a thin
wrapper around the former.
- Deletes functions that are unnecessary due to iterators.
- Converts other standalone functions to use iterators and the generic
AttrMetaMethods rather than allocating a lot of new vectors (e.g. the
old code had to allocate a new vector just to call functions that
operated on &[meta_item] when it only had a &[attribute]); see the
toy sketch after this list.
- Moves the core algorithm of the #[cfg] matching to syntax::attr,
similar to find_inline_attr and find_linkage_metas.
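A toy sketch of the wrapper-plus-shared-trait pattern (hypothetical minimal types, not the real syntax::attr definitions): generic, iterator-based code works on slices of either type without building an intermediate vector.
```rust
struct MetaItem {
    name: String,
}

// Attribute is a thin wrapper around MetaItem.
struct Attribute {
    meta: MetaItem,
}

trait AttrMetaMethods {
    fn name(&self) -> &str;
}

impl AttrMetaMethods for MetaItem {
    fn name(&self) -> &str {
        &self.name
    }
}

impl AttrMetaMethods for Attribute {
    fn name(&self) -> &str {
        self.meta.name()
    }
}

// Works on &[MetaItem] and &[Attribute] alike, with no new allocation.
fn contains_name<T: AttrMetaMethods>(items: &[T], name: &str) -> bool {
    items.iter().any(|item| item.name() == name)
}

fn main() {
    let attrs = [Attribute { meta: MetaItem { name: "cfg".to_string() } }];
    assert!(contains_name(&attrs, "cfg"));
}
```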
This doesn't have much of an effect on the speed of #[cfg] stripping,
despite hugely reducing the number of allocations performed; presumably
most of the time is spent in the ast folder rather than doing attribute
checks.
Also fixes the Eq instance of MetaItem_ to correctly ignore spans, so
that `rustc --cfg 'foo(bar)'` now works.
This pull request includes various improvements:
+ Composite types (structs, tuples, boxes, etc) are now handled more cleanly by debuginfo generation. Most notably, field offsets are now extracted directly from LLVM types, as opposed to trying to reconstruct them. This leads to more stable handling of edge cases (e.g. packed structs or structs implementing drop).
+ `debuginfo.rs` in general has seen a major cleanup. This includes better formatting, more readable variable and function names, removal of dead code, and better factoring of functionality.
+ Handling of `VariantInfo` in `ty.rs` has been improved. That is, the `type VariantInfo = @VariantInfo_` typedef has been replaced with explicit uses of @VariantInfo, and the duplicated logic for creating VariantInfo instances in `ty::enum_variants()` and `typeck::check::mod::check_enum_variants()` has been unified into a single constructor function. Both functions now look nicer too :)
+ Debug info generation for enum types is now mostly supported. This includes:
+ Good support for C-style enums. Both DWARF and `gdb` know how to handle them.
+ Proper description of tuple- and struct-style enum variants as unions of structs.
+ Proper handling of univariant enums without discriminator field.
+ Unfortunately `gdb` always prints all possible interpretations of a union, so debug output of enums is verbose and unintuitive. Neither `LLVM` nor `gdb` supports DWARF's `DW_TAG_variant`, which would allow tagged unions to be described properly. Adding support for this to `LLVM` seems doable. `gdb`, however, is another story. In the future we might be able to use `gdb`'s Python scripting support to alleviate this problem. In agreement with @jdm, this is not a high priority for now.
+ The debuginfo test suite has been extended with 14 test files including tests for packed structs (with Drop), boxed structs, boxed vecs, vec slices, c-style enums (standalone and embedded), empty enums, tuple- and struct-style enums, and various pointer types to the above.
~~What is not yet included is DI support for some enum edge-cases represented as described in `trans::adt::NullablePointer`.~~
Cheers,
Michael
PS: closes #7819, fixes #7712
This does a bunch of cleanup on the data structures for the trait system. (Unfortunately it doesn't remove `provided_method_sources`. Maybe later.)
It also changes how cross crate methods are handled, so that information about them is exported in metadata, instead of having the methods regenerated by every crate that imports an impl.
r? @nikomatsakis, maybe?
This is the first of a series of refactorings to get rid of the `codemap::spanned<T>` struct (see this thread for more information: https://mail.mozilla.org/pipermail/rust-dev/2013-July/004798.html).
The changes in this PR should not change any semantics, just rename `ast::blk_` to `ast::blk` and add a span field to it. 95% of the changes were of the form `block.node.id` -> `block.id`. Only some transformations in `libsyntax::fold` were not entirely trivial.
Currently, our intrinsics are generated as functions that have the
usual setup, which means an alloca, and therefore also a jump, for
those intrinsics that return an immediate value. This is especially bad
for unoptimized builds because it means that an intrinsic like
"contains_managed" that should be just "ret 0" or "ret 1" actually ends
up allocating stack space, doing a jump and a store/load sequence
before it finally returns the value.
To fix that, we need a way to stop the generic function declaration
mechanism from allocating stack space for the return value. This
implicitly also kills the jump, because the block for static allocas
isn't required anymore.
Additionally, trans_intrinsic needs to build the return itself instead
of calling finish_fn, because the latter relies on the availability of
the return value pointer.
With these changes, we get the bare minimum code required for our
intrinsics, which makes them small enough that inlining them makes the
resulting code smaller, so we can mark them as "always inline" to get
better performing unoptimized builds.
Optimized builds also benefit slightly from this change as there's less
code for LLVM to translate and the smaller intrinsics help it to make
better inlining decisions for a few code paths.
Building stage2 librustc gets ~1% faster for the optimized version and 5% for
the unoptimized version.
Most arms of the huge match contain the same code, differing only in
small details like the name of the llvm intrinsic that is to be called.
Thus the duplicated code can be factored out into a few functions that
take some parameters to handle the differences.
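A generic illustration of that refactoring (toy function, nothing rustc-specific): arms that differ only in one detail collapse into a single parameterized path.
```rust
// Instead of repeating the call logic in every arm, each arm only selects
// the detail that differs (here, which function to call).
fn unary_op(name: &str, x: f64) -> Option<f64> {
    let f: fn(f64) -> f64 = match name {
        "sqrt" => f64::sqrt,
        "sin" => f64::sin,
        "cos" => f64::cos,
        _ => return None,
    };
    Some(f(x))
}

fn main() {
    assert_eq!(unary_op("sqrt", 9.0), Some(3.0));
}
```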
Whenever a lang_item is required, some relevant message is displayed, often with
a span of what triggered the usage of the lang item.
Now "hello word" is as small as:
```rust
#[no_std];
extern {
    fn puts(s: *u8);
}
extern "rust-intrinsic" {
    fn transmute<T, U>(t: T) -> U;
}
#[start]
fn main(_: int, _: **u8, _: *u8) -> int {
    unsafe {
        let (ptr, _): (*u8, uint) = transmute("Hello!");
        puts(ptr);
    }
    return 0;
}
```
Allowing them in type signatures is a significant amount of extra work, unfortunately. This also doesn't apply to static values, which take a different code path.
As per @pcwalton's request, `debug!(..)` is only activated when the `debug` cfg is set; that is, for `RUST_LOG=some_module=4 ./some_program` to work, it needs to be compiled with `rustc --cfg debug some_program.rs`. (Although, there is the sneaky `__debug!(..)` macro that is always active, if you *really* need it.)
It functions by making `debug!` expand to `if false { __debug!(..) }` (expanding to an `if` like this is required to make sure `debug!` statements are typechecked and to avoid unused variable warnings), and adjusting trans to skip the pointless branches in `if true ...` and `if false ...`.
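A minimal standalone illustration of why the `if false { ... }` wrapping works (not the actual macro expansion): the body is still type-checked and its variables still count as used, while the branch itself is trivially dead.
```rust
fn main() {
    let x = 42;
    if false {
        println!("x = {}", x); // type-checked, keeps `x` used, never runs
    }
}
```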
The conditional expansion change also required moving the inject-std-macros step into a new pass, and makes it actually insert them at the top of the crate; this means that the cfg stripping traverses over the macros and so filters out the unused ones.
This appears to take an unoptimised build of `librustc` from 65s to 59s, and the full bootstrap from 18m41s to 18m26s on my computer (with general background use).
`./configure --enable-debug` will enable `debug!` statements in the bootstrap build.
That is, the `b` branch in `if true { a } else { b }` will not be
trans'd, and that expression will be exactly the same as `a`. This
means that, for example, macros conditionally expanding to `if false
{ .. }` (like debug!) will not waste time in LLVM (or trans).
An alloca in an unreachable block would short-circuit with Undef, but with type
`Type` rather than type `*Type` (i.e. a plain value, not a pointer), even though
it is expected to return a pointer into the stack, leading to confusion and LLVM
asserts later.
Similarly, attaching the range metadata to a Load in an unreachable block
makes LLVM unhappy, since the Load returns Undef.
Fixes #7344.
If the TLS key is 0-sized, then the linux linker is apparently smart enough to
put everything at the same pointer. OSX, on the other hand, will reserve some
space for all of them. To get around this, the TLS key now actually consumes
space to ensure that it gets a unique pointer.
We used to have concrete types in glue functions, but the way we used
to implement that broke inlining of those functions. To fix that, we
converted all glue to just take an i8* and always cast to that type.
The problem with the old implementation was that we made a wrong
assumption about the glue functions, taking it for granted that they
always take an i8*, because that's the function type expected by the
TyDesc fields. Therefore, we always ended up with some kind of cast.
But actually, we can initially have the glue with concrete types and
only cast the functions to the generic type once we actually emit the
TyDesc data.
That means that for glue calls that can be statically resolved, we don't
need any casts, unless the glue uses a simplified type. In that case we
cast the argument. And for glue calls that are resolved at runtime, we
cast the argument to i8*, because that's what the glue function in the
TyDesc expects.
Since most of our glue calls are static, this saves a lot of bitcasts.
The size of the unoptimized librustc.ll goes down by 240k lines.
Currently, we always create a dedicated "return" basic block, but when
there's only a single predecessor for that block, it can be merged with
that predecessor. We can achieve that merge by only creating the return
block on demand, avoiding its creation when it's not required.
Reduces the pre-optimization size of librustc.ll created with --passes ""
by about 90k lines which equals about 4%.
Currently, immediate values are copied into an alloca only to have an
addressable storage so that it can be used with memcpy. Obviously we
can skip the memcpy in this case.
cc #6004 and #3273
This is a rewrite of TLS to get towards not requiring `@` when using task local storage. Most of the rewrite is straightforward, although there are two caveats:
1. Changing `local_set` to not require `@` is blocked on #7673
2. The code in `local_pop` is some of the most unsafe code I've written. A second set of eyes should definitely scrutinize it...
The public-facing interface currently hasn't changed, although it will have to change because `local_data::get` cannot return `Option<T>`, nor can it return `Option<&T>` (the lifetime isn't known). This will have to be changed to be given a closure which yields `&T` (or as an Option). I didn't do this part of the api rewrite in this pull request as I figured that it could wait until `@` is fully removed.
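A hedged sketch of the closure-based shape described (hypothetical free function, not the actual `local_data` API): the reference only lives inside the closure, which avoids having to name its lifetime in a return type.
```rust
fn get_with<T, R>(slot: &Option<T>, f: impl FnOnce(Option<&T>) -> R) -> R {
    // The borrow handed to the closure cannot escape it.
    f(slot.as_ref())
}

fn main() {
    let slot = Some(5);
    let doubled = get_with(&slot, |v| v.map(|x| x * 2));
    assert_eq!(doubled, Some(10));
}
```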
This also doesn't deal with the issue of using something other than functions as keys, but I'm looking into using static slices (as mentioned in the issues).
The free-standing functions in f32, f64, i8, i16, i32, i64, u8, u16,
u32, u64, float, int, and uint are replaced with generic functions in
num instead.
This means that instead of having to know everywhere what the type is, like
~~~
f64::sin(x)
~~~
You can simply write code that uses the type-generic versions in num instead; this works for all types that implement the corresponding trait in num.
~~~
num::sin(x)
~~~
Note 1: If you were previously using any of those functions, just replace them
with the corresponding function with the same name in num.
Note 2: If you were using a function that corresponds to an operator, use the
operator instead.
Note 3: This is just https://github.com/mozilla/rust/pull/7090 reopened against master.
for cases where it's hard to decide what id to use for the lookup); modify
irrefutable bindings code to move or copy depending on the type, rather than
threading through a flag. Also updates how local variables and arguments are
registered. These changes were hard to isolate.
We currently still handle immediate return values a lot like
non-immediate ones. We provide a slot for them and store them into
memory, often just to immediately load them again. To improve this
situation, trans_call_inner has to return a Result which contains the
immediate return value.
It also needs to accept "No destination" in addition to just
SaveIn and Ignore. Since "No destination" isn't something that fits
well into the Dest type, I've chosen to simply use Option<Dest>
instead, paired with an assertion that checks that "None" is only
allowed for immediate return values.
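A toy sketch of that shape (names taken from the text, bodies hypothetical): `None` stands in for "no destination" and is only accepted when the callee returns an immediate value.
```rust
#[allow(dead_code)]
enum Dest {
    SaveIn(usize), // stand-in for a destination pointer
    Ignore,
}

fn trans_call_like(dest: Option<Dest>, returns_immediate: bool) {
    // "None" is only allowed for immediate return values.
    assert!(dest.is_some() || returns_immediate);
    // ... translation would go here ...
}

fn main() {
    trans_call_like(Some(Dest::Ignore), false);
    trans_call_like(None, true);
}
```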
There's no need to allocate a return slot for any kind of immediate
return value, not just for nils. Also, when the return value is
ignored, we only have to copy it to a temporary alloca if it's actually
required to call drop_ty on it.
This is work continued from the now landed #7495 and #7521 pulls.
Removing the headers from unique vectors is another project, so I've separated the allocator.
This way when you compile with -Z trans-stats you'll get a per-function cost breakdown, sorted with the most expensive functions first. Should help highlight pathological code.
Currently, scopes are tied to LLVM basic blocks. For each scope, there
are two new basic blocks, which means two extra jumps in the unoptimized
IR. These blocks aren't actually required, but only used to act as the
boundary for cleanups.
By keeping track of the current scope within a single basic block, we
can avoid those extra blocks and jumps, shrinking the pre-optimization
IR quite considerably. For example, the IR for trans_intrinsic goes
from ~22k lines to ~16k lines, almost 30% less.
The impact on the build times of optimized builds is rather small (about
1%), but unoptimized builds are about 11% faster. The testsuite for
unoptimized builds runs between 15% (CPU time) and 7.5% (wallclock time on
my i7) faster.
Also, in some situations this helps LLVM to generate better code by
inlining functions that it previously considered to be too large.
Likely because of the pointless blocks/jumps that were still present at
the time the inlining pass runs.
Refs #7462
Also, makes the pretty-printer use & instead of @ as much as possible,
which will help with later changes, though in the interim has produced
some... interesting constructs.
After getting an ICE trying to use the `Repr` enum from middle::trans::adt (see issue #7527), I tried to implement the missing case for struct-like enum variants in `middle::ty::enum_variants()`. It seems to work now (and passes make check) but there are still some uncertainties that bother me:
+ I'm not sure I did everything right. Especially getting the variant constructor function from the variant node id is just copied from the tuple-variant case. Someone with more experience in the code base should be able to see rather quickly whether this is OK.
+ It is kind of strange that I could not reproduce the ICE with a smaller test case. The unimplemented code path never seems to be hit in most cases, even when using the exact same `Repr` enum, just with `ty::t` replaced by an opaque pointer. Also, within the `adt` module, `Repr` and matching on it is used multiple times, again without running into problems. Can anyone explain why this is the case? That would be much appreciated.
Apart from that, I hope this PR is useful.
This a followup to #7510. @catamorphism requested a test - so I have created one, but in doing so I noticed some inconsistency in the error messages resulting from referencing nonexistent traits, so I changed the messages to be more consistent.
Adds a lint for `static some_lowercase_name: uint = 1;`. Warning by default since it causes confusion, e.g. `static a: uint = 1; ... let a = 2;` => `error: only refutable patterns allowed here`.
I think it's WIP - but I wanted to ask for feedback (/cc @thestinger)
I had to move the impl of FromIter for vec into extra::iter because I don't think std can depend on extra, but that's a bit messed up. Similarly some FromIter uses are gone now, not sure if this is fixable or if I made a complete mess here..
common supertypes.
This was breaking with the change to regions because of the
(now incorrect) assumption that our inference code makes,
which is that if a <: b succeeds, there is no need to compute
the LUB/GLB.
This patch makes error handling for region inference failures more
uniform by not reporting *any* region errors until the region inference
step. This requires threading through more information about what
caused a region constraint, so that we can still give informative
error messages.
I have only taken partial advantage of this information: when region
inference fails, we still report the same error we always did, despite
the fact that we now know precisely what caused the various constraints
and what the region variable represents, which we did not know before.
This change is required not only to improve error messages but
because the region hierarchy is not in fact fully known until regionck,
because it is not clear where closure bodies fit in (our current
treatment is unsound). Moreover, the relationships between free variables
cannot be fully determined until type inference is otherwise complete.
cc #3238.
@catamorphism, this re-enables threadsafe rustpkg tests, @brson this will fail unless the bots have LLVM rebuilt, so this is a good indicator of whether that happened or not.
Continuation of #7430.
I haven't removed the `map` method, since the replacement `v.iter().transform(f).collect::<~[SomeType]>()` is a little ridiculous at the moment.