Given that bootstrapping and running the testsuite work without
exporting discriminant values as global constants, I conclude that
they're unused and can be removed.
This requires changes to method search and to codegen. We now emit a
vtable for objects that includes methods from all supertraits.
Closes #4100.
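For illustration, a minimal sketch (2013-era dialect; trait and struct names made up) of the kind of call this enables:

```rust
trait Shape {
    fn area(&self) -> float;
}
trait Circle : Shape {
    fn radius(&self) -> float;
}

struct Disc { r: float }

impl Shape for Disc {
    fn area(&self) -> float { 3.14159 * self.r * self.r }
}
impl Circle for Disc {
    fn radius(&self) -> float { self.r }
}

fn describe(c: @Circle) -> float {
    // area() comes from the supertrait Shape; since the object's vtable
    // now includes supertrait methods, this call resolves directly.
    c.area() + c.radius()
}
```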
Also, actually populate the cache for vtables, and key it by type so that it
actually works.
Pointers to bound variables shouldn't be stored before checking the pattern;
otherwise piped patterns can conflict with each other (issue #6338).
Closes #6338.
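A hedged sketch of the kind of code affected (enum and names hypothetical):

```rust
enum Shape {
    Square(int),
    Rect(int),
    Empty
}

fn size(s: Shape) -> int {
    match s {
        // both alternatives bind `n`; storing the binding pointers before
        // the pattern check let one alternative clobber the other
        Square(n) | Rect(n) => n,
        Empty => 0
    }
}
```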
This pull request includes support for generic functions and self arguments in methods, and combinations thereof. This also encompasses any kind of trait method, regular and static, with and without default implementation. The implementation is backed up by what feels like a ton of test cases `:)`
This is a very important step towards being able to compile larger programs with debug info, since practically any generic function caused an ICE before.
One point worth discussing is that activating debug info now automatically (and silently) sets the `no_monomorphic_collapse` flag. Otherwise debug info would show wrong type names in all but one instance of the monomorphized function.
Another thing to note is that the handling of generic types does not strictly follow the DWARF specification. That is, variables with type `T` (where `T=int`) are described as having type `int` and not as having type `T`. In other words, we lose the information that a variable was declared with a type parameter as its type. In practice this should not make much of a difference, though, since the concrete type is mostly what one is interested in. I'll post an issue later so this won't be forgotten.
Also included are a number of bug fixes:
* Closes #1758
* Closes #8513
* Closes #8443
* Fixes handling of field names in tuple structs
* Fixes and re-enables test case for option-like enums that relied on undefined behavior before
* Closes #1339 (should have been closed a while ago)
Cheers,
Michael
LLVMConstStringInContext() doesn't need a null-terminated string. It
takes a length instead. Using .to_c_str() here triggers an ICE whenever
the string literal embeds a null, as in "\x00".
.with_c_str() is a replacement for the old .as_c_str(), to avoid
unnecessary boilerplate.
Replace all usages of .to_c_str().with_ref() with .with_c_str().
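A before/after sketch of that replacement, assuming the 2013-era `std::libc` bindings:

```rust
use std::libc;

extern {
    fn puts(s: *libc::c_char);
}

fn print(s: &str) {
    // before: s.to_c_str().with_ref(|ptr| unsafe { puts(ptr) });
    do s.with_c_str |ptr| {
        unsafe { puts(ptr); }
    }
}
```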
This pull request re-implements handling of visibility scopes and source code positions in debug info. It should now be very stable and properly handle
+ variable shadowing
+ expanded code (macros and the new for-loop de-sugaring, for example)
+ variables in the middle of nested scopes
+ bindings declared in the head of match statement arms.
all of which did not work at all or did not work reliably before. Those interested in a more detailed description of the problems at hand are kindly referred to http://michaelwoerister.github.io/2013/08/03/visibility-scopes.html
Why doesn't the `populate_scope_map()` function use `syntax::visit`?
Because it would not improve this particular AST walker (see: 69dc790849 (commitcomment-3781426))
Cheers,
Michael
by what amount a T* pointer must be adjusted to reach the contents
of the box. For `~T` types, this requires knowing the type `T`,
which is not known in the case of objects.
This can be applied to statics, and it indicates that LLVM will attempt to
merge the constant in .data with other statics.
I have preliminarily applied this to all of the statics generated by the new
`ifmt!` syntax extension. I compiled a file with 1000 calls to `ifmt!` and a
separate file with 1000 calls to `fmt!` to compare the sizes, and the results
were:
```
fmt           310k
ifmt (before) 529k
ifmt (after)  202k
```
This now means that ifmt! is both faster and smaller than fmt!, yay!
When there is only a single store to the ret slot that dominates the
load that gets the value for the "ret" instruction, we can elide the
ret slot and directly return the operand of the dominating store
instruction. This is the same thing that clang does, except for a
special case that doesn't seem to affect us.
Fixes #8238
Code that collects fields in struct-like patterns used to ignore
wildcard patterns like `Foo{_}`. But `enter_defaults` considered
struct-like patterns as default in order to overcome this
(according to my understanding of the situation).
However, such behaviour caused code like this:
```
enum E {
    Foo{f: int},
    Bar
}

let e = Bar;
match e {
    Foo{f: _f} => { /* do something (1) */ }
    _ => { /* do something (2) */ }
}
```
to be compiled as if `Foo{f: _f}` were a default pattern. That caused
improper behaviour and even segfaults when trying to destructure `Bar` as
`Foo{f: _f}`.
Issues: #5625, #5530.
This patch fixes `collect_record_or_struct_fields` to distinguish the case
of a single wildcard struct-like pattern from the case of no struct-like
pattern at all. The former case is now resolved with `enter_rec_or_struct`
(and not with `enter_defaults`).
Closes #5625.
Closes #5530.
According to #7887, we've decided to use the syntax of `fn map<U>(f: &fn(&T) -> U) -> U`, which passes a reference to the closure, and to `fn map_move<U>(f: &fn(T) -> U) -> U` which moves the value into the closure. This PR adds these `.map_move()` functions to `Option` and `Result`.
In addition, it has these other minor features:
* Replaces a couple of uses of `option.get()`, `result.get()`, and `result.get_err()` with `option.unwrap()`, `result.unwrap()`, and `result.unwrap_err()`. (See #8268 and #8288 for a more thorough adaptation of this functionality.)
* Removes `option.take_map()` and `option.take_map_default()`. These two functions can be easily written as `.take().map_move(...)`.
* Adds a better error message to `result.unwrap()` and `result.unwrap_err()`.
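A short usage sketch of the two conventions (2013-era dialect, illustrative only):

```rust
fn main() {
    let owned = Some(~"world");
    // `map` passes a reference into the closure; `owned` stays usable
    let length = owned.map(|s| s.len());
    assert!(length == Some(5));
    // `map_move` moves the value into the closure, consuming `owned`
    let moved = owned.map_move(|s| s + "!");
    assert!(moved == Some(~"world!"));
}
```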
- Made naming schemes consistent between Option, Result and Either
- Changed Option's Add implementation to work like the maybe monad (return None if any of the inputs is None)
- Removed duplicate Option::get and renamed all related functions to use the term `unwrap` instead
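A sketch of the changed Add behaviour (values illustrative):

```rust
fn main() {
    let a = Some(1);
    let b = Some(2);
    let none: Option<int> = None;
    assert!(a + b == Some(3)); // both inputs present: add the contents
    assert!(a + none == None); // any None input yields None
}
```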
When strings lose their trailing null, this pattern will become dangerous:
```rust
let foo = "bar";
let foo_ptr: *u8 = &foo[0];
```
Instead we should use c_strs to handle this correctly.
* LLVM now has a C interface to LLVMBuildAtomicRMW
* The exception handling support for the JIT seems to have been dropped
* Various interfaces have been added or headers have changed
rvalues aren't going to be used anywhere but as the argument, so
there's no point in copying them. LLVM used to eliminate the copy
later, but why bother emitting it in the first place?
This is preparation for removing `@fn`.
This does *not* use default methods yet, because I don't know
whether they work. If they do, a forthcoming PR will use them.
This also changes the precedence of `as`.
* All globals marked as `pub` won't have the `internal` linkage type set
* All global references across crates are forced to use the address of the
global in the other crate via an external reference.
r? @graydon
Closes #8179
Change the former repetition:

```rust
for 5.times { }
```

to:

```rust
do 5.times { }
```
.times() cannot be broken with `break` or `return` anymore; for those
cases, use a numerical range loop instead.
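A hedged sketch of the suggested replacement when `break` is needed (iterator names as in the 2013-era std):

```rust
fn main() {
    for i in range(0u, 5) {
        if i == 3 { break; } // not expressible with .times() anymore
    }
}
```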
The purpose here is to get rid of compile_upto, which pretty much always requires the user to read the source to figure out what it does. It's replaced by a sequence of obviously-named functions:
- phase_1_parse_input(sess, cfg, input);
- phase_2_configure_and_expand(sess, cfg, crate);
- phase_3_run_analysis_passes(sess, expanded_crate);
- phase_4_translate_to_llvm(sess, expanded_crate, &analysis, outputs);
- phase_5_run_llvm_passes(sess, &trans, outputs);
- phase_6_link_output(sess, &trans, outputs);
Each of which takes what it takes and returns what it returns, with as little variation as possible in behaviour: no "pairs of options" and "pairs of control flags". You can tell if you missed a phase because you will be missing a `phase_N` call to some `N` between 1 and 6.
It does mean that people invoking librustc from outside need to write more function calls. The benefit is that they can _figure out what they're doing_ much more easily, and stop at any point, rather than further overloading the tangled logic of `compile_upto`.
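A hypothetical driver sketch chaining the phases named above (bindings and ownership details are assumed, not verbatim rustc API):

```rust
let crate = phase_1_parse_input(sess, cfg, input);
let expanded_crate = phase_2_configure_and_expand(sess, cfg, crate);
let analysis = phase_3_run_analysis_passes(sess, expanded_crate);
// an analysis-only tool can stop after phase 3; a full compile continues:
let trans = phase_4_translate_to_llvm(sess, expanded_crate, &analysis, outputs);
phase_5_run_llvm_passes(sess, &trans, outputs);
phase_6_link_output(sess, &trans, outputs);
```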
As the title says, valid debug info is now generated for any kind of pattern-based bindings, such as this example from the automated tests:
```rust
let ((u, v), ((w, (x, Struct { a: y, b: z})), Struct { a: ae, b: oe }), ue) =
((25, 26), ((27, (28, Struct { a: 29, b: 30})), Struct { a: 31, b: 32 }), 33);
```
(Not that you would necessarily want to do a thing like that :P )
Fixes #2533
Until now, we only optimized away impossible branches when there was a
literal true/false in the code. But since the LLVM IR builder already does
constant folding for us, we can trivially expand that to work with
constants as well.
Refs #7834
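A sketch of the newly covered case; the constant and function names are made up:

```rust
static ENABLED: bool = false; // a constant, not a literal

fn work() {
    if ENABLED {
        // previously this branch was emitted and left for LLVM to delete;
        // now it is dropped during translation, just like `if false { .. }`
        expensive_diagnostics();
    }
}

fn expensive_diagnostics() {}
```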
Infers type of constants used as discriminants and ensures they are
integral, instead of forcing them to be a signed integer.
Also, stores discriminant values as uint instead of int internally and
deals with related fallout.
Fixes issue #7994
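A hedged sketch of what this permits (names illustrative):

```rust
static BASE: uint = 0x10;

enum Flag {
    A = BASE,  // discriminant from a constant: type inferred (uint here)
    B = 0x20   // and checked to be integral, not forced to a signed int
}
```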
This is a cleanup pull request that does:
* removes `os::as_c_charp`
* moves `str::as_buf` and `str::as_c_str` into `StrSlice`
* converts some functions from `StrSlice::as_buf` to `StrSlice::as_c_str`
* renames `StrSlice::as_buf` to `StrSlice::as_imm_buf` (and adds `StrSlice::as_mut_buf` to match `vec.rs`).
* renames `UniqueStr::as_bytes_with_null_consume` to `UniqueStr::to_bytes`
* and other misc cleanups and minor optimizations
The code to build the transmute intrinsic currently makes the invalid
assumption that if the in-type is non-immediate, the out-type is
non-immediate as well. But this is wrong, for example when transmuting
[int, ..1] to int. So we need to handle this fourth case as well.
Fixes #7988
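The previously-ICEing shape of code, sketched (`std::cast::transmute` as in the 2013-era std):

```rust
use std::cast::transmute;

fn main() {
    // in-type [int, ..1] is non-immediate, out-type int is immediate
    let n: int = unsafe { transmute([42, ..1]) };
    assert!(n == 42);
}
```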
This allows for control over the section placement of static, static
mut, and fn items. One caveat is that if a static and a static mut are
placed in the same section, the static is declared first, and the static
mut is assigned to, the generated program crashes. For example:
```rust
#[link_section=".boot"]
static foo : uint = 0xdeadbeef;

#[link_section=".boot"]
static mut bar : uint = 0xcafebabe;
```
Declaring bar first would mark .bootdata as writable, preventing the
crash when bar is written to.
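The workaround described above, sketched: declare the mutable static first so the section is marked writable from the start:

```rust
#[link_section=".boot"]
static mut bar : uint = 0xcafebabe;

#[link_section=".boot"]
static foo : uint = 0xdeadbeef;
```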
Continuation of https://github.com/mozilla/rust/pull/7826.
AST spanned<T> refactoring, AST type renamings:
`crate => Crate`
`local => Local`
`blk => Block`
`crate_num => CrateNum`
`crate_cfg => CrateConfig`
`field => Field`
Also, Crate, Field and Local are not wrapped in spanned<T> anymore.
These changes remove unnecessary basic blocks and the associated branches from
the LLVM IR that we emit. Together, they reduce the time for unoptimized builds
in stage2 by about 10% on my box.
These blocks were required because previously we could only insert
instructions at the end of blocks, but we wanted to have all allocas in
one place so they can be collapsed. But now we have "direct" access to
the LLVM IR builder and can position it freely. This allows us to use
the same trick that clang uses, which means that we insert a dummy
"marker" instruction to identify the spot at which we want to insert
allocas. We can then later position the IR builder at that spot and
insert the alloca instruction, without any dedicated block.
The block for loading the closure environment can now also go away,
because the function context now provides the toplevel block, and the
translation of the loading happens first, so that's good enough.
Makes the LLVM IR a bit more readable and saves a bunch of branches, which
benefits unoptimized builds.
Currently, the helper functions in the "build" module can only append
at the end of a block. For certain things we'll want to be able to
insert code at arbitrary locations inside a block though. Although we can
do that by directly calling the LLVM functions, that is rather ugly and
means that some things need to be implemented twice: once in terms of the
helper functions and once in terms of low level LLVM functions. Instead of
doing that, we should provide a Builder type that gives low level access
to the builder and that can be used both by the helper functions in the
"build" module and by larger units of abstraction that combine several
LLVM instructions.
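A hedged sketch of the shape such a Builder might take; all names are illustrative rather than the actual rustc API:

```rust
struct Builder {
    llbuilder: BuilderRef, // raw handle from the llvm bindings
}

impl Builder {
    // position at an arbitrary instruction instead of only appending
    fn position_before(&self, insn: ValueRef) {
        unsafe { llvm::LLVMPositionBuilderBefore(self.llbuilder, insn); }
    }

    fn alloca(&self, ty: TypeRef, name: &str) -> ValueRef {
        do name.with_c_str |buf| {
            unsafe { llvm::LLVMBuildAlloca(self.llbuilder, ty, buf) }
        }
    }
}
```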
Currently, all closures have an llenv block to load values from the
captured environment, but for closures that don't actually capture
anything, that block is useless and can be skipped.
This does a number of things, but especially dramatically reduce the
number of allocations performed for operations involving attributes/
meta items:
- Converts ast::meta_item & ast::attribute and other associated enums
to CamelCase.
- Converts several standalone functions in syntax::attr into methods,
defined on two traits AttrMetaMethods & AttributeMethods. The former
is common to both MetaItem and Attribute since the latter is a thin
wrapper around the former.
- Deletes functions that are unnecessary due to iterators.
- Converts other standalone functions to use iterators and the generic
AttrMetaMethods rather than allocating a lot of new vectors (e.g. the
old code had to allocate a new vector just to call functions that
operated on &[meta_item] with an &[attribute]).
- Moves the core algorithm of the #[cfg] matching to syntax::attr,
similar to find_inline_attr and find_linkage_metas.
This doesn't have much of an effect on the speed of #[cfg] stripping,
despite hugely reducing the number of allocations performed; presumably
most of the time is spent in the ast folder rather than doing attribute
checks.
Also fixes the Eq instance of MetaItem_ to correctly ignore spans, so
that `rustc --cfg 'foo(bar)'` now works.
This pull request includes various improvements:
+ Composite types (structs, tuples, boxes, etc) are now handled more cleanly by debuginfo generation. Most notably, field offsets are now extracted directly from LLVM types, as opposed to trying to reconstruct them. This leads to more stable handling of edge cases (e.g. packed structs or structs implementing drop).
+ `debuginfo.rs` in general has seen a major cleanup. This includes better formatting, more readable variable and function names, removal of dead code, and better factoring of functionality.
+ Handling of `VariantInfo` in `ty.rs` has been improved. That is, the `type VariantInfo = @VariantInfo_` typedef has been replaced with explicit uses of @VariantInfo, and the duplicated logic for creating VariantInfo instances in `ty::enum_variants()` and `typeck::check::mod::check_enum_variants()` has been unified into a single constructor function. Both functions now look nicer too :)
+ Debug info generation for enum types is now mostly supported. This includes:
+ Good support for C-style enums. Both DWARF and `gdb` know how to handle them.
+ Proper description of tuple- and struct-style enum variants as unions of structs.
+ Proper handling of univariant enums without a discriminator field.
+ Unfortunately `gdb` always prints all possible interpretations of a union, so debug output of enums is verbose and unintuitive. Neither `LLVM` nor `gdb` supports DWARF's `DW_TAG_variant`, which allows tagged unions to be described properly. Adding support for this to `LLVM` seems doable. `gdb` however is another story. In the future we might be able to use `gdb`'s Python scripting support to alleviate this problem. In agreement with @jdm this is not a high priority for now.
+ The debuginfo test suite has been extended with 14 test files including tests for packed structs (with Drop), boxed structs, boxed vecs, vec slices, c-style enums (standalone and embedded), empty enums, tuple- and struct-style enums, and various pointer types to the above.
~~What is not yet included is DI support for some enum edge-cases represented as described in `trans::adt::NullablePointer`.~~
Cheers,
Michael
PS: closes #7819, fixes #7712
This does a bunch of cleanup on the data structures for the trait system. (Unfortunately it doesn't remove `provided_method_sources`. Maybe later.)
It also changes how cross crate methods are handled, so that information about them is exported in metadata, instead of having the methods regenerated by every crate that imports an impl.
r? @nikomatsakis, maybe?
This is the first of a series of refactorings to get rid of the `codemap::spanned<T>` struct (see this thread for more information: https://mail.mozilla.org/pipermail/rust-dev/2013-July/004798.html).
The changes in this PR should not change any semantics, just rename `ast::blk_` to `ast::blk` and add a span field to it. 95% of the changes were of the form `block.node.id` -> `block.id`. Only some transformations in `libsyntax::fold` were not entirely trivial.
Currently, our intrinsics are generated as functions that have the
usual setup, which means an alloca, and therefore also a jump, for
those intrinsics that return an immediate value. This is especially bad
for unoptimized builds because it means that an intrinsic like
"contains_managed" that should be just "ret 0" or "ret 1" actually ends
up allocating stack space, doing a jump and a store/load sequence
before it finally returns the value.
To fix that, we need a way to stop the generic function declaration
mechanism from allocating stack space for the return value. This
implicitly also kills the jump, because the block for static allocas
isn't required anymore.
Additionally, trans_intrinsic needs to build the return itself instead
of calling finish_fn, because the latter relies on the availability of
the return value pointer.
With these changes, we get the bare minimum code required for our
intrinsics, which makes them small enough that inlining them makes the
resulting code smaller, so we can mark them as "always inline" to get
better performing unoptimized builds.
Optimized builds also benefit slightly from this change as there's less
code for LLVM to translate and the smaller intrinsics help it to make
better inlining decisions for a few code paths.
Building stage2 librustc gets ~1% faster for the optimized version and 5% for
the unoptimized version.
Most arms of the huge match contain the same code, differing only in
small details like the name of the llvm intrinsic that is to be called.
Thus the duplicated code can be factored out into a few functions that
take some parameters to handle the differences.
Whenever a lang_item is required, some relevant message is displayed, often with
a span of what triggered the usage of the lang item.
Now "hello word" is as small as:
```rust
#[no_std];

extern {
    fn puts(s: *u8);
}

extern "rust-intrinsic" {
    fn transmute<T, U>(t: T) -> U;
}

#[start]
fn main(_: int, _: **u8, _: *u8) -> int {
    unsafe {
        let (ptr, _): (*u8, uint) = transmute("Hello!");
        puts(ptr);
    }
    return 0;
}
```
Allowing them in type signatures is a significant amount of extra work, unfortunately. This also doesn't apply to static values, which take a different code path.
As per @pcwalton's request, `debug!(..)` is only activated when the `debug` cfg is set; that is, for `RUST_LOG=some_module=4 ./some_program` to work, it needs to be compiled with `rustc --cfg debug some_program.rs`. (Although, there is the sneaky `__debug!(..)` macro that is always active, if you *really* need it.)
It functions by making `debug!` expand to `if false { __debug!(..) }` (expanding to an `if` like this is required to make sure `debug!` statements are typechecked and to avoid unused variable warnings), and adjusting trans to skip the pointless branches in `if true ...` and `if false ...`.
The conditional expansion change also required moving the inject-std-macros step into a new pass, and makes it actually insert them at the top of the crate; this means that the cfg stripping traverses over the macros and so filters out the unused ones.
This appears to take an unoptimised build of `librustc` from 65s to 59s, and the full bootstrap from 18m41s to 18m26s on my computer (with general background use).
`./configure --enable-debug` will enable `debug!` statements in the bootstrap build.
That is, the `b` branch in `if true { a } else { b }` will not be
trans'd, and that expression will be exactly the same as `a`. This
means that, for example, macros conditionally expanding to `if false
{ .. }` (like debug!) will not waste time in LLVM (or trans).
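Putting the two pieces together, a sketch of the effect at a `debug!` call site:

```rust
fn step(x: int) {
    // `debug!("x = %?", x)` expands to roughly the following; the body is
    // still typechecked (so `x` must exist), but never translated:
    if false { __debug!("x = %?", x) }
}
```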
An alloca in an unreachable block would short-circuit with Undef, but with
type `Type` rather than type `*Type` (i.e. a plain value, not a pointer),
even though it is expected to return a pointer into the stack, leading to
confusion and LLVM asserts later.
Similarly, attaching the range metadata to a Load in an unreachable block
makes LLVM unhappy, since the Load returns Undef.
Fixes #7344.