Not only can discriminants be smaller than int now, but they can be
larger than int on 32-bit targets. This has obvious implications for the
reflection interface. Without this change, things fail with LLVM
assertions when we try to "extend" i64 to i32.
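To make the failure mode concrete, here is a minimal sketch in modern Rust syntax (the original code base used 2013-era Rust) of an enum whose discriminant needs more than 32 bits:

#[repr(i64)]
enum Big {
    Small = 1,
    // Needs 64 bits even on targets where `int` is only 32 bits wide.
    Huge = 0x1_0000_0000,
}

fn main() {
    assert_eq!(Big::Huge as i64, 0x1_0000_0000);
}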
Previously an ExprLit was created *per byte*, causing huge memory bloat. This
adds a new `lit_binary` to contain a literal of binary data, which is currently
only used by the `include_bin!` syntax extension. This massively speeds up
compilation of the shootout-k-nucleotide-pipes test:
before:
  time:   469s
  memory: 6GB
  assertion failure in LLVM (section too large)
after:
  time:   2.50s
  memory: 124MB
Closes #2598
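For reference, a hedged sketch of the affected use case, written with the modern `include_bytes!` macro (the successor of `include_bin!`); the file name is hypothetical:

// Embeds the whole file as one binary literal instead of one ExprLit per byte.
static DATA: &[u8] = include_bytes!("data.bin");

fn main() {
    println!("{} bytes embedded", DATA.len());
}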
This fixes two existing bugs along the way:
* The `transmute` intrinsic did not correctly handle casts of immediate
aggregates like newtype structs and tuples.
* The code for calling foreign functions used the wrong type to create
an `alloca` temporary.
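A minimal sketch (modern Rust) of the shape of the first bug: a `transmute` between an immediate aggregate, such as a newtype struct, and its underlying value. The commit's own before/after IR example for returning a small enum follows below.

use std::mem;

// A newtype struct is represented as an immediate value, not in memory.
struct Inches(u64);

fn main() {
    let length = Inches(42);
    // Casting the immediate aggregate to its contents; this is the kind
    // of cast the intrinsic used to mishandle.
    let raw: u64 = unsafe { mem::transmute(length) };
    assert_eq!(raw, 42);
}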
enum Foo { A, B }
fn foo() -> Foo { A }
Before:
; Function Attrs: nounwind uwtable
define void @_ZN3foo18hbedc642d5d9cf5aag4v0.0E(%enum.Foo* noalias nocapture sret, { i64, %tydesc*, i8*, i8*, i8 }* nocapture readnone) #0 {
"function top level":
%2 = getelementptr inbounds %enum.Foo* %0, i64 0, i32 0
store i64 0, i64* %2, align 8
ret void
}
After:
; Function Attrs: nounwind readnone uwtable
define %enum.Foo @_ZN3foo18hbedc642d5d9cf5aag4v0.0E({ i64, %tydesc*, i8*, i8*, i8 }* nocapture readnone) #0 {
"function top level":
ret %enum.Foo zeroinitializer
}
Use &mut Block and &Block references where possible in the builder
functions in trans::build.
@mut Block remains in a few functions where I could not (at least not
yet) track down the runtime borrowck failures.
This removes another large chunk of the odd 'clownshoes' identifier showing up
in symbol names. These all originated from external crates because the items
were encoded independently of the paths calculated in ast_map. The encoding of
these paths now uses the helper function in ast_map to calculate the "pretty
name" for an impl block.
Unfortunately there is still no information about generics in the symbol name,
but it's certainly vastly better than before:
hash::__extensions__::write::_version::v0.8
becomes
hash::Writer$SipState::write::hversion::v0.8
This also fixes bugs in which lots of methods would show up as `meth_XXX`; they
now show up as just `meth` and append some extra characters to the version
string.
These commits fix bugs related to identically named statics inside the functions of impls in various situations. The commit messages have most of the information about what bugs are being fixed and why.
As a bonus, while I was messing around with name mangling, I improved the backtraces we'll get in gdb by removing `__extensions__` for the trait/type being implemented and by adding the method name as well. Yay!
Storing the type name in the `tydesc` aims to avoid the need to pass a type name in almost every single visitor method.
It would likely be much saner for `repr` to simply be passed the `TyDesc` corresponding to the function or just the type name, but this is good enough for now.
As with the previous commit, this is targeted at removing the possibility of
collisions between statics. The main use case here is when there's a
type-parametric function with an inner static that's compiled as a library.
Before this commit, any impl would generate a path item of "__extensions__".
This commit changes that identifier to a "pretty name", which is either the
last element of the path of the trait implemented or the last element of the
path of the type being implemented. That doesn't quite cut it on its own, so
the (trait, type) pair is also hashed and used to append information to the
symbol.
Essentially, __extensions__ was replaced with something nicer for debugging,
and then some more information was added to the symbol name by including a hash
of the trait being implemented and the type it's being implemented for. This
should prevent colliding names for inner statics in regular functions with
similar names.
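A hypothetical illustration (modern Rust; all names are invented) of the collision this guards against: two impls whose methods each contain a static with the same name.

trait Speak { fn noise(&self) -> u32; }

struct Dog;
struct Cat;

impl Speak for Dog {
    fn noise(&self) -> u32 {
        static COUNT: u32 = 1; // same name as Cat's static below
        COUNT
    }
}

impl Speak for Cat {
    fn noise(&self) -> u32 {
        static COUNT: u32 = 2;
        COUNT
    }
}

fn main() {
    // Without the (trait, type) hash in the mangled symbol, both COUNT
    // statics could end up with colliding names.
    assert_eq!(Dog.noise() + Cat.noise(), 3);
}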
This requires changes to method search and to codegen. We now emit a
vtable for objects that includes methods from all supertraits.
Closes #4100.
Also, actually populate the cache for vtables, and key it by type so that it
works.
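A minimal sketch (modern Rust) of what the supertrait change enables: calling a supertrait method through an object of the subtrait, which only works if the object's vtable carries the supertrait's methods too.

trait Animal { fn name(&self) -> &'static str; }
trait Pet: Animal { fn owner(&self) -> &'static str; }

struct Dog;
impl Animal for Dog { fn name(&self) -> &'static str { "dog" } }
impl Pet for Dog { fn owner(&self) -> &'static str { "me" } }

fn main() {
    let pet: &dyn Pet = &Dog;
    // `name` is defined on the supertrait but dispatched via Pet's vtable.
    assert_eq!(pet.name(), "dog");
    assert_eq!(pet.owner(), "me");
}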
LLVMConstStringInContext() doesn't need a null-terminated string. It
takes a length instead. Using .to_c_str() here triggers an ICE whenever
the string literal embeds a null, as in "\x00".
.with_c_str() is a replacement for the old .as_c_str(), to avoid
unnecessary boilerplate.
Replace all usages of .to_c_str().with_ref() with .with_c_str().
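The old string APIs are long gone, but the pattern survives in today's std::ffi::CString; a hedged sketch of what `.with_c_str()` expressed (the FFI function here is hypothetical):

use std::ffi::CString;
use std::os::raw::c_char;

fn takes_c_string(_p: *const c_char) { /* stands in for an FFI call */ }

fn main() {
    // Roughly the shape of `"hello".with_c_str(|p| takes_c_string(p))`:
    // the pointer stays valid only while `s` is alive.
    let s = CString::new("hello").unwrap();
    takes_c_string(s.as_ptr());
}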
what amount a T* pointer must be adjusted to reach the contents
of the box. For `~T` types, this requires knowing the type `T`,
which is not known in the case of objects.
- Made naming schemes consistent between Option, Result and Either
- Changed Option's `Add` implementation to work like the maybe monad (return `None` if either of the inputs is `None`)
- Removed duplicate Option::get and renamed all related functions to use the term `unwrap` instead
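A hedged sketch of the described `Add` semantics on a stand-in wrapper (`Maybe` is hypothetical; the commit changed `Option` itself): the sum is `None` if either input is `None`.

use std::ops::Add;

struct Maybe<T>(Option<T>);

impl<T: Add<Output = T>> Add for Maybe<T> {
    type Output = Maybe<T>;
    fn add(self, rhs: Maybe<T>) -> Maybe<T> {
        match (self.0, rhs.0) {
            // Both inputs present: add the contents.
            (Some(a), Some(b)) => Maybe(Some(a + b)),
            // Any missing input poisons the result.
            _ => Maybe(None),
        }
    }
}

fn main() {
    assert_eq!((Maybe(Some(1)) + Maybe(Some(2))).0, Some(3));
    assert_eq!((Maybe(Some(1)) + Maybe(None)).0, None::<i32>);
}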
This is a cleanup pull request that does:
* removes `os::as_c_charp`
* moves `str::as_buf` and `str::as_c_str` into `StrSlice`
* converts some functions from `StrSlice::as_buf` to `StrSlice::as_c_str`
* renames `StrSlice::as_buf` to `StrSlice::as_imm_buf` (and adds `StrSlice::as_mut_buf` to match `vec.rs`)
* renames `UniqueStr::as_bytes_with_null_consume` to `UniqueStr::to_bytes`
* and other misc cleanups and minor optimizations
Continuation of https://github.com/mozilla/rust/pull/7826.
AST spanned<T> refactoring, AST type renamings:
`crate => Crate`
`local => Local`
`blk => Block`
`crate_num => CrateNum`
`crate_cfg => CrateConfig`
`field => Field`
Also, Crate, Field and Local are not wrapped in spanned<T> anymore.
These blocks were required because previously we could only insert
instructions at the end of blocks, but we wanted to have all allocas in
one place so that they can be collapsed. But now we have direct access
to the LLVM IR builder and can position it freely. This allows us to use
the same trick that clang uses: we insert a dummy "marker" instruction
to identify the spot at which we want to insert allocas. We can then
later position the IR builder at that spot and insert the alloca
instruction, without any dedicated block.
The block for loading the closure environment can now also go away,
because the function context now provides the top-level block, and the
translation of the loading happens first, so that's good enough.
Makes the LLVM IR a bit more readable and saves a bunch of branches in
the unoptimized code, which benefits unoptimized builds.
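A toy sketch of the marker trick with invented names (strings stand in for LLVM instructions; the real code drives the LLVM IR builder):

#[derive(Clone, Copy)]
struct InsertPoint(usize);

struct Block { instrs: Vec<String> }

impl Block {
    // Emit the dummy marker at the top of the entry block.
    fn insert_marker(&mut self) -> InsertPoint {
        self.instrs.push("; alloca insertion marker".to_string());
        InsertPoint(self.instrs.len() - 1)
    }
    // Later, reposition "the builder" just before the marker and insert
    // the alloca there, without any dedicated alloca block.
    fn alloca_before(&mut self, at: InsertPoint, name: &str) {
        self.instrs.insert(at.0, format!("%{} = alloca i64", name));
    }
}

fn main() {
    let mut entry = Block { instrs: Vec::new() };
    let marker = entry.insert_marker();
    entry.instrs.push("store i64 0, i64* %x".to_string());
    entry.alloca_before(marker, "x");
    for i in &entry.instrs { println!("{}", i); }
}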
This is the first of a series of refactorings to get rid of the `codemap::spanned<T>` struct (see this thread for more information: https://mail.mozilla.org/pipermail/rust-dev/2013-July/004798.html).
The changes in this PR should not change any semantics, just rename `ast::blk_` to `ast::blk` and add a span field to it. 95% of the changes were of the form `block.node.id` -> `block.id`. Only some transformations in `libsyntax::fold` were not entirely trivial.
Currently, we always create a dedicated "return" basic block, but when
there's only a single predecessor for that block, it can be merged with
that predecessor. We can achieve that merge by only creating the return
block on demand, avoiding its creation when it's not required.
Reduces the pre-optimization size of librustc.ll created with --passes ""
by about 90k lines, which equals about 4%.
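A small sketch (hypothetical names) of the on-demand pattern: the return block is materialized only when something actually branches to it.

struct FnCtxt { ret_block: Option<String> }

impl FnCtxt {
    // Create the return block only when first requested, so it is never
    // built for functions where nothing needs to branch to it.
    fn get_ret_block(&mut self) -> &str {
        if self.ret_block.is_none() {
            self.ret_block = Some("return".to_string());
        }
        self.ret_block.as_deref().unwrap()
    }
}

fn main() {
    let mut fcx = FnCtxt { ret_block: None };
    // With a single predecessor, LLVM can merge this block away entirely.
    println!("br label %{}", fcx.get_ret_block());
}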
This is work continued from the now landed #7495 and #7521 pulls.
Removing the headers from unique vectors is another project, so I've separated the allocator.
This way when you compile with -Z trans-stats you'll get a per-function cost breakdown, sorted with the most expensive functions first. Should help highlight pathological code.
Currently, scopes are tied to LLVM basic blocks. For each scope, there
are two new basic blocks, which means two extra jumps in the unoptimized
IR. These blocks aren't actually required, but only used to act as the
boundary for cleanups.
By keeping track of the current scope within a single basic block, we
can avoid those extra blocks and jumps, shrinking the pre-optimization
IR quite considerably. For example, the IR for trans_intrinsic goes
from ~22k lines to ~16k lines, almost 30% less.
The impact on the build times of optimized builds is rather small (about
1%), but unoptimized builds are about 11% faster. The testsuite for
unoptimized builds runs between 15% (CPU time) and 7.5% (wallclock time on
my i7) faster.
Also, in some situations this helps LLVM to generate better code by
inlining functions that it previously considered to be too large.
Likely because of the pointless blocks/jumps that were still present at
the time the inlining pass runs.
Refs #7462
Continuation of #7430.
I haven't removed the `map` method, since the replacement `v.iter().transform(f).collect::<~[SomeType]>()` is a little ridiculous at the moment.
With these changes, exchange allocator headers are never initialized, read or written to. Removing the header will now just involve updating the code in trans that uses an offset, so that it only does so if the contained type is managed.
The only thing blocking the removal of the initialization of the last field in the header was ~fn, since it uses that field to store the dynamic size/type arising from captures. I temporarily switched it to a `closure_exchange_alloc` lang item (it uses the same `exchange_free`) and #7496 is filed about removing that.
Since the `exchange_free` call is now inlined all over the codebase, I don't think we should have an assert for null. It doesn't currently ever happen, but it would be fine if we started generating code that did. The `exchange_free` function also had a comment declaring that it must not fail, but a regular assert would cause a failure. I also removed the atomic counter because valgrind can already find these leaks, and we have valgrind bots now.
Note that exchange free does not currently print an error on out-of-memory when it aborts, because our `io` code may allocate. We could probably get away with a `#[rust_stack]` call to a `stdio` function, but it would be better to make a write system call.
Currently we pass all "self" arguments by reference; for the pointer
variants this means that we end up with double indirection, which causes
an unnecessary performance hit.
The fix itself is pretty straightforward and just means that "self"
needs to be handled like any other argument, except for by-value "self",
which still needs to be passed by reference. This is because
non-pointer types can't just be stuffed into the environment slot which
is used to pass "self".
What made things tricky is that there was also a bug in the typechecker
where the method map entries are created. For type impls, that stored
the base type instead of the actual self-type in the method map, e.g.
Foo instead of &Foo for &self. That worked with pass-by-reference, but
fails with pass-by-value which needs the real type.
Code that makes use of methods seems to be about 10% faster with this
change. Also, build times are reduced by about 4%.
Fixes #4355, #4402, #5280, #4406 and #7285
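To ground the indirection point, a minimal sketch (modern Rust; the 2013 pointer sigils @T and ~T are gone): `self` in a `&self` method is already a pointer, so passing it by reference again meant loading through two pointers per access.

struct Point { x: i64, y: i64 }

impl Point {
    // `self` here is already &Point; before this change it was in effect
    // passed as &&Point, i.e. double indirection.
    fn sum(&self) -> i64 { self.x + self.y }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    assert_eq!(p.sum(), 3);
}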
Instead of determining paths from the path tag, we iterate through
modules' children recursively in the metadata. This will allow for
lazy external module resolution.
Mostly just low-hanging fruit, i.e. function arguments that were @ even
though & would work just as well.
Reduces librustc.so size by 200k when compiling without -O, by 100k when
compiling with -O.
I removed the `static-method-test.rs` test because it was heavily based
on `BaseIter` and there are plenty of other more complex uses of static
methods anyway.