Use &mut Block and &Block references where possible in the builder
functions in trans::build.
@mut Block remains in a few functions where I have not (yet, at least)
been able to track down the runtime borrowck failures.
This removes another large chunk of the odd 'clownshoes' identifier showing up
in symbol names. These all originated from external crates because their items
were encoded independently of the paths calculated in ast_map. The encoding of
these paths now uses the helper function in ast_map to calculate the
"pretty name" for an impl block.
Unfortunately there is still no information about generics in the symbol name,
but it's certainly vastly better than before:
`hash::__extensions__::write::_version::v0.8`
becomes
`hash::Writer$SipState::write::hversion::v0.8`
This also fixes bugs in which lots of methods would show up as `meth_XXX`;
they now show up only as `meth`, with some extra characters appended to the
version string.
These commits fix bugs related to identically named statics in functions of implementations in various situations. The commit messages have most of the information about what bugs are being fixed and why.
As a bonus, while I was messing around with name mangling, I improved the backtraces we'll get in gdb by replacing `__extensions__` with the trait/type being implemented and by adding the method name as well. Yay!
Storing the type name in the `tydesc` aims to avoid the need to pass a type name in almost every single visitor method.
It would likely be much saner for `repr` to simply be passed the `TyDesc` corresponding to the function or just the type name, but this is good enough for now.
As with the previous commit, this is targeted at removing the possibility of
collisions between statics. The main use case here is when there's a
type-parametric function with an inner static that's compiled as a library.
Before this commit, any impl would generate a path item of "__extensions__".
This changes this identifier to be a "pretty name", which is either the last
element of the path of the trait implemented or the last element of the type's
path that's being implemented. That doesn't quite cut it though, so the (trait,
type) pair is hashed and again used to append information to the symbol.
Essentially, __extensions__ was replaced with something nicer for debugging,
and then some more information was added to the symbol name by including a hash
of the trait being implemented and the type it's being implemented for. This
should prevent colliding names for inner statics in regular functions with
similar names.
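For illustration, here is a rough modern-Rust sketch of the scheme described above; `impl_pretty_name` and its inputs are hypothetical, not the compiler's actual code, and `DefaultHasher` stands in for whatever hash the compiler used at the time:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Build a path element for an impl: the last element of the trait path plus the
// last element of the self type's path, followed by a hash of the whole
// (trait, type) pair so that similarly named impls cannot collide.
fn impl_pretty_name(trait_path: &[&str], ty_path: &[&str]) -> String {
    let trait_last = trait_path.last().expect("non-empty trait path");
    let ty_last = ty_path.last().expect("non-empty type path");
    let mut h = DefaultHasher::new();
    (trait_path, ty_path).hash(&mut h);
    format!("{}${}$h{:x}", trait_last, ty_last, h.finish())
}

fn main() {
    // Something like "Writer$SipState$h..." instead of "__extensions__".
    println!("{}", impl_pretty_name(&["std", "io", "Writer"], &["hash", "SipState"]));
}
```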
This requires changes to method search and to codegen. We now emit a
vtable for objects that includes methods from all supertraits.
Closes #4100.
Also, actually populate the cache for vtables, and key it by type so that it
works.
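In today's syntax, the behaviour this enables looks roughly like the following (the trait and type names are made up for illustration):

```rust
trait Animal { fn name(&self) -> &'static str; }
trait Dog: Animal { fn bark(&self) -> &'static str; }

struct Beagle;
impl Animal for Beagle { fn name(&self) -> &'static str { "Snoopy" } }
impl Dog for Beagle { fn bark(&self) -> &'static str { "woof" } }

fn main() {
    let d: &dyn Dog = &Beagle;
    // name() is a supertrait method, but it is reachable through the Dog
    // object because the object's vtable also carries Animal's methods.
    println!("{} says {}", d.name(), d.bark());
}
```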
LLVMConstStringInContext() doesn't need a null-terminated string. It
takes a length instead. Using .to_c_str() here triggers an ICE whenever
the string literal embeds a null, as in "\x00".
.with_c_str() is a replacement for the old .as_c_str(), to avoid
unnecessary boilerplate.
Replace all usages of .to_c_str().with_ref() with .with_c_str().
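A modern-Rust analogue of that pattern (the 2013 signatures differ, and `with_c_str` here is a hand-written stand-in, not today's standard library): instead of building a C string and then separately borrowing its pointer, a single closure-taking helper does both steps.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Hand-rolled stand-in for the old .with_c_str(): allocate a NUL-terminated
// buffer and hand its pointer to the closure, all in one call.
fn with_c_str<T, F: FnOnce(*const c_char) -> T>(s: &str, f: F) -> T {
    let c = CString::new(s).expect("no interior NUL bytes");
    f(c.as_ptr())
}

fn main() {
    // Walk the NUL-terminated buffer the way C code would.
    let len = with_c_str("hello", |p| unsafe {
        let mut n = 0;
        while *p.add(n) != 0 {
            n += 1;
        }
        n
    });
    assert_eq!(len, 5);
}
```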
by what amount a T* pointer must be adjusted to reach the contents
of the box. For `~T` types, this requires knowing the type `T`,
which is not known in the case of objects.
- Made naming schemes consistent between Option, Result and Either
- Changed Option's `Add` implementation to work like the maybe monad (return `None` if any of the inputs is `None`)
- Removed the duplicate `Option::get` and renamed all related functions to use the term `unwrap` instead
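A rough modern-Rust sketch of that `Add` behaviour; the newtype wrapper (`MaybeSum`, made up here) is only there because today's coherence rules don't allow adding the impl to `Option` outside of std, whereas the 2013 change was on `Option` itself:

```rust
use std::ops::Add;

struct MaybeSum<T>(Option<T>);

impl<T: Add<Output = T>> Add for MaybeSum<T> {
    type Output = MaybeSum<T>;
    fn add(self, rhs: MaybeSum<T>) -> MaybeSum<T> {
        match (self.0, rhs.0) {
            // Only a Some on both sides produces a Some.
            (Some(a), Some(b)) => MaybeSum(Some(a + b)),
            // Any None makes the whole sum None, like the maybe monad.
            _ => MaybeSum(None),
        }
    }
}

fn main() {
    assert_eq!((MaybeSum(Some(2)) + MaybeSum(Some(3))).0, Some(5));
    assert_eq!((MaybeSum(Some(2)) + MaybeSum(None)).0, None);
}
```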
This is a cleanup pull request that does:
* removes `os::as_c_charp`
* moves `str::as_buf` and `str::as_c_str` into `StrSlice`
* converts some functions from `StrSlice::as_buf` to `StrSlice::as_c_str`
* renames `StrSlice::as_buf` to `StrSlice::as_imm_buf` (and adds `StrSlice::as_mut_buf` to match `vec.rs`)
* renames `UniqueStr::as_bytes_with_null_consume` to `UniqueStr::to_bytes`
* and other misc cleanups and minor optimizations
Continuation of https://github.com/mozilla/rust/pull/7826.
AST spanned<T> refactoring, AST type renamings:
`crate => Crate`
`local => Local`
`blk => Block`
`crate_num => CrateNum`
`crate_cfg => CrateConfig`
`field => Field`
Also, Crate, Field and Local are not wrapped in spanned<T> anymore.
`crate => Crate`
`local => Local`
`blk => Block`
`crate_num => CrateNum`
`crate_cfg => CrateConfig`
Also, Crate and Local are not wrapped in spanned<T> anymore.
These blocks were required because previously we could only insert
instructions at the end of blocks, but we wanted to have all allocas in
one place, so they can be collapsed. But now we have "direct" access to
the LLVM IR builder and can position it freely. This allows us to use
the same trick that clang uses, which means that we insert a dummy
"marker" instruction to identify the spot at which we want to insert
allocas. We can then later position the IR builder at that spot and
insert the alloca instruction, without any dedicated block.
The block for loading the closure environment can now also go away,
because the function context now provides the toplevel block, and the
translation of the loading happens first, so that's good enough.
This makes the LLVM IR a bit more readable and saves a bunch of branches,
which benefits unoptimized builds.
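A toy sketch of the marker idea, abstracted away from LLVM (the `Inst`/`Block` types here are stand-ins, not rustc's or LLVM's): remember where a dummy marker sits in the entry block, and splice each new alloca in right before it, with no dedicated alloca block.

```rust
#[derive(Debug, PartialEq)]
enum Inst {
    AllocaMarker,
    Alloca(&'static str),
    Other(&'static str),
}

struct Block {
    insts: Vec<Inst>,
}

impl Block {
    // Insert an alloca just before the marker, keeping all allocas grouped
    // at the top of the entry block.
    fn insert_alloca(&mut self, name: &'static str) {
        let pos = self
            .insts
            .iter()
            .position(|i| *i == Inst::AllocaMarker)
            .expect("entry block should contain the alloca marker");
        self.insts.insert(pos, Inst::Alloca(name));
    }
}

fn main() {
    let mut entry = Block {
        insts: vec![Inst::AllocaMarker, Inst::Other("store"), Inst::Other("ret")],
    };
    entry.insert_alloca("x");
    entry.insert_alloca("y");
    // Allocas end up before the marker; no extra basic block was needed.
    println!("{:?}", entry.insts);
}
```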
This is the first of a series of refactorings to get rid of the `codemap::spanned<T>` struct (see this thread for more information: https://mail.mozilla.org/pipermail/rust-dev/2013-July/004798.html).
The changes in this PR should not change any semantics, just rename `ast::blk_` to `ast::blk` and add a span field to it. 95% of the changes were of the form `block.node.id` -> `block.id`. Only some transformations in `libsyntax::fold` were not entirely trivial.
Currently, we always create a dedicated "return" basic block, but when
there's only a single predecessor for that block, it can be merged with
that predecessor. We can achieve that merge by only creating the return
block on demand, avoiding its creation when it's not required.
Reduces the pre-optimization size of librustc.ll created with --passes ""
by about 90k lines, which equals about 4%.
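A minimal sketch of the on-demand idea (the names are stand-ins, not rustc's): the return block is only materialized when something actually needs to branch to it.

```rust
struct FnCtxt {
    blocks: Vec<String>,
    ret_block: Option<usize>, // index into `blocks`, created lazily
}

impl FnCtxt {
    // Create the dedicated return block only on first request; a function
    // with a single fall-through return never gets the extra block.
    fn get_ret_block(&mut self) -> usize {
        if let Some(idx) = self.ret_block {
            return idx;
        }
        let idx = self.blocks.len();
        self.blocks.push("return".to_string());
        self.ret_block = Some(idx);
        idx
    }
}

fn main() {
    let mut fcx = FnCtxt { blocks: vec!["entry".to_string()], ret_block: None };
    assert_eq!(fcx.get_ret_block(), 1);
    assert_eq!(fcx.get_ret_block(), 1); // reused, not recreated
}
```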
This is work continued from the now landed #7495 and #7521 pulls.
Removing the headers from unique vectors is another project, so I've separated the allocator.
This way when you compile with -Z trans-stats you'll get a per-function cost breakdown, sorted with the most expensive functions first. Should help highlight pathological code.
Currently, scopes are tied to LLVM basic blocks. For each scope, there
are two new basic blocks, which means two extra jumps in the unoptimized
IR. These blocks aren't actually required, but only used to act as the
boundary for cleanups.
By keeping track of the current scope within a single basic block, we
can avoid those extra blocks and jumps, shrinking the pre-optimization
IR quite considerably. For example, the IR for trans_intrinsic goes
from ~22k lines to ~16k lines, almost 30% less.
The impact on the build times of optimized builds is rather small (about
1%), but unoptimized builds are about 11% faster. The testsuite for
unoptimized builds runs between 15% (CPU time) and 7.5% (wallclock time on
my i7) faster.
Also, in some situations this helps LLVM to generate better code by inlining
functions that it previously considered to be too large, likely because of the
pointless blocks/jumps that were still present at the time the inlining pass
runs.
Refs #7462
Continuation of #7430.
I haven't removed the `map` method, since the replacement `v.iter().transform(f).collect::<~[SomeType]>()` is a little ridiculous at the moment.
With these changes, exchange allocator headers are never initialized, read or written to. Removing the header will now just involve updating the code in trans using an offset to only do it if the type contained is managed.
The only thing blocking removing the initialization of the last field in the header was ~fn since it uses it to store the dynamic size/types due to captures. I temporarily switched it to a `closure_exchange_alloc` lang item (it uses the same `exchange_free`) and #7496 is filed about removing that.
Since the `exchange_free` call is now inlined all over the codebase, I don't think we should have an assert for null. It doesn't currently ever happen, but it would be fine if we started generating code that did do it. The `exchange_free` function also had a comment declaring that it must not fail, but a regular assert would cause a failure. I also removed the atomic counter because valgrind can already find these leaks, and we have valgrind bots now.
Note that exchange free does not currently print an out-of-memory error when it aborts, because our `io` code may allocate. We could probably get away with a `#[rust_stack]` call to a `stdio` function, but it would be better to make a write system call.
Currently we pass all "self" arguments by reference, for the pointer
variants this means that we end up with double indirection, which causes
an unnecessary performance hit.
The fix itself is pretty straight-forward and just means that "self"
needs to be handled like any other argument, except for by-value "self"
which still needs to be passed by reference. This is because
non-pointer types can't just be stuffed into the environment slot which
is used to pass "self".
What made things tricky is that there was also a bug in the typechecker
where the method map entries are created. For type impls, that stored
the base type instead of the actual self-type in the method map, e.g.
Foo instead of &Foo for &self. That worked with pass-by-reference, but
fails with pass-by-value which needs the real type.
Code that makes use of methods seems to be about 10% faster with this
change. Also, build times are reduced by about 4%.
Fixes #4355, #4402, #5280, #4406 and #7285
Instead of determining paths from the path tag, we iterate through
modules' children recursively in the metadata. This will allow for
lazy external module resolution.
Mostly just low-hanging fruit, i.e. function arguments that were @ even
though & would work just as well.
Reduces librustc.so size by 200k when compiling without -O, by 100k when
compiling with -O.
I removed the `static-method-test.rs` test because it was heavily based
on `BaseIter` and there are plenty of other more complex uses of static
methods anyway.
The removed test for issue #2611 is well covered by the `std::iterator`
module itself.
This adds the `count` method to `IteratorUtil` to replace `EqIter`.
Currently, cleanup blocks are only reused when there are nested scopes: the
child scope's cleanup block terminates with a jump to the parent scope's
cleanup block. But within a single scope, adding or revoking
any cleanup will force a fresh cleanup block. This means quadratic
growth with the number of allocations in a scope, because each
allocation needs a landing pad.
Instead of forcing a fresh cleanup block, we can keep a list of chained
cleanup blocks that form a prefix of the currently required cleanups.
That way, the next cleanup block only has to handle newly added
cleanups. And by keeping the whole list instead of just the latest
block, we can also handle revocations more efficiently, by only
dropping those blocks that are no longer required, instead of all of
them.
Reduces the size of librustc by about 5% and the time required to build
it by about 10%.
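A toy model of that chaining scheme (the data structures and labels are made up, not rustc's): each cached block handles a prefix of the current cleanup list and jumps to the previously cached block, so a new block only has to cover the newly added tail, and a revocation only drops the blocks whose prefix includes the revoked cleanup.

```rust
struct CachedBlock {
    covers: usize, // this block handles cleanups[0..covers]
    label: String,
}

struct Scope {
    cleanups: Vec<&'static str>,
    cached: Vec<CachedBlock>, // prefixes, in increasing `covers` order
}

impl Scope {
    fn add_cleanup(&mut self, c: &'static str) {
        // Existing cached blocks still cover a valid prefix, so they are kept.
        self.cleanups.push(c);
    }

    fn revoke_cleanup(&mut self, c: &'static str) {
        if let Some(i) = self.cleanups.iter().position(|&x| x == c) {
            self.cleanups.remove(i);
            // Only blocks that covered the revoked cleanup become invalid.
            self.cached.retain(|b| b.covers <= i);
        }
    }

    fn cleanup_block(&mut self) -> &CachedBlock {
        let already = self.cached.last().map_or(0, |b| b.covers);
        if already < self.cleanups.len() {
            // Emit a block for the new tail only; it chains to the previous one.
            let label = format!("cleanup_{}_{}", already, self.cleanups.len());
            self.cached.push(CachedBlock { covers: self.cleanups.len(), label });
        }
        self.cached.last().expect("at least one cleanup registered")
    }
}

fn main() {
    let mut s = Scope { cleanups: Vec::new(), cached: Vec::new() };
    s.add_cleanup("free a");
    println!("{}", s.cleanup_block().label); // covers "free a"
    s.add_cleanup("free b");
    println!("{}", s.cleanup_block().label); // only handles "free b", chains back
    s.revoke_cleanup("free b");
    println!("{}", s.cleanup_block().label); // the "free a" block is still cached
}
```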
Remove all the explicit @mut-fields from CrateContext, though many
fields are still @-ptrs.
This required changing every single function call that explicitly
took a @CrateContext, so I took advantage and changed as many as I
could get away with to &-ptrs or &mut ptrs.
This un-reverts the reverts of the rusti commits made a while back. These were reverted for an LLVM failure in rustpkg. I believe that this is not a problem with these commits, but rather that rustc is being used in parallel for rustpkg tests (in-process). This is not working yet (almost! see #7011), so I serialized all the tests to run one after another.
@brson, I'm mainly just guessing as to the cause of the LLVM failures in rustpkg tests. I'm confident that running tests in parallel is more likely to be the problem than those commits I made.
Additionally, this fixes two recently reported issues with rusti.
The lookups for these items in external crates currently cause repeated
decoding of the EBML metadata, which is pretty slow. Adding caches to
avoid the repeated decoding reduces the time required for the type
checking of librustc by about 25%.
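The shape of such a cache, as a minimal modern-Rust sketch (the `DefId`/`ItemData` types and `decode_item` are placeholders, not the real metadata API):

```rust
use std::collections::HashMap;

type DefId = (u32, u32); // (crate, node) stand-in

#[derive(Clone, Debug)]
struct ItemData {
    name: String,
}

struct CrateMetadata {
    cache: HashMap<DefId, ItemData>,
}

impl CrateMetadata {
    fn get_item(&mut self, id: DefId) -> ItemData {
        if let Some(hit) = self.cache.get(&id) {
            return hit.clone(); // no repeated EBML decoding
        }
        let decoded = decode_item(id); // the slow path, done once per item
        self.cache.insert(id, decoded.clone());
        decoded
    }
}

// Placeholder for the expensive EBML decoding step.
fn decode_item(id: DefId) -> ItemData {
    ItemData { name: format!("item {:?}", id) }
}

fn main() {
    let mut md = CrateMetadata { cache: HashMap::new() };
    let a = md.get_item((1, 42)); // decodes
    let b = md.get_item((1, 42)); // cache hit
    assert_eq!(a.name, b.name);
}
```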
This almost removes the StringRef wrapper, since all strings are
Equiv-alent now. Removes a lot of `/* bad */ copy *`'s, and converts
several things to be &'static str (the lint table and the intrinsics
table).
There are many instances of .to_managed(), unfortunately.
This was a lot more painful than just changing `x.each` to `x.iter().advance` . I ran into my old friend #5898 and had to add underscores to some method names as a temporary workaround.
The borrow checker also had other ideas because rvalues aren't handled very well yet so temporary variables had to be added. However, storing the temporary in a variable led to dynamic `@mut` failures, so those had to be wrapped in blocks except where the scope ends immediately.
Anyway, the ugliness will be fixed as the compiler issues are fixed and this change will amount to `for x.each |x|` becoming `for x.iter |x|` and making all the iterator adaptors available.
I dropped the run-pass tests for `old_iter` because there's not much point in fixing a module that's on the way out in the next week or so.
Closes #6183.
The first commit changes the compiler's method of treating a `for` loop, and all the remaining commits are just dealing with the fallout.
The biggest fallout was the `IterBytes` trait, although it's really a whole lot nicer now because all of the `iter_bytes_XX` methods are just and-ed together. Sadly there was a huge amount of stuff that's `cfg(stage0)` gated, but whoever lands the next snapshot is going to have a lot of fun deleting all this code!
&str can be turned into @~str on demand, using to_owned(), so for
strings, we can create a specialized interner that accepts &str for
intern() and find() but stores and returns @~str.
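Roughly, in modern Rust (with `String` standing in for the `@~str` of the time, and the structure only sketched, not copied from libsyntax):

```rust
use std::collections::HashMap;

struct StrInterner {
    map: HashMap<String, u32>,
    vect: Vec<String>,
}

impl StrInterner {
    fn new() -> StrInterner {
        StrInterner { map: HashMap::new(), vect: Vec::new() }
    }

    // Accepts a borrowed &str; only allocates an owned copy on a cache miss.
    fn intern(&mut self, val: &str) -> u32 {
        if let Some(&idx) = self.map.get(val) {
            return idx;
        }
        let idx = self.vect.len() as u32;
        self.map.insert(val.to_owned(), idx);
        self.vect.push(val.to_owned());
        idx
    }

    fn find(&self, val: &str) -> Option<u32> {
        self.map.get(val).copied()
    }

    fn get(&self, idx: u32) -> &str {
        &self.vect[idx as usize]
    }
}

fn main() {
    let mut i = StrInterner::new();
    let foo = i.intern("foo");
    assert_eq!(i.intern("foo"), foo); // same string, same index
    assert_eq!(i.find("bar"), None);
    assert_eq!(i.get(foo), "foo");
}
```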
A struct (including tuple structs) can be annotated with #[packed], so that there
is no padding between its elements, like GCC's `__attribute__((packed))`.
Closes #1704
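For illustration, the same behaviour in today's spelling (`#[repr(packed)]`; the 2013 attribute was written `#[packed]`):

```rust
#[allow(dead_code)]
#[repr(packed)]
struct Packed {
    a: u8,
    b: u32,
}

#[allow(dead_code)]
struct Padded {
    a: u8,
    b: u32,
}

fn main() {
    // No padding between the fields of the packed struct.
    assert_eq!(std::mem::size_of::<Packed>(), 5);
    // The default layout needs padding so the u32 stays 4-byte aligned.
    assert_eq!(std::mem::size_of::<Padded>(), 8);
}
```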
- In a TraitRef, use the self type consistently to refer to the Self type:
- trait ref in `impl Trait<A,B,C> for S` has a self type of `S`.
- trait ref in `A:Trait` has the self type `A`
- trait ref associated with a trait decl has self type `Self`
- trait ref associated with a supertype has self type `Self`
- trait ref in an object type `@Trait` has no self type
- Rewrite `each_bound_traits_and_supertraits` to perform
substitutions as it goes, and thus yield a series of trait refs
that are always in the same 'namespace' as the type parameter
bound given as input. Before, we left this to the caller, but
this doesn't work because the caller lacks adequate information
to perform the type substitutions correctly.
- For provided methods, substitute the generics involved in the provided
method correctly.
- Introduce TypeParameterDef, which tracks the bounds declared on a type
parameter and brings them together with the def_id and (in the future)
other information (maybe even the parameter's name!).
- Introduce the Subst trait, which helps to clean up a lot of the
repetitive code involved in doing type substitution (a rough sketch follows below).
- Introduce Repr trait, which makes debug printouts far more convenient.
Fixes #4183. Needed for #5656.
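A toy version of the Subst idea over a made-up mini type grammar (none of this is rustc's real code): one trait carries the substitution recursively, so containers like `Vec` get it for free instead of repeating the plumbing.

```rust
#[derive(Clone, Debug)]
enum Ty {
    Param(usize), // the i-th type parameter
    Int,
    Ptr(Box<Ty>),
}

trait Subst {
    fn subst(&self, params: &[Ty]) -> Self;
}

impl Subst for Ty {
    fn subst(&self, params: &[Ty]) -> Ty {
        match self {
            Ty::Param(i) => params[*i].clone(),
            Ty::Int => Ty::Int,
            Ty::Ptr(inner) => Ty::Ptr(Box::new(inner.subst(params))),
        }
    }
}

// Substitution distributes over containers without any extra per-use code.
impl<T: Subst> Subst for Vec<T> {
    fn subst(&self, params: &[Ty]) -> Vec<T> {
        self.iter().map(|t| t.subst(params)).collect()
    }
}

fn main() {
    let tys = vec![Ty::Param(0), Ty::Ptr(Box::new(Ty::Param(0)))];
    // Substitute the first (and only) parameter with Int.
    println!("{:?}", tys.subst(&[Ty::Int]));
}
```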
bare function store (which is not in fact a kind of value) but rather
ty::TraitRef. Removes many uses of fail!() and other telltale signs of
type-semantic mismatch.
cc #4183 (not a fix, but related)
I believe this patch incorporates all expected syntax changes from extern
function reform (#3678). You can now write things like:
extern "<abi>" fn foo(s: S) -> T { ... }
extern "<abi>" mod { ... }
extern "<abi>" fn(S) -> T
The ABI for foreign functions is taken from this syntax (rather than from an
annotation). We support the full ABI specification I described on the mailing
list. The correct ABI is chosen based on the target architecture.
Calls by pointer to C functions are not yet supported, and the Rust type of
crust fns is still *u8.
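A concrete instance of the first and third forms in today's Rust, using the "C" ABI (the `extern "<abi>" mod { ... }` form roughly corresponds to today's plain `extern "C" { ... }` blocks):

```rust
// extern "<abi>" fn item: the ABI comes from the string in the syntax itself.
extern "C" fn callback(x: i32) -> i32 {
    x + 1
}

fn main() {
    // extern "<abi>" fn(S) -> T as a type: a C-ABI function pointer.
    let f: extern "C" fn(i32) -> i32 = callback;
    assert_eq!(f(41), 42);
}
```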
Various FIXME comments were added to denote copies which, when removed, cause
the compiler to segfault at some point before stage2. None of these copies
should even be necessary.
and from typeck, which is verboten. We are supposed to write inference results
into the FnCtxt and then these get copied over in writeback. Add assertions
that no inference by-products are added to this table.
Fixes #3888, fixes #4036, fixes #4492
Later changes on this branch adapt the rest of rustc::middle::trans
to use this module instead of scattered hard-coded knowledge of
representations; a few of them also have improvements or cleanup for
adt.rs (and many added comments) that weren't drastic enough to justify
changing history to move them into this commit.