Commit Graph

214 Commits

Author SHA1 Message Date
bors
9c1e530bde auto merge of #7826 : michaelwoerister/rust/end_of_spanned, r=cmr
This is the first of a series of refactorings to get rid of the `codemap::spanned<T>` struct (see this thread for more information: https://mail.mozilla.org/pipermail/rust-dev/2013-July/004798.html).

The changes in this PR should not change any semantics; they just rename `ast::blk_` to `ast::blk` and add a span field to it. 95% of the changes were of the form `block.node.id` -> `block.id`. Only some transformations in `libsyntax::fold` were not entirely trivial.
2013-07-17 09:49:43 -07:00
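A minimal before/after sketch of the shape change this PR describes, using simplified, invented type names rather than the actual 2013 AST definitions:

```rust
// Before: block data lives in `node`, the span in the `spanned` wrapper,
// so callers write `block.node.id`. After: the block carries its span
// directly, so callers write `block.id`. (Names are illustrative only.)
#[derive(Clone, Copy)]
struct Span { lo: u32, hi: u32 }

struct Spanned<T> { node: T, span: Span }
struct BlkData { id: u32 }          // roughly the old `ast::blk_`
struct Blk { id: u32, span: Span }  // roughly the new `ast::blk`

fn main() {
    let span = Span { lo: 0, hi: 1 };
    let before = Spanned { node: BlkData { id: 7 }, span };
    let after = Blk { id: 7, span };
    // The 95% case from the PR description: `block.node.id` -> `block.id`.
    assert_eq!(before.node.id, after.id);
    println!("spans: {}..{} and {}..{}",
             before.span.lo, before.span.hi, after.span.lo, after.span.hi);
}
```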
Michael Woerister
0cc70743d2 Made ast::blk not use spanned<T> anymore. 2013-07-17 08:21:46 +02:00
Alex Crichton
88a1b71305 Make all lang_items optional
Whenever a lang_item is required, a relevant message is displayed, often with the span of what triggered the use of the lang item.
2013-07-16 21:37:52 -07:00
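A rough sketch of the lookup pattern this commit describes; `LangItems`, `require_exchange_free`, and `Span` are invented names for illustration, not the rustc API of the time:

```rust
// When a lang item is needed but missing, report which one and which span
// triggered the requirement, instead of assuming it always exists.
struct Span { lo: usize, hi: usize }

struct LangItems {
    // Node id of the `exchange_free` lang item, if one is defined.
    exchange_free: Option<u32>,
}

impl LangItems {
    fn require_exchange_free(&self, user: Span) -> Result<u32, String> {
        self.exchange_free.ok_or_else(|| {
            format!(
                "language item `exchange_free` required at {}..{}, but not found",
                user.lo, user.hi
            )
        })
    }
}

fn main() {
    let items = LangItems { exchange_free: None };
    println!("{:?}", items.require_exchange_free(Span { lo: 10, hi: 24 }));
}
```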
Daniel Micay
e118555ce6 remove headers from unique vectors 2013-07-15 23:57:27 -04:00
Björn Steinbrink
5df2bb1bcc Avoid empty "static_allocas" blocks
When there are no allocas, we don't need a block for them.
2013-07-13 13:33:48 +02:00
Björn Steinbrink
dcd5d14e6c Avoid return blocks that have only a single predecessor
Currently, we always create a dedicated "return" basic block, but when
there's only a single predecessor for that block, it can be merged with
that predecessor. We can achieve that merge by only creating the return
block on demand, avoiding its creation when it's not required.

Reduces the pre-optimization size of librustc.ll created with --passes ""
by about 90k lines which equals about 4%.
2013-07-13 13:33:48 +02:00
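An illustrative sketch (invented types, not the actual trans code) of creating the return block only on demand:

```rust
// The return block is only materialized the first time a branch actually
// needs to jump to it, so functions with a single exit never pay for an
// extra block and jump.
struct BasicBlock { name: String }

struct FunctionContext { return_block: Option<BasicBlock> }

impl FunctionContext {
    // Get the shared return block, creating it on first use.
    fn return_block(&mut self) -> &BasicBlock {
        self.return_block
            .get_or_insert_with(|| BasicBlock { name: "return".to_string() })
    }
}

fn main() {
    let mut fcx = FunctionContext { return_block: None };
    assert!(fcx.return_block.is_none());
    println!("created on demand: {}", fcx.return_block().name);
    assert!(fcx.return_block.is_some());
}
```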
Michael Sullivan
82ae2fa93a Clean up Repr impls a bit so we can add generic impls for @ and ~. 2013-07-11 15:51:09 -07:00
Niko Matsakis
41efcdf299 Make all allocas named so we can see where they originate
in the generated LLVM code.
2013-07-08 13:55:10 -04:00
bors
48ad726f2a auto merge of #7605 : thestinger/rust/vec, r=Aatch
This is work continued from the now landed #7495 and #7521 pulls.

Removing the headers from unique vectors is another project, so I've separated the allocator.
2013-07-08 02:52:56 -07:00
Daniel Micay
0aedecf96b add a temporary vector_exchange_malloc lang item 2013-07-08 03:41:21 -04:00
Daniel Micay
641aec7407 remove some method resolve workarounds 2013-07-07 19:51:13 -04:00
bors
28643d4135 auto merge of #7456 : graydon/rust/better-trans-stats, r=cmr
This way, when you compile with -Z trans-stats, you'll get a per-function cost breakdown, sorted with the most expensive functions first. This should help highlight pathological code.
2013-07-07 12:53:06 -07:00
bors
52abd1cc32 auto merge of #7636 : dotdash/rust/scope_cleanup, r=graydon
Currently, scopes are tied to LLVM basic blocks. For each scope, there
are two new basic blocks, which means two extra jumps in the unoptimized
IR. These blocks aren't actually required; they only act as the boundary for cleanups.

By keeping track of the current scope within a single basic block, we
can avoid those extra blocks and jumps, shrinking the pre-optimization
IR quite considerably. For example, the IR for trans_intrinsic goes
from ~22k lines to ~16k lines, almost 30% less.

The impact on the build times of optimized builds is rather small (about
1%), but unoptimized builds are about 11% faster. The test suite for unoptimized builds runs about 15% faster in CPU time and about 7.5% faster in wall-clock time (on my i7).

Also, in some situations this helps LLVM generate better code by inlining functions that it previously considered too large, likely because of the pointless blocks/jumps that were still present at the time the inlining pass runs.

Refs #7462
2013-07-07 11:16:59 -07:00
Björn Steinbrink
e41e435851 Implement scopes independent of LLVM basic blocks
Currently, scopes are tied to LLVM basic blocks. For each scope, there
are two new basic blocks, which means two extra jumps in the unoptimized
IR. These blocks aren't actually required; they only act as the boundary for cleanups.

By keeping track of the current scope within a single basic block, we
can avoid those extra blocks and jumps, shrinking the pre-optimization
IR quite considerably. For example, the IR for trans_intrinsic goes
from ~22k lines to ~16k lines, almost 30% less.

The impact on the build times of optimized builds is rather small (about
1%), but unoptimized builds are about 11% faster. The test suite for unoptimized builds runs about 15% faster in CPU time and about 7.5% faster in wall-clock time (on my i7).

Also, in some situations this helps LLVM generate better code by inlining functions that it previously considered too large, likely because of the pointless blocks/jumps that were still present at the time the inlining pass runs.

Refs #7462
2013-07-07 14:53:57 +02:00
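A rough sketch of the idea, with invented types and method names: cleanups are tracked per scope in ordinary data structures, so leaving a scope needs no dedicated basic-block boundary:

```rust
// Cleanups are recorded per scope; popping a scope emits its cleanups in
// reverse order without ever switching to a separate "scope" basic block.
struct Cleanup { description: String }

struct Scope { cleanups: Vec<Cleanup> }

struct BlockContext { scopes: Vec<Scope> }

impl BlockContext {
    fn push_scope(&mut self) {
        self.scopes.push(Scope { cleanups: Vec::new() });
    }

    fn add_cleanup(&mut self, description: &str) {
        self.scopes
            .last_mut()
            .expect("no active scope")
            .cleanups
            .push(Cleanup { description: description.to_string() });
    }

    fn pop_scope(&mut self) {
        let scope = self.scopes.pop().expect("no active scope");
        for c in scope.cleanups.iter().rev() {
            println!("emit cleanup: {}", c.description);
        }
    }
}

fn main() {
    let mut bcx = BlockContext { scopes: Vec::new() };
    bcx.push_scope();
    bcx.add_cleanup("drop temporary");
    bcx.pop_scope();
}
```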
James Miller
7ce68dc9e1 Clean up warnings 2013-07-07 22:51:10 +12:00
Graydon Hoare
f80d6dc4c1 rustc: improve -Z trans-stats to report per-fn LLVM instruction counts and translation timing 2013-07-03 18:06:36 -07:00
Huon Wilson
cdea73cf5b Convert vec::{as_imm_buf, as_mut_buf} to methods. 2013-07-04 00:46:50 +10:00
bors
07feeb95c5 auto merge of #7487 : huonw/rust/vec-kill, r=cmr
Continuation of #7430.

I haven't removed the `map` method, since the replacement `v.iter().transform(f).collect::<~[SomeType]>()` is a little ridiculous at the moment.
2013-06-30 21:14:13 -07:00
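For reference, the replacement pattern mentioned above reads much better in present-day Rust (today's iterator API, not the 2013 one):

```rust
// The 2013 `v.iter().transform(f).collect::<~[SomeType]>()` corresponds
// roughly to today's `iter().map(f).collect::<Vec<_>>()`.
fn main() {
    let v = vec![1, 2, 3];
    let doubled: Vec<i32> = v.iter().map(|&x| x * 2).collect();
    assert_eq!(doubled, vec![2, 4, 6]);
}
```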
bors
040ac2a932 auto merge of #7495 : thestinger/rust/exchange, r=cmr
With these changes, exchange allocator headers are never initialized, read or written to. Removing the header will now just involve updating the code in trans using an offset to only do it if the type contained is managed.

The only thing blocking the removal of the last header field's initialization was `~fn`, which uses that field to store the dynamic size/types arising from captures. I temporarily switched it to a `closure_exchange_alloc` lang item (it uses the same `exchange_free`), and #7496 is filed about removing that.

Since the `exchange_free` call is now inlined all over the codebase, I don't think we should have an assert for null. It doesn't currently ever happen, but it would be fine if we started generating code that did. The `exchange_free` function also had a comment declaring that it must not fail, but a regular assert would cause a failure. I also removed the atomic counter because valgrind can already find these leaks, and we have valgrind bots now.

Note that `exchange_free` does not currently print an out-of-memory error when it aborts, because our `io` code may allocate. We could probably get away with a `#[rust_stack]` call to a `stdio` function, but it would be better to make a `write` system call.
2013-06-30 15:02:05 -07:00
Daniel Micay
4a29d6eb3f add a closure_exchange_malloc lang item
this makes the exchange allocation header completely unused, and leaves
it uninitialized
2013-06-30 16:24:47 -04:00
Huon Wilson
c0a20d2929 Remove vec::{map, mapi, zip_map} and the methods, except for .map, since this
is very common, and the replacement (.iter().transform().collect()) is very
ugly.
2013-06-30 21:59:44 +10:00
bors
6fcd8bf567 auto merge of #7468 : cmr/rust/great_renaming, r=pcwalton 2013-06-30 01:19:38 -07:00
Björn Steinbrink
765a2901d5 Avoid double indirection for the "self" arg in methods
Currently we pass all "self" arguments by reference; for the pointer
variants this means that we end up with double indirection, which causes
an unnecessary performance hit.

The fix itself is pretty straightforward and just means that "self"
needs to be handled like any other argument, except for by-value "self"
which still needs to be passed by reference. This is because
non-pointer types can't just be stuffed into the environment slot which
is used to pass "self".

What made things tricky is that there was also a bug in the typechecker
where the method map entries are created. For type impls, it stored
the base type instead of the actual self-type in the method map, e.g.
Foo instead of &Foo for &self. That worked with pass-by-reference, but
fails with pass-by-value, which needs the real type.

Code that makes use of methods seems to be about 10% faster with this
change. Also, build times are reduced by about 4%.

Fixes #4355, #4402, #5280, #4406 and #7285
2013-06-29 19:27:40 +02:00
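A minimal present-day illustration of the indirection being removed (plain Rust functions, not the 2013 trans code): passing a `&Foo` receiver by reference again amounts to `&&Foo`, a pointer to a pointer:

```rust
struct Foo { x: u64 }

// Before: the callee receives a reference to the reference and must
// dereference twice to reach the data.
fn get_x_double_indirect(this: &&Foo) -> u64 { this.x }

// After: the `&Foo` receiver is passed like any other argument, so a
// single dereference suffices.
fn get_x_direct(this: &Foo) -> u64 { this.x }

fn main() {
    let foo = Foo { x: 42 };
    let r = &foo;
    assert_eq!(get_x_double_indirect(&r), get_x_direct(r));
}
```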
Corey Richardson
1662bd371c Great renaming: propagate throughout the rest of the codebase 2013-06-29 11:20:02 -04:00
Ben Blum
89110fdf55 Use more deriving(IterBytes) in librustc. 2013-06-29 03:58:50 -04:00
Michael Sullivan
c05165bf93 Drop the impl_id field from fn_ctxt. 2013-06-28 18:09:02 -07:00
Michael Sullivan
5943118728 Drop an unused field from param_substs. 2013-06-28 18:09:02 -07:00
Michael Sullivan
0252693db2 Improve handling of trait bounds on a trait in default methods.
This is work on #7460.
It does not properly handle certain cross crate situations,
because we don't properly export vtable resolution information.
2013-06-28 18:09:02 -07:00
Michael Sullivan
649b26f7c6 Rework vtable_res to not be flattened. It is now a list of the resolutions for each param. 2013-06-28 16:12:08 -07:00
Michael Sullivan
817f98085f Make calling methods parameterized on the trait work from default methods.
This is done by adding a new notion of "vtable_self".
We do not yet properly handle super traits.

Closes #7183.
2013-06-28 16:12:08 -07:00
Patrick Walton
e015bee286 Rewrite each_path to allow performance improvements in the future.
Instead of determining paths from the path tag, we iterate through
modules' children recursively in the metadata. This will allow for
lazy external module resolution.
2013-06-28 10:44:16 -04:00
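An illustrative sketch of the recursive walk described above, with made-up `Module` data rather than the real metadata reader: paths are built while visiting each module's children instead of being read from a flat path table:

```rust
// Visit every item reachable from a module, building its path on the way.
struct Module { name: String, items: Vec<String>, children: Vec<Module> }

fn each_path(module: &Module, prefix: &str, f: &mut dyn FnMut(&str)) {
    for item in &module.items {
        f(&format!("{prefix}{}::{item}", module.name));
    }
    for child in &module.children {
        each_path(child, &format!("{prefix}{}::", module.name), f);
    }
}

fn main() {
    let krate = Module {
        name: "std".to_string(),
        items: vec!["mem".to_string()],
        children: vec![Module {
            name: "vec".to_string(),
            items: vec!["Vec".to_string()],
            children: Vec::new(),
        }],
    };
    each_path(&krate, "", &mut |p| println!("{p}"));
}
```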
Patrick Walton
89eb995195 librustc: Fix merge fallout. 2013-06-28 10:44:16 -04:00
Patrick Walton
03ab6351cc librustc: Rewrite reachability and forbid duplicate methods in type implementations.
This should allow fewer symbols to be exported.
2013-06-28 10:44:16 -04:00
Patrick Walton
a1531ed946 librustc: Remove the broken overloaded assign-ops from the language.
They evaluated the receiver twice. They should be added back with
`AddAssign`, `SubAssign`, etc., traits.
2013-06-28 10:44:16 -04:00
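The `AddAssign` trait anticipated here did eventually land in Rust; a small example in present-day Rust of the overloaded `+=` it provides, where the receiver is evaluated exactly once as a place expression:

```rust
use std::ops::AddAssign;

#[derive(Debug, PartialEq)]
struct Counter { value: i64 }

impl AddAssign<i64> for Counter {
    // `self` is a mutable borrow of the receiver, so the receiver place
    // is evaluated exactly once.
    fn add_assign(&mut self, rhs: i64) {
        self.value += rhs;
    }
}

fn main() {
    let mut c = Counter { value: 1 };
    c += 41;
    assert_eq!(c, Counter { value: 42 });
}
```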
James Miller
a897a9ab9f Remove useless namegen thunk 2013-06-28 18:00:20 +12:00
bors
63afb8ccc8 auto merge of #7430 : huonw/rust/vec-kill, r=thestinger 2013-06-27 15:01:58 -07:00
Philipp Brüschweiler
7295a6da92 Remove many shared pointers
Mostly just low-hanging fruit, i.e. function arguments that were @ even
though & would work just as well.

Reduces librustc.so size by 200k when compiling without -O, and by 100k when compiling with -O.
2013-06-27 15:06:19 +02:00
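A present-day analogue of the change (with an invented `Session` type): a function that only reads its argument can take `&T` instead of a shared pointer (`@T` then, roughly `Rc<T>` now):

```rust
use std::rc::Rc;

struct Session { verbose: bool }

// Before: callers must hand over (and bump) a shared pointer.
fn log_shared(sess: Rc<Session>, msg: &str) {
    if sess.verbose { println!("{msg}"); }
}

// After: a plain borrow works just as well for read-only access.
fn log_borrowed(sess: &Session, msg: &str) {
    if sess.verbose { println!("{msg}"); }
}

fn main() {
    let sess = Rc::new(Session { verbose: true });
    log_shared(Rc::clone(&sess), "via Rc");
    log_borrowed(&sess, "via reference");
}
```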
Huon Wilson
d0512b1055 Convert vec::[mut_]slice to methods, remove vec::const_slice. 2013-06-27 22:36:09 +10:00
Philipp Brüschweiler
2234f61038 Remove the last traces of shapes 2013-06-26 18:08:23 -04:00
Luqman Aden
ca2966c6d0 Change finalize -> drop. 2013-06-25 21:14:39 -04:00
Daniel Micay
d2e9912aea vec: remove BaseIter implementation
I removed the `static-method-test.rs` test because it was heavily based
on `BaseIter` and there are plenty of other more complex uses of static
methods anyway.
2013-06-23 02:05:20 -04:00
James Miller
048ed1486f Move count-llvm-insn code into task-local storage 2013-06-22 12:38:40 +12:00
James Miller
0b0c756c9c Fix warnings in trans 2013-06-22 12:38:40 +12:00
James Miller
81cf72c264 Finish up Type refactoring 2013-06-22 12:35:35 +12:00
James Miller
57a75374d6 Initial Type Refactoring done 2013-06-22 12:32:11 +12:00
James Miller
fd83b92b59 More Type refactorings 2013-06-22 12:26:33 +12:00
James Miller
b4b2cbb299 Change calls for TypeName stuff to methods 2013-06-22 12:24:20 +12:00
Daniel Micay
883c966d5c vec: replace position with iter().position_ 2013-06-21 03:23:59 -04:00
Daniel Micay
49c74524e2 vec: rm old_iter implementations, except BaseIter
The removed test for issue #2611 is well covered by the `std::iterator`
module itself.

This adds the `count` method to `IteratorUtil` to replace `EqIter`.
2013-06-21 03:20:22 -04:00
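The counting idiom referred to here, in present-day Rust: occurrences of a value are counted through the iterator adapters rather than a separate trait:

```rust
fn main() {
    let v = [1, 2, 1, 3, 1];
    // Count how many elements equal 1.
    let ones = v.iter().filter(|&&x| x == 1).count();
    assert_eq!(ones, 3);
}
```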
Björn Steinbrink
642cd467c6 Avoid quadratic growth of cleanup blocks
Currently, cleanup blocks are only reused when there are nested scopes: the
child scope's cleanup block will terminate with a jump to the parent
scope's cleanup block. But within a single scope, adding or revoking
any cleanup will force a fresh cleanup block. This means quadratic
growth with the number of allocations in a scope, because each
allocation needs a landing pad.

Instead of forcing a fresh cleanup block, we can keep a list of chained cleanup blocks that form a prefix of the currently required cleanups.
That way, the next cleanup block only has to handle newly added
cleanups. And by keeping the whole list instead of just the latest
block, we can also handle revocations more efficiently, by only
dropping those blocks that are no longer required, instead of all of
them.

Reduces the size of librustc by about 5% and the time required to build
it by about 10%.
2013-06-16 17:52:46 +02:00
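A sketch of the bookkeeping strategy the message describes, with invented types rather than the real trans code: already-built cleanup blocks are kept as a prefix chain, so adding a cleanup only appends work and revoking one only invalidates the suffix that depended on it:

```rust
struct CleanupBlock {
    // How many of the scope's cleanups this block handles.
    covers: usize,
}

struct ScopeCleanups {
    cleanups: Vec<String>,
    // A prefix chain: blocks[i] covers cleanups[..blocks[i].covers].
    blocks: Vec<CleanupBlock>,
}

impl ScopeCleanups {
    fn add_cleanup(&mut self, name: &str) {
        self.cleanups.push(name.to_string());
        // Existing blocks still form a valid prefix; a block covering the
        // new cleanup is only built on demand (not modeled here).
    }

    fn revoke_cleanup(&mut self, name: &str) {
        if let Some(pos) = self.cleanups.iter().position(|c| c == name) {
            self.cleanups.remove(pos);
            // Drop only the blocks whose prefix no longer matches, instead
            // of discarding every cleanup block for the scope.
            self.blocks.retain(|b| b.covers <= pos);
        }
    }
}

fn main() {
    let mut scope = ScopeCleanups { cleanups: Vec::new(), blocks: Vec::new() };
    scope.add_cleanup("free(a)");
    scope.add_cleanup("free(b)");
    scope.blocks.push(CleanupBlock { covers: 2 });
    scope.revoke_cleanup("free(a)");
    assert!(scope.blocks.is_empty());
}
```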