Commit Graph

21085 Commits

Author SHA1 Message Date
Michael Woerister
983cc777c5 debuginfo: Add some tests for visibility scopes within closures. 2013-08-13 11:13:49 +02:00
Michael Woerister
9c7d9eb6fd debuginfo: Add support for argument shadowing. 2013-08-13 11:13:49 +02:00
Michael Woerister
33e7d95e9c debuginfo: Implemented proper handling of lexical scopes and variable shadowing. 2013-08-13 11:13:49 +02:00
bors
4601ea65f8 auto merge of #8487 : brson/rust/local-opts, r=brson
I did this once but accidentally undid it in a later patch.
2013-08-12 23:56:26 -07:00
Brian Anderson
95badabaaf std: Re-optimize tls access on local allocation path
I did this once but accidentally undid it in a later patch.
2013-08-12 22:30:32 -07:00
bors
44675ac6af auto merge of #8476 : thestinger/rust/snapshot, r=brson 2013-08-12 20:29:22 -07:00
bors
d9492d72ba auto merge of #8450 : alexcrichton/rust/nopt-changes, r=graydon
Since the new runtime landed, the *-nopt builders have increased cycle time by roughly an hour. I have a feeling that this is because the entire runtime is in Rust and it's not being optimized at all. In the past, with an optimized C++ runtime, things appear to have run faster.

This adds the ability to disable optimizations in tests only, not for the entire compiler. This means that the entire compiler and associated libraries will be built with optimizations, but the tests themselves will be built and run without optimizations.

This isn't quite as good a guarantee as disabling optimizations everywhere, but hopefully it'll improve cycle time for the *-nopt builds and move the queue along faster.
2013-08-12 17:59:19 -07:00
bors
59da4e0bc9 auto merge of #8419 : cmr/rust/fix-rtdebug, r=brson
It now actually does logging, and is compiled out when `--cfg rtdebug` is not
given to the libstd build, which it isn't by default. This makes the rt
benchmarks 18-50% faster.
2013-08-12 15:28:49 -07:00
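For context, a minimal sketch of the compile-out idea in today's Rust (the macro below is illustrative, not the actual libstd rtdebug machinery): the logging body is gated on a custom cfg flag, so a build made without `--cfg rtdebug` can drop it entirely.

```
// Illustrative sketch only: a debug-logging macro gated on a custom cfg
// flag. `cfg!(rtdebug)` is false unless the crate is compiled with
// `rustc --cfg rtdebug ...`, so the branch is optimized away otherwise.
macro_rules! rtdebug {
    ($($arg:tt)*) => {
        if cfg!(rtdebug) {
            eprintln!($($arg)*);
        }
    };
}

fn main() {
    // Prints only in builds made with `--cfg rtdebug`.
    rtdebug!("scheduler event: task {} resumed", 42);
}
```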
Daniel Micay
0cb0ef2ca5 fix build with the new snapshot compiler 2013-08-12 17:37:46 -04:00
Daniel Micay
8b502d60ab register snapshots 2013-08-12 17:37:42 -04:00
bors
35040275b3 auto merge of #8400 : blake2-ppc/rust/seq-ord, r=cmr
Use Eq + Ord for lexicographical ordering of sequences.

For each of <, <=, >= or > as R, use::

    [x, ..xs] R [y, ..ys]  =  if x != y { x R y } else { xs R ys }

Previous code using `a < b` and then `!(b < a)` for short-circuiting
fails on cases such as  [1.0, 2.0] < [0.0/0.0, 3.0], where the first
element was effectively considered equal.

Containers like &[T] also implemented only one comparison operator, `<`,
and derived the other comparison results from it. This isn't correct for
Ord either.

Implement functions in `std::iterator::order::{lt,le,gt,ge,equal,cmp}` that all
iterable containers can use for lexical order.

We also revisit tuple ordering, which has the same problem and gets the
same solution (but a differing implementation).
2013-08-12 11:53:18 -07:00
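A rough sketch of the rule quoted above (the function below is illustrative, not the actual `std::iterator::order` code): each element pair is compared directly with the relation instead of deriving it from `<`, which is exactly what goes wrong when NaN makes both `a < b` and `b < a` false.

```
// Sketch only: lexicographic `lt` over two slices, deciding on the first
// unequal element pair rather than deriving the result from `<` alone.
fn lexical_lt<T: PartialOrd>(a: &[T], b: &[T]) -> bool {
    for (x, y) in a.iter().zip(b.iter()) {
        if x != y {
            return x < y; // first unequal pair decides the ordering
        }
    }
    // All compared pairs were equal: the shorter sequence is the lesser.
    a.len() < b.len()
}

fn main() {
    let nan = f64::NAN;
    // With NaN, `1.0 < NaN` and `NaN < 1.0` are both false, so the old
    // `!(b < a)` trick treated the first elements as equal. Comparing
    // directly gives false for both orderings instead.
    assert!(!lexical_lt(&[1.0, 2.0], &[nan, 3.0]));
    assert!(!lexical_lt(&[nan, 3.0], &[1.0, 2.0]));
    assert!(lexical_lt(&[1.0, 2.0], &[1.0, 3.0]));
}
```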
bors
59434a1b8c auto merge of #8429 : catamorphism/rust/rustpkg-docs, r=catamorphism 2013-08-12 08:17:14 -07:00
bors
ecfc9a8223 auto merge of #8428 : blake2-ppc/rust/peekable-iterators, r=thestinger
Peekable changes are by @SimonSapin; the original PR is #8396
2013-08-12 04:29:11 -07:00
bors
de48274c50 auto merge of #8418 : pnkfelix/rust/fsk-issue3192-improve-parse-error-for-empty-struct-init, r=pcwalton,me
Fix #3192.  r? anyone

There are 4 new tests to check different scenarios for
what the parse context is at the time of recovery, because our
compile-fail infrastructure does not appear to handle verifying
error-recovery situations.

Differentiate between unit-like struct definition item and unit-like
struct construction in the error message.

----

More generally, this outlines a broader strategy for parse error
recovery: by committing to an expression/statement at set points in
the parser, we can then do some look-ahead to catch common mistakes
and skip over them.

One detail about this strategy is that you want to avoid emitting the
"helpful" message unless the input is reasonably close to the case of
interest.  (E.g. do not warn about a potential unit struct for an
input of the form `let hmm = do foo { } { };`)

To accomplish this, I added (partial) last_token tracking; it is used
for `commit_stmt` support.

The check_for_erroneous_unit_struct_expecting fn returns a bool to
signal whether it "made progress"; this is currently unused, but is meant
to allow composing several such recovery checks together in a loop.
2013-08-12 00:32:11 -07:00
bors
1785841a5b auto merge of #8410 : luqmana/rust/mcpu, r=sanxiyn
Adds `--target-cpu` flag which lets you choose a more specific target cpu instead of just passing the default, `generic`. It's more or less akin to `-mcpu`/`-mtune` in clang/gcc.
2013-08-11 20:50:14 -07:00
bors
0679436381 auto merge of #8427 : brson/rust/rustc-stack, r=thestinger
A lot of people are hitting stack overflows in rustc. This will make it
easier to experiment with stack size.
2013-08-11 17:38:09 -07:00
bors
b285f1e6c9 auto merge of #8455 : nikomatsakis/rust/issue-5762-objects-dralston-d, r=graydon
Fix #5762 and various other aspects of object invocation.

r? @graydon
2013-08-11 14:17:09 -07:00
Niko Matsakis
7343478d67 Convert from transform to map 2013-08-11 14:56:43 -04:00
Niko Matsakis
b402e343e4 tests: Add new tests for borrowck/objects and update some existing tests 2013-08-11 14:01:23 -04:00
Niko Matsakis
df016dc4bf Update type visitor to use &Visitor and not @Visitor 2013-08-11 14:01:23 -04:00
Niko Matsakis
66b8ad5867 borrowck: Integrate AutoBorrowObj into borrowck / mem_categorization
Also clean up the treatment of mutability in mem_categorization, which still
included the concept of interior mutability. At some point, we should
refactor the types to exclude the possibility of interior mutability rather
than just ignoring the mutability value in those cases.
2013-08-11 14:01:23 -04:00
Niko Matsakis
1bceb98084 typeck: Modify method resolution to use new object adjustments, and
to favor inherent methods over extension methods.

The reason to favor inherent methods is that otherwise an impl
like

    impl Foo for @Foo { fn method(&self) { self.method() } }

causes infinite recursion.  The current change to favor inherent methods is
rather hacky; the method resolution code is in need of a refactoring.
2013-08-11 14:01:23 -04:00
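The `@Foo` object types above are 2013 Rust; a rough modern analogue of the inherent-over-trait preference (illustrative names only), showing why favoring the inherent method avoids the recursion:

```
// Illustrative only: when an inherent method and a trait method share a
// name, method resolution picks the inherent one, so the trait impl can
// delegate to it without calling itself recursively.
trait Describe {
    fn describe(&self) -> String;
}

struct Widget;

impl Widget {
    // Inherent method: preferred by method resolution.
    fn describe(&self) -> String {
        "widget (inherent)".to_string()
    }
}

impl Describe for Widget {
    fn describe(&self) -> String {
        // Resolves to the inherent method above, not to this trait method,
        // so there is no infinite recursion.
        self.describe()
    }
}

fn main() {
    let w = Widget;
    println!("{}", w.describe());            // inherent method wins
    println!("{}", Describe::describe(&w));  // trait method, reached explicitly
}
```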
Niko Matsakis
006c6b6be4 trans: Rely on new AutoBorrowObj adjustment to match up object receivers
Note: some portions of this commit written by @Sodel-the-Vociferous
(Daniel Ralston)
2013-08-11 14:01:19 -04:00
Niko Matsakis
6f319812d6 ty: Add (but do not yet use) AutoBorrowObject option to adjustments
Note: some portions of this commit written by @Sodel-the-Vociferous
(Daniel Ralston)
2013-08-11 14:01:19 -04:00
Niko Matsakis
38357ebef4 typeck/check/method: Remove pub from most methods 2013-08-11 13:59:46 -04:00
Niko Matsakis
38b2e2980e Misc small cleanups 2013-08-11 13:59:46 -04:00
Niko Matsakis
6fe59bf877 Add a field borrow_offset to the type descriptor indicating
the amount by which a T* pointer must be adjusted to reach the contents
of the box. For `~T` types, this requires knowing the type `T`,
which is not known in the case of objects.
2013-08-11 13:59:45 -04:00
bors
63c62bea3a auto merge of #8420 : blake2-ppc/rust/shrink-token, r=cmr
`enum Token` was 192 bytes (64-bit), as pointed out by pnkfelix; the only
bloating variant being `INTERPOLATED(nonterminal)`.

Updating `enum nonterminal` to use ~ where variants included big types
shrank size_of(Token) to 32 bytes (64-bit).

I am unsure if the `nt_ident` variant should have an indirection, with
ast::ident being only 16 bytes (64-bit), but without this, enum Token
would be 40 bytes.

A dumb benchmark says that compilation time is unchanged, while peak
memory usage for compiling std.rs is down 3%.

Before::

    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc --cfg stage1 src/libstd/std.rs
    19.00user 0.39system 0:19.41elapsed 99%CPU (0avgtext+0avgdata 627820maxresident)k
    0inputs+28896outputs (0major+228665minor)pagefaults 0swaps
    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc -O --cfg stage1 src/libstd/std.rs
    31.64user 0.34system 0:32.02elapsed 99%CPU (0avgtext+0avgdata 629876maxresident)k
    0inputs+22432outputs (0major+229411minor)pagefaults 0swaps

After::

    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc --cfg stage1 src/libstd/std.rs
    19.07user 0.45system 0:19.55elapsed 99%CPU (0avgtext+0avgdata 609384maxresident)k
    0inputs+28896outputs (0major+221997minor)pagefaults 0swaps

    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc -O --cfg stage1 src/libstd/std.rs
    31.90user 0.34system 0:32.28elapsed 99%CPU (0avgtext+0avgdata 612080maxresident)k
    0inputs+22432outputs (0major+223726minor)pagefaults 0swaps
2013-08-11 10:50:10 -07:00
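The underlying technique in today's terms (hypothetical enum, not the real Token/nonterminal definitions): an enum is as large as its largest variant plus the discriminant, so boxing the one big payload (the role `~` played in 2013) shrinks every value of the type.

```
use std::mem::size_of;

// Hypothetical stand-in for the large payload carried by one variant.
#[allow(dead_code)]
struct BigPayload {
    data: [u64; 20], // 160 bytes
}

// Without indirection the enum must reserve room for BigPayload inline.
#[allow(dead_code)]
enum FatToken {
    Ident(u64),
    Interpolated(BigPayload),
}

// Putting the large payload behind a pointer keeps the enum small.
#[allow(dead_code)]
enum SlimToken {
    Ident(u64),
    Interpolated(Box<BigPayload>),
}

fn main() {
    println!("FatToken:  {} bytes", size_of::<FatToken>());  // ~168 on 64-bit
    println!("SlimToken: {} bytes", size_of::<SlimToken>()); // ~16 on 64-bit
}
```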
Niko Matsakis
3aefb9649d librustc: Convert from @Object to @mut Object as needed 2013-08-11 13:26:59 -04:00
Niko Matsakis
96254b4090 libsyntax: Update from @Object to @mut Object as required 2013-08-11 13:23:40 -04:00
bors
f08851e31a auto merge of #8421 : alexcrichton/rust/unnamed-addr, r=thestinger
This can be applied to statics; it indicates that LLVM will attempt to
merge the constant in .data with other statics.

I have preliminarily applied this to all of the statics generated by the new
`ifmt!` syntax extension. I compiled a file with 1000 calls to `ifmt!` and a
separate file with 1000 calls to `fmt!` to compare the sizes, and the results
were:

```
fmt           310k
ifmt (before) 529k
ifmt (after)  202k
```

This now means that ifmt! is both faster and smaller than fmt!, yay!
2013-08-11 07:29:07 -07:00
bors
45da2a5f48 auto merge of #8412 : thestinger/rust/vector-add, r=alexcrichton 2013-08-11 04:11:08 -07:00
bors
cac9affc20 auto merge of #8408 : thestinger/rust/checked, r=Aatch 2013-08-11 00:47:07 -07:00
Alex Crichton
88b89f8476 Allow disabling optimizations in tests only 2013-08-11 00:29:45 -07:00
Daniel Micay
83b3a0eaf1 fix unused imports 2013-08-11 03:17:01 -04:00
Daniel Micay
774c9aebb3 move strdup_uniq lang item to std::str 2013-08-11 03:14:35 -04:00
Daniel Micay
768c9a43ab str: optimize with_capacity
before:

test bench_with_capacity ... bench: 104 ns/iter (+/- 4)

after:

test bench_with_capacity ... bench: 56 ns/iter (+/- 1)
2013-08-11 03:14:35 -04:00
Daniel Micay
2afed31ecc vec: optimize the Add implementation
before:

test add ... bench: 164 ns/iter (+/- 1)

after:

test add ... bench: 113 ns/iter (+/- 2)
2013-08-11 03:14:35 -04:00
Daniel Micay
7db605cd15 disable 64-bit CheckedMul on 32-bit
code generation problem reported as issue #8449
2013-08-11 02:58:52 -04:00
Daniel Micay
076b91f8ad add intrinsics for checked overflow add/sub/mul 2013-08-11 02:51:20 -04:00
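At the library level, such overflow-detecting intrinsics are what checked arithmetic is built on; a small sketch in today's API (the 2013 intrinsic names themselves are not shown): the checked operations return None instead of wrapping when the result does not fit.

```
fn main() {
    // Checked arithmetic: None signals that the mathematical result
    // does not fit in the type.
    assert_eq!(i32::MAX.checked_add(1), None);
    assert_eq!(100_u8.checked_mul(3), None);        // 300 overflows u8
    assert_eq!(7_i64.checked_sub(10), Some(-3));    // no overflow
    assert_eq!(2_000_000_000_i32.checked_mul(2), None);
    println!("all checked-arithmetic assertions passed");
}
```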
blake2-ppc
5cfad6fbae syntax: Shrink enum Token and enum nonterminal
`enum Token` was 192 bytes (64-bit), as pointed out by pnkfelix; the only
bloating variant being `INTERPOLATED(nonterminal)`.

Updating `enum nonterminal` to use ~ where variants included big types
shrank size_of(Token) to 32 bytes (64-bit).

I am unsure if the `nt_ident` variant should have an indirection, with
ast::ident being only 16 bytes (64-bit), but without this, enum Token
would be 40 bytes.

A dumb benchmark says that compilation time is unchanged, while peak
memory usage for compiling std.rs is down 3%.

Before::

    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc --cfg stage1 src/libstd/std.rs
    19.00user 0.39system 0:19.41elapsed 99%CPU (0avgtext+0avgdata 627820maxresident)k
    0inputs+28896outputs (0major+228665minor)pagefaults 0swaps
    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc -O --cfg stage1 src/libstd/std.rs
    31.64user 0.34system 0:32.02elapsed 99%CPU (0avgtext+0avgdata 629876maxresident)k
    0inputs+22432outputs (0major+229411minor)pagefaults 0swaps

After::

    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc --cfg stage1 src/libstd/std.rs
    19.07user 0.45system 0:19.55elapsed 99%CPU (0avgtext+0avgdata 609384maxresident)k
    0inputs+28896outputs (0major+221997minor)pagefaults 0swaps

    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc -O --cfg stage1 src/libstd/std.rs
    31.90user 0.34system 0:32.28elapsed 99%CPU (0avgtext+0avgdata 612080maxresident)k
    0inputs+22432outputs (0major+223726minor)pagefaults 0swaps
2013-08-11 06:56:07 +02:00
bors
eebcff1493 auto merge of #8404 : stepancheg/rust/zero-unit-inline, r=alexcrichton
Follow-up to #8155
2013-08-10 19:53:06 -07:00
blake2-ppc
0b35e3b5a3 extra::treemap: Use IteratorUtil::peekable
Replace the previous equivalent iterator adaptor with .peekable().
Refactor the set operation iterators so that they are easier to read.
2013-08-11 02:18:21 +02:00
Simon Sapin
deb7b67aa1 Add a "peekable" iterator adaptor, with a peek() method that returns the next element. 2013-08-11 02:18:21 +02:00
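A short usage sketch in today's std, where `peek()` returns a reference to the next element without advancing the iterator:

```
fn main() {
    let data = [1, 2, 3];
    let mut it = data.iter().peekable();

    // peek() looks at the next element without consuming it.
    assert_eq!(it.peek(), Some(&&1));
    assert_eq!(it.next(), Some(&1));

    // Peeking is handy for one-element look-ahead, e.g. "is there more?"
    while let Some(x) = it.next() {
        if it.peek().is_some() {
            print!("{}, ", x);
        } else {
            println!("{}", x); // last element: no trailing separator
        }
    }
}
```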
blake2-ppc
b44f423dd4 std::iterator: Rename .peek() to .inspect() 2013-08-11 01:57:20 +02:00
bors
bf809768ee auto merge of #8444 : erickt/rust/rollup, r=cmr
This merges these PRs together:

#8430: r=thestinger 
#8370: r=thestinger
#8386: r=bstrie
#8388: r=thestinger
#8390: r=graydon
#8394: r=graydon
#8402: r=thestinger
#8403: r=catamorphism
2013-08-10 16:32:18 -07:00
Luqman Aden
fcfd6e7c79 rustc: Add --target-cpu flag to select a more specific processor instead of the default, 'generic'. 2013-08-10 17:03:43 -04:00
bors
8b9e1ce75a auto merge of #8430 : erickt/rust/cleanup-iterators, r=erickt
This PR does a bunch of cleaning up of various APIs. The major one is that it merges `Iterator` and `IteratorUtil`, and renames functions like `transform` into `map`. I also merged `DoubleEndedIterator` and `DoubleEndedIteratorUtil`, and renamed various .consume* functions to .move_iter(). This helps to implement part of #7887.
2013-08-10 13:17:19 -07:00
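Roughly the renamed API in its present-day form (`map` for the old `transform`; the `.consume*`/`.move_iter()` family eventually became `into_iter()`):

```
fn main() {
    let words = vec!["iterator", "cleanup"];

    // `map` is the method this PR renames `transform` to.
    let lengths: Vec<usize> = words.iter().map(|w| w.len()).collect();
    assert_eq!(lengths, vec![8, 7]);

    // Consuming iteration, today spelled `into_iter()`.
    for w in words.into_iter() {
        println!("{}", w);
    }
}
```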
Erick Tryzelaar
20953bb1fb Merge branch 'match' of https://github.com/msullivan/rust into rollup 2013-08-10 13:04:16 -07:00
Erick Tryzelaar
f007a46d37 Merge branch 'master' of https://github.com/MAnyKey/rust into rollup 2013-08-10 13:03:34 -07:00