Since the new runtime landed, the *-nopt builders' cycle time has increased by roughly an hour. I suspect this is because the entire runtime is now in Rust and is built without optimizations; in the past, with an optimized C++ runtime, things appear to have run faster.
This adds the ability to disable optimizations for the tests only, not for the entire compiler. The compiler and its associated libraries will still be built with optimizations, but the tests themselves will be built and run without them.
This isn't as strong a guarantee as disabling optimizations everywhere, but it should improve cycle time for the *-nopt builders and move the queue along faster.
It now actually does logging, and is compiled out when `--cfg rtdebug` is not
given to the libstd build, which it isn't by default. This makes the rt
benchmarks 18-50% faster.
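For illustration, a minimal sketch (modern Rust; the macro name and message are ours, not the runtime's actual code) of the compile-out pattern:

```
// cfg-gated logging: without --cfg rtdebug the macro expands to a no-op,
// so the logging calls vanish from the compiled code entirely.
#[cfg(rtdebug)]
macro_rules! rtdebug {
    ($($arg:tt)*) => { println!($($arg)*) };
}

#[cfg(not(rtdebug))]
macro_rules! rtdebug {
    ($($arg:tt)*) => { () };
}

fn main() {
    // Prints only when built with `rustc --cfg rtdebug`.
    rtdebug!("scheduler: spawning task");
}
```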
Use Eq + Ord for lexicographical ordering of sequences.
For each of `<`, `<=`, `>=`, or `>` as R, use:

    [x, ..xs] R [y, ..ys] = if x != y { x R y } else { xs R ys }
Previous code using `a < b` and then `!(b < a)` for short-circuiting
fails on cases such as [1.0, 2.0] < [0.0/0.0, 3.0], where the first
element was effectively considered equal.
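Concretely (a tiny modern-Rust demonstration of the float semantics):

```
fn main() {
    let nan = 0.0f64 / 0.0;
    // NaN is unordered: neither element is less than the other...
    assert!(!(1.0 < nan) && !(nan < 1.0));
    // ...yet they are not equal, so "neither is less" must not be
    // read as equality, which is what the old short-circuit did.
    assert!(1.0 != nan);
}
```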
Containers like `&[T]` also implemented only one comparison operator, `<`,
and derived the other comparison results from it. That isn't correct for
`Ord` either.
Implement functions in `std::iterator::order::{lt,le,gt,ge,equal,cmp}` that all
iterable containers can use for lexical order.
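For illustration, a minimal sketch of such a lexicographical `lt` over two iterators (modern Rust; the function name is ours, not the `std::iterator::order` implementation):

```
fn lex_lt<T: PartialOrd, I, J>(mut xs: I, mut ys: J) -> bool
where
    I: Iterator<Item = T>,
    J: Iterator<Item = T>,
{
    loop {
        match (xs.next(), ys.next()) {
            // The first unequal pair decides the ordering.
            (Some(x), Some(y)) if x != y => return x < y,
            (Some(_), Some(_)) => {}        // equal so far, keep going
            (None, Some(_)) => return true, // strict prefix is less
            _ => return false,              // equal, or ys ended first
        }
    }
}

fn main() {
    assert!(lex_lt([1, 2].into_iter(), [1, 3].into_iter()));
    // With the NaN example above: the first pair is unequal, and
    // 1.0 < NaN is false, so the whole comparison is correctly false.
    assert!(!lex_lt([1.0, 2.0].into_iter(), [0.0f64 / 0.0, 3.0].into_iter()));
}
```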
Tuple ordering had the same problem and gets the same solution
(though with a differing implementation).
Fixes #3192. r? anyone
There are 4 new tests checking different scenarios for what the parse
context is at the time of recovery, because our compile-fail
infrastructure does not appear to handle verifying error-recovery
situations.
Differentiate between unit-like struct definition item and unit-like
struct construction in the error message.
----
More generally, this outlines a strategy for parse error
recovery: by committing to an expression/statement at set points in
the parser, we can then do some look-ahead to catch common mistakes
and skip over them.
One detail about this strategy is that you want to avoid emitting the
"helpful" message unless the input is reasonably close to the case of
interest. (E.g. do not warn about a potential unit struct for an
input of the form `let hmm = do foo { } { };`)
To accomplish this, I added (partial) `last_token` tracking, which is
used for `commit_stmt` support.
The `check_for_erroneous_unit_struct_expecting` fn returns a bool to
signal whether it "made progress". This is currently unused; it is meant
to allow composing several such recovery checks together in a loop.
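A hypothetical, self-contained sketch of that composition (the names and token handling are illustrative, not rustc's actual parser API):

```
// Each recovery check consumes the erroneous input it recognizes and
// reports whether it "made progress", so checks compose in a loop.
struct Parser {
    pos: usize,
    tokens: Vec<&'static str>,
}

impl Parser {
    fn check_for_erroneous_unit_struct(&mut self) -> bool {
        // Pretend a stray "{ }" here is the mistake we know how to skip.
        if self.tokens.get(self.pos) == Some(&"{ }") {
            self.pos += 1; // skip it: progress was made
            true
        } else {
            false
        }
    }
}

fn recover(p: &mut Parser) {
    // Run all known checks until a full pass makes no progress.
    while p.check_for_erroneous_unit_struct() /* || other_check(p) ... */ {}
}

fn main() {
    let mut p = Parser { pos: 0, tokens: vec!["{ }", "{ }", ";"] };
    recover(&mut p);
    assert_eq!(p.pos, 2); // both erroneous blocks were skipped
}
```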
Adds `--target-cpu` flag which lets you choose a more specific target cpu instead of just passing the default, `generic`. It's more or less akin to `-mcpu`/`-mtune` in clang/gcc.
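For example (the CPU name here is illustrative; valid names are whatever the LLVM backend accepts, much like clang's `-mcpu` values):

```
$ rustc --target-cpu corei7 -O foo.rs
```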
The type of the result of option_env! was not fully specified in the
None case, leading to type check failures in the case where the variable
was not defined (e.g. option_env!("FOO").is_none()).
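In today's Rust the expansion is fully typed as `Option<&'static str>`, so expressions like the one above type-check; a minimal sketch (the variable name is hypothetical and deliberately unset):

```
fn main() {
    // The expansion itself carries the type Option<&'static str>, so this
    // type-checks even though the variable is undefined at build time.
    assert!(option_env!("SOME_UNSET_VARIABLE").is_none());
}
```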
Also clean up the treatment of mutability in mem_categorization, which still
included the concept of interior mutability. At some point, we should
refactor the types to exclude the possibility of interior mutability rather
than just ignoring the mutability value in those cases.
Change method resolution to favor inherent methods over extension methods.
The reason to favor inherent methods is that otherwise an impl
like

    impl Foo for @Foo { fn method(&self) { self.method() } }
causes infinite recursion. The current change to favor inherent methods is
rather hacky; the method resolution code is in need of a refactoring.
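A minimal sketch of the hazard in modern Rust, with `Box<dyn Foo>` standing in for the old `@Foo`:

```
trait Foo {
    fn method(&self);
}

struct Concrete;

impl Foo for Concrete {
    fn method(&self) {
        println!("Concrete::method");
    }
}

impl Foo for Box<dyn Foo> {
    fn method(&self) {
        // Explicitly deref to dispatch to the underlying object; a plain
        // `self.method()` would resolve to this impl and recurse forever.
        (**self).method()
    }
}

fn main() {
    let b: Box<dyn Foo> = Box::new(Concrete);
    b.method(); // prints "Concrete::method"
}
```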
what amount a T* pointer must be adjusted to reach the contents
of the box. For `~T` types, this requires knowing the type `T`,
which is not known in the case of objects.
`enum Token` was 192 bytes (64-bit), as pointed out by pnkfelix; the only
bloating variant was `INTERPOLATED(nonterminal)`.
Updating `enum nonterminal` to use `~` where variants included big types
shrank `size_of(Token)` to 32 bytes (64-bit).
I am unsure if the `nt_ident` variant should have an indirection, with
ast::ident being only 16 bytes (64-bit), but without this, enum Token
would be 40 bytes.
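The same effect in modern Rust (with `Box` in place of `~`; the types and sizes here illustrate the mechanism, not `enum Token` itself):

```
use std::mem::size_of;

#[allow(dead_code)]
enum Fat {
    Small(u8),
    Big([u64; 23]),      // the inline payload bloats every value
}

#[allow(dead_code)]
enum Slim {
    Small(u8),
    Big(Box<[u64; 23]>), // one word of indirection instead
}

fn main() {
    // An enum is as large as its largest variant (plus discriminant),
    // so boxing the big payload shrinks the whole type: typically
    // 192 vs 16 bytes on x86_64.
    assert!(size_of::<Slim>() < size_of::<Fat>());
    println!("Fat: {} bytes, Slim: {} bytes",
             size_of::<Fat>(), size_of::<Slim>());
}
```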
A dumb benchmark says that compilation time is unchanged, while peak
memory usage for compiling std.rs is down 3%.
Before:

    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc --cfg stage1 src/libstd/std.rs
    19.00user 0.39system 0:19.41elapsed 99%CPU (0avgtext+0avgdata 627820maxresident)k
    0inputs+28896outputs (0major+228665minor)pagefaults 0swaps
    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc -O --cfg stage1 src/libstd/std.rs
    31.64user 0.34system 0:32.02elapsed 99%CPU (0avgtext+0avgdata 629876maxresident)k
    0inputs+22432outputs (0major+229411minor)pagefaults 0swaps

After:

    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc --cfg stage1 src/libstd/std.rs
    19.07user 0.45system 0:19.55elapsed 99%CPU (0avgtext+0avgdata 609384maxresident)k
    0inputs+28896outputs (0major+221997minor)pagefaults 0swaps
    $ time ./x86_64-unknown-linux-gnu/stage1/bin/rustc -O --cfg stage1 src/libstd/std.rs
    31.90user 0.34system 0:32.28elapsed 99%CPU (0avgtext+0avgdata 612080maxresident)k
    0inputs+22432outputs (0major+223726minor)pagefaults 0swaps
This can be applied to statics, and it indicates that LLVM will attempt to
merge the constant in `.data` with other statics.
I have preliminarily applied this to all of the statics generated by the new
`ifmt!` syntax extension. I compiled a file with 1000 calls to `ifmt!` and a
separate file with 1000 calls to `fmt!` to compare the sizes, and the results
were:
```
fmt 310k
ifmt (before) 529k
ifmt (after) 202k
```
This now means that ifmt! is both faster and smaller than fmt!, yay!