The only changes to the default passes are that O1 no longer runs the inline pass, just always-inline with lifetime intrinsics, and O2 now uses an inline threshold of 225 instead of 275. Otherwise the default passes being run are the same.
I've also added a few more options for configuring the pass pipeline. Namely, you can now specify arguments to LLVM directly via the `--llvm-args` command line option, which operates similarly to `--passes`. I also added the ability to turn off pre-population of the pass manager in case you want to run *only* your own passes.
I would consider this as closing #8890. I don't think we should change the default inlining threshold, because LLVM/clang will probably have chosen those numbers more carefully than we would. Regardless, here are the performance numbers from this commit:
```
$ ./x86_64-apple-darwin/stage0/bin/rustc ./gistfile1.rs --test --opt-level=3 -o before
warning: no debug symbols in executable (-arch x86_64)
$ ./before --bench
running 1 test
test bench::aes_bench_x8 ... bench: 1602 ns/iter (+/- 66) = 7990 MB/s
test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured
$ ./x86_64-apple-darwin/stage1/bin/rustc ./gistfile1.rs --test --opt-level=3 -o after
warning: no debug symbols in executable (-arch x86_64)
$ ./after --bench
running 1 test
test bench::aes_bench_x8 ... bench: 2103 ns/iter (+/- 175) = 6086 MB/s
test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured
$ ./x86_64-apple-darwin/stage1/bin/rustc ./gistfile1.rs --test --opt-level=3 -o after --llvm-args '-inline-threshold=225'
warning: no debug symbols in executable (-arch x86_64)
$ ./after --bench
running 1 test
test bench::aes_bench_x8 ... bench: 1600 ns/iter (+/- 71) = 8000 MB/s
test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured
```
r? @brson
@metajack requested the ability to violate the "only workspaces can be in the RUST_PATH" rule for the purpose of bootstrapping Servo without having to restructure all the directories. This patch gives rustpkg the ability to find sources if a directory in the RUST_PATH directly contains one of rustpkg's "special" files (lib.rs, main.rs, bench.rs, or test.rs), even if it's not a workspace. In this case, it puts the build artifacts in the first workspace in the RUST_PATH.
I'm not sure whether it's a good idea to keep this feature in rustpkg permanently. Thus, I added a flag, `--use-rust-path-hack`, and the hack is only enabled when that flag is set.
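For illustration, a rough sketch of the kind of check involved (not rustpkg's actual code; the helper name is hypothetical, and the file names are the "special" ones listed above):
```
use std::path::Path;

/// Hypothetical helper: does `dir` itself look like a package source
/// directory, i.e. does it directly contain one of the "special" files?
fn directly_contains_special_file(dir: &Path) -> bool {
    ["lib.rs", "main.rs", "bench.rs", "test.rs"]
        .iter()
        .any(|&name| dir.join(name).is_file())
}

fn main() {
    // Illustrative path only; a real caller would walk each RUST_PATH entry.
    println!("{}", directly_contains_special_file(Path::new(".")));
}
```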
This commit adds a rustpkg flag, --rust-path-hack, that allows
rustpkg to *search* inside package directories if they appear in
the RUST_PATH, while *building* libraries and executables into a
different target directory.
This behavior is hidden behind a flag because I believe we only
want to support it temporarily, to make it easier to port servo to
rustpkg.
This commit also includes a fix for how rustpkg fetches sources
from git repositories -- it uses a temporary directory as the target
when invoking `git clone`, then moves that directory into the workspace
if the clone was successful. (The old behavior was that when the
`git clone` failed, the empty target directory would be left lying
around anyway.)
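A minimal sketch of that clone-into-a-temporary-directory-then-move approach (not rustpkg's actual code; the function name, temp-directory naming, and URL are illustrative):
```
use std::fs;
use std::io;
use std::path::Path;
use std::process::Command;

/// Hypothetical helper: clone `url` into `dest`, going through a temporary
/// sibling directory so that a failed clone never leaves an empty `dest`.
fn clone_into(url: &str, dest: &Path) -> io::Result<()> {
    let tmp = dest.with_extension("clone-tmp"); // e.g. pkg -> pkg.clone-tmp
    let status = Command::new("git").arg("clone").arg(url).arg(&tmp).status()?;
    if status.success() {
        // Only move the checkout into place once the clone succeeded.
        fs::rename(&tmp, dest)
    } else {
        // Clean up whatever was left in the temporary location.
        let _ = fs::remove_dir_all(&tmp);
        Err(io::Error::new(io::ErrorKind::Other, "git clone failed"))
    }
}

fn main() -> io::Result<()> {
    // Placeholder URL and destination, for illustration only.
    clone_into("https://example.com/some/repo.git", Path::new("some-repo"))
}
```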
This is a generalization of the vector `.rposition()` method to all double-ended iterators that have the ExactSizeHint trait.
This resolves the slight asymmetry around `position` and `rposition`:
* position from the front is `vec.iter().position()`
* position from the back was `vec.rposition()` and is now `vec.iter().rposition()`
Additionally, other indexed sequences (only `extra::ringbuf`, I think) will have the same method available once they implement ExactSizeHint.
The trait `ExactSizeHint` is introduced to solve a few small niggles:
* We can't reverse (`.invert()`) an enumeration iterator
* For a vector, we have `v.iter().position(f)` but `v.rposition(f)`
* We can't reverse `Zip` even if both iterators are from vectors
`ExactSizeHint` is an empty trait that is intended to indicate that an
iterator, for example `VecIterator`, knows its exact finite size and
reports it correctly using `.size_hint()`. Only adaptors that preserve
this at all times can expose this trait further. (Here, "finite" means
fitting in `uint`.)
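As a rough sketch of why the exact-size guarantee is what makes a reverse `position` possible (illustrative only; in today's standard library the corresponding bound is spelled `ExactSizeIterator`, and `rposition` is provided directly):
```
/// Search from the back: walk the iterator in reverse and use the exact
/// length to turn "steps from the end" into an index from the front.
fn rposition_sketch<I, P>(mut iter: I, mut pred: P) -> Option<usize>
where
    I: DoubleEndedIterator + ExactSizeIterator,
    P: FnMut(I::Item) -> bool,
{
    let len = iter.len();
    let mut seen = 0;
    while let Some(item) = iter.next_back() {
        seen += 1;
        if pred(item) {
            // The matching element is `seen` steps from the end.
            return Some(len - seen);
        }
    }
    None
}

fn main() {
    let v = [1, 2, 3, 2, 1];
    // Same answer as v.iter().rposition(|&x| x == 2) in the standard library.
    assert_eq!(rposition_sketch(v.iter(), |&x| x == 2), Some(3));
}
```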
Fix a bug where `s.slice_chars(a, b)` did not accept `a == s.len()`.
Fix a bug in `!=` defined for DList.
Also simplify NormalizationIterator to use the CharIterator directly instead of mimicking the iteration itself.
These are very easy to replace with methods on string slices, basically
`.char_len()` and `.len()`.
These are the replacement implementations I wrote to clean these functions up, but seeing this, I propose removing them instead:
```
/// ...
pub fn count_chars(s: &str, begin: uint, end: uint) -> uint {
    // .slice() checks the char boundaries
    s.slice(begin, end).char_len()
}

/// Counts the number of bytes taken by the first `n` chars in `s`
/// starting from byte index `begin`.
///
/// Fails if there are less than `n` chars past `begin`
pub fn count_bytes<'b>(s: &'b str, begin: uint, n: uint) -> uint {
    s.slice_from(begin).slice_chars(0, n).len()
}
```
The only user-facing change is handling non-integer (and zero) `RUST_THREADS` more nicely:
```
$ RUST_THREADS=x rustc # old
You've met with a terrible fate, haven't you?
fatal runtime error: runtime tls key not initialized
Aborted
$ RUST_THREADS=x ./x86_64-unknown-linux-gnu/stage2/bin/rustc # new
You've met with a terrible fate, haven't you?
fatal runtime error: `RUST_THREADS` is `x`, should be a positive integer
Aborted
```
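A hedged sketch of the kind of validation that produces the friendlier message (the helper name and exact wording are illustrative, not the runtime's actual code):
```
use std::env;
use std::process;

/// Hypothetical helper: read RUST_THREADS and insist on a positive integer,
/// aborting with a readable message instead of an obscure TLS failure.
fn thread_count_from_env() -> Option<usize> {
    match env::var("RUST_THREADS") {
        Err(_) => None, // unset: fall back to the default thread count
        Ok(val) => match val.parse::<usize>() {
            Ok(n) if n > 0 => Some(n),
            _ => {
                eprintln!(
                    "fatal runtime error: `RUST_THREADS` is `{}`, should be a positive integer",
                    val
                );
                process::abort();
            }
        },
    }
}

fn main() {
    println!("threads: {:?}", thread_count_from_env());
}
```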
The other changes are converting some `for .. in range(x, y)` loops to `vec::from_fn` or `for .. in x.iter()` as appropriate, and removing a chain of (seemingly) unnecessary pointer casts.
(Also, fixes a typo in `extra::test` from #8823.)
Whenever a generic function was encountered, only the top-level items were
recursed upon, even though the function could contain items inside blocks or
nested inside other expressions. This changes the existing code from traversing
just the top-level items to using a Visitor that deeply recurses and finds any
items which need to be translated.
This was uncovered when building code with --lib, because the encode_symbol
function would panic once it found that an item hadn't been translated.
Closes #8134
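To illustrate the shallow-versus-deep distinction on a toy tree (this is not the compiler's real AST or Visitor, just a sketch of the traversal difference):
```
/// A toy expression tree: items can be nested inside blocks, just like
/// `fn` items nested inside another function's body.
enum Expr {
    Item(String),     // something that would need to be translated
    Block(Vec<Expr>), // a block containing further expressions
}

/// Shallow walk: looks only at the top level, so nested items are missed.
fn top_level_items(body: &[Expr]) -> Vec<&str> {
    body.iter()
        .filter_map(|e| match e {
            Expr::Item(name) => Some(name.as_str()),
            _ => None,
        })
        .collect()
}

/// Deep walk (what a Visitor does): recurse into every sub-expression.
fn all_items<'a>(e: &'a Expr, out: &mut Vec<&'a str>) {
    match e {
        Expr::Item(name) => out.push(name.as_str()),
        Expr::Block(exprs) => {
            for inner in exprs {
                all_items(inner, out);
            }
        }
    }
}

fn main() {
    let body = vec![
        Expr::Item("outer".to_string()),
        Expr::Block(vec![Expr::Item("nested".to_string())]),
    ];
    assert_eq!(top_level_items(&body), ["outer"]);

    let mut deep = Vec::new();
    for e in &body {
        all_items(e, &mut deep);
    }
    assert_eq!(deep, ["outer", "nested"]);
}
```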
Python's zip() short-circuits by not even querying its right-hand
iterator if the left-hand one is done. Match that behavior here by not
calling .next() on the right iterator if the left one returns None.
Document the fact that the iterator protocol only defines behavior up
until the first None is returned. After this point, iterators are free
to behave how they wish.
Add a new iterator adaptor Fuse<T> that modifies iterators to return
None forever if they returned None once.
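A minimal sketch of such a fusing adaptor (this mirrors what `Iterator::fuse` does in today's standard library; the version here is only illustrative):
```
/// Wraps an iterator and keeps returning None once the inner iterator has
/// returned None, no matter how the inner one behaves after that point.
struct Fuse<I> {
    iter: I,
    done: bool,
}

impl<I: Iterator> Iterator for Fuse<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        if self.done {
            return None;
        }
        match self.iter.next() {
            None => {
                self.done = true; // remember the first None
                None
            }
            some => some,
        }
    }
}

fn main() {
    let mut it = Fuse { iter: [1, 2].into_iter(), done: false };
    assert_eq!(it.next(), Some(1));
    assert_eq!(it.next(), Some(2));
    assert_eq!(it.next(), None);
    assert_eq!(it.next(), None); // fused: None is now permanent
}
```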
This should make benchmarks easier to understand. But it doesn't work:
BENCH_RS in mk/tests.mk has everything, from what I can tell in remake, but
only the tests that are direct children of src/test/bench get built and run.
@graydon, can you lend your expertise? I can't make heads or tails of this
makefile.
...ing, r=brson"
This reverts commit b8d1fa3994, reversing
changes made to f22b4b1698.
Conflicts:
mk/rt.mk
src/libuv
This caused a big performance regression on the windows bots and possibly some unexpected segfaults in pretty-printing tests.