Beforehand, it was unclear whether Rust was performing the "recommended set" of
LLVM optimizations on the code it generates. This commit changes the way we run
passes to closely mirror what clang does, which in theory gets it right. The
notable changes include:
* Passes are no longer explicitly added one by one. This would be difficult to
  keep up with as LLVM changes, and we aren't guaranteed to always know the best
  order in which to run passes.
* Passes are now managed by LLVM's PassManagerBuilder object, which is then used
  to populate the various pass managers that are run.
* We now run both a FunctionPassManager and a module-wide PassManager. This is
  what clang does, and I presume that we *may* see a speed boost from the
  module-wide passes just having less work to do. I have not measured this.
* The codegen passes have been extracted into their own separate pass manager
  so they don't get mixed up with the other passes.
* All pass managers now include the target-specific data layout and analysis
  passes.
Some new features include:
* You can now print all passes being run with `-Z print-llvm-passes`
* When specifying passes via `--passes`, the passes are now appended to the
  default list of passes instead of replacing it.
* The output of `--passes list` is now generated by LLVM instead of coming from
  a list of passes we maintain ourselves.
* Loop vectorization is turned on by default as an optimization pass and can be
  disabled with `-Z no-vectorize-loops`.
All of these "copies" of clang are based off of clang's [source code](http://clang.llvm.org/doxygen/BackendUtil_8cpp_source.html) in case anyone is curious what my source is. I was hoping that this would fix #8665, but it does not help the performance issues found there. Hopefully it'll allow us to tweak passes or see what's going on when we try to debug that problem.
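For anyone curious about the shape of the new setup without digging through the compiler, below is a rough sketch of the populate-then-run flow described above, written against LLVM's C API. This is not the actual rustc code: the extern declarations and the `optimize` wrapper are written out by hand purely for illustration, and it assumes an LLVM build that still exposes the legacy PassManagerBuilder C bindings.

```rust
// Illustrative sketch only; compile as a library crate linked against LLVM.
use std::os::raw::{c_int, c_uint};

// Opaque handles for the LLVM objects involved.
#[repr(C)] pub struct Module { _p: [u8; 0] }
#[repr(C)] pub struct Value { _p: [u8; 0] }
#[repr(C)] pub struct PassManager { _p: [u8; 0] }
#[repr(C)] pub struct PassManagerBuilder { _p: [u8; 0] }

extern "C" {
    fn LLVMPassManagerBuilderCreate() -> *mut PassManagerBuilder;
    fn LLVMPassManagerBuilderSetOptLevel(b: *mut PassManagerBuilder, level: c_uint);
    fn LLVMPassManagerBuilderPopulateFunctionPassManager(b: *mut PassManagerBuilder,
                                                          pm: *mut PassManager);
    fn LLVMPassManagerBuilderPopulateModulePassManager(b: *mut PassManagerBuilder,
                                                        pm: *mut PassManager);
    fn LLVMCreateFunctionPassManagerForModule(m: *mut Module) -> *mut PassManager;
    fn LLVMCreatePassManager() -> *mut PassManager;
    fn LLVMInitializeFunctionPassManager(pm: *mut PassManager) -> c_int;
    fn LLVMRunFunctionPassManager(pm: *mut PassManager, f: *mut Value) -> c_int;
    fn LLVMFinalizeFunctionPassManager(pm: *mut PassManager) -> c_int;
    fn LLVMRunPassManager(pm: *mut PassManager, m: *mut Module) -> c_int;
    fn LLVMGetFirstFunction(m: *mut Module) -> *mut Value;
    fn LLVMGetNextFunction(f: *mut Value) -> *mut Value;
}

/// Let the builder decide which passes each manager gets (instead of adding
/// them one by one), then run the per-function passes followed by the
/// module-wide ones, mirroring what clang does.
pub unsafe fn optimize(module: *mut Module, opt_level: c_uint) {
    let builder = LLVMPassManagerBuilderCreate();
    LLVMPassManagerBuilderSetOptLevel(builder, opt_level);

    let fpm = LLVMCreateFunctionPassManagerForModule(module);
    let mpm = LLVMCreatePassManager();
    LLVMPassManagerBuilderPopulateFunctionPassManager(builder, fpm);
    LLVMPassManagerBuilderPopulateModulePassManager(builder, mpm);

    // Per-function passes first...
    LLVMInitializeFunctionPassManager(fpm);
    let mut func = LLVMGetFirstFunction(module);
    while !func.is_null() {
        LLVMRunFunctionPassManager(fpm, func);
        func = LLVMGetNextFunction(func);
    }
    LLVMFinalizeFunctionPassManager(fpm);

    // ...then the module-wide passes over the whole module.
    LLVMRunPassManager(mpm, module);
}
```

The point is that the builder decides what each pass manager gets; nothing here lists individual passes by hand.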
This is a pull request for #2275
I've created a small Python script to generate test files for a list of keywords (as, break, do, else, enum, extern, false, fn, for, if, impl, let, loop, match, mod, mut, priv, pub, ref, return, self, static, struct, super, true, trait, type, unsafe, use, while), but I'm not really sure where to put it. I've added the generated files as well.
I did not use
```rust
fn main() {
    let $KW = "foo"; //~ error
    println($KW); //~ error
}
```
as a template, because for `return`, `self`, `ref`, `loop`, `mut`, and `break` this does not raise an error on the `println` line, only on the `let` line.
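For reference, the generator is the sort of thing sketched below. The real script is written in Python; this equivalent is in Rust purely for illustration, and the output directory and file-name pattern are assumptions rather than the layout actually used.

```rust
// Hypothetical generator, shown in Rust instead of the Python actually used.
use std::fs;

fn main() -> std::io::Result<()> {
    let keywords = ["as", "break", "do", "else", "enum", "extern", "false",
                    "fn", "for", "if", "impl", "let", "loop", "match", "mod",
                    "mut", "priv", "pub", "ref", "return", "self", "static",
                    "struct", "super", "true", "trait", "type", "unsafe",
                    "use", "while"];
    fs::create_dir_all("compile-fail")?;
    for kw in keywords {
        // Only the `let` binding is emitted, since using the keyword in the
        // `println` line does not error for every keyword (see above).
        let body = format!("fn main() {{\n    let {kw} = \"foo\"; //~ error\n}}\n");
        fs::write(format!("compile-fail/keyword-{kw}-as-identifier.rs"), body)?;
    }
    Ok(())
}
```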
This patchset enables rustc to cross-build mingw-w64 outputs.
Tested on mingw + mingw-w64 (mingw-builds, win64/seh/win32-threads/gcc-4.8.1).
I also patched LLVM to support Win64 stack unwinding: ebe22bdbce
I cross-built test/run-pass/smallest-hello-world.rs and confirmed that it works.
However, I also found that something goes wrong if I don't have a custom `#[start]` routine.
Further followup on #7081.
There still remains writeback.rs, but I want to wait to investigate that one because I've seen `make check` issues with it in the past.
This patch saves and restores Win64's nonvolatile registers.
This patch also saves the stack information of the thread environment
block (TEB), which lives at %gs:0x08 and %gs:0x10.
Some extern blocks are duplicated without the "stdcall" abi,
since Win64 uses only a single calling convention.
(Specifying an abi on them causes LLVM to produce wrong bytecode.)
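To illustrate the duplication (these are not the actual declarations touched by the patch; `GetLastError` is just a convenient example): on 32-bit Windows the system API uses stdcall, while Win64 has a single calling convention, so a declaration ends up split by target roughly like this.

```rust
// Illustrative only: the shape of the extern-block duplication.
#[cfg(all(windows, target_arch = "x86"))]
extern "stdcall" {
    fn GetLastError() -> u32;
}

// On Win64 there is only one calling convention, so a plain "C" declaration
// is used and "stdcall" must not be specified.
#[cfg(all(windows, target_arch = "x86_64"))]
extern "C" {
    fn GetLastError() -> u32;
}

#[cfg(all(windows, any(target_arch = "x86", target_arch = "x86_64")))]
fn main() {
    // kernel32 is linked by default on Windows targets.
    unsafe { println!("last error: {}", GetLastError()) }
}

#[cfg(not(all(windows, any(target_arch = "x86", target_arch = "x86_64"))))]
fn main() {}
```

(Today's `extern "system"` ABI exists to paper over exactly this difference.)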
Make CharSplitIterator double-ended, which is simple given that the operation is symmetric once the split-N feature is factored out into its own adaptor.
`.rsplitn_iter()` allows splitting `N` times from the back of a string, so it is a completely new feature. With the double-ended impl, `.split_iter()`, `.line_iter()`, and `.word_iter()` all allow picking off elements from either end.
`split_options_iter` is removed with the factoring out of the split and split-N iterators; instead there is `split_terminator_iter`.
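To make the double-ended part concrete, here is a small usage sketch written with today's standard-library names (`split`, `rsplitn`) rather than the `*_iter` names used in this PR; the capabilities shown are the point of the change.

```rust
fn main() {
    let s = "alpha beta gamma";

    // Double-ended: pick elements off either end of the same split iterator.
    let mut words = s.split(' ');
    assert_eq!(words.next(), Some("alpha"));
    assert_eq!(words.next_back(), Some("gamma"));
    assert_eq!(words.next(), Some("beta"));

    // Splitting from the back of the string (here into at most two pieces).
    let parts: Vec<&str> = "a.b.c.d".rsplitn(2, '.').collect();
    assert_eq!(parts, ["d", "a.b.c"]);
}
```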
---
Add benchmarks using `#[bench]` and tune CharSplitIterator a bit after Huon Wilson's suggestions
Benchmarks 1-5 do the same split using different implementations of `CharEq`, all splitting an ascii string on ascii space. Benchmarks 6-7 split a unicode string on an ascii char.
Before this PR
```
test str::bench::split_iter_ascii             ... bench: 166 ns/iter (+/- 2)
test str::bench::split_iter_closure           ... bench: 113 ns/iter (+/- 1)
test str::bench::split_iter_extern_fn         ... bench: 286 ns/iter (+/- 7)
test str::bench::split_iter_not_ascii         ... bench: 114 ns/iter (+/- 4)
test str::bench::split_iter_slice             ... bench: 220 ns/iter (+/- 12)
test str::bench::split_iter_unicode_ascii     ... bench: 217 ns/iter (+/- 3)
test str::bench::split_iter_unicode_not_ascii ... bench: 248 ns/iter (+/- 3)
```
PR, first commit
```
test str::bench::split_iter_ascii             ... bench: 331 ns/iter (+/- 9)
test str::bench::split_iter_closure           ... bench: 114 ns/iter (+/- 2)
test str::bench::split_iter_extern_fn         ... bench: 314 ns/iter (+/- 6)
test str::bench::split_iter_not_ascii         ... bench: 132 ns/iter (+/- 1)
test str::bench::split_iter_slice             ... bench: 157 ns/iter (+/- 3)
test str::bench::split_iter_unicode_ascii     ... bench: 502 ns/iter (+/- 64)
test str::bench::split_iter_unicode_not_ascii ... bench: 250 ns/iter (+/- 3)
```
PR, final version
```
test str::bench::split_iter_ascii             ... bench: 106 ns/iter (+/- 4)
test str::bench::split_iter_closure           ... bench: 107 ns/iter (+/- 1)
test str::bench::split_iter_extern_fn         ... bench: 267 ns/iter (+/- 6)
test str::bench::split_iter_not_ascii         ... bench: 108 ns/iter (+/- 1)
test str::bench::split_iter_slice             ... bench: 170 ns/iter (+/- 8)
test str::bench::split_iter_unicode_ascii     ... bench: 128 ns/iter (+/- 5)
test str::bench::split_iter_unicode_not_ascii ... bench: 252 ns/iter (+/- 3)
```
---
There are several ways to deal with `CharEq::only_ascii`. It is a performance optimization, so with that in mind, we allow passing bogus chars (outside ascii) as long as they don't match. We use a byte-value check to make sure we don't split on them (which would split substrings in the middle of an encoded char). (A more principled way would be to only pass the ascii codepoints to the CharEq when it indicates only_ascii, but that undoes some of the performance optimization.)
Implement Huon Wilson's suggestions (since the benchmarks agree!).
Use `self.sep.matches(byte as char) && byte < 128u8` to match in the
only_ascii case so that mistaken matches outside the ascii range can't
create invalid substrings.
Put the conditional on only_ascii outside the loop.
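A minimal sketch of that fast path, with made-up names (the real code lives in the CharSplitIterator impl and takes a `CharEq` rather than a closure): the byte test keeps a mistaken match against a non-ascii byte from ever splitting inside a multi-byte sequence.

```rust
// Hypothetical stand-in for the only_ascii fast path; the PR performs the
// two checks in the opposite order, but the effect is the same.
fn split_ascii_fast<'a>(s: &'a str, matches: impl Fn(char) -> bool) -> Vec<&'a str> {
    let bytes = s.as_bytes();
    let (mut start, mut out) = (0, Vec::new());
    for (i, &b) in bytes.iter().enumerate() {
        // Require an ascii byte before trusting the predicate, so we never
        // cut the string in the middle of an encoded char.
        if b < 128 && matches(b as char) {
            out.push(&s[start..i]);
            start = i + 1;
        }
    }
    out.push(&s[start..]);
    out
}

fn main() {
    assert_eq!(split_ascii_fast("héllo wörld", |c| c == ' '), ["héllo", "wörld"]);
}
```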
This is in preparation for making discriminants not always be int (#1647). It also means that how many bits of a discriminant are preserved when compiling for a 64-bit target no longer depends on the build host's word size, which is a nice property to have.
We may want to standardize how to abbreviate "discriminant" in a followup change.
This does two things: 1) stops compressing metadata, 2) stops copying the metadata section, instead holding a reference to the buffer returned by the LLVM section iterator.
Not compressing metadata requires something like 7x the storage space, but makes running tests about 9% faster. This has been a time improvement on all platforms I've tested, including Windows. I considered leaving compression as an option, but it doesn't seem to be worth the complexity since we don't currently have any use cases where we need to save that space.
In order to avoid copying the metadata section I had to hack up extra::ebml a bit to support unsafe buffers. We should probably move it into librustc so that it can evolve to support the compiler without worrying about having a crummy interface.
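This is not the real `extra::ebml` interface, only a tiny sketch of the zero-copy idea: the document type borrows the section buffer instead of owning a copy of it.

```rust
// Illustrative sketch; names and layout are invented, not extra::ebml's.
struct Doc<'a> {
    data: &'a [u8], // borrowed view of the metadata section, never copied
    start: usize,
    end: usize,
}

impl<'a> Doc<'a> {
    fn new(data: &'a [u8]) -> Doc<'a> {
        Doc { data, start: 0, end: data.len() }
    }
    fn as_bytes(&self) -> &'a [u8] {
        &self.data[self.start..self.end]
    }
}

fn main() {
    // In the compiler the buffer would stay owned by the object-file reader;
    // decoders only ever hold a view into it.
    let section: Vec<u8> = vec![1, 2, 3, 4];
    let doc = Doc::new(&section);
    assert_eq!(doc.as_bytes(), &[1u8, 2, 3, 4]);
}
```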
r? @graydon