`make dist`, or building from a generated tarball, is currently not possible because some files have been renamed and libextra is in the middle of being split. This PR fixes all (current) issues so that Rust can be built from the .tar.gz source alone.
This pull request tries to fix #12050.
I went after these incorrect error messages quite aggressively, so I may also have changed some strings that are not actual error messages.
Please point those out and I will update this pull request accordingly.
libextra is currently being split into several crates. This commit adds
them all to the dist target in order to have them in the final tarballs.
Signed-off-by: Luca Bruno <lucab@debian.org>
Error messages cleaned in librustc/middle
Error messages cleaned in libsyntax
Error messages cleaned in libsyntax more aggressively
Error messages cleaned in librustc more aggressively
Fixed affected tests
Fixed other failing tests
Last failing tests fixed
src/README.txt was renamed in a30d61b05a, so make dist fails
because it cannot find the file.
This commit makes the dist target work again.
Signed-off-by: Luca Bruno <lucab@debian.org>
The lexer and the json module were using `transmute(-1)` as a `char` sentinel value for EOF, which is invalid since a `char` must be a valid Unicode codepoint.
Fixing this allows range asserts on `char`s, since they always lie between 0 and 0x10FFFF.
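As a minimal illustration (in current Rust syntax, not the lexer's actual code) of why a `-1` bit pattern can never be a `char` and what the resulting range assertion looks like:

```rust
// Minimal sketch; not the compiler's code, only the char invariant the fix relies on.
fn main() {
    // char::from_u32 rejects anything that is not a Unicode scalar value:
    // surrogates (0xD800..=0xDFFF) and everything above 0x10FFFF.
    assert!(char::from_u32(0x10FFFF).is_some());
    assert!(char::from_u32(0x110000).is_none());
    assert!(char::from_u32(u32::MAX).is_none()); // the bit pattern of the old -1 sentinel

    // With the sentinel gone, code can assert the range directly.
    let c = 'é';
    assert!((c as u32) <= 0x10FFFF);
}
```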
Declare a `type SendStr = MaybeOwned<'static>` to ease readability of
types that needed the old SendStr behavior.
Implement all the traits for MaybeOwned that SendStr used to implement.
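A rough sketch of the shape this gives, written against current Rust rather than the libstd of the time (the names mirror the old API, but the implementation here is only illustrative):

```rust
// Illustrative only: mirrors the old MaybeOwned/SendStr shape, not the actual libstd source.
enum MaybeOwned<'a> {
    Slice(&'a str), // borrowed string slice
    Owned(String),  // owned, heap-allocated string
}

// SendStr fixes the lifetime to 'static, matching the old SendStr behaviour of
// "either a &'static str or an owned String".
type SendStr = MaybeOwned<'static>;

impl<'a> MaybeOwned<'a> {
    fn as_slice(&self) -> &str {
        match *self {
            MaybeOwned::Slice(s) => s,
            MaybeOwned::Owned(ref s) => s.as_str(),
        }
    }
}

fn main() {
    let a: SendStr = MaybeOwned::Slice("static literal");
    let b: SendStr = MaybeOwned::Owned(format!("built at runtime: {}", 42));
    assert_eq!(a.as_slice(), "static literal");
    assert_eq!(b.as_slice(), "built at runtime: 42");
}
```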
- Convert the formatting traits to `&self` rather than `_: &Self`
- Rejig `syntax::ext::{format,deriving}` a little in preparation
- Implement `#[deriving(Show)]`
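For the last two items, a small sketch using today's names (`fmt::Display` and `#[derive(Debug)]`; at the time the trait was called `Show` and the attribute `#[deriving(Show)]`), not code from the PR itself:

```rust
use std::fmt;

// Modern spelling of what #[deriving(Show)] generated at the time.
#[derive(Debug)]
struct Point {
    x: i32,
    y: i32,
}

// The formatting trait methods take &self rather than a separate `_: &Self`
// argument, which is what the first bullet above describes.
impl fmt::Display for Point {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "({}, {})", self.x, self.y)
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    println!("{}", p);   // hand-written Display impl
    println!("{:?}", p); // derived impl
}
```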
The transmute was unsound.
There are many instances of `.unwrap_or('\x00')` for "ignoring" EOF; these
either do not make the situation any worse than before (in fact they make it
better, since it is now easy to grep for places that don't handle EOF) or
can never actually be read.
Fixes #8971.
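A hedged sketch of the pattern described above; the reader type and method names here are stand-ins, not the real lexer API:

```rust
// Stand-in reader, not the real lexer: illustrates Option<char> as the EOF
// signal and `.unwrap_or('\x00')` at call sites that only ignore EOF.
struct Reader<'a> {
    chars: std::str::Chars<'a>,
}

impl<'a> Reader<'a> {
    // None now means EOF; no in-band sentinel char is needed.
    fn bump(&mut self) -> Option<char> {
        self.chars.next()
    }

    fn peek(&self) -> Option<char> {
        self.chars.clone().next()
    }

    fn skip_whitespace(&mut self) {
        // Mapping EOF to NUL here is harmless (NUL is never whitespace) and
        // easy to grep for when auditing EOF handling.
        while self.peek().unwrap_or('\x00').is_whitespace() {
            self.bump();
        }
    }
}

fn main() {
    let mut r = Reader { chars: "   x".chars() };
    r.skip_whitespace();
    assert_eq!(r.bump(), Some('x'));
    assert_eq!(r.bump(), None); // EOF
}
```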
This also drops support for the managed pointer POISON_ON_FREE feature,
as it's not worth adding back support for it. After a snapshot, the
leftovers can be removed.
This pull request:
1) Changes the initial insertion sort to be in-place, and defers allocation of working set until merge is needed.
2) Increases the maximum run length for which insertion sort is used from 8 to 32 elements. This increases the size of vectors that will not allocate and reduces the number of merge passes by two. It seemed to be the sweet spot in the benchmarks that I ran (a sketch of the approach follows below).
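A rough sketch of the overall strategy, not the libstd implementation (the threshold constant, the `Clone` bound, and the recursive merge here are simplifications of what the PR actually does):

```rust
// Sketch only: short slices are sorted entirely in place, and the working
// buffer is allocated only when a merge pass is actually required.
const INSERTION_THRESHOLD: usize = 32; // assumption: the 32-element cutoff discussed above

fn insertion_sort<T: Ord>(v: &mut [T]) {
    for i in 1..v.len() {
        let mut j = i;
        while j > 0 && v[j] < v[j - 1] {
            v.swap(j, j - 1);
            j -= 1;
        }
    }
}

fn sort<T: Ord + Clone>(v: &mut [T]) {
    if v.len() <= INSERTION_THRESHOLD {
        // Short vectors never allocate.
        insertion_sort(v);
        return;
    }
    // Longer vectors fall back to a stable merge sort; the buffer is only
    // allocated here, after the cheap path has been ruled out.
    let mid = v.len() / 2;
    sort(&mut v[..mid]);
    sort(&mut v[mid..]);
    let merged: Vec<T> = {
        let (a, b) = v.split_at(mid);
        let mut out = Vec::with_capacity(a.len() + b.len());
        let (mut i, mut j) = (0, 0);
        while i < a.len() && j < b.len() {
            if a[i] <= b[j] { out.push(a[i].clone()); i += 1; }
            else            { out.push(b[j].clone()); j += 1; }
        }
        out.extend_from_slice(&a[i..]);
        out.extend_from_slice(&b[j..]);
        out
    };
    v.clone_from_slice(&merged);
}

fn main() {
    let mut data: Vec<u64> = (0..100).rev().collect();
    sort(&mut data);
    assert_eq!(data, (0..100).collect::<Vec<u64>>());
}
```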
Here are the results of some benchmarks. Note that they are sorting u64s, so types that are more expensive to compare or copy may have different behaviors.
Before changes:
```
test vec::bench::sort_random_large bench: 719753 ns/iter (+/- 130173) = 111 MB/s
test vec::bench::sort_random_medium bench: 4726 ns/iter (+/- 742) = 169 MB/s
test vec::bench::sort_random_small bench: 344 ns/iter (+/- 76) = 116 MB/s
test vec::bench::sort_sorted bench: 437244 ns/iter (+/- 70043) = 182 MB/s
```
Deferred allocation (8 element insertion sort):
```
test vec::bench::sort_random_large bench: 702630 ns/iter (+/- 88158) = 113 MB/s
test vec::bench::sort_random_medium bench: 4529 ns/iter (+/- 497) = 176 MB/s
test vec::bench::sort_random_small bench: 185 ns/iter (+/- 49) = 216 MB/s
test vec::bench::sort_sorted bench: 425853 ns/iter (+/- 60907) = 187 MB/s
```
Deferred allocation (16 element insertion sort):
```
test vec::bench::sort_random_large bench: 692783 ns/iter (+/- 165837) = 115 MB/s
test vec::bench::sort_random_medium bench: 4434 ns/iter (+/- 722) = 180 MB/s
test vec::bench::sort_random_small bench: 187 ns/iter (+/- 38) = 213 MB/s
test vec::bench::sort_sorted bench: 393783 ns/iter (+/- 85548) = 203 MB/s
```
Deferred allocation (32 element insertion sort):
```
test vec::bench::sort_random_large bench: 682556 ns/iter (+/- 131008) = 117 MB/s
test vec::bench::sort_random_medium bench: 4370 ns/iter (+/- 1369) = 183 MB/s
test vec::bench::sort_random_small bench: 179 ns/iter (+/- 32) = 223 MB/s
test vec::bench::sort_sorted bench: 358353 ns/iter (+/- 65423) = 223 MB/s
```
Deferred allocation (64 element insertion sort):
```
test vec::bench::sort_random_large bench: 712040 ns/iter (+/- 132454) = 112 MB/s
test vec::bench::sort_random_medium bench: 4425 ns/iter (+/- 784) = 180 MB/s
test vec::bench::sort_random_small bench: 179 ns/iter (+/- 81) = 223 MB/s
test vec::bench::sort_sorted bench: 317812 ns/iter (+/- 62675) = 251 MB/s
```
This is the best I could manage with the basic merge sort while keeping the invariant that the original vector must contain each element exactly once when the comparison function is called. If one is not married to a stable sort, an in-place n*log(n) sorting algorithm may have better performance in some cases.
for #12011
cc @huonw
Added a separate in-place insertion sort for short vectors.
Increased the threshold for insertion sort from 8 to 32 elements
for small types and to 16 for larger types. Added benchmarks
for sorting larger types.
This PR extends the tidy formatting check to rust files in the test folder. To facilitate this, a few flags were added to tidy:
* `xfail-tidy-cr` - Disables the check for CR characters for all following lines in the file
* `xfail-tidy-tab` - Disables the check for tab characters for all following lines in the file
* `xfail-tidy-linelength` - Disables the line length check for all following lines in the file
Checks should not have to be disabled often. I disabled line length checks in `debug-info` tests that use `debugger:` checks, but aside from that, there were relatively few exclusions. Running tidy on all the tests does slow down the formatting check, so it may be worth investigating further optimization.
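For illustration, here is how one of these directives might appear at the top of a debug-info test; the exact comment placement is an assumption on my part, and the `debugger:` line is made up:

```rust
// xfail-tidy-linelength
// (hypothetical test file: the directive above would disable the line-length
// check for everything that follows)

// compile-flags:-g
// debugger:print a_deliberately_long_expression_whose_printed_form_would_otherwise_exceed_the_line_length_limit

fn main() {
    let _a_deliberately_long_binding = 42;
}
```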
cc #4534