As mentioned in #6109, ```mkdir_recursive``` doesn't really need to use recursive calls, so here is an iterative version.
The other points of the proposed overhaul (renaming and existing permissions) still need to be resolved.
I also bundled an iterative ```rmdir_recursive```, for the same reason.
Please do not hesitate to provide feedback on style as this is my first code change in Rust.
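For illustration, here is a minimal sketch of the iterative approach using today's `std::fs` API (the patch itself targets the 2014 `std::io::fs` module, so the names differ):
```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// Walk from the first path component toward the leaf, creating each missing
// directory in turn, with no recursion needed.
fn mkdir_iterative(path: &Path) -> io::Result<()> {
    let mut cur = PathBuf::new();
    for component in path.components() {
        cur.push(component.as_os_str());
        match fs::create_dir(&cur) {
            Ok(()) => {}
            // A directory that already exists is fine; propagate other errors.
            Err(ref e) if e.kind() == io::ErrorKind::AlreadyExists => {}
            Err(e) => return Err(e),
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    mkdir_iterative(Path::new("a/b/c"))
}
```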
Added common and simple case folding, i.e. one-to-one character mappings. For more information see http://www.unicode.org/faq/casemap_charprop.html
Removed auto-generated dead code.
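As a rough illustration of the one-to-one (simple/common) mappings, hard-coded here for a few characters rather than generated from CaseFolding.txt:
```rust
// Tiny illustration of simple/common case folding: every input character maps
// to exactly one output character. (Full folding can expand, e.g. 'ß' -> "ss";
// simple folding never does that.)
fn simple_fold(c: char) -> char {
    match c {
        'A'..='Z' => ((c as u8) + 32) as char, // common ASCII foldings
        'Σ' | 'ς' => 'σ',                      // both capital and final sigma fold to σ
        _ => c,                                // the real tables cover far more
    }
}

fn main() {
    assert_eq!(simple_fold('S'), 's');
    assert_eq!(simple_fold('ς'), 'σ');
    assert_eq!(simple_fold('ß'), 'ß'); // simple folding never turns one char into two
}
```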
For the following code snippet:
```rust
struct Foo { bar: int }
fn foo1(x: &Foo) -> &int {
&x.bar
}
```
This PR generates the following error message:
```
test.rs:2:1: 4:2 note: consider using an explicit lifetime parameter as shown: fn foo1<'a>(x: &'a Foo) -> &'a int
test.rs:2 fn foo1(x: &Foo) -> &int {
test.rs:3 &x.bar
test.rs:4 }
test.rs:3:5: 3:11 error: cannot infer an appropriate lifetime for borrow expression due to conflicting requirements
test.rs:3 &x.bar
^~~~~~
```
Currently it does not support methods.
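For reference, the declaration with the suggested explicit lifetime spelled out (written with `i32`, since `int` no longer exists; today's elision rules would also accept the original form):
```rust
struct Foo { bar: i32 }

// The suggested fix: tie the output reference's lifetime to the input's.
fn foo1<'a>(x: &'a Foo) -> &'a i32 {
    &x.bar
}

fn main() {
    let foo = Foo { bar: 1 };
    println!("{}", foo1(&foo));
}
```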
This enables the lowering of LLVM 64-bit intrinsics to hardware ops, resolving issues around `__kernel_cmpxchg64` on older kernels on ARM devices, and also enables use of the hardware floating point unit, resolving https://github.com/mozilla/rust/issues/10482.
Enables the dereference overloads introduced by #12491 to be applied wherever automatic dereferences would be used (field accesses, method calls and indexing).
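In today's syntax the effect looks roughly like this; a hedged sketch, not the original patch's test:
```rust
use std::ops::Deref;

struct Wrapper<T> { inner: T }

impl<T> Deref for Wrapper<T> {
    type Target = T;
    fn deref(&self) -> &T { &self.inner }
}

struct Point { x: i32, y: i32 }

fn main() {
    let p = Wrapper { inner: Point { x: 1, y: 2 } };
    // The field accesses below go through the user-defined deref overload
    // automatically, just like built-in auto-deref does for &T or Box<T>.
    println!("{}", p.x + p.y);
}
```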
Whenever a failure happens, if a program is run with
`RUST_LOG=std::rt::backtrace` a backtrace will be printed to the task's stderr
handle. Stack traces are unconditionally printed on double-failure and
rtabort!().
This ended up having a nontrivial implementation; here are some highlights:
* We're bundling libbacktrace for everything but OSX and Windows
* We use libgcc_s and its libunwind APIs to get a backtrace of instruction
pointers
* On OSX we use dladdr() to go from an instruction pointer to a symbol
* On unix that isn't OSX, we use libbacktrace to get symbols
* Windows, as usual, has an entirely separate implementation
Lots more fun details and comments can be found in the source itself.
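For comparison only, a minimal sketch of capturing a backtrace with today's standard library (`std::backtrace`, gated by `RUST_BACKTRACE`), which is a much later mechanism and not the `RUST_LOG=std::rt::backtrace` one added here:
```rust
use std::backtrace::Backtrace;

fn main() {
    // Captured only when RUST_BACKTRACE is set in the environment;
    // otherwise a disabled placeholder is returned.
    let bt = Backtrace::capture();
    println!("{}", bt);
}
```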
Closes #10128
Most IO related functions return an IoResult so that the caller can handle failure in whatever way is appropriate. However, the `lines`, `bytes`, and `chars` iterators all suppress errors. This means that code that needs to handle errors can't use any of these iterators. All three of these iterators were updated to produce IoResults.
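This is essentially the shape the API has today, where `BufRead::lines` yields `io::Result<String>` so callers decide how to handle read errors:
```rust
use std::io::{self, BufRead};

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    // Each item is an io::Result<String>, so a read error surfaces to the
    // caller instead of being silently swallowed by the iterator.
    for line in stdin.lock().lines() {
        let line = line?;
        println!("{}", line);
    }
    Ok(())
}
```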
Fixes #12368
If a TTY fails to get initialized, it still needs to have uv_close invoked on
it. This fixes the problem by constructing the TtyWatcher struct before the call
to uv_tty_init. The struct has a destructor on it which will close the handle
properly.
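The ordering trick is plain RAII; a simplified sketch with made-up names (`TtyGuard` and `tty_init` stand in for the real TtyWatcher and uv_tty_init):
```rust
// Build the guard, with its Drop impl, *before* the fallible init call,
// so the handle gets closed even when initialization fails.
struct TtyGuard { handle: *mut u8 } // stand-in for the libuv handle pointer

impl Drop for TtyGuard {
    fn drop(&mut self) {
        // The real code would call uv_close on the handle here.
        println!("closing handle {:p}", self.handle);
    }
}

fn tty_init(_handle: *mut u8) -> Result<(), ()> {
    Err(()) // pretend initialization fails
}

fn open_tty(handle: *mut u8) -> Result<TtyGuard, ()> {
    let guard = TtyGuard { handle }; // destructor is armed first
    tty_init(guard.handle)?;         // on Err, `guard` is dropped and closes the handle
    Ok(guard)
}

fn main() {
    let _ = open_tty(std::ptr::null_mut());
}
```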
Closes #12666
Partially addresses #11783.
Previously, Rust's hashtable was totally unoptimized. It used an Option
per key-value pair and very naive open addressing.
The old hashtable had very high variance in lookup time; see the
'find_notexisting' benchmark below for an example. This is fixed by letting
keys with long probe sequence lengths steal the 'lucky' spots of keys with
short ones, which reduces the variance of probe lengths while maintaining
the same mean.
Also, other optimization liberties were taken. Everything is as cache-aware
as possible, and this hashtable should perform extremely well for both
large and small keys and values.
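A simplified sketch of that displacement rule, hand-written for illustration; the real table keeps the hashes in their own array for cache friendliness and also handles resizing and removal:
```rust
// Robin Hood style insertion into an open-addressed table: an entry that has
// probed further than the resident of a slot steals that slot, and the
// displaced resident keeps probing from there.
fn robin_hood_insert(buckets: &mut Vec<Option<(usize, u64)>>, hash: usize, value: u64) {
    let mask = buckets.len() - 1;      // capacity is a power of two
    let mut idx = hash & mask;
    let mut dist = 0;                  // our probe sequence length so far
    let mut entry = (hash, value);
    loop {
        let slot = buckets[idx];       // Option<(usize, u64)> is Copy
        match slot {
            None => {
                buckets[idx] = Some(entry);
                return;
            }
            Some(resident) => {
                let resident_dist = idx.wrapping_sub(resident.0) & mask;
                if resident_dist < dist {
                    // The resident is "luckier" than us: take its spot and
                    // continue probing with the displaced entry instead.
                    buckets[idx] = Some(entry);
                    entry = resident;
                    dist = resident_dist;
                }
            }
        }
        idx = (idx + 1) & mask;
        dist += 1;
    }
}

fn main() {
    let mut table: Vec<Option<(usize, u64)>> = vec![None; 8];
    robin_hood_insert(&mut table, 3, 42);
    robin_hood_insert(&mut table, 3, 7); // collides and probes forward
    println!("{:?}", table);
}
```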
Benchmarks:
```
comprehensive_old_hashmap 378 ns/iter (+/- 8)
comprehensive_new_hashmap 206 ns/iter (+/- 4)
1.8x faster
old_hashmap_as_queue 238 ns/iter (+/- 8)
new_hashmap_as_queue 119 ns/iter (+/- 2)
2x faster
old_hashmap_insert 172 ns/iter (+/- 8)
new_hashmap_insert 146 ns/iter (+/- 11)
1.17x faster
old_hashmap_find_existing 50 ns/iter (+/- 12)
new_hashmap_find_existing 35 ns/iter (+/- 6)
1.43x faster
old_hashmap_find_notexisting 49 ns/iter (+/- 49)
new_hashmap_find_notexisting 34 ns/iter (+/- 4)
1.44x faster
Memory usage of old hashtable (64-bit assumed):
aligned(8+sizeof(Option)+sizeof(K)+sizeof(V))/0.75 + 48ish bytes
Memory usage of new hashtable:
(aligned(sizeof(K))
+ aligned(sizeof(V))
+ 8)/0.9 + 112ish bytes
Timing of building librustc:
compile_and_link: x86_64-unknown-linux-gnu/stage0/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc
time: 0.457 s parsing
time: 0.028 s gated feature checking
time: 0.000 s crate injection
time: 0.108 s configuration 1
time: 1.049 s expansion
time: 0.219 s configuration 2
time: 0.222 s maybe building test harness
time: 0.223 s prelude injection
time: 0.268 s assigning node ids and indexing ast
time: 0.075 s external crate/lib resolution
time: 0.026 s language item collection
time: 1.016 s resolution
time: 0.038 s lifetime resolution
time: 0.000 s looking for entry point
time: 0.030 s looking for macro registrar
time: 0.061 s freevar finding
time: 0.138 s region resolution
time: 0.110 s type collecting
time: 0.072 s variance inference
time: 0.126 s coherence checking
time: 9.110 s type checking
time: 0.186 s const marking
time: 0.049 s const checking
time: 0.418 s privacy checking
time: 0.057 s effect checking
time: 0.033 s loop checking
time: 1.293 s compute moves
time: 0.182 s match checking
time: 0.242 s liveness checking
time: 0.866 s borrow checking
time: 0.150 s kind checking
time: 0.013 s reachability checking
time: 0.175 s death checking
time: 0.461 s lint checking
time: 13.112 s translation
time: 4.352 s llvm function passes
time: 96.702 s llvm module passes
time: 50.574 s codegen passes
time: 154.611 s LLVM passes
time: 2.821 s running linker
time: 15.750 s linking
compile_and_link: x86_64-unknown-linux-gnu/stage1/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc
time: 0.422 s parsing
time: 0.031 s gated feature checking
time: 0.000 s crate injection
time: 0.126 s configuration 1
time: 1.014 s expansion
time: 0.251 s configuration 2
time: 0.249 s maybe building test harness
time: 0.273 s prelude injection
time: 0.279 s assigning node ids and indexing ast
time: 0.076 s external crate/lib resolution
time: 0.033 s language item collection
time: 1.028 s resolution
time: 0.036 s lifetime resolution
time: 0.000 s looking for entry point
time: 0.029 s looking for macro registrar
time: 0.063 s freevar finding
time: 0.133 s region resolution
time: 0.111 s type collecting
time: 0.077 s variance inference
time: 0.565 s coherence checking
time: 8.953 s type checking
time: 0.176 s const marking
time: 0.050 s const checking
time: 0.401 s privacy checking
time: 0.063 s effect checking
time: 0.032 s loop checking
time: 1.291 s compute moves
time: 0.172 s match checking
time: 0.249 s liveness checking
time: 0.831 s borrow checking
time: 0.121 s kind checking
time: 0.013 s reachability checking
time: 0.179 s death checking
time: 0.503 s lint checking
time: 14.385 s translation
time: 4.495 s llvm function passes
time: 92.234 s llvm module passes
time: 51.172 s codegen passes
time: 150.809 s LLVM passes
time: 2.542 s running linker
time: 15.109 s linking
```
BUT accesses are much more cache-friendly. In fact, if the probe
sequence length is below 8, only two cache lines worth of hashes will be
pulled into cache. This is unlike the old version, which would have to
stride over the stored keys and values and would get more cache-unfriendly
the bigger the stored values got.
And did you notice the higher load factor? We can now reasonably get a
load factor of 0.9 with very good performance.
Please review this very closely. This is my first major contribution to Rust. Sorry for the ugly diff!
The `~str` type is not long for this world as it will be superseded by the
soon-to-come DST changes for the language. The new type will be
`~Str`, and matching over the allocation will no longer be supported.
Matching on `&str` will continue to work, in both a pre- and post-DST world.
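For illustration, matching on `&str` in today's syntax, which is the part that keeps working:
```rust
// Pattern-matching on string slices continues to be supported.
fn classify(s: &str) -> &'static str {
    match s {
        "yes" | "y" => "affirmative",
        "no" | "n" => "negative",
        _ => "unknown",
    }
}

fn main() {
    println!("{}", classify("yes"));
}
```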
Previously, Rust's hashtable was totally unoptimized. It used an Option
per key-value pair and very naive open addressing.
The old hashtable had very high variance in lookup time; see the
'find_notexisting' benchmark below for an example. This is fixed by letting
keys with long probe sequence lengths steal the 'lucky' spots of keys with
short ones, which reduces the variance of probe lengths while maintaining
the same mean.
Also, other optimization liberties were taken. Everything is as cache-aware
as possible, and this hashtable should perform extremely well for both
large and small keys and values.
Benchmarks:
comprehensive_old_hashmap 378 ns/iter (+/- 8)
comprehensive_new_hashmap 206 ns/iter (+/- 4)
1.8x faster
old_hashmap_as_queue 238 ns/iter (+/- 8)
new_hashmap_as_queue 119 ns/iter (+/- 2)
2x faster
old_hashmap_insert 172 ns/iter (+/- 8)
new_hashmap_insert 146 ns/iter (+/- 11)
1.17x faster
old_hashmap_find_existing 50 ns/iter (+/- 12)
new_hashmap_find_existing 35 ns/iter (+/- 6)
1.43x faster
old_hashmap_find_notexisting 49 ns/iter (+/- 49)
new_hashmap_find_notexisting 34 ns/iter (+/- 4)
1.44x faster
Memory usage of old hashtable (64-bit assumed):
aligned(8+sizeof(K)+sizeof(V))/0.75 + 6 words
Memory usage of new hashtable:
(aligned(sizeof(K))
+ aligned(sizeof(V))
+ 8)/0.9 + 6.5 words
BUT accesses are much more cache-friendly. In fact, if the probe
sequence length is below 8, only two cache lines worth of hashes will be
pulled into cache. This is unlike the old version, which would have to
stride over the stored keys and values and would get more cache-unfriendly
the bigger the stored values got.
And did you notice the higher load factor? We can now reasonably get a
load factor of 0.9 with very good performance.
Closes #12803 (std: Relax an assertion in oneshot selection) r=brson
Closes #12818 (green: Fix a scheduler assertion on yielding) r=brson
Closes #12819 (doc: discuss try! in std::io) r=alexcrichton
Closes #12820 (Use generic impls for `Hash`) r=alexcrichton
Closes #12826 (Remove remaining nolink usages) r=alexcrichton
Closes #12835 (Emacs: always jump the cursor if needed on indent) r=brson
Closes #12838 (Json method cleanup) r=alexcrichton
Closes #12843 (rustdoc: whitelist the headers that get a § on hover) r=alexcrichton
Closes #12844 (docs: add two unlisted libraries to the index page) r=pnkfelix
Closes #12846 (Added a test that checks that unary structs can be mutably borrowed) r=sfackler
Closes #12847 (mk: Fix warnings about duplicated rules) r=nmatsakis
Previously the :hover rules were making the links to the traits/types in
something like `impl<K: Hash + Eq, V> ... { ... }` be displayed with a
trailing `§` when hovered over. This commit
restricts that behaviour to specific headers, i.e. those that are known
to be section headers (like those rendered in markdown doc-comments, and
the "Modules", "Functions" etc. headings).
The rust-mode-indent-line function had a check, which ran after all the
calculations for how to indent had already happened, that skipped
actually performing the indent if the line was already at the right
indentation.
Because of that, the cursor did not jump to the indentation if the line
wasn't changing. This was particularly annoying if there was nothing
but spaces on the line and you were at the beginning of it: it looked
like the indent just wasn't working.
This removes the check and adds test cases to cover this.
This commit fixes a small bug in the green scheduler where a scheduler task
calling `maybe_yield` would trip the assertion that `self.yield_check_count > 0`.
This behavior was seen when a scheduler task was scheduled many times
successively, sending messages in a loop (via the channel `send` method), which
in turn invokes `maybe_yield`. Yielding on a sched task doesn't make sense
because as soon as it's done it will implicitly do a yield, and for this reason
the yield check is just skipped if it's a sched task.
I am unable to create a reliable test for this behavior, as there's no direct
way to have control over the scheduler tasks.
cc #12666, I discovered this when investigating that issue
The assertion was erroneously ensuring that there was no data on the port when
the port had selection aborted on it. This assertion was written in error
because it's possible for data to be waiting on a port, even after it was
disconnected. When aborting selection, if we see that there's data on the port,
then we return true that data is available on the port.
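The underlying fact is easy to see with today's `std::sync::mpsc` channels (a hedged analogue of the runtime's ports): buffered data stays receivable after the sender disconnects.
```rust
use std::sync::mpsc::channel;

fn main() {
    let (tx, rx) = channel();
    tx.send(42).unwrap();
    drop(tx); // the sending side disconnects

    // The buffered value is still there even though the channel is
    // disconnected, which is the situation the relaxed assertion must allow.
    assert_eq!(rx.recv().unwrap(), 42);

    // Only once the buffer is drained does recv() report the disconnect.
    assert!(rx.recv().is_err());
}
```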
Closes #12802
Some types of error are caused by a missing lifetime parameter on a function
or method declaration. In such cases, this commit generates a suggestion
for what the declaration could look like. It does not support method
declarations yet.