These are easy to replace with methods on string slices, essentially
`.char_len()` and `.len()`.
These are the replacement implementations I wrote while cleaning these
functions up, but having seen them, I propose removing the functions entirely:

```rust
/// ...
pub fn count_chars(s: &str, begin: uint, end: uint) -> uint {
    // .slice() checks the char boundaries
    s.slice(begin, end).char_len()
}

/// Counts the number of bytes taken by the first `n` chars in `s`,
/// starting from byte index `begin`.
///
/// Fails if there are fewer than `n` chars past `begin`.
pub fn count_bytes<'b>(s: &'b str, begin: uint, n: uint) -> uint {
    s.slice_from(begin).slice_chars(0, n).len()
}
```
This moves all local_data stuff into the `local_data` module, and that
module alone. It also removes a fair amount of "super-unsafe" code in favor of
vanilla code generated by the compiler.
Closes #8113
There were two main differences between the old libuv and the master version:
1. The uv_last_error function is now gone. The error code returned by each
function is the "last error", so a UvError is now just a wrapper around a
c_int (see the sketch after this list).
2. The repo no longer includes a makefile, and the build system has changed.
According to the build directions on joyent/libuv, the build now downloads a `gyp`
program into the `libuv/build` directory and builds using that. This
shouldn't add any dependencies on autotools or anything like that.
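A minimal sketch of the new error shape (assuming the era's `std::libc`; the accessor name is hypothetical):

```rust
use std::libc::c_int;

/// The "last error" is simply the code returned by the failing call.
pub struct UvError(c_int);

impl UvError {
    /// Hypothetical accessor: unwrap the raw libuv error code.
    pub fn code(&self) -> c_int {
        let UvError(code) = *self;
        code
    }
}
```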
Closes #8407. Closes #6567. Closes #6315.
This removes the stacking of type parameters that occurs when invoking
trait methods, and fixes all places in the standard library that were
relying on it. It is somewhat awkward in places; I think we'll probably
want something like the `Foo::<for T>::new()` syntax.
`UnsafeAtomicRcBox` → `UnsafeArc` (#7674), and `AtomicRcBoxData` → `ArcData` to reflect this.
Also, the inner pointer of `UnsafeArc` is now `*mut ArcData`, which avoids some transmutes to `~`, i.e. less chance of mistakes.
Currently, `rekillable` is an unsafe function; instead, it should behave
just like `unkillable` by encapsulating the unsafe code within an unsafe
block.
This patch does that and removes the unsafe blocks that were wrapping
`rekillable` calls throughout rust's libs.
Fixes #8232
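A minimal sketch of the resulting shape (the closure style mirrors `unkillable`; the body is a placeholder):

```rust
/// Hypothetical shape after the patch: callers no longer need
/// their own unsafe blocks around rekillable.
pub fn rekillable<U>(f: &fn() -> U) -> U {
    unsafe {
        // ...re-enable task killing, run the closure, restore state...
        f()
    }
}
```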
This means that fewer `transmute`s are required, so there is less
chance of a `transmute` not having the corresponding `forget`
(possibly leading to use-after-free, etc).
Beforehand, it was unclear whether rust was performing the "recommended set" of
optimizations provided by LLVM. This commit changes the way we run
passes to closely mirror that of clang, which in theory does it correctly. The
notable changes include:
* Passes are no longer explicitly added one by one. This would be difficult to
keep up with as LLVM changes, and we aren't guaranteed to always know the best
order in which to run passes.
* Passes are now managed by LLVM's PassManagerBuilder object. This is then used
to populate the various pass managers that are run.
* We now run both a FunctionPassManager and a module-wide PassManager. This is
what clang does, and I presume that we *may* see a speed boost from the
module-wide passes just having to do less work. I have not measured this.
* The codegen pass manager has been extracted to its own separate pass manager
so it doesn't get mixed up with the other passes.
* All pass managers now include target-specific data layout and
analysis passes.
Some new features include:
* You can now print all passes being run with `-Z print-llvm-passes`
* When specifying passes via `--passes`, the passes are now appended to the
default list of passes instead of overwriting them.
* The output of `--passes list` is now generated by LLVM instead of maintaining
a list of passes ourselves
* Loop vectorization is turned on by default as an optimization pass and can be
disabled with `-Z no-vectorize-loops`
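Hypothetical invocations of the new flags (the file name is a stand-in):

```
rustc -O -Z print-llvm-passes foo.rs    # print all passes being run
rustc -O --passes list foo.rs           # pass list now generated by LLVM
rustc -O -Z no-vectorize-loops foo.rs   # opt out of default loop vectorization
```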
All of these "copies" of clang are based off their [source code](http://clang.llvm.org/doxygen/BackendUtil_8cpp_source.html) in case anyone is curious what my source is. I was hoping that this would fix#8665, but this does not help the performance issues found there. Hopefully i'll allow us to tweak passes or see what's going on to try to debug that problem.
This patchset enables rustc to cross-build mingw-w64 outputs.
Tested on mingw + mingw-w64 (mingw-builds, win64/seh/win32-threads/gcc-4.8.1).
I also patched llvm to support Win64 stack unwinding.
ebe22bdbce
I cross-built test/run-pass/smallest-hello-world.rs and confirmed it works.
However, I also found that something went wrong if I don't have a custom `#[start]` routine.
This patch saves and restores win64's nonvolatile registers.
This patch also saves the stack information of the thread environment
block (TEB), which is at %gs:0x08 and %gs:0x10.
Some extern blocks are duplicated without the "stdcall" ABI,
since Win64 uses a single calling convention.
(Giving any ABI to them causes llvm to produce wrong bytecode.)
Make CharSplitIterator double-ended, which is simple given that the operation is symmetric once the split-N feature is factored out into its own adaptor.
`.rsplitn_iter()` allows splitting `N` times from the back of a string, so it is a completely new feature. With the double-ended impl, `.split_iter()`, `.line_iter()`, `.word_iter()` all allow picking off elements from either end.
`split_options_iter` is removed with the factoring of the split- and split-N iterators; instead there is `split_terminator_iter`.
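A usage sketch with the names above (the exact signatures are assumptions):

```rust
fn main() {
    let s = "one two three";
    let words: ~[&str] = s.word_iter().collect();         // ["one", "two", "three"]
    // Double-ended: pick words off the back via .invert().
    let rev: ~[&str] = s.word_iter().invert().collect();  // ["three", "two", "one"]
    // New: at most one split, starting from the back.
    let back: ~[&str] = s.rsplitn_iter(' ', 1).collect(); // ["three", "one two"]
    assert!(words.len() == 3 && rev.len() == 3 && back.len() == 2);
}
```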
---
Add benchmarks using `#[bench]` and tune CharSplitIterator a bit, following Huon Wilson's suggestions.
Benchmarks 1-5 do the same split using different implementations of `CharEq`, all splitting an ascii string on ascii space. Benchmarks 6-7 split a unicode string on an ascii char.
Before this PR:

```
test str::bench::split_iter_ascii ... bench: 166 ns/iter (+/- 2)
test str::bench::split_iter_closure ... bench: 113 ns/iter (+/- 1)
test str::bench::split_iter_extern_fn ... bench: 286 ns/iter (+/- 7)
test str::bench::split_iter_not_ascii ... bench: 114 ns/iter (+/- 4)
test str::bench::split_iter_slice ... bench: 220 ns/iter (+/- 12)
test str::bench::split_iter_unicode_ascii ... bench: 217 ns/iter (+/- 3)
test str::bench::split_iter_unicode_not_ascii ... bench: 248 ns/iter (+/- 3)
```

PR, first commit:

```
test str::bench::split_iter_ascii ... bench: 331 ns/iter (+/- 9)
test str::bench::split_iter_closure ... bench: 114 ns/iter (+/- 2)
test str::bench::split_iter_extern_fn ... bench: 314 ns/iter (+/- 6)
test str::bench::split_iter_not_ascii ... bench: 132 ns/iter (+/- 1)
test str::bench::split_iter_slice ... bench: 157 ns/iter (+/- 3)
test str::bench::split_iter_unicode_ascii ... bench: 502 ns/iter (+/- 64)
test str::bench::split_iter_unicode_not_ascii ... bench: 250 ns/iter (+/- 3)
```

PR, final version:

```
test str::bench::split_iter_ascii ... bench: 106 ns/iter (+/- 4)
test str::bench::split_iter_closure ... bench: 107 ns/iter (+/- 1)
test str::bench::split_iter_extern_fn ... bench: 267 ns/iter (+/- 6)
test str::bench::split_iter_not_ascii ... bench: 108 ns/iter (+/- 1)
test str::bench::split_iter_slice ... bench: 170 ns/iter (+/- 8)
test str::bench::split_iter_unicode_ascii ... bench: 128 ns/iter (+/- 5)
test str::bench::split_iter_unicode_not_ascii ... bench: 252 ns/iter (+/- 3)
```
---
There are several ways to deal with `CharEq::only_ascii`. It is a performance optimization, so with that in mind, we allow passing bogus chars (outside ascii) as long as they don't match. We use a byte-value check to make sure we don't split on these (which would split substrings in the middle of an encoded char). (A more principled way would be to only pass the ascii codepoints to the CharEq when it indicates only_ascii, but that undoes some of the performance optimization.)
Implement Huon Wilson's suggestions (since the benchmarks agree!).
Use `self.sep.matches(byte as char) && byte < 128u8` to match in the
only_ascii case so that mistaken matches outside the ascii range can't
create invalid substrings.
Put the conditional on only_ascii outside the loop.
Add new methods `.rsplit_iter()` and `.rsplitn_iter()` for &str.
Separate out CharSplitIterator and CharSplitNIterator;
CharSplitIterator (`split_iter` and `rsplit_iter`) is made double-ended,
while `splitn_iter` and `rsplitn_iter` (limited to N splits) are not,
since these don't have the same symmetry.
With CharSplitIterator being double-ended, derived iterators like
`line_iter` and `word_iter` are too.
Recent improvements to `&mut Trait` have made this work possible, and it solidifies the fact that `ifmt` doesn't always have to return a string; rather, it's based around writers.
These aren't used for anything at the moment and cause some TLS hits
on some perf-critical code paths. Will need to put better thought into
it in the future.
This documents how to use trait bounds in a (hopefully) user-friendly way, in the containers tutorial, and also documents the task watching implementation for runtime developers in kill.rs.
r? anybody
Naturally, and sadly, turning off sanity checks in the runtime is
a noticeable performance win. The particular test I'm running goes from
~1.5s to ~1.3s.
Sanity checks are turned *on* when not optimizing, or when cfg
includes `rtdebug` or `rtassert`.
Monomorphize's normalization results in a 2% decrease in non-optimized
code size for libstd, so there's a negligible cost to removing it. This
also fixes several visit glue bugs because normalize wasn't considering
the differences in visit glue between types.
Closes #8720
The method names in std::rt::io::extensions::WriterByteConversions are
the same as those in std::io::WriterUtils and a resolve error causes
rustc to fail after trying to find an impl of io::Writer instead of
trying to look for rt::io::Writer as well.
Same goes for ReaderByteConversions.
This resolves issue #908.
Notable changes:
- On Windows, LLVM's integrated assembler emits bad stack unwind tables when segmented stacks are enabled. However, the unwind info directives in the assembly output are correct, so we generate assembly first and then run it through an external assembler, just as is already done for Android builds.
- The linker is invoked via the "g++" command instead of "gcc": g++ passes the appropriate magic parameters to the linker, which ensure correct registration of stack unwind tables in dynamic libraries.
- change all uses of Path in fn args to &P
- FileStream.read assumptions were wrong (libuv file io is non-positional)
- the above will mean that we "own" the Seek impl info; we should probably
push it into UvFileDescriptor
- needs more tests
Implement CharIterator as a separate struct, so that it can be `.clone()`'d. Fix `.char_range_at_reverse` so that it performs better, closer to the forwards version. This makes the reverse iterators, and users like `.rfind()`, faster.
Before:

```
test str::bench::char_iterator ... bench: 146 ns/iter (+/- 0)
test str::bench::char_iterator_ascii ... bench: 397 ns/iter (+/- 49)
test str::bench::char_iterator_rev ... bench: 576 ns/iter (+/- 8)
test str::bench::char_offset_iterator ... bench: 128 ns/iter (+/- 2)
test str::bench::char_offset_iterator_rev ... bench: 425 ns/iter (+/- 59)
```

After:

```
test str::bench::char_iterator ... bench: 130 ns/iter (+/- 1)
test str::bench::char_iterator_ascii ... bench: 307 ns/iter (+/- 5)
test str::bench::char_iterator_rev ... bench: 185 ns/iter (+/- 8)
test str::bench::char_offset_iterator ... bench: 131 ns/iter (+/- 13)
test str::bench::char_offset_iterator_rev ... bench: 183 ns/iter (+/- 2)
```
To be able to use a string slice to represent the CharIterator, a function `slice_unchecked` is added, which does the same as `slice_bytes` but without any boundary checks.
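A sketch of the idea in today's Rust, since the 2013 internals differ; the point is just that the checks are elided and the caller guarantees the boundaries:

```rust
/// Caller must guarantee that `begin..end` lies on char boundaries.
unsafe fn slice_unchecked(s: &str, begin: usize, end: usize) -> &str {
    std::str::from_utf8_unchecked(std::slice::from_raw_parts(
        s.as_ptr().add(begin),
        end - begin,
    ))
}
```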
It would be possible to implement CharIterator with pointer arithmetic to make it *much more efficient*, but since vec iterator is still improving, it's too early to attempt to re-implement it in other places. Hopefully CharIterator can be implemented on top of vec iterator without any unsafe code later.
Additional changes fix the documentation about null termination.
This adds support for performing Unicode Normalization Forms D and KD on strings.
To enable this the decomposition and canonical combining class properties are added to std::unicode.
On my system this increases libstd's size by ~250KiB.
Linux and Android share the kernel, but not the C library, so sysconf constants are different. For example, _SC_PAGESIZE is 30 on Linux, but 39 on Android.
This patch
* splits sysconf constants to sysconf module
* merges non-MIPS and MIPS sysconf constants (they are same)
* adds Android sysconf constants
This patch also lets the mmap tests pass on Android.
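A sketch of the resulting layout (module shape assumed; the constant values are the ones cited above):

```rust
pub mod sysconf {
    use std::libc::c_int;

    // Shared kernel, different C libraries: the constants differ.
    #[cfg(target_os = "linux")]
    pub static _SC_PAGESIZE: c_int = 30;

    #[cfg(target_os = "android")]
    pub static _SC_PAGESIZE: c_int = 39;
}
```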
Fixed a memory leak caused by the singleton idle callback failing to close correctly. The problem was that the close function requires running inside a callback in the event loop, but we were trying to close the idle watcher after the loop returned from run. The fix was to just call run again to process this callback. There is an additional tweak to move the initialization logic fully into bootstrap, so tasks that do not ever call run do not have problems destructing.
libuv handles are tied to the event loop that created them. In order to perform IO, the handle must be on the thread with its home event loop. Thus, when a task wants to do IO it must first go to the IO handle's home event loop and pin itself to the corresponding scheduler while the IO action is in flight. Once the IO action completes, the task is unpinned and either returns to its home scheduler if it is a pinned task, or otherwise stays on the current scheduler.
Making new blocking IO implementations (e.g. files) thread safe is rather simple. Add a home field to the IO handle's struct in uvio and implement the HomingIO trait. Wrap every IO call in the HomingIO.home_for_io method, which will take care of the scheduling.
I'm not sure if this remains thread safe in the presence of asynchronous IO at the libuv level. If we decide to do that, then this set up should be revisited.
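A sketch of the homing pattern described above (the trait and method names come from the message; everything else is a placeholder, in the era's closure style):

```rust
trait HomingIO {
    /// Placeholder: travel to the handle's home event loop and pin
    /// to the corresponding scheduler.
    fn go_to_io_home(&mut self);
    /// Placeholder: unpin and restore the previous home state.
    fn restore_original_home(&mut self);

    /// Every blocking IO call is wrapped in this: home first,
    /// perform the IO, then restore.
    fn home_for_io<A>(&mut self, io: &fn(&mut Self) -> A) -> A {
        self.go_to_io_home();
        let a = io(self);
        self.restore_original_home();
        a
    }
}
```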
Instead of a furious storm of idle callbacks, we just have one. This is a major performance gain - around 40% on my machine for the ping pong bench.
Also in this PR is a cleanup commit for the scheduler code. Was previously up as a separate PR, but bors load + imminent merge hell led me to roll them together. Was #8549.
Each IO handle has a home event loop, which created it.
When a task wants to use an IO handle, it must first make sure it is on that home event loop.
It uses the scheduler handle in the IO handle to send itself there before starting the IO action.
Once the IO action completes, the task restores its previous home state.
If it is an AnySched task, then it will be executed on the new scheduler.
If it has a normal home, then it will return there before executing any more code after the IO action.
Long-standing branch to remove foreign function wrappers altogether. Calls to C functions are done "in place" with no stack manipulation; the scheme relies entirely on the correct use of `#[fixed_stack_segment]` to guarantee adequate stack space. A linter is added to detect when `#[fixed_stack_segment]` annotations are missing. An `externfn!` macro is added to make it easier to declare foreign fns and wrappers in one go: this macro may need some refinement, though, for example it might be good to be able to declare a group of foreign fns. I leave that for future work (hopefully somebody else's work :) ).
Fixes #3678.
Let CharIterator be a separate type from CharOffsetIterator (so that
CharIterator can be cloned, for example).
Implement CharOffsetIterator by using the same technique as the method
subslice_offset.
Add a function like `raw::slice_bytes` that doesn't check slice
boundaries, for iterator use where we always know the begin and end
indices are in range.
See discussion in #8489, but this selects option 3 by adding a `Default` trait to be implemented by various basic types.
Once this makes it into a snapshot I think it's about time to start overhauling all current use-cases of `fmt!` to move towards `ifmt!`. The goal is to replace `%X` with `{}` in 90% of situations, and this commit should enable that.
Add size_hint() to a few Iterators that were missing it.
Update a couple of existing size_hint()s to use checked_add() instead of
saturating_add() for the upper bound.
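A sketch (in today's Rust) of why `checked_add` is the right tool for the upper bound: on overflow the honest answer is "unknown", i.e. `None`, not a clamped value:

```rust
/// Combine the upper bounds of two chained iterators.
fn add_upper(a: Option<usize>, b: Option<usize>) -> Option<usize> {
    match (a, b) {
        (Some(x), Some(y)) => x.checked_add(y), // None on overflow
        _ => None,                              // either side unbounded
    }
}
```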
@brson grilled me about how this bugfix worked the first time around, and it occurred to me that it didn't in the case where the task is unwinding. Now it will.
Address issue #5257; for example, these values all had the same hash value:
("aaa", "bbb", "ccc")
("aaab", "bb", "ccc")
("aaabbb", "", "ccc")
IterBytes for &[A] now includes the length before calling iter_bytes on
each element.
IterBytes for &str is now terminated by a byte that does not appear in
UTF-8, so only one more byte is processed when hashing strings.
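A toy illustration (in today's Rust) of both fixes; the exact length encoding and the 0xFF terminator here are my assumptions (any byte that never occurs in UTF-8 works):

```rust
/// Include the length before the elements, so bytes can no longer
/// migrate across element boundaries without changing the hash.
fn iter_bytes_slice(xs: &[&str], f: &mut dyn FnMut(&[u8])) {
    f(&xs.len().to_le_bytes());
    for s in xs {
        iter_bytes_str(s, f);
    }
}

/// Terminate each string with 0xFF, which never appears in UTF-8,
/// so only one extra byte is processed per string.
fn iter_bytes_str(s: &str, f: &mut dyn FnMut(&[u8])) {
    f(s.as_bytes());
    f(&[0xFF]);
}
```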
I need `Clone` for `Tm` for my latest work on [rust-http](https://github.com/chris-morgan/rust-http) (static typing for headers, and headers like `Date` are a time), so here it is.
@huonw recommended deriving DeepClone while I was at it.
I also had to implement `DeepClone` for `~str` to get a derived implementation of `DeepClone` for `Tm`; I did `@str` while I was at it, for consistency.