Every time run_sched_once performs a 'scheduling action' it needs to guarantee
that it runs at least one more time, so it enqueues another run_sched_once
callback. The primary reason is that not all async callbacks are guaranteed to
run: after enqueueing one, it is only guaranteed that *a* callback will run,
and some may get dropped.
At the moment this means we wastefully create lots of callbacks to ensure that
there will *definitely* be a callback queued up to continue running the scheduler.
The logic really needs to be tightened up here.
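As a rough, hypothetical sketch of the re-arming pattern described above (the
names Scheduler, queue_callback and do_scheduling_action are illustrative only,
not the real std::rt scheduler API):

    // Toy stand-in for the event loop's callback queue; the real scheduler
    // enqueues callbacks on libuv, where only *a* callback is guaranteed to run.
    struct Scheduler {
        callbacks: Vec<fn(&mut Scheduler)>,
        pending_work: u32,
    }

    impl Scheduler {
        fn do_scheduling_action(&mut self) -> bool {
            if self.pending_work > 0 {
                self.pending_work -= 1;
                true
            } else {
                false
            }
        }

        fn queue_callback(&mut self, cb: fn(&mut Scheduler)) {
            self.callbacks.push(cb);
        }
    }

    fn run_sched_once(sched: &mut Scheduler) {
        if sched.do_scheduling_action() {
            // Re-arm: guarantee at least one more run_sched_once is pending,
            // since any individual queued callback may be dropped.
            sched.queue_callback(run_sched_once);
        }
    }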
With this patch, `RUSTFLAGS='--cfg unicode' make check` passed successfully.
* Why doesn't `#[link_name="icuuc"]` make libextra link against libicuuc.so?
* In `extra::unicode::tests`, `use unicode; unicode::is_foo('a')` failed but `use unicode::*; is_foo('a')` succeeded. Is this expected?
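A small illustration of the two import forms from the second question above
(`is_foo` is the placeholder name from that question, and this is written in
current Rust module syntax, so it is only a sketch):

    mod unicode {
        pub fn is_foo(_c: char) -> bool { true }
    }

    mod tests_qualified {
        // Import the module itself, then call through it.
        use super::unicode;
        pub fn check() -> bool { unicode::is_foo('a') }
    }

    mod tests_glob {
        // Glob-import the items, then call them unqualified.
        use super::unicode::*;
        pub fn check() -> bool { is_foo('a') }
    }

    fn main() {
        assert!(tests_qualified::check() && tests_glob::check());
    }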
* LLVM now has a C interface to LLVMBuildAtomicRMW
* The exception handling support for the JIT seems to have been dropped
* Various interfaces have been added or headers have changed
rvalues aren't going to be used anywhere but as the argument, so
there's no point in copying them. LLVM used to eliminate the copy
later, but why bother emitting it in the first place?
Multicast functions now take IpAddr (without a port), because they don't need
one.
Uv* types renamed:
* UvIpAddr -> UvSocketAddr
* UvIpv4 -> UvIpv4SocketAddr
* UvIpv6 -> UvIpv6SocketAddr
"Socket address" is a common name for (ip-address, port) pair (e.g. in
sockaddr_in struct).
P.S. Are there any backward compatibility concerns? What is the std::rt module? Is it part of the public API?
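To sketch the distinction, here is how the same two concepts look in today's
std::net (the commit itself targets the old std::rt API, so the exact types
and names differ):

    use std::net::{IpAddr, Ipv4Addr, SocketAddr};

    fn main() {
        // An IP address alone: what a multicast group membership needs.
        let group: IpAddr = IpAddr::V4(Ipv4Addr::new(224, 0, 0, 251));

        // A socket address is the (ip-address, port) pair, as in sockaddr_in.
        let endpoint = SocketAddr::new(group, 5353);

        println!("group = {}, endpoint = {}", group, endpoint);
    }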
Use unchecked vec indexing since the vector bounds are checked by the
loop. Iterators are not easy to use in this case since we skip 1-4 bytes
each iteration. This part of the commit speeds up is_utf8 for ASCII input.
Check codepoint ranges by checking the byte ranges manually instead of
computing a full decoding for multibyte sequences. This is easier to read
and corresponds directly to the UTF-8 syntax in the RFC.
No changes to what we accept. A comment notes that surrogate halves are
accepted.
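The byte-range check looks roughly like the following sketch (written in
current Rust syntax rather than the 2013 std code, and following the RFC 3629
ranges exactly, so unlike the patch it rejects surrogate halves):

    fn is_utf8_sketch(v: &[u8]) -> bool {
        let mut i = 0;
        let len = v.len();
        while i < len {
            let first = v[i];
            if first < 0x80 {
                // ASCII fast path: a single byte, no range checks needed.
                i += 1;
                continue;
            }
            // For multibyte sequences the allowed range of the second byte
            // depends on the lead byte; the rest are plain continuation bytes.
            let (width, lo, hi) = match first {
                0xC2..=0xDF => (2, 0x80, 0xBF),
                0xE0 => (3, 0xA0, 0xBF),
                0xE1..=0xEC | 0xEE..=0xEF => (3, 0x80, 0xBF),
                0xED => (3, 0x80, 0x9F), // excludes surrogate halves
                0xF0 => (4, 0x90, 0xBF),
                0xF1..=0xF3 => (4, 0x80, 0xBF),
                0xF4 => (4, 0x80, 0x8F),
                _ => return false,
            };
            if i + width > len || v[i + 1] < lo || v[i + 1] > hi {
                return false;
            }
            for j in 2..width {
                if v[i + j] & 0xC0 != 0x80 {
                    return false;
                }
            }
            i += width;
        }
        true
    }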
Before:
test str::bench::is_utf8_100_ascii ... bench: 165 ns/iter (+/- 3)
test str::bench::is_utf8_100_multibyte ... bench: 218 ns/iter (+/- 5)
After:
test str::bench::is_utf8_100_ascii ... bench: 130 ns/iter (+/- 1)
test str::bench::is_utf8_100_multibyte ... bench: 156 ns/iter (+/- 3)
An improvement upon the previous pull #8133
I suspect that this is a race between process exit and the termination of
worker threads used by libuv (if I sleep before exit it doesn't leak). This
isn't going to cause any real problems but should probably be fixed at
some point.
r? @pcwalton
cc #8253
Previously it would call:

    f(sf1.cmp(&of1), f(sf2.cmp(&of2), ...))

(where sf1/of1 = 'self/other field 1', and f was std::cmp::lexical_ordering)
This meant that every .cmp subcall got evaluated when calling a derived
TotalOrd.cmp.
This patch corrects it to use:

    let test = sf1.cmp(&of1);
    if test == Equal {
        let test = sf2.cmp(&of2);
        if test == Equal {
            // ...
        } else {
            test
        }
    } else {
        test
    }
This gives a lexical ordering by short-circuiting on the first comparison
that is not Equal.
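For illustration, here is roughly what the short-circuiting comparison looks
like when written by hand for a two-field struct, in current Rust syntax (a
sketch only; the original derive targeted TotalOrd and the bare Equal variant):

    use std::cmp::Ordering;

    struct Pair { a: u32, b: u32 }

    // Hand-written sketch of the short-circuiting comparison: later fields
    // are only compared when all earlier fields compare Equal.
    fn cmp_pair(lhs: &Pair, rhs: &Pair) -> Ordering {
        match lhs.a.cmp(&rhs.a) {
            Ordering::Equal => lhs.b.cmp(&rhs.b), // evaluated only on a tie
            not_equal => not_equal,
        }
    }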
...y/catch
And before collect_failure. These both run user dtors and need to be handled
in the task try/catch block and before the final task cleanup code.