fn slice_bytes is marked unsafe since it allows violating the valid
string encoding property; but the function also allowed extending the
lifetime of the slice by mistake, since it returns `&str`.
Use the annotation `slice_bytes<'a>(&'a str, ...) -> &'a str` so
that all uses of slice_bytes are region checked correctly.
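A minimal sketch of what the annotated signature buys us; the `begin`/`end` byte-index parameters are assumed names, not necessarily the real ones:

```rust
// Sketch only: the explicit lifetime ties the returned slice to the input, so
// the borrow checker rejects any use of the result that outlives `s`.
unsafe fn slice_bytes<'a>(s: &'a str, begin: usize, end: usize) -> &'a str {
    // Still unsafe: the caller must ensure begin..end lands on char boundaries,
    // otherwise the valid-encoding property is violated.
    std::str::from_utf8_unchecked(&s.as_bytes()[begin..end])
}

fn main() {
    let s = String::from("hello world");
    let hello = unsafe { slice_bytes(&s, 0, 5) };
    assert_eq!(hello, "hello");
}
```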
It seems that relatively new code uses `Foo::new()` instead of `Foo()`, so I wrote a patch to migrate some structs to the former style.
Is this the right direction? If there are any guidelines against the `new()` style, could you add them to the [style guide](https://github.com/omasanori/rust/wiki/Note-style-guide)?
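For reference, the two styles in question, on a made-up `Counter` struct:

```rust
struct Counter { count: usize }

// Older style: a free function named after the type acts as the constructor,
// so call sites look like `Counter(0)`.
#[allow(non_snake_case)]
fn Counter(count: usize) -> Counter {
    Counter { count: count }
}

// Newer style: an associated function, so call sites look like `Counter::new(0)`.
impl Counter {
    fn new(count: usize) -> Counter {
        Counter { count: count }
    }
}

fn main() {
    let a = Counter(1);        // old style
    let b = Counter::new(1);   // new style
    assert_eq!(a.count + b.count, 2);
}
```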
The truncation needs to be done in the console logger in order
to catch all the logging output, and because truncation only matters
when outputting to the console.
Update the ARM Linux support some more. We had a previous patch for the Raspberry Pi. This adds a new target `arm-unknown-linux-gnueabi` for more general ARM Linux support.
Build/Host machine: x86_64 Debian testing (jessie) with the `gcc-4.4-arm-linux-gnueabi` package
Tested on targets:
- TS-7800 Feroceon (ARMv5TEJ) running Debian 7.0 wheezy
- Beaglebone black (ARMv7) running Angstrom GNU/Linux v2012.12
- rustc flags: `--target=arm-unknown-linux-gnueabi --linker=arm-linux-gnueabi-gcc`
- Samsung Galaxy S II (to make sure android still works)
- rustc flags: `--target=arm-linux-androideabi --android-cross-path=[path to standalone toolchain]`
Since not all ARM devices support getting the TLS address via the `mrc` instruction (as far as I know, anything older than ARMv6, like the TS-7800 I tested on), I made it also try the magic address the kernel maps into the address space (0xFFFF0FF0). One or the other should work (and on Android it seems like both work).
Also fixes a bug where rustc would always try to invoke the Android assembler for any kind of ARM target.
The compiler-generated dtor for DList recurses deeply to drop Nodes.
For big lists this can overflow the stack.
This is a problem for the new scheduler, where split stacks are not implemented.
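A minimal sketch of the iterative-drop idea, simplified to a singly-linked owned list (the real DList is doubly linked, so the actual fix differs in detail):

```rust
struct Node<T> {
    value: T,
    next: Option<Box<Node<T>>>,
}

struct List<T> {
    head: Option<Box<Node<T>>>,
}

impl<T> Drop for List<T> {
    fn drop(&mut self) {
        // Detach each node before it is dropped, so dropping one node never
        // recursively drops the rest of the chain.
        let mut cur = self.head.take();
        while let Some(mut node) = cur {
            cur = node.next.take();
            // `node` is freed here with `next` already set to None.
        }
    }
}

fn main() {
    // With the default recursive drop a list this long could blow the stack;
    // the loop above frees it in constant stack space.
    let mut list = List { head: None };
    for i in 0..1_000_000 {
        list.head = Some(Box::new(Node { value: i, next: list.head.take() }));
    }
}
```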
Thanks @blake2-ppc
Every time run_sched_once performs a 'scheduling action' it needs to guarantee
that it runs at least one more time, so it enqueues another run_sched_once callback.
The primary reason it needs to do this is that not all async callbacks
are guaranteed to run; it's only guaranteed that *a* callback will run after
enqueuing one - some may get dropped.
At the moment this means we wastefully create lots of callbacks to ensure that
there will *definitely* be a callback queued up to continue running the scheduler.
The logic really needs to be tightened up here.
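A sketch of the re-arming pattern described above; `Scheduler` and its fields here are stand-ins, not the real runtime types:

```rust
struct Scheduler {
    // Pending event-loop callbacks; several queued entries may be coalesced
    // into a single wakeup, so any individual one can effectively be dropped.
    queued_callbacks: Vec<fn(&mut Scheduler)>,
    work_available: bool,
}

fn run_sched_once(sched: &mut Scheduler) {
    if sched.work_available {
        // ... perform one scheduling action ...

        // Re-arm: enqueue another run_sched_once so the scheduler is
        // guaranteed at least one more pass, even if some callbacks never fire.
        sched.queued_callbacks.push(run_sched_once);
    }
}

fn main() {
    let mut sched = Scheduler { queued_callbacks: Vec::new(), work_available: true };
    run_sched_once(&mut sched);
    assert_eq!(sched.queued_callbacks.len(), 1);
}
```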
With this patch, `RUSTFLAGS='--cfg unicode' make check` passed successfully.
* Why doesn't `#[link_name="icuuc"]` make libextra link against libicuuc.so?
* In `extra::unicode::tests`, `use unicode; unicode::is_foo('a')` failed but `use unicode::*; is_foo('a')` succeeded. Is that expected?
* LLVM now has a C interface to LLVMBuildAtomicRMW
* The exception handling support for the JIT seems to have been dropped
* Various interfaces have been added or headers have changed
rvalues aren't going to be used anywhere but as the argument, so
there's no point in copying them. LLVM used to eliminate the copy
later, but why bother emitting it in the first place?
Multicast functions now take IpAddr (without a port), because they don't
need one.
Uv* types renamed:
* UvIpAddr -> UvSocketAddr
* UvIpv4 -> UvIpv4SocketAddr
* UvIpv6 -> UvIpv6SocketAddr
"Socket address" is a common name for (ip-address, port) pair (e.g. in
sockaddr_in struct).
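A small sketch of the distinction, using today's `std::net` names rather than the `Uv*` types above:

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

fn main() {
    // An IP address alone identifies a multicast group; no port is involved.
    let group: IpAddr = IpAddr::V4(Ipv4Addr::new(224, 0, 0, 251));

    // A socket address is the (IP address, port) pair, as in sockaddr_in.
    let endpoint: SocketAddr = SocketAddr::new(group, 5353);

    println!("group = {}, endpoint = {}", group, endpoint);
}
```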
P.S. Are there any backward compatibility concerns? What is the std::rt module, and is it part of the public API?
Use unchecked vec indexing since the vector bounds are checked by the
loop. Iterators are not easy to use in this case since we skip 1-4 bytes
each lap. This part of the commit speeds up is_utf8 for ASCII input.
Check codepoint ranges by checking the byte ranges manually instead of
computing a full decoding for multibyte encodings. This is easy to read
and corresponds to the UTF-8 syntax in the RFC.
No changes to what we accept. A comment notes that surrogate halves are
accepted.
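A hedged sketch of the byte-range approach (not the actual libstd code, which also uses unchecked indexing as noted above); like the original, it accepts surrogate halves because 0xED gets no special case:

```rust
fn is_utf8_sketch(v: &[u8]) -> bool {
    let mut i = 0;
    while i < v.len() {
        let first = v[i];
        if first < 0x80 {
            i += 1; // ASCII fast path: one byte per lap
            continue;
        }
        // Sequence width and allowed range of the second byte follow from the
        // lead byte; remaining continuation bytes must be 0x80..=0xBF.
        let (width, lo, hi) = match first {
            0xC2..=0xDF => (2, 0x80, 0xBF),
            0xE0 => (3, 0xA0, 0xBF),        // reject overlong 3-byte forms
            0xE1..=0xEF => (3, 0x80, 0xBF), // note: 0xED A0..BF (surrogates) accepted
            0xF0 => (4, 0x90, 0xBF),        // reject overlong 4-byte forms
            0xF1..=0xF3 => (4, 0x80, 0xBF),
            0xF4 => (4, 0x80, 0x8F),        // keep codepoints <= U+10FFFF
            _ => return false,
        };
        if i + width > v.len() || v[i + 1] < lo || v[i + 1] > hi {
            return false;
        }
        for j in 2..width {
            if v[i + j] < 0x80 || v[i + j] > 0xBF {
                return false;
            }
        }
        i += width;
    }
    true
}

fn main() {
    assert!(is_utf8_sketch("héllo".as_bytes()));
    assert!(!is_utf8_sketch(&[0xC0, 0x80])); // overlong NUL rejected
}
```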
Before:
test str::bench::is_utf8_100_ascii ... bench: 165 ns/iter (+/- 3)
test str::bench::is_utf8_100_multibyte ... bench: 218 ns/iter (+/- 5)
After:
test str::bench::is_utf8_100_ascii ... bench: 130 ns/iter (+/- 1)
test str::bench::is_utf8_100_multibyte ... bench: 156 ns/iter (+/- 3)
An improvement upon the previous pull #8133
I suspect that this is a race between process exit and the termination of
worker threads used by libuv (if I sleep before exit it doesn't leak). This
isn't going to cause any real problems but should probably be fixed at
some point.
r? @pcwalton
cc #8253
Previously it would call:
`f(sf1.cmp(&of1), f(sf2.cmp(&of2), ...))`
(where sf1/of1 are 'self field 1'/'other field 1', and f was
std::cmp::lexical_ordering)
This meant that every .cmp subcall got evaluated when calling a derived
TotalOrd.cmp.
This corrects the derived code to use:
```rust
let test = sf1.cmp(&of1);
if test == Equal {
    let test = sf2.cmp(&of2);
    if test == Equal {
        // ...
    } else {
        test
    }
} else {
    test
}
```
This gives a lexical ordering by short-circuiting on the first comparison
that is not Equal.
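As a concrete (made-up) instance of that expansion, using today's names `Ord`/`Ordering::Equal` in place of 2013's `TotalOrd`/`Equal`:

```rust
use std::cmp::Ordering;

struct Version { major: u32, minor: u32 }

// What a two-field derived comparison boils down to: short-circuit on the
// first field comparison that is not Equal.
fn cmp_versions(a: &Version, b: &Version) -> Ordering {
    let test = a.major.cmp(&b.major);
    if test == Ordering::Equal {
        a.minor.cmp(&b.minor)
    } else {
        test
    }
}

fn main() {
    let x = Version { major: 1, minor: 2 };
    let y = Version { major: 1, minor: 3 };
    assert_eq!(cmp_versions(&x, &y), Ordering::Less);
}
```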