Closes #13367.
[breaking-change] Use `Sized?` to indicate a dynamically sized type parameter or trait (used to be `type`). E.g.,
```
trait Tr for Sized? {}
fn foo<Sized? X: Share>(x: X) {}
```
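For comparison, the same declarations under the old syntax were roughly (per the note above, the keyword was `type`):
```rust
// Old syntax, no longer accepted after this change:
trait Tr for type {}
fn foo<type X: Share>(x: X) {}
```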
This will break code that looks like:
```
struct Foo {
    ...
}

mod Foo {
    ...
}
```
Change this code to:
```
struct Foo {
    ...
}

impl Foo {
    ...
}
```
Or rename the module.
Closes #15205.
[breaking-change]
r? @nick29581
Extend the null-pointer optimization to work with slices, closures, procs, and trait objects by using the internal pointer as the discriminant.
This decreases the size of `Option<&[int]>` (and similar types) by one word.
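The effect is easy to observe with `std::mem::size_of` (a minimal sketch):
```rust
use std::mem::size_of;

fn main() {
    // `None` is now encoded as a null internal pointer, so the
    // `Option` needs no separate discriminant word.
    assert_eq!(size_of::<Option<&[int]>>(), size_of::<&[int]>());
}
```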
This will break code that used the old `Index` trait. Change this code
to use the new `Index` traits. For reference, here are their signatures:
```rust
pub trait Index<Index, Result> {
    fn index<'a>(&'a self, index: &Index) -> &'a Result;
}

pub trait IndexMut<Index, Result> {
    fn index_mut<'a>(&'a mut self, index: &Index) -> &'a mut Result;
}
```
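For example, a hypothetical container would implement the new trait like this (a sketch; `Histogram` and its field are made up for illustration):
```rust
use std::ops::Index;

struct Histogram {
    counts: Vec<uint>,
}

impl Index<uint, uint> for Histogram {
    fn index<'a>(&'a self, index: &uint) -> &'a uint {
        // `Vec::get` returns a reference (and fails on an
        // out-of-bounds index).
        self.counts.get(*index)
    }
}
```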
Closes #6515.
[breaking-change]
r? @nick29581
- created new crate, libunicode, below libstd
- split Char trait into Char (libcore) and UnicodeChar (libunicode)
- Unicode-aware functions now live in libunicode
  - is_alphabetic, is_XID_start, is_XID_continue, is_lowercase,
    is_uppercase, is_whitespace, is_alphanumeric, is_control,
    is_digit, to_uppercase, to_lowercase
- added a width method to the UnicodeChar trait
  - determines the printed width of the character in columns, or None if
    it is a non-NUL control character
  - takes a boolean argument indicating whether the present context is
    CJK or not (characters with 'A'mbiguous East Asian Width are
    double-wide in CJK contexts, single-wide otherwise)
- split StrSlice into StrSlice (libcore) and UnicodeStrSlice
  (libunicode)
  - functionality formerly in StrSlice that relied upon Unicode
    functionality from Char is now in UnicodeStrSlice
    - words, is_whitespace, is_alphanumeric, trim, trim_left, trim_right
  - also moved the Words type alias into libunicode because the words
    method is in UnicodeStrSlice
- unified the Unicode tables from libcollections, libcore, and libregex
  into libunicode
  - updated unicode.py in src/etc to generate the aforementioned tables
  - generated new tables based on the latest Unicode data
- added the UnicodeChar and UnicodeStrSlice traits to the prelude
- libunicode is now the collection point for the std::char module,
  combining the libunicode functionality with the Char functionality
  from libcore
  - thus, moved the doc comment for char from core::char to unicode::char
- libcollections remains the collection point for std::str
The Unicode-aware functions that previously lived in the Char and
StrSlice traits are no longer available to programs that only use
libcore. To regain use of these methods, include the libunicode crate
and use the UnicodeChar and/or UnicodeStrSlice traits:
```rust
extern crate unicode;

use unicode::UnicodeChar;
use unicode::UnicodeStrSlice;
use unicode::Words; // if you want to use the words() method
```
NOTE: this does *not* impact programs that use libstd, since UnicodeChar
and UnicodeStrSlice have been added to the prelude.
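A rough sketch of these trait methods in use (with libstd the explicit imports are unnecessary, as noted above):
```rust
extern crate unicode;

use unicode::UnicodeChar;
use unicode::UnicodeStrSlice;

fn main() {
    assert!('é'.is_alphabetic());
    assert_eq!('A'.to_lowercase(), 'a');
    // words() splits on Unicode whitespace.
    let words: Vec<&str> = "bonjour le monde".words().collect();
    assert_eq!(words.len(), 3);
}
```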
Closes #15224.
[breaking-change]
I ran `make check` and everything went smoothly. I also tested `#[deriving(Decodable, Encodable)]` on a struct containing both `Cell<T>` and `RefCell<T>`, and everything now seems to work fine.
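For reference, a minimal version of that test might look like this (a sketch; the struct and field names are made up):
```rust
extern crate serialize;

use std::cell::{Cell, RefCell};

// Both interior-mutability wrappers now derive cleanly.
#[deriving(Decodable, Encodable)]
struct Config {
    enabled: Cell<bool>,
    names: RefCell<Vec<String>>,
}

fn main() {}
```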
LLVM doesn't handle i1 values in allocas/memory very well and skips a number of optimizations when it encounters them. So we have to do the same thing that Clang does: use i1 for SSA values, but store i8 in memory.
Fixes #15203.
LLVM doesn't really like types with a bit-width that isn't a multiple of
8 and disables various optimizations when it encounters such types used
with loads/stores. OTOH, booleans must be represented as i1 when used as
SSA values. To get the best results, we must use i1 for SSA values and
i8 when storing the value to memory.
By using range asserts on loads, LLVM can eliminate the required
zero-extend and truncate operations.
Fixes #15203.
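None of this changes the layout that code can observe; `bool` still occupies a single byte in memory (a quick check):
```rust
use std::mem::size_of;

fn main() {
    // i1 as an LLVM SSA value, but a full i8 (one byte) once stored.
    assert_eq!(size_of::<bool>(), 1);
}
```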
`Vec::push_all` with a length-1 slice seems to have significant overhead compared to `Vec::push`.
```
test new_push_byte ... bench: 6985 ns/iter (+/- 487) = 17 MB/s
test old_push_byte ... bench: 19335 ns/iter (+/- 1368) = 6 MB/s
```
```rust
extern crate test;

use test::Bencher;

static TEXT: &'static str = "\
Unicode est un standard informatique qui permet des échanges \
de textes dans différentes langues, à un niveau mondial.";

#[bench]
fn old_push_byte(bencher: &mut Bencher) {
    bencher.bytes = TEXT.len() as u64;
    bencher.iter(|| {
        let mut new = String::new();
        for b in TEXT.bytes() {
            // Push a single byte via a length-1 slice.
            unsafe { new.as_mut_vec().push_all([b]) }
        }
    })
}

#[bench]
fn new_push_byte(bencher: &mut Bencher) {
    bencher.bytes = TEXT.len() as u64;
    bencher.iter(|| {
        let mut new = String::new();
        for b in TEXT.bytes() {
            // Push a single byte directly.
            unsafe { new.as_mut_vec().push(b) }
        }
    })
}
```