by-reference upvars.
This partially implements RFC 38. A snapshot will be needed to turn this
on, because stage0 cannot yet parse the keyword.
Part of #12381.
This makes `for` loops work properly in edge cases where the `Iterator`
trait and/or `Option` (or its variants) were not in scope.
This breaks code that looks like:
````rust
struct MyStruct { ... }

impl MyStruct {
    fn next(&mut self) -> Option<int> { ... }
}

for x in MyStruct { ... } { ... }
````
Change ad-hoc `next` methods like the above to implementations of the
`Iterator` trait. For example:
````rust
impl Iterator<int> for MyStruct {
    fn next(&mut self) -> Option<int> { ... }
}
````
Closes #15392.
[breaking-change]
Lifetime intrinsics help to reduce stack usage, because LLVM can apply
stack coloring to reuse the stack slots of dead allocas for new ones.
For example, these functions now both use the same amount of stack,
whereas previously `bar()` used five times as much as `foo()`:
````rust
fn foo() {
    println!("{}", 5);
}

fn bar() {
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
}
````
On top of that, LLVM can also optimize out certain operations when it
knows that memory is dead after a certain point. For example, it can
sometimes remove the zeroing used to cancel the drop glue. This is
possible when the drop glue call itself was already removed because the
zeroing dominated it. For example, in:
````rust
pub fn bar(x: (Box<int>, int)) -> (Box<int>, int) {
    x
}
````
With optimizations, this currently results in:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.memset.p0i8.i64(i8* %2, i8 0, i64 16, i32 8, i1 false)
ret void
}
````
But with lifetime intrinsics we get:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.lifetime.end(i64 16, i8* %2)
ret void
}
````
Fixes #15665
This will break code that used the old `Index` trait. Change this code
to use the new `Index` traits. For reference, here are their signatures:
````rust
pub trait Index<Index, Result> {
    fn index<'a>(&'a self, index: &Index) -> &'a Result;
}

pub trait IndexMut<Index, Result> {
    fn index_mut<'a>(&'a mut self, index: &Index) -> &'a mut Result;
}
````
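As a hedged sketch of an implementation under the new traits (the
`Registers` type is hypothetical):

````rust
// Hypothetical wrapper around four integer registers.
struct Registers {
    regs: [int, ..4],
}

impl Index<uint, int> for Registers {
    fn index<'a>(&'a self, index: &uint) -> &'a int {
        &self.regs[*index]
    }
}

impl IndexMut<uint, int> for Registers {
    fn index_mut<'a>(&'a mut self, index: &uint) -> &'a mut int {
        &mut self.regs[*index]
    }
}
````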
Closes #6515.
[breaking-change]
LLVM doesn't really like types with a bit-width that isn't a multiple of
8, and disables various optimizations if it encounters such types used
with loads/stores. On the other hand, booleans must be represented as i1
when used as SSA values. To get the best results, we must use i1 for SSA
values and i8 when storing the value to memory.
By using range asserts on loads, LLVM can eliminate the required
zero-extend and truncate operations.
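As an illustration, consider a bool that crosses the memory/SSA boundary
(a sketch; the function is hypothetical and the comments describe the
scheme, not exact compiler output):

````rust
// `*flag` is loaded from memory as i8 with a range assert limiting it
// to 0 or 1; the branch consumes an i1, and the assert lets LLVM fold
// away the truncate between the load and the `if`.
fn choose(flag: &bool) -> int {
    if *flag { 1 } else { 0 }
}
````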
Fixes #15203
We currently compile bools to i8 values because there was a bug in LLVM
that sometimes caused miscompilations when using i1 in, for example,
structs.
Using i8 means a lot of unnecessary zero-extend and truncate operations,
though, since we have to convert the value from and to i1 when using,
for example, icmp or br instructions. Besides the unnecessary overhead
this causes, it also sometimes made LLVM miss optimizations.
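For example (a sketch with a hypothetical function), every branch on a
comparison result benefits:

````rust
// `a > b` (icmp) yields an i1, and `if` (br) consumes an i1; with
// bools compiled as i1, no zero-extend/truncate pair is needed in
// between, as it was when bools were i8.
fn max(a: int, b: int) -> int {
    let a_bigger = a > b;
    if a_bigger { a } else { b }
}
````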
Fixes #8106.
Use ty_rptr/ty_uniq(ty_trait) rather than TraitStore to represent trait types.
Also addresses (but doesn't close) #12470.
Part of the work towards DST (#12938).
[breaking-change] Lifetime parameters in `&mut trait` are now invariant; they used to be contravariant.
This commit uses the same trick as ~/Box to map Gc<T> to @T internally inside
the compiler. This moves a number of implementations of traits to the `gc`
module in the standard library.
This removes functions such as `Gc::new`, `Gc::borrow`, and `Gc::ptr_eq` in
favor of the more modern equivalents, `box(GC)`, `Deref`, and pointer equality.
The Gc pointer itself should be much more useful now, and subsequent commits
will move the compiler away from @T towards Gc<T>.
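A hedged before/after sketch of the migration (variable names
hypothetical):

````rust
use std::gc::{Gc, GC};

fn main() {
    // `box(GC) EXPR` replaces `Gc::new(EXPR)`.
    let a: Gc<int> = box(GC) 5;
    // `Deref` replaces `Gc::borrow`.
    let value: int = *a;
    println!("{}", value);
}
````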
[breaking-change]
Division and remainder by 0 are undefined behavior and are detected at
runtime. This commit adds support for ensuring that MIN / -1 is also
checked for at runtime, as it would cause signed overflow, which is
likewise undefined behavior.
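Conceptually, the emitted checks behave like the following sketch
(`checked_div` is a hypothetical name, not the actual lowering):

````rust
use std::int;

fn checked_div(x: int, y: int) -> int {
    if y == 0 {
        fail!("attempted to divide by zero");
    }
    // int::MIN / -1 does not fit in an int, so it must fail too.
    if x == int::MIN && y == -1 {
        fail!("attempted to divide with overflow");
    }
    x / y
}
````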
Closes #8460
This is part of the ongoing renaming of the equality traits. See #12517 for more
details. All code using Eq/Ord will temporarily need to move to Partial{Eq,Ord}
or the Total{Eq,Ord} traits. The Total traits will soon be renamed to {Eq,Ord}.
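A minimal migration sketch (the `Point` type is hypothetical):

````rust
struct Point { x: int, y: int }

// Before this change this would have been `impl Eq for Point`.
impl PartialEq for Point {
    fn eq(&self, other: &Point) -> bool {
        self.x == other.x && self.y == other.y
    }
}
````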
cc #12517
[breaking-change]
for `~str`/`~[]`.
Note that `~self` still remains, since I forgot to add support for
`Box<self>` before the snapshot.
How to update your code (a combined sketch follows this list):
* Instead of `~EXPR`, you should write `box EXPR`.
* Instead of `~TYPE`, you should write `Box<TYPE>`.
* Instead of `~PATTERN`, you should write `box PATTERN`.
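Taken together, a hedged sketch of all three replacements (the `Node`
type is hypothetical):

````rust
// `Box<TYPE>` replaces `~TYPE`.
struct Node { value: int, next: Option<Box<Node>> }

fn sum(list: Option<Box<Node>>) -> int {
    match list {
        // `box PATTERN` replaces `~PATTERN`.
        Some(box Node { value, next }) => value + sum(next),
        None => 0,
    }
}

fn main() {
    // `box EXPR` replaces `~EXPR`.
    let list = Some(box Node { value: 1, next: None });
    println!("{}", sum(list));
}
````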
[breaking-change]
Similar to my recent changes to ~[T]/&[T], these changes remove the vstore abstraction and represent str types as ~(str) and &(str). The Option<uint> in ty_str is the length of the string, or None if the string is dynamically sized.
Refactors all uses of ty_vec and associated things to remove the vstore abstraction (still used for strings, for now). Pointers to vectors are stored as ty_rptr or ty_uniq wrapped around a ty_vec. There are no user-facing changes. Existing behaviour is preserved by special-casing many instances of pointers containing vectors. Hopefully with DST most of these hacks will go away; for now it is useful to leave them hanging around rather than abstracting them into a method or something.
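A hedged model of the representation described above (names and shapes
simplified; the real rustc `sty` enum differs in detail):

````rust
struct Region;  // stand-in for rustc's region representation

enum Ty {
    TyStr(Option<uint>),           // str; None when dynamically sized
    TyVec(Box<Ty>, Option<uint>),  // [T]; None when dynamically sized
    TyUniq(Box<Ty>),               // ~T
    TyRptr(Region, Box<Ty>),       // &'r T
}

fn examples() -> (Ty, Ty) {
    // `~str` is a unique pointer around a dynamically sized str;
    // `&str` is a region pointer around one.
    (TyUniq(box TyStr(None)), TyRptr(Region, box TyStr(None)))
}
````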
Closes #13554.