The right-hand side of the comparison in these checks is a value of type
`Option<&Path>`, which is a normalized version of the left-hand side, so the
two are not guaranteed to be byte-for-byte equivalent even though they name
the same path. For this reason, the command-line arguments are promoted to
paths before comparing for equality.
This fixes a bug on Windows where a library specified with `--extern` would be
picked up twice, because it was not considered to have been previously
registered.
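A minimal sketch of the idea using the current `std::path` API; `same_library`
and the example paths are invented for illustration:
````rust
use std::path::{Path, PathBuf};

/// Compare a command-line argument against an already-registered library
/// path by promoting both to normalized paths first, rather than comparing
/// the raw strings byte for byte.
fn same_library(cli_arg: &str, registered: Option<&Path>) -> bool {
    // Collecting the components drops redundant `.` segments, so
    // `libs/./foo.rlib` and `libs/foo.rlib` compare equal.
    let arg: PathBuf = Path::new(cli_arg).components().collect();
    registered.map_or(false, |p| {
        let normalized: PathBuf = p.components().collect();
        normalized == arg
    })
}

fn main() {
    let registered = PathBuf::from("libs/foo.rlib");
    assert!(same_library("libs/./foo.rlib", Some(registered.as_path())));
    assert!(!same_library("libs/bar.rlib", Some(registered.as_path())));
}
````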
method calls are involved.
This breaks code like:
````rust
impl<T: Copy> Foo for T { ... }

fn take_param<T: Foo>(foo: &T) { ... }

fn main() {
    let x = box 3i; // note no `Copy` bound
    take_param(&x);
}
````
Change this code to not contain a type error. For example:
````rust
impl<T: Copy> Foo for T { ... }

fn take_param<T: Foo>(foo: &T) { ... }

fn main() {
    let x = 3i; // satisfies `Copy` bound
    take_param(&x);
}
````
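For reference, a complete, compilable version of the fixed example; the `Foo`
trait body and its method are invented here for illustration:
````rust
trait Foo {
    fn describe(&self) -> &'static str;
}

// Blanket impl: only `Copy` types get `Foo`.
impl<T: Copy> Foo for T {
    fn describe(&self) -> &'static str {
        "a Copy value"
    }
}

fn take_param<T: Foo>(foo: &T) {
    println!("{}", foo.describe());
}

fn main() {
    let x = 3; // `i32` is `Copy`, so the blanket impl applies
    take_param(&x);

    // let y = Box::new(3); // `Box<i32>` is not `Copy`, so this call
    // take_param(&y);      // would now be rejected by the bounds check
}
````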
Closes #15860.
[breaking-change]
r? @alexcrichton
librustc: Stop desugaring `for` expressions and translate them directly.
This makes edge cases work properly in which the `Iterator` trait and/or
`Option` (or its variants) were not in scope.
This breaks code that looks like:
````rust
struct MyStruct { ... }

impl MyStruct {
    fn next(&mut self) -> Option<int> { ... }
}

for x in MyStruct { ... } { ... }
````
Change ad-hoc `next` methods like the above to implementations of the
`Iterator` trait. For example:
````rust
impl Iterator<int> for MyStruct {
    fn next(&mut self) -> Option<int> { ... }
}
````
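For reference, here is a complete version of such a conversion, written with
the current associated-type form of `Iterator`; the counting logic is invented
for illustration:
````rust
struct MyStruct {
    count: i32,
}

impl Iterator for MyStruct {
    type Item = i32;

    fn next(&mut self) -> Option<i32> {
        if self.count < 3 {
            self.count += 1;
            Some(self.count)
        } else {
            None
        }
    }
}

fn main() {
    // `for` now requires a genuine `Iterator` implementation rather than
    // an ad-hoc inherent `next` method.
    for x in (MyStruct { count: 0 }) {
        println!("{}", x);
    }
}
````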
Closes #15392.
[breaking-change]
This is done entirely in the libraries, for functions of up to 16 arguments.
A macro is used so that more arguments can easily be added if we need them.
Note that I had to adjust the overloaded-call algorithm to not try calling
the overloaded call operator if the callee is a built-in function type, to
prevent loops.
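A rough sketch of the macro pattern involved, not the actual library code; the
`CallWith` trait, its method, and the example function are all invented for
illustration:
````rust
// A trait implemented for built-in function-pointer types of each arity.
trait CallWith<Args> {
    type Output;
    fn call_with(&self, args: Args) -> Self::Output;
}

// One macro invocation per arity; adding another arity is one extra line.
macro_rules! impl_call_with {
    ($($arg:ident),*) => {
        impl<$($arg,)* R> CallWith<($($arg,)*)> for fn($($arg),*) -> R {
            type Output = R;
            #[allow(non_snake_case)]
            fn call_with(&self, ($($arg,)*): ($($arg,)*)) -> R {
                (*self)($($arg),*)
            }
        }
    };
}

impl_call_with!();
impl_call_with!(A);
impl_call_with!(A, B);
// ...and so on, up to the maximum supported arity.

fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    let f: fn(i32, i32) -> i32 = add;
    assert_eq!(f.call_with((1, 2)), 3);
}
````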
Closes #15448.
This eliminates the last vestige of the `~` syntax.
Instead of `~self`, write `self: Box<TypeOfSelf>`; instead of `mut
~self`, write `mut self: Box<TypeOfSelf>`, replacing `TypeOfSelf` with
the self-type parameter as specified in the implementation.
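For illustration, a small example in the new explicit-receiver syntax;
`Widget`, `Button`, and `consume` are made-up names, not from the original
change:
````rust
trait Widget {
    // Previously written as `fn consume(~self) -> String`.
    fn consume(self: Box<Self>) -> String;
}

struct Button {
    label: String,
}

impl Widget for Button {
    fn consume(self: Box<Self>) -> String {
        self.label
    }
}

fn main() {
    let b: Box<dyn Widget> = Box::new(Button { label: "ok".to_string() });
    println!("{}", b.consume());
}
````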
Closes #13885.
[breaking-change]
The allocas used in match expressions currently don't get good lifetime
markers; in fact, they only get lifetime-start markers, because their
lifetimes don't map onto cleanup scopes.
While the bindings themselves are bog-standard and just need a matching
pair of start and end markers, they might need them twice: once for a
guard clause and once for the match body.
The `__llmatch` alloca, on the other hand, needs a single lifetime-start
marker, but when there's a guard clause it needs two end markers, because
its lifetime ends either when the guard fails to match or after the match
body.
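As a small, made-up illustration of the situation described above, consider a
match whose first arm has a guard; the storage backing the binding stops being
needed at either of two points:
````rust
fn classify(x: Option<i32>) -> i32 {
    match x {
        // With a guard, the storage backing `n` is done either when the
        // guard fails (control falls through to the next arm) or at the
        // end of this arm's body, hence two places for an end marker.
        Some(n) if n > 0 => n,
        Some(n) => -n,
        None => 0,
    }
}

fn main() {
    assert_eq!(classify(Some(2)), 2);
    assert_eq!(classify(Some(-2)), 2);
    assert_eq!(classify(None), 0);
}
````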
With these intrinsics in place, LLVM can now, for example, optimize
code like this:
````rust
enum E {
    A1(int),
    A2(int),
    A3(int),
    A4(int),
}

pub fn variants(x: E) {
    match x {
        A1(m) => bar(&m),
        A2(m) => bar(&m),
        A3(m) => bar(&m),
        A4(m) => bar(&m),
    }
}
````
down to a single call to `bar`, using only a single stack slot. It still fails
to eliminate some of the checks:
````gas
.Ltmp5:
.cfi_def_cfa_offset 16
movb (%rdi), %al
testb %al, %al
je .LBB3_5
movzbl %al, %eax
cmpl $1, %eax
je .LBB3_5
cmpl $2, %eax
.LBB3_5:
movq 8(%rdi), %rax
movq %rax, (%rsp)
leaq (%rsp), %rdi
callq _ZN3bar20hcb7a0d8be8e17e37daaE@PLT
popq %rax
retq
````
Refs #15665
Lifetime intrinsics help to reduce stack usage, because LLVM can apply
stack coloring to reuse the stack slots of dead allocas for new ones.
For example, these functions now both use the same amount of stack, while
previously `bar()` used five times as much stack as `foo()`:
````rust
fn foo() {
    println!("{}", 5);
}

fn bar() {
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
}
````
On top of that, LLVM can also optimize out certain operations when it
knows that the memory is dead after a certain point. For example, it can
sometimes remove the zeroing used to cancel the drop glue. This is
possible when the drop glue call itself was already removed because the
zeroing dominated it. For example, in:
````rust
pub fn bar(x: (Box<int>, int)) -> (Box<int>, int) {
    x
}
````
With optimizations, this currently results in:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.memset.p0i8.i64(i8* %2, i8 0, i64 16, i32 8, i1 false)
ret void
}
````
But with lifetime intrinsics we get:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.lifetime.end(i64 16, i8* %2)
ret void
}
````
Fixes #15665
In f1ad425199, I changed the handling of macros to prevent macro invocations
from occurring in fully expanded source. Instead, I added a side table that
contained only the spans of the macros, since that was the only information
required to make macro export work.
However, librustdoc was also affected by this change, since it extracts
macro information in a similar way. As a result of the earlier change,
exported macros were no longer documented.
In order to repair this, I've adjusted the side table to contain whole
items, rather than just the spans.
`call_visit_glue` is only ever called from `trans_intrinsic`, and the
block won't be unreachable there. Also, the comment no longer makes
sense: when the code was introduced in 38fee9526a, the function was also
responsible for the cleanup glue, which is no longer the case.
While we're at it, the debug message was also fixed to output the right
function name.
This implements RFC 39. Omitted lifetimes in return values will now be
inferred to more useful defaults, and an error is reported if a lifetime
in a return type is omitted and neither of the two lifetime elision rules
specifies what it should be.
This primarily breaks two uncommon code patterns. The first is this:
````rust
unsafe fn get_foo_out_of_thin_air() -> &Foo {
    ...
}
````
This should be changed to:
````rust
unsafe fn get_foo_out_of_thin_air() -> &'static Foo {
    ...
}
````
The second pattern that needs to be changed is this:
````rust
enum MaybeBorrowed<'a> {
    Borrowed(&'a str),
    Owned(String),
}

fn foo() -> MaybeBorrowed {
    Owned(format!("hello world"))
}
````
Change code like this to:
````rust
enum MaybeBorrowed<'a> {
    Borrowed(&'a str),
    Owned(String),
}

fn foo() -> MaybeBorrowed<'static> {
    Owned(format!("hello world"))
}
````
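For illustration, here is roughly how the two elision rules behave; the
function and type names are made up for the example:
````rust
// Rule 1: exactly one input lifetime, so the elided output lifetime is
// taken from it. Equivalent to `fn first_word<'a>(s: &'a str) -> &'a str`.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

struct Wrapper {
    inner: String,
}

impl Wrapper {
    // Rule 2: there is a `&self` receiver, so the elided output lifetime
    // is taken from `self`.
    fn inner(&self) -> &str {
        &self.inner
    }
}

// With two non-`self` input lifetimes, an elided output lifetime is an
// error; it has to be written explicitly, as here.
fn pick<'a>(a: &'a str, _b: &str) -> &'a str {
    a
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
    let w = Wrapper { inner: "abc".to_string() };
    assert_eq!(w.inner(), "abc");
    assert_eq!(pick("x", "y"), "x");
}
````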
Closes #15552.
[breaking-change]
r? @nick29581
This is accomplished by rewriting static expressions into equivalent patterns.
This way, patterns referencing static variables can both participate in
exhaustiveness analysis and be compiled down into the appropriate branch of
the decision trees that match expressions are codegened to.
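A small sketch of the kind of code this enables; the constant and function
here are invented, and current Rust would spell the item as `const` where this
change dealt with `static`s:
````rust
const ANSWER: i32 = 42;

fn describe(x: i32) -> &'static str {
    match x {
        // The named constant participates in the match like an ordinary
        // literal pattern: it is checked for exhaustiveness and compiled
        // into a branch of the decision tree.
        ANSWER => "the answer",
        _ => "something else",
    }
}

fn main() {
    assert_eq!(describe(42), "the answer");
    assert_eq!(describe(7), "something else");
}
````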
Fixes #6533.
Fixes #13626.
Fixes #13731.
Fixes #14576.
Fixes #15393.