by-reference upvars.
This partially implements RFC 38. A snapshot will be needed to turn this
on, because stage0 cannot yet parse the keyword.
Part of #12381.
This leaves the `Share` trait in `std::kinds` via a `#[deprecated]` `pub use`
statement, but the `NoShare` struct is no longer part of `std::kinds::marker`
due to #12660 (the build cannot bootstrap otherwise).
All code referencing the `Share` trait should now reference the `Sync` trait,
and all code referencing the `NoShare` type should now reference the `NoSync`
type. The functionality and meaning of this trait have not changed, only the
naming.
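A minimal migration sketch (the type and function below are hypothetical examples, not from the patch):
````rust
use std::kinds::marker::NoSync; // was: std::kinds::marker::NoShare

// Hypothetical type that opts out of `Sync` (formerly `Share`).
struct NotThreadSafe {
    ptr: *mut int,
    marker: NoSync, // was: NoShare
}

// Trait bounds are renamed the same way.
fn cross_thread<T: Sync>(_value: T) {} // was: T: Share
````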
Closes #16281
[breaking-change]
Generic extern functions written in Rust have their names mangled, as well as their internal clownshoe __rust_abi functions. This allows e.g. specific monomorphizations of these functions to be used as callbacks.
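For example, something along these lines should now link and run (a hedged sketch in the Rust of the time; `identity` and `apply` are illustrative names):
````rust
// A generic extern "C" function; each monomorphization now gets a
// properly mangled, unique symbol.
extern "C" fn identity<T>(x: T) -> T { x }

// A function expecting a C-style callback.
fn apply(callback: extern "C" fn(int) -> int, value: int) -> int {
    callback(value)
}

fn main() {
    // The `int` monomorphization of `identity` is used as the callback.
    assert_eq!(apply(identity::<int>, 42), 42);
}
````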
Closes #12502.
The `type_overflow` lint doesn't catch the overflow for `i64` because
the overflow happens earlier, in the parse phase, when the `u64` (the
biggest possible int) gets cast to `i64` without any overflow check.
We can't lint in the parse phase, so a refactoring of the `LitInt` type
was necessary.
The types `LitInt`, `LitUint` and `LitIntUnsuffixed` were merged into
one type, `LitInt`, which stores its value as a `u64`. An additional
parameter was added which indicates the signedness of the type and the
sign of the value.
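A hedged sketch of the merged representation (the exact names in `syntax::ast` may differ):
````rust
// Stand-ins for the AST's integer width enums.
enum IntTy { TyI, TyI8, TyI16, TyI32, TyI64 }
enum UintTy { TyU, TyU8, TyU16, TyU32, TyU64 }

// The sign of the literal's value.
enum Sign { Minus, Plus }

// The new parameter: the suffix type plus, where relevant, the sign.
enum LitIntType {
    SignedIntLit(IntTy, Sign),  // e.g. -1i64
    UnsignedIntLit(UintTy),     // e.g. 1u64
    UnsuffixedIntLit(Sign),     // e.g. -1
}

// A single variant replaces LitInt, LitUint and LitIntUnsuffixed;
// the magnitude is always stored as a u64:
//     LitInt(u64, LitIntType)
````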
Using the Show impl for Names created global symbols with names like
`"str\"str\"(1027)"`. This adjusts strings, binaries and vtables to
avoid using that impl.
Fixes #15799.
When generating a unique symbol for things like closures or glue_drop,
we call token::gensym() to create a crate-unique Name. Recently, Name
changed its Show impl so it no longer prints as a number. This caused
symbols like glue_drop:1542 to become glue_drop:"glue_drop"(1542), or in
mangled form, glue_drop.$x22glue_drop$x22$LP$1542$RP$.
Our implementation of ebml has diverged from the standard in order
to better serve the needs of the compiler, so it doesn't make much
sense to call what we have ebml anymore. Furthermore, our implementation
is pretty crufty, and should eventually be rewritten into a format
that better suits the needs of the compiler. This patch factors out
serialize::ebml into librbml, otherwise known as the Really Bad
Markup Language. This is a stopgap library that shouldn't be used
by end users, and will eventually be replaced by something better.
[breaking-change]
Currently we don't emit lifetime end markers when translating the
unwinding code. I omitted that when I added the support for lifetime
intrinsics, because I initially made the mistake of just returning true
in clean_on_unwind(). That caused almost all calls to be translated as
invokes, leading to quite awful results.
To correctly emit the lifetime end markers, we must differentiate
between cleanup that actually requires unwinding and cleanup that
merely wants to emit code during unwinding.
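For intuition, a hypothetical example of the two kinds of cleanup:
````rust
fn may_fail() { /* might call fail!() and unwind */ }

fn f() {
    let owned = box 5i; // needs real unwinding cleanup: the box must be freed
    let plain = 5i;     // needs no drop; unwinding only has to end the
                        // lifetime of its stack slot
    may_fail();
    println!("{} {}", *owned, plain);
}
````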
This makes edge cases in which the `Iterator` trait was not in scope
and/or `Option` or its variants were not in scope work properly.
This breaks code that looks like:
````rust
struct MyStruct { ... }

impl MyStruct {
    fn next(&mut self) -> Option<int> { ... }
}

for x in MyStruct { ... } { ... }
````
Change ad-hoc `next` methods like the above to implementations of the
`Iterator` trait. For example:
````rust
impl Iterator<int> for MyStruct {
    fn next(&mut self) -> Option<int> { ... }
}
````
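A complete sketch of such a conversion (the `Counter` type is a hypothetical example):
````rust
struct Counter {
    n: int,
}

impl Iterator<int> for Counter {
    fn next(&mut self) -> Option<int> {
        if self.n < 3 {
            self.n += 1;
            Some(self.n)
        } else {
            None
        }
    }
}

fn main() {
    // The for loop finds `next` through the `Iterator` impl.
    for x in (Counter { n: 0 }) {
        println!("{}", x); // prints 1, 2, 3
    }
}
````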
Closes #15392.
[breaking-change]
The allocas used in match expressions currently don't get good lifetime
markers; in fact they only get lifetime start markers, because their
lifetimes don't match up with cleanup scopes.
While the bindings themselves are bog standard and just need a matching
pair of start and end markers, they might need them twice, once for a
guard clause and once for the match body.
The `__llmatch` alloca, on the other hand, needs a single lifetime start marker, but
when there's a guard clause, it needs two end markers, because its
lifetime ends either when the guard doesn't match or after the match
body.
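As an illustration (hypothetical example), consider a binding used by both a guard and an arm body:
````rust
fn classify(x: Option<int>) -> int {
    match x {
        // `n`'s slot is live across the guard: if the guard fails, its
        // lifetime ends right there; if the guard succeeds, it must end
        // only after the arm body.
        Some(n) if n > 10 => n * 2,
        Some(n) => n,
        None => 0,
    }
}
````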
With these intrinsics in place, LLVM can now, for example, optimize
code like this:
````rust
enum E {
    A1(int),
    A2(int),
    A3(int),
    A4(int),
}

pub fn variants(x: E) {
    match x {
        A1(m) => bar(&m),
        A2(m) => bar(&m),
        A3(m) => bar(&m),
        A4(m) => bar(&m),
    }
}
````
This compiles down to a single call to `bar`, using only a single stack
slot. It still fails to eliminate some of the checks:
````gas
.Ltmp5:
        .cfi_def_cfa_offset 16
        movb    (%rdi), %al
        testb   %al, %al
        je      .LBB3_5
        movzbl  %al, %eax
        cmpl    $1, %eax
        je      .LBB3_5
        cmpl    $2, %eax
.LBB3_5:
        movq    8(%rdi), %rax
        movq    %rax, (%rsp)
        leaq    (%rsp), %rdi
        callq   _ZN3bar20hcb7a0d8be8e17e37daaE@PLT
        popq    %rax
        retq
````
Lifetime intrinsics help to reduce stack usage, because LLVM can apply
stack coloring to reuse the stack slots of dead allocas for new ones.
For example, these functions now both use the same amount of stack,
while previously `bar()` used five times as much as `foo()`:
````rust
fn foo() {
    println!("{}", 5);
}

fn bar() {
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
}
````
On top of that, LLVM can also optimize out certain operations when it
knows that memory is dead after a certain point. For example, it can
sometimes remove the zeroing used to cancel the drop glue. This is
possible when the drop glue call itself was already removed because the
zeroing dominated it. For example, in:
````rust
pub fn bar(x: (Box<int>, int)) -> (Box<int>, int) {
    x
}
````
With optimizations, this currently results in:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
  %2 = bitcast { i64*, i64 }* %1 to i8*
  %3 = bitcast { i64*, i64 }* %0 to i8*
  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
  tail call void @llvm.memset.p0i8.i64(i8* %2, i8 0, i64 16, i32 8, i1 false)
  ret void
}
````
But with lifetime intrinsics we get:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
  %2 = bitcast { i64*, i64 }* %1 to i8*
  %3 = bitcast { i64*, i64 }* %0 to i8*
  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
  tail call void @llvm.lifetime.end(i64 16, i8* %2)
  ret void
}
````
Fixes #15665