groundwork for rustc_clean/dirty improvements
This is a WIP PR that needs mentoring from @michaelwoerister.
There are several TODOs but no outstanding questions (except for the main one -- **is this the right approach?**)
This is the plumbing for supporting groups in `rustc_clean(labels="...")`, as well as supporting an `except="..."` which will remove the excepted labels from the "clean" check and then assert that they are dirty (this is still TODO).
See the code TODOs and example comments for a rough design.
I'd like to know whether this is the design you have in mind; if so, I can go about actually filling out the groups and implementing the remaining logic.
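To make the intent concrete, here is a hypothetical test using the proposed syntax (the group name is a placeholder -- the real groups are still TODO -- and the `cfg` parameter follows the existing incremental-test convention):
```rust
// Hypothetical sketch: `TypeCheckingGroup` would expand to a set of labels,
// and the excepted label is removed from the clean check and asserted dirty.
#[rustc_clean(labels = "TypeCheckingGroup", except = "TypeckTables", cfg = "cfail2")]
fn changed_body() {
    // body edited between the two compilation sessions
}
```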
This commit tweaks the behavior of inlining functions into multiple codegen
units when rustc is compiling in debug mode. Today rustc unconditionally
handles `#[inline]` functions by translating them into every codegen unit
that needs them, marking their linkage as `internal`. This commit changes
the behavior so that in debug mode (compiling at `-O0`) rustc will instead only
translate `#[inline]` functions into *one* codegen unit, forcing all other
codegen units to reference this one copy.
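As a concrete illustration (the function below is made up):
```rust
// At -O0 with multiple codegen units, a function like this used to be
// translated into every codegen unit that called it, with `internal` linkage:
#[inline]
pub fn triple(x: u32) -> u32 {
    x.wrapping_mul(3)
}

// After this change it is translated into exactly one codegen unit, and calls
// from the other codegen units become ordinary external references to it.
```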
The goal here is to improve debug compile times by reducing the amount of
translation that happens on behalf of multiple codegen units. It was discovered
in #44941 that increasing the number of codegen units had the adverse side
effect of increasing the overall work done by the compiler, and the suspicion
here was that the compiler was inlining, translating, and codegen'ing more
functions with more codegen units (for example `String` would be basically
inlined into all codegen units if used). The strategy in this commit should
reduce the cost of `#[inline]` functions to that of a single codegen unit,
namely translating and codegen'ing each inline function only once.
Collected [data] shows that this does indeed improve the situation from [before]:
the overall cpu-clock time increases at a much slower rate, and when pinned to
one core rustc does not consume significantly more wall-clock time than with one
codegen unit.
One caveat of this commit is that the symbol names for inlined functions that
are only translated once needed some slight tweaking. These inline functions
could be translated into multiple crates and we need to make sure the symbols
don't collide, so the crate name/disambiguator is mixed into the symbol name
hash in these situations.
[data]: https://github.com/rust-lang/rust/issues/44941#issuecomment-334880911
[before]: https://github.com/rust-lang/rust/issues/44941#issuecomment-334583384
Allow atomic operations up to 32 bits
The ARMv5te platform does not have instruction-level support for atomics; however, the kernel provides [user space helpers] which can be used to perform atomic operations. When linking with `libgcc`, the atomic symbols needed by Rust are provided by these helpers rather than by CPU-level intrinsics.
[user space helpers]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
32-bit versions of these kernel-level helpers were introduced in Linux Kernel 2.6.12, and 64-bit versions were introduced in Linux Kernel 3.1. I have selected the 32-bit versions, as std currently only requires Linux 2.6.18 and above as far as I am aware.
As this target is specifically Linux and gnueabi, it is reasonable to assume the Linux kernel and libc will be available for the target. There is a large performance penalty, as we are not using CPU-level intrinsics; however, this penalty is likely preferable to not having the target at all.
I have used this change in a custom target (along with xargo) to build std, as well as a number of higher level crates.
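In rustc's target specifications this cap is expressed via the atomic-width field; a minimal sketch of the relevant line, with `base` standing in for the target's options under construction:
```rust
// Cap atomics at 32 bits so that only the 32-bit kernel helpers are needed;
// wider atomic types are then simply unavailable on this target.
base.max_atomic_width = Some(32);
```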
## Additional information
For reference, here is what a code snippet compiles to:
```rust
use std::sync::atomic::{AtomicIsize, Ordering};
#[no_mangle]
pub extern fn foo(a: &AtomicIsize) -> isize {
a.fetch_add(1, Ordering::SeqCst)
}
```
```
Disassembly of section .text.foo:
00000000 <foo>:
0: e92d4800 push {fp, lr}
4: e3a01001 mov r1, #1
8: ebfffffe bl 0 <__sync_fetch_and_add_4>
c: e8bd8800 pop {fp, pc}
```
`__sync_fetch_and_add_4` is in turn provided by `libgcc.a`, whose implementation looks like this:
```
Disassembly of section .text:
00000000 <__sync_fetch_and_add_4>:
0: e92d40f8 push {r3, r4, r5, r6, r7, lr}
4: e1a05000 mov r5, r0
8: e1a07001 mov r7, r1
c: e59f6028 ldr r6, [pc, #40] ; 3c <__sync_fetch_and_add_4+0x3c>
10: e5954000 ldr r4, [r5]
14: e1a02005 mov r2, r5
18: e1a00004 mov r0, r4
1c: e0841007 add r1, r4, r7
20: e1a0e00f mov lr, pc
24: e12fff16 bx r6
28: e3500000 cmp r0, #0
2c: 1afffff7 bne 10 <__sync_fetch_and_add_4+0x10>
30: e1a00004 mov r0, r4
34: e8bd40f8 pop {r3, r4, r5, r6, r7, lr}
38: e12fff1e bx lr
3c: ffff0fc0 .word 0xffff0fc0
```
Note the reference to `0xffff0fc0`, an address provided by the [user space helpers].
`PhantomData<*const T>` implies that Send/Sync-ness follows that of `*const T`,
but the discriminant should always be Send and Sync.
Use `PhantomData<fn() -> T>`, which has the same variance in T but is always Send + Sync.
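A sketch of the shape of the fix (illustrative, not the exact code):
```rust
use std::marker::PhantomData;

// A discriminant-like wrapper: it never stores a `T`, so it should be
// Send + Sync regardless of `T`. With `PhantomData<*const T>`, both auto
// traits would be lost whenever `*const T` lacks them; `PhantomData<fn() -> T>`
// has the same (covariant) variance in `T` but is always Send + Sync.
pub struct Discriminant<T>(u64, PhantomData<fn() -> T>);
```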
rustc: Implement ThinLTO
This commit is an implementation of LLVM's ThinLTO for consumption in rustc
itself. Today LTO works by merging all relevant LLVM modules into one
and then running optimization passes. "Thin" LTO operates differently by having
more sharded work and allowing parallelism opportunities between optimizing
codegen units. Further down the road Thin LTO also allows *incremental* LTO
which should enable even faster release builds without compromising on the
performance we have today.
This commit uses a `-Z thinlto` flag to gate whether ThinLTO is enabled. It then
also implements two forms of ThinLTO:
* In one mode we'll *only* perform ThinLTO over the codegen units produced in a
single compilation. That is, we won't load upstream rlibs, but we'll instead
just perform ThinLTO amongst all codegen units produced by the compiler for
the local crate. This is intended to emulate a desired end point where we have
codegen units turned on by default for all crates and ThinLTO allows us to do
this without performance loss.
* In another mode, like full LTO today, we'll optimize all upstream dependencies
in "thin" mode. Unlike today, however, this LTO step is fully parallelized so
should finish much more quickly.
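For illustration, the first mode corresponds to an invocation along these lines (only `-Z thinlto` is new here; the other flags are ordinary rustc options):
```
rustc -O -C codegen-units=8 -Z thinlto main.rs
```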
There's a good bit of commentary about what the implementation is doing and where
it came from, but the tl;dr is that currently most of the support here is
copied from upstream LLVM. This code duplication is done for a number of
reasons:
* Controlling parallelism means we can use the existing jobserver support to
avoid overloading machines.
* We will likely want a slightly different form of incremental caching which
integrates with our own incremental strategy, but this is yet to be
determined.
* This buys us some flexibility about when/where we run ThinLTO, as well as
having it tailored to fit our needs for the time being.
* Finally this allows us to reuse some artifacts such as our `TargetMachine`
creation, where not all of the options we use today are supported by
upstream LLVM yet.
My hope is that we can get some experience with this copy/paste in tree and then
eventually upstream some work to LLVM itself to avoid the duplication while
still ensuring our needs are met. Otherwise I fear that maintaining these
bindings may be quite costly over the years with LLVM updates!
update FIXME(#6298) to point to open issue 15020
update FIXME(#6268) to point to RFC 811
update FIXME(#10520) to point to RFC 1751
remove FIXME for emscripten issue 4563 and include target in `test_estimate_scaling_factor`
remove FIXME(#18207) since node_id isn't used for `ref` pattern analysis
remove FIXME(#6308) since DST was implemented in #12938
remove FIXME(#2658) since it was decided to not reorganize module
remove FIXME(#20590) since it was decided to stay conservative with projection types
remove FIXME(#20297) since it was decided that solving the issue is unnecessary
remove FIXME(#27086) since closures do correspond to structs now
remove FIXME(#13846) and enable `function_sections` for Windows
remove mention of #22079 in FIXME(#22079) since this is a general FIXME
remove FIXME(#5074) since the restrictions on borrows were lifted
Fnty args rustdoc
Fixes #44570.
cc @QuietMisdreavus
cc @rust-lang/dev-tools
Considering the impact on the `hir` libs, I'll put @eddyb as reviewer.
r? @eddyb
As discovered in #44538, ARMv6 devices may or may not support unaligned memory accesses. ARMv6
Linux *seems* to have no problem with unaligned accesses, but this is because the kernel is stepping
in to fix up each unaligned memory access -- and this incurs a performance penalty.
This commit enforces aligned memory accesses on all our in-tree ARM targets that may be used with
ARMv6 devices. This should improve performance of Rust programs on ARMv6 devices. For the record,
clang also applies this attribute when targeting ARMv6 devices that are not running Darwin or
NetBSD.
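For reference, the mechanism is, assuming it matches what clang does, LLVM's `strict-align` target feature; the change is roughly a one-liner per target spec (`base` stands in for the target's options under construction):
```rust
// Request that LLVM emit only aligned loads/stores instead of relying on
// the kernel's unaligned-access fixup.
base.features = "+strict-align".to_string();
```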
LLDB's output may be None instead of '', and that will cause a type
mismatch when normalize_whitespace() expects a string instead of
None. This commit simply ensures we pass '' even if the output
is None.
Fix TcpStream::local_addr docs example code
The local address's port is not 8080 in this example; that's the remote peer's port. On my machine the local port is different every time, so I've changed the `assert_eq` to test only the IP address.
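A sketch of the adjusted example (close to, though not necessarily identical to, the final doc text):
```rust
use std::net::{IpAddr, Ipv4Addr, TcpStream};

fn main() {
    let stream = TcpStream::connect("127.0.0.1:8080")
        .expect("Couldn't connect to the server...");
    // The local port is ephemeral and differs on every run, so assert only
    // the IP address rather than the full `SocketAddr`:
    assert_eq!(stream.local_addr().unwrap().ip(),
               IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
}
```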
Provides a reasonable interface for the Box::from_raw implementation.
It does not get around the requirement of mem::transmute for converting
back and forth between Unique and Box.
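For context, this is the public API in question; a minimal round trip looks like:
```rust
fn main() {
    let b = Box::new(5u32);
    // Give up ownership of the allocation and obtain a raw pointer...
    let raw: *mut u32 = Box::into_raw(b);
    // ...then reclaim ownership. Internally this conversion still goes through
    // `Unique`, which is where the `mem::transmute` mentioned above comes in.
    let b = unsafe { Box::from_raw(raw) };
    assert_eq!(*b, 5);
}
```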
A few pretty-printers were returning a quoted string from their
to_string method. It's preferable in gdb to return a lazy string and to
let gdb handle the display by having a "display_hint" method that
returns "string" -- it lets gdb settings (like "set print ...") work, it
handles corrupted strings a bit better, and it passes the information
along to IDEs.
Previously the constant index was reported as `[x of y]` or `[-x of y]`, where
`x` was the offset and `y` the minimum length of the slice. The minus sign
was applied in the wrong cases: for `&[_, x, .., _, _]`, the error reported was
`[-1 of 4]`, and for `&[_, _, .., x, _]`, the error reported was `[2 of 4]`.
This commit fixes the sign so that the indexes 1 and -2 are reported, and
removes the ` of y` part of the message to make it more succinct.
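For reference, the two patterns in context (the surrounding function is made up):
```rust
// A binding before `..` gets a non-negative constant index counted from the
// front; a binding after `..` gets a negative index counted from the back.
fn demo(s: &[i32]) {
    if let &[_, x, .., _, _] = s {
        println!("{}", x); // `x` is at constant index 1
    }
    if let &[_, _, .., y, _] = s {
        println!("{}", y); // `y` is at constant index -2
    }
}
```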
fix logic error in #44269's `prune_cache_value_obligations`
We want to retain obligations that *contain* inference variables, not
obligations that *don't contain* them, in order to fix #43132. Because
of surrounding changes to inference, the ICE doesn't occur in its
original case, but I believe it could still be made to occur on master.
Maybe I should try to write a new test case? Certainly not right now
(I'm mainly trying to get us a beta that we can ship) but maybe before
we land this PR on nightly?
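To pin down the flip, here is a hedged sketch with stand-in types (the real code in trait selection differs in detail):
```rust
// Stand-in for the compiler's obligation type.
struct Obligation { has_infer_vars: bool }

fn prune_cache_value_obligations(obligations: &mut Vec<Obligation>) {
    // Correct: retain obligations that *contain* inference variables.
    // The bug retained the complement, `!o.has_infer_vars`, dropping
    // exactly the obligations needed to resolve those variables.
    obligations.retain(|o| o.has_infer_vars);
}
```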
This seems to cause a 10% performance regression in my imprecise
attempt to benchmark item-body checking for #43613, but it's better to
be slow and right than fast and wrong. If we want to recover that, I
think we can change the constrained-type-parameter code to actually
give a list of projections that are important for resolving inference
variables and filter everything else out.