Remove sync::Once::call_once 'static bound
See https://internals.rust-lang.org/t/sync-once-per-instance/7918 for more context.
Suggested reviewer is @alexcrichton, who added the `'static` bound back in 2014. I don't want to officially `r?` though, if the system would even let me; I'd rather let the system choose the appropriate member, since it knows more than I do.
`git blame` history for `sync::Once::call_once`'s signature:
- [std: Second pass stabilization of sync](f3a7ec7028) (Dec 2014)
```diff
- pub fn doit<F>(&'static self, f: F) where F: FnOnce() {
+ #[stable]
+ pub fn call_once<F>(&'static self, f: F) where F: FnOnce() {
```
- [libstd: use unboxed closures](cdbb3ca9b7) (Dec 2014)
```diff
- pub fn doit(&'static self, f: ||) {
+ pub fn doit<F>(&'static self, f: F) where F: FnOnce() {
```
- [std: Rewrite the `sync` module](71d4e77db8) (Nov 2014)
```diff
- pub fn doit(&self, f: ||) {
+ pub fn doit(&'static self, f: ||) {
```
> ```text
> The second layer is the layer provided by `std::sync` which is intended to be
> the thinnest possible layer on top of `sys_common` which is entirely safe to
> use. There are a few concerns which need to be addressed when making these
> system primitives safe:
>
> * Once used, the OS primitives can never be **moved**. This means that they
> essentially need to have a stable address. The static primitives use
> `&'static self` to enforce this, and the non-static primitives all use a
> `Box` to provide this guarantee.
> ```
The author of this diff is @alexcrichton. `sync::Once` now contains only a pointer to (privately hidden) `Waiter`s, which are all stack-allocated. The `'static` bound on `sync::Once` is thus unnecessary to guarantee that OS primitives are never relocated.
As the `'static` bound is not required for `sync::Once`'s operation, removing it is strictly more useful. As an example, it allows attaching a one-time operation to instances rather than only to global singletons.
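A minimal sketch of what this enables once the bound is relaxed (the `Lazy` type here is purely illustrative):
```rust
use std::sync::Once;

struct Lazy {
    once: Once,
}

impl Lazy {
    fn new() -> Lazy {
        Lazy { once: Once::new() }
    }

    // With `&'static self` this would only compile for statics;
    // with plain `&self` it works for any instance.
    fn init(&self) {
        self.once.call_once(|| {
            println!("initialized exactly once per instance");
        });
    }
}

fn main() {
    let lazy = Lazy::new(); // a stack-allocated, non-'static instance
    lazy.init();
    lazy.init(); // the closure above runs only the first time
}
```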
improve error message shown for unsafe operations
Add a short explanation saying why undefined behavior could arise. In particular, the error many people got for "creating a pointer to a packed field requires unsafe block" was not worded well -- it led to people just adding the unsafe block without considering whether what they are doing follows the rules.
I am not sure if a "note" is the right thing, but that was the easiest thing to add...
Inspired by @gnzlbg at https://github.com/rust-lang/rust/issues/46043#issuecomment-381544673
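For context, a minimal sketch of the situation the error is about (the struct here is made up): taking a reference to a field of a `#[repr(packed)]` struct can produce an unaligned reference, which is undefined behavior, so merely wrapping the expression in `unsafe` does not make it correct:
```rust
#[repr(packed)]
struct Packed {
    a: u8,
    b: u32, // not necessarily 4-byte aligned inside a packed struct
}

fn main() {
    let p = Packed { a: 1, b: 2 };

    // Creating a reference to `p.b` may produce an unaligned &u32,
    // which is undefined behavior; an `unsafe` block alone does not fix that.
    // let r = unsafe { &p.b };

    // Reading the field by value (a copy) is fine:
    let b = p.b;
    println!("{}", b);
}
```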
hygiene: Decouple transparencies from expansion IDs
And remove fallback to parent modules during resolution of names in scope.
This is a breaking change for users of unstable macros 2.0 (both procedural and declarative), code like this:
```rust
#![feature(decl_macro)]
macro m($S: ident) {
struct $S;
mod m {
type A = $S;
}
}
fn main() {
m!(S);
}
```
or equivalent
```rust
#![feature(decl_macro)]
macro m($S: ident) {
mod m {
type A = $S;
}
}
fn main() {
struct S;
m!(S);
}
```
stops working due to module boundaries being properly enforced.
For proc macro derives this is still reported as a compatibility warning to give `actix_derive`, `diesel_derives` and `palette_derive` time to fix their issues.
Fixes https://github.com/rust-lang/rust/issues/50504 in accordance with [this comment](https://github.com/rust-lang/rust/issues/50504#issuecomment-399764767).
Infinite loop detection for const evaluation
Resolves #50637.
An `EvalContext` stores the transient state (stack, heap, etc.) of the MIRI virtual machine while it is executing code. As long as MIRI only executes pure functions, we can detect whether a program is in a state where it will never terminate by periodically taking a "snapshot" of this transient state and comparing it to previous ones. If any two states are exactly equal, the machine must be in an infinite loop.
Instead of fully cloning a snapshot every time the detector is run, we store a snapshot's hash. Only when a hash collision occurs do we fully clone the interpreter state. Future snapshots which cause a collision will be compared against this clone, causing the interpreter to abort if they are equal.
At the moment, snapshots are not taken until MIRI has progressed a certain amount. After this threshold, snapshots are taken every `DETECTOR_SNAPSHOT_PERIOD` steps. This means that an infinite loop with period `P` will be detected after a maximum of `2 * P * DETECTOR_SNAPSHOT_PERIOD` interpreter steps. The factor of 2 arises because we only clone a snapshot after it causes a hash collision.
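A minimal sketch of the hash-then-clone-on-collision strategy described above, independent of the actual MIRI types (the `LoopDetector` here is only an illustration; the real detector additionally runs only every `DETECTOR_SNAPSHOT_PERIOD` steps):
```rust
use std::collections::hash_map::{DefaultHasher, Entry};
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Detects repeated machine states: store only hashes, and clone a full
/// snapshot lazily, the first time a hash collision occurs.
struct LoopDetector<S: Clone + Eq + Hash> {
    snapshots: HashMap<u64, Option<S>>,
}

impl<S: Clone + Eq + Hash> LoopDetector<S> {
    fn new() -> Self {
        LoopDetector { snapshots: HashMap::new() }
    }

    /// Returns true if `state` has been seen before, i.e. the machine loops.
    fn observe(&mut self, state: &S) -> bool {
        let mut hasher = DefaultHasher::new();
        state.hash(&mut hasher);
        let key = hasher.finish();

        match self.snapshots.entry(key) {
            Entry::Vacant(e) => {
                // First time this hash is seen: remember it, but don't clone yet.
                e.insert(None);
                false
            }
            Entry::Occupied(mut e) => {
                let slot = e.get_mut();
                match slot.take() {
                    // Hash collision with a hash-only entry: pay for the full
                    // clone now so later collisions can be compared exactly.
                    None => {
                        *slot = Some(state.clone());
                        false
                    }
                    Some(snapshot) => {
                        let looping = snapshot == *state;
                        *slot = Some(snapshot);
                        looping
                    }
                }
            }
        }
    }
}
```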
Unix sockets on redox
This is done using the ipcd daemon. It's not exactly like unix sockets because there is not actually a physical file for the path, but it's close enough for a basic implementation :)
This allows mio-uds and tokio-uds to work with a few modifications as well, which is exciting!
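As a rough usage sketch (assuming the standard `std::os::unix::net` API is what ipcd backs on Redox), the same code that works on Linux should work there too, even though no file actually appears at the bound path:
```rust
use std::io::{Read, Write};
use std::os::unix::net::{UnixListener, UnixStream};
use std::thread;

fn main() -> std::io::Result<()> {
    // On Redox this path is only a name registered with ipcd,
    // not an entry in the filesystem.
    let listener = UnixListener::bind("/tmp/example.sock")?;

    let server = thread::spawn(move || -> std::io::Result<()> {
        let (mut stream, _addr) = listener.accept()?;
        let mut buf = String::new();
        stream.read_to_string(&mut buf)?;
        println!("server got: {}", buf);
        Ok(())
    });

    let mut client = UnixStream::connect("/tmp/example.sock")?;
    client.write_all(b"hello over a unix socket")?;
    drop(client); // close the stream so the server's read returns

    server.join().unwrap()?;
    Ok(())
}
```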
Disable LLVM verification by default
Currently -Z no-verify only controls IR verification prior to LLVM codegen, while verification is performed unconditionally both before and after linking with (Thin)LTO.
I'm also wondering what the sentiment is on disabling verification by default (e.g. only enabling it on alt builds with assertions). Verification does not seem terribly useful outside of rustc development, and it does show up in profiles (at something like 3%).
**EDIT:** A table showing the various configurations and what is enabled when.
| Configuration | Dynamic verification performed | LLVM static assertions compiled in |
| --- | --- | --- |
| alt builds | | yes |
| nightly builds | | no |
| stable builds | | no |
| CI builds | | |
| dev builds in a checkout | | |
Better docs for copy_from_slice & clone_from_slice
I copy-pasted the text from clone_from_slice to copy_from_slice 😄
@steveklabnik feel free to suggest changes.
edit: closes #49769
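For reference, the two methods the docs cover, side by side: `copy_from_slice` requires `T: Copy` and does a memcpy, while `clone_from_slice` only needs `T: Clone`; both panic if the slices have different lengths.
```rust
fn main() {
    // copy_from_slice: T must be Copy; both slices must have the same length.
    let src = [1u8, 2, 3, 4];
    let mut dst = [0u8; 4];
    dst.copy_from_slice(&src);
    assert_eq!(dst, [1, 2, 3, 4]);

    // clone_from_slice: only T: Clone is required, so it also works for
    // types like String that are not Copy.
    let src = vec![String::from("a"), String::from("b")];
    let mut dst = vec![String::new(), String::new()];
    dst.clone_from_slice(&src);
    assert_eq!(dst[0], "a");
    assert_eq!(dst[1], "b");
}
```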
Upgrade to LLVM's master branch (LLVM 7)
### Current status
~~Blocked on a [performance regression](https://github.com/rust-lang/rust/pull/51966#issuecomment-402320576). The performance regression has an [upstream LLVM issue](https://bugs.llvm.org/show_bug.cgi?id=38047) and has also [been bisected](https://reviews.llvm.org/D44282) to an LLVM revision.~~
Ready to merge!
---
This commit upgrades the main LLVM submodule to LLVM's current master branch.
The LLD submodule is updated in tandem as well as compiler-builtins.
Along the way, support was also added for LLVM 7's new features. This primarily
includes support for custom section concatenation natively in LLD, so we now
add wasm custom sections in LLVM IR rather than having custom support in rustc
itself for doing so.
Some other miscellaneous changes are:
* We now pass `--gc-sections` to `wasm-ld`
* The optimization level is now passed to `wasm-ld`
* A `--stack-first` option is passed to LLD to have stack overflow always cause
a trap instead of corrupting static data
* The wasm target for LLVM switched to `wasm32-unknown-unknown`.
* The syntax for aligned pointers has changed in LLVM IR and tests are updated
to reflect this.
* ~~The `thumbv6m-none-eabi` target is disabled due to an [LLVM bug][llbug]~~
Nowadays we've mostly been upgrading only when there's a major release of
LLVM, but enough changes have been happening on the wasm target that there's been
growing motivation for quite some time to upgrade our version of LLD. To
upgrade LLD, however, we need to upgrade LLVM to avoid having to build yet
another version of LLVM on the builders.
The revision of LLVM in use here is arbitrarily chosen. We will likely need to
continue to update it over time if and when we discover bugs. Once LLVM 7 is
fully released we can switch to that channel as well.
[llbug]: https://bugs.llvm.org/show_bug.cgi?id=37382
cc #50543
use the adjusted type for cat_pattern in tuple patterns
This looks like a typo introduced in #51686.
Fixes #52213.
r? @pnkfelix
Nominating for beta + stable because this is a regression causing unsoundness.
I don't see why MC should fail on well-formed code, so it might be a
better idea to just add a `delay_span_bug` there (anyone remember the
`cat_expr Errd` bug from the 1.0 days?).
However, I don't think it is a good idea to backport a new `delay_span_bug`
into stable, and this code is going away soon-ish anyway.
rustc: Avoid /tmp/ in graphviz writing
This issue was reported to security@rust-lang.org by Sebastien Marie following
our recent [security advisory][1]. Because `/tmp` is typically globally writable
it's possible for one user to place symlinks in `/tmp` pointing to files in
another user's directories, causing `rustc` to overwrite the contents of
innocent files by accident.
This patch instead defaults the output path here to the cwd which should avoid
this issue.
[1]: https://blog.rust-lang.org/2018/07/06/security-advisory-for-rustdoc.html
Correct some codegen stats counter inconsistencies
I noticed some possible typos/inconsistencies in the codegen counters. For example, `fsub` was getting counted as an integer `sub`, whereas `fadd` was counted as an add. And `addincoming` was only being counted on the initial call.
dbd10f8175/src/librustc_codegen_llvm/builder.rs (L831-L841)
The only remaining inconsistencies I can see are things like `fadd_fast` being counted as `fadd`. But the vector versions like `vector_reduce_fmax_fast` are counted as `vector.reduce.fmax_fast`, not as their 'base' versions (`vector_reduce_fmax` is counted as `vector.reduce.fmax`).
Add #[repr(transparent)] to Atomic* types
This allows them to be used in `#[repr(C)]` structs without warnings. Since rust-lang/rfcs#1649 and rust-lang/rust#35603 they are already documented to have "the same in-memory representation as" their corresponding primitive types. This just makes that explicit.
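A small sketch of what this enables (the struct names here are made up): since each atomic now has the same layout as its primitive, a `#[repr(C)]` struct containing atomics matches the layout of the equivalent struct of plain primitives and can be shared with C code without warnings.
```rust
use std::mem::{align_of, size_of};
use std::sync::atomic::{AtomicBool, AtomicUsize};

// A C-compatible struct containing atomics.
#[repr(C)]
struct SharedState {
    ready: AtomicBool,
    counter: AtomicUsize,
}

// The non-atomic equivalent, for comparison.
#[repr(C)]
struct PlainState {
    ready: bool,
    counter: usize,
}

fn main() {
    // The atomics have the same in-memory representation as their primitives,
    // so the two structs have identical size and alignment.
    assert_eq!(size_of::<SharedState>(), size_of::<PlainState>());
    assert_eq!(align_of::<SharedState>(), align_of::<PlainState>());
}
```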
This was briefly part of #51395, but was controversial and therefore dropped. But it turns out that it's essentially already documented (which I had forgotten).
Clarifying how the alignment of the struct works
The docs did not specify how the alignment of a struct is computed, so I had to spend some time figuring out how that works. I found the answer [on this page](http://camlorn.net/posts/April%202017/rust-struct-field-reordering.html):
> The total size of this struct is 5, but the most-aligned field is b with alignment 2, so we round up to 6 and give the struct an alignment of 2 bytes.
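A quick check of that rule with a hypothetical struct (using `#[repr(C)]` so the field order and padding are fully specified):
```rust
use std::mem::{align_of, size_of};

// The most-aligned field (`b`, alignment 2) determines the struct alignment.
// With #[repr(C)] the layout is: a at 0, one padding byte, b at 2..4, c at 4,
// d at 5 -- the same 6-byte size and 2-byte alignment the quoted example
// arrives at after field reordering.
#[repr(C)]
struct Example {
    a: u8,
    b: u16,
    c: u8,
    d: u8,
}

fn main() {
    assert_eq!(align_of::<Example>(), 2);
    assert_eq!(size_of::<Example>(), 6);
}
```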