The structure of the old translator, as well as MIR, assumed that drop glue cannot panic and
translated the drops accordingly. However, in the presence of `Drop::drop` this assumption can be
trivially shown to be untrue, so Rust code like the following would never print the number 2:
```rust
struct Droppable(u32);

impl Drop for Droppable {
    fn drop(&mut self) {
        if self.0 == 1 { panic!("Droppable(1)") } else { println!("{}", self.0) }
    }
}

fn main() {
    let x = Droppable(2);
    let y = Droppable(1);
}
```
While this behaviour is allowed by the language rules (drops are not guaranteed to run), it is
very counter-intuitive. We fix this in MIR by allowing `Drop` to have a target to take on
divergence and by connecting the drops in such a way that the leftover drops are executed when
some drop unwinds.
Note that this commit still does not implement the translator part of the changes necessary for
the grand scheme of things to fully work, so the actual observed behaviour does not change yet.
Coming soon™.
See #14875.
We used to have `CallKind` only because there was a requirement that all successors live in a
contiguous memory block. Now that this requirement is gone, remove `CallKind` and instead keep
the necessary information inline.
Awesome!
I have it set as stable right now under the rationale that it's extending an existing, stable API to another type in the "obvious" way.
r? @alexcrichton
cc @reem
Changed the description of the `make_mut` copy-on-write behaviour in arc.rs
The sentence "doesn't have one strong reference and no weak references." is a
hard-to-understand double negative that can be explained much more clearly.
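For context, this is the behaviour being documented, shown as a small self-contained illustration (not the text of the doc change itself):
```rust
use std::sync::Arc;

fn main() {
    let mut a = Arc::new(5);
    let b = Arc::clone(&a);

    // Another strong reference (`b`) exists, so `make_mut` clones the inner
    // value and `a` now points at its own copy; `b` is left untouched.
    *Arc::make_mut(&mut a) += 1;
    assert_eq!(*a, 6);
    assert_eq!(*b, 5);

    // `a` is now the only reference to its allocation, so no clone happens
    // and the value is mutated in place.
    *Arc::make_mut(&mut a) += 1;
    assert_eq!(*a, 7);
}
```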
After the truly incredible and embarrassing mess I managed to make in my last pull request, this should be a bit less messy.
Fixes #31267 - with this change, the code mentioned in the issue compiles.
Found and fixed another issue as well: constants of zero-sized types, when used in `ExprRepeat`s inside associated constants, were causing the compiler to crash at the same place as #31267. An example of this:
```rust
#[derive(Debug)]
struct Bar;

const BAZ: Bar = Bar;

#[derive(Debug)]
struct Foo([Bar; 1]);

struct Biz;

impl Biz {
    const BAZ: Foo = Foo([BAZ; 1]);
}

fn main() {
    let foo = Biz::BAZ;
    println!("{:?}", foo);
}
```
However, I'm fairly certain that my fix for this is not as elegant as it could be. The problem seems to occur only with an associated constant of a tuple struct containing a fixed-size array that is initialized with a repeat expression, where the element given to the repeat expression is another constant of a zero-sized type.

The fix works by looking for constants and associated constants which are zero-sized and consequently contain no data, but for which rustc is still attempting to emit an LLVM value; it simply stops rustc from emitting anything for them. By my logic this should be fine, since the only values emitted in this case (according to the comments) are for closures with side effects, and constants never have side effects, so it's safe to simply drop them. It fixes the error and things compile fine with it, but I have a sneaking suspicion that it could be done in a better manner.
r? @nikomatsakis
Rust currently emits atomic loads and stores with the LLVM `volatile` qualifier. This is unnecessary and prevents LLVM from performing optimizations on these atomic operations.
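For reference, a small illustration (not taken from the PR) of the kind of code whose codegen is affected:
```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn bump() -> usize {
    // After this change, these should lower to plain (non-volatile)
    // `store atomic` / `load atomic` instructions in LLVM IR, which the
    // optimizer can reason about like any other atomics.
    COUNTER.store(1, Ordering::Release);
    COUNTER.load(Ordering::Acquire)
}

fn main() {
    assert_eq!(bump(), 1);
}
```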
If a new cleanup is added to a cleanup scope, the cached exits for that
scope are cleared, so all previous cleanups have to be translated
again. In the worst case this means that we get N distinct landing pads
where the last one has N cleanups, then N-1 and so on.
Since new cleanups are executed before older ones, we can instead cache the number of
already-translated cleanups in addition to the block that contains them, and then translate
only the new ones, if any, before jumping to the cached ones, getting away with linear growth
instead.
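To illustrate the effect on growth, here is a toy model with hypothetical names (not the actual rustc data structures): a cached exit remembers how many cleanups it already covers, so a later exit translates only the cleanups added since then and jumps to the cached block for the rest.
```rust
struct CachedExit {
    covered: usize, // number of cleanups already translated into the cached block
}

/// Returns how many cleanups must be freshly translated for this exit; the
/// rest are reached by jumping to the previously cached block.
fn translate_exit(total_cleanups: usize, cache: &mut Option<CachedExit>) -> usize {
    let covered = cache.as_ref().map_or(0, |c| c.covered);
    let newly_translated = total_cleanups - covered;
    *cache = Some(CachedExit { covered: total_cleanups });
    newly_translated
}

fn main() {
    let mut cache = None;
    // Adding 5 cleanups one at a time now translates 5 cleanups in total
    // (one per exit), instead of 1 + 2 + 3 + 4 + 5 = 15 without the cache.
    let total: usize = (1..=5).map(|n| translate_exit(n, &mut cache)).sum();
    assert_eq!(total, 5);
}
```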
For the crate in #31381 this reduces the compile time for an optimized
build from >20 minutes (I cancelled the build at that point) to about 11
seconds. Testing a few crates that ship with rustc shows compile-time
improvements somewhere between 1 and 8%, the "big" winner being
rustc_platform_intrinsics, which features code similar to that in #31381.
Fixes #31381
This pull request adds support for [Illumos](http://illumos.org/)-based operating systems: SmartOS, OpenIndiana, and others. For now it's x86-64 only, as I'm not sure if 32-bit installations are widespread. This PR is based on #28589 by @potatosalad, and also closes #21000, #25845, and #25846.
Required changes in libc are already merged: https://github.com/rust-lang-nursery/libc/pull/138
Here's a snapshot required to build a stage0 compiler:
https://s3-eu-west-1.amazonaws.com/nbaksalyar/rustc-sunos-snapshot.tar.gz
It passes all checks from `make check`.
There are some changes I'm not quite sure about, e.g. the macro usage in `src/libstd/num/f64.rs` and the `DirEntry` structure in `src/libstd/sys/unix/fs.rs`, so any comments on how to rewrite them better would be greatly appreciated.
Also, the LLVM configure script might need to be patched to build it successfully, or a pre-built libLLVM should be used. Some details can be found here: https://llvm.org/bugs/show_bug.cgi?id=25409
Thanks!
r? @brson
This is very useful when the `RwLock` is synchronizing access to a data
structure and you would like to return or store guards that contain
references to data inside the data structure rather than to the data
structure itself.
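The description above doesn't show the API itself, so here is a hedged sketch of the use case using the equivalent `map` operation from the `parking_lot` crate rather than the std API added here; the `Database` and `users` names are made up:
```rust
use parking_lot::{MappedRwLockReadGuard, RwLock, RwLockReadGuard};

struct Database {
    users: Vec<String>,
}

// Return a guard that borrows only the `users` field, instead of handing
// the whole `Database` to the caller.
fn users(db: &RwLock<Database>) -> MappedRwLockReadGuard<'_, Vec<String>> {
    RwLockReadGuard::map(db.read(), |d| &d.users)
}

fn main() {
    let db = RwLock::new(Database { users: vec!["alice".to_string()] });
    assert_eq!(users(&db)[0], "alice");
}
```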