Fixed mutable vars being marked as used when they weren't
#### NB: bootstrapping is slow on my machine, even with `keep-stage` - fixes for occurrences in the current codebase are ~~in the pipeline~~ done. This PR is being put up for review of the fix for the issue.
Fixes #43526, Fixes #30280, Fixes #25049
### Issue
Whenever the compiler detected a mutable deref being used mutably, it marked an associated value as being used mutably as well. In the case of dereferencing local variables which were mutable references, this incorrectly marked the reference itself as being used mutably, instead of its contents - with the consequence that the following code emits no warnings:
```rust
fn do_thing<T>(mut arg: &mut T) {
    // don't touch `arg` itself - just deref it to access the T
}
```
### Fix
Make dereferences not be counted as a mutable use, but only when they are derefs of borrows held in local variables.
#### Why not on things other than local variables?
* Whenever you capture a variable in a closure, it gets turned into a hidden reference - when you use it in the closure, that hidden reference gets dereferenced. If the closure uses the variable mutably, that is actually a mutable use of the value behind the reference, so it has to be counted.
* If you deref a mutable `Box` to access the contents mutably, you are using the `Box` mutably - so it has to be counted. (Both cases are sketched below.)
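A minimal sketch of both cases, written for this description (not code from the PR itself):

```rust
fn main() {
    let mut x = 5;
    // Closure capture: `x` is captured by a hidden mutable reference,
    // and calling the closure derefs that reference mutably - a real
    // mutable use of `x`, so it must still be counted.
    let mut bump = || x += 1;
    bump();
    assert_eq!(x, 6);

    // Box: mutating the contents derefs the `Box` itself mutably,
    // so `b` genuinely needs to be `mut`.
    let mut b = Box::new(0);
    *b += 1;
    assert_eq!(*b, 1);
}
```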
Make the "main" constructors of NonZero/Shared/Unique return Option
Per discussion in https://github.com/rust-lang/rust/issues/27730#issuecomment-303939441.
This is a breaking change to unstable APIs.
The old behavior is still available under the name `new_unchecked`. Note that only that one can be `const fn`, since `if` is currently not allowed in constant contexts.
In the case of `NonZero` this requires adding a new `is_zero` method to the `Zeroable` trait. I mildly dislike this, but it’s not much worse than having a `Zeroable` trait in the first place. `Zeroable` and `NonZero` are both unstable, so this can be reworked later.
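To illustrate the new shape with a simplified stand-in type (hypothetical code for this description, not the actual unstable implementation):

```rust
// Simplified stand-in for the checked/unchecked constructor split;
// the real NonZero/Shared/Unique types are generic and unstable.
struct NonZeroU32(u32);

impl NonZeroU32 {
    /// The "main" constructor now checks for zero and returns Option.
    fn new(n: u32) -> Option<NonZeroU32> {
        if n == 0 { None } else { Some(NonZeroU32(n)) }
    }

    /// The old behavior: the caller guarantees `n != 0`. Only this
    /// constructor can be `const fn`, since `if` is not allowed in
    /// constant contexts.
    const unsafe fn new_unchecked(n: u32) -> NonZeroU32 {
        NonZeroU32(n)
    }
}

fn main() {
    assert!(NonZeroU32::new(0).is_none());
    let one = unsafe { NonZeroU32::new_unchecked(1) };
    assert_eq!(one.0, 1);
}
```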
This PR is an implementation of [RFC 1974], which specifies a new method of
defining a global allocator for a program. It obsoletes the old
`#![allocator]` attribute and removes support for it.
[RFC 1974]: https://github.com/rust-lang/rfcs/pull/1974
The new `#[global_allocator]` attribute solves many issues encountered with the
`#![allocator]` attribute, such as composition and restrictions on the crate
graph itself. The compiler now has much more control over the allocator's ABI
and over how it is hooked up, allowing much more freedom in how this feature
is implemented.
cc #27389
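A minimal sketch of the attribute in use (shown here with `std::alloc::System`; that path is an assumption, as the feature was still unstable at the time of this PR):

```rust
use std::alloc::System;

// Declare a static that implements the global-allocator interface and
// mark it with the new attribute; all heap allocations in the program
// then go through it.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // This Vec is allocated via the `System` allocator above.
    let v = vec![1, 2, 3];
    assert_eq!(v.len(), 3);
}
```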
* Changed btree_map's and hash_map's Entry (etc.) docs to be consistent
* Changed VecDeque's type and module summary sentences to be consistent
with each other as well as with other summary sentences in the module
* Changed HashMap's and HashSet's summary sentences to be less redundantly
phrased and also more consistent with the other summary sentences in the
module
* Also, added an example to `Bound`
* Added links where possible (limited because of the std facade)
* Changed references to methods from `foo()` to `foo` in module docs
* Changed references to methods from `HashMap::foo` to just `foo` in
top-level docs for `HashMap` and the `default` doc for `DefaultHasher`
* Various small other fixes
* Store capacity_mask instead of capacity (the trick is sketched after this list)
* Move bucket index into RawBucket
* Bucket index is now always within [0..table_capacity)
* Clone RawTable using RawBucket
* Simplify iterators by moving logic into RawBuckets
* Make retain aware of the number of elements
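These are std-internal changes, but the capacity_mask idea is easy to show in isolation (a hypothetical simplified sketch, not the real `RawTable` code):

```rust
// For a power-of-two capacity, `hash % capacity` equals
// `hash & (capacity - 1)`, so storing the mask instead of the
// capacity avoids recomputing `capacity - 1` on every probe.
fn bucket_index(hash: u64, capacity_mask: usize) -> usize {
    (hash as usize) & capacity_mask
}

fn main() {
    let capacity = 16usize;           // always a power of two
    let capacity_mask = capacity - 1; // 0b1111
    assert_eq!(bucket_index(37, capacity_mask), 37 % capacity);
}
```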
This replaces the `std::collections::hash::table::RevMoveBuckets`
iterator with a simpler `while` loop. This iterator was only used for
dropping the remaining elements of a `RawTable`, so instead we can just
loop through directly and drop them in place.
This should be functionally equivalent to the former code, but a little
easier to read. I was hoping it might have some performance benefit
too, but it seems the optimizer was already good enough to see through
the iterator -- the generated code is nearly the same. Maybe it will
still help if an element type has more complicated drop code.
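The real `RawTable` internals aren't reproduced here, but the shape of the change is roughly the following (a hypothetical simplified sketch, not the actual std code):

```rust
use std::ptr;

// Instead of driving a dedicated reverse-moving iterator, walk the
// raw storage with a plain `while` loop and drop each element where
// it sits - nothing needs to be moved out.
unsafe fn drop_all_in_place<T>(start: *mut T, len: usize) {
    unsafe {
        let mut raw = start;
        let end = start.add(len);
        while raw != end {
            ptr::drop_in_place(raw);
            raw = raw.add(1);
        }
    }
}

fn main() {
    // Demo: drop a Vec's elements manually, then let the Vec free
    // its now-empty buffer.
    let mut v = vec![String::from("a"), String::from("b")];
    let len = v.len();
    unsafe {
        v.set_len(0); // detach the elements first to avoid a double drop
        drop_all_in_place(v.as_mut_ptr(), len);
    }
}
```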
Fix a spelling error in HashMap documentation, and slightly reword surrounding text for precision
Noticed while reading docs just now.
It's possible that the prior wording *meant* to state that the seed's randomness depends on the exact instant that the system RNG was created, I guess. But unless there's an API guarantee that this is the case, the wording seems over-precise. Is there a formal API guarantee that would forbid, say, the system RNG from generating all output using the Intel RDRAND instruction? I don't think the quality of output in that case would depend on when the RNG was created. Yet it seems to me like it could well be a valid source of randomness when computing the initial seed.
For that reason, tying the randomness of the seed to the quality of the RNG's output *at the precise instant the seed is computed* seems less confining. That instantaneous quality level could be determined by the quality at the instant the RNG was created -- but instantaneous quality need not be low for that precise reason.