See Issue 8142 for discussion.
This makes it illegal for a Drop impl to be more specialized than the
original item.
So for example, all of the following are now rejected (when they would
have been blindly accepted before):
```rust
struct S<A> { ... }
impl Drop for S<i8> { ... } // error: specialized to concrete type
struct T<'a> { ... }
impl Drop for T<'static> { ... } // error: specialized to concrete region
struct U<A> { ... }
impl<A: Clone> Drop for U<A> { ... } // error: added extra type requirement
struct V<'a, 'b>;
impl<'a, 'b: 'a> Drop for V<'a, 'b> { ... } // error: added extra region requirement
```
Due to examples like the above, this is a [breaking-change].
(The fix is either to remove the specialization from the `Drop` impl
or to transcribe the requirements onto the struct/enum definition;
examples of both appear in the PR's fixes to `libstd`.)
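A minimal sketch of both kinds of fix, using stand-in types rather than the actual `libstd` ones:
```rust
// Fix 1: make the `Drop` impl exactly as parametric as the type it is
// implemented for (no specialization to a concrete type or region).
struct S<A> {
    value: A,
}

impl<A> Drop for S<A> {
    fn drop(&mut self) {}
}

// Fix 2: transcribe the extra requirement onto the type definition, so the
// `Drop` impl no longer adds a predicate that the type itself lacks.
struct U<A: Clone> {
    value: A,
}

impl<A: Clone> Drop for U<A> {
    fn drop(&mut self) {}
}

fn main() {
    let s = S { value: 1u8 };
    let u = U { value: String::from("ok") };
    println!("{} {}", s.value, u.value);
}
```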
----
This is likely to be the last thing blocking the removal of the
`#[unsafe_destructor]` attribute.
Includes two new error codes for the new dropck check.
Update run-pass tests to accommodate new dropck pass.
Update tests and docs to reflect new destructor restriction.
----
Implementation notes:
We identify a specialized Drop impl as one whose self type is less
parametric than the struct/enum definition, detected via unification.
More specifically:
1. Attempt unification of a skolemized instance of the struct/enum
with an instance of the Drop impl's type expression where all of
the impl's generics (i.e. the free variables of the type
expression) have been replaced with unification variables.
2. If unification fails, then reject Drop impl as specialized.
3. If unification succeeds, check if any of the skolemized
variables "leaked" into the constraint set for the inference
context; if so, then reject Drop impl as specialized.
4. Otherwise, unification succeeded without leaking skolemized
variables: accept the Drop impl.
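For illustration only, here is a self-contained toy sketch of steps 1, 2 and 4 over a made-up type representation (step 3's leak check concerns region constraints and is omitted); none of these names correspond to real compiler APIs:
```rust
use std::collections::HashMap;

// Toy type language: `Skolem` is a skolemized struct/enum parameter,
// `Infer` is a unification variable standing in for an impl generic,
// `Concrete` is a fixed type such as `i8`, and `Adt` is a type
// constructor applied to arguments, e.g. `S<A>`.
#[derive(Clone)]
enum Ty {
    Skolem(u32),
    Infer(u32),
    Concrete(&'static str),
    Adt(&'static str, Vec<Ty>),
}

// Unify the skolemized item type with the impl's self type, solving the
// impl-side inference variables in `subst`. Failure means the impl is
// more specialized than the item it is implemented for.
fn unify(item: &Ty, impl_ty: &Ty, subst: &mut HashMap<u32, Ty>) -> bool {
    match (item, impl_ty) {
        // An impl generic can stand for anything: record or reuse its binding.
        (_, Ty::Infer(v)) => match subst.get(v).cloned() {
            Some(bound) => unify(item, &bound, subst),
            None => {
                subst.insert(*v, item.clone());
                true
            }
        },
        (Ty::Skolem(i), Ty::Skolem(j)) => i == j,
        (Ty::Concrete(a), Ty::Concrete(b)) => a == b,
        (Ty::Adt(n, xs), Ty::Adt(m, ys)) => {
            n == m
                && xs.len() == ys.len()
                && xs.iter().zip(ys).all(|(x, y)| unify(x, y, subst))
        }
        // Skolem vs. concrete (or any other mismatch): unification fails,
        // so the impl is rejected as specialized.
        _ => false,
    }
}

fn main() {
    // `struct S<A>`, skolemized as `S<skol0>`.
    let skolemized = Ty::Adt("S", vec![Ty::Skolem(0)]);

    // `impl<A> Drop for S<A>`: self type `S<?0>` is as parametric as the
    // struct, so unification succeeds and the impl is accepted.
    let parametric = Ty::Adt("S", vec![Ty::Infer(0)]);
    assert!(unify(&skolemized, &parametric, &mut HashMap::new()));

    // `impl Drop for S<i8>`: self type `S<i8>` fails to unify with
    // `S<skol0>`, so the impl is rejected as specialized.
    let specialized = Ty::Adt("S", vec![Ty::Concrete("i8")]);
    assert!(!unify(&skolemized, &specialized, &mut HashMap::new()));
}
```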
We identify whether a Drop impl injects new predicates by simply
checking whether each of the impl's predicates, after an appropriate
substitution, already appears on the struct/enum definition.
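For example, the following hypothetical pair passes that check: the impl's `Y: PartialEq<X>` requirement, after substituting the impl's generics back to the item's (`X` to `A`, `Y` to `B`), is exactly the `B: PartialEq<A>` predicate already present on the struct.
```rust
// Accepted: every predicate on the `Drop` impl already appears on the
// struct definition (up to renaming of the generic parameters).
struct W<A, B: PartialEq<A>> {
    left: A,
    right: B,
}

impl<X, Y: PartialEq<X>> Drop for W<X, Y> {
    fn drop(&mut self) {}
}

fn main() {
    let w = W { left: 1i32, right: 2i32 };
    println!("{}", w.right == w.left);
}
```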
A lot has changed since this doc text was last touched up, and this is
just a minor edit. I remove the trait section entirely since we don't
use extension traits that much anymore, so there are no significant
trait highlights for this module.
The main access points for `.split()` and other similar methods no
longer go through the `StrExt` trait, so update the libcore docs to
reflect this (these docs are visible in the libstd API documentation).
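For example, on current Rust this compiles with no extension-trait import at all:
```rust
fn main() {
    // No `use` of `StrExt` (or any other extension trait) is needed here.
    let parts: Vec<&str> = "a,b,c".split(',').collect();
    assert_eq!(parts, ["a", "b", "c"]);
}
```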
This permits all coercions to be performed in casts, but adds lints to warn in those cases.
Part of this patch moves cast checking to a later stage of type checking. At the point during type checking where we previously checked casts, we now only acquire obligations to check them. Once a function or module has been type checked, we check any cast obligations that have accumulated. This means more type information is available when checking casts (which was crucial for making coercions work properly in place of some casts), but it also means that casts cannot feed information back into type inference.
[breaking change]
* Adds two new lints, for trivial casts and trivial numeric casts. These are warn by default but can cause errors if you build with warnings as errors. Previously, trivial numeric casts and casts to trait objects were allowed. (A short sketch of the new lints follows the examples below.)
* The unused casts lint has gone.
* Interactions between casting and type inference have changed in subtle ways. Two ways this might manifest are:
- You may need to 'direct' casts more with extra type information. For example, in some cases where `foo as _ as T` succeeded, you may now need to specify the type for `_`.
- Casts do not influence inference of integer types. E.g., the following used to type check:
```rust
let x = 42;
let y = &x as *const u32;
```
Previously, the cast informed inference that `x` must have type `u32`. That no longer happens: the compiler falls back to `i32` for `x`, and the cast then produces a type error. The solution is to add more type information:
```rust
let x: u32 = 42;
let y = &x as *const u32;
```
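A small sketch of the two new lints in action; the lint levels are set explicitly here so the example does not depend on the defaults:
```rust
#![warn(trivial_casts, trivial_numeric_casts)]

fn main() {
    let x: &u32 = &42;
    // Warns under `trivial_casts`: the reference coerces to `*const u32`
    // on its own, so the `as` cast is unnecessary.
    let _p = x as *const u32;

    let n = 1_i32;
    // Warns under `trivial_numeric_casts`: `n` already has type `i32`.
    let _m = n as i32;
}
```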
This commit alters the behavior of the `Read::read_to_end()` method to zero all
memory instead of passing an uninitialized buffer to `read`. This change is
motivated by the [discussion on the internals forum][discuss] where the
conclusion has been that the standard library will not expose uninitialized
memory.
[discuss]: http://internals.rust-lang.org/t/uninitialized-memory/1652

Closes #20314
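A simplified sketch of the zero-before-reading pattern (a hypothetical `read_to_end_zeroed` helper, not the actual libstd implementation, which manages buffer growth more carefully, but the invariant is the same: `read` only ever sees zeroed memory, never uninitialized memory):
```rust
use std::io::{self, Read};

// Grow the buffer with zeroed bytes, hand only that zeroed region to
// `read`, and trim back down to what was actually filled.
fn read_to_end_zeroed<R: Read>(r: &mut R, buf: &mut Vec<u8>) -> io::Result<usize> {
    let start = buf.len();
    loop {
        let len = buf.len();
        // Extend with zeroed bytes before asking `read` to fill them.
        buf.resize(len + 4096, 0);
        match r.read(&mut buf[len..]) {
            Ok(0) => {
                buf.truncate(len);
                return Ok(len - start);
            }
            Ok(n) => buf.truncate(len + n),
            Err(ref e) if e.kind() == io::ErrorKind::Interrupted => buf.truncate(len),
            Err(e) => {
                buf.truncate(len);
                return Err(e);
            }
        }
    }
}

fn main() -> io::Result<()> {
    let mut data: &[u8] = b"hello";
    let mut out = Vec::new();
    let n = read_to_end_zeroed(&mut data, &mut out)?;
    assert_eq!((n, out.as_slice()), (5, &b"hello"[..]));
    Ok(())
}
```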
This commit enables writing a stable implementation of the `Hasher` trait as
well as actually calculating the hash of a value in a stable fashion. The
signature is stabilized as-is.
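As an illustration of what the stabilized signature supports, a hand-rolled FNV-1a hasher (chosen arbitrarily for the example; it is not the standard library's default hasher) plus hashing a value through it:
```rust
use std::hash::{Hash, Hasher};

// Running state of an FNV-1a hash.
struct Fnv1a(u64);

impl Hasher for Fnv1a {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 ^= b as u64;
            self.0 = self.0.wrapping_mul(0x100000001b3);
        }
    }
}

fn main() {
    // FNV-1a offset basis as the initial state.
    let mut hasher = Fnv1a(0xcbf29ce484222325);
    "hello".hash(&mut hasher);
    println!("hash = {:#x}", hasher.finish());
}
```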
PR #23104 moved `is_null` and `offset` to an inherent impl on the raw pointer type.
I'm not sure whether or how it's possible to link to docs for that impl.
r? @steveklabnik
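For reference, a small usage sketch of those methods in their new home as inherent methods on raw pointers:
```rust
fn main() {
    let xs = [10i32, 20, 30];
    let p = xs.as_ptr();
    // No trait import is needed; these are inherent methods now.
    assert!(!p.is_null());
    // `offset` is unsafe because the caller must keep the pointer in bounds.
    let second = unsafe { *p.offset(1) };
    assert_eq!(second, 20);
}
```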
This commit marks as `#[stable]` the `Entry` types for the maps provided
by `std`. The main reason these had been left unstable previously was
uncertainty about an eventual trait design, but several plausible
designs have been proposed that all work fine with the current type definitions.
r? @Gankro
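A small usage sketch of the now-stable `Entry` type for `HashMap` (the `BTreeMap` counterpart works the same way):
```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

fn main() {
    let mut counts: HashMap<&str, u32> = HashMap::new();
    for &word in ["a", "b", "a"].iter() {
        // `entry` hands back either an occupied or a vacant entry.
        match counts.entry(word) {
            Entry::Occupied(mut e) => {
                *e.get_mut() += 1;
            }
            Entry::Vacant(e) => {
                e.insert(1);
            }
        }
    }
    assert_eq!(counts["a"], 2);
}
```
The `or_insert` convenience method collapses the same match into `*counts.entry(word).or_insert(0) += 1;`.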
impls. This is a [breaking-change] (for gated code): when you
implement `Fn` (or `FnMut`) you must now also implement `FnOnce`. This
commit demonstrates how to fix it.
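A sketch of what a complete set of impls looks like on a current nightly compiler; the `fn_traits`/`unboxed_closures` feature gates and the `extern "rust-call"` ABI shown here are today's spelling (they have shifted over time), and `Adder` is just a made-up example type:
```rust
#![feature(fn_traits, unboxed_closures)]

struct Adder {
    amount: i32,
}

// `FnOnce` is the supertrait, so it must be implemented...
impl FnOnce<(i32,)> for Adder {
    type Output = i32;
    extern "rust-call" fn call_once(self, (x,): (i32,)) -> i32 {
        self.amount + x
    }
}

// ...then `FnMut`, which requires `FnOnce`...
impl FnMut<(i32,)> for Adder {
    extern "rust-call" fn call_mut(&mut self, (x,): (i32,)) -> i32 {
        self.amount + x
    }
}

// ...and finally `Fn`, which requires `FnMut`.
impl Fn<(i32,)> for Adder {
    extern "rust-call" fn call(&self, (x,): (i32,)) -> i32 {
        self.amount + x
    }
}

fn main() {
    let add_five = Adder { amount: 5 };
    assert_eq!(add_five(1), 6);
}
```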
The way backwards compatibility was retained led to documentation that
rustdoc didn't handle well and that ended up being largely confusing.
This trait has proven quite useful for defining marker traits while
avoiding the semi-confusing `PhantomFn` trait, and it looks like it
will continue to be a useful tool for defining such traits.