rustc: Implement the #[global_allocator] attribute
This PR is an implementation of [RFC 1974] which specifies a new method of
defining a global allocator for a program. This obsoletes the old
`#![allocator]` attribute and also removes support for it.
[RFC 1974]: https://github.com/rust-lang/rfcs/pull/1974
The new `#[global_allocator]` attribute solves many issues encountered with the
`#![allocator]` attribute such as composition and restrictions on the crate
graph itself. The compiler now has much more control over the ABI of the
allocator and how it's implemented, allowing much more freedom in terms of how
this feature is implemented.
cc #27389
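A minimal sketch of how the attribute is used, written against the interface as it was later stabilized (`std::alloc::System`); the exact paths and feature gates at the time of this PR differed:

```rust
use std::alloc::System;

// Route every heap allocation in this program through the system allocator.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // This Vec's buffer is allocated via `GLOBAL`.
    let v = vec![1, 2, 3];
    println!("{:?}", v);
}
```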
Document unintuitive argument order for Vec::dedup_by relation
When trying to use `dedup_by` to merge some auxiliary information from removed elements into kept elements, I was surprised to observe that `vec.dedup_by(same_bucket)` calls `same_bucket(a, b)` where `b` appears before `a` in the vector, and discards `a` when true is returned. This argument order is probably a bug, but since it has already been stabilized, I guess we should document it as a feature and move on.
(`Vec::dedup` also uses `==` with this unexpected argument order, but I figure that’s not important since `==` is expected to be symmetric with no side effects.)
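A small illustration of that order, merging a count from each removed element into the kept one (the tuples here are made up for the example):

```rust
fn main() {
    let mut v = vec![("a", 1), ("a", 2), ("b", 3)];
    // `same_bucket(a, b)` is called with `a` being the later element and
    // `b` the earlier (kept) one; returning `true` discards `a`.
    v.dedup_by(|a, b| {
        if a.0 == b.0 {
            b.1 += a.1; // fold the removed element's count into the kept one
            true
        } else {
            false
        }
    });
    assert_eq!(v, [("a", 3), ("b", 3)]);
}
```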
Delete deprecated & unstable range-specific `step_by`
Using the new one is annoying while this one exists, since the inherent method hides the one on iterator.
Tracking issue: #27741
Replacement: #41439
Deprecation: #42310 for 1.19
Fixes #41477
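For reference, a quick sketch of the replacement from #41439, the iterator-level `step_by` adapter (unstable at the time, stable today):

```rust
fn main() {
    // Instead of the old range-specific method, use the adapter on any iterator:
    let v: Vec<u32> = (0..10).step_by(3).collect();
    assert_eq!(v, [0, 3, 6, 9]);
}
```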
Improve tests and benchmarks for slice::sort and slice::sort_unstable
This PR just hardens the tests and improves benchmarks.
More specifically:
1. Benchmarks don't generate vectors in `Bencher::iter` loops, but simply clone pregenerated vectors (see the sketch after this list).
2. Benchmark `*_strings` doesn't allocate Strings in `Bencher::iter` loops, but merely clones a `Vec<&str>`.
3. Benchmarks use seeded `XorShiftRng` to be more consistent.
4. Additional tests for `slice::sort` are added, which test sorting on slices with several ascending/descending runs. The implementation identifies such runs so it's a good idea to test that scenario a bit.
5. More checks are added to `run-pass/vector-sort-panic-safe.rs`. Sort algorithms copy elements around a lot (merge sort uses an auxiliary buffer and pdqsort copies the pivot onto the stack before partitioning, then writes it back into the slice). If the elements being sorted are internally mutable and the comparison function mutates them, it is important to make sure that sort algorithms always use the latest "versions" of elements. New checks verify that this is true for both `slice::sort` and `slice::sort_unstable`.
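For points 1 and 3, a sketch of the intended benchmark shape; the hand-rolled xorshift below stands in for the `rand` crate's seeded `XorShiftRng`, and `#[bench]` needs a nightly compiler:

```rust
#![feature(test)]
extern crate test;
use test::Bencher;

// Tiny deterministic generator standing in for a seeded `XorShiftRng`.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

#[bench]
fn sort_unstable_random(b: &mut Bencher) {
    // Pregenerate the input once, outside the timed loop, from a fixed seed.
    let mut seed = 0x9E37_79B9_7F4A_7C15u64;
    let v: Vec<u64> = (0..10_000).map(|_| xorshift(&mut seed)).collect();

    b.iter(|| {
        // Each iteration only clones and sorts; it never regenerates the data.
        let mut v = v.clone();
        v.sort_unstable();
        v
    });
}
```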
As a side note, all of those improvements were made as part of the parallel sorts PR in Rayon (nikomatsakis/rayon#379) and now I'm backporting them into libcore/libstd.
r? @alexcrichton
std: Fix implementation of `Alloc::alloc_one`
This had an accidental `u8 as *mut T` where it was intended to have just a
normal pointer-to-pointer cast.
Closes #42827
Relaxed Debug constraints on {HashMap,BTreeMap}::{Keys,Values}.
I was hit by this yesterday too. 😄
And I've realised that `Debug` for `BTreeMap::{Keys,Values}` wasn't formatting just the keys and values respectively, but the whole map. 🤔 Fixed #41924
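A quick illustration of the fixed behaviour, where the iterators now format only their own items:

```rust
use std::collections::BTreeMap;

fn main() {
    let mut map = BTreeMap::new();
    map.insert("a", 1);
    map.insert("b", 2);

    // With the fix, the iterators debug-print only their own items,
    // not the whole map they were created from:
    println!("{:?}", map.keys());   // ["a", "b"]
    println!("{:?}", map.values()); // [1, 2]
}
```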
r? @jonhoo
Replaced "```ignore" doc tests by adding extra imports, adding hidden code (`# ...`), modifying examples to be runnable (sorry Homura), specifying non-Rust code, and converting to `should_panic`, `no_run`, or `compile_fail`.
Remaining "```ignore"s received an explanation of why they are being ignored.
Convert `Into<Box<[T]>> for Vec<T>` into `From<Vec<T>> for Box<[T]>`
As the `collections` crate has been merged into `alloc` in #42648 this impl is now possible. This is the final part of #42129 missing from #42227.
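A sketch of what the reversed impl provides; both the explicit `From` call and the existing `Into` direction keep working:

```rust
fn main() {
    let v = vec![1, 2, 3];

    // Direct use of the new impl...
    let boxed: Box<[i32]> = Box::from(v.clone());
    // ...and the old `Into` direction still works via the blanket impl.
    let boxed_too: Box<[i32]> = v.into();

    assert_eq!(&*boxed, &[1, 2, 3]);
    assert_eq!(&*boxed_too, &[1, 2, 3]);
}
```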
Allocator integration
Let's start getting some feedback on `trait Alloc`. This PR includes:
* the `trait Alloc` itself,
* the `struct Layout` and `enum AllocErr` that its API relies on
* a `struct HeapAlloc` that exposes the system allocator as an instance of `Alloc`
* an integration of `Alloc` with `RawVec`
* ~~an integration of `Alloc` with `Vec`~~
TODO
* [x] split `fn realloc_in_place` into `grow` and `shrink` variants
* [x] add `# Unsafety` and `# Errors` sections to documentation for all relevant methods
* [x] remove `Vec` integration with `Allocator`
* [x] add `allocate_zeroed` impl to `HeapAllocator`
* [x] remove typedefs e.g. `type Size = usize;`
* [x] impl `trait Error` for all error types in PR
* [x] make `Layout::from_size_align` public
* [x] clarify docs of `fn padding_needed_for`.
* [x] revise `Layout` constructors to ensure that [size+align combination is valid](https://github.com/rust-lang/rust/pull/42313#issuecomment-306845446)
* [x] resolve mismatch re requirements of align on dealloc. See [comment](https://github.com/rust-lang/rust/pull/42313#issuecomment-306202489).
Introduce tidy lint to check for inconsistent tracking issues
This PR
* Refactors the `collect_lib_features` function to work in a
non-checking mode (no `bad` pointer or list of lang features needed).
* Introduces a check that the unstable/stable tags for a given
feature don't have inconsistent tracking issues, i.e. multiple
tracking issues per feature.
* Fixes such inconsistencies throughout the codebase.
Includes methods exposing the underlying allocator and the deallocation
routine.
Includes a test illustrating a tiny `Alloc` that just bounds the total
bytes allocated.
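The bounding idea from that test, sketched here against the `GlobalAlloc` trait that was stabilized later (the `Alloc` trait from this PR has different, since-revised signatures); `BoundedAlloc` and its fields are invented for the illustration:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// An allocator that refuses to hand out more than `limit` bytes in total.
struct BoundedAlloc {
    limit: usize,
    used: AtomicUsize,
}

unsafe impl GlobalAlloc for BoundedAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let old = self.used.fetch_add(layout.size(), Ordering::SeqCst);
        if old + layout.size() > self.limit {
            // Undo the bookkeeping and report failure with a null pointer.
            self.used.fetch_sub(layout.size(), Ordering::SeqCst);
            return std::ptr::null_mut();
        }
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        self.used.fetch_sub(layout.size(), Ordering::SeqCst);
        System.dealloc(ptr, layout);
    }
}

// It can even be installed as the program's global allocator.
#[global_allocator]
static GLOBAL: BoundedAlloc = BoundedAlloc {
    limit: 64 * 1024 * 1024,
    used: AtomicUsize::new(0),
};

fn main() {
    // Ordinary allocations go through `GLOBAL` and count against the limit.
    let v: Vec<u8> = vec![0; 4096];
    assert_eq!(v.len(), 4096);
}
```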
Alpha-renamed `Allocator` to `Alloc` (and `HeapAllocator` to `HeapAlloc`).
`<HeapAlloc as Alloc>::alloc_zeroed` is hooked up to `heap::allocate_zeroed`.
`HeapAlloc::realloc` falls back on alloc+copy+dealloc on align mismatch.
Added `unwrap` calls in all the places where I can infer that the
conditions are met to avoid panic (or when the calling method itself
says it will panic in such a case).
Includes `alloc_zeroed` method that `RawVec` has come to depend on.
Made the private `Layout::from_size_align` ctor `pub`, and added
explicit conditions for when it will panic (namely, when `align` is
not a power of two, or if rounding `size` up to a multiple of `align`
overflows). Normalized all `Layout` construction to go through
`Layout::from_size_align`.
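Those conditions, shown with the `Layout` type as it later landed in `std::alloc`, where the invalid cases are reported as an error rather than a panic:

```rust
use std::alloc::Layout;

fn main() {
    // A valid size/align pair: the alignment is a power of two and rounding
    // the size up to a multiple of the alignment doesn't overflow.
    let layout = Layout::from_size_align(13, 8).unwrap();
    assert_eq!(layout.size(), 13);
    assert_eq!(layout.align(), 8);

    // A non-power-of-two alignment is rejected.
    assert!(Layout::from_size_align(13, 3).is_err());
}
```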
Addressed review feedback regarding `struct Layout` and zero-sized
layouts.
Restrict specification for `dealloc`, adding additional constraint
that the given alignment has to match that used to allocate the block.
(This is a maximally conservative constraint on the alignment. An open
question to resolve (before stabilization) is whether we can return to
a looser constraint such as the one previously specified.)
Split `fn realloc_in_place` into separate `fn grow_in_place` and `fn
shrink_in_place` methods, which have default impls that check against
usable_size for reuse. Make `realloc` default impl try `grow_in_place`
or `shrink_in_place` as appropriate before fallback on
alloc+copy+dealloc.
Drive-by: When reviewing calls to `padding_needed_for`, discovered
what I think was an over-conservative choice for its argument
alignment. Namely, in `fn extend`, we automatically realign the whole
resulting layout to satisfy both old (self) and new alignments. When
the old alignment exceeds the new, this means we would insert
unnecessary padding. So I changed the code to pass in `next.align`
instead of `new_align` to `padding_needed_for`.
Replaced ref to `realloc_in_place` with `grow_in_place`/`shrink_in_place`.
Revised docs replacing my idiosyncratic style of `fn foo` with just
`foo` when referring to the function or method `foo`.
(Alpha-renamed `Allocator` to `Alloc`.)
Post-rebased, added `Debug` derive for `allocator::Excess` to satisfy
`missing_debug_implementations`.
It was decided in the RFC discussion https://github.com/rust-lang/rfcs/pull/1954 to make the function call syntax `Rc::clone(&foo)` the idiomatic way to clone a reference-counted pointer (over the method call syntax `foo.clone()`). This change updates the documentation of `Rc`, `Arc`, and their respective `Weak` pointers to reflect this and bring more exposure to the existence of the function call syntax.
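A short example of the now-preferred spelling; the function call syntax makes it explicit that only the reference count changes, not the underlying data:

```rust
use std::rc::Rc;

fn main() {
    let foo = Rc::new(vec![1, 2, 3]);

    // Function call syntax: clearly clones the pointer, not the Vec.
    let bar = Rc::clone(&foo);

    assert_eq!(Rc::strong_count(&foo), 2);
    assert!(Rc::ptr_eq(&foo, &bar)); // both handles point at the same allocation
}
```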
refactor NonZero, Shared, and Unique APIs
Major difference is that I removed Deref impls, as apparently LLVM has
trouble maintaining metadata with a `&ptr -> &ptr` API. This was cited
as a blocker for ever stabilizing this API. It wasn't that ergonomic
anyway.
* Added `get` to NonZero to replace Deref impl
* Added `ptr` getter to Shared/Unique to replace Deref impl
* Added Unique's `get` and `get_mut` conveniences to Shared
* Deprecated `as_mut_ptr` on Shared in favour of `ptr`
Note that Shared used to primarily expose only `*const` but there isn't
a good justification for that, so I made it `*mut`.
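A rough sketch of the getter-over-`Deref` shape, using today's stable descendants of these types (`NonZeroUsize` for `NonZero`, and `NonNull`, which `Shared` later became); the internal `Shared` and `Unique` types themselves aren't usable outside the standard library:

```rust
use std::num::NonZeroUsize;
use std::ptr::NonNull;

fn main() {
    // Explicit `get` instead of dereferencing to the wrapped integer.
    let n = NonZeroUsize::new(8).unwrap();
    assert_eq!(n.get(), 8);

    // Explicit raw-pointer accessor instead of `Deref` to the pointer.
    let mut x = 5i32;
    let p = NonNull::new(&mut x as *mut i32).unwrap();
    assert_eq!(p.as_ptr(), &mut x as *mut i32);
}
```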