The GNU C library (glibc) is documented to always allocate with an alignment
of at least 8 or 16 bytes, on 32-bit or 64-bit platforms respectively:
https://www.gnu.org/software/libc/manual/html_node/Aligned-Memory-Blocks.html
This matches our use of `MIN_ALIGN` before this commit.
However, even when libc is glibc, the program might be linked
with another allocator that redefines the `malloc` symbol and friends.
(The `alloc_jemalloc` crate does, in some cases.)
So `alloc_system` doesn’t know which allocator it calls,
and needs to be conservative in the assumptions it makes.
The C standard says:
https://port70.net/%7Ensz/c/c11/n1570.html#7.22.3
> The pointer returned if the allocation succeeds is suitably aligned
> so that it may be assigned to a pointer to any type of object
> with a fundamental alignment requirement
https://port70.net/~nsz/c/c11/n1570.html#6.2.8p2
> A fundamental alignment is represented by an alignment less than
> or equal to the greatest alignment supported by the implementation
> in all contexts, which is equal to `_Alignof (max_align_t)`.
`_Alignof (max_align_t)` depends on the ABI and doesn’t seem to have
a clear definition, but it seems to match our `MIN_ALIGN` in practice.
However, the size of objects is rounded up to the next multiple
of their alignment (since that size is also the stride used in arrays).
Consequently, the alignment of a non-zero-size object is at most its size.
So for example it seems to be legal for `malloc(8)` to return a pointer
that’s only 8-byte-aligned, even if `_Alignof (max_align_t)` is 16.
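Concretely, this means the allocator can only forward a request straight to
`malloc` when the requested alignment is no larger than both `MIN_ALIGN` and
the requested size; anything else has to go through an explicitly aligned
allocation. Below is a minimal sketch of that dispatch, written against
today's `GlobalAlloc` trait and the `libc` crate rather than the in-tree
`alloc_system` code; the `ConservativeSystem` name and the
pointer-width-based `MIN_ALIGN` are simplifications for illustration.

```rust
use std::alloc::{GlobalAlloc, Layout};
use std::ptr;

// Simplified stand-in for the per-target constant discussed above.
#[cfg(target_pointer_width = "32")]
const MIN_ALIGN: usize = 8;
#[cfg(target_pointer_width = "64")]
const MIN_ALIGN: usize = 16;

struct ConservativeSystem;

unsafe impl GlobalAlloc for ConservativeSystem {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // `malloc` is only assumed to give `layout.align()` bytes of
        // alignment when that alignment is "fundamental" *and* no larger
        // than the requested size (per the C11 wording quoted above).
        if layout.align() <= MIN_ALIGN && layout.align() <= layout.size() {
            libc::malloc(layout.size()) as *mut u8
        } else {
            // Otherwise, ask for the alignment explicitly.
            // posix_memalign requires the alignment to be a multiple of
            // the pointer size; bump small alignments up (both are powers
            // of two, so `max` preserves the requested alignment).
            let align = layout.align().max(std::mem::size_of::<*mut libc::c_void>());
            let mut out: *mut libc::c_void = ptr::null_mut();
            if libc::posix_memalign(&mut out, align, layout.size()) == 0 {
                out as *mut u8
            } else {
                ptr::null_mut()
            }
        }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
        // Both allocation paths above can be released with `free`.
        libc::free(ptr as *mut libc::c_void);
    }
}
```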
This commit removes the `rand` crate from the standard library facade as
well as the `__rand` module in the standard library. Neither of these
was used in any meaningful way in the standard library itself. The only
need for randomness in libstd is to initialize the thread-local keys of
a `HashMap`, and that unconditionally used `OsRng` defined in the
standard library anyway.
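To make that single use concrete, here is a rough, Unix-only sketch (not the
actual libstd code) of what hashmap seeding amounts to: one read from the OS
randomness source per thread, cached and reused for every `HashMap`'s hasher
keys. Reading `/dev/urandom` directly stands in for the platform-specific
`OsRng` paths, and `hash_keys` is a made-up name for illustration.

```rust
use std::cell::Cell;
use std::fs::File;
use std::io::Read;

thread_local! {
    // One pair of hasher keys per thread, fetched lazily from the OS.
    static SEED: Cell<(u64, u64)> = {
        let mut k0 = [0u8; 8];
        let mut k1 = [0u8; 8];
        File::open("/dev/urandom")
            .and_then(|mut f| {
                f.read_exact(&mut k0)?;
                f.read_exact(&mut k1)
            })
            .expect("failed to read OS randomness");
        Cell::new((u64::from_le_bytes(k0), u64::from_le_bytes(k1)))
    };
}

// Each `RandomState` (and therefore each `HashMap`) derives its hasher keys
// from the per-thread seed, so the OS is consulted only once per thread.
fn hash_keys() -> (u64, u64) {
    SEED.with(|seed| seed.get())
}

fn main() {
    let (k0, k1) = hash_keys();
    println!("hasher keys: {:#x} {:#x}", k0, k1);
}
```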
The cruft of the `rand` crate and the extra `rand` support in the
standard library make libstd slightly more difficult to port to new
platforms, namely WebAssembly, which doesn't have any randomness at all
(without interfacing with JS). The purpose of this commit is to clarify
and streamline randomness in libstd, emphasizing that it's only required
in one location: hashmap seeds.
Note that the `rand` crate out of tree has almost always been a drop-in
replacement for the `rand` crate in-tree, so any usage (accidental or
purposeful) of the crate in-tree should switch to the `rand` crate on
crates.io. This also has the further benefit of (mostly) avoiding
duplication between the two crates!
Add Vec::drain_filter
This implements the API proposed in #43244.
So I spent like half a day figuring out how to implement this in some awesome super-optimized unsafe way, which had me very confident this was worth putting into the stdlib.
Then I looked at the impl for `retain`, and was like "oh dang". I compared the two and they basically ended up being the same speed. And the `retain` impl probably translates to a `DoubleEndedIterator` a lot more cleanly if we ever want that.
So now I'm not totally confident this needs to go in the stdlib, but I've got two implementations and an amazingly robust test suite, so I figured I might as well toss it over the fence for discussion.
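For reference, a small usage sketch of the proposed API (assuming the
unstable `drain_filter` feature gate): drain the even numbers out of a
vector in a single pass while leaving the other elements in place.

```rust
#![feature(drain_filter)]

fn main() {
    let mut numbers = vec![1, 2, 3, 4, 5, 6];

    // The predicate gets `&mut` access to each element; elements for which
    // it returns `true` are removed from the vector and yielded.
    let evens: Vec<i32> = numbers.drain_filter(|n| *n % 2 == 0).collect();

    assert_eq!(evens, [2, 4, 6]);
    assert_eq!(numbers, [1, 3, 5]);
}
```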
Add method `String::retain`
Behaves like `Vec::retain`, accepting a predicate `FnMut(char) -> bool`
and reducing the string to only characters for which the predicate
returns `true`.
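A brief usage sketch of the method, removing underscores from a string
in place:

```rust
fn main() {
    let mut s = String::from("f_o_ob_ar");

    // Keep only the characters for which the predicate returns `true`.
    s.retain(|c| c != '_');

    assert_eq!(s, "foobar");
}
```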