Port sort-research-rs test suite to Rust stdlib tests

This commit is a followup to https://github.com/rust-lang/rust/pull/124032. It
replaces the tests that test the various sort functions in the standard library
with a test-suite developed as part of
https://github.com/Voultapher/sort-research-rs. The current tests suffer from a
couple of problems:

- They don't cover important real world patterns that the implementations take
  advantage of and execute special code for.
- The input lengths tested miss out on code paths. For example, important safety
  property tests never reach the quicksort part of the implementation.
- The miri side is often limited to `len <= 20` which means it very thoroughly
  tests the insertion sort, which accounts for 19 out of 1.5k LoC.
- They are split between core and alloc, causing code duplication and uneven
  coverage.
- The randomness is not repeatable, as it
  relies on `std::hash::RandomState::new().build_hasher()`.

Most of these issues existed before
https://github.com/rust-lang/rust/pull/124032, but they are intensified by it.
One thing that is new and requires additional testing is that the new sort
implementations specialize based on type properties. For example, `Freeze` and
non-`Freeze` types execute different code paths.
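What makes the non-`Freeze` path special is interior mutability: a comparator receives shared references and may still mutate the elements through them, so the sort can't freely work on stale copies. A minimal standalone sketch (the `Tagged` type is illustrative, not part of the suite) of the behavior that needs coverage:

```rust
use std::cell::Cell;

// `Cell` gives `Tagged` interior mutability, which makes it non-`Freeze`.
#[derive(Clone)]
struct Tagged {
    key: i32,
    touched: Cell<u32>, // mutated through `&self` during comparison
}

fn main() {
    let mut v: Vec<Tagged> =
        (0..16).rev().map(|key| Tagged { key, touched: Cell::new(0) }).collect();

    v.sort_by(|a, b| {
        // Mutation through shared references; the result must be observable
        // in the final slice, even if the sort works on copies internally.
        a.touched.set(a.touched.get() + 1);
        b.touched.set(b.touched.get() + 1);
        a.key.cmp(&b.key)
    });

    assert!(v.windows(2).all(|w| w[0].key <= w[1].key));
    // Every element of a slice with len > 1 is compared at least once.
    assert!(v.iter().all(|t| t.touched.get() > 0));
}
```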

Effectively there are three dimensions that matter:

- Input type
- Input length
- Input pattern

The ported test-suite tests various properties along all three dimensions,
greatly improving test coverage. It side-steps the miri issue by preferring
sampled approaches. For example, the test that checks whether the set of
elements after a panic is still the original one doesn't do so for every single
possible panic opportunity; instead it picks one at random and performs this
test across a range of input lengths, which varies the panic point across them.
This allows regular execution to easily test inputs of length 10k, and miri
execution inputs up to length 100, which covers significantly more code. The
randomness used is tied to a seed that is fixed within, but random across,
process executions. This allows fully repeatable tests as well as fuzzer-like
exploration across multiple runs.
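The sampled panic-point idea can be sketched as standalone code (a simplified illustration, not the ported suite itself): count the comparisons a sort needs, inject a panic at one randomly chosen comparison, and check that no element was lost:

```rust
use std::panic::{self, AssertUnwindSafe};

// Tiny xorshift64 so the sketch is self-contained and repeatable.
fn xorshift(seed: &mut u64) -> u64 {
    *seed ^= *seed << 13;
    *seed ^= *seed >> 7;
    *seed ^= *seed << 17;
    *seed
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // silence the injected panics
    let mut seed = 0x1234_5678_9abc_def0_u64;

    for len in [2usize, 5, 20, 100] {
        let orig: Vec<u64> = (0..len).map(|_| xorshift(&mut seed)).collect();

        // Count how many comparisons a full sort performs on this input...
        let mut total = 0usize;
        orig.clone().sort_by(|a, b| {
            total += 1;
            a.cmp(b)
        });

        // ...then pick a single panic point at random instead of trying all.
        let mut countdown = (xorshift(&mut seed) as usize) % total;
        let mut v = orig.clone();
        let _ = panic::catch_unwind(AssertUnwindSafe(|| {
            v.sort_by(|a, b| {
                if countdown == 0 {
                    panic!("injected");
                }
                countdown -= 1;
                a.cmp(b)
            });
        }));

        // Panic or not, `v` must still hold exactly the original multiset.
        let (mut got, mut expected) = (v.clone(), orig.clone());
        got.sort();
        expected.sort();
        assert_eq!(got, expected);
    }
    let _ = panic::take_hook(); // restore the default hook
}
```

Varying `len` moves the panic point through the insertion-sort, merge, and quicksort regions of the implementation without enumerating every comparison.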

Structure-wise, the tests were previously found in the core integration tests
for `sort_unstable` and the alloc unit tests for `sort`. The new test-suite was
developed as a purely black-box approach, which makes integration testing the
better place, because it can't accidentally rely on internal access. Because
unwinding support is required, the tests can't live in core, even if the
implementation does, so they are now part of the alloc integration tests. Are
there architectures that can only build and test core and not alloc? If so, do
such platforms require sort testing? For what it's worth the current
implementation state passes miri `--target mips64-unknown-linux-gnuabi64` which
is big endian.

The test-suite also contains tests for properties that were and are provided
by the current and previous implementations, and likely relied upon by users,
but weren't tested. For example, `self_cmp` tests that the two parameters `a`
and `b` passed into the comparison function are never references to the same
object; if they were, sorting for example a `&mut [Mutex<i32>]` could lead to a
deadlock.
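A minimal standalone sketch of that property (illustrative, not the suite's actual test): the comparator itself checks that the two references never alias:

```rust
fn main() {
    for len in [0usize, 1, 2, 16, 500] {
        let mut v: Vec<i32> = (0..len as i32).rev().collect();
        v.sort_by(|a, b| {
            // If `a` and `b` ever aliased, code that locks a `Mutex` behind
            // each reference would deadlock right here.
            assert!(!std::ptr::eq(a, b), "comparator got aliasing references");
            a.cmp(b)
        });
        assert!(v.windows(2).all(|w| w[0] <= w[1]));
    }
}
```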

Instead of using the hashed caller location as the random seed, it uses
seconds since the UNIX epoch divided by 10, which, given the timestamps in CI
logs, should be reasonably easy to reproduce, while still allowing fuzzer-like
exploration of the space.
This commit is contained in:
Lukas Bergdoll 2024-09-30 10:54:23 +02:00
parent e9df22f51d
commit 71bb0e72ce
10 changed files with 1955 additions and 434 deletions


@@ -19,20 +19,6 @@
use core::mem::{self, MaybeUninit};
#[cfg(not(no_global_oom_handling))]
use core::ptr;
#[cfg(not(no_global_oom_handling))]
use core::slice::sort;
use crate::alloc::Allocator;
#[cfg(not(no_global_oom_handling))]
use crate::alloc::Global;
#[cfg(not(no_global_oom_handling))]
use crate::borrow::ToOwned;
use crate::boxed::Box;
use crate::vec::Vec;
#[cfg(test)]
mod tests;
#[unstable(feature = "array_chunks", issue = "74985")]
pub use core::slice::ArrayChunks;
#[unstable(feature = "array_chunks", issue = "74985")]
@@ -43,6 +29,8 @@
pub use core::slice::EscapeAscii;
#[stable(feature = "slice_get_slice", since = "1.28.0")]
pub use core::slice::SliceIndex;
#[cfg(not(no_global_oom_handling))]
use core::slice::sort;
#[stable(feature = "slice_group_by", since = "1.77.0")]
pub use core::slice::{ChunkBy, ChunkByMut};
#[stable(feature = "rust1", since = "1.0.0")]
@@ -83,6 +71,14 @@
#[cfg(test)]
pub use hack::to_vec;
use crate::alloc::Allocator;
#[cfg(not(no_global_oom_handling))]
use crate::alloc::Global;
#[cfg(not(no_global_oom_handling))]
use crate::borrow::ToOwned;
use crate::boxed::Box;
use crate::vec::Vec;
// HACK(japaric): With cfg(test) `impl [T]` is not available, these three
// functions are actually methods that are in `impl [T]` but not in
// `core::slice::SliceExt` - we need to supply these functions for the


@@ -1,369 +0,0 @@
use core::cell::Cell;
use core::cmp::Ordering::{self, Equal, Greater, Less};
use core::convert::identity;
use core::sync::atomic::AtomicUsize;
use core::sync::atomic::Ordering::Relaxed;
use core::{fmt, mem};
use std::panic;
use rand::distributions::Standard;
use rand::prelude::*;
use rand::{Rng, RngCore};
use crate::borrow::ToOwned;
use crate::rc::Rc;
use crate::string::ToString;
use crate::test_helpers::test_rng;
use crate::vec::Vec;
macro_rules! do_test {
($input:ident, $func:ident) => {
let len = $input.len();
// Work out the total number of comparisons required to sort
// this array...
let mut count = 0usize;
$input.to_owned().$func(|a, b| {
count += 1;
a.cmp(b)
});
// ... and then panic on each and every single one.
for panic_countdown in 0..count {
// Refresh the counters.
VERSIONS.store(0, Relaxed);
for i in 0..len {
DROP_COUNTS[i].store(0, Relaxed);
}
let v = $input.to_owned();
let _ = panic::catch_unwind(move || {
let mut v = v;
let mut panic_countdown = panic_countdown;
v.$func(|a, b| {
if panic_countdown == 0 {
SILENCE_PANIC.with(|s| s.set(true));
panic!();
}
panic_countdown -= 1;
a.cmp(b)
})
});
// Check that the number of things dropped is exactly
// what we expect (i.e., the contents of `v`).
for (i, c) in DROP_COUNTS.iter().enumerate().take(len) {
let count = c.load(Relaxed);
assert!(count == 1, "found drop count == {} for i == {}, len == {}", count, i, len);
}
// Check that the most recent versions of values were dropped.
assert_eq!(VERSIONS.load(Relaxed), 0);
}
};
}
const MAX_LEN: usize = 80;
static DROP_COUNTS: [AtomicUsize; MAX_LEN] = [
// FIXME(RFC 1109): AtomicUsize is not Copy.
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
AtomicUsize::new(0),
];
static VERSIONS: AtomicUsize = AtomicUsize::new(0);
#[derive(Clone, Eq)]
struct DropCounter {
x: u32,
id: usize,
version: Cell<usize>,
}
impl PartialEq for DropCounter {
fn eq(&self, other: &Self) -> bool {
self.partial_cmp(other) == Some(Ordering::Equal)
}
}
impl PartialOrd for DropCounter {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
self.version.set(self.version.get() + 1);
other.version.set(other.version.get() + 1);
VERSIONS.fetch_add(2, Relaxed);
self.x.partial_cmp(&other.x)
}
}
impl Ord for DropCounter {
fn cmp(&self, other: &Self) -> Ordering {
self.partial_cmp(other).unwrap()
}
}
impl Drop for DropCounter {
fn drop(&mut self) {
DROP_COUNTS[self.id].fetch_add(1, Relaxed);
VERSIONS.fetch_sub(self.version.get(), Relaxed);
}
}
std::thread_local!(static SILENCE_PANIC: Cell<bool> = Cell::new(false));
#[test]
#[cfg_attr(target_os = "emscripten", ignore)] // no threads
#[cfg_attr(not(panic = "unwind"), ignore = "test requires unwinding support")]
fn panic_safe() {
panic::update_hook(move |prev, info| {
if !SILENCE_PANIC.with(|s| s.get()) {
prev(info);
}
});
let mut rng = test_rng();
// Miri is too slow (but still need to `chain` to make the types match)
let lens = if cfg!(miri) { (1..10).chain(0..0) } else { (1..20).chain(70..MAX_LEN) };
let moduli: &[u32] = if cfg!(miri) { &[5] } else { &[5, 20, 50] };
for len in lens {
for &modulus in moduli {
for &has_runs in &[false, true] {
let mut input = (0..len)
.map(|id| DropCounter {
x: rng.next_u32() % modulus,
id: id,
version: Cell::new(0),
})
.collect::<Vec<_>>();
if has_runs {
for c in &mut input {
c.x = c.id as u32;
}
for _ in 0..5 {
let a = rng.gen::<usize>() % len;
let b = rng.gen::<usize>() % len;
if a < b {
input[a..b].reverse();
} else {
input.swap(a, b);
}
}
}
do_test!(input, sort_by);
do_test!(input, sort_unstable_by);
}
}
}
// Set default panic hook again.
drop(panic::take_hook());
}
#[test]
#[cfg_attr(miri, ignore)] // Miri is too slow
#[cfg_attr(not(panic = "unwind"), ignore = "test requires unwinding support")]
fn test_sort() {
let mut rng = test_rng();
for len in (2..25).chain(500..510) {
for &modulus in &[5, 10, 100, 1000] {
for _ in 0..10 {
let orig: Vec<_> = (&mut rng)
.sample_iter::<i32, _>(&Standard)
.map(|x| x % modulus)
.take(len)
.collect();
// Sort in default order.
let mut v = orig.clone();
v.sort();
assert!(v.windows(2).all(|w| w[0] <= w[1]));
// Sort in ascending order.
let mut v = orig.clone();
v.sort_by(|a, b| a.cmp(b));
assert!(v.windows(2).all(|w| w[0] <= w[1]));
// Sort in descending order.
let mut v = orig.clone();
v.sort_by(|a, b| b.cmp(a));
assert!(v.windows(2).all(|w| w[0] >= w[1]));
// Sort in lexicographic order.
let mut v1 = orig.clone();
let mut v2 = orig.clone();
v1.sort_by_key(|x| x.to_string());
v2.sort_by_cached_key(|x| x.to_string());
assert!(v1.windows(2).all(|w| w[0].to_string() <= w[1].to_string()));
assert!(v1 == v2);
// Sort with many pre-sorted runs.
let mut v = orig.clone();
v.sort();
v.reverse();
for _ in 0..5 {
let a = rng.gen::<usize>() % len;
let b = rng.gen::<usize>() % len;
if a < b {
v[a..b].reverse();
} else {
v.swap(a, b);
}
}
v.sort();
assert!(v.windows(2).all(|w| w[0] <= w[1]));
}
}
}
const ORD_VIOLATION_MAX_LEN: usize = 500;
let mut v = [0; ORD_VIOLATION_MAX_LEN];
for i in 0..ORD_VIOLATION_MAX_LEN {
v[i] = i as i32;
}
// Sort using a completely random comparison function. This will reorder the elements *somehow*,
// it may panic but the original elements must still be present.
let _ = panic::catch_unwind(move || {
v.sort_by(|_, _| *[Less, Equal, Greater].choose(&mut rng).unwrap());
});
v.sort();
for i in 0..ORD_VIOLATION_MAX_LEN {
assert_eq!(v[i], i as i32);
}
// Should not panic.
[0i32; 0].sort();
[(); 10].sort();
[(); 100].sort();
let mut v = [0xDEADBEEFu64];
v.sort();
assert!(v == [0xDEADBEEF]);
}
#[test]
fn test_sort_stability() {
// Miri is too slow
let large_range = if cfg!(miri) { 0..0 } else { 500..510 };
let rounds = if cfg!(miri) { 1 } else { 10 };
let mut rng = test_rng();
for len in (2..25).chain(large_range) {
for _ in 0..rounds {
let mut counts = [0; 10];
// create a vector like [(6, 1), (5, 1), (6, 2), ...],
// where the first item of each tuple is random, but
// the second item represents which occurrence of that
// number this element is, i.e., the second elements
// will occur in sorted order.
let orig: Vec<_> = (0..len)
.map(|_| {
let n = rng.gen::<usize>() % 10;
counts[n] += 1;
(n, counts[n])
})
.collect();
let mut v = orig.clone();
// Only sort on the first element, so an unstable sort
// may mix up the counts.
v.sort_by(|&(a, _), &(b, _)| a.cmp(&b));
// This comparison includes the count (the second item
// of the tuple), so elements with equal first items
// will need to be ordered with increasing
// counts... i.e., exactly asserting that this sort is
// stable.
assert!(v.windows(2).all(|w| w[0] <= w[1]));
let mut v = orig.clone();
v.sort_by_cached_key(|&(x, _)| x);
assert!(v.windows(2).all(|w| w[0] <= w[1]));
}
}
}


@@ -41,6 +41,7 @@
#![feature(local_waker)]
#![feature(vec_pop_if)]
#![feature(unique_rc_arc)]
#![feature(macro_metavar_expr_concat)]
#![allow(internal_features)]
#![deny(fuzzy_provenance_casts)]
#![deny(unsafe_op_in_unsafe_fn)]
@@ -60,6 +61,7 @@
mod linked_list;
mod rc;
mod slice;
mod sort;
mod str;
mod string;
mod task;


@@ -0,0 +1,82 @@
use std::cmp::Ordering;
// Very large stack value.
#[repr(C)]
#[derive(PartialEq, Eq, Debug, Clone)]
pub struct FFIOneKibiByte {
values: [i64; 128],
}
impl FFIOneKibiByte {
pub fn new(val: i32) -> Self {
let mut values = [0i64; 128];
let mut val_i64 = val as i64;
for elem in &mut values {
*elem = val_i64;
val_i64 = std::hint::black_box(val_i64 + 1);
}
Self { values }
}
fn as_i64(&self) -> i64 {
self.values[11] + self.values[55] + self.values[77]
}
}
impl PartialOrd for FFIOneKibiByte {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Ord for FFIOneKibiByte {
fn cmp(&self, other: &Self) -> Ordering {
self.as_i64().cmp(&other.as_i64())
}
}
// 16 byte stack value, with more expensive comparison.
#[repr(C)]
#[derive(PartialEq, Debug, Clone, Copy)]
pub struct F128 {
x: f64,
y: f64,
}
impl F128 {
pub fn new(val: i32) -> Self {
let val_f = (val as f64) + (i32::MAX as f64) + 10.0;
let x = val_f + 0.1;
let y = val_f.log(4.1);
assert!(y < x);
assert!(x.is_normal() && y.is_normal());
Self { x, y }
}
}
// This is kind of hacky, but we know we only have normal comparable floats in there.
impl Eq for F128 {}
impl PartialOrd for F128 {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
// Goal is similar code-gen between Rust and C++
// - Rust https://godbolt.org/z/3YM3xenPP
// - C++ https://godbolt.org/z/178M6j1zz
impl Ord for F128 {
fn cmp(&self, other: &Self) -> Ordering {
// Simulate expensive comparison function.
let this_div = self.x / self.y;
let other_div = other.x / other.y;
// SAFETY: We checked in the ctor that both are normal.
unsafe { this_div.partial_cmp(&other_div).unwrap_unchecked() }
}
}


@@ -0,0 +1,192 @@
// This module implements a known good stable sort implementation that helps provide better error
// messages when the correctness tests fail. We can't use the stdlib sort functions because we are
// testing them for correctness.
//
// Based on https://github.com/voultapher/tiny-sort-rs.
use alloc::alloc::{Layout, alloc, dealloc};
use std::{mem, ptr};
/// Sort `v` preserving initial order of equal elements.
///
/// - Guaranteed O(N * log(N)) worst case perf
/// - No adaptiveness
/// - Branch miss-prediction not affected by outcome of comparison function
/// - Uses `v.len()` auxiliary memory.
///
/// If `T: Ord` does not implement a total order the resulting order is
/// unspecified. All original elements will remain in `v` and any possible modifications via
/// interior mutability will be observable. Same is true if `T: Ord` panics.
///
/// Panics if allocating the auxiliary memory fails.
#[inline(always)]
pub fn sort<T: Ord>(v: &mut [T]) {
stable_sort(v, |a, b| a.lt(b))
}
#[inline(always)]
fn stable_sort<T, F: FnMut(&T, &T) -> bool>(v: &mut [T], mut is_less: F) {
if mem::size_of::<T>() == 0 {
return;
}
let len = v.len();
// Inline the check for len < 2. This happens a lot, instrumenting the Rust compiler suggests
// len < 2 accounts for 94% of its calls to `slice::sort`.
if len < 2 {
return;
}
// SAFETY: We checked that len is > 0 and that T is not a ZST.
unsafe {
mergesort_main(v, &mut is_less);
}
}
/// The core logic should not be inlined.
///
/// SAFETY: The caller has to ensure that len is > 0 and that T is not a ZST.
#[inline(never)]
unsafe fn mergesort_main<T, F: FnMut(&T, &T) -> bool>(v: &mut [T], is_less: &mut F) {
// While it would be nice to have a merge implementation that only requires N / 2 auxiliary
// memory, doing so would make the merge implementation significantly more complex.
// SAFETY: See function safety description.
let buf = unsafe { BufGuard::new(v.len()) };
// SAFETY: `scratch` has space for `v.len()` writes. And does not alias `v`.
unsafe {
mergesort_core(v, buf.buf_ptr.as_ptr(), is_less);
}
}
/// Tiny recursive top-down merge sort optimized for binary size. It has no adaptiveness whatsoever,
/// no run detection, etc.
///
/// Buffer as pointed to by `scratch` must have space for `v.len()` writes. And must not alias `v`.
#[inline(always)]
unsafe fn mergesort_core<T, F: FnMut(&T, &T) -> bool>(
v: &mut [T],
scratch_ptr: *mut T,
is_less: &mut F,
) {
let len = v.len();
if len > 2 {
// SAFETY: `mid` is guaranteed in-bounds. And caller has to ensure that `scratch_ptr` can
// hold `v.len()` values.
unsafe {
let mid = len / 2;
// Sort the left half recursively.
mergesort_core(v.get_unchecked_mut(..mid), scratch_ptr, is_less);
// Sort the right half recursively.
mergesort_core(v.get_unchecked_mut(mid..), scratch_ptr, is_less);
// Combine the two halves.
merge(v, scratch_ptr, is_less, mid);
}
} else if len == 2 {
if is_less(&v[1], &v[0]) {
v.swap(0, 1);
}
}
}
/// Branchless merge function.
///
/// SAFETY: The caller must ensure that `scratch_ptr` is valid for `v.len()` writes. And that mid is
/// in-bounds.
#[inline(always)]
unsafe fn merge<T, F>(v: &mut [T], scratch_ptr: *mut T, is_less: &mut F, mid: usize)
where
F: FnMut(&T, &T) -> bool,
{
let len = v.len();
debug_assert!(mid > 0 && mid < len);
// Indexes to track the positions while merging.
let mut l = 0;
let mut r = mid;
// SAFETY: No matter what the result of is_less is we check that l and r remain in-bounds and if
// is_less panics the original elements remain in `v`.
unsafe {
let arr_ptr = v.as_ptr();
for i in 0..len {
let left_ptr = arr_ptr.add(l);
let right_ptr = arr_ptr.add(r);
let is_lt = !is_less(&*right_ptr, &*left_ptr);
let copy_ptr = if is_lt { left_ptr } else { right_ptr };
ptr::copy_nonoverlapping(copy_ptr, scratch_ptr.add(i), 1);
l += is_lt as usize;
r += !is_lt as usize;
// As long as neither side is exhausted merge left and right elements.
if ((l == mid) as u8 + (r == len) as u8) != 0 {
break;
}
}
// The left or right side is exhausted, drain the right side in one go.
let copy_ptr = if l == mid { arr_ptr.add(r) } else { arr_ptr.add(l) };
let i = l + (r - mid);
ptr::copy_nonoverlapping(copy_ptr, scratch_ptr.add(i), len - i);
// Now that scratch_ptr holds the full merged content, write it back on-top of v.
ptr::copy_nonoverlapping(scratch_ptr, v.as_mut_ptr(), len);
}
}
// SAFETY: The caller has to ensure that Option is Some, UB otherwise.
unsafe fn unwrap_unchecked<T>(opt_val: Option<T>) -> T {
match opt_val {
Some(val) => val,
None => {
// SAFETY: See function safety description.
unsafe {
core::hint::unreachable_unchecked();
}
}
}
}
// Extremely basic versions of Vec.
// Their use is super limited and by having the code here, it allows reuse between the sort
// implementations.
struct BufGuard<T> {
buf_ptr: ptr::NonNull<T>,
capacity: usize,
}
impl<T> BufGuard<T> {
// SAFETY: The caller has to ensure that len is not 0 and that T is not a ZST.
unsafe fn new(len: usize) -> Self {
debug_assert!(len > 0 && mem::size_of::<T>() > 0);
// SAFETY: See function safety description.
let layout = unsafe { unwrap_unchecked(Layout::array::<T>(len).ok()) };
// SAFETY: We checked that T is not a ZST.
let buf_ptr = unsafe { alloc(layout) as *mut T };
if buf_ptr.is_null() {
panic!("allocation failure");
}
Self { buf_ptr: ptr::NonNull::new(buf_ptr).unwrap(), capacity: len }
}
}
impl<T> Drop for BufGuard<T> {
fn drop(&mut self) {
// SAFETY: We checked that T is not a ZST.
unsafe {
dealloc(self.buf_ptr.as_ptr() as *mut u8, Layout::array::<T>(self.capacity).unwrap());
}
}
}


@@ -0,0 +1,17 @@
pub trait Sort {
fn name() -> String;
fn sort<T>(v: &mut [T])
where
T: Ord;
fn sort_by<T, F>(v: &mut [T], compare: F)
where
F: FnMut(&T, &T) -> std::cmp::Ordering;
}
mod ffi_types;
mod known_good_stable_sort;
mod patterns;
mod tests;
mod zipf;


@@ -0,0 +1,211 @@
use std::env;
use std::hash::Hash;
use std::str::FromStr;
use std::sync::OnceLock;
use rand::prelude::*;
use rand_xorshift::XorShiftRng;
use crate::sort::zipf::ZipfDistribution;
/// Provides a set of patterns useful for testing and benchmarking sorting algorithms.
/// Currently limited to i32 values.
// --- Public ---
pub fn random(len: usize) -> Vec<i32> {
// .
// : . : :
// :.:::.::
random_vec(len)
}
pub fn random_uniform<R>(len: usize, range: R) -> Vec<i32>
where
R: Into<rand::distributions::Uniform<i32>> + Hash,
{
// :.:.:.::
let mut rng: XorShiftRng = rand::SeedableRng::seed_from_u64(get_or_init_rand_seed());
// Abstracting over ranges in Rust :(
let dist: rand::distributions::Uniform<i32> = range.into();
(0..len).map(|_| dist.sample(&mut rng)).collect()
}
pub fn random_zipf(len: usize, exponent: f64) -> Vec<i32> {
// https://en.wikipedia.org/wiki/Zipf's_law
let mut rng: XorShiftRng = rand::SeedableRng::seed_from_u64(get_or_init_rand_seed());
// Abstracting over ranges in Rust :(
let dist = ZipfDistribution::new(len, exponent).unwrap();
(0..len).map(|_| dist.sample(&mut rng) as i32).collect()
}
pub fn random_sorted(len: usize, sorted_percent: f64) -> Vec<i32> {
// .:
// .:::. :
// .::::::.::
// [----][--]
// ^ ^
// | |
// sorted |
// unsorted
// Simulate pre-existing sorted slice, where len - sorted_percent are the new unsorted values
// and part of the overall distribution.
let mut v = random_vec(len);
let sorted_len = ((len as f64) * (sorted_percent / 100.0)).round() as usize;
v[0..sorted_len].sort_unstable();
v
}
pub fn all_equal(len: usize) -> Vec<i32> {
// ......
// ::::::
(0..len).map(|_| 66).collect::<Vec<_>>()
}
pub fn ascending(len: usize) -> Vec<i32> {
// .:
// .:::
// .:::::
(0..len as i32).collect::<Vec<_>>()
}
pub fn descending(len: usize) -> Vec<i32> {
// :.
// :::.
// :::::.
(0..len as i32).rev().collect::<Vec<_>>()
}
pub fn saw_mixed(len: usize, saw_count: usize) -> Vec<i32> {
// :. :. .::. .:
// :::.:::..::::::..:::
if len == 0 {
return Vec::new();
}
let mut vals = random_vec(len);
let chunks_size = len / saw_count.max(1);
let saw_directions = random_uniform((len / chunks_size) + 1, 0..=1);
for (i, chunk) in vals.chunks_mut(chunks_size).enumerate() {
if saw_directions[i] == 0 {
chunk.sort_unstable();
} else if saw_directions[i] == 1 {
chunk.sort_unstable_by_key(|&e| std::cmp::Reverse(e));
} else {
unreachable!();
}
}
vals
}
pub fn saw_mixed_range(len: usize, range: std::ops::Range<usize>) -> Vec<i32> {
// :.
// :. :::. .::. .:
// :::.:::::..::::::..:.:::
// ascending and descending randomly picked, with length in `range`.
if len == 0 {
return Vec::new();
}
let mut vals = random_vec(len);
let max_chunks = len / range.start;
let saw_directions = random_uniform(max_chunks + 1, 0..=1);
let chunk_sizes = random_uniform(max_chunks + 1, (range.start as i32)..(range.end as i32));
let mut i = 0;
let mut l = 0;
while l < len {
let chunk_size = chunk_sizes[i] as usize;
let chunk_end = std::cmp::min(l + chunk_size, len);
let chunk = &mut vals[l..chunk_end];
if saw_directions[i] == 0 {
chunk.sort_unstable();
} else if saw_directions[i] == 1 {
chunk.sort_unstable_by_key(|&e| std::cmp::Reverse(e));
} else {
unreachable!();
}
i += 1;
l += chunk_size;
}
vals
}
pub fn pipe_organ(len: usize) -> Vec<i32> {
// .:.
// .:::::.
let mut vals = random_vec(len);
let first_half = &mut vals[0..(len / 2)];
first_half.sort_unstable();
let second_half = &mut vals[(len / 2)..len];
second_half.sort_unstable_by_key(|&e| std::cmp::Reverse(e));
vals
}
pub fn get_or_init_rand_seed() -> u64 {
*SEED_VALUE.get_or_init(|| {
env::var("OVERRIDE_SEED")
.ok()
.map(|seed| u64::from_str(&seed).unwrap())
.unwrap_or_else(rand_root_seed)
})
}
// --- Private ---
static SEED_VALUE: OnceLock<u64> = OnceLock::new();
#[cfg(not(miri))]
fn rand_root_seed() -> u64 {
// Other test code hashes `panic::Location::caller()` and constructs a seed from that, in these
// tests we want to have a fuzzer like exploration of the test space, if we used the same caller
// based construction we would always test the same.
//
// Instead we use the seconds since UNIX epoch / 10, given CI log output this value should be
// reasonably easy to re-construct.
use std::time::{SystemTime, UNIX_EPOCH};
let epoch_seconds = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
epoch_seconds / 10
}
#[cfg(miri)]
fn rand_root_seed() -> u64 {
// Miri is usually run with isolation, which gives us repeatability, but also permutations based on
// other code that runs before.
use core::hash::{BuildHasher, Hash, Hasher};
let mut hasher = std::hash::RandomState::new().build_hasher();
core::panic::Location::caller().hash(&mut hasher);
hasher.finish()
}
fn random_vec(len: usize) -> Vec<i32> {
let mut rng: XorShiftRng = rand::SeedableRng::seed_from_u64(get_or_init_rand_seed());
(0..len).map(|_| rng.gen::<i32>()).collect()
}

File diff suppressed because it is too large.


@@ -0,0 +1,208 @@
// This module implements a Zipfian distribution generator.
//
// Based on https://github.com/jonhoo/rust-zipf.
use rand::Rng;
/// Random number generator that generates Zipf-distributed random numbers using rejection
/// inversion.
#[derive(Clone, Copy)]
pub struct ZipfDistribution {
/// Number of elements
num_elements: f64,
/// Exponent parameter of the distribution
exponent: f64,
/// `hIntegral(1.5) - 1`
h_integral_x1: f64,
/// `hIntegral(num_elements + 0.5)`
h_integral_num_elements: f64,
/// `2 - hIntegralInverse(hIntegral(2.5) - h(2))`
s: f64,
}
impl ZipfDistribution {
/// Creates a new [Zipf-distributed](https://en.wikipedia.org/wiki/Zipf's_law)
/// random number generator.
///
/// Note that both the number of elements and the exponent must be greater than 0.
pub fn new(num_elements: usize, exponent: f64) -> Result<Self, ()> {
if num_elements == 0 {
return Err(());
}
if exponent <= 0f64 {
return Err(());
}
let z = ZipfDistribution {
num_elements: num_elements as f64,
exponent,
h_integral_x1: ZipfDistribution::h_integral(1.5, exponent) - 1f64,
h_integral_num_elements: ZipfDistribution::h_integral(
num_elements as f64 + 0.5,
exponent,
),
s: 2f64
- ZipfDistribution::h_integral_inv(
ZipfDistribution::h_integral(2.5, exponent)
- ZipfDistribution::h(2f64, exponent),
exponent,
),
};
// populate cache
Ok(z)
}
}
impl ZipfDistribution {
fn next<R: Rng + ?Sized>(&self, rng: &mut R) -> usize {
// The paper describes an algorithm for exponents larger than 1 (Algorithm ZRI).
//
// The original method uses
// H(x) = (v + x)^(1 - q) / (1 - q)
// as the integral of the hat function.
//
// This function is undefined for q = 1, which is the reason for the limitation of the
// exponent.
//
// If instead the integral function
// H(x) = ((v + x)^(1 - q) - 1) / (1 - q)
// is used, for which a meaningful limit exists for q = 1, the method works for all
// positive exponents.
//
// The following implementation uses v = 0 and generates integral numbers in the range [1,
// num_elements]. This is different from the original method, where v is defined to
// be positive and numbers are taken from [0, i_max]. This explains why the implementation
// looks slightly different.
let hnum = self.h_integral_num_elements;
loop {
use std::cmp;
let u: f64 = hnum + rng.gen::<f64>() * (self.h_integral_x1 - hnum);
// u is uniformly distributed in (h_integral_x1, h_integral_num_elements]
let x: f64 = ZipfDistribution::h_integral_inv(u, self.exponent);
// Limit k to the range [1, num_elements] if it would be outside
// due to numerical inaccuracies.
let k64 = x.max(1.0).min(self.num_elements);
// float -> integer rounds towards zero, so we add 0.5
// to prevent bias towards k == 1
let k = cmp::max(1, (k64 + 0.5) as usize);
// Here, the distribution of k is given by:
//
// P(k = 1) = C * (hIntegral(1.5) - h_integral_x1) = C
// P(k = m) = C * (hIntegral(m + 1/2) - hIntegral(m - 1/2)) for m >= 2
//
// where C = 1 / (h_integral_num_elements - h_integral_x1)
if k64 - x <= self.s
|| u >= ZipfDistribution::h_integral(k64 + 0.5, self.exponent)
- ZipfDistribution::h(k64, self.exponent)
{
// Case k = 1:
//
// The right inequality is always true, because replacing k by 1 gives
// u >= hIntegral(1.5) - h(1) = h_integral_x1 and u is taken from
// (h_integral_x1, h_integral_num_elements].
//
// Therefore, the acceptance rate for k = 1 is P(accepted | k = 1) = 1
// and the probability that 1 is returned as random value is
// P(k = 1 and accepted) = P(accepted | k = 1) * P(k = 1) = C = C / 1^exponent
//
// Case k >= 2:
//
// The left inequality (k - x <= s) is just a short cut
// to avoid the more expensive evaluation of the right inequality
// (u >= hIntegral(k + 0.5) - h(k)) in many cases.
//
// If the left inequality is true, the right inequality is also true:
// Theorem 2 in the paper is valid for all positive exponents, because
// the requirements h'(x) = -exponent/x^(exponent + 1) < 0 and
// (-1/hInverse'(x))'' = (1+1/exponent) * x^(1/exponent-1) >= 0
// are both fulfilled.
// Therefore, f(x) = x - hIntegralInverse(hIntegral(x + 0.5) - h(x))
// is a non-decreasing function. If k - x <= s holds,
// k - x <= s + f(k) - f(2) is obviously also true which is equivalent to
// -x <= -hIntegralInverse(hIntegral(k + 0.5) - h(k)),
// -hIntegralInverse(u) <= -hIntegralInverse(hIntegral(k + 0.5) - h(k)),
// and finally u >= hIntegral(k + 0.5) - h(k).
//
// Hence, the right inequality determines the acceptance rate:
// P(accepted | k = m) = h(m) / (hIntegral(m+1/2) - hIntegral(m-1/2))
// The probability that m is returned is given by
// P(k = m and accepted) = P(accepted | k = m) * P(k = m)
// = C * h(m) = C / m^exponent.
//
// In both cases the probabilities are proportional to the probability mass
// function of the Zipf distribution.
return k;
}
}
}
}
impl rand::distributions::Distribution<usize> for ZipfDistribution {
fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> usize {
self.next(rng)
}
}
use std::fmt;
impl fmt::Debug for ZipfDistribution {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> {
f.debug_struct("ZipfDistribution")
.field("e", &self.exponent)
.field("n", &self.num_elements)
.finish()
}
}
impl ZipfDistribution {
/// Computes `H(x)`, defined as
///
/// - `(x^(1 - exponent) - 1) / (1 - exponent)`, if `exponent != 1`
/// - `log(x)`, if `exponent == 1`
///
/// `H(x)` is an integral function of `h(x)`, the derivative of `H(x)` is `h(x)`.
fn h_integral(x: f64, exponent: f64) -> f64 {
let log_x = x.ln();
helper2((1f64 - exponent) * log_x) * log_x
}
/// Computes `h(x) = 1 / x^exponent`
fn h(x: f64, exponent: f64) -> f64 {
(-exponent * x.ln()).exp()
}
/// The inverse function of `H(x)`.
/// Returns the `y` for which `H(y) = x`.
fn h_integral_inv(x: f64, exponent: f64) -> f64 {
let mut t: f64 = x * (1f64 - exponent);
if t < -1f64 {
// Limit value to the range [-1, +inf).
// t could be smaller than -1 in some rare cases due to numerical errors.
t = -1f64;
}
(helper1(t) * x).exp()
}
}
/// Helper function that calculates `log(1 + x) / x`.
/// A Taylor series expansion is used, if x is close to 0.
fn helper1(x: f64) -> f64 {
if x.abs() > 1e-8 { x.ln_1p() / x } else { 1f64 - x * (0.5 - x * (1.0 / 3.0 - 0.25 * x)) }
}
/// Helper function to calculate `(exp(x) - 1) / x`.
/// A Taylor series expansion is used, if x is close to 0.
fn helper2(x: f64) -> f64 {
if x.abs() > 1e-8 {
x.exp_m1() / x
} else {
1f64 + x * 0.5 * (1f64 + x * 1.0 / 3.0 * (1f64 + 0.25 * x))
}
}

View File

@@ -1800,57 +1800,6 @@ fn brute_force_rotate_test_1() {
}
}
#[test]
#[cfg(not(target_arch = "wasm32"))]
fn sort_unstable() {
use rand::Rng;
// Miri is too slow (but still need to `chain` to make the types match)
let lens = if cfg!(miri) { (2..20).chain(0..0) } else { (2..25).chain(500..510) };
let rounds = if cfg!(miri) { 1 } else { 100 };
let mut v = [0; 600];
let mut tmp = [0; 600];
let mut rng = crate::test_rng();
for len in lens {
let v = &mut v[0..len];
let tmp = &mut tmp[0..len];
for &modulus in &[5, 10, 100, 1000] {
for _ in 0..rounds {
for i in 0..len {
v[i] = rng.gen::<i32>() % modulus;
}
// Sort in default order.
tmp.copy_from_slice(v);
tmp.sort_unstable();
assert!(tmp.windows(2).all(|w| w[0] <= w[1]));
// Sort in ascending order.
tmp.copy_from_slice(v);
tmp.sort_unstable_by(|a, b| a.cmp(b));
assert!(tmp.windows(2).all(|w| w[0] <= w[1]));
// Sort in descending order.
tmp.copy_from_slice(v);
tmp.sort_unstable_by(|a, b| b.cmp(a));
assert!(tmp.windows(2).all(|w| w[0] >= w[1]));
}
}
}
// Should not panic.
[0i32; 0].sort_unstable();
[(); 10].sort_unstable();
[(); 100].sort_unstable();
let mut v = [0xDEADBEEFu64];
v.sort_unstable();
assert!(v == [0xDEADBEEF]);
}
#[test]
#[cfg(not(target_arch = "wasm32"))]
#[cfg_attr(miri, ignore)] // Miri is too slow