Rollup merge of #89017 - the8472:fix-u64-time-monotonizer, r=kennytm

fix potential race in AtomicU64 time monotonizer

The AtomicU64-based monotonizer introduced in #83093 is incorrect because several threads can race on the load/store pair: each thread loads the current value, decides, and then stores unconditionally, so a thread that doesn't hold the newest value among all the concurrent updates can win the race and overwrite a newer timestamp with a stale one.

That bug probably has little real-world impact since it doesn't make the observed time any worse than what the hardware clocks report. The worst case would probably be a thread whose clock is behind by several cycles observing several inconsistent fixups, which should be similar to observing the unfiltered backslide in the first place.
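To make the race concrete, here is a minimal, self-contained sketch (not the std code itself; the `UNINITIALIZED` sentinel value is made up for the example) contrasting the racy load-then-store pattern with the `fetch_update` (compare-exchange loop) shape used by the fix:

```rust
use std::sync::atomic::{AtomicU64, Ordering::Relaxed};

// Sentinel meaning "no timestamp stored yet". The real constant in std may differ;
// this value only serves the sketch.
const UNINITIALIZED: u64 = u64::MAX;

// Racy variant (the pre-fix shape): between the load and the store another thread
// may have written a newer value, which the unconditional store then overwrites
// with this thread's stale `packed` timestamp.
fn update_racy(mono: &AtomicU64, packed: u64) {
    let old = mono.load(Relaxed);
    if old == UNINITIALIZED || packed.wrapping_sub(old) < u64::MAX / 2 {
        mono.store(packed, Relaxed);
    }
}

// Fixed variant: fetch_update runs a compare-exchange loop, so the closure is
// re-evaluated against the latest stored value whenever another thread wins,
// and a stale timestamp can never replace a newer one.
fn update_cas(mono: &AtomicU64, packed: u64) -> Result<u64, u64> {
    mono.fetch_update(Relaxed, Relaxed, |old| {
        (old == UNINITIALIZED || packed.wrapping_sub(old) < u64::MAX / 2).then_some(packed)
    })
}

fn main() {
    let mono = AtomicU64::new(UNINITIALIZED);
    update_racy(&mono, 42); // the first store always succeeds
    // 7 would be a backslide relative to 42, so the CAS-based update is rejected
    // and the caller gets back the newer stored value instead.
    assert_eq!(update_cas(&mono, 7), Err(42));
}
```

At the time of this PR `bool::then_some` was still unstable behind the `bool_to_option` feature gate (it has since been stabilized), which is presumably why the diff below also touches std's crate-level feature list.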

New benchmarks: they don't look as good as in the original PR, but they're still an improvement over the mutex.
I don't know why the contended mutex case is faster now than in the previous benchmarks.

```
actually_monotonic() == true:
test time::tests::instant_contention_01_threads                   ... bench:          44 ns/iter (+/- 0)
test time::tests::instant_contention_02_threads                   ... bench:          45 ns/iter (+/- 0)
test time::tests::instant_contention_04_threads                   ... bench:          45 ns/iter (+/- 0)
test time::tests::instant_contention_08_threads                   ... bench:          45 ns/iter (+/- 0)
test time::tests::instant_contention_16_threads                   ... bench:          46 ns/iter (+/- 0)

atomic u64:
test time::tests::instant_contention_01_threads                   ... bench:          66 ns/iter (+/- 0)
test time::tests::instant_contention_02_threads                   ... bench:         287 ns/iter (+/- 14)
test time::tests::instant_contention_04_threads                   ... bench:         296 ns/iter (+/- 43)
test time::tests::instant_contention_08_threads                   ... bench:         604 ns/iter (+/- 163)
test time::tests::instant_contention_16_threads                   ... bench:       1,147 ns/iter (+/- 29)

mutex:
test time::tests::instant_contention_01_threads                   ... bench:          78 ns/iter (+/- 0)
test time::tests::instant_contention_02_threads                   ... bench:         652 ns/iter (+/- 275)
test time::tests::instant_contention_04_threads                   ... bench:         900 ns/iter (+/- 32)
test time::tests::instant_contention_08_threads                   ... bench:       1,927 ns/iter (+/- 62)
test time::tests::instant_contention_16_threads                   ... bench:       3,748 ns/iter (+/- 146)
```
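For reference, a rough sketch of what such a contention benchmark can look like; the real harness lives in std's `time::tests`, and the thread counts, iteration count, and timing method here are my own assumptions:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Instant;

// Spawn `threads` workers that hammer Instant::now() concurrently, check that the
// observed instants never go backwards, and report an approximate cost per call.
fn bench_contention(threads: usize, iters: u32) {
    let start = Arc::new(AtomicBool::new(false));
    let workers: Vec<_> = (0..threads)
        .map(|_| {
            let start = Arc::clone(&start);
            thread::spawn(move || {
                while !start.load(Ordering::Acquire) {} // crude start barrier
                let begin = Instant::now();
                let mut prev = begin;
                for _ in 0..iters {
                    let now = Instant::now();
                    assert!(now >= prev, "Instant went backwards");
                    prev = now;
                }
                begin.elapsed().as_nanos() as u64 / iters as u64
            })
        })
        .collect();
    start.store(true, Ordering::Release);
    for w in workers {
        println!("  ~{} ns per Instant::now()", w.join().unwrap());
    }
}

fn main() {
    for &threads in &[1, 2, 4, 8, 16] {
        println!("{} thread(s):", threads);
        bench_contention(threads, 1_000_000);
    }
}
```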
Commit 91c5e7cbb6, authored by Yuki Okushi on 2021-09-19 17:31:32 +09:00, committed by GitHub.
2 changed files with 30 additions and 28 deletions


```diff
@@ -234,6 +234,7 @@
 #![feature(atomic_mut_ptr)]
 #![feature(auto_traits)]
 #![feature(bench_black_box)]
+#![feature(bool_to_option)]
 #![feature(box_syntax)]
 #![feature(c_unwind)]
 #![feature(c_variadic)]
```


```diff
@@ -37,35 +37,36 @@ pub(in crate::time) fn monotonize_impl(mono: &AtomicU64, raw: time::Instant) ->
         // This could be a problem for programs that call instants at intervals greater
         // than 68 years. Interstellar probes may want to ensure that actually_monotonic() is true.
         let packed = (secs << 32) | nanos;
-        let old = mono.load(Relaxed);
-
-        if old == UNINITIALIZED || packed.wrapping_sub(old) < u64::MAX / 2 {
-            mono.store(packed, Relaxed);
-            raw
-        } else {
-            // Backslide occurred. We reconstruct monotonized time from the upper 32 bit of the
-            // passed in value and the 64bits loaded from the atomic
-            let seconds_lower = old >> 32;
-            let mut seconds_upper = secs & 0xffff_ffff_0000_0000;
-            if secs & 0xffff_ffff > seconds_lower {
-                // Backslide caused the lower 32bit of the seconds part to wrap.
-                // This must be the case because the seconds part is larger even though
-                // we are in the backslide branch, i.e. the seconds count should be smaller or equal.
-                //
-                // We assume that backslides are smaller than 2^32 seconds
-                // which means we need to add 1 to the upper half to restore it.
-                //
-                // Example:
-                // most recent observed time: 0xA1_0000_0000_0000_0000u128
-                // bits stored in AtomicU64: 0x0000_0000_0000_0000u64
-                // backslide by 1s
-                // caller time is 0xA0_ffff_ffff_0000_0000u128
-                // -> we can fix up the upper half time by adding 1 << 32
-                seconds_upper = seconds_upper.wrapping_add(0x1_0000_0000);
+        let updated = mono.fetch_update(Relaxed, Relaxed, |old| {
+            (old == UNINITIALIZED || packed.wrapping_sub(old) < u64::MAX / 2).then_some(packed)
+        });
+        match updated {
+            Ok(_) => raw,
+            Err(newer) => {
+                // Backslide occurred. We reconstruct monotonized time from the upper 32 bit of the
+                // passed in value and the 64bits loaded from the atomic
+                let seconds_lower = newer >> 32;
+                let mut seconds_upper = secs & 0xffff_ffff_0000_0000;
+                if secs & 0xffff_ffff > seconds_lower {
+                    // Backslide caused the lower 32bit of the seconds part to wrap.
+                    // This must be the case because the seconds part is larger even though
+                    // we are in the backslide branch, i.e. the seconds count should be smaller or equal.
+                    //
+                    // We assume that backslides are smaller than 2^32 seconds
+                    // which means we need to add 1 to the upper half to restore it.
+                    //
+                    // Example:
+                    // most recent observed time: 0xA1_0000_0000_0000_0000u128
+                    // bits stored in AtomicU64: 0x0000_0000_0000_0000u64
+                    // backslide by 1s
+                    // caller time is 0xA0_ffff_ffff_0000_0000u128
+                    // -> we can fix up the upper half time by adding 1 << 32
+                    seconds_upper = seconds_upper.wrapping_add(0x1_0000_0000);
+                }
+                let secs = seconds_upper | seconds_lower;
+                let nanos = newer as u32;
+                ZERO.checked_add_duration(&Duration::new(secs, nanos)).unwrap()
             }
-            let secs = seconds_upper | seconds_lower;
-            let nanos = old as u32;
-            ZERO.checked_add_duration(&Duration::new(secs, nanos)).unwrap()
         }
     }
 }
```
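For anyone who wants to poke at the packing scheme outside of std: below is a hedged, standalone sketch of the 32/32 split (the seconds' low 32 bits in the upper half of the u64, the nanoseconds in the lower half) and of the reconstruction done in the backslide branch. The free-standing `pack`/`reconstruct` helpers are mine, not std's; `main` reproduces the worked example from the comment in the diff.

```rust
// The monotonizer packs the low 32 bits of the seconds and the nanosecond count
// (which needs fewer than 32 bits) into a single u64; the seconds' high bits are
// deliberately shifted out, which is the 2^32-second wraparound discussed above.
fn pack(secs: u64, nanos: u32) -> u64 {
    (secs << 32) | nanos as u64
}

// Rebuild a monotonized (secs, nanos) pair from the caller's (possibly backslid)
// seconds and the newer packed value read from the atomic, mirroring the
// backslide branch in the diff above.
fn reconstruct(caller_secs: u64, newer_packed: u64) -> (u64, u32) {
    let seconds_lower = newer_packed >> 32;
    let mut seconds_upper = caller_secs & 0xffff_ffff_0000_0000;
    if caller_secs & 0xffff_ffff > seconds_lower {
        // The caller's low half is larger than the stored low half even though the
        // stored time is newer, so the stored seconds must have wrapped past 2^32;
        // assuming backslides are shorter than 2^32 seconds, bump the caller's
        // upper half by one wrap (1 << 32 seconds) to compensate.
        seconds_upper = seconds_upper.wrapping_add(0x1_0000_0000);
    }
    (seconds_upper | seconds_lower, newer_packed as u32)
}

fn main() {
    // The scenario from the comment in the diff: the newest observed time has
    // 0xA1_0000_0000 seconds, whose low 32 bits are zero, so the atomic stores 0;
    // the caller then slides back by one second to 0xA0_ffff_ffff seconds.
    let newest = pack(0xA1_0000_0000, 0); // high bits of the seconds are shifted out
    let (secs, nanos) = reconstruct(0xA0_ffff_ffff, newest);
    assert_eq!((secs, nanos), (0xA1_0000_0000, 0)); // monotonized back to the newest time
    println!("reconstructed: {secs} s, {nanos} ns");
}
```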