Commit Graph

26 Commits

Author SHA1 Message Date
Ralf Jung
5d5999ab13 make cache consistency checks into regular debug assertions 2022-07-14 13:00:35 -04:00
bors
af2c50fb89 Auto merge of #2328 - RalfJung:perf, r=RalfJung
move checking ptr tracking on item pop into cold helper function

Before:
```
Benchmark 1: cargo miri run --manifest-path bench-cargo-miri/serde1/Cargo.toml
  Time (mean ± σ):      6.729 s ±  0.050 s    [User: 6.608 s, System: 0.124 s]
  Range (min … max):    6.665 s …  6.799 s    5 runs

Benchmark 2: cargo miri run --manifest-path bench-cargo-miri/unicode/Cargo.toml
  Time (mean ± σ):     20.923 s ±  0.271 s    [User: 20.386 s, System: 0.537 s]
  Range (min … max):   20.580 s … 21.165 s    5 runs
```
After:
```
Benchmark 1: cargo miri run --manifest-path bench-cargo-miri/serde1/Cargo.toml
  Time (mean ± σ):      6.562 s ±  0.023 s    [User: 6.430 s, System: 0.135 s]
  Range (min … max):    6.544 s …  6.594 s    5 runs

Benchmark 2: cargo miri run --manifest-path bench-cargo-miri/unicode/Cargo.toml
  Time (mean ± σ):     20.375 s ±  0.228 s    [User: 19.964 s, System: 0.413 s]
  Range (min … max):   20.201 s … 20.736 s    5 runs
```
Nothing major, but we'll take it I guess. 🤷

Fixes https://github.com/rust-lang/miri/issues/2132
2022-07-14 00:34:00 +00:00
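
A minimal sketch of the pattern this merge applies, with hypothetical
names (`item_popped`/`report_tracked_pop` are illustrative, not Miri's
actual signatures): the rarely needed tracking check moves behind a
`#[cold]` helper so the hot path stays small and inlinable.

```
/// Hot path: runs for every item popped off the borrow stack, so it
/// should stay small enough to inline.
#[inline(always)]
fn item_popped(tag: u64, tracked_tags: &[u64]) {
    if !tracked_tags.is_empty() {
        // Rarely taken: tag tracking is a debugging feature.
        report_tracked_pop(tag, tracked_tags);
    }
}

/// Cold path: `#[cold]` tells the compiler this is rarely executed,
/// so it is laid out out-of-line and the caller is optimized for the
/// branch not being taken.
#[cold]
#[inline(never)]
fn report_tracked_pop(tag: u64, tracked_tags: &[u64]) {
    if tracked_tags.contains(&tag) {
        eprintln!("note: popped tracked tag {}", tag);
    }
}

fn main() {
    item_popped(7, &[]);  // common case: cold helper never called
    item_popped(7, &[7]); // tracked: takes the out-of-line path
}
```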
Ralf Jung
cc42cb1b21 reborrow error: clarify that we are reborrowing *from* that tag 2022-07-13 19:40:53 -04:00
Ralf Jung
83b9172774 move stacked_borrows.rs together with the other files of its module 2022-07-13 19:37:41 -04:00
Ben Kimock
4eff60ad6e Rearrange and document the new implementation
stacked_borrows now has an item module and its own FrameExtra. The
item module hides the implementation of Item (a bunch of bit-packing
tricks) from the primary logic of Stacked Borrows, and the FrameExtra
separates Stacked Borrows more cleanly from the interpreter itself.

The new strategy for checking protectors also makes some subtle
performance tradeoffs. They are now documented in Stack::item_popped,
since that function is the primary beneficiary and touches every
aspect of them.

Also, separating the CallId that is protecting a Tag from the Tag
itself makes it inconvenient to reproduce exactly the same protector
errors, so this also takes the opportunity to use slightly cleaner
English in those errors. We have to change them anyway; we might as
well make them good.
2022-07-12 21:03:54 -04:00
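
A hedged sketch of the FrameExtra separation (field names here are
assumptions, not Miri's actual definitions): the interpreter's call
frame carries an extension struct that only Stacked Borrows reads or
writes, so protector bookkeeping stays out of the interpreter proper.

```
/// State that Stacked Borrows attaches to each call frame; the
/// interpreter stores it but never looks inside.
struct FrameExtra {
    /// Identifies this call, so protectors can be matched to the
    /// frame that installed them.
    call_id: u64,
    /// Tags protected for the duration of this call; consulted on
    /// item pop and released when the frame is popped.
    protected_tags: Vec<u64>,
}

/// The interpreter's frame, with Stacked Borrows state kept behind
/// one clearly separated field.
struct Frame {
    // ...interpreter-owned state elided...
    extra: FrameExtra,
}

fn main() {
    let frame = Frame {
        extra: FrameExtra { call_id: 1, protected_tags: vec![42] },
    };
    // Only the Stacked Borrows pop hook would inspect `extra`.
    assert!(frame.extra.protected_tags.contains(&42));
}
```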
Ben Kimock
afa1dddcf9 Store protectors outside Item, pack Tag and Perm
Previously, Item was a struct containing a NonZeroU64, an Option that
was usually unset or irrelevant, and a 4-variant enum. Collectively an
Item occupied 24 bytes, of which only 8 were doing useful work most of
the time.

This change takes advantage of the fact that the space of SbTags is
effectively impossible to exhaust, and steals 3 bits from the tag to
pack the whole struct into a single u64. The bit-packing cuts peak
memory usage roughly 3x when Miri becomes memory-bound. We also get
CPU performance improvements of varying size: not only do we access
less memory, we can now compare a Vec<Item> with a single memcmp,
because Item no longer has any padding.
2022-07-12 21:01:33 -04:00
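
A hedged sketch of the packing described above (the exact bit layout
here is an assumption, not Miri's encoding): two bits hold the
4-variant permission, one bit flags the presence of a protector, and
the remaining 61 bits hold the tag; the protecting CallId itself would
live in a side table keyed by tag.

```
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Item(u64);

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Permission {
    Unique,
    SharedReadWrite,
    SharedReadOnly,
    Disabled,
}

const PERM_MASK: u64 = 0b11;       // bits 0-1: the 4-variant permission
const PROTECTED_BIT: u64 = 1 << 2; // bit 2: is some call protecting this?
const TAG_SHIFT: u32 = 3;          // bits 3-63: the tag itself

impl Item {
    fn new(tag: u64, perm: Permission, protected: bool) -> Item {
        // "Probably impossible to exhaust": 2^61 tags available.
        debug_assert!(tag < 1 << 61);
        Item((tag << TAG_SHIFT) | ((protected as u64) << 2) | perm as u64)
    }

    fn tag(self) -> u64 {
        self.0 >> TAG_SHIFT
    }

    fn perm(self) -> Permission {
        match self.0 & PERM_MASK {
            0 => Permission::Unique,
            1 => Permission::SharedReadWrite,
            2 => Permission::SharedReadOnly,
            _ => Permission::Disabled,
        }
    }

    fn protected(self) -> bool {
        self.0 & PROTECTED_BIT != 0
    }
}

fn main() {
    let item = Item::new(1234, Permission::Unique, true);
    assert_eq!(item.tag(), 1234);
    assert_eq!(item.perm(), Permission::Unique);
    assert!(item.protected());
    // A bare u64 with no padding: Vec<Item> can compare via memcmp.
    assert_eq!(std::mem::size_of::<Item>(), 8);
}
```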
Ralf Jung
907a003f14 tweak format strings 2022-07-06 09:47:48 -04:00
Ralf Jung
a07398d441 we don't need HexRange any more 2022-07-05 07:38:42 -04:00
Oli Scherer
afb937ab25 Bump rust version 2022-07-05 10:17:43 +00:00
Ben Kimock
b004a03bdb Typo 2022-07-02 20:45:27 -04:00
Ben Kimock
f3b479d556 Explain cache behavior a bit better, clean up diff 2022-07-02 19:44:55 -04:00
Ben Kimock
7cdbf98f57 Explain the behavior of the cache upon clear 2022-07-02 13:22:22 -04:00
Ben Kimock
17acc3f71c Document implementation a bit, add some fast paths 2022-07-01 20:15:34 -04:00
Ben Kimock
3bbcafe3b5 Cache lookups into the borrow stack
This adds a very simple LRU-like cache which stores the locations of
often-used tags. Despite its simplicity, the cache hit rate is ~99.9%
on most programs, and the element at position 0 alone often has a hit
rate of 90%, so the sub-optimality of this cache basically vanishes
into the noise in a profile.

Additionally, we keep a range which denotes where there might be an item
granting Unique permission in the stack, so that when we invalidate
Uniques we do not need to scan much of the stack, and often scan nothing
at all.
2022-07-01 17:51:16 -04:00
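
An illustrative sketch of such a cache (the size and eviction policy
here are assumptions, not the exact implementation): a small fixed
array mapping recently used tags to their position in the borrow
stack, searched linearly, with hits nudged toward the front.

```
const CACHE_LEN: usize = 8;

/// Maps recently used tags to their index in the borrow stack.
struct StackCache {
    tags: [u64; CACHE_LEN], // u64::MAX marks an empty slot
    idx: [usize; CACHE_LEN],
}

impl StackCache {
    fn new() -> StackCache {
        StackCache { tags: [u64::MAX; CACHE_LEN], idx: [0; CACHE_LEN] }
    }

    /// On a hit, swap the entry one slot toward the front, so the
    /// hottest tags end up at position 0.
    fn lookup(&mut self, tag: u64) -> Option<usize> {
        let pos = self.tags.iter().position(|&t| t == tag)?;
        let stack_idx = self.idx[pos];
        if pos > 0 {
            self.tags.swap(pos, pos - 1);
            self.idx.swap(pos, pos - 1);
        }
        Some(stack_idx)
    }

    /// On a miss, record where the full stack scan found the tag,
    /// evicting the least recently used entry at the back.
    fn insert(&mut self, tag: u64, stack_idx: usize) {
        self.tags.rotate_right(1);
        self.idx.rotate_right(1);
        self.tags[0] = tag;
        self.idx[0] = stack_idx;
    }
}

fn main() {
    let mut cache = StackCache::new();
    cache.insert(7, 3); // tag 7 lives at stack position 3
    assert_eq!(cache.lookup(7), Some(3));
    assert_eq!(cache.lookup(9), None); // miss: fall back to a full scan
}
```

The commit's second idea, the range bounding where Unique items might
be, would sit alongside this cache as a simple start/end pair that
limits how much of the stack an invalidation has to scan.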
Ralf Jung
1a5dfbeb7a fmt 2022-06-26 22:36:45 -04:00
Ralf Jung
5c16713056 remove support for untagged pointers
good riddance!
2022-06-26 22:19:56 -04:00
Ralf Jung
58c79c5b6f tweaks and feedback 2022-06-24 22:02:17 -04:00
Ralf Jung
4fbb284a99 implement 'delimited' expose tracking so we still detect some UB 2022-06-24 20:05:56 -04:00
carbotaniuman
d1e7de117c Try to fix stuff 2022-06-24 16:10:23 -04:00
carbotaniuman
c7feb014b0 Maybe this will work 2022-06-24 16:10:23 -04:00
carbotaniuman
57ce47b728 Handle wildcard pointers in SB 2022-06-24 16:10:23 -04:00
Ralf Jung
7cd5fc3de3 rustup 2022-05-29 14:06:35 +02:00
Ben Kimock
b20c6cfd81 Factor current-span logic into a lazy caching handle 2022-05-22 18:23:01 -04:00
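
A hedged sketch of a lazy caching handle for the current span (all
names here are stand-ins, not Miri's types): resolving the span means
walking the frame stack, so the handle defers that walk until a
diagnostic actually asks for it, then memoizes the answer.

```
type Span = usize; // stand-in for the interpreter's real span type

struct Machine {
    frames: Vec<Span>,
}

/// Handle that computes the current span at most once.
struct CurrentSpan<'a> {
    cached: Option<Span>,
    machine: &'a Machine,
}

impl<'a> CurrentSpan<'a> {
    fn new(machine: &'a Machine) -> Self {
        CurrentSpan { cached: None, machine }
    }

    /// The (conceptually expensive) frame walk runs only on first use.
    fn get(&mut self) -> Span {
        let machine = self.machine;
        *self
            .cached
            .get_or_insert_with(|| machine.frames.last().copied().unwrap_or(0))
    }
}

fn main() {
    let machine = Machine { frames: vec![10, 20, 30] };
    let mut span = CurrentSpan::new(&machine);
    assert_eq!(span.get(), 30); // computed here
    assert_eq!(span.get(), 30); // served from the cache
}
```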
Ben Kimock
8ff0aac06c More review feedback
* Store the local crates in an Rc<[CrateNum]> (see the sketch below)
* Move all the allocation history into Stacks
* Clean up the implementation of get_logs_relevant_to a bit
2022-05-13 19:04:51 -04:00
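
For the first bullet, a quick illustration of why Rc<[CrateNum]>
helps (CrateNum is stood in by u32 here): the crate list is built
once and then shared by pointer, so every consumer's clone is a
reference-count bump instead of a Vec deep copy.

```
use std::rc::Rc;

fn main() {
    // Built once from a Vec, then frozen behind an Rc.
    let local_crates: Rc<[u32]> = vec![0, 1, 2].into();

    // Each diagnostic that needs the list clones the Rc, not the data.
    let for_diagnostics = Rc::clone(&local_crates);
    assert_eq!(&*for_diagnostics, &[0, 1, 2]);
}
```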
Ben Kimock
972b3b340a Cleanup/Refactoring from review
* Pass a ThreadInfo down to grant/access to get the current span lazily
* Rename add_* to log_* for clarity
* Hoist borrow_mut calls out of loops by tweaking the for_each signature (sketched below)
* Explain the parameters of check_protector a bit more
2022-05-11 20:07:44 -04:00
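
The borrow_mut hoisting from the third bullet, sketched with
hypothetical names: the iteration helper takes the RefCell borrow once
and hands the guard to the closure, instead of each callback
re-borrowing per element.

```
use std::cell::RefCell;

struct AllocHistory {
    events: RefCell<Vec<u64>>,
}

impl AllocHistory {
    /// The tweaked signature: the closure receives the already
    /// borrowed Vec, so RefCell bookkeeping happens once, not once
    /// per element.
    fn for_each(&self, tags: &[u64], mut f: impl FnMut(&mut Vec<u64>, u64)) {
        let mut events = self.events.borrow_mut(); // hoisted out of the loop
        for &tag in tags {
            f(&mut events, tag);
        }
    }
}

fn main() {
    let history = AllocHistory { events: RefCell::new(Vec::new()) };
    history.for_each(&[1, 2, 3], |events, tag| events.push(tag));
    assert_eq!(*history.events.borrow(), vec![1, 2, 3]);
}
```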
Ben Kimock
cddd85e2f3 Move SB diagnostics to a module 2022-04-30 10:26:26 -04:00