We have a CLI for benchmarking, but it seems no one actually uses it.
Let's try switching to "internal" benchmarks, implemented as rust tests.
They should be easier to "script" to automate tracking of perf
regressions.
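A minimal sketch of the idea, assuming we time the work with
std::time::Instant inside an ordinary #[test]; the measured function
here is a hypothetical stand-in for the real benchmark body:

    use std::time::Instant;

    // Hypothetical stand-in for the real work being measured.
    fn analysis_under_test() -> usize {
        (0..1_000_000u64).filter(|&n| n % 3 == 0).count()
    }

    #[test]
    #[ignore] // opt-in: `cargo test --release -- --ignored`
    fn bench_analysis() {
        let start = Instant::now();
        let n = analysis_under_test();
        // one line of output, easy for a script to grep and track over time
        eprintln!("analysis_under_test: {:?} ({} items)", start.elapsed(), n);
    }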
After we started reporting progress when running cargo check during
loading, it became possible to crash the client with two identical
progress tokens.
This points to a deeper issue: we might be running several cargo checks
concurrently, which doesn't make sense.
This commit linearizes all workspace fetches, making sure no updates are
lost.
As an additional touch, it also normalizes progress & result reporting,
to make sure the two stay in sync.
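A minimal sketch of the linearization idea (not the actual
implementation): at most one fetch runs at a time, and a request that
arrives mid-fetch is remembered and replayed once the current one
finishes, so no update is lost:

    #[derive(Default)]
    struct FetchQueue {
        requested: bool,   // a fetch was asked for
        in_progress: bool, // a fetch is currently running
    }

    impl FetchQueue {
        fn request(&mut self) {
            self.requested = true;
        }
        /// Returns true if the caller should start a fetch now.
        fn should_start(&mut self) -> bool {
            if self.requested && !self.in_progress {
                self.requested = false;
                self.in_progress = true;
                true
            } else {
                false
            }
        }
        fn finished(&mut self) {
            self.in_progress = false;
            // if `request` was called meanwhile, `requested` is still set,
            // and the next `should_start` kicks off the follow-up fetch
        }
    }

The main loop would call should_start on every turn and finished when a
fetch completes, so a request made during an in-flight fetch is never
dropped.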
Historically, we intentionally violated the JSON-RPC spec here by hard
crashing. The idea was to poke both the clients and the servers to fix
stuff.
However, this is confusing for server implementors, and it falls down in
one important place -- protocol extensions are not always backwards
compatible, which causes crashes simply due to a version mismatch. We
had one such case with our own extension, and one with semantic
tokens.
So let's be less adventurous and just err on the err side!
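For illustration only, a sketch of the general shape using the
lsp-server crate's Response::new_err (the surrounding dispatch wiring is
assumed): on a request we can't handle, such as a version-mismatched
extension, reply with a standard JSON-RPC error instead of panicking:

    use lsp_server::{Connection, Message, Request, Response};

    fn reply_unhandled(connection: &Connection, req: Request) {
        // -32601 is JSON-RPC's MethodNotFound error code
        let resp = Response::new_err(
            req.id,
            -32601,
            format!("unhandled method: {}", req.method),
        );
        connection.sender.send(Message::Response(resp)).unwrap();
    }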
There are two reasons why we don't want a generic ra_progress crate
just yet:
*First*, it introduces a common interface between separate components,
and that is usually undesirable (b/c components start to fit the
interface, rather than doing what makes most sense for each particular
component).
*Second*, it introduces a separate async channel for progress, which
makes it harder to correlate progress reports with the work done. I.e.,
when we see 100% progress, it's not blindingly obvious that the work
has actually finished; we might still have some pending messages (see
the sketch below).
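A tiny sketch of that pitfall (names hypothetical): progress travels
over its own channel, so observing "100%" does not prove the work is
done; only joining the worker does:

    use std::sync::mpsc;
    use std::thread;

    fn trailing_work() {} // hypothetical step that runs after 100% is reported

    fn main() {
        let (tx, rx) = mpsc::channel();
        let worker = thread::spawn(move || {
            for done in 1..=100u32 {
                tx.send(done).unwrap(); // progress, in percent
            }
            trailing_work(); // the consumer below can't see this
        });
        for pct in rx {
            if pct == 100 {
                // the channel reported 100%, but `worker` may still be running
            }
        }
        worker.join().unwrap(); // only this proves the work actually finished
    }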