rustdoc-search: add support for type parameters

When writing a type-driven search query in rustdoc, specifically one
with more than one query element, names that don't correspond to any type
become generic type parameters instead of being auto-corrected (which is
currently only done for single-element queries) or returning no results.
You can also force a generic type parameter by writing `generic:T` (and
can force a name *not* to be treated as a generic type parameter with
something like `struct:T`, though if no such concrete type exists, the
query returns no results).

There is no syntax provided for specifying type constraints
for generic type parameters.

When you have a generic type parameter in a search query, it only matches
generic type parameters in the actual function, not concrete types with a
matching name and not concrete types that implement a trait.
Matching is also strict about whether two parameters are the same or
different: `option<T>, option<U> -> option<U>` matches `Option::and` but
not `Option::or`, while `option<T>, option<T> -> option<T>` matches
`Option::or` but not `Option::and`.
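
For reference, the standard-library signatures behind those example queries
look like this. The free functions `and_like` and `or_like` are hypothetical
wrappers added here only so the snippet compiles on its own; the point is that
`Option::and` introduces a second type parameter `U`, while `Option::or`
reuses `T` throughout.

```rust
/// Shaped like `Option::and`: matches `option<T>, option<U> -> option<U>`,
/// because the second argument and the return value use a *different*
/// type parameter than the first argument.
pub fn and_like<T, U>(a: Option<T>, b: Option<U>) -> Option<U> {
    a.and(b)
}

/// Shaped like `Option::or`: matches `option<T>, option<T> -> option<T>`,
/// because every position reuses the *same* type parameter.
pub fn or_like<T>(a: Option<T>, b: Option<T>) -> Option<T> {
    a.or(b)
}
```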

rustdoc-search: use set ops for ranking and filtering

This commit adds ranking and quick filtering to type-based search,
improving performance and ordering results by how closely their type
signatures match the query.

Motivation
----------
If I write a query like `str -> String`, a lot of functions come up.
That's to be expected, but `String::from_str` should come up on top, and
it doesn't right now. This is because the sorting algorithm is based
on the function's name and doesn't consider the type signature at all.
`slice::join` even comes up above it!
To fix this, the sorting should take the function's signature into
account, and the closer match should come up on top.

Guide-level description
-----------------------
When searching by type signature, functions with a "closer" match show
up above functions that match less precisely.

Reference-level explanation
---------------------------
Function signature search works in two major phases (sketched in the code
below):
* A compact "fingerprint," based on the [bloom filter] technique, is used to
  check for matches and to estimate the distance. It sometimes has false
  positive matches, but it operates on 128 bits of contiguous memory and
  requires no backtracking, so it performs a lot better than real
  unification.
  The fingerprint represents the set of items in the type signature, but it
  does not represent nesting, and it ignores when the same item appears more
  than once.
  The result is rejected if any query bits are absent in the function, or
  if the distance is higher than the current maximum and 200
  results have already been found.
* The second step performs unification. This is where nesting and true bag
  semantics are taken into account, and it has no false positives. It uses a
  recursive, backtracking algorithm.
  The result is rejected if any query elements are absent in the function.

[bloom filter]: https://en.wikipedia.org/wiki/Bloom_filter
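
The real implementation lives in rustdoc's `search.js`; the Rust sketch below
is only an illustration of the fingerprint phase described above, with made-up
hashing and item names. It shows the two set operations the design relies on:
a bitwise subset check for filtering, and a distinct-item count as the
distance estimate.

```rust
// Illustrative sketch only -- not rustdoc's actual code (which is JavaScript).
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

#[derive(Default)]
struct Fingerprint {
    bits: u128,          // bloom-filter-style bitset of items in the signature
    distinct_items: u32, // |F|, used as the distance estimate
}

impl Fingerprint {
    fn from_items<'a>(items: impl IntoIterator<Item = &'a str>) -> Self {
        let mut fp = Fingerprint::default();
        let mut seen = HashSet::new();
        for item in items {
            if seen.insert(item) {
                fp.distinct_items += 1;
                // Set a few pseudo-random bits per item, bloom-filter style.
                for seed in 0u8..3 {
                    let mut h = DefaultHasher::new();
                    (seed, item).hash(&mut h);
                    fp.bits |= 1u128 << (h.finish() % 128) as u32;
                }
            }
        }
        fp
    }

    /// Cheap pre-filter: every bit set by the query must also be set by the
    /// function, otherwise the function cannot possibly match. False
    /// positives are possible; false negatives are not.
    fn may_match(function: &Fingerprint, query: &Fingerprint) -> bool {
        (query.bits & !function.bits) == 0
    }
}

fn main() {
    let query = Fingerprint::from_items(["str", "string"]);
    let from_str = Fingerprint::from_items(["str", "string"]);
    let join = Fingerprint::from_items(["slice", "str", "string"]);
    assert!(Fingerprint::may_match(&from_str, &query));
    // Both may pass the filter, but `from_str` has fewer distinct items,
    // so it is ranked closer to the query than `join`.
    assert!(from_str.distinct_items < join.distinct_items);
}
```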

Drawbacks
---------
This makes the code bigger.
More than that, this design is a subtle trade-off. It makes the cases I've
tested against measurably faster, but it's not clear how well this extends
to other crates with potentially more functions and fewer types.
The more complex things get, the more important it is to gather a good set
of data to test with (this is arguably more important than the actual
benchmarking infrastructure right now).

Rationale and alternatives
--------------------------
Throwing a bloom filter in front makes it faster.
More than that, it takes a tactic where the system can not only check for
potential matches, but also get an accurate distance function without
needing to do unification. That way it can skip unification even on items
that have the needed elements, as long as they have more items than the
currently found maximum.
If I didn't need to cheaply do set operations on the fingerprint,
a [cuckoo filter] is supposed to have better performance, but the nice
bit-banging set intersection wouldn't work there, AFAIK.
I also looked into [minhashing], but since it's actually an unbiased
estimate of the similarity coefficient, I'm not sure how it could be used
to skip unification (I wouldn't know if the estimate was too low or
too high).
This function actually uses the number of distinct items as its
"distance function." This should give the same results that it would have
gotten from a Jaccard distance $1-\frac{|F\cap{}Q|}{|F\cup{}Q|}$, while
being cheaper to compute (the reduction is written out below). This is
because:
* The function $F$ must be a superset of the query $Q$, so their union is
  just $F$, the intersection is $Q$, and the distance reduces to
  $1-\frac{|Q|}{|F|}$.
* There are no magic thresholds. These values are only used to compare
  against each other while sorting (and, if 200 results are found, to
  compare with the maximum match). This means we only care whether one
  value is bigger than the other, not what its actual value is, and since
  $|Q|$ is the same for everything, it can safely be left out, reducing
  the formula to $1-\frac{1}{|F|} = \frac{|F|}{|F|}-\frac{1}{|F|} =
  \frac{|F|-1}{|F|}$. And, since the values are only being compared with
  each other, $|F|$ alone is fine.
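
For reference, here is the same reduction written out as a display equation;
it assumes nothing beyond the superset relationship stated in the first
bullet above.

```latex
% A matching function's item set F is a superset of the query's item set Q.
\[
  d(F, Q) = 1 - \frac{|F \cap Q|}{|F \cup Q|}
          = 1 - \frac{|Q|}{|F|}
\]
% |Q| is fixed for a given query, and 1 - c/x is increasing in x, so sorting
% by d(F, Q) is equivalent to sorting by |F|, the number of distinct items
% in the function's signature -- which is exactly the "distance" it stores.
```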

Prior art
---------
This is significantly different from how Hoogle does it.
It doesn't account for order, and it has no special accounting for nesting,
though `Box<t>` is still two items, while `t` is only one.

Unresolved questions
--------------------
`[]` and `()`, the slice/array and tuple/unit operators, are ignored while
building the signature for the query. This is because they match more than
one thing, making them ambiguous. Unfortunately, this also makes them
a performance cliff. Is this likely to be a problem?
Right now, the system just stashes the type distance into the same field
that the Levenshtein distance normally goes in. This means exact name
matches show up on top: for example, if you have a function like
`fn nothing(a: Nothing, b: i32)`, then searching for `nothing` will show it
on top even if there's another function, `fn bar(x: Nothing)`, that's
technically a closer match in type signature.

Future possibilities
--------------------
It should be possible to add more sorting criteria to act as a tie breaker,
which could be determined during unification.

[cuckoo filter]: https://en.wikipedia.org/wiki/Cuckoo_filter
[minhashing]: https://en.wikipedia.org/wiki/MinHash

// exact-check
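// (The `exact-check` directive tells the rustdoc-js tester that each query's
// results must match the expected list exactly, not merely contain it.)
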
const EXPECTED = [
    {
        query: '-> trait:Some',
        others: [
            { path: 'foo', name: 'alpha' },
            { path: 'foo', name: 'alef' },
        ],
    },
    {
        query: '-> generic:T',
        others: [
            { path: 'foo', name: 'bet' },
            { path: 'foo', name: 'alef' },
            { path: 'foo', name: 'beta' },
        ],
    },
    {
        query: 'A -> B',
        others: [
            { path: 'foo', name: 'bet' },
        ],
    },
    {
        query: 'A -> A',
        others: [
            { path: 'foo', name: 'beta' },
        ],
    },
    {
        query: 'A, A',
        others: [
            { path: 'foo', name: 'alternate' },
        ],
    },
    {
        query: 'A, B',
        others: [
            { path: 'foo', name: 'other' },
        ],
    },
    {
        query: 'Other, Other',
        others: [
            { path: 'foo', name: 'alternate' },
            { path: 'foo', name: 'other' },
        ],
    },
    {
        query: 'generic:T',
        in_args: [
            { path: 'foo', name: 'bet' },
            { path: 'foo', name: 'beta' },
            { path: 'foo', name: 'alternate' },
            { path: 'foo', name: 'other' },
        ],
    },
    {
        query: 'generic:Other',
        in_args: [
            { path: 'foo', name: 'bet' },
            { path: 'foo', name: 'beta' },
            { path: 'foo', name: 'alternate' },
            { path: 'foo', name: 'other' },
        ],
    },
    {
        query: 'trait:Other',
        in_args: [
            { path: 'foo', name: 'alternate' },
            { path: 'foo', name: 'other' },
        ],
    },
    {
        query: 'Other',
        in_args: [
            // Because the function is called "other", it's sorted first,
            // even though it has a higher type distance.
            { path: 'foo', name: 'other' },
            { path: 'foo', name: 'alternate' },
        ],
    },
    {
        query: 'trait:T',
        in_args: [],
    },
];