10181: Beginning of lsif r=HKalbasi a=HKalbasi
This PR adds an `lsif` command to the CLI, which can be used as `rust-analyzer lsif /path/to/project > dump.lsif`. It currently generates a valid but pretty useless LSIF dump (it only supports folding ranges). The purpose of this PR is to discuss the structure of the LSIF generator before starting anything serious.
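For context, LSIF is just newline-delimited JSON vertices and edges. A minimal sketch of emitting such entries (illustrative only, assuming `serde_json`; not the code in this PR):

    use serde_json::json;

    fn main() {
        // Every LSIF entry is one JSON object per line: a vertex or an edge.
        println!("{}", json!({
            "id": 1, "type": "vertex", "label": "metaData",
            "version": "0.4.3", "positionEncoding": "utf-16"
        }));
        // A folding-range result vertex -- the only thing the dump supports so far.
        println!("{}", json!({
            "id": 2, "type": "vertex", "label": "foldingRangeResult",
            "result": [{ "startLine": 0, "startCharacter": 10,
                         "endLine": 2, "endCharacter": 1 }]
        }));
    }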
cc `@matklad` #8696 #3098
Co-authored-by: hamidreza kalbasi <hamidrezakalbasi@protonmail.com>
Consider these examples:
    { 92 }
    async { 92 }
    'a: { 92 }
    #[a] { 92 }
Previously, the trees for them were:
    BLOCK_EXPR
      { ... }

    EFFECT_EXPR
      async
      BLOCK_EXPR
        { ... }

    EFFECT_EXPR
      'a:
      BLOCK_EXPR
        { ... }

    BLOCK_EXPR
      #[a]
      { ... }
As you can see, it gets progressively worse :) The last two items are
especially odd. The last one even violates the balanced-curlies
invariant we have (#10357). The new approach is to say that the stuff in
`{}` is a stmt_list, and the block is a stmt_list plus optional
modifiers (a toy sketch follows the trees):
    BLOCK_EXPR
      STMT_LIST
        { ... }

    BLOCK_EXPR
      async
      STMT_LIST
        { ... }

    BLOCK_EXPR
      'a:
      STMT_LIST
        { ... }

    BLOCK_EXPR
      #[a]
      STMT_LIST
        { ... }
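To make the invariant concrete, here is a toy model of the new shape (illustrative types only, not rust-analyzer's real AST):

    // A block is a stmt_list plus optional modifiers; the braces always
    // live inside the stmt_list.
    #[derive(Debug)]
    struct StmtList { stmts: Vec<String> } // the `{ ... }` part

    #[derive(Debug)]
    struct BlockExpr {
        is_async: bool,
        label: Option<String>,
        attrs: Vec<String>,
        stmt_list: StmtList,
    }

    fn main() {
        // `'a: { 92 }` in the new tree:
        let block = BlockExpr {
            is_async: false,
            label: Some("'a".to_string()),
            attrs: Vec::new(),
            stmt_list: StmtList { stmts: vec!["92".to_string()] },
        };
        println!("{:?}", block);
    }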
FragmentKind played two roles:
* entry point to the parser
* syntactic category of a macro call
These are different use-cases and warrant different types. For example,
a macro can't expand to a visibility, but we have such a fragment today.
This PR introduces the `ExpandsTo` enum to separate these two use-cases.
I suspect we might further split `FragmentKind` into a `$x:specifier` enum
specific to MBE and a general parser entry point, but that's for
another PR!
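A rough sketch of the direction (the variant lists are illustrative, not copied from the PR):

    #![allow(dead_code)]

    // What a macro call is allowed to expand to -- note: no visibility.
    enum ExpandsTo {
        Expr,
        Statements,
        Items,
        Pattern,
        Type,
    }

    // Parser entry points remain a separate, broader set.
    enum ParserEntryPoint {
        SourceFile,
        Expr,
        Statements,
        Items,
        Pattern,
        Type,
        Visibility, // a valid entry point, but not a valid expansion target
    }

    fn main() {}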
10066: internal: improve compile times a bit r=matklad a=matklad
I wanted to *quickly* remove `smol_str = {features = "serde"}`, and figured out that the simplest way to do that is to replace our straightforward proc macro serialization with something significantly more obscure.
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
This pulls in https://github.com/rust-analyzer/rowan/pull/111, which
fixes a bug in the green node hash, making it more efficient.
On analysis stats, total memory goes from 1271mb to 1244mb, and
instructions from 358ginstr to 353ginstr (not 100% sure about the
latter -- for some reason, instruction counts are no longer stable
for me).
The counts are (before, then after):

                                          total   max_live       live
    rowan::green::node::GreenNode    11_490_596  2_357_063  2_233_347
    rowan::green::token::GreenToken   5_010_401    994_281    991_920

    rowan::green::node::GreenNode     9_738_085  1_988_164  1_890_549
    rowan::green::token::GreenToken   3_353_409    687_333    685_831
I don't think there's anything wrong with `project_model` depending on
`proc_macro_api` directly -- fundamentally, both are about gluing our pure
data model to the messy outside world.
However, it's easy enough to avoid the dependency, so why not.
As an additional consideration, `proc_macro_api` now pulls in `base_db`.
`project_model` should definitely not depend on that!
cargo llvm-lines shows that `path_to_error` bloats the code. I don't think
I've needed this functionality recently; it seems we've fixed most of
the serialization problems. So let's just remove it. It should be easy to
add back if we ever need it, and it does make sense to keep the
`from_json` function around.
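For reference, a minimal sketch of what `from_json` can look like without `path_to_error` (hypothetical signature, assuming `serde`/`serde_json`, with the error simplified to a `String`):

    use serde::de::DeserializeOwned;

    fn from_json<T: DeserializeOwned>(
        what: &'static str,
        json: serde_json::Value,
    ) -> Result<T, String> {
        // Plain serde_json error message, without the per-field path
        // tracking that `path_to_error` provided.
        serde_json::from_value(json)
            .map_err(|e| format!("Failed to deserialize {}: {}", what, e))
    }

    fn main() {
        let n: i64 = from_json("number", serde_json::json!(92)).unwrap();
        assert_eq!(n, 92);
    }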
Today, rust-analyzer (and rustc, and bat, and IntelliJ) fail badly on
some kinds of maliciously constructed code, like a deep sequence of
nested parentheses.
"Who writes 100k nested parentheses?" you'd ask.
Well, in a language with macros, a runaway macro expansion might do
just that (see the added tests)! Such an expansion can be broad, rather
than deep, so it bypasses the recursion check at the macro-expansion
layer, but triggers deep recursion in the parser.
In an ideal world, the parser would handle deeply nested structures
gracefully. We'll get there some day, but at the moment, let's try to be
simple, and just avoid expanding macros with unbalanced parentheses in
the first place.
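A minimal sketch of the guard, over a plain string for simplicity (hypothetical helper; the real check would work on token trees):

    // Refuse to expand when delimiters don't balance, instead of letting
    // the parser recurse 100k levels deep.
    fn delimiters_balanced(text: &str) -> bool {
        let mut stack = Vec::new();
        for c in text.chars() {
            match c {
                '(' | '[' | '{' => stack.push(c),
                ')' => if stack.pop() != Some('(') { return false; },
                ']' => if stack.pop() != Some('[') { return false; },
                '}' => if stack.pop() != Some('{') { return false; },
                _ => {}
            }
        }
        stack.is_empty()
    }

    fn main() {
        assert!(delimiters_balanced("((92))"));
        assert!(!delimiters_balanced(&"(".repeat(100_000)));
    }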
closes #9358
9453: Add first-class limits. r=matklad,lnicola a=rbartlensky
Partially fixes #9286.
This introduces a new `Limits` structure which is passed as an input
to `SourceDatabase`. This makes limits accessible almost everywhere in
the code, since most places have a database in scope.
One downside of this approach is that whenever you query limits, you
essentially do an `Arc::clone`, which is less than ideal.
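A hedged sketch of the shape (field names here are made up for illustration):

    use std::sync::Arc;

    #[derive(Clone, Debug, PartialEq, Eq)]
    pub struct Limits {
        pub macro_expansion_depth: usize,
    }

    pub trait SourceDatabase {
        // An input query: every call hands back a clone of the `Arc`,
        // which is the "less than ideal" cost mentioned above.
        fn limits(&self) -> Arc<Limits>;
    }

    fn main() {}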
Let me know if I missed anything, or if you'd like me to take a different approach!
Co-authored-by: Robert Bartlensky <bartlensky.robert@gmail.com>