The Heart of a Language Server (rust-analyzer.github.io)
84 points by thesuperbigfrog on Dec 30, 2023 | hide | past | favorite | 17 comments


> For Rust, a file foo.rs only exists semantically if some parent file includes it via mod foo; declaration! And, in general, it’s impossible to locate the parent file automatically. Usually, for src/bar/foo.rs the parent would be src/bar.rs, but, due to #[path] attributes which override this default, this might not be true.

This is where the path-based package structure of Java/C# was a huge win from a static analysis perspective.
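To make the #[path] caveat concrete, here is a hypothetical layout (file names invented for illustration) where the default parent-file rule breaks down:

```rust
// src/bar.rs -- by default, `mod foo;` here would load src/bar/foo.rs...
#[path = "../elsewhere/foo.rs"] // ...but this attribute mounts a different file
mod foo;

// So, given only src/bar/foo.rs on disk, a tool cannot tell whether any
// `mod` declaration actually mounts it -- it has to analyze parent files.
```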


C# does not have a path-based structure.


> The second one is actually less true in Rust than it is in Kotlin or C#! In those languages, each file starts with a package declaration, which immediately mounts the file at the appropriate place in the semantic model.

That's the difference between languages designed to be IDE-friendly and languages designed to have flexible semantics without much concern for making analysis easy.

Languages like C++, D, Lisp, Rust are terribly difficult to analyse compared to Java, Kotlin and Dart... and that's no accident.

Rust only has good support (not great, but OK) in IDEs because it managed to muster enough people for the huge effort that requires. I've been trying the Kate editor (from the KDE project), for example, and it does a pretty good job of helping with Rust thanks to the rust-analyzer LSP... all Kate had to do was provide an LSP client for that to work. The situation for D, for example, which has not had the same amount of effort (and is harder to analyze by nature), is much worse, unfortunately... that's a big reason for me to prefer Rust over D (others being more libraries and far fewer compiler bugs), despite finding D more fun to write.


>because it managed to get enough people to muster the huge effort that requires.

Not really true, on either count:

- you don't need _that_ many people, if you know what you are doing. For both rust-analyzer and IntelliJ Rust, the headcount was around one-and-a-half people between the first line written and the point where the thing basically worked

- the "mustering" only came together after the tools had proven themselves to be successful. _Today_ both rust-analyzer and RustRover have dedicated teams working on them, but that is a _consequence_ of the fact that these IDEs work great, not a cause. It was a shoestring budget at the beginning!

Rust is not the easiest language to build an IDE for, but you really don't need Google-level effort for that either. What you do absolutely need, though, is an _understanding_ of how to build an IDE, and that's very niche knowledge!


A short history of rust-analyzer can be found here[1].

> The rust-analyzer project was started at the very end of 2017 (first commit). At that time, the existing LSP implementation, RLS, had been providing IDE support for Rust for several years.

By 2017, Rust had just basic IDE support with RLS, and it was already quite popular. I completely reject your theory that the success of rust-analyzer is due to "the fact these IDEs worked great". It was Rust's success, even while having quite poor tooling, that brought people in to work on improving the tooling (to a point most other languages never reach).

> it took surprisingly little effort to get to a prototype which was already useful as an IDE, which happened in Autumn 2018

So, yeah, it takes little effort to get a basic IDE... that's where most languages (e.g. D, Nim) stop... the hard part is getting something that's really good.

> At that critical point, the company Ferrous Systems (which was newborn itself) stepped in to fund further development of the prototype.

So, rust-analyzer has not been just a hobby project since at least 2018, when it got sponsored by a business.

> By 2020, we realized that what we had built was no longer a prototype, but an already tremendously useful tool for day-to-day Rust programming.

It took two years of work, while sponsored by a real business, to get to the point where rust-analyzer was considered "tremendously useful".

That's a "huge effort" in my book. Not Google-level (and I never said that), but huge nevertheless. And to get to the same level as you get with Java in IntelliJ, for example, you can be sure it'll take about as much effort as has already been put into it, maybe more.

[1] https://blog.rust-lang.org/2022/02/21/rust-analyzer-joins-ru...


> It took 2 years of work while sponsored by a real business

With a team of 1.5 people. I personally wouldn't call that amount of effort "huge", but adjective definitions may vary. The important bit here is that "a couple of people over a couple of years" is what it takes to get decent IDE support from zero for a reasonable language, provided that the initial architecture is right.

> A short history of the rust-analyzer can be found

To clarify, I am the original author of that text, rust-analyzer, and IntelliJ Rust. To add a little more detail, the reason why rust-analyzer was started in 2017 is that before that I had been working on IntelliJ Rust. By 2017, IntelliJ Rust was already in good shape! And the success of IntelliJ Rust was the main reason why folks were willing to pay me money to work on rust-analyzer.

IntelliJ Rust was started on the 1st of September 2015 (before RLS). No one used Rust at that point, not even for blockchain, and back then I think it was impossible to predict with a high level of confidence that Rust would be big. Even more so, I think both my manager at JetBrains and I were underestimating Rust's future success at that time, as we were thinking mostly about the relatively niche systems programming market, and didn't see that Rust would be a success as a general-purpose programming language with people doing web stuff in it (obviously, that niche would be occupied by Kotlin!).

But funding one intern to hack on an IDE for an obscure language in St. Petersburg was not a significant expense for JetBrains. And, given that the plugin was well received by the then-nascent Rust community, it made sense to gamble and just hire the intern as a full-time junior dev.

>So, yeah it takes little effort to get a basic IDE... that's where most languages (e.g. D, Nim) stop... the hard part is to get something that's really good.

This glosses over an important detail: rust-analyzer was playing catch-up with RLS, work on which started roughly a year after IntelliJ Rust, I think in August 2016. Unlike rust-analyzer, RLS was a real effort by the Rust project itself, but it just couldn't be good enough, because the underlying architecture was not suitable for IDEs. Adding more effort to RLS wouldn't have helped; it was necessary to build a different thing.

This, I think, is a bad equilibrium a lot of IDE efforts can get stuck in: you really need to apply effort on top of the right architecture to not get stuck, but the knowledge of which architecture is right used to be very obscure.


I see what you're saying... most people writing LSPs for new languages are doing it for the first time. They may spend a lot of time on it and it will never be very good, because they just don't really know what it takes. It's like that for any area of programming. What I am saying is that I think it's a huge effort exactly because of that: how many people have written an LSP before and are willing to do it for a new language? The fact that JetBrains was actually behind rust-analyzer is news to me, despite having had exchanges with you before as a user of IntelliJ Rust for several years - I thought you were "independent" of JetBrains. That explains how it got so good so fast :D though I am very surprised a junior dev pulled it off... I suppose there was lots of "support" from the more experienced people, especially regarding the architecture, which I believe is what you're saying made it even possible?

Anyway, great job on the Rust plugin; I've always used it and it was always solid... though now I am "migrating" (at least trying to) to Kate, using rust-analyzer via LSP, just because JetBrains has "deprecated" the Rust plugin in favour of the new Rust-specific IDE they're cooking up (as I'm sure you're aware). I also contributed to the emacs plugin (rustic) before, but got a bit disillusioned with emacs performance :) small world.

I guess Rust would've been successful (because of its own merits as a language) even if its IDE story was as bad as other new languages normally are (and eventually, because of many people having interest in it, the situation would improve fast), but maybe I'm wrong and the existence of rust-analyzer and RLS before that did actually "cause" the success... I don't think it's completely clear which is the case, and perhaps you're a bit biased in this story (no offense).


>The fact that Jetbrains was actually behind rust-analyzer

To clarify, I was not working at JetBrains when I started to work on rust-analyzer, so they are not _directly_ behind it. They are very much indirectly behind it, though, as:

- JetBrains was funding a lot of educational projects for students of software engineering, such as myself.

- JetBrains generally pioneered modern static-analysis-based IDEs: https://martinfowler.com/bliki/PostIntelliJ.html

- Most of the things I've learned about IDEs, I learned at JB, studying their open source tools.

- I was able to spend a year hacking on rust-analyzer for free thanks to my savings from the work at JetBrains.

- And, after those funds ran out, I worked part time in the educational department of JB, teaching students algorithms, Python, and Rust.

> I suppose there was lots of "support" from the more experienced people, specially regarding the architecture, which I believe is what you're saying made it even possible?

Not really, in the sense that I didn't have a dedicated mentor checking on me on a daily basis, but also such support is not really needed. You are writing a plugin, so the overall architecture is a given, and you just fill in the details. And most of the things you need you can learn from the docs and source code of existing plugins, which also retroactively explain the overall architecture to you. So, by the time I started hacking on rust-analyzer, the core "throughput-vs-latency" tradeoff was clear to me, and, after taking an extra look at dart-analyzer, Roslyn, and Swift libsyntax, all the pieces fell into place. It's a shame of course that there's no single "IDE book", and you really need to dig into the source and uncover the knowledge, but that's why `https://rust-analyzer.github.io/blog` exists!

>I guess Rust would've been successful (because of its own merits as a language) even if its IDE story was as bad as other new languages normally are (and eventually, because of many people having interest in it, the situation would improve fast), but maybe I'm wrong and the existence of rust-analyzer and RLS before that did actually "cause" the success...

Yeah, it's hard to untangle these! I would say these all were mutually reinforcing: fundamentals of language design screaming "this _could_ be big if it doesn't fail", people getting excited about Rust early on and hacking on the language and associated tooling, the tooling making it pleasant to actually use the language, people getting excited about using Rust to do stuff and increasing the size of the ecosystem (which directly defines the amount of funding available). The bottom turtle is maybe Graydon engineering the best language-design-and-implementation team in the world?


Interesting that you put Lisp in the difficult category despite the IDE experience being in another league compared to any other language I've tried.


The IDE experience with Lisp is great because of the interactivity, not because of great analysis. You do get good analysis from the language's own introspection capabilities, but you need to use that "manually"... It's just different. I know SLIME and love it, but the analysis it's capable of doing is nowhere near the level you get with IntelliJ and Java or Kotlin. For starters, Common Lisp (and most other Lisps) doesn't even have static types (I know you can declare types, but most libs don't do it, and SLIME doesn't use that for any IDE feature as far as I know, other than perhaps warning when you use an obviously wrong argument type), making features like advanced refactoring very difficult. Macros make it very hard to even know whether the syntax is right (try writing a LOOP in Common Lisp - SLIME won't help you).


I'm not a Common Lisp expert, but so far, I dislike the LOOP macro specifically because of its complex syntax compared to other Lisp macros and constructs.

The DO macro is complex, but I'm pretty sure Emacs will write something like this at the bottom of your screen if you have ElDoc enabled (I'm not at my computer right now, so this is just me writing from memory):

  (do ((var init step)...) (condition value) &body)
You're right that this isn't as good as actual autocomplete like in IDEs, though.


Since semantic information had to be generated from syntax in the first place, why not retain that linkage somewhere so that it doesn't have to be searched again?

Also, to find usages of symbols generated by macros, can't one make a compiler plugin to dump usage information somewhere (or modify the compiler if that's impossible with a plugin)? This might be a bit stale, but much better than nothing.


Yes, and that is what these analyzers do.

The problem is that the space grows really fast, and that compilers are really not built to extract that information in a few ms.

Even less to regenerate only part of it based on partial input. Even less when the input may not be syntactically correct.

Also, keeping that linkage intact is exactly what this post talks about. How to keep it through the different steps and transformations in your pipeline, in a way adapted to the kind of queries you are going to need, is... actually hard, and dependent on the query.

Which means that adding new features to your IDE would regularly need (and actually does need) a new way to store and query that data.

But yes, reusing parts of the Rust compiler (or replacing some of them) in rust-analyzer is already something that happens and that maintainers work on.

It is just not that easy. But yes, C# with Roslyn was in general built with that in mind. TypeScript too.
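To illustrate the "retain the linkage, recompute only what changed" idea, here is a toy sketch in Rust of a revision-stamped query cache. This is a drastic simplification of what incremental frameworks like salsa do; all the names and the "line count" query are invented for illustration:

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// Inputs: file texts, plus a global revision bumped on every edit.
struct Db {
    revision: u64,
    files: HashMap<String, String>,
    // Memoized derived query: file -> (revision when computed, value).
    line_count_cache: RefCell<HashMap<String, (u64, usize)>>,
}

impl Db {
    fn new() -> Self {
        Db {
            revision: 0,
            files: HashMap::new(),
            line_count_cache: RefCell::new(HashMap::new()),
        }
    }

    // Editing an input invalidates derived results by bumping the revision.
    fn set_file(&mut self, name: &str, text: &str) {
        self.revision += 1;
        self.files.insert(name.to_string(), text.to_string());
    }

    // A derived query: reuse the cached value if nothing changed since
    // it was last computed; otherwise recompute and re-stamp it.
    fn line_count(&self, name: &str) -> usize {
        if let Some(&(rev, value)) = self.line_count_cache.borrow().get(name) {
            if rev == self.revision {
                return value; // still fresh, no recomputation
            }
        }
        let value = self.files.get(name).map_or(0, |t| t.lines().count());
        self.line_count_cache
            .borrow_mut()
            .insert(name.to_string(), (self.revision, value));
        value
    }
}
```

Real systems invalidate per dependency rather than with one global revision (so an edit to one file doesn't flush everything), which is exactly the "hard and dependent on the query" part described above.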


Reading this, I am increasingly of the belief that this is very much, if not exactly, like XSLT: you are searching a tree looking for parent, sibling, child, etc. nodes, then analysing or transforming them.


Visiting and transforming trees is indeed a well-trodden pattern. Here's another kind of tree walking which I've enjoyed working with this year:

https://libcst.readthedocs.io/en/latest/tutorial.html#Build-...

It is a little less pure than the functional XSLT, and also a lot more practical.
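As a minimal sketch of that visit-and-transform pattern (toy types, not any particular library's API), here is a bottom-up rewrite over an expression tree in Rust, where the rewrite rule is constant folding:

```rust
// A toy expression tree, in the same spirit as an XSLT input document
// or a libcst syntax tree (illustrative names only).
#[derive(Debug, PartialEq)]
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Bottom-up transform: visit the children first, then rewrite the parent.
// Nodes that match a rule (both operands are literals) are replaced;
// everything else is rebuilt unchanged, like an identity template.
fn fold(e: Expr) -> Expr {
    match e {
        Expr::Add(l, r) => match (fold(*l), fold(*r)) {
            (Expr::Num(a), Expr::Num(b)) => Expr::Num(a + b),
            (l, r) => Expr::Add(Box::new(l), Box::new(r)),
        },
        Expr::Mul(l, r) => match (fold(*l), fold(*r)) {
            (Expr::Num(a), Expr::Num(b)) => Expr::Num(a * b),
            (l, r) => Expr::Mul(Box::new(l), Box::new(r)),
        },
        leaf => leaf,
    }
}
```

The difference from XSLT is mostly ergonomic: the match arms play the role of templates, and the recursion is explicit rather than driven by the processor.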


>In this formulation, a language server needs to do just enough analysis to drill down to a specific node.

Does rust-analyzer compute semantic information on the fly and incrementally?


Yes



