It's RAG for your codebase, plus code completion. The gain is local inference, and it's actually useful with smaller models.
The plugin itself provides chat as well, but my gut feeling is that ggerganov runs several models at the same time, given he uses a 192 GB machine.
Haven't tried this scenario yet, but looking at my API bill I'm probably going to try 100% local dev at some point. Besides, vibe coding with existing tools doesn't seem to work that well for enterprise-size codebases.
Is this the one? https://github.com/ggml-org/llama.vscode It seems to be built for code completion rather than outright agent mode.
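For what it's worth, that extension talks to a locally running llama-server. A minimal sketch of the setup, assuming you have llama.cpp built and a fill-in-the-middle GGUF model downloaded (the model filename and port here are placeholders; check the repo's README for the models and port it actually recommends):

```shell
# Start a local llama.cpp server for code completion.
# The GGUF path is a placeholder; llama.vscode's README lists
# recommended fill-in-the-middle models.
# --port sets the endpoint the extension connects to,
# -ngl 99 offloads all layers to the GPU when one is available,
# -c 2048 sets the context size for completion requests.
llama-server -m ./qwen2.5-coder-1.5b-q8_0.gguf --port 8012 -ngl 99 -c 2048
```

Once the server is up, the extension sends completion requests to it instead of a paid API, which is the whole "100% local dev" appeal.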