
I'm on an M1 Max with 64 GB of RAM, but I've never used this VS Code plugin before. Should I try it?

Is this the one? https://github.com/ggml-org/llama.vscode It seems to be built for code completion rather than outright agent mode.



It is RAG for your codebase, and it provides code completion. The gain is local inference, and it is actually useful with smaller models.
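For context, the plugin talks to a locally running llama-server rather than a cloud API. A completion request can be sketched roughly like this; the `/infill` endpoint path and field names follow llama.cpp's server API as I understand it, and the port is a placeholder, so treat the details as assumptions:

```python
import json
import urllib.request

# Hypothetical sketch: build a fill-in-the-middle (FIM) request for a local
# llama-server instance, the way a completion plugin might. The endpoint path
# and field names are assumptions based on llama.cpp's server API.
SERVER_URL = "http://127.0.0.1:8012/infill"  # assumed local port

def build_infill_request(prefix: str, suffix: str, n_predict: int = 64) -> dict:
    """Payload for a FIM completion: the model fills the gap between prefix and suffix."""
    return {
        "input_prefix": prefix,   # code before the cursor
        "input_suffix": suffix,   # code after the cursor
        "n_predict": n_predict,   # cap on generated tokens
        "temperature": 0.2,       # low temperature for stable completions
    }

def request_completion(prefix: str, suffix: str) -> str:
    """Send the request; requires a llama-server instance running locally."""
    payload = json.dumps(build_infill_request(prefix, suffix)).encode()
    req = urllib.request.Request(
        SERVER_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```

Since everything stays on localhost, the round trip is just HTTP to your own machine, which is why smaller quantized models can still feel responsive.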

The plugin also provides chat, but my gut feeling is that ggerganov runs several models at the same time, given that he uses a 192 GB machine.
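Running several models at once would just mean several llama-server processes on different ports, one per role. A minimal sketch of wiring that up follows; the model filenames, ports, and context sizes here are hypothetical placeholders, not ggerganov's actual setup:

```python
# Hypothetical sketch: launch commands for two local llama-server instances,
# a small model for FIM completion and a larger one for chat. All model names
# and ports below are made-up placeholders.
SERVERS = {
    "completion": {"model": "qwen2.5-coder-7b-q8_0.gguf", "port": 8012},
    "chat":       {"model": "qwen2.5-coder-32b-q8_0.gguf", "port": 8013},
}

def launch_command(role: str) -> str:
    """Build the shell command that would start one llama-server instance."""
    cfg = SERVERS[role]
    return (
        f"llama-server -m {cfg['model']} "
        f"--port {cfg['port']} "
        f"-ngl 99 -c 8192"  # offload all layers to the GPU, 8k context
    )

for role in SERVERS:
    print(launch_command(role))
```

On a 64 GB machine you would likely pair a much smaller chat model with the completion model, since both have to fit in unified memory at the same time.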

I have not tried this scenario yet, but looking at my API bill, I'm probably going to try 100% local development at some point. Besides, vibe coding with existing tools doesn't seem to work that well for enterprise-size codebases.



