
Thanks for letting me know about this. I'd experimented with some local LLMs before, but at the time I couldn't find a good model that was small enough to run on my 3080 Ti. This one is just small enough for me to run at a usable speed (just over 1 token per second), and so far it seems nearly as good as GPT-3.5.
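For anyone else wondering whether a given model will fit on their card: a common rule of thumb is to compare the quantized weight size plus some runtime overhead against available VRAM. This is a rough sketch only; the overhead figure and the `fits_in_vram` helper are my own assumptions, not anything from the linked model's docs.

```python
def fits_in_vram(params_billion, bits_per_weight, vram_gb, overhead_gb=1.5):
    """Rough check: do the quantized weights plus a fixed overhead
    allowance (KV cache, activations, CUDA context) fit in VRAM?
    The 1.5 GB overhead default is a guess, not a measured value."""
    weight_gb = params_billion * bits_per_weight / 8  # e.g. 13B at 4-bit = 6.5 GB
    return weight_gb + overhead_gb <= vram_gb

# A 3080 Ti has 12 GB of VRAM.
print(fits_in_vram(13, 4, 12))   # 4-bit 13B: fits
print(fits_in_vram(13, 16, 12))  # fp16 13B: does not fit
```

Real memory use depends on context length and the runtime, so treat this as a first-pass filter, not a guarantee.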



