Hacker News

Why not just run a local LLM for practically nothing? You can even trivially parallelize it by running multiple instances.

I'd expect that many NLP problems can be solved easily even by smaller models.
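A minimal sketch of what that parallelization might look like, assuming a local inference server such as Ollama listening on its default port; `query_llm` is a placeholder here so the sketch runs without a server, with the real HTTP call shown in a comment:

```python
# Fan a batch of NLP tasks out to a local LLM using a thread pool.
# query_llm is a stand-in; in practice it would call a local server
# (assumption: Ollama at http://localhost:11434).
from concurrent.futures import ThreadPoolExecutor


def query_llm(prompt: str) -> str:
    # Placeholder for the real call, e.g. with the requests library:
    # requests.post("http://localhost:11434/api/generate",
    #               json={"model": "llama3", "prompt": prompt,
    #                     "stream": False}).json()["response"]
    return f"label for: {prompt}"


prompts = [f"Classify the sentiment of review #{i}" for i in range(8)]

# Several requests in flight at once; each worker could also target a
# separate server instance for true parallelism across processes.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(query_llm, prompts))

print(len(results))  # one result per prompt
```

Since the work is I/O-bound (waiting on the server), threads are enough; for multiple server instances you would route each worker to a different port.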


