Hacker News

For what it's worth, locally runnable language models are becoming exceptionally capable these days, so if you assume you will have some computer to do computing, it seems reasonable to assume it will let you do some language-model-based things. I have a server with a single GPU running language models that easily blow GPT-3.5 out of the water. At that point, I am offloading reasoning tasks to my computer in the same way that I offload memory tasks to my computer through my note-taking habits.
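In practice, "offloading a reasoning task" to a local model usually means calling an OpenAI-compatible HTTP endpoint that local servers such as llama.cpp's `llama-server` or Ollama expose. A minimal sketch using only the standard library; the endpoint URL, port, and model name here are assumptions, not details from the comment:

```python
import json
import urllib.request

# Assumed local endpoint; llama-server and Ollama both expose an
# OpenAI-compatible /v1/chat/completions route, but the port varies.
LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"


def build_request(prompt: str, model: str = "local-model") -> dict:
    """Construct the JSON body for a chat-completion call."""
    return {
        "model": model,  # hypothetical model name; use whatever you serve
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature suits reasoning-style tasks
    }


def ask(prompt: str) -> str:
    """POST the request to the local server and return the reply text."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the wire format matches the hosted APIs, swapping between a local model and a remote one is just a change of URL.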


