
Is there something like Moore's law for LLMs that will eventually turn them into ubiquitous compute?


There are scaling laws showing that LLMs keep improving when trained on an order of magnitude more data than the current state of the art uses, and that model size can be traded off against training data. That suggests far-beyond-GPT-4-level performance should eventually be possible in a model that fits in 4 GB of RAM, given enough training data and compute time.
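
To put rough numbers on that, here is a back-of-envelope sketch (not from the comment above; the 4-bit quantization and the ~20-tokens-per-parameter compute-optimal rule of thumb are my assumptions):

    # Back-of-envelope: how many parameters fit in 4 GB, and the training-token
    # counts the scaling-law argument implies. 4-bit weights and the
    # ~20 tokens/parameter "compute-optimal" ratio are assumptions.
    bytes_per_param = 0.5            # 4-bit quantized weights
    ram_bytes = 4 * 1024**3          # 4 GB of RAM

    params = ram_bytes / bytes_per_param        # ~8.6e9 parameters
    optimal_tokens = 20 * params                # ~1.7e11 tokens, rule-of-thumb optimum
    overtrained_tokens = 10 * optimal_tokens    # ~1.7e12 tokens: "an order of magnitude more"

    print(f"params in 4 GB:          {params:.1e}")
    print(f"compute-optimal tokens:  {optimal_tokens:.1e}")
    print(f"10x overtrained tokens:  {overtrained_tokens:.1e}")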

So, kinda?



