
Yes, they've made insane scaling bets before and they have paid off.

But if what we've heard is true, that they haven't had an acceptable pre-training run in the last two years, then trying to increase training memory by two orders of magnitude is just a rehash of what got them from GPT-2 to GPT-3.


