
> Would this allow you to run each expert on a cheap commodity GPU card so that instead of using expensive 200GB cards we can use a computer with 8 cheap gaming cards in it?

I would think it would work no differently than running a large regular model on a multi-GPU setup (which people do!). It's still all one network even if not all of it is activated for each token, and since it's much smaller than a 56B model, it seems there are significant components of the network that are shared.
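To make that concrete, here is a toy sketch of the "one expert per card" idea: each expert FFN lives on its own GPU while the router and shared layers stay on cuda:0. This is not Mixtral's actual implementation; the layer names, dimensions, and per-device layout are all illustrative assumptions, and it presumes 8 CUDA devices are present.

    import torch
    import torch.nn as nn

    class ShardedMoELayer(nn.Module):
        """Toy MoE layer: each expert FFN placed on its own GPU."""
        def __init__(self, d_model=4096, d_ff=14336, n_experts=8, top_k=2):
            super().__init__()
            self.top_k = top_k
            # Router stays on GPU 0 alongside the shared layers.
            self.router = nn.Linear(d_model, n_experts).to("cuda:0")
            # One expert per device -- the "8 cheap cards" layout.
            self.experts = nn.ModuleList(
                nn.Sequential(
                    nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model)
                ).to(f"cuda:{i}")
                for i in range(n_experts)
            )

        def forward(self, x):  # x: (tokens, d_model) on cuda:0
            weights, idx = self.router(x).softmax(-1).topk(self.top_k, dim=-1)
            out = torch.zeros_like(x)
            for k in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, k] == e
                    if mask.any():
                        # Ship only the routed tokens to the expert's GPU and back.
                        y = expert(x[mask].to(f"cuda:{e}")).to("cuda:0")
                        out[mask] += weights[mask, k : k + 1] * y
            return out

The catch is the cross-device traffic: every routed token's activations hop to the expert's card and back on each layer, so cheap cards on a slow interconnect can end up spending more time moving data than computing.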



Attention is shared. It's ~30% of params here. So ~2B params are shared between experts and ~5B params are unique to each expert.
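If those rough figures hold, a quick back-of-the-envelope shows the total vs. active parameter counts. This assumes top-2 routing over 8 experts (as in Mixtral-style models), and the 2B/5B splits are the comment's approximations, not exact numbers:

    # Back-of-the-envelope check of the figures above.
    shared = 2e9        # attention etc., shared by all experts
    per_expert = 5e9    # each expert's unique FFN weights
    n_experts, top_k = 8, 2

    total = shared + n_experts * per_expert   # params that must sit in memory
    active = shared + top_k * per_expert      # params actually used per token

    print(f"total: {total/1e9:.0f}B, active per token: {active/1e9:.0f}B")
    # total: 42B, active per token: 12B

So roughly 42B parameters have to fit in memory even though only ~12B are touched per token, which is why MoE trades memory footprint for per-token compute.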



