Hacker News

The real question for me is how much they target GPU support toward accelerating language and vision models in the MacBook Pro lineup. I don't know whether Apple actually cares, but they've got a huge opportunity to steal the spotlight from Nvidia if they make inference competitive with Nvidia's chips.


Nvidia's spotlight comes from $30k datacenter GPUs and networking equipment. So unless Apple starts making those, nobody but a few privacy-conscious enthusiasts will care about running some mediocre models at glacial speeds on their MacBook.


Sure, that's the datacenter market, and Nvidia will probably always own it.

But I disagree that only privacy-conscious enthusiasts will want to run models locally in the end. Right now, amid the hype froth and bubble, and while SOTA advances fast enough that getting the very latest hosted model is super compelling, nobody cares about local inference. Longer term, especially once hosted services start deciding to try to make real money (read: ads, tracking, data mining, etc.), this is going to change a lot. If you can get close to the same performance locally, without data-security issues, ads, or expensive subscriptions, I think it will be very popular. And Apple is almost uniquely positioned to exploit this once the pendulum swings back that way.



