Hacker News

So what you're saying is Intel, or any other would-be NVIDIA competitor, needs to put out fast interconnects, not just compute cards. This is true.

I'm not sure your argument stands when it comes to OP's idea of a single card with 128GB VRAM. That would be enough to run ~180B-parameter models at reasonable quantization, and we're nowhere near maxing out what 180B can do (see the latest 32B models performing near the public SOTA).
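A rough back-of-envelope check of the fit (my numbers, not OP's): at ~4-bit quantization the weights alone for a 180B model come to about 90GB, leaving headroom within 128GB even after KV cache and activation overhead. A minimal sketch, assuming 4-bit weights and a hypothetical ~20% runtime overhead:

```python
# Back-of-envelope VRAM estimate for running a quantized model.
# Assumptions (mine, illustrative only): 4-bit weights, ~20% overhead
# for KV cache, activations, and framework bookkeeping.

def vram_gb(params_billion, bits_per_weight=4, overhead=0.20):
    """Approximate VRAM needed, in GB, for inference."""
    weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb * (1 + overhead)

print(f"180B at 4-bit: ~{vram_gb(180):.0f} GB")  # fits in 128 GB
print(f"405B at 4-bit: ~{vram_gb(405):.0f} GB")  # does not fit
```

Under these assumptions a 180B model lands around 108GB (inside the card), while 405B needs roughly 243GB, which is why the disruption argument stops short of the largest models.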

This would indeed drive rapid, wide adoption and be quite disruptive. But sure, it wouldn't instantly enable competitive training of 405B models.
