
Are you suggesting that Intel 'just' release a GPU at the same price point as an M4 Max SoC? And that there would be a large market for it if they did? It seems like an extremely niche product that would be demanding to manufacture. The M4 Max makes sense because it's a complete system Apple can sell to its price-insensitive audience; Intel doesn't yet have a captive market like that to sell bespoke LLM accelerator cards to.

If this hypothetical 128GB LLM accelerator were also a capable GPU, that would be more interesting, but Intel hasn't proven an ability to execute at that level yet.



Nothing in my comment suggests pricing it at the M4 Max level. Apple charges that much because it can (I'm typing this on an $8000 M3 Max). 128GB of LPDDR5 is dirt cheap these days; Apple just adds its premium because it likes to. Nothing prevents Intel from releasing a basic GPU with that much RAM for under $1k.


You're asking for a GPU die at least as large as NVIDIA's TU102, which cost $1k in 2018 when paired with only 11GB of RAM (because $1k couldn't get you a fully-enabled die that could use 12GB of RAM). I think you're off by at least a factor of two in your cost estimates.
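
As a rough back-of-the-envelope sketch of why capacity is tied to bus width on a GDDR card (the chip sizes and clamshell assumption below are illustrative, not any specific product's spec):

    # Each 32-bit slice of a GDDR bus hosts one DRAM package
    # (or two in clamshell mode), so capacity scales with bus width.
    def gddr_capacity_gb(bus_width_bits, gb_per_chip=1, clamshell=False):
        channels = bus_width_bits // 32
        return channels * gb_per_chip * (2 if clamshell else 1)

    print(gddr_capacity_gb(352))  # 11 GB: a cut-down TU102 board with 8Gb chips
    print(gddr_capacity_gb(384))  # 12 GB: the full 384-bit bus enabled
    print(gddr_capacity_gb(1024, gb_per_chip=2, clamshell=True))  # 128 GB would need a 1024-bit bus even with 16Gb chips on both sides

In other words, 128GB of conventional GDDR implies a memory interface far wider than any GDDR bus that has shipped on a single consumer GPU die.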


If Intel came out with an Arc GPU with 128GB of VRAM at a $2000 price point, I and many others would likely buy it immediately.


Though Intel should also identify, say, the top 100 fine-tuners and just send them cards for free, on the down low. That would create some market pressure.


HBM plz.


Intel has Xeon Phi, which was a spin-off of their first attempt at a GPU, so they already have a lot of tech in place they can reuse. They don't need to go with GDDRx/HBMx designs that require large dies.


I don't want to drag this discussion out, but maybe you don't realise that some of the people who replied to you either design hardware for a living or have been in the hardware industry for more than 20 years.


While it is not a GPU, Qualcomm has already made an inference card with 128GB of RAM:

https://www.qualcomm.com/news/onq/2023/11/introducing-qualco...

It would be interesting if those saying that a regular GPU with 128GB of VRAM cannot be made would explain how Qualcomm was able to make this card. It is not a big stretch to imagine a GPU with the same memory configuration. Note that Qualcomm did not use HBM for this.


For some reason Apple did it with the M3/M4 Max, likely built by folks who are also on HN. The question is how many of those years spent designing HW were also spent educating oneself on the latest and best ways to do it.


> For some reason...

They already replied with an answer.


Even LPDDR requires a large die. It only moves things from technologically impossible to merely economically impractical. A 512-bit bus is still very inconveniently large for a single die.
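
For scale, here's a minimal sketch of what a 512-bit LPDDR interface buys you, assuming LPDDR5X-8533 parts (roughly what ships in the M4 Max; the exact speed grade is an assumption on my part):

    # Peak bandwidth of a 512-bit LPDDR5X interface at 8533 MT/s.
    bus_bits = 512                # i.e. 32 separate 16-bit LPDDR channels
    transfers_per_sec = 8533e6    # per pin
    bytes_per_transfer = bus_bits / 8
    print(transfers_per_sec * bytes_per_transfer / 1e9)  # ~546 GB/s

Routing those 32 channels off a single die and package is exactly the inconvenient part.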


> release a GPU at the same price point as an M4 Max SOC

Why would it need to be introduced at Apple's high-margin pricing?


It's also impossible and it would need to be a CPU.

CPUs and GPUs access memory very differently.



