
The problem is, we can correct for wattage and it's still a blowout.

Compare the M2 Ultra's 5nm GPU performance-per-watt to a product like Nvidia's RTX A5000 laptop GPU on Samsung 8nm. The M2 Ultra is rated to pull over 290 W at load, compared to the A5000's maximum TDP of 165 W. Apple's desktop-grade iGPUs are less power-efficient than Nvidia's last-generation laptop GPUs, even with a clear node advantage and a higher power budget.
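
To make the normalization concrete, here's a minimal sketch of the perf-per-watt comparison being described. The wattage figures (290 W and 165 W) come from the comment above; the benchmark scores are hypothetical placeholders, so substitute real results from whichever benchmark you trust.

  # Minimal sketch: performance-per-watt = benchmark score / sustained power draw.
  # Scores below are placeholders, not real measurements.

  def perf_per_watt(score: float, watts: float) -> float:
      """Benchmark score divided by sustained power draw."""
      return score / watts

  gpus = {
      # name: (hypothetical benchmark score, sustained power draw in watts)
      "M2 Ultra GPU":         (100.0, 290.0),  # placeholder score
      "RTX A5000 Laptop GPU": (100.0, 165.0),  # placeholder score
  }

  for name, (score, watts) in gpus.items():
      print(f"{name}: {perf_per_watt(score, watts):.3f} points/W")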

You don't have to just look at the server chips. Compare like-for-like and you'll quickly find that Apple's GPU designs are pretty lacking across the board. The M3 Max barely trades blows with Nvidia's 2000-series in performance-per-watt. When you pull out the 4000-series hardware and compare efficiency, it's just plainly unfair. The $300 RTX 4060 beats Apple's $3000 desktop GPUs on both raw performance and efficiency.



Why are you comparing SoCs to dedicated GPUs?


Why not? Apple's SoCs sit in the 200-300 W range that desktop systems occupy, so it's only fair to compare them with their contemporaries. Nvidia is reportedly developing desktop SoCs too, so we'll soon have like-for-like systems to compare, but I don't see how laptop dGPUs are an unfair comparison against desktop SoCs in any case. It's basically the same hardware under similar thermal constraints.

Plus, say we did include all of the dGPUs that Apple officially supports: not even the W6800X gets better performance-per-watt than an Nvidia laptop chip. That only highlights the fact that Apple goes out of its way to prevent Nvidia from providing GPU support on the Mac.




