
If you scroll down a little to the chip icon, where it says "NVIDIA GB10 Superchip", it also says "Experience up to 1 petaFLOP of AI performance at FP4 precision with the NVIDIA Grace Blackwell architecture."

Further down, in the exploded view it says "Blackwell GPU 1PetaFLOP FP4 AI Compute"

Then further down in the spec chart they get less specific again with "Tensor Performance^1 1 PFLOP" and "^1" says "1 Theoretical FP4 TOPS using the sparsity feature."

Also, if you click "Reserve Now", the second line below that (redundant) "Reserve Now" button says "1 PFLOPS of FP4 AI performance".

I mean, I'll give you that they could be clearer and that it's not cool to hype up FP4 performance, but they aren't exactly hiding the context like they did during GTC. I wouldn't call this "disingenuous".



Even if that "sparsity feature" means that two out of every four adjacent values in your array must be zeros, and performance halves if you aren't doing this?
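
To make that concrete, here's a rough numpy sketch of what the 2:4 constraint means (the function and the pruning step are just illustrative, not NVIDIA's actual tooling):

    import numpy as np

    def is_2_4_sparse(w):
        """True if every group of 4 adjacent values in each row has <= 2 non-zeros."""
        assert w.shape[-1] % 4 == 0, "last dim must be a multiple of 4"
        groups = w.reshape(*w.shape[:-1], -1, 4)   # (..., n_groups, 4)
        return bool(np.all(np.count_nonzero(groups, axis=-1) <= 2))

    w = np.random.randn(8, 16)
    print(is_2_4_sparse(w))   # almost certainly False: dense data never fits the pattern

    # The usual fix is to zero the two smallest-magnitude values in each
    # group of four (at some accuracy cost), which is what "using the
    # sparsity feature" quietly assumes:
    groups = w.reshape(8, -1, 4)                   # view into w
    idx = np.argsort(np.abs(groups), axis=-1)[..., :2]
    np.put_along_axis(groups, idx, 0.0, axis=-1)
    print(is_2_4_sparse(w))   # True, and half the weights are now gone

If your data doesn't fit that pattern, you're back at the non-sparse rate, i.e. half the headline number before you even get to the FP4 caveat.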

I think lots of children are going to be very disappointed running their BLAS benchmarks on Christmas morning and seeing barely tens of teraflops.

(For reference, see how much lower the H200's still-optimistic numbers are when you use realistic datatypes:

https://nvdam.widen.net/s/nb5zzzsjdf/hpc-datasheet-sc23-h200... )
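
If you want to reproduce the disappointment, a minimal version of that benchmark is just timing a big dense matmul and dividing (sizes here are arbitrary, and numpy routes the matmul through whatever BLAS is installed, so this measures plain dense FP32, not FP4-with-sparsity):

    import time
    import numpy as np

    n = 8192
    a = np.random.randn(n, n).astype(np.float32)
    b = np.random.randn(n, n).astype(np.float32)

    a @ b                            # warm-up
    t0 = time.perf_counter()
    a @ b
    dt = time.perf_counter() - t0

    flops = 2 * n ** 3               # n^3 multiply-adds, 2 FLOPs each
    print(f"{flops / dt / 1e12:.2f} TFLOP/s at FP32")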


Using sparsity in advertising is incredibly misleading to the point of lying. The entire point of sparsity is that you avoid doing calculations. Sparsity support means you need fewer FLOPs for a matrix of the same size. It doesn't magically increase the number of FLOPs you have.
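
Back-of-the-envelope, for a square matmul the accounting looks like this (sizes arbitrary): the sparse case executes fewer operations, and the headline figure is just the dense-equivalent work credited back.

    m = n = k = 4096

    dense_flops  = 2 * m * n * k      # every multiply-add actually performed
    sparse_flops = dense_flops // 2   # 2:4 sparsity skips the zeroed half

    print(f"dense matmul : {dense_flops:>15,} FLOPs executed")
    print(f"2:4 sparse   : {sparse_flops:>15,} FLOPs executed")
    # Quoting the dense-equivalent figure for the sparse case is what turns
    # the non-sparse rate (half the headline) into the advertised "1 PFLOP".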

Even AMD got that memo and mostly advertises its 8-bit / block-FP16 performance on its GPUs and NPUs, even though the NPUs support 4-bit INT with sparsity, which would 4x the quoted numbers if they used Nvidia's marketing FLOPs.



