If you scroll down a little to the chip icon, where it says "NVIDIA GB10 Superchip", it also says "Experience up to 1 petaFLOP of AI performance at FP4 precision with the NVIDIA Grace Blackwell architecture."
Further down, in the exploded view it says "Blackwell GPU 1PetaFLOP FP4 AI Compute"
Then further down, in the spec chart, they get less specific again with "Tensor Performance^1 1 PFLOP", where footnote ^1 reads "Theoretical FP4 TOPS using the sparsity feature."
Also, if you click "Reserve Now", the second line below that redundant "Reserve Now" button says "1 PFLOPS of FP4 AI performance".
I mean, I'll give you that they could be clearer and that it's not cool to hype up FP4 performance alone, but they aren't exactly hiding the context like they did during GTC. I wouldn't call this "disingenuous".
Using sparsity in advertising is incredibly misleading to the point of lying. The entire point of sparsity is that you avoid doing calculations. Sparsity support means you need fewer FLOPs for a matrix of the same size. It doesn't magically increase the number of FLOPs you have.
Even AMD got that memo and mostly advertises 8-bit / block-FP16 performance on their GPUs and NPUs, even though the NPUs support 4-bit INT with sparsity, which would 4x the quoted numbers if they used Nvidia's marketing FLOPs (halving the data width doubles the rate, and counting sparsity as if the skipped math were done doubles it again; rough arithmetic sketched below).
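To make the marketing math concrete, here is a minimal sketch of how the quoted numbers inflate. All the rates are made-up round values for illustration, not published specs:

```python
# Sketch of "sparsity FLOPS" marketing math. Every rate below is an
# assumed round number for illustration, not a published spec.

dense_fp4_tflops = 500  # hypothetical dense FP4 rate

# Structured sparsity lets the hardware skip half the multiply-accumulates,
# so a sparse matrix of the same size finishes in ~half the time. Counting
# the *skipped* MACs as if they had been executed doubles the quoted figure:
marketing_fp4_tflops = dense_fp4_tflops * 2  # 1000 "TFLOPS", i.e. "1 PFLOP"

# The same trick applied to a hypothetical NPU advertised at its dense
# 8-bit rate, as in the AMD comparison above:
quoted_int8_tops = 50                 # hypothetical advertised dense 8-bit rate
int4_tops = quoted_int8_tops * 2      # halving data width doubles throughput
int4_sparse_tops = int4_tops * 2      # counting skipped MACs doubles it again

print(marketing_fp4_tflops)                 # 1000
print(int4_sparse_tops / quoted_int8_tops)  # 4.0 -> the "4x" above
```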