Hacker News

>> inference kernels produced incorrect results when used in certain GPU configurations.

It seems reasonable to assume that GPT inference is done entirely on Nvidia GPUs. I wonder if this is a subtle clue that they're experimenting with getting it to run on competing hardware.
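One common way "incorrect results in certain GPU configurations" shows up is floating-point non-determinism: different hardware or kernel launch shapes partition a reduction differently, and floating-point addition is not associative. The article doesn't say this was the cause here; the following is just a minimal pure-Python sketch (hypothetical values, no real kernel involved) of how two accumulation orders over the same data can diverge:

```python
# Floating-point addition is not associative, so the same reduction
# can give different answers depending on how it is partitioned --
# e.g. a sequential loop vs. a parallel tree reduction on a GPU.
import math

# Mixed large/small magnitudes make rounding error visible.
values = [1e16, 1.0, -1e16, 1.0] * 1000

# Order 1: sequential accumulation. Each +1.0 is absorbed into the
# huge 1e16 partial sum and lost to rounding.
sequential = 0.0
for v in values:
    sequential += v

# Order 2: pairwise (tree) accumulation, the order a parallel
# reduction typically uses.
def pairwise(xs):
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return pairwise(xs[:mid]) + pairwise(xs[mid:])

tree = pairwise(values)

# Correctly rounded reference: the large terms cancel exactly,
# leaving the sum of 2000 ones.
exact = math.fsum(values)

print(sequential, tree, exact)
```

Here `sequential` collapses to 1.0 while the true sum is 2000.0; the tree order generally lands somewhere else again. A kernel whose partitioning changes with the GPU configuration can produce exactly this kind of configuration-dependent drift.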



Why would they subtly hint at anything they’re trying to keep secret?


I don't know. Maybe they'd like to drive down the price of Nvidia so they can buy it.


Now I know why Altman said he needs $7 trillion: $2 trillion is for Nvidia.



