Hacker News

My impression is the same. To train anything, you effectively need CUDA GPUs. For inference, I think AMD and Apple M-series chips are getting better and better.


For inference, Nvidia/AMD/Intel/Apple are all generally on the same tier now.

There's a pull request on GitHub from a madman who got llama.cpp generating tokens for a model running on an Intel Arc, an Nvidia 3090, and an AMD GPU at the same time: https://github.com/ggml-org/llama.cpp/pull/5321
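For a sense of how mixing GPU vendors works in practice, here's a hedged sketch using llama.cpp's RPC backend: build one worker per backend, run an `rpc-server` per device, and point one client at all of them. The linked PR may use a different mechanism (e.g. multiple backends compiled into one binary), and `model.gguf` is a placeholder, so treat this as an illustration rather than a reproduction of that setup:

```shell
# Sketch: split one model across GPUs from different vendors via llama.cpp RPC.
# Assumes a llama.cpp checkout; model.gguf is a placeholder path.

# Build one copy per backend:
cmake -B build-cuda   -DGGML_CUDA=ON   && cmake --build build-cuda   # Nvidia 3090
cmake -B build-sycl   -DGGML_SYCL=ON   && cmake --build build-sycl   # Intel Arc
cmake -B build-vulkan -DGGML_VULKAN=ON && cmake --build build-vulkan # AMD GPU

# Start an RPC worker on each device:
./build-cuda/bin/rpc-server   --port 50052 &
./build-sycl/bin/rpc-server   --port 50053 &
./build-vulkan/bin/rpc-server --port 50054 &

# One client offloads layers across all three workers:
./build-cuda/bin/llama-cli -m model.gguf \
  --rpc localhost:50052,localhost:50053,localhost:50054 \
  -ngl 99 -p "Hello"
```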



