
Yeah, for example, llama.cpp runs on Intel GPUs via Vulkan or SYCL. The latter backend is actively maintained by Intel developers.

Obviously that is only one piece of software, but it's certainly a useful one if you are using one of the many LLMs it supports.
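
For anyone wondering what using it looks like in practice, here is a minimal sketch via the llama-cpp-python bindings (a separate wrapper around llama.cpp, not mentioned above). It assumes the bindings were compiled with the SYCL or Vulkan backend enabled so layer offload goes to the Intel GPU, and the model path and prompt are placeholders:

    # Minimal sketch: running a GGUF model through llama-cpp-python.
    # Assumes the bindings were built with the SYCL or Vulkan backend enabled,
    # so n_gpu_layers offloads work to the Intel GPU.
    # The model path below is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",
        n_gpu_layers=-1,  # offload all layers to the GPU backend
        n_ctx=4096,       # context window size
    )

    output = llm.create_completion(
        "Explain what SYCL is in one sentence.",
        max_tokens=64,
    )
    print(output["choices"][0]["text"])

The point is that nothing in the calling code changes between backends; which GPU path gets used is decided when llama.cpp itself is built.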




