
No. They want you to target HIP, a CUDA-style API whose code can be compiled to run on either CUDA (NVIDIA) or ROCm (AMD).

https://rocm.docs.amd.com/projects/HIP/en/latest/index.html
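For anyone unfamiliar, here is roughly what targeting HIP looks like. This is just an illustrative sketch (the vector-add kernel and variable names are mine, not from the docs): the kernel syntax mirrors CUDA, and the same source can be built with hipcc against the ROCm backend on AMD or mapped onto the CUDA runtime on NVIDIA.

    // Minimal HIP vector-add sketch (illustrative only).
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void vec_add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // same indexing as CUDA
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        float *da, *db, *dc;
        hipMalloc(&da, n * sizeof(float));
        hipMalloc(&db, n * sizeof(float));
        hipMalloc(&dc, n * sizeof(float));
        hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

        // HIP's launch macro; hipcc also accepts the CUDA <<<...>>> syntax.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        hipLaunchKernelGGL(vec_add, dim3(blocks), dim3(threads), 0, 0, da, db, dc, n);
        hipDeviceSynchronize();

        hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
        printf("c[0] = %f\n", c[0]);  // expect 3.0

        hipFree(da); hipFree(db); hipFree(dc);
        return 0;
    }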



And with HIP now supporting Windows[0], it could be a game changer. I would love to see it working with llama.cpp[1]. I tried HIP via Blender, but it did not work with my 6600 XT; it still used my CPU...

[0] https://github.com/RadeonOpenCompute/ROCm/issues/666#issueco...

[1] https://github.com/ggerganov/llama.cpp/discussions/915#discu...



