
Will this work with ROCm instead of CUDA?



Or MLX/Apple?


No way. AMD is lightyears behind in software support.


That isn't really what being behind implies. We've known how to multiply matrices since ... at least the 70s. And video processing isn't a wild new task for our friends at AMD. I'd expect that this would run on an AMD card.

But I don't own an AMD card to check, because when I did it randomly crashed too often doing machine learning work.


I have a 9070 XT... ROCm at the moment is unoptimized for it, and the generation speed is lower than it should be unless AMD is fudging the specs. The memory management is also dire/buggy and will cause random OOMs on one run, then be fine on the next. Splitting the workflow helps, so you can have the OOM crash in between stages. VAEs also crash from OOM. This is all just software issues, because VRAM isn't released properly on AMD.

*OOM = Out Of Memory Error
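
For what it's worth, here's a minimal sketch (assuming a PyTorch ROCm build, which is what ComfyUI sits on) of manually releasing cached VRAM between stages; ROCm builds still expose the GPU through the torch.cuda APIs, so the same calls apply. The stage functions in the comments are hypothetical placeholders:

    # Minimal sketch: release cached VRAM between workflow stages on a
    # PyTorch ROCm build (the GPU is still addressed via torch.cuda).
    import gc
    import torch

    def free_vram():
        gc.collect()                          # drop dangling Python references first
        torch.cuda.empty_cache()              # hand cached blocks back to the driver
        torch.cuda.reset_peak_memory_stats()  # so the next stage's peak is measured cleanly

    # e.g. between the diffusion pass and the VAE decode:
    # latents = run_diffusion(...)   # hypothetical stage functions
    # free_vram()
    # frames = vae.decode(latents)
    print(torch.cuda.memory_allocated() / 1e9, "GB still allocated")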


The 2B model was running well on AMD; fingers crossed for the 13B too: https://www.reddit.com/user/kejos92/comments/1hjkkmx/ltxv_in...


Any idea how I could implement that for ComfyUI on the 9070? Going to try applying what's in the Reddit post to my venv and see if it does anything.
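
Before digging into the workflow itself, a quick sanity check (assuming a PyTorch ROCm wheel in the venv) that the card is visible at all:

    # Verify the venv's torch is a ROCm build and can see the 9070.
    import torch

    print(torch.__version__)          # ROCm wheels report something like "2.x.x+rocm6.x"
    print(torch.version.hip)          # None on CUDA/CPU builds, a HIP version string on ROCm
    print(torch.cuda.is_available())  # ROCm GPUs show up through the cuda API
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))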


update: didn't help :')


Sometimes it takes a little more work to get things set up, but it works fine. I've run plenty of models on my 7900 XTX: Wan2.1 14B, Flux.1-dev, and Whisper (Wan and Flux with ComfyUI, Whisper with whisper.cpp).
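
To make that concrete, a minimal sketch of what running Flux on a ROCm card looks like, using diffusers rather than ComfyUI; the model id and settings are just illustrative, and the only ROCm-specific part is that the GPU is addressed as "cuda":

    # Minimal sketch: Flux.1-dev via diffusers on a ROCm card (e.g. 7900 XTX).
    # A ROCm-built PyTorch maps the GPU to the "cuda" device string.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",   # model id for illustration
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    image = pipe(
        "a lighthouse at dusk, film grain",
        num_inference_steps=28,
        guidance_scale=3.5,
    ).images[0]
    image.save("out.png")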


Specifically for video? Ollama runs great on my 7900 XTX.



