You might want to check out RamaLama. It's a container-based replacement for Ollama from the same folks who brought us Podman.

I tried it a while back and was very surprised to find that simply running `uvx ramalama run deepseek-r1:1.5b` just worked. I'm on Fedora Silverblue with nothing layered on the ostree. Before RamaLama, getting llama.cpp working with my GPU was a major PITA.
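For anyone who wants to try it, a rough sketch of the workflow. The CLI subcommands mirror ollama's naming (`pull`, `run`, `serve`); the exact behavior described in the comments is my understanding, not gospel:

  # one-off run via uv's tool runner, no install step needed;
  # ramalama pulls a container with llama.cpp and picks up the GPU itself
  uvx ramalama run deepseek-r1:1.5b

  # fetch the model without running it
  uvx ramalama pull deepseek-r1:1.5b

  # serve it over HTTP (llama.cpp's OpenAI-compatible endpoint)
  uvx ramalama serve deepseek-r1:1.5b

The nice part is that because inference runs inside a container, nothing about llama.cpp or the GPU stack needs to be installed on the host, which is exactly what you want on an image-based distro like Silverblue.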

https://github.com/containers/ramalama
