> llama.cpp is designed to rapidly adopt research-level optimisations and features, but the downside is that reported speeds change all the time

Ironic that (according to the article) ollama rushed to implement GPT-OSS support and, if I understand correctly, broke the rest of the GGUF quants in the process.


