
The Linux kernel does not break userspace.

> What's wrong with using an older well-tested build of llama.cpp, instead of reinventing the wheel?

Yeah, they tried this; that was the old setup as I understand it. But every time they needed support for a new model and had to update llama.cpp, an old model would break and one of their partners would go ape on them. They said it happened more than once, but one particular case (I wish I could remember which) was so bad they felt they had no choice but to reimplement. It's the lowest-risk strategy.



> Yeah, they tried this; that was the old setup as I understand it. But every time they needed support for a new model and had to update llama.cpp, an old model would break and one of their partners would go ape on them.

Shouldn't any such regressions be regarded as bugs in llama.cpp and fixed there? Surely the Ollama folks can test and benchmark the main models that people care about before shipping the update in a stable release. That would be a lot easier than trying to reimplement major parts of llama.cpp from scratch.
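
Something like a pre-release smoke test over the models they ship would catch the worst of it. A minimal sketch, assuming the llama-cli binary from a candidate llama.cpp build and a local directory of pinned GGUF files (all paths and the prompt here are hypothetical):

    # try to load and run every pinned model with the candidate build;
    # anything that fails to load or crashes gets flagged before release
    mkdir -p out
    for m in models/*.gguf; do
        ./build/bin/llama-cli -m "$m" -p "Hello" -n 16 --seed 42 \
            > "out/$(basename "$m").txt" 2>&1 || echo "REGRESSION: $m"
    done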


> every time they needed support for a new model and had to update llama.cpp, an old model would break and one of their partners would go ape on them. They said it happened more than once, but one particular case (I wish I could remember which) was so bad they felt they had no choice but to reimplement. It's the lowest-risk strategy.

A much lower-risk strategy would be using multiple versions of llama-server to keep supporting old models that break on newer llama.cpp versions.
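
A minimal sketch of what that could look like, with hypothetical build tags, model names, and ports (only llama-server's -m and --port flags are taken from llama.cpp itself):

    # keep two pinned llama-server builds side by side and run both;
    # a thin router (or Ollama itself) then picks one per model
    ./bin/llama-server-b3600 -m models/legacy-model.gguf --port 8081 &
    ./bin/llama-server-b4500 -m models/new-model.gguf --port 8082 &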


The Ollama distribution size is already pretty big (at least on Windows) due to all the GPU support libraries and whatnot. Having to multiply that by the number of llama.cpp versions supported would not be great.


?

    llamacpp> ls -l *llama*
    -rwxr-xr-x 1 root root 2505480 Aug  7 05:06 libllama.so
    -rwxr-xr-x 1 root root 5092024 Aug  7 05:23 llama-server

That's a terrible excuse; llama.cpp is just ~7.5 megabytes. You can easily ship a couple of copies of that. The current Ollama for Windows download is 700MB.

I don't buy it. They're not willing to make a 700MB download roughly 30MB bigger (to ~730MB), but they are willing to support a fork/rewrite indefinitely (and the rewrite is outside their core competency, as the current issues show)? What kind of decision-making is that?


Sorry, I forgot to include this part in my comment:

If you include multiple versions of llama.cpp, and each of those versions depends on different GPU libraries, that could balloon the download size.

If these GPU libraries change rarely, then yes, you're correct: it might not be a problem.
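
To put rough numbers on it (hypothetically three pinned versions, using the ~7.5MB binary size and the ~700MB-of-GPU-libraries figure from elsewhere in this thread):

    # back-of-envelope, hypothetical: 3 pinned llama.cpp versions,
    # ~8MB per copy of the binaries, ~700MB of GPU support libraries
    echo "GPU libs shared:     $((700 + 3 * 8))MB"   # ~724MB
    echo "GPU libs duplicated: $((3 * 700))MB"       # 2100MB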


Well, llama.cpp requires CUDA 11 (from 2020) at minimum, or CUDA 12 (from 2022) if you need CUDA C++17 support.

It'll compile fine against the latest CUDA 12.8 or 12.9, but there's zero need to pack whatever the latest CUDA version is.
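
For what it's worth, a rough sketch of building against whatever toolkit you already have installed, assuming the current GGML_CUDA CMake option and a CUDA 12.4 install path (both may differ on your system):

    # build against the locally installed toolkit instead of bundling
    # whatever the newest CUDA release happens to be
    cmake -B build -DGGML_CUDA=ON \
          -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.4/bin/nvcc
    cmake --build build --config Release -j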


It’s 700MiB because they’re likely redistributing the CUDA libraries so that users don’t need to run that installer separately. llama.cpp is a bit more “you are expected to know what you’re doing” on that front. But yeah, you could plausibly ship multiple versions of the inference engine, although from a maintenance perspective that sounds like hell for any number of reasons.



