
> some of the cutting edge local LLMs have been a little bit slow to be available recently

You can pull models directly from Hugging Face:

    ollama pull hf.co/google/gemma-3-27b-it
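Note this only works for repos that actually host GGUF files, and you can pin a specific quantization with a tag. A sketch of that form below; the repo name is just an illustrative example, not an endorsement of a particular quant:

    # repo and tag are illustrative; any Hugging Face repo with GGUF files works
    ollama pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M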



I know, and I often do that, but it's still not enough. E.g. SmolLM3, which required some llama.cpp tweaks, wouldn't work via GGUF for the first week after it was released.

Just checked: https://github.com/ollama/ollama/issues/11340 is still open.
