
How do you decide which layers are the important ones?


I wrote about it (roughly) in the blog and linked some papers! I also wrote about it here - https://unsloth.ai/blog/dynamic-4bit - one has to inspect the activation and weight quantization errors!
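To give a rough picture of what "inspecting the quantization errors" means, here's a minimal sketch (not our actual code - quantize_fn / nf4_roundtrip and calib_acts are placeholder names): quantize each layer's weights, run some calibration input through it, and compare the output against full precision. Layers with the largest relative error are the ones left in higher precision.

    # Rough sketch only - quantize_fn (e.g. an NF4 quantize+dequantize
    # round trip) and the calibration inputs are placeholders.
    import torch

    def layer_quant_error(layer, calib_input, quantize_fn):
        """Relative change in a layer's output when its weights are quantized."""
        original = layer.weight.data.clone()
        with torch.no_grad():
            ref_out = layer(calib_input)                # full-precision output
            layer.weight.data = quantize_fn(original)   # 4-bit round trip
            quant_out = layer(calib_input)              # output after quantization
        layer.weight.data = original                    # restore the weights
        return ((quant_out - ref_out).norm() / ref_out.norm()).item()

    # errors = {name: layer_quant_error(m, calib_acts[name], nf4_roundtrip)
    #           for name, m in model.named_modules()
    #           if isinstance(m, torch.nn.Linear)}
    # keep_in_16bit = sorted(errors, key=errors.get, reverse=True)[:k]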


So you are basically looking at "fMRI" of the "brain" while it's doing a wide range of things and cutting out the things that stay dark the most?


Oh that's a good analogy! Yes that sounds right!


> The key reason to use Unsloth quants is because of our deep involvement in fixing critical bugs across major models

sounds convincing, eh ... /s

On a less cynical note, the approach does look interesting, but I'd also like to understand how and why it works, if it works at all.


Oh we actually fixed bugs! We fixed a few bugs in Gemma - see https://news.ycombinator.com/item?id=39671146, a gradient accumulation bug see https://news.ycombinator.com/item?id=41859037, Phi bugs, Llama bugs and more! See https://unsloth.ai/blog/reintroducing for more details!


What does your approach with dynamic weights have to do with those bugs? All those bugs seem unrelated to the technique.


Oh apologies I got confused - it's because when we calculate our dynamic quants, we have to do it on the fixed model!

For example, in Phi 3 the end-of-sentence token was wrong - if we used it, our quants would be calibrated incorrectly, since chatting with the model uses the actual correct token.

Another is Llama 4 - https://github.com/ggml-org/llama.cpp/pull/12889 - where I fixed a RoPE issue. If we hadn't fixed it first, the calibration process would again be incorrect.
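To make the calibration point concrete, a toy illustration (hypothetical code, not our pipeline - collect_activations is a placeholder): the calibration prompts go through the tokenizer's chat template, so a wrong end-of-sentence token or broken RoPE means the activations we measure come from sequences the model never sees in real chats, and the per-layer error estimates get skewed.

    from transformers import AutoTokenizer

    # Phi-3 instruct tokenizer used purely as an example model here.
    tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
    messages = [{"role": "user", "content": "Explain quantization in one sentence."}]

    # The chat template inserts the model's special tokens (end-of-turn /
    # end-of-sentence). If those ids are wrong in the model files, the
    # calibration runs on token sequences that differ from real chats,
    # so the measured quantization errors point at the wrong layers.
    calib_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
    # calib_acts = collect_activations(model, calib_ids)  # placeholder helper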


Ok, so this means your approach doesn't work without first applying those fixes to the vanilla models. What I'm trying to understand is the approach itself. How and why does it work?




