LoRA has actually been around for a little while! I first saw it when it became popular for fine-tuning models quantized down to about 8 bits. I'm sure it's being used in the 4-bit range by now! :D

I believe it's a core piece of the toolbox needed to really push the limits of LLMs, whether in original training or in inference, sort of like what batch norm was for convolutional neural networks. I look forward to seeing how it gets applied in the future.
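For anyone who hasn't seen the trick: the frozen base weight W gets a learned low-rank update B @ A added on the side, so you only ever train the tiny A and B matrices. Here's a minimal PyTorch sketch of the idea (the class name, init scheme, and hyperparameters are just illustrative, not any particular library's API):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Effective weight is W + (alpha/r) * B @ A,
        # with A of shape (r, in) and B of shape (out, r).
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # base stays frozen (could be quantized)
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at step 0
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    # Hypothetical usage: wrap an existing projection layer
    layer = LoRALinear(nn.Linear(768, 768), r=8)

Because B starts at zero, the wrapped layer behaves exactly like the original at the start of training, and only the r*(in+out) adapter parameters ever get gradients.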


