
1. In-context learning is a thing.
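A minimal sketch of what in-context learning looks like at the prompt level: demonstrations are concatenated ahead of the query, and the model completes the pattern with no weight updates. The function and example labels here are hypothetical, just to show the shape.

```python
def few_shot_prompt(examples, query):
    """Build an in-context-learning prompt: a few input/output
    demonstrations followed by the new query, left for the model
    to complete."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    [("great movie", "positive"), ("waste of time", "negative")],
    "loved every minute",
)
```

The resulting string would be sent as-is to any completion-style model; the "learning" happens entirely within the forward pass over this prompt.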

2. You might need only several hundred examples for fine-tuning. (OpenAI's minimum is 10 examples.)
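To make the scale concrete, here is a sketch of a tiny training set in the chat-style JSONL format that OpenAI's fine-tuning API accepts (one JSON object per line, each with a "messages" list). The task and contents are hypothetical; a real dataset would need at least the 10-example minimum mentioned above.

```python
import json

# Hypothetical two-example dataset in OpenAI's chat fine-tuning
# JSONL format: one JSON object per line, each holding a
# "messages" conversation the model should learn to reproduce.
examples = [
    {"messages": [
        {"role": "user", "content": "Translate to French: hello"},
        {"role": "assistant", "content": "bonjour"},
    ]},
    {"messages": [
        {"role": "user", "content": "Translate to French: goodbye"},
        {"role": "assistant", "content": "au revoir"},
    ]},
]

# Write the dataset as train.jsonl, the file you would upload.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```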

3. I don't think research into fine-tuning efficiency has exhausted its possibilities. Fine-tuning just isn't a very hot topic, given that general models work so well. In image generation, where it matters, they quickly got to the point where 1-2 examples are enough. So I wouldn't be surprised if doc-to-model becomes a thing.


