I was about to suggest "using calibrated models," but then I found this interesting paper:

Calibrated Language Models Must Hallucinate

https://arxiv.org/abs/2311.14648

https://www.youtube.com/watch?v=cnoOjE_Xj5g
