Hacker News

But this makes sense, since humans are biased toward, e.g., picking the first option from a list. If an LLM was trained on this data, it makes sense for the model to share the biases of the humans who produced that training data.
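One common way to work around such first-position bias, whatever its origin, is to shuffle the option order across repeated queries and tally votes per option, so position stops correlating with content. A minimal sketch (the `ask_model` callable is a hypothetical stand-in for a real model call):

```python
import random

def debiased_choice(options, ask_model, trials=300, seed=0):
    """Counter position bias by shuffling option order across
    repeated queries and tallying votes per option (by content,
    not by position). Returns (winner, vote_counts)."""
    rng = random.Random(seed)
    votes = {opt: 0 for opt in options}
    for _ in range(trials):
        shuffled = list(options)
        rng.shuffle(shuffled)
        picked = ask_model(shuffled)  # must return one of the options
        votes[picked] += 1
    winner = max(votes, key=votes.get)
    return winner, votes

# Toy stand-in: a model that always picks whatever option is shown first.
biased_model = lambda opts: opts[0]

# Under shuffling, a purely position-biased model spreads its votes
# roughly evenly, so no option wins on position alone.
winner, votes = debiased_choice(["A", "B", "C"], biased_model)
```

This doesn't remove the bias from the model itself; it only prevents the bias from systematically favoring one particular answer in aggregate.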

