
That's an edge case; this case is ChatGPT working as intended.


Exactly. That might be something interesting to think about. Humans make mistakes. LLMs make mistakes.

Yet for humans, we have built a society that prevents these mistakes except in edge cases.

Would humans make these mistakes as often as LLMs do if there were no consequences?



