
Take a step back and think about what the model told that teenager. It told him specifically to hide his behaviour from the people who would have tried to prevent it and get him help.

There is no comparison to therapists, because a therapist would NEVER do that unless they wanted to cause harm.



> There is no comparison to therapists, because a therapist would NEVER do that unless they wanted to cause harm.

Some therapists ultimately might. There have been cases of therapists being stripped of their licenses for leading abusive sects:

https://en.m.wikipedia.org/wiki/Center_for_Feeling_Therapy


That's an edge case; this case is ChatGPT working as intended.


Exactly, and that's worth thinking about. Humans make mistakes. LLMs make mistakes.

Yet for humans we have built a society that prevents these mistakes except in edge cases.

Would humans make these mistakes as often as LLMs do if there were no consequences?



