I think it’s a silly take. Companies want to avoid bad PR, and people having schizophrenic episodes with ChatGPT is bad PR.

There are plenty of legitimate purposes for weird psychological explorations, but there are also a lot of risks. There are people giving their AI names and considering them their spouse.

If you want completely unfiltered language models, there are plenty of open-source options you can use.

No one blames Cutco when some psycho with a knife fetish stabs someone. There’s a social-programming aspect here that we are engaging with, where we are collectively deciding if and where to point a finger. We should clarify for folks what these LLMs are, and let them use them as-is.


