Everything about this sucks. These companies need to do better at detecting, refusing, redirecting, and preventing harmful chats. They need to offer this to anyone using their APIs to build products, too.
And that all also sucks. I don't trust these companies one bit to be monitoring all of these chats. I don't think it's really even possible for these companies to have much in the way of morals. So they also need to NOT do any of that.
And then there's the issue of reporting to authorities. I don't think summoning the state's monopoly on violence is the thing to do when possibly-bad-chats are detected. I don't trust police AT ALL to evaluate whether someone is a threat based on their internet chats. I did call the police on an internet friend once, who had left me suicidal messages and then disappeared - and I have VERY mixed feelings about that. I didn't know any other way to get someone to try to get to him. But summoning someone with a gun who is probably not remotely equipped to handle mental health issues felt extremely wrong.
Coming back to LLMs and what these companies should do - I think even more fundamentally -- and less likely to happen -- chatbots need to not present as human, not present as a source of truth beyond a sometimes-wrong encyclopedia, and NOT play the role of an echo chamber that feels like someone else is on the line with you when really it just lets you spiral in a feedback loop with just yourself and random noise.
I love this technology and yet I am tempted to say, shut it all down. Of course, that won't happen. But it is how I feel at times.