
ai doesn't kill people, ai companies kill people


Well, in this case, since people are killing themselves after talking to the AI, it's people who are actually killing people. The AI company and the AI kill no one, so surely they must not be responsible at all for this.


“responsibility” isn’t a boolean, at least in this human’s experience.

there are different degrees of responsibility (and accountability) for everyone involved. some are smaller, some are larger. but everyone shares some responsibility, even if it is infinitesimally small.


Would you say an AI researcher working on LLMs today is as responsible for how AI is being deployed as the developers/engineers who initially worked on TCP and HTTP are for the state of the internet and web today?

I don't have any good answer myself, but I'm eager to hear what others think.


it’s not for me to judge someone else’s degree of responsibility really, that’s up to each individual to do for themselves.


A quick search shows me (disclosure: I think this came from the duck.ai search summary rather than any article):

> TCP and HTTP protocols were primarily developed with funding and support from government agencies, particularly the U.S. Department of Defense and organizations like ARPA, rather than by non-profit entities. These protocols were created to facilitate communication across different computer networks

So um... yea?


Is that to say they are, or aren't, responsible for what their technology is being used for?

So take the people who specified, implemented, and deployed TCP and HTTP: should they be held responsible for aiding the transmission of child pornography across international borders, for example?


No, sorry. If you meant should they be held liable, I presume not.

I was just pointing out that information because I had thought HTTP was created by non-profits or similar, but it was HTML that was created at CERN.

That being said, coming to the point: no, I don't think this should be the case for the people who specified TCP/HTTP.

But I also feel an AI researcher and the people who specified TCP are in different categories, because AI researchers work directly for AI companies whose products are then deployed; in a way, their own employer is facilitating this partially thanks to their help.

On the other hand, the people who specified open protocols have no relation whatsoever comparable to the AI company model, perhaps.

I am not sure, there is definitely nuance, but I would consider AI researchers to be more responsible than the people who created the specification of TCP/HTTP, as an example.


And more generally, people kill people. And people help people.

Tools are tools. It is what we make of them that matters. AI on its own has no intentions, but questions like these feed into the belief that there is already an AGI with an agenda of its own, waiting to build Terminators.


A tool that kills its user during normal use is usually recalled


Yeah, that's probably true. But is that what happens whenever you use an LLM? Does it try to kill you or ask you to kill yourself? I've been using LLMs on and off for about 2-3 years now, and not a single time has one told me to kill myself, or anyone else for that matter.


Do you believe this will still hold for a self-training robot with agency?


That's their Fifth Amendment right.



