
We don't know how to make safe AI systems, so we should be more cavalier and anything-goes than if we did? Madness.


We don't even know whether there is any danger. Every claim of danger so far falls somewhere between science-fiction storytelling and anthropomorphizing AI into some kind of god. The equivalent of "if the bridge collapses, people can get hurt", i.e. a real, quantifiable danger, is sorely lacking here.



