This whole thing isn't really going that well. From what I can tell, 20 years ago it was pretty common to think that even if you had a "friendly" AI that didn't need to be boxed, you still didn't let anyone else do anything with it!

The point of the AI being "friendly" was that it would stop and let you correct it. You still needed to keep anyone else from "correcting" it into doing something bad!
