
I might be on board with this if the things they hope to accomplish were tractable at all, let alone within six months. The concepts in the letter are basically ill-defined puffery.

For example, take the word “safe”. The letter calls for making AI that is “safe”. Great idea, but what does that mean? Safe at all costs? Safe within certain bounds? Who gets to decide? What do I do if I disagree? They should start asking people tomorrow if they hope to reach a consensus on the goal of safety, let alone realize it. Needless to say, no such consensus exists.


