Existential risk from superintelligent AI, naturally. Even if it turns out not to be a risk, I'd sleep a lot easier at night with an ironclad mathematical proof, hot off the presses, of exactly why it isn't. And if it is a risk, it's almost certainly the most important thing anyone could work on right now.

I'm interpreting "can't fail" here as "guaranteed to succeed one way or another".


