
I don’t disagree with the “AI will upend the world so we have to prepare” part; it’s the “AI will kill everyone” part that I take issue with.

And your mob boss example is a good reason why: it doesn’t extrapolate that far. There is no case where a mob boss, or a disabled Hitler for that matter, can kill everyone and end humanity.



The mob boss analogy breaks down when they need assistance from other humans to do stuff. To the extent that an AI can build its own supply chains, that doesn't apply here. That may or may not be a large extent, depending on how hard it is to bootstrap something which can operate independently of humans.

The extent to which it's possible for a very intelligent AI with limited starting resources to build up a supply chain which generates GPUs and enough power to run them, and disempower anyone who might stop it from doing so (not necessarily in that order), is a matter of some debate. The term to search for is "sharp left turn".

I am, again, pretty sure that's not the scenario we're going to see. Like at least 90% sure. It's still fewer 9s than I'd like (though I am not with Eliezer in the "a full nuclear exchange is preferable" camp).


I will take an example that Eliezer has used and explain why I think he is wrong: AlphaGo. Eliezer used it as an example where the AI just blew past humanity really quickly, and extrapolated from it to how an AGI will do the same.

But here is the thing: AlphaGo and subsequent AIs didn’t make prior human knowledge wrong at all; most of what had been figured out and taught is still correct. There are changes at the margin, but arguably humans were on track to discover them anyway. There are corner sequences that are truly unusual, but the big picture of playing style and game ideas would have ended up looking much the same.

And it matters because things like nanotech are hard. Building stuff at scale is hard. Building factories at scale is hard. And just because there is a superintelligent being doesn’t mean it becomes a genie. Just imagine how much trouble we have with distributed computing: how would a cluster of computers give rise to a singularity of an AI? And if the computing device has to be the size of a human brain, there is a high chance it hits the same limits as our brain.


I mean I think his point there was "there is plenty of room for systems to be far, far more capable than humans in at least some problem domains". But yeah, Eliezer's FOOM take does seem predicated on the bitter lesson[1] not holding.

To the extent I expect doom, I expect it'll look more like this[2].

[1] http://incompleteideas.net/IncIdeas/BitterLesson.html

[2] https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-...



