
I will take an example that Eliezer has used and explain why I think he is wrong: AlphaGo. Eliezer used it as an example where the AI just blew past humanity really quickly, and extrapolated from it to how an AGI will do the same.

But here is the thing: AlphaGo and subsequent AIs didn’t invalidate prior human Go knowledge at all; most of what was figured out and taught is still correct. There are changes at the margins, but arguably humans were on track to discover them anyway. There are corner sequences that are truly unusual, but the big picture of playing style and game ideas was already heading in a similar direction.

And it matters because things like nanotech are hard. Building stuff at scale is hard. Building factories at scale is hard. Just because a superintelligent being exists doesn’t mean it becomes a genie. Just imagine how much trouble we have with distributed computing: how would a cluster of computers give rise to a singularity of an AI? And if the computing device has to be roughly the size of a human brain, there is a high chance it hits the same limits as our brain.



I mean, I think his point there was "there is plenty of room for systems to be far, far more capable than humans in at least some problem domains." But yeah, Eliezer's FOOM take does seem predicated on the bitter lesson[1] not holding.

To the extent I expect doom, I expect it'll look more like this[2].

[1] http://incompleteideas.net/IncIdeas/BitterLesson.html

[2] https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-...



