
For smaller values of doom. The one he's talking about is unaligned AGI doing to humans what humans did to the Xerces blue: driving it extinct incidentally, as a side effect of pursuing our own goals, not out of malice.


LLMs will never be AGI


I see only two outcomes at this point: LLMs evolve into AGI, or they evolve into something perceptually indistinguishable from AGI. Either way the result is the same, and we're just arguing semantics.


Explain how a language model can “evolve” into AGI.


It's like saying an 8086 will never be able to render photorealistic graphics in real time. LLMs fuel the investment in technology and research that will likely lead there.



