Hacker News

> I suspect complexity is necessary but not sufficient for consciousness to occur. I don't think that takes away from your suggestion that consciousness in AI systems is _possible_, but I don't think it's the case that it's an inevitable outcome if only we can make our systems sufficiently complex. There's probably something about the specific structure of the complex thing we'll need to master as well.

That's a very good argument, and I completely agree.

As much as it's faulty logic to dismiss AI as soulless machinery just because we know how it works, it's equally faulty to assume that scaling to ever more complex models will in itself create consciousness. At the very least, some mechanism of continuous self-modification seems necessary, so current fixed-point neural networks (whose weights are frozen after training) will most likely never be conscious.
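To make the distinction concrete, here's a toy sketch (my own illustration, not anything from the thread; all names are hypothetical). A deployed "fixed-point" network treats its weights as read-only at inference time, while a self-modifying one rewrites its weights as a side effect of processing input — here via a simplistic Hebbian-style update:

```python
def fixed_inference(weights, x):
    """Standard deployed network: weights are read-only at inference time."""
    return sum(w * xi for w, xi in zip(weights, x))

def self_modifying_inference(weights, x, lr=0.01):
    """Toy Hebbian-style update: processing the input also rewrites the
    weights, so future behavior depends on the system's own past activity."""
    y = sum(w * xi for w, xi in zip(weights, x))
    new_weights = [w + lr * y * xi for w, xi in zip(weights, x)]
    return y, new_weights

w = [0.5, -0.2]
assert fixed_inference(w, [1.0, 1.0]) == fixed_inference(w, [1.0, 1.0])
# fixed weights: same input, same output, forever

y, w2 = self_modifying_inference(w, [1.0, 1.0])
y_next, _ = self_modifying_inference(w2, [1.0, 1.0])
assert y_next != y  # the act of processing changed the processor
```

The point isn't that a Hebbian update buys you consciousness — it's that the fixed-weight case is a pure function of its input, while the self-modifying case has state that its own activity keeps reshaping.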




