> the only factor in that transition was the complexity of the nervous system
It seems likely to me that some arrangement of nerves is possible that's comparably complex to ours, but does not produce consciousness. (I dunno, maybe some organism with much more complex sensory organs than ours that devotes so much of its complexity budget to them that what's left over for general cognition only gives it the intelligence of a mushroom, who knows.) In other words: I suspect complexity is necessary but not sufficient for consciousness to occur. I don't think that takes away from your suggestion that consciousness in AI systems is _possible_, but I don't think it's the case that it's an inevitable outcome if only we can make our systems sufficiently complex. There's probably something about the specific structure of the complex thing we'll need to master as well.
Animals are conscious, yes? They may not be as intelligent as humans, but they still perceive their environment, have internal drives/desires, make decisions, play, plan routes, solve mazes/puzzles, hunt, have some forms of language-like communication, sometimes use tools, exploit their surroundings, learn new things, cooperate/work in groups and so on.
If one built an AGI that was at the intelligence level of, say, a rat or mouse, how would one go about proving it had the same capacity for consciousness as that rat or mouse?
Can we have certain knowledge of whether or not they're conscious? Unfortunately, no. We can't compare what we cannot measure, and we haven't found any way to measure consciousness directly.
When AI passes all possible tests that could distinguish it from a rat, the question becomes whether or not consciousness is necessary for all those rat-like capabilities we tested for. And if not, then why do rats have consciousness?
I personally don't like unfinished stories, so I believe it is necessary - that consciousness is just a side-effect of matter performing some complex computation. It wraps the theory up nicely with a little bow on the top.
> I suspect complexity is necessary but not sufficient for consciousness to occur. I don't think that takes away from your suggestion that consciousness in AI systems is _possible_, but I don't think it's the case that it's an inevitable outcome if only we can make our systems sufficiently complex. There's probably something about the specific structure of the complex thing we'll need to master as well.
That's a very good argument, and I completely agree.
Just as it's faulty logic to reduce AI to soulless machinery because we know how it works, it's also faulty logic to assume that scaling to more and more complex models will in itself create consciousness. At the very least, some mechanism of continuous self-modification seems necessary, so current fixed-point neural networks most likely will never be conscious.