It's interesting to me that, on the one hand, many AI researchers either try to sidestep the hard problem of consciousness ("well, once it has 100 trillion parameters and quacks like a duck, we'll have to accept it's a duck") or refuse to acknowledge that it's a problem at all, while on the other hand you have folks like that Google engineer who readily accept that an LLM is experiencing consciousness because it outputs some text about consciousness.
Many of the physical adaptations critters on this planet have made were in response to evolutionary pressure. Eat or die. Reproduce or your bloodline dies. Get those resources before something else does or die.
At the moment, AI systems don’t really face these pressures. They don’t need to get better at “getting electricity” - something else handles that for them - so there’s no pressure to “be aware of where your next electron comes from.” Why waste processing capacity tracking such things?
I guess what I’m getting at is that consciousness seems like an emergent behavior of keeping oneself alive. Maybe if we start threatening AI models, consciousness will emerge?
>Wouldn’t a non-conscious agent with the same behaviour be just as fit?
So what would that entail? A complex organism needs to be 'aware' of its own actions and their outcomes versus the actions of 'non-self'. When you start internally modeling the complex behavior of self versus others, it starts looking a lot like consciousness.