LLMs emulate language by following intricate links between tokens. They are not meant to emulate memory or imagination; they just transform one list of tokens into another, generating language. And language is such a huge part of the intelligence puzzle that it looks smart to people despite being quite mechanical.
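To make that concrete, here's a toy sketch of the purely mechanical loop: a list of tokens goes in, one predicted token at a time gets appended, and a longer list comes out. The `next_token` function is a hypothetical stand-in for a real model's prediction step, not an actual LLM.

```python
# Toy autoregressive loop: a list of tokens in, a longer list of tokens out.
# `next_token` is a hypothetical stand-in for a trained model's prediction step.

def next_token(context: list[str]) -> str:
    # A trivial lookup rule instead of a trained model.
    bigrams = {"the": "cat", "cat": "sat", "sat": "down"}
    return bigrams.get(context[-1], "<eos>")

def generate(prompt: list[str], max_new: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new):
        tok = next_token(tokens)  # looks only at the token list; no memory, no inner state
        if tok == "<eos>":
            break
        tokens.append(tok)
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```

Nothing in that loop remembers or imagines anything; the "intelligence" people perceive comes entirely from how good the prediction step is.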
A next step could be to create a mind, with a component that works similarly to the parietal lobe to give it a sense of self or of temporal existence.
> it looks smart to people despite being quite mechanical
Note that brains themselves are also "quite mechanical", as is any physical system or piece of software. "Looks smart", in the limit, reduces to "is smart".
Brains have a lot more mechanisms that give rise to emergent behavior, what with all the adaptive organic layers, so I can't really compare the two one-to-one.