All current LLMs openly make simple mistakes that no system doing true "reasoning" (in the sense any human would have used that term years ago) could make.
And yet, if you showed the raw output of, say, QwQ-32B to any engineer from 10 years ago, I suspect they would be astonished to hear that it doesn't count as "true reasoning".
I feel like I'm taking crazy pills sometimes.