The most interesting thing is that a bunch of apparently smart people keep treating LLMs as if they were truly capable of something resembling problem-solving or logical thought, or even "understanding" a question.
It should surprise no one that they're bad at string processing, since they operate on tokens rather than individual characters, and yet.
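A minimal sketch of why, assuming the tiktoken package and its cl100k_base encoding purely for illustration: the model is handed sub-word token IDs, so the individual letters are never directly visible to it.

    # Illustration only: a BPE tokenizer splits text into sub-word chunks,
    # so a model trained on these IDs never "sees" individual letters.
    # Assumes the tiktoken package is installed (pip install tiktoken).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    word = "strawberry"
    token_ids = enc.encode(word)

    # Decode each token separately to see the chunks the model actually receives.
    chunks = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]
    print(token_ids)  # a handful of integer IDs, not ten characters
    print(chunks)     # e.g. ['str', 'aw', 'berry'] -- letter counts aren't directly visible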
We already know it knows nothing, but this doesn't verify that. I could produce a similar response myself, even though I know perfectly well you can't crack Enigma in O(n) time.