Simon has been working on this technology for a while and writes about it prolifically. It's less that it "took long" for Simon to figure it out, and more that he came to realize others had misconceptions about how it works, which is what made a post on the subject worthwhile.
Things usually aren't black and white, unless of course they are bits...
Maybe they thought the users would understand how LLMs get trained.
Seems that's not the case, and some think the model could be trained instantly by their input, rather than much later, once a good amount of new training data has been collected.
In part I suspect that's down to prompting, which is somewhat obscured (on purpose) from most users.
"Prompts" use symbols within the conversation that's gone before, along with hidden prompting by the organisation that controls access to an LLM, as a part of the prompt. So when you ask, 'do you remember what I said about donuts' the LLM can answer -- it doesn't remember, but that's [an obscured] part of the current prompt issued to the LLM.
It's not too surprising that users are confused when purposeful deception is part of standard practice.
I can ask ChatGPT what my favourite topics are and it gives an answer ...