I think that's true with known optical illusions, but there are definitely times when we're fooled by the limitations of our ability to perceive the world, and that leads people to argue for their potentially false version of reality.
A lot of times people cannot fathom that what they see is not the same thing as what other people see or that what they see isn't actually reality. Anyone remember "The Dress" from 2015? Or just the phenomenon of pareidolia leading people to think there are backwards messages embedded in songs or faces on Mars.
"The Dress" was also what came to mind for the claim being obviously wrong. There are people arguing to this day that it is gold even when confronted with other images revealing the truth.
It has not learned anything. It just looks in its context window for your answer.
In a fresh conversation it will most likely make the same mistake again. There is some randomness involved, and most LLM-based assistants also stash some context and share it between conversations.
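As a rough sketch only (not any particular vendor's actual implementation), the "memory" people notice usually amounts to re-feeding text into the prompt, either from the current context window or from a stashed note, rather than any change to the model's weights:

```python
# Hypothetical illustration: a correction only "sticks" because it is
# re-injected as text into later prompts, not because the model learned.

memory_store = []  # hypothetical cross-conversation stash ("memory" feature)

def remember(note):
    # What looks like "learning from feedback" is often just appending
    # a note like this; the model weights stay frozen.
    memory_store.append(note)

def build_prompt(conversation, user_message):
    # Everything the assistant "knows" about you arrives as plain text.
    stashed = "\n".join(memory_store)
    history = "\n".join(conversation)
    return f"{stashed}\n{history}\nUser: {user_message}\nAssistant:"

remember("User corrected me: the capital of Australia is Canberra, not Sydney.")
print(build_prompt(["User: hi", "Assistant: hello"],
                   "What is the capital of Australia?"))
```

Start a conversation where that stash is empty (or not shared) and the same mistake can come right back.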
Hypothetically that might be true, but current systems do not do online learning. Several recent models have knowledge cutoffs that are more than six months in the past.
It is unclear to what extent user data is trained on, and it is not clear whether training on user data can yield meaningful improvements in correctness. User data might be inadvertently incorrect, and it may also be adversarial, with people deliberately trying to sneak bad things in.