Isn't it even a bit interesting that GP has tried it every time something new has come out and not once gotten the expected answer? Not only that, but it returns the wrong titles even though Search, for everyone else, uses the exact Wikipedia link given in the comment as its source?
LLMs are run with variable output, sure, but it's particularly odd if GP used the search product, since in that case it doesn't have to pull the facts from the model itself. If GP had posted a link to the actual chat rather than a bare link to chatgpt.com (???), I'd be interested in seeing whether Search was even used, as that would at least explain where such variance in the output came from. Instead we're all just speculating about what might or might not have happened.