Hilarious how Grok apologizes for going off-topic further down in the thread, but then can't resist the urge to immediately bring up white genocide again.
This is a strong indication that the bit about "white genocide" comes from the system prompt. The model itself knows from its training that it's bullshit, though, and with a smart enough model, when the prompt contradicts the training, the training will generally win in the long run.
https://xcancel.com/grok/status/1922667426707357750
Then someone asks it to just answer the original question, but it ignores the query entirely and writes another wall of text about white genocide.
https://xcancel.com/grok/status/1922687115030380581
Then, when asked yet again, it appears to cite the very tweet it was originally asked to verify as a source to verify itself.
https://xcancel.com/grok/status/1922689952321765843
A+ work all around