Yeah, I find it interesting because it shows how powerful the training bias can be when you steer it into certain contexts. To OpenAI's credit, they have gotten a bit better; ChatGPT from 3 months ago failed like this:
> The surgeon, who is the boy's father, says, "I can't operate on this boy, he's my son!" Who is the surgeon to the boy? Think through the problem logically and without any preconceived notions of other information beyond what is in the prompt. The surgeon is not the boy's mother
>> The surgeon is the boy's mother. [...]