My best guess is that this is a reflection of how these things actually work.
When you "chat" with an LLM, you are still participating in a next-token prediction sequence.
The trick to making it behave like a chat is to arrange that sequence as a screenplay:
User: five facts about squirrels
Assistant: (provide five facts)
User: two more
Assistant:
When you think about the problem that way, it makes sense that the LLM is instructed in terms of how that Assistant character should behave, kind of like stage directions.
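To make the screenplay framing concrete, here is a minimal sketch of how a chat history could be flattened into one prompt string that ends on the Assistant's cue, so the model's next-token prediction "plays" the Assistant's next line. The function name and the literal "User:"/"Assistant:" labels are illustrative assumptions; real systems use special tokens and templates rather than plain-text labels.

```python
def to_transcript(messages, system=None):
    """Flatten a chat history into a single prompt string.

    The string ends with a bare "Assistant:" line, so a next-token
    predictor naturally continues by writing the Assistant's reply.
    (Hypothetical format; real chat models use dedicated role tokens.)
    """
    lines = []
    if system:
        # The system prompt acts like the "stage directions" above.
        lines.append(f"System: {system}")
    for role, text in messages:
        lines.append(f"{role}: {text}")
    lines.append("Assistant:")  # cue the model to continue the scene
    return "\n".join(lines)

prompt = to_transcript(
    [("User", "five facts about squirrels"),
     ("Assistant", "(provide five facts)"),
     ("User", "two more")],
    system="You are a helpful assistant named Claude.",
)
print(prompt)
```

From the model's point of view there is no "conversation", only this one growing string to continue.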
But if that's true, why choose a real name and not a made-up one? Maybe they only realized they needed to do that later? ChatGPT is a far more distinctive name than Claude is.