> If you ask ChatGPT how to use it, it will roleplay a character called “Assistant” from a counterfactual world where “how do I use Assistant?” has a single, well-defined answer. Because it is role-playing – improvising – it will not always give you the same answer. And none of the answers are true, about the real world. They’re about the fantasy world, where the fantasy app called “Assistant” really exists.
"Role playing" is a great analogy for what these models are doing. If an AI is asked a reasonable question it doesn't know the answer to, it won't go "I'm sorry, I'm not sure, let me go check," it performs improv by generating something that kinda feels like what the right answer might be
I'm not sure why it can't go check. I've tried to play with prompts like "if you're not sure, respond with a Google search query instead, and I'll paste the top result."
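For what it's worth, that "reply with a search query if you're unsure" loop is easy to wire up by hand. A rough sketch, assuming the OpenAI Python SDK; `top_search_result()` is a hypothetical stand-in for whatever search backend you'd use, and the model name and prompt wording are just placeholders:

```python
# Minimal sketch of the "emit a search query when unsure" prompt idea above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "Answer the user's question. If you are not sure of the answer, "
    "reply with exactly: SEARCH: <a Google search query> and nothing else."
)

def top_search_result(query: str) -> str:
    # Hypothetical helper: swap in a real search API here.
    raise NotImplementedError

def ask(messages: list[dict]) -> str:
    # One round-trip to the model; model name is just a placeholder.
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

def answer(question: str) -> str:
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": question},
    ]
    reply = ask(messages)
    if reply.startswith("SEARCH:"):
        # The model asked to "go check": paste the top result back in and ask again.
        snippet = top_search_result(reply.removeprefix("SEARCH:").strip())
        messages += [
            {"role": "assistant", "content": reply},
            {"role": "user", "content": f"Top result:\n{snippet}\nNow answer the question."},
        ]
        reply = ask(messages)
    return reply
```

In practice the hard part isn't the plumbing, it's getting the model to actually say "SEARCH:" when it doesn't know, rather than improvising an answer anyway.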
It feels like these language models could be a lot smaller if they knew how to Google like us.
An AI being "right" just means the output of one of its roleplaying sessions happens to be a true statement. AIs can have a version of "confidence", but it's not how true it thinks its statements are; it's how well it thinks it's role-playing.
"Role playing" is a great analogy for what these models are doing. If an AI is asked a reasonable question it doesn't know the answer to, it won't go "I'm sorry, I'm not sure, let me go check," it performs improv by generating something that kinda feels like what the right answer might be