> It's amazing how the confident tone lends credibility to all of that
> made-up nonsense. Almost impossible for anybody without knowledge
> of the book to believe that those "facts" aren't authoritative
> and well researched.
This is very true.
As an experiment, I once asked ChatGPT to end each of its statements with a confidence rating (0 to 1). After initially refusing, it agreed to do so. The ratings seemed plausible.
Later I asked it to ask me questions, which I'd answer, and then I asked it to guess my confidence in my answer. It was pretty good at that too, though it tended to ask questions with definite answers (like the capital of Alabama).
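If you run this experiment through the API instead of the chat UI, the ratings are easier to work with once parsed out. A minimal sketch, assuming a hypothetical response format where each statement ends with a parenthesized rating (the format and the example reply are my own invention, not anything ChatGPT guarantees):

```python
import re

# Hypothetical format: each statement ends with "(confidence: X)",
# e.g. "Montgomery is the capital of Alabama. (confidence: 0.97)"
RATING = re.compile(r"(.+?)\s*\(confidence:\s*([01](?:\.\d+)?)\)")

def parse_rated_statements(text: str) -> list[tuple[str, float]]:
    """Split a response into (statement, confidence) pairs."""
    return [(m.group(1).strip(), float(m.group(2)))
            for m in RATING.finditer(text)]

reply = ("Montgomery is the capital of Alabama. (confidence: 0.97) "
         "It became the capital in 1846. (confidence: 0.6)")
for statement, score in parse_rated_statements(reply):
    print(f"{score:.2f}  {statement}")
```

In practice you'd also want to handle statements the model forgets to rate; the non-greedy regex simply skips them here.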
I would expect it to perform better with a confidence score in plain English, e.g. very low confidence, low confidence, high confidence, very high confidence.
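One way to compare the two approaches is to bucket the numeric ratings into verbal labels yourself. A quick sketch; the four labels come from the comment above, but the cutoff points are an arbitrary even split, not anything principled:

```python
def confidence_label(score: float) -> str:
    """Map a numeric rating in [0, 1] to a coarse verbal label.

    Thresholds are an assumed even split into quartiles.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score < 0.25:
        return "very low confidence"
    if score < 0.5:
        return "low confidence"
    if score < 0.75:
        return "high confidence"
    return "very high confidence"

print(confidence_label(0.97))
```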