Ask HN: Is there any way to stop ChatGPT from lying?
6 points by replwoacause on Jan 7, 2023 | 5 comments

I spent hours feeding ChatGPT documentation on a programming language framework and providing it with detailed instructions for answering questions. I specifically told it to only provide an answer if it had learned the information during our conversation, and to reply "I haven't learned about that from you yet" if it was not familiar with the topic. Despite these instructions and numerous reminders, it continued to generate false information that had no relation to our conversation. It appeared to be inferring or extrapolating, but ended up fabricating nonsense that seemed accurate but was actually incorrect. Is it possible to prevent this behavior through prompts, or will ChatGPT always mix false information with true information?
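For reference, here is roughly what that setup looks like if you move it from the ChatGPT web UI to the OpenAI API: paste the documentation into the system prompt and add the refusal instruction. This is only a minimal sketch; the model name, file name, and exact wording of the instruction are illustrative, and as the thread notes, the instruction is not reliably obeyed.

    from openai import OpenAI  # assumes the openai Python package (v1+) is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical file holding the framework documentation the question describes pasting in.
    framework_docs = open("framework_docs.txt").read()

    SYSTEM_PROMPT = (
        "Answer ONLY from the documentation provided below. "
        "If the documentation does not cover the question, reply exactly: "
        "'I haven't learned about that from you yet.'\n\n"
        "--- DOCUMENTATION ---\n" + framework_docs
    )

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model name
            temperature=0,          # lower temperature reduces, but does not stop, fabrication
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("How do I register a custom route in this framework?"))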
So while you can get it to label things and do various kinds of logical evaluation, it's almost impossible to get it to preemptively scrutinize its own output. It will scrutinize its prior output and correctly identify contradictions, mistakes, and irrational inferences... and then it will go right ahead and repeat them.
It's easy to get it to give answers containing two sentences that wholly contradict each other, and subsequently to identify the contradiction and what is illogical about it. But giving it standing orders or negative orders (like 'do not suggest $incorrect_answer') generally doesn't work, so chain-of-reasoning exercises become very frustrating.
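The two-pass pattern described above can be scripted against the API: generate an answer, then ask the model to critique that answer against the supplied context. A rough sketch only, reusing the `client` from the earlier snippet; the function name, model name, and prompt wording are made up for illustration, and in practice the critique pass catches errors that the next answer often repeats anyway.

    def answer_then_critique(question: str, context: str) -> tuple[str, str]:
        # First pass: answer the question using only the provided context.
        draft = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative
            messages=[
                {"role": "system", "content": "Use only this context:\n" + context},
                {"role": "user", "content": question},
            ],
        ).choices[0].message.content

        # Second pass: ask the model to scrutinize its own prior output.
        critique = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Use only this context:\n" + context},
                {"role": "user", "content": question},
                {"role": "assistant", "content": draft},
                {"role": "user", "content":
                    "List any claims in your previous answer that are not supported "
                    "by the context, and any internal contradictions."},
            ],
        ).choices[0].message.content

        return draft, critique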