
I wonder about the implications of "SpookGPT" hallucinating minor details and facts that then get into reports, which are later summarized again, until it all snowballs into some catastrophic set of decisions.


"Iraqis are beating people, bombing and shooting. They are taking all hospital equipment, babies out of incubators. Life-support systems are turned off. ... They are even removing traffic lights."

https://en.wikipedia.org/wiki/Nayirah_testimony


It's not like this doesn't happen without AI. Thinking of dossiers that have been made infamous in the news. These are things that humans have created based on their interpretation of data. Sometimes they are so outlandish that they also sound like hallucinations. Can I sue OpenAI if their chatbot says there are pee tapes about me?


As someone who writes these sorts of GPT instructions all the time: if the person writing the prompts isn't a muppet, they're getting GPT to cite snippets of the original text as evidence for summary points. That reduces hallucinations and at the same time gives you a way to verify inferred summaries. You can even have GPT (or another model) separately check the summary against the included evidence and give it a score, so you can flag it for manual review as part of a pipeline.
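
A minimal sketch of that kind of pipeline, assuming the current OpenAI Python client; the model names, prompts, and review threshold are illustrative, not a recipe:

    import json
    from openai import OpenAI

    client = OpenAI()

    SUMMARIZE_PROMPT = (
        "Summarize the document below as bullet points. For every point, "
        "include a verbatim quote from the document as evidence. Respond "
        "as a JSON list of {\"point\": ..., \"evidence\": ...} objects.\n\n"
    )

    CHECK_PROMPT = (
        "Given a summary point and its quoted evidence, rate from 0 to 10 "
        "how well the evidence supports the point. Reply with the number only.\n\n"
    )

    def summarize_with_evidence(document: str) -> list[dict]:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": SUMMARIZE_PROMPT + document}],
        )
        return json.loads(resp.choices[0].message.content)

    def flag_for_review(document: str, items: list[dict], threshold: int = 7) -> list[dict]:
        flagged = []
        for item in items:
            # Cheap mechanical check first: the quote must actually appear
            # in the source text, otherwise it was hallucinated.
            if item["evidence"] not in document:
                flagged.append({**item, "reason": "evidence not found in source"})
                continue
            # Then have a separate model score how well the evidence
            # supports the summary point.
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{
                    "role": "user",
                    "content": f"{CHECK_PROMPT}Point: {item['point']}\nEvidence: {item['evidence']}",
                }],
            )
            score = int(resp.choices[0].message.content.strip())
            if score < threshold:
                flagged.append({**item, "reason": f"low support score ({score})"})
        return flagged  # these go to manual review

Anything that comes back flagged gets a human looking at it; everything else carries its own quote around as a paper trail.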


How do you verify ground-truth of the original snippets?

> snippets of the original text as evidence

Assuredly the sources ("news"?) are poisoned wells, now?


You can ask GPT to only cite certain sources, and to try to corroborate non-credible sources, and it does a decent job.
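
Roughly, something like this in the prompt (the whitelist and wording here are made up for illustration):

    ALLOWED_SOURCES = ["reuters.com", "apnews.com"]  # hypothetical whitelist

    prompt = (
        "Answer using only the sources listed below, and cite the source for "
        "each claim. If a claim is supported only by a source not on the list, "
        "label it 'uncorroborated' rather than stating it as fact.\n"
        f"Allowed sources: {', '.join(ALLOWED_SOURCES)}\n\n"
        "Question: ..."
    )

It won't stop a determined disinformation campaign, but it narrows the well you're drinking from.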


tl;dr "Trust me bro"


We’ve proven again and again that we’re capable of this all on our own.


Global Thermonuclear War



