In open-book mode it does not hallucinate; that only happens in closed-book mode. So if you put a piece of text in the prompt, you can trust that the summary will be factual. You can also use it for information extraction, i.e. text to JSON.
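
To make the open-book setup concrete, here is a rough sketch of the kind of extraction call I mean, using the OpenAI Python SDK. The model name, field names, and sample text are just illustrative assumptions, not a fixed recipe:

    # Open-book extraction sketch: the source text is supplied in the prompt,
    # and the model is asked to return only fields grounded in that text.
    # Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
    # model name, schema, and sample text are illustrative.
    import json
    from openai import OpenAI

    client = OpenAI()

    source_text = (
        "ACME Corp reported Q2 revenue of $12.4M, up 8% year over year, "
        "and appointed Jane Doe as CFO on June 3."
    )

    prompt = (
        "Extract these fields from the text below and reply with JSON only, "
        "using the keys: company, quarter, revenue_usd_millions, "
        "yoy_growth_pct, new_cfo. Use null for anything the text does not "
        "state.\n\nText:\n" + source_text
    )

    response = client.chat.completions.create(
        model="gpt-4",      # illustrative model name
        temperature=0,      # keep the extraction output stable
        messages=[{"role": "user", "content": prompt}],
    )

    # The model may still wrap the JSON in prose, so this parse can fail;
    # good enough for a sketch.
    data = json.loads(response.choices[0].message.content)
    print(data)
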


What are you basing this assessment on? My understanding is that it can in principle still hallucinate, though with a lower probability.


I experimented with information extraction using GPT-3 and GPT-4.


I've had it hallucinate with text I've fed it. More so with GPT-3.5 than GPT-4, but it has happened.



