
What are you basing this assessment on? My understanding is that it can in principle still hallucinate, though with a lower probability.


I experimented with the task of information extraction using GPT-3 and GPT-4.
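To give a concrete sense of what I mean (a minimal sketch assuming the openai Python package's v1-style client; the prompt, fields, and sample text are made up for illustration, not my actual setup):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative source text and extraction schema (hypothetical).
SOURCE_TEXT = (
    "ACME Corp was founded in 1999 in Austin, Texas, by Jane Doe "
    "and now employs roughly 4,000 people."
)

PROMPT = (
    "Extract the following fields from the text as JSON: "
    "company, founded_year, location, founder, employee_count. "
    "Use null for any field not stated in the text.\n\n"
    f"Text: {SOURCE_TEXT}"
)

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,  # low temperature reduces, but does not eliminate, hallucination
    messages=[{"role": "user", "content": PROMPT}],
)

print(response.choices[0].message.content)
```

Even with the source text in the prompt and the instruction to return null for missing fields, the model can still invent values for fields the text never mentions.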


I've had it hallucinate on text I've fed it, more often with GPT-3.5 than with GPT-4, but it has happened.
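One crude way to catch this in extraction tasks is a grounding check: flag any extracted string value that never appears in the source text. A toy sketch (illustrative only; verbatim matching misses paraphrases and normalized values, so it only catches the most blatant fabrications):

```python
import json

def ungrounded_fields(extracted_json: str, source_text: str) -> list[str]:
    """Return names of fields whose string values don't occur in the source."""
    fields = json.loads(extracted_json)
    source = source_text.lower()
    return [
        name
        for name, value in fields.items()
        if isinstance(value, str) and value.lower() not in source
    ]

# "Boston" never appears in the source, so it gets flagged.
print(ungrounded_fields(
    '{"company": "ACME Corp", "location": "Boston"}',
    "ACME Corp was founded in 1999 in Austin, Texas.",
))  # -> ['location']
```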



