
There are use cases where it doesn't matter, e.g. creative writing. Additionally, I don't think AI engineers have even figured out a path to LLMs that are hallucination-free and extremely accurate. It's better to ship something imperfect (or even not great) now; that way the industry gains experience and the tools slowly but surely get better.


This is a false line of thinking. Karpathy says hallucinating is what makes LLMs special. LLMs are better understood as lossy compression mixed with hallucination than as anything else.
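
To make the compression framing concrete, here's a minimal sketch (a toy of my own, using a bigram character model as a stand-in for an LLM's next-token distribution, not anything from Karpathy): any model that predicts the next symbol can drive an arithmetic coder, and the ideal code length is the sum of -log2 P(next | context). Better predictions mean shorter codes; sampling from that same distribution is what produces "hallucinated" text.

    # Toy "language model as compressor": a bigram character model
    # stands in for an LLM's next-token distribution (assumption).
    import math
    from collections import defaultdict

    def train_bigram(text):
        """Count next-character frequencies to estimate P(next | prev)."""
        counts = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(text, text[1:]):
            counts[prev][nxt] += 1
        return counts

    def code_length_bits(model, text, alphabet):
        """Ideal arithmetic-code length: sum of -log2 P(char | prev),
        with add-one (Laplace) smoothing so unseen pairs stay finite."""
        total = 0.0
        for prev, nxt in zip(text, text[1:]):
            row = model[prev]
            denom = sum(row.values()) + len(alphabet)  # Laplace smoothing
            p = (row[nxt] + 1) / denom
            total += -math.log2(p)
        return total

    corpus = "the cat sat on the mat and the cat sat on the hat"
    alphabet = set(corpus)
    model = train_bigram(corpus)

    raw_bits = 8 * len(corpus)  # naive one-byte-per-character baseline
    coded_bits = code_length_bits(model, corpus, alphabet)
    print(f"raw: {raw_bits} bits, model-coded: {coded_bits:.0f} bits")

Run it and the model-coded size comes out well under the raw 8-bits-per-character baseline, which is the sense in which any predictive model is a compressor: the better it predicts, the fewer bits it needs.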



