
> Asking AI to tell reality from fiction is a bit much when the humans it gets its info from can’t, but this is at least not ridiculous.

I agree with that, but the problem is that it is being positioned as a reliable source of information, and is being treated as such. Google's disclaimer, "AI responses may include mistakes. Learn more", only appears if you click the button to show more of the response, and it is set in smaller, light gray text, clearly overshadowed by the animated button inviting you to do a deep dive.

The problem is just how easy it is to "lead on" one of these models. Phrasing a search as "why is rum healthy" implies that I already believe it is healthy, so of course the answer plays along, and that is why this is so broken. Asking "is rum healthy" instead produces a more factual answer:

> Rum is an alcoholic beverage that does not have any significant health benefits. While some studies have suggested potential benefits, such as improved blood circulation and reduced risk of heart disease, these findings are often based on limited evidence and have not been widely accepted by the medical community.



> the problem is that it is being positioned as a reliable source of information. And is being treated as such

That's because of SEO. Top results are assumed reliable, because there is currently no other way to ascertain reliability in an efficient and scalable way, and the top results are the sites that have optimized their content and keywords to land in the top results.



