
What makes you think ChatGPT isn't also returning false and/or misleading info? Maybe you just haven't noticed...

Personally, I struggle with anything even slightly technical from all of the current LLMs. You really have to know enough about the topic to detect BS when you see it... which is a significant problem for those using it as a learning tool.



This is my problem with ChatGPT and why I won't use it: I've seen it confidently return incorrect information enough times that I just cannot trust it.


The version with search will give you links to the references it bases its answers on.


You still have to go read the references and comprehend the material to determine if the GPT answer was correct or not.

I don't know the name for the effect, but it's similar to when you listen to or watch the news. When the news is about a topic you know an awful lot about, it's plainly obvious how wrong they are. Yet when you know little about the topic, you just trust what you hear, even though they're just as likely to be wrong about that topic as well.

The problem is people (myself included) try to use GPT as a guided research/learning tool, but it's filled with constant BS. When you don't know much about the topic, you're not going to understand what is BS and what is not.


In my particular case, the fact that it returns bullshit is kind of useful.

Obviously they need to fix that for realistic usage, but I use it as a studying technique. Usually when I ask it to give me some detailed information about stuff that I know a bit about, it will get some details about it wrong. Then I will argue with it until it admits that it was mistaken.

Why is this useful? Because it gets "just close enough to right" that it can be an excellent study technique. It forces me to think about why it's wrong, how to explain why it's wrong, and how to utilize research papers to get a better understanding.


> You still have to go read the references and comprehend the material

Like...it always has been?


How many will actually do that when presented with convincing, accurate-sounding information?

There's the problem... and it defeats the entire purpose of using a tool like GPT.


The Gell-Mann Amnesia Effect


I get information from unreliable sources all the time. In fact, all my sources are unreliable.

I just ignore how confident ChatGPT sounds.


True, it often returns solutions that may work but are illogical, or solutions that use tutorial-style code and fall apart once you tinker with them a bit.



