
How do you handle the lying/hallucination problem? Do you just run the command and hope?


The stuff I use it for is usually recalling or finding simple information rather than creating textual content: things like looking up some Linux command I've used before but can't recall the specific parameters/arguments for, or generating code to get me started on something. So I don't see the hallucination issue much. There have been cases where it pulled outdated information tied to a specific library I was using when I asked it to generate example code showing how to use that library. But with GPT-4 I don't see that often now.

Google's Bard/Gemini, on the other hand, quite frequently makes stuff up. So for now I'm sticking with GPT-4 via a ChatGPT Plus subscription to augment my daily dev work.


> looking up some Linux command I've used before but can't recall the specific parameters/arguments for

So, to repeat the question:

Do you just run the command and hope? Or do you double-check using the manpage that it isn't going to do something drastic and unexpected?
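For example, if it suggested something like rsync --delete (just an illustration), a quick check before running anything:

    # confirm the flag exists and read what it actually does first
    man rsync | grep -n -A 2 -- '--delete'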


I see what you mean. Yes, I do verify it, or I run it in an environment where a mistake isn't going to cripple an application or cause some crisis. But in most cases, once it points out the argument or the command name, that usually jogs my memory enough to know it's correct. Lately it's mostly been when creating Dockerfiles, and also stored procedure syntax. I'm not really good at keeping notes.


Anyone who still talks about hallucinations today hasn't used a paid service in the last 6 months.


I just had a paid OpenAI service tell me all about a command-line argument that doesn't exist.

It isn't possible to do what I wanted with the proposed command, but the hallucination helped me to Google a method that worked.


What do you mean? Hallucinations are unavoidable; even humans produce them semi-regularly. Our memories are not nearly reliable enough to prevent it.

In my experience, the only more or less reliable way to avoid hallucinations is to provide the right amount of quality information in the prompt and make sure the LLM actually uses it.
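Something like this, as a rough sketch (the docs file, the question, and the system instruction are placeholders; it assumes the standard OpenAI chat completions endpoint plus curl and jq):

    # paste the relevant docs into the prompt and tell the model to stick to them
    DOCS=$(cat library_docs_excerpt.txt)   # placeholder: whatever documentation you trust
    curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d "$(jq -n --arg docs "$DOCS" '{
            model: "gpt-4",
            messages: [
              {role: "system",
               content: "Answer only from the documentation provided. If it does not cover the question, say so."},
              {role: "user",
               content: ("Documentation:\n" + $docs + "\n\nQuestion: which argument enables the behavior I want?")}
            ]
          }')"

The point isn't the exact plumbing; it's that the model answers from text you gave it rather than from its memory of the library.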



