I mostly use it to recall or find simple information rather than to create textual content: things like looking up a Linux command I've used before but whose specific parameters/arguments I can't remember, or generating code to get me started on something. So I don't see the hallucination issue much. There have been cases where it pulled outdated information about a specific library I was using when I asked it to generate example code for that library, but with GPT-4 I don't see that often now.
Google's Bard/Gemini, on the other hand, quite frequently makes stuff up. So for now I'm sticking with GPT-4 via a ChatGPT Plus subscription to augment my daily dev work.
I see what you mean. Yes, I do verify it, or run it in an environment where a mistake isn't going to cripple an application or cause some crisis. But in most cases, once it points out the command name or the argument, that jogs my memory enough to know it's correct. Lately it's been mostly Dockerfile creation and stored procedure syntax. I'm not really good at keeping notes.
What do you mean? Hallucinations are unavoidable; even humans produce them semi-regularly. Our memories are nowhere near reliable enough to prevent it.
In my experience, the only more or less reliable way to avoid hallucinations is to provide the right amount of quality information in the prompt and make sure the LLM actually uses it.
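Roughly what I mean, as a sketch using the OpenAI Python client (the docs excerpt, function names, and question below are made up purely to illustrate): paste the relevant documentation into the prompt and instruct the model to answer only from it.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical excerpt from the library's docs, pasted in as grounding context.
    library_docs = """
    connect(host, port, timeout=30) -> Connection
        Opens a connection to the server. Raises ConnectionError on failure.
    """

    question = "How do I open a connection with a 5 second timeout?"

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Answer only from the documentation provided. "
                           "If the documentation doesn't cover it, say you don't know.",
            },
            {
                "role": "user",
                "content": f"Documentation:\n{library_docs}\n\nQuestion: {question}",
            },
        ],
    )

    print(response.choices[0].message.content)

The system message plus the pasted docs does most of the work: the model has something concrete to ground its answer in instead of guessing from whatever (possibly outdated) version of the library it saw in training.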