
The stuff I use it for is usually recalling or finding simple information rather than creating textual content. Stuff like looking up some Linux command that I've used before but can't recall the specific parameters/arguments, or generating code to get me started on something. So I don't see the hallucination issue much. There have been cases where it pulled outdated information about a specific library I was using when I asked it to generate example code for that library, but with GPT-4 I don't see that often now.

Now Google's Bard/Gemini, on the other hand, quite frequently makes stuff up. So for now I'm sticking with GPT-4 via a ChatGPT Plus subscription to augment my daily dev work.



> Stuff like looking up some linux command that I've used before, but can't recall specific parameters/arguments

So, to repeat the question:

Do you just run the command and hope? Or do you double-check using the manpage that it isn't going to do something drastic and unexpected?


I see what you mean. Yes, I do verify it, or I run it in an environment where a mistake isn't going to cripple an application or cause some crisis. In most cases, once it points out the argument or the command name, that usually jogs my memory enough to know that it's correct. Lately it's been mostly Dockerfile and stored procedure syntax. I'm not really good at keeping notes.
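
For example (a hypothetical exchange, paths made up): if it suggests an rsync flag I only half-remember, I'll check it against the manpage and dry-run it before trusting it:

    # suggested by the model; --delete is the part I want to verify
    rsync -a --delete ./build/ user@host:/var/www/site/

    # confirm what --delete actually does
    man rsync | grep -A2 -- '--delete '

    # dry run first: -n lists what would change without doing it
    rsync -an --delete ./build/ user@host:/var/www/site/

That dry run is usually enough to catch a destructive flag before it touches anything.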



