This has been my experience as well. The biggest problem is that the answers look plausible, and only after implementation and experimentation do you find them to be wrong. If this happened every once in a while then it wouldn't be a big deal, but I'd guess that more than half of the answers and tutorials I've received through ChatGPT have ended up being plain wrong.
God help us if companies start relying on LLMs for life-or-death stuff like insurance claim decisions.
> If this happened every once in a while then it wouldn't be a big deal, but I'd guess that more than half of the answers and tutorials I've received through ChatGPT have ended up being plain wrong.
It would actually be more pernicious that way, since it would lull people into a false sense of security.