> If this happened every once in a while then it wouldn't be a big deal, but I'd guess that more than half of the answers and tutorials I've received through ChatGPT have ended up being plain wrong.
It would actually have been more pernicious that way, since it would lull people into a false sense of security.