This has been my experience as well. The biggest problem is that the answers look plausible, and only after implementation and experimentation do you find them to be wrong. If this happened every once in a while then it wouldn't be a big deal, but I'd guess that more than half of the answers and tutorials I've received through ChatGPT have ended up being plain wrong.

God help us if companies start relying on LLMs for life-or-death stuff like insurance claim decisions.

I'm not sure if you're being sarcastic, but in case you're not... From https://arstechnica.com/health/2023/11/ai-with-90-error-rate...

"UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges" Also "The use of faulty AI is not new for the health care industry."


> If this happened every once in a while then it wouldn't be a big deal, but I'd guess that more than half of the answers and tutorials I've received through ChatGPT have ended up being plain wrong.

It would actually have been more pernicious that way, since it would lull people into a false sense of security.
