Yet for humans, we have built a society that prevents these mistakes except in edge cases.
Would humans make these mistakes as often as LLMs do if there were no consequences?