Maybe automated testing, in all its forms, will just become much more ubiquitous as a safeguard against the worst of AI hallucinations? I feel that would address a lot of people's worries about LLMs. I'm imagining a world where a software developer is someone who gathers requirements, writes some tests, asks the AI to modify the codebase, checks that the tests still pass, makes sure they, as a human, actually understand the change the AI just made, and then moves on to the next requirement.
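
To make that concrete, here's a minimal sketch of what the "write some tests" step could look like, assuming a Python codebase with pytest. The `orders` module, the `apply_discount` function, and the discount requirement are all hypothetical; the point is that the human encodes the requirement as executable checks before letting the AI touch the code.

```python
# Hypothetical requirement: "discount codes reduce the order total".
# The human writes these tests first, then asks the AI to change the
# codebase until they pass. Module and function names are made up.
from orders import apply_discount  # assumed to exist once the AI's change lands

def test_discount_code_reduces_total():
    # The requirement as an executable check, independent of how the AI implements it.
    assert apply_discount(total=100.0, code="SAVE10") == 90.0

def test_unknown_code_leaves_total_unchanged():
    # Unknown codes should be ignored rather than raising or mischarging.
    assert apply_discount(total=100.0, code="BOGUS") == 100.0
```

The tests act as the contract: if the AI's change makes them pass without breaking the rest of the suite, the human's remaining job is to read the diff and confirm they understand it.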