The purpose of having engineers write software is that they can transparently show that it works reliably, and that they can be held professionally accountable, and learn from it, when it fails.
You're suggesting that reliability should be improved by obfuscating the code through transpilation, or by merit of its being generated by a black box (an LLM).
I really suspect that simply transpiling code to Rust or Ada or some other "safe" language largely wouldn't improve its security. The whole point of these "safe" languages is that they encourage safer practices by design, and porting code to Rust means restructuring the program to conform to those practices, as opposed to just directly re-implementing it line by line (see the sketch below).
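To make that concrete, here is a minimal Rust sketch of the difference between a mechanical port and a restructured one (the function names and the off-by-one bug are mine, purely illustrative). A line-by-line translation of C pointer arithmetic lands in unsafe blocks and faithfully carries the original bug across; the idiomatic version makes that bug class unrepresentable:

    // Mechanical port of C-style pointer arithmetic: the unsafe blocks
    // faithfully preserve the original program's failure modes.
    #[allow(dead_code)]
    unsafe fn sum_transpiled(p: *const i32, len: usize) -> i32 {
        let mut total = 0;
        let mut i = 0;
        while i <= len {                   // the C off-by-one survives the port
            total += unsafe { *p.add(i) }; // out-of-bounds read on the last pass
            i += 1;
        }
        total
    }

    // Restructured, idiomatic port: the slice carries its own length,
    // so the out-of-bounds bug class can't be expressed at all.
    fn sum_idiomatic(data: &[i32]) -> i32 {
        data.iter().sum()
    }

    fn main() {
        let data = [1, 2, 3, 4];
        println!("{}", sum_idiomatic(&data)); // prints 10
        // sum_transpiled(data.as_ptr(), data.len()) compiles just as
        // happily, and would read one element past the end of `data`.
    }

Both versions compile; only the second actually benefits from Rust's guarantees, which is why a transpiler that doesn't restructure buys you very little.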
I haven't seen an LLM that is reliably capable of logic and reasoning, or that can even reliably answer technical questions, much less synthesize source code that isn't some trivial modification of something it was trained on. And it's not clear that future models will necessarily be capable of doing that.
No, but you can transpile (incredibly trivial) Rust programs into Coq, which can then be formally verified to give a defined output for all possible inputs.
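For anyone who hasn't used a proof assistant, here is roughly what "a defined output for all possible inputs" means once code lands there. A minimal sketch, in Lean rather than Coq (same idea; the function and theorem names are mine): every well-typed function is total by construction, and anything stronger is a theorem quantified over all inputs:

    -- Totality comes for free: a well-typed Lean (or Coq) function is
    -- defined for every input, or it doesn't compile at all.
    def absDiff (a b : Nat) : Nat :=
      if a ≥ b then a - b else b - a

    -- Stronger properties are theorems over all inputs, machine-checked
    -- rather than sampled by tests.
    theorem absDiff_comm (a b : Nat) : absDiff a b = absDiff b a := by
      unfold absDiff
      split <;> split <;> omega

Totality falls out of the type theory; everything beyond it you have to state and prove yourself, which is why the "(incredibly trivial)" caveat above is carrying so much weight.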
It's unacceptable to have so much non-provably-safe code exploitable like this.