There is a prevailing mentality that LLMs make it easy to become productive in new languages, if you are already proficient in one. That's perhaps true until you suddenly bump up against the need to go beyond your superficial understanding of the new language and its idiosyncrasies. These little collisions with reality occur until one of them sparks an issue of this magnitude.
In theory, experienced human code reviewers can course-correct newer LLM-guided devs' work before it blows up. In practice, reviewers are already stretched thin, and submitters' newfound ability to rapidly generate more and more code to review makes that exhaustion effect far worse. It becomes less likely they spot something small but obvious in the haystack of LLM-generated code barreling their way.
> There is a prevailing mentality that LLMs make it easy to become productive in new languages, if you are already proficient in one.
Yes, and: I've found this to be mostly true, provided you take the time to deeply understand what the code is doing. When I asked an LLM to do something for me in JavaScript and then said, "What if X happens, wouldn't that cause Y? Would it be better to restructure it like so and so to make it more robust?", the LLM immediately improved it.
Any experienced programmer taking the time to review this code, on learning that unwrap() can panic, would certainly change it. But as you say, reviewers are already stretched thin.
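To make the point concrete: here's a minimal sketch of the hazard (function names are hypothetical, not from the code under discussion). `.unwrap()` on a `Result` aborts the whole process on the `Err` branch, turning a recoverable bad input into a crash, which is exactly what a reviewer would want changed into explicit error propagation.

```rust
// Hypothetical example of the `.unwrap()` hazard, not the actual code
// from the incident being discussed.

// Panics on any non-numeric input: `.unwrap()` converts the Err branch
// of `parse` into a process-aborting panic.
fn parse_port(input: &str) -> u16 {
    input.parse::<u16>().unwrap()
}

// The more defensive version returns the error to the caller instead,
// leaving the decision about how to recover at the call site.
fn parse_port_checked(input: &str) -> Result<u16, std::num::ParseIntError> {
    input.parse::<u16>()
}

fn main() {
    assert_eq!(parse_port("8080"), 8080);
    // parse_port("not a port") would panic here; the checked variant
    // surfaces the same failure as an ordinary Err value.
    assert!(parse_port_checked("not a port").is_err());
    assert_eq!(parse_port_checked("8080"), Ok(8080));
}
```

In reviewed Rust code, the usual fix is to propagate with the `?` operator or handle the `Err` case explicitly, reserving panics for genuinely unreachable states.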
And because it gets picked up by LLMs. It would be interesting to know if this particular .unwrap() was written by a human.