They’re not dissimilar to human devs, who also often feel the need to replatform, refactor, over-generalize, etc.
The key thing in both cases, human and AI, is to be super clear about goals. Don’t say “how can this be improved”, say “what can we do to improve maintainability without major architectural changes” or “what changes would be required to scale to 100x volume” or whatever.
Open-ended, poorly-defined asks are bad news in any planning/execution based project.
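To make that concrete, here's a minimal sketch of the difference, assuming the OpenAI Python client; the model name and the exact prompt wording are illustrative, not prescriptive:

    # A vague ask invites speculative rewrites; a scoped ask names the goal
    # and the constraints up front.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    vague = "How can this code be improved?"

    scoped = (
        "What can we do to improve the maintainability of this module "
        "without major architectural changes? Suggest concrete, low-risk "
        "refactors only; do not propose new frameworks or abstraction layers."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model will do
        messages=[{"role": "user", "content": scoped}],
    )
    print(response.choices[0].message.content)

Same model, same code under discussion; the only thing that changed is how tightly the goal is scoped.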
A senior programmer does not suggest adding more complexity/abstraction layers just to have something to say. An LLM absolutely does, every single time in my experience.
You might not, but every "senior" programmer I have met on my journey has given bad answers just like the LLMs do, and because of them I have a built-in verifier: I check whatever is being proposed, whether by "seniors" or by LLMs.
There are, however, human developers who have built enough general and project-specific expertise to answer these open-ended, poorly-defined requests. In fact, given how often that happens, maybe that’s at the core of what we’re being paid for.
I have to be honest, I've heard of these famed "10x" developers, but whenever I get close to one, I only ever find "hacks" with a brittle understanding of a single architecture.