
They’re not dissimilar to human devs, who also often feel the need to replatform, refactor, over-generalize, etc.

The key thing in both cases, human and AI, is to be super clear about goals. Don’t say “how can this be improved”, say “what can we do to improve maintainability without major architectural changes” or “what changes would be required to scale to 100x volume” or whatever.

Open-ended, poorly-defined asks are bad news in any planning/execution-based project.
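
For what it's worth, here's a minimal sketch of what "being super clear about goals" can look like when driving an LLM programmatically. The ask_model helper and both prompts are hypothetical placeholders, not any particular vendor's API:

    # Hypothetical stand-in; swap in whatever LLM client you actually use.
    def ask_model(prompt: str) -> str:
        return "(model response would go here)"

    # Open-ended ask: invites speculative rewrites, new abstraction
    # layers, replatforming.
    vague = "Here is our module. How can this be improved?"

    # Scoped ask: states the goal, the constraint, and the expected
    # shape of the answer.
    scoped = (
        "Here is our module. Suggest changes that improve maintainability "
        "WITHOUT major architectural changes. For each suggestion, name "
        "the file, the change, and the main risk. Do not propose new "
        "frameworks or services."
    )

    answer = ask_model(scoped)

Same idea as briefing a human dev: the second prompt tells it what "better" means and what's off the table.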



A senior programmer does not suggest adding more complexity/abstraction layers just to say something. An LLM absolutely does, every single time in my experience.


You might not, but every "senior" programmer I have met on my journey has provided bad answers much like the LLMs do. Because of them I have a built-in verifier: I check whatever is being proposed, whether it comes from "seniors" or from LLMs.


There are, however, human developers who have built enough general and project-specific expertise to be able to answer these open-ended, poorly-defined requests. In fact, given how often that happens, maybe that’s at the core of what we’re being paid for.


But if the business doesn’t know the goals, is it really adding any value to go fulfill poorly defined requests like “make it better”?

AI tools can also take a swing at that kind of thing. But without a product/business intent it’s just shooting in the dark, whether human or AI.


I have to be honest: I've heard of these famed "10x" developers, but whenever I get close to one, I only ever find "hacks" with a brittle understanding of a single architecture.


Most definitely. Asking the LLM those things is the same as asking (people) on Reddit, Stack Overflow, IRC, or even Hacker News.



