I absolutely don't care about how people generate code, but they are responsible for every single line they push for review or merge.
That's my policy with each of my clients, and it works fine: if AI makes something simpler or faster, good for the author, but there is zero excuse for pushing slop or code you haven't thoroughly reviewed and tested yourself.
If somebody thinks they can offload not just the authoring or editing of code but also the responsibility for it, its impact on the whole codebase, and the underlying business problem, they should be jobless ASAP: they are de facto delegating the entirety of their job to a machine, and they are providing not just zero value but negative value.
Totally agree. For me, the hard part has been figuring out the distinction with junior engineers... Is this poorly thought-out, inefficient solution that is 3x as long as necessary the result of AI, or of inexperience?
Not defending him, but we were already doing this with Electron apps, frameworks, libraries, and scripting languages. The only meaningful cost in most software development is labor, and that’s what makes sense to optimize. I’d rather have good software, but I’ll take badly made software for free over great software that costs more than the value of the problem it solves.
These discussions are always about tactics and never operations.
Code is liability. LLM-written PRs often bring net negative value: they make the whole system larger, more brittle, and less integrated. They come at the cost of end-user quality and maintainer velocity.
Is that not also true of human written software that costs more per hour than the monthly cost of a coding agent? Developers are expected to ship enterprise software with defects that would land you in court if you made equivalent mistakes designing a water treatment plant or bridge.
I get the “AI sucks” argument from a programmer’s point of view. It’s weird looking and doesn’t care about “code smells” or about rearranging the code base’s deck chairs just the way you like.

From an owner’s or client’s perspective, human programmers suck. You want a bog-standard CRUD app? Like a baby’s first Django app? That’s going to take at least 6 months for some reason. They don’t understand your problem domain and don’t care enough to learn it. They work 15 minutes on the hour, spend 45 on social media or games, and bill you $200/hr. They “pair program” for “quality” to double their billed rate for the same product. They bill you for interns learning how to do their job on your dime. On top of that, there is still a very good chance the whole project will just be a failure.

Or I can pay Anthropic $20/month and text an AI requirements on my phone when I’ve got 5 minutes of down time. If it doesn’t work, I just make a new one and try again.

Even if progress on AI stopped today, the world is now so much better for consumers of programs. Maybe not for developers, unless you’re writing the AI and getting paid in the millions. Good for them. I’m glad to see the $200/hr Stack Overflow copy-and-pasters go do something else.
> Is that not also true of human written software that costs more per hour than the monthly cost of a coding agent?
The difference is that a human can learn and grow.
From your examples, it sounds like we're talking about completely different applications of code. I'm a software engineer responding to the original topic of reviewing PRs full of LLM slop. It sounds like you are a hobbyist who uses LLMs to vibe code personal apps. Your use case is, frankly, exactly what LLMs should be used for. It's analogous to using a consumer-grade 3D printer to make toys for your kids: that's fine, but nobody would want to be on the hook for maintaining full-scale structural systems that were printed the same way.
In this analogy, though, someone else designed a device (or several devices) and is printing them on a 3D printer, selling them online, and making an alright living from that.
I get it, but I think there’s something deeply anti-human about being ok with this (not just in software). It’s similar in sentiment to how you behave when nobody is looking: a culture and a society are so much better off when people take pride in their work.
Obviously there’s nuance (I’ll take slop food for starving people over a healthy meal for a limited few if we’re forced to choose), but the perverse incentives in society start to take over if we allow ourselves to be ok with slop. Continuously chasing the bottom of the barrel makes it impossible for high quality to exist for anyone except the rich.
Put another way: if we as a society said “it is illegal to make slop food”, both the poor and the rich would have easy access to healthy food. The cost here would be borne by the rich, as they profit off food production and thus would profit less to keep quality high.
I’m pretty sure the USSR, Cuba, and the like never succeeded doing this sort of thing, but maybe if we hit ourselves in the head with the same hammer (and sickle) just one more time it will work?
Absolutely. In areas where there are known quality options, people are clearly willing to pay more. Toyota, for instance, is a solid example of this.
Automobiles are large, expensive purchases with a relatively small set of options though... For most purchases, it's impossible to determine quality ahead of time.
It's not easy to be a junior, and we might be speaking with survivor bias, but most juniors don't end up on solid engineering teams. They are merely developers who are much cheaper and from whom you expect much less, and more often than not they are left to learn and figure things out on their own. They need to luck into a senior who will nurture them rather than just hand them low-quality work (which I admit I have done too, when I was under a lot of pressure to deliver my own stuff).
Even in less desperate teams, as productivity grows with AI (mine does; even if I don't author code with it, it's a tremendous help just navigating repos and connecting the dots, and it saves me so much time), the reviewing pressure increases too, and with it the fatigue.
It does matter, because it's a worthwhile investment of my time to deeply review, understand, and provide feedback for the work of a junior engineer on my team. That human being can learn and grow.
It is not a worthwhile use of my time to similarly "coach" LLM slop.
The classic challenge with junior engineers is that helping them ship something is often more work than just doing it yourself. I'm willing to do that extra work for a human.