What can be asserted without evidence can be dismissed without evidence. It seems pretty clear to me that there is no substance to this post, without knowing anything about the author.
In general, most such claims today are without substance: they are made without any real metrics, and the metrics we actually need we simply don't have. Specifically, we would need to:

- quantify the technical debt of LLM code;
- measure how often it has errors relative to human-written code, and how critical/costly those errors are in each case relative to developer wages;
- be clear whether the LLM usage is just boilerplate/webshit or work on legacy codebases involving non-trivial logic and/or context;
- know whether the velocity/usefulness of LLM-generated code decreases as the codebase grows;

and so on (a sketch of what such a measurement record could look like follows below).
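For concreteness, here is a minimal sketch of the kind of per-change record one could collect to ground metrics like these. Every field name is a hypothetical I made up for illustration; nothing here comes from the post being discussed:

    # Hypothetical per-change record for the kind of measurement argued
    # for above. All field names are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class ChangeRecord:
        author_kind: str      # "human" | "llm-assisted" | "llm-generated"
        task_kind: str        # "boilerplate" | "greenfield" | "legacy-nontrivial"
        codebase_loc: int     # size of the codebase when the change landed
        defects_found: int    # errors attributed to this change by review/CI/production
        rework_hours: float   # developer time spent correcting the change
        wage_cost_usd: float  # loaded cost of the developer time involved

Aggregating records like this would let you compare defect rates and rework cost between human and LLM changes, and check whether the numbers degrade as codebase_loc grows.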
Otherwise anyone can make vague claims, possibly even in earnest, only for studies to show that productivity actually decreased despite the developer "feeling" faster. At this point, vague claims without concrete measurements and numbers are useless.
> We’ll unpack why identical tools deliver ~0% lift in some orgs and 25%+ in others.
At https://youtu.be/JvosMkuNxF8?t=145 he says the median is 10% more productivity, and looking at the chart we can see a 19% increase for the top teams (from July 2025).
The paper this is based on doesn't seem to be available, though, which is frustrating!
I think you are quoting productivity as measured before checking that the code actually works and correcting it. After rework, the gain drops to 1%. Timestamp 14:04.
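For intuition, here is a purely hypothetical back-of-the-envelope model of how a ~10% gross gain can net out near 1% once rework time is charged back. This is NOT the study's methodology, and all parameters are invented:

    # Purely illustrative model, not the study's methodology.
    def net_speedup(gross: float, defect_rate: float, rework_time: float) -> float:
        """Net speedup vs. baseline.

        gross:       gross throughput multiplier before correctness checks (1.10 = +10%)
        defect_rate: fraction of produced changes that later need rework
        rework_time: rework cost per defective change, in units of the
                     baseline time per change

        Each hour yields `gross` changes, of which gross*defect_rate are
        defective and each costs `rework_time` extra; output / total time.
        """
        return gross / (1 + gross * defect_rate * rework_time)

    # Invented numbers: 30% of changes need rework at ~27% of a change's
    # baseline cost -> net speedup ~1.01, i.e. about +1%.
    print(f"{net_speedup(1.10, 0.30, 0.27):.3f}")  # ~1.010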
In any case, IMHO AI SWE has happened in three phases:
Pre-Sonnet 3.7 (before Feb 2025): autocomplete worked.
Sonnet 3.7 to Codex 5.2/Opus 4.5 (Feb 2025 to Nov 2025): agentic coding started working, depending on your problem space, your ambition, and the model you chose.
Post-Opus 4.5 (Nov 2025): agentic coding works in most circumstances.
This study was published in July 2025. For most of the timeframe it covers, it isn't surprising to me that agentic coding was more trouble than it was worth.
But it's different now, so I'm not sure the conclusions are particularly relevant anymore.
As DHH pointed out: AI models are now good enough.