Hacker News
DHH: AI models are now good enough (twitter.com/dhh)
12 points by nl 18 days ago | 16 comments


Hasn't that long-haired old racist retired yet?


DHH is long past the point where anyone should care about his technical opinions. This is a zero-substance post.


> DHH is long past the point where anyone should care about his technical opinions. This is a zero-substance post.

Can you elaborate?


What can be stated without evidence can be dismissed without evidence. It's pretty clear to me there is no substance to this post, even without knowing anything about the author.

In general, most such claims today are without substance, because they are made without any real metrics, and the metrics we actually need we simply don't have. We would need to quantify the technical debt of LLM code, how often it has errors relative to human-written code, and how critical and costly those errors are relative to developer wages. We would also need to be clear whether the LLM usage is just boilerplate/webshit or work on legacy codebases involving non-trivial logic and context, and whether the velocity and usefulness of LLM-generated code decrease as the codebase grows (a toy sketch of that accounting follows below).

Otherwise, anyone can make vague claims, perhaps even in earnest, only to have studies show that productivity actually decreased despite the developer "feeling" faster. Vague claims are useless at this point without concrete measurements and numbers.
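To make that concrete, here is a toy back-of-envelope model of the accounting such a measurement would need. Every input is an illustrative assumption (the 1.7x bug ratio echoes a figure cited later in this thread; the rest are made-up placeholders):

    # Toy model of net LLM-coding productivity (Python).
    # All inputs are illustrative assumptions, not measured data.
    gross_speedup = 0.10  # assumed raw time saved writing code
    bug_ratio = 1.7       # assumed bug rate of AI code relative to human code
    review_cost = 0.05    # assumed baseline fraction of dev time spent on review
    rework_cost = 0.04    # assumed baseline fraction of dev time fixing bugs

    # Extra review/rework scales with the relative bug rate.
    extra_overhead = (bug_ratio - 1.0) * (review_cost + rework_cost)
    net_speedup = gross_speedup - extra_overhead
    print(f"net productivity change: {net_speedup:+.1%}")  # +3.7% with these inputs

The point is not the output but the shape of the model: the sign of the result flips depending on inputs nobody is actually publishing.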


This study does a good job of measuring the productivity impact. It found a 1% uplift in developer productivity from using AI.

https://youtu.be/JvosMkuNxF8?si=J9qCjE-RvfU6qoU0


Actually, it didn't.

From the video summary itself:

> We’ll unpack why identical tools deliver ~0% lift in some orgs and 25%+ in others.

At https://youtu.be/JvosMkuNxF8?t=145 he says the median is 10% more productivity, and looking at the chart we can see a 19% increase for the top teams (from July 2025).

The paper this is based on doesn't seem to be available, though, which is frustrating!


I think you are quoting productivity measured before checking that the code actually works and correcting it. After rework, productivity drops to 1%. Timestamp: 14:04.
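To spell out the arithmetic (the 10% and 1% figures are from the video; the split between first-pass savings and rework is my own illustration):

    # Illustrative arithmetic: how a 10% gross uplift can shrink to ~1%
    # once rework is counted. Baseline task = 100 time units.
    baseline = 100.0
    first_pass_saving = 10.0  # time saved before checking the code works
    rework_overhead = 9.0     # assumed extra review and bug-fixing on AI code
    net_saving = first_pass_saving - rework_overhead
    print(f"net productivity gain: {net_saving / baseline:.0%}")  # 1%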


That was from a single company, not across the cohort.


My bad. What was the result when they measured productivity after rework across the entire cohort?


They don't publish it, as far as I can see!

In any case, IMHO AI SWE has happened in three phases:

Pre-Sonnet 3.7 (before Feb 2025): Autocomplete worked.

Sonnet 3.7 to Codex 5.2/Opus 4.5 (Feb 2025-Nov 2025): Agentic coding started working, depending on your problem space, your ambition, and the model you chose.

Post-Opus 4.5 (Nov 2025): Agentic coding works in most circumstances.

This study was published in July 2025. Given the timeframe it covers, it isn't surprising to me that AI was more trouble than it was worth.

But it's different now, so I'm not sure the conclusions are particularly relevant anymore.

As DHH pointed out: AI models are now good enough.


Sorry for the late response!

My guess is they didn't publish it because they only measured it at one company; if they had the data across the cohort, they would have published it.

The general result that review/rework can cancel out the productivity gains is supported by other studies:

AI-generated code is 1.7x buggier than human-written code: https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-gen...

Individual dev productivity gains are offset by peers having to review the verbose (and buggy) AI code: https://www.faros.ai/blog/ai-software-engineering

As for agentic coding being the saviour of productivity, Meta measured a 6-12% boost from coding agents: https://www.youtube.com/watch?v=1OzxYK2-qsI&si=ABTk-2RZM-leT...

"But it's different now" :)


Great example of something that actually has some substance beyond meaningless anecdotes.


The claim was:

> DHH is long past the point where anyone should care about his technical opinions.

I asked for evidence; you are replying to something else.


I’ve seen the same change in the last 6 months.


So have I. Opus 4.5 still needs close monitoring and code review, but it is now good enough for most of my day to day tasks.


Can we please stop taking this guy seriously...



