Google for the Stanford study by Yegor Denisov-Blanch. You might have to pay to access the paper, but you can watch the author’s synopsis on YouTube.
For low-complexity greenfield projects (best case) they found a 30% to 40% productivity boost.
For high-complexity brownfield projects (worst case) they found anywhere from a 5% productivity loss to a 10% boost.
The METR study from a few weeks ago showed an average productivity drop of around 20%.
That study also found that the average developer believed AI had made them about 20% more productive. The gap between perception and reality was therefore roughly 40 percentage points on average (+20% perceived vs −20% measured).
The devil is always in the details with these studies: what did they measure, how did they measure it, did they count time spent learning the new tool as unproductive, etc. I'll have to read them myself. Regardless, if the scientific truth is that AI makes most people less productive on average, that will make me sad, but it won't change the fact that for my specific use case there is a clear time saving.
Sure, you need to read them yourself to know what conclusions to draw.
In my specific case I felt like I was maybe 30% faster on greenfield projects with AI (and maybe 10% on brownfield). Then I read the study showing a 40-percentage-point overestimate on average.
I started tracking things and it’s pretty clear I’m not actually saving anywhere near 30%, and I’d estimate that long term I might be in the negative productivity realm.
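The tracking doesn't have to be anything fancy. Here's a minimal sketch of the kind of thing I mean, assuming a simple CSV log where each task gets an up-front estimate of how long it would take without AI and the time it actually took with AI (the task_log.csv name and the baseline_hours/actual_hours columns are just illustrative):

    import csv
    from statistics import mean

    # Illustrative schema: one row per task, with my up-front estimate of the
    # time without AI ("baseline_hours") and the time it actually took with AI
    # ("actual_hours"). File name and column names are made up for this sketch.
    with open("task_log.csv", newline="") as f:
        tasks = list(csv.DictReader(f))

    # Fraction of time saved per task: positive = genuinely faster with AI,
    # negative = slower than my own no-AI estimate.
    savings = [
        1 - float(t["actual_hours"]) / float(t["baseline_hours"])
        for t in tasks
    ]

    print(f"Tasks logged: {len(tasks)}")
    print(f"Average time saved: {mean(savings):+.1%}")

The weak point is obviously the baseline estimate, which is itself a guess, but writing the guess down before seeing the result is exactly what keeps the perception gap from creeping in.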