
Interesting take. I suggest an alternative: it's a skill issue if LLMs help a developer.

If the study showed that experienced developers suffered a negative performance impact while using an LLM, maybe where LLMs shine is with junior developers?

Until a new study that shows otherwise comes out, it seems the scientific conclusion is that junior developers, the ones with the skill issues, benefit from using LLMs, while more experienced developers are impacted negatively.

I look forward to any new studies that disprove that, but for now it seems settled. So you were right: it might indeed be a skill issue if LLMs help a developer, and if they do, it might be that the dev is early in their career. Do LLMs help you, out of curiosity?



Why are you so quick to call this settled and scientifically concluded on the strength of a single study? That’s incredible confidence.

There is this paper that surveys results of 37 studies and reaches a different conclusion: https://arxiv.org/abs/2507.03156

> Our analysis reveals that LLM-assistants offer both considerable benefits and critical risks. Commonly reported gains include minimized code search, accelerated development, and the automation of trivial and repetitive tasks. However, studies also highlight concerns around cognitive offloading, reduced team collaboration, and inconsistent effects on code quality.

Why are you ignoring the existence of these 37 other studies and pretending the one study you keep sharing is the only one in existence, and thus authoritatively conclusive?

Furthermore, the study you keep sharing itself states:

> We do not provide evidence that: AI systems do not currently speed up many or most software developers. Clarification: We do not claim that our developers or repositories represent a majority or plurality of software development work.

Why do YOU claim that this study provides evidence, conclusively and as settled science, that AI systems do not speed up many or most developers? You are unscientifically misrepresenting the study you are so eager to share. You have become a “hype man” for this study, pushing it beyond what it evidences, because you are eager for a way to shut down discourse and dismiss any progress made since the study’s focus on Sonnet 3.5. The study you share even says that there has been a lot of progress in the last five years, and that future progress, as well as different techniques for using the tools, may produce productive results; it does not evidence otherwise! You are unserious.


Imagine if you’d worked for a decade as a dev using Notepad as your code editor (in a world where that was somehow the best editor). You’d developed your whole career in Notepad and knew very well how to work with it.

Then, someone did a two-week study on the productivity difference between Notepad, vim, emacs, and VSCode. And it turned out that there was lower observed productivity for all of the latter three, with the smallest reduction seen in VSCode.

Would you conclude that Notepad was the best editor for programming, followed by VSCode, with vim and emacs being the worst?

That’s the flaw I see in the methodology of that study. I’m glad they did it, but the amount of “Haha, I knew it all along, and if you claim AI helps you at all, it’s just because you sucked all along…” citing of that study is astonishing and somewhat comical.


> citing of that study is astonishing and somewhat comical

I would like to see your study, one that's not sponsored by OpenAI or GitHub, that shows LLMs actually improving anything for experienced developers. Crickets.

So, to summarize:

1. An actual study shows that experienced developers' productivity declines by 19% when using an LLM.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

2. The recent MIT study showing that 95% of GenAI projects fail to produce any tangible results in enterprises:

https://fortune.com/2025/08/18/mit-report-95-percent-generat...

And your source is: 'Trust me bro'. I swear the new LLM fanbase is the same as good ol' scrum: a bunch of fanatic gaslighters.

It's always a "skill issue", "not doing it right", "not the proper LLM/scrum flavor", or a "flawed study".

When I see their studies, I might actually listen to the LLM booster crowd. But for now, I've got studies; what have you got? Vibes? Figures.


I don't think the study is flawed. It just seems rather narrow:

"We conduct a randomized controlled trial (RCT) to understand how AI tools at the February-June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience."

So the question is what other kinds of software development tasks this result applies to. Moderate AI experience is fine; that describes many other situations too. But five years of experience with a single codebase is an outlier.

That said, they used relatively large repositories (1.1 million LOC) and the tasks were randomly assigned. So developers couldn't pick and choose tasks in areas of the codebase they already knew extremely well.

I think the study does generalise to some degree, but I've seen conclusions drawn from it that the methodology doesn't support. In my view, it doesn't generalise over all or even most software development tasks.

Personally, I'm a bit sceptical (but not hostile) about LLMs for coding (and some other thinking tasks), because the difference in quality between requests for which there are many examples and tasks for which there are only a few is so extreme.

Reasoning capabilities of LLMs still seem minimal.


My argument is limited to “we don’t know, and one study with significant limitations regarding participant adaptation doesn’t settle anything definitively for the long term”.

Your argument seems to project significantly more certainty and spittle.


The LLM crowd always sees itself as both messianic and victimized, eerily reminding me of the NFT crowd a year ago. I would not be surprised if a lot of them are the same folks.

The burden of proof is on the ones claiming a new concept/tool (LLMs/NFTs) is revolutionary or useful. I provided studies showing not only that the new concept is not revolutionary, but that it is a step back in terms of productivity. Where are the studies and evidence proving that LLMs are a revolution?

NFT boosters tried for years to make us believe something that wasn't there. I will take the LLM crowd more seriously when I actually see the impact and usefulness of LLMs. For now, it's simply not there.

https://fortune.com/2025/08/18/mit-report-95-percent-generat...

> Your argument seems to project significantly more certainty and spittle.

I am not surprised, though, that a bunch of folks outsourcing their critical thinking to a fancy autocomplete have no arguments or studies to refute a pretty simple argument with some receipts behind it. Spittle? Please, at least there is an argument and links here.

From the LLM cult crowd there is usually nothing, just crickets. Show me the studies, show me the links, show me the proof that LLMs are the revolution you so desperately want them to be.

Until then, I got the receipts showing that, if anything, LLMs are just another tool and hardly a revolution worth paying attention to.


“I provided studies”

You provided one study and claimed it’s the only one in existence (it’s not).

“I got the receipts”

You have one receipt, which you misrepresent by saying it scientifically settles things that the paper itself explicitly points out it does not claim.


> is the same as good ol' scrum: a bunch of fanatic gaslighters.

Oh no, please, no. I can't take it one more time. Is it just me, or are devs the absolute worst profession when it comes to self-inflicted dogmas?



