This is exactly the direction I expect AI-assisted coding to go in. Not software engineers being kicked out and some business person pressing a few buttons to have a fully functional app (as is playing out in a lot of fantasies on LinkedIn & X), but rather experienced engineers using AI to generate bits of code and then meticulously reviewing and testing them.
The million dollar (perhaps literally) question is – could @kentonv have written this library quicker by himself without any AI help?
It took me a few days to build the library with AI.
I estimate it would have taken a few weeks, maybe months to write by hand.
That said, this is a pretty ideal use case: implementing a well-known standard on a well-known platform with a clear API spec.
In my attempts to make changes to the Workers Runtime itself using AI, I've generally not felt like it saved much time. Though, people who don't know the codebase as well as I do have reported it helped them a lot.
I have found AI incredibly useful when I jump into other people's complex codebases that I'm not familiar with. I now feel comfortable doing that, since AI can help me find my way around very quickly, whereas previously I generally shied away from jumping in and would instead try to get someone on the team to make whatever change I needed.
> It took me a few days to build the library with AI. ...
> I estimate it would have taken a few weeks, maybe months to write by hand.
I don't think this is a fair assessment given that the summary of the commit history https://pastebin.com/bG0j2ube shows your work started on 2025-02-27 and began trailing off around 2025-03-20 as others joined in. Minor changes continue to the present.
> That said, this is a pretty ideal use case: implementing a well-known standard on a well-known platform with a clear API spec.
Still, this allowed you to complete in a month what may have taken two. That's a remarkable feat considering the time and value of someone of your caliber.
I think the data supports that there were about 5 distinct days when I did a large amount of work on this library, and a sprinkling of minor commits through the rest of the month. Glen's commits, while numerous, were also fairly minor, mostly logistical details around releases.
This library is not the only thing I was working on, nor even the main thing. As the lead engineer of Cloudflare Workers I have quite a few other things demanding my time.
> (...) your work started on 2025-02-27 and started trailing off at 2025-03-20 as others joined in. Minor changes continue to present.
Your analysis is far too superficial to extract anything meaningful. I know for a fact that I have small projects that took me only a couple of days to get done but whose commit history spans a few months. Also, software is never done. There's always room to refactor, and LLMs turn that into a trivial problem. Lastly, is your project still under development if your commits are README updates, linter runs, and renaming variables?
There is a reason why commit history is not used to track productivity.
Would someone of the author's caliber even be working on a trivial slog item like an OAuth2 implementation, if not for the novel development approach he wanted to attempt here?
For the kind of regular jobs an engineer is typically expected to do, would it give a 100% productivity jump?
Many tools make lesser developers more productive (to a point) but they fail to improve the productivity of talented professionals. Lots of "no/low" code things come to mind. But here's a tool that made kentonv 2x productive at a task that's clearly in his wheelhouse. It seems under the right conditions it can improve the productivity of developers at the opposite end of the spectrum.
To answer your question explicitly, we do have existing tools that help on that end, but they are nerdy and not hyped by beginners.
Type systems, LSPs, tests, formatters, Rust’s borrow checker, logs and traces, and source control are examples of things that make experts go faster. This space is hardly neglected (but could always be better).
It is really nice to see LLMs helping on all skill levels.
The fascinating part is that each person is finding their own way of using these tools, from kids to elders and everyone in between, no matter their background or language or anything else.
Funny thing. I have built something similar recently, that is a 2.1-compliant authorisation server in TypeScript[0]. I did it by hand, with some LLM help on the documentation. I think it took me about two weeks full time, give or take, and there’s still work to do, especially on the testing side of things, so I would agree with your estimate.
I’m going to take a very close look at your code base :)
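(A note for readers wondering what "2.1-compliant" boils down to in code: the headline change from classic OAuth 2.0 is that PKCE is mandatory and the implicit grant is gone. Below is a minimal, illustrative sketch of the PKCE check a token endpoint has to perform. It is not taken from kentonv's library or from the parent's server; every name in it is made up for illustration.)

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Illustrative only: verify a PKCE "S256" code_verifier against the
// code_challenge stored when the authorization code was issued.
// OAuth 2.1 makes this check mandatory for the authorization code grant.
export function verifyPkce(codeVerifier: string, storedChallenge: string): boolean {
  // challenge = base64url(SHA-256(code_verifier)), per RFC 7636
  const computed = createHash("sha256").update(codeVerifier).digest("base64url");
  const a = Buffer.from(computed);
  const b = Buffer.from(storedChallenge);
  // constant-time comparison so the check doesn't leak how many characters matched
  return a.length === b.length && timingSafeEqual(a, b);
}

// A token endpoint would run this before minting tokens, e.g.:
//   if (!verifyPkce(params.code_verifier, grant.codeChallenge)) {
//     return oauthError("invalid_grant"); // oauthError is hypothetical
//   }
```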
Maybe it's because (and I'm quoting that article) it is still lacking in what it should have that you managed to accomplish this task in "a few days" instead of "a few weeks, maybe months".
Maybe the bottleneck was not your typing speed, but the [specific knowledge] needed to build that system. If you know something well enough, you can build it way faster: when rebuilding something from scratch, you are faster because you already know the paths. In which case, my question would be: would you not have written this just as fast, or at least more securely and soundly, if you had complete knowledge of the system first?
Because contrary to LLMs, humans can actually improve and learn when they do things, and they don't when they don't do things. Is not knowing the code to its full extent worth the time "gained" by using the LLM to write it?
I think it's very hard to estimate those other aspects of the thing.
Thanks kentonv. I picked up where you left off, supported by the OAuth 2.1 RFC, and integrated MS OAuth into our internal MCP server. Cool to have Claude be business-aware.
YES!!!! I've actually been thinking about starting a studio specifically geared to turning complex RFPs and protocols into usable tools with AI-assisted coding. I built these using Cursor just to test how far it could go. I think the potential of doing that as a service is huge:
I think it's funny that Roselite caused a huge meltdown among the Veilid team simply because they are weirdly adamant about refusing any AI assistance. They even called it "plagiarism".
>I have found AI incredibly useful when I jump into other people's complex codebases, that I'm not familiar with. I now feel like I'm comfortable doing that
This makes sense. Are there codebases where you find this doesn't work as well, either from the codebase's min required context size or the code patterns not being in the training data?
I haven’t seen that as a limitation, because the agents are also able to grep for keywords to find files to explain things. So they don’t necessarily have to ingest the whole codebase into context.
> Though, people who don't know the codebase as well as I do have reported it helped them a lot.
My problem I guess is that maybe this is just Dunning-Kruger-esque. When you don't know what you don't know, you get the impression it's smart. When you do, you think it's rubbish.
Like when you see a media report on a subject you know about and you see it's inaccurate but then somehow still trust the media on a subject you're a non-expert on.
> Like when you see a media report on a subject you know about and you see it's inaccurate but then somehow still trust the media on a subject you're a non-expert on.
> My problem I guess is that maybe this is just Dunning-Kruger-esque. When you don't know what you don't know, you get the impression it's smart. When you do, you think it's rubbish.
I see your point. Indeed there are two completely different points of view regarding the output of LLMs:
* Hey, I managed to vibecode my way into a fully working web service with a React SPA after a couple of prompts, and a full automated test suite to boot.
* This project is nowhere as clean as I would have written it, and doesn't even follow my pet coding conventions.
One side lauds LLMs, the other complains they output mainly crap.
The truth of the matter is that the vast majority of software engineers write crap code, as the definition of "crap code" is "something I would have done differently". Opinionated engineers look at the output of LLMs and accuse it of being crap code. Eppur si muove (and yet it moves).
> The truth of the matter is that the vast majority of software engineers write crap code, as the definition of "crap code" is "something I would have done differently".
This is certainly a part of it, but I do wonder that even if an LLM “learned” the conventions and preferences of an engineer and spit out “perfectly styled” code, would it be treated as such? I’d wager (a small amount) that it wouldn’t, because part of enjoying the code - for me - is _knowing_ the code. “I wrote it this way because I tried X, then Y, then saw I could do Z, and now I’m familiar with the code in a way that’s more intimate.” Unfamiliar code rarely looks _really good_, in my opinion.
> Not software engineers being kicked out ... but rather experienced engineers using AI to generate bits of code and then meticulously reviewing and testing them.
But what if you only need 2 kentonv's instead of 20 at the end? Do you assume we'll find enough new tasks that will occupy the other 18? I think that's the question.
And the author is implementing a fairly technical project in this case. How about routine LoB app development?
> But what if you only need 2 kentonv's instead of 20 at the end? Do you assume we'll find enough new tasks that will occupy the other 18? I think that's the question.
This is likely where all this will end up. I have doubts that AI will replace all engineers, but I have no doubt in my mind that we'll need a lot fewer engineers.
A not so dissimilar thing happened in the sysadmin world (my career) when everything transitioned from ClickOps to the cloud & Infrastructure as Code. Infrastructure that needed 10 sysadmins to manage now only needed 1 or 2 infrastructure folks.
The role still exists, but the quantity needed is drastically reduced. The work that I do now by myself would have needed an entire team before AWS/Ansible/Terraform, etc.
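(To make the ClickOps-to-IaC contrast concrete, here is a rough sketch of the kind of declarative stack one engineer can now own end to end, using the AWS CDK in TypeScript. The resources and settings are purely illustrative, not a recommendation.)

```typescript
import * as cdk from "aws-cdk-lib";
import { aws_s3 as s3, aws_lambda as lambda } from "aws-cdk-lib";

// A whole (toy) environment declared in one reviewable, version-controlled file.
// Before IaC, each of these resources was typically a ticket and a console session.
class DemoStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    new s3.Bucket(this, "UploadsBucket", { versioned: true });

    new lambda.Function(this, "Worker", {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist"), // path is illustrative
    });
  }
}

const app = new cdk.App();
new DemoStack(app, "DemoStack");
app.synth();
```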
I think there's a huge huge space of software to build that isn't being touched today because it's not cost-effective to have an engineer build them.
But if the time it takes an engineer to build any one thing goes down, now there are a lot more things that are cost effective.
Consider niche use cases. Every company tends to have custom processes and workflows. Think about being an accountant at one company vs. another -- while a lot of the job is the same, there will always be parts that are significantly different. Those bespoke processes often involve manual labor because off-the-shelf accounting software cannot add custom features for every company.
But what if it could? What if an engineer working with AI could knock out customer-specific features 10x as fast as they could in the past? Now it actually makes sense to build those features, to improve the productivity of each company's accounting department.
It's hard to say if demand for engineers will go down or up. I'm not pretending to know for sure. But I can see a possibility that we actually have way more developers in coming years!
> I think there's a huge huge space of software to build that isn't being touched today because it's not cost-effective to have an engineer build them.
That's definitely an interesting area, but I think we'll actually see (maybe) individual employees solving some of these problems on their own without involving IT/the dev team.
We kind of see it already - a lot of these problem spaces are being solved with complex Excel workflows, crappy Access databases, etc. because the team needed their problem solved now, and resources couldn't be given to them.
Maybe AI is the answer to that so that instead of building a house of cards on Excel, these non-tech teams can have something a little more robust.
It's interesting you mentioned accounting, because that's the one department/area I see taking off and running with it the most. They are already the department that's effectively programming already with Excel workflows & DSLs in whatever ERP du jour.
So it doesn't necessarily open up more dev jobs, but maybe it fulfills the old mantra of "everyone will become a programmer," and we see more advanced computing become a commodity thanks to AI - much like everyone can click their way through an office suite with little experience or training, everyone will be able to use AI to automate large chunks of their job or departmental processes.
If we shiver at the sight of some of those accounting-created Excel sheets, which we only learn about when they fail and their owners can't understand them anymore, wait until they hand over a vibe-coded 200k LOC Python codebase "which is not working anymore" where nobody has ever reviewed a single line of code.
> I think we'll actually see (maybe) individual employees solving some of these problems on their own without involving IT/the dev team.
I agree, but in my book, those employees are now developers. And so by that definition, there will be a lot more developers.
Will we see more or fewer people whose primary job is software development? That's harder to answer. I do think we'll see a lot more consultant-type roles, with experienced software developers helping other people write their own personal automations.
> I think there's a huge huge space of software to build that isn't being touched today because it's not cost-effective to have an engineer build them.
LLMs don't change that. If a business does not have the budget for a software engineer, LLMs won't create budget headroom for it either. What LLMs do is allow engineers to iterate faster and work on more tasks. This means fewer jobs.
If a business has the budget for 1 or 2 engineers though, they might be able to task them with work that previously required 5-10 engineers (in theory, anyways).
Right, but even the way you opted to frame this discussion is based on the idea that there is a drop in demand for software engineers. You need fewer engineers, not more. A few can get more done, but you also need fewer of them to accomplish your tasks.
This is like claiming that there are fewer people who work in construction now than in the year 1000 because a machine can do what it would have literally taken 100 people to accomplish back then.
But what has happened instead is that we are now building many more buildings, and much more complex ones, than we ever would have even conceived of back then. The Three Gorges dam required the work of thousands or even tens of thousands of people when it was built, and it would have required the work of millions in the year 1000. But it didn't actually generate millions of jobs in the year 1000: it was in fact never even conceived of as a possibility, much less attempted.
Of course, the opposite can also happen. The number of carpenters has dwindled to almost nothing, when carpentry used to be a major profession, and there are many other professions that have entirely disappeared.
I didn't frame it that way - perhaps you are thinking of the person you replied to?
Nevertheless, I don't think they are trying to frame it that way, either. The point is that making software development easier can actually increase the demand for software engineers in some cases (where projects that were previously not considered due to budget constraints are now feasible).
> I didn't frame it that way - perhaps you are thinking of the person you replied to?
You did. You explicitly asserted the following.
> If a business has the budget for 1 or 2 engineers though, they might be able to task them with work that previously required 5-10 engineers (...).
In your own words, a project that would take 5-10 engineers is now feasible to be tackled with 1 or 2. Your own words.
> (...) The point is that making software development easier can actually increase the demand of software engineers in some cases (...)
I think that's somewhere between unrealistic and wishful thinking. Even in your problem statement, "making software development easier" lowers demand. Even if you argue that some positions might open where none existed before, the truth of the matter is that at the core of your scenario lies a drop in demand for software engineers. Shops who currently employ engineers won't need to retain as many to maintain their current level of productivity.
> In your own words, a project that would take 5-10 engineers is now feasible to be tackled with 1 or 2. Your own words.
That statement != lower demand for software engineers.
If a firm needs to perform project X that previously cost 10 engineers to do, but they only have the budget for 2, they will not tackle that project. Engineers used = 0.
However, if due to productivity enhancements with AI, the project can now be done with just 2 engineers, the company can now afford to tackle the project. Engineers used = 2.
That is the point that the person you were originally replying to was making.
> Even in your problem statement, "making software development easier" lowers demand.
Incorrect, as shown above.
> Even if you argue that some positions might open where none existed before, the truth of the matter is that at the core of your scenario lies a drop in demand for software engineers.
I see what you are trying to say, but it's not that clear cut. The fact is, no one knows what will actually happen to software engineering demand in the long run. Some scenarios will increase demand for engineers, others will decrease it. No one knows what the net demand will be, everyone is only guessing at this point.
> If a firm needs to perform project X that previously cost 10 engineers to do, but they only have the budget for 2, they will not tackle that project. Engineers used = 0.
0 on that project, but those 2 engineers will still be used on a different project that needs just 2 engineers.
BUT a company that sees that project as a critical part of the business and MUST tackle it will only need the 2 engineers on the payroll. Or will hire just 2 instead of 10.
Engineers not hired = 8
Or... maybe they don't really need that project that needs 10 engineers. They are OK as they are today, but they realize that with AI they don't need those 2 engineers anymore to produce the same output; it can probably be handled by just one with AI assistance.
But now every firm has access to AI. A firm that doesn't fire people but instead simply boosts productivity will outcompete its competitors. The only way to compete with that firm is to also hire enough employees and give them AI tools.
After 30+ years in the software field, and a user for 40+, having at times heavily customized my desktop or editor, for example - I've concluded that the best thing for most apps is for me to learn to use them with stock settings.
Why? Inevitably, I changed positions / jobs / platforms, and all that effort was lost / inapplicable, and I had to relearn to use the stock settings anyway.
Now, I understand that some companies have different setups, but it might just make more sense to change the company's accounting procedures (if possible) to conform to most accounting software defaults, rather than invest heavily in modifying the setup, unless you're a huge conglomerate and can keep people on staff. Why? Because someone, somewhere will have to maintain those changes. Sure, you can then hire someone else to update those changes - but guess what? Most likely, unless they open-source their changes, no LLM will have seen those changes, and even if they are allowed to fine-tune on it, they'll have seen exactly ONE instance of these changes. What are the odds they'll get everything right, AND that the person using the LLM will recognize when it doesn't go right? Oh right, they invested in hundreds of unit tests to ensure everything works as expected even with changes, and I'm the tooth fairy...
This just isn't true and will probably never be true. Using all the defaults is... probably optimal in the general sense and at scale, but most companies (or just their leadership) at some point want to deviate from the "standards" with custom design or additions. Also, any company providing payroll/accounting software has an inherent interest in going against standardization and providing features to promote lock-in.
There are good arguments to just conform. But it is in fact true nevertheless that many companies and teams continue to choose bespoke workflows over standardized ones. So I guess there must be something driving that.
I don't actually think this is going to take the form of LLMs implementing custom patches to off-the-shelf software. I think instead it's going to look like LLMs writing code that uses APIs offered by off-the-shelf software to script specific workflows.
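(As a hedged illustration of what "scripting a specific workflow against an off-the-shelf product's API" could look like: the endpoint, fields, and the $5,000 rule below are all invented for the example and don't correspond to any real accounting system.)

```typescript
// Hypothetical example: every URL, field, and rule here is invented to show
// the shape of a bespoke workflow scripted on top of an existing product's API.
interface Invoice {
  id: string;
  vendor: string;
  amountCents: number;
  poNumber: string | null;
}

async function flagInvoicesMissingPO(baseUrl: string, apiToken: string): Promise<Invoice[]> {
  const res = await fetch(`${baseUrl}/api/v1/invoices?status=unreconciled`, {
    headers: { Authorization: `Bearer ${apiToken}` },
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);

  const invoices: Invoice[] = await res.json();
  // Company-specific rule: anything over $5,000 must reference a purchase order.
  return invoices.filter((inv) => inv.amountCents > 500_000 && !inv.poNumber);
}

// Usage sketch:
//   const flagged = await flagInvoicesMissingPO("https://erp.example.com", process.env.ERP_TOKEN!);
//   console.log(`${flagged.length} invoices need review`);
```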
It's interesting that you bring up accounting software as an example. In jurisdictions where legal requirements around it are a lot more specific than in e.g. US, accounting suites usually already come with a lot of customization hooks (up to and including full-fledged scripting DSLs), and there are software engineers and companies who specialize in using those to implement bespoke accounting requirements.
I admit I have no specific knowledge of accounting and just meant to reference any random department that isn't engineering.
(Though I think it's true of engineering too. We all have our own weird team-specific processes for code reviews and CI and deployments which could probably use better automation.)
But even where lots of customization exists today (such as in engineering!), more is always possible. It's always just a question of whether the automation saves as much time as it took to build. If the automations can be built faster, then it makes sense to build more of them.
This is precisely why I compare this technological revolution with the electronic spreadsheet. Before the electronic spreadsheet, what used to take several accountants several days to "compute a whole spreadsheet" after changing "a few inputs" is serviced by a single accountant in a few minutes. The kind of service that was only available to enormous firms with teams of dozens of accountants is now available to firms with a technically proficient employee who does that kind of accounting as only a small part of their role.
It took time as different firms adapted to adopt computer technologies in their various business needs and workflows. It's hard to precisely predict how labor roles will change with each revolutionary technology.
I work for SMEs as a consulting CTO, and this is exactly where I see things going in this domain. I can take care of workloads that would've been prohibitively expensive in the past. In the case of SMEs, this may cover critical problems whose resolution can unlock new levels of growth. LLMs can be an absolute boon for them, and I'm fairly optimistic about being able to capitalize on the opportunity.
Though arguably cloud infra made it so that a lot more companies who never would have built out a data center or leased a chunk of space in one were spinning up some serious infra in AWS or Azure -- and thus hiring at least 1-2 devops engineers.
Before the end of zero interest rate policy, all the sysadmins I knew who made the transition to devops were never stuck looking for a job for long.
To be clear, the number of people employed as "SREs" or "production engineers" is actually far, far higher (at least an order of magnitude) than in the days before cloud became a thing. There are simply far more apps / companies / businesses / etc. who use cloud hosting than there ever were doing on-prem work.
I don’t think we would need fewer engineers… the work to be done will increase instead. Example: right now it takes 10 engineers to release a product in 10 months without AI. With AI it takes, let’s say, 1 engineer to release the same product in 1 month. What’s the company gonna do now? Release 10 products in 10 months with those same 10 engineers (each using AI).
It’s an exaggeration I know, but you get the point.
> What’s the company gonna do now? Release 10 products in 10 months with those same 10 engineers (each using AI).
Software is often not the bottleneck. If instead of 10 engineers you just need the one, the company will shed headcount it doesn't need. This might mean, for example, that instead of 10 developers and a software testing engineer, now a team changes to perhaps add testers while firing half of the developers.
Increased productivity means increased opportunity. There isn't going to be a time (at least not anytime soon) when we can all sit back and say "yup, we have accomplished everything there is to do with software and don't need more engineers".
But there very well might be a time very soon when humans no longer offer economic value to the software engineering process. If you could (and currently you can't) pay an AI $10k/year to do what a human could do in a year, why would you pay the human 6 figures? Or even $20k?
Nobody is claiming that humans won't have jobs simply because "we have accomplished everything there is to do". It's that humans will offer zero economic value compared to AI because AI gets so good and so cheap.
And there might be a giant asteroid that strikes the earth a few years down the line ending human civilization.
If there is some magic $10k AI that can fully replace a $200k software engineer then I'd love to see it. Until that happens this entire discussion is science fiction.
You don’t need to completely replace a whole 200k engineer. You just need to increase each engineer’s productivity sufficiently that you can reduce the total number of engineers in your company.
> If there is some magic $10k AI that can fully replace a $200k software engineer then I'd love to see it.
I think you have multiple offers of that very AI dangling in front of you, but you might be refusing to acknowledge them. One of the problems is the way you opt to frame the issue. Does "replacing" mean firing the guy hoping to replace him with a Slack webhook? Or does it mean your team decides they don't need the same headcount of medior/senior engineers because a team of junior engineers mentored by someone focusing on quality ends up being more productive?
Experts understand orbital mechanics pretty well. If experts say an asteroid will hit in the next 5 years, it's pretty similar to saying that a rock dropped from the top of a skyscraper will hit the ground. It happens billions of times every day; we know the cause and effect.
With AI, there's no real expertise involved in saying "well, it was very stupid 5 years ago, now it's starting to seem smart, if we extrapolate it's going to be smarter than me in 5 years." But no one really knows what level of effort is required to make it smarter than me. No one is an expert in something that doesn't exist yet.
Remove all the "experts" who have a major conflict of interest (running AI startups, selling AI courses, wanting to pump their company's stock price by associating with AI) and you'll find that very few actual experts in the field hold this view.
Yup, because it's a stupid view. Good enough AI is right here, right now, today; it's already impacting day-to-day work in the software industry. That one is blindingly obvious to anyone who actually bothers to look around. You don't need experts to tell you the water is wet. It takes something special to try and deny this.
It may not manifest as job loss yet, but the market response to changes is a whole other thing. For one, it's likely to first manifest as slowing down hiring relative to amount of projects being started and then released. Software is a growing market after all.
> Remove all the "experts" who have a major conflict of interest (...) and you'll find that very few actual experts in the field hold this view.
You might seek comfort in your conspiracy theories, but back in the real world the likes of me were already quite capable of creating complete and fully working projects from scratch using yesterday's LLMs.
We are talking about afternoons where you grab your coffee, saying to yourself "let's see what this vibecode thing is all about", and challenging yourself to create projects from scratch using nothing but a definition of done, LLM prompts, and a free-tier LLM configured to run in agent mode.
What, then?
You then can proceed to nitpick about code quality and bugs, but I can also say the same thing about your work, which you take far longer to deliver.
It's not. Consider that replacing the only $200k software engineer on the project is different than replacing the third or tenth $200k software engineer on the project. To the extent AI is improving productivity of those engineers, it reduces the need for adding more engineers to that team. That may mean firing some of them, or just not hiring new ones (or fewer of them) as the project expands, as existing ones + AI can keep up with increased workload.
> it'll just be 'one man startups' for better or worse.
Not necessarily. The reality is, whatever some people can do individually, if they team up, they can do more together. The teams and small startups will remain for now, and so will big companies.
I do imagine however that the internal structure will change. As the AI gets better and able to do more independently, people will shift from pair programming to more of a PM role (this is happening now), and this I imagine will quickly collapse further.
Even today, LLMs seem more suited for project management than doing actual coding - it's just the space in-between that's the problem. I.e. LLMs can code great in the small, and can break down work very well, but keeping the changes consistent and following the plan is where they still struggle. As that gap closes, I'm not really sure what the team composition would look like. But I don't doubt there'd still be teams.
This seems an important thing that somebody should be concerned about. How do we get the next generation of engineers? And how will they even be able to do the senior engineer work of validating the LLM output if they haven't had the years of experience writing code themselves?
I guess I have trouble empathizing with "But what if you only need 2 kentonv's instead of 20 at the end?" because I'm an open source oriented developer.
What's open source for if not allowing 2 developers to achieve projects that previously would have taken 20?
> but rather experienced engineers using AI to generate bits of code and then meticulously testing and reviewing them.
My problem is that (in my experience anyways) this is slower than me just writing the code myself. That's why AI is not a useful tool right now. They only get it right sometimes so it winds up being easier to just do it yourself in the first place. As the saying goes: bad help is worse than no help at all, and AI is bad help right now.
> My problem is that (in my experience anyways) this is slower than me just writing the code myself.
In my experience, the only times LLMs slow down your task is when you don't use them effectively. For example, if you provide barely any context or feedback and you prompt an LLM to write you the world, of course it will output unusable results, primarily because it will be forced to interpolate and extrapolate through the missing context.
If you take the time to learn how to gently prompt an LLM into doing what you need, you'll find out it makes you far more productive.
> My problem is that (in my experience anyways) this is slower than me just writing the code myself.
How much experience do you have writing code vs how much experience do you have prompting using AI though? You have to factor in that these tools are new and everybody is still figuring out how to use them effectively.
> You have to factor in that these tools are new and everybody is still figuring out how to use them effectively.
I think that the skills required are highly overblown.
The user should be aware of what each model excels at, its context size, temperature, and other parameters; how to communicate well, set system prompts and phrase tasks in a clear, succinct yet informative way; how to refocus the session when it veers off track; keep up to date with the latest (<~6mo) concepts and tooling, and so on.
All of this is trivial for a competent software engineer. The idea that it requires some specialized training that couldn't be attained by experimentation and reading a blog post is absurd. "Prompt engineering" just isn't a thing.
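(For what it's worth, the "parameters" being described really are a small surface area. A minimal sketch using the OpenAI Node SDK, where the model name, temperature, and system prompt are placeholder choices rather than recommendations:)

```typescript
import OpenAI from "openai";

// Minimal sketch: the knobs discussed above boil down to a system prompt,
// a model choice, and sampling parameters such as temperature.
const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function reviewDiff(diff: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    temperature: 0.2, // lower temperature for more deterministic review output
    messages: [
      { role: "system", content: "You are a terse code reviewer. Point out bugs only." },
      { role: "user", content: diff },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```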
I feel this is on point. So not only is there the time lost correcting and testing AI generated code, but there's also the mental model you build of the code when you write it yourself.
Assuming you want a strong mental model of what the code does and how it works (which you'd use in conversations with stakeholders and architecture discussions for example), writing the code manually, with perhaps minor completion-like AI assistance, may be the optimal approach.
> The million dollar (perhaps literally) question is – could @kentonv have written this library quicker by himself without any AI help?
I *think* the answer to this is clearly no: or at least, given what we can accomplish today with the tools we have now, and that we are still collectively learning how to effectively use this, there's no way it won't be faster (with effective use) in another 3-6 months to fully-code new solutions with AI. I think it requires a lot of work: well-documented, well-structured codebases with fast built-in feedback loops (good linting/unit tests etc.), but we're heading there now.
> I think the answer to this is clearly no: or at least, given what we can accomplish today with the tools we have now, and that we are still collectively learning how to effectively use this, there's no way it won't be faster (with effective use) in another 3-6 months to fully-code new solutions with AI.
I think these discussions need to start from another point. The techniques changed radically, and so did the way problems are tackled. It's not that a software engineer is/was unable to deliver a project with/without LLMs. That's a red herring. The key aspects are things like the overall quality of the work being delivered vs how much time it took to reach that level of quality.
For example, one of the primary ways an LLM is used is not to write code at all: it's to explain to you what you are looking at. Whether it's used as a Google substitute or a rubber duck, developers are able to reason about existing projects and even explore approaches and strategies to tackle problems like they were never able to before. You no longer need to book meetings with a principal engineer to ask questions: you just drop a line in Copilot Chat and ask away.
Another critical aspect is that LLMs help you explore options faster, and iterate over them. This allows you to figure out what approach works best for your scenario and adapt to emerging requirements without having to even chat with anyone. This means that, within the timeframe you would deliver the first iteration of an MVP, you can very easily deliver a much more stable project.
> Another critical aspect is that LLMs help you explore options faster, and iterate over them. This allows you to figure out what approach works best for your scenario and adapt to emerging requirements without having to even chat with anyone. This means that, within the timeframe you would deliver the first iteration of an MVP, you can very easily deliver a much more stable project
In a "well-documented, well-structured codebase with fast built-in feedback loops", a human programmer is really empowered to make changes fast. This is exactly what's needed for fast iteration, including in unfamiliar codebases.
When you are not introducing a new pattern in the code structure, it's mostly copy-paste and then edit.
But such a codebase is also extremely rare, so that's a pretty high bar to clear before you can benefit from tools like AI this way.
That's not the million dollar question; anyone who's done any kind of AI coding will tell you it's ridiculously faster. I haven't touched JavaScript, CSS & HTML in like a decade. But I got a whole website created with complex UI interactions in 20 minutes - and no frameworks - by just asking ChatGPT to write stuff for me. And that's the crappy, inefficient way of doing this work. Would have taken me a week to figure out all that. If I'd known how to do it already, and I was very good, perhaps it would have taken the same amount of time? But clearly there is a force-multiplier at work here.
The million dollar question is, what are the unintended, unpredicted consequences of developing this way?
If AI allows me to write code 10x faster, I might end up with 10x more code. Has our ability to review it gotten equally fast? Will the number of bugs multiply? Will there be new classes of bugs? Will we now hire 1 person where we hired 5 before? If that happens, will the 1 person leaving the company become a disaster? How will hiring work (cuz we have such a stellar track record at that...)? Will the changing economics of creating software now make SaaS no longer viable? Or will it make traditional commercial software companies no longer viable? Will the entire global economy change, the way it did with the rise of the first tech industry? Are we seeing a rebirth?
We won't know for sure what the consequences are for a while. But there will be consequences.
>experienced engineers using AI to generate bits of code and then meticulously reviewing and testing them
And where are we supposed to get experienced engineers if we replace all Jr Devs with AI? There is a ton of benefit from the drudgery of writing classes even if it seems like grunt work at the time.
> This is exactly the direction I expect AI-assisted coding to go in. Not software engineers being kicked out and some business person pressing a few buttons to have a fully functional app (as is playing out in a lot of fantasies on LinkedIn & X), but rather experienced engineers using AI to generate bits of code and then meticulously reviewing and testing them.
There is a middle ground: software engineers being kicked out because now some business person can hand over the task of building the entire OAuth infrastructure to a single inexperienced developer with a Claude account.
I'm not so sure that would work well in practice. How would the inexperienced developer know that the code created by the AI was correct? What if subtle bugs are introduced that the inexperienced developer didn't catch until it went out into production? What if the developer didn't even know how to debug those problems correctly? Would they know that the code they are writing is maintainable and extensible, or are they just going to generate a new layer of code on top of the old one any time they need a new feature?
> I'm not so sure that would work well in practice. How would the inexperienced developer know that the code created by the AI was correct?
Not a problem. The industry has evolved to tolerate buggy code that barely works. In some circles, that's already what's expected as the baseline. LLMs change nothing in this regard. In fact, they arguably improve upon this problem, as it becomes trivial to implement extensive automated test suites.
> What if subtle bugs are introduced that the inexperienced developer didn't catch until it went out into production?
That's what is happening in the real world without LLMs entering the picture.
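(On the claim that extensive automated test suites become trivial: the kind of test an LLM will happily churn out in bulk looks roughly like the sketch below, which reuses the illustrative verifyPkce helper from earlier in the thread. The runner, file path, and cases are assumptions, not anything from the actual library.)

```typescript
import { describe, it, expect } from "vitest";
import { createHash } from "node:crypto";
import { verifyPkce } from "./pkce"; // assumed location of the illustrative helper

// Mechanical-but-useful coverage of the kind LLMs generate cheaply.
describe("verifyPkce", () => {
  const verifier = "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk";
  const challenge = createHash("sha256").update(verifier).digest("base64url");

  it("accepts the matching code_verifier", () => {
    expect(verifyPkce(verifier, challenge)).toBe(true);
  });

  it("rejects a different code_verifier", () => {
    expect(verifyPkce("not-the-right-verifier", challenge)).toBe(false);
  });

  it("rejects a truncated challenge", () => {
    expect(verifyPkce(verifier, challenge.slice(0, -1))).toBe(false);
  });
});
```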
I've seen firsthand what happens to large software projects that collapse under their own weight of tech debt. The software literally could not function as intended - customers were lost, the product went under. Low quality being "expected" (which isn't true in my experience, either) is irrelevant when the software doesn't work at all.
The chances of all of that happening are a lot higher with a lone inexperienced engineer at the wheel. You still need experienced engineers to maintain your software, period.
> That's what is happening in the real world without LLMs entering the picture.
The difference is that most firms have experienced software engineers to fix those defects.
> Low quality being "expected" (which isn't true in my experience, either) is irrelevant when the software doesn't work at all.
Yep, fully agree. We're going through this ourselves at $CURRENT_JOB, where the instability of the platform and product as a whole due to the immensely bad decisions made in the project's past is leading to massive churn from every single customer other than the smallest ones that make us no money anyway.
And it's not just the customers, the devs are feeling it too. There's constant fires and breakages all over the place because management doesn't care to give us any time to focus on quality, and people (myself included) are getting tired of having to read through some 10kLOC monstrosity that not even God Himself could understand, and it's made worse by the clueless management saying "Have you tried having AI find the bugs for you?" like a bunch of brainless sheep being injected with that sweet ol' VC hype machine.
Sure, people will put up with some bugs from time to time, and I'm not even saying I could've or do make perfect choices as well. But there's only so many times people will put up with a broken experience before they cut ties and quit, and in this vibe-coded hallucination world we're entering, are people really going to be okay with the products they use day-in, day-out changing behavior drastically every single day based on whatever the AI decided to hallucinate this time around to "fix" that 1 persistent bug that can't seem to die?
The million-dollar question is not whether you can review at the speed the model is coding. It is whether you can trust review alone to catch everything.
If a robot assembles cars at lightning speed... but occasionally misaligns a bolt, and your only safeguard is a visual inspection afterward, some defects will roll off the assembly line. Human coders prevent many bugs by thinking during assembly.
> Human coders prevent many bugs by thinking during assembly.
I'm far from an AI true believer but come on -- human coders write bugs, tons and tons of bugs. According to Peopleware, software has "an average defect density of one to three defects per hundred lines of code"!
IMHO more rigorous test automation (including fuzzing and related techniques) is needed. Actually that holds whether AI is involved or not, but probably more so if it is.
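(One concrete flavor of "more rigorous test automation" is property-based testing, which pairs well with generated code because it doesn't depend on the author imagining every edge case. A small sketch using fast-check; the encode/decode helpers and the property are illustrative only.)

```typescript
import fc from "fast-check";
import { test } from "vitest";

// Illustrative helpers: serialize an OAuth "state" object into a URL-safe string.
function encodeState(obj: Record<string, string>): string {
  return Buffer.from(JSON.stringify(obj)).toString("base64url");
}
function decodeState(s: string): Record<string, string> {
  return JSON.parse(Buffer.from(s, "base64url").toString("utf8"));
}

// Property-based test: instead of hand-picking examples, fast-check generates
// hundreds of random inputs and shrinks any failure to a minimal counterexample.
test("state parameter round-trips for arbitrary key/value pairs", () => {
  fc.assert(
    fc.property(fc.dictionary(fc.string(), fc.string()), (state) => {
      return JSON.stringify(decodeState(encodeState(state))) === JSON.stringify(state);
    })
  );
});
```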
This is not where AI-assisted coding is going. Where it is going is: the AI will quickly become better at avoiding these types of mistakes than humans ever were (and are ever going to be), because they can and thus will be RL'ed away. What will be left standing longest is providing the vision for what the actual problem is that you want to solve.
> Not software engineers being kicked out and some business person pressing a few buttons to have a fully functional app (as is playing out in a lot of fantasies on LinkedIn & X), but rather experienced engineers using AI to generate bits of code and then meticulously reviewing and testing them.
Why would a human review the code in a few years when AI is far better than the average senior developer? Wouldn't that be as stupid as a human reviewing stockfish's moves in Chess?
> Not software engineers being kicked out and some business person pressing a few buttons to have a fully functional app (as is playing out in a lot of fantasies on LinkedIn & X)
The theory of enshittification says that "business person pressing a few buttons" approach will be pursued, even if it lowers quality, to save costs, at least until that approach undermines quality so much that it undermines the business model. However, nobody knows how much quality tradeoff tolerance is there to mine.
AI is great for undifferentiated heavy lifting and surfacing knowledge, but by the time I've made all the decisions, I can just write the code that matters myself there.
As it happens, if this were released a month later, it would have been a huge loss for us.
This OAuth library is a core component of the Workers Remote MCP framework, which we managed to ship the day before the Remote MCP standard dropped.
And because we were there and ready for customers right at the beginning, a whole lot of people ended up building their MCP servers on us, including some big names:
(Also if I had spent a month on this instead of a few days, that would be a month I wasn't spending on other things, and I have kind of a lot to do...)