The thing everyone forgets when talking about LLMs replacing coders is that there is much more to software engineering than writing code; in fact, writing code is probably one of the smaller aspects of the job.
One major aspect of software engineering is social: requirements analysis and figuring out what the customer actually wants, which they often don't know themselves.
If a human engineer struggles to figure out what a customer wants and a customer struggles to specify it, how can an LLM be expected to?
That was also one of the challenges during the offshoring craze in the 00s. The offshore teams did not have the power or knowledge to push back on things, and just built and built and built. Sounds very similar to AI, right?
The difference is that when AI exhibits behavior like that, you can refine the AI or add more AI layers to correct it. For example, you might create a supervisor AI that evaluates when more requirements are needed before continuing to build, and a code review AI that triggers refinements automatically.
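A minimal sketch of what that layering could look like, assuming a hypothetical `ask_llm` helper that stands in for whatever model client you use (the prompts, function names, and loop structure here are illustrative, not any specific product's workflow):

```python
# Hypothetical sketch: a "supervisor" pass checks whether requirements are
# concrete enough before a "builder" pass writes code, and a "reviewer" pass
# decides whether another refinement round is needed.
# ask_llm() is a stand-in for any chat-completion API.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def build_with_supervision(requirements: str, max_rounds: int = 3) -> str:
    # Supervisor: refuse to build until the requirements are specific enough.
    verdict = ask_llm(
        "Are these requirements specific enough to implement? "
        "Answer OK or list the missing details:\n" + requirements
    )
    if not verdict.strip().startswith("OK"):
        return "NEEDS CLARIFICATION:\n" + verdict

    code = ask_llm("Implement the following requirements:\n" + requirements)

    # Reviewer: trigger automatic refinement rounds instead of shipping blindly.
    for _ in range(max_rounds):
        review = ask_llm("Review this code against the requirements. "
                         "Answer APPROVED or list problems:\n" + code)
        if review.strip().startswith("APPROVED"):
            break
        code = ask_llm("Revise the code to address these problems:\n"
                       + review + "\n\nCode:\n" + code)
    return code
```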
LLMs do no software engineering at all, and that can be fine, because you don't actually need software engineering to create a successful program. Some applications will never need software engineering over their entire life cycle, because nobody is really paying attention to efficiency in the ocean of poor cloud management anyway.
I actually imagine it's the opposite of what you say here. I think technically inclined "IT business partners" will be capable of creating applications entirely without software engineers... because I see that happen every day in the world of green energy. The issues come later, when things have to be maintained, scale, or become efficient. That's where the software engineering comes in, because it actually matters whether you used a list or a generator in your Python app once it iterates over millions of items rather than a few hundred.
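To make the list-versus-generator point concrete, a small illustration (the item counts are arbitrary):

```python
import sys

# A list comprehension materialises every value in memory at once.
squares_list = [x * x for x in range(1_000_000)]
print(sys.getsizeof(squares_list))   # several MB just for the list's pointer
                                     # array, before counting the int objects

# A generator expression produces values lazily, one at a time,
# so summing millions of items stays at roughly constant memory.
squares_gen = (x * x for x in range(1_000_000))
print(sys.getsizeof(squares_gen))    # a couple hundred bytes

total = sum(x * x for x in range(1_000_000))  # no intermediate list needed
```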
> the vast majority of software out there barely needs to scale or be super efficient
That was the way I saw it for a while. In recent months I've begun to wonder if I need to reevaluate that, because it's become clear to me that scaling doesn't actually start from zero. By zero I mean that I was naive enough to think that every program, even the most Google-copy-pasted one written by a completely new junior, would at least have some efficiency... but some of these LLM services I get to work on today didn't start at zero but at some negative number. It would have been less of an issue if our non-developer developers didn't use Python (or at least used Python with ruff/pyrefly/whatever you like), but some of the things they write can't even scale to do minimal BI reporting.
Maybe automated testing of all forms will just become much more ubiquitous as a safeguard against the worst of AI hallucinations? I feel that would solve a lot of people's worries about LLMs. I'm imagining a world where a software developer is a person who gathers requirements, writes some tests, asks the AI to modify the codebase, ensures the tests still pass, makes sure that they, as a human, understand the change the AI just made, and then continues with the next requirement.
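In that world the human-owned artifact might just be a plain test written before the AI touches the code. A hypothetical example with pytest, where `apply_discount` and the `pricing` module are illustrative names for whatever the AI is then asked to implement:

```python
# Hypothetical example: the developer captures the requirement as tests first,
# asks the AI to make them pass, and re-runs them after every AI-generated
# change. apply_discount() / pricing are made-up names for this sketch.
import pytest
from pricing import apply_discount  # module the AI is asked to write

def test_discount_is_applied_to_gold_customers():
    assert apply_discount(price=100.0, customer_tier="gold") == 90.0

def test_discount_never_produces_negative_prices():
    assert apply_discount(price=5.0, customer_tier="gold") >= 0.0

def test_unknown_tier_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(price=100.0, customer_tier="platinumish")
```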
Yeah, this is why I don't buy the "all developers will disappear" line. Will I write a lot less code in 5 years (maybe almost none)? Sure, I already type a lot less now than a year ago. But writing code is just a small part of the process.
Exactly. Also, today I can actually believe I could finish a game which might have taken much longer before LLMs, just because now I can be pretty sure I won't get stuck on some feature simply because I've never done it before.
It actually comes down to feedback loops, which means iterating on software that is being used, or that the customer is attempting to use.
Chat UIs are an excellent customer feedback loop. Agents develop new iterations very quickly.
LLMs can absolutely handle abstractions and different kinds of component systems and overall architecture design.
They can also handle requirements analysis. But the bottom line comes back to iteration, which means fast turnaround time for changes.
The robustness and IQ of the models continue to improve. All of software engineering is well on its way to being automated.
Probably five years, max, in which un-augmented humans remain generally relevant for most work. You are going to need deep integration of AI into your own cognition somehow in order to avoid just being a bottleneck.
The thing is, it is replacing _coders_ in a way. There are millions of people who do (or did) the work that LLMs excel at. Coders who are given a ticket that says "Write this API taking this input and giving this output" who are so far down the chain they don't even get involved in things like requirements analysis, or even interact with customers.
Software engineering is a different thing, and I agree you're right (for now, at least) about that, but don't underestimate the sheer number of brainless coders out there.
> If a human engineer struggles to figure out what a customer wants and a customer struggles to specify it, how can an LLM be expected to?
Presumably, they're trained on a ton of requirements docs, as well as a huge number of customer support conversations. I'd expect them to do this at least as well as coding, and probably better.