The thing everyone forgets when talking about LLMs replacing coders is that there is much more to software engineering than writing code; in fact, writing code is probably one of the smaller aspects of the job.

One major aspect of software engineering is social: requirements analysis and figuring out what the customer actually wants, which they often don't know themselves.

If a human engineer struggles to figure out what a customer wants and a customer struggles to specify it, how can an LLM be expected to?



That was also one of the challenges during the offshoring craze in the 00s. The offshore teams did not have the power or knowledge to push back on things and just built and built and built. Sounds very similar to AI, right?

Probably going to have the same outcome.


I tend to see today's AI Vibrators as the managers of the 00s and their army of offshore devs.


Did you actually mean to say AI Vibrators?


I'm guessing it is a derogatory pun, alluding to vibe coders.


Give it 3 months. There will be an AI Vibrator on the market, if there isn't one already.


I just found this MCP integration, but unfortunately I don't have a device I can test it on - https://github.com/ConAcademy/buttplug-mcp


You can fix that! I hear there's this new thing called online shopping


The difference is that when AI exhibits behavior like that, you can refine the AI or add more AI layers to correct it. For example, you might create a supervisor AI that evaluates when more requirements are needed before continuing to build, and a code review AI that triggers refinements automatically.
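
A minimal sketch of what that layering could look like, in Python, with ask_model() as a hypothetical stand-in for whatever LLM call you actually use (not any specific vendor API):

    # Sketch of the supervisor + builder + reviewer layering described above.
    # ask_model() is a hypothetical placeholder, not a real SDK call.
    def ask_model(role: str, prompt: str) -> str:
        """Stand-in for a real LLM call (hosted API, local model, ...)."""
        raise NotImplementedError

    def build_feature(ticket: str, max_reviews: int = 3) -> str:
        # Supervisor layer: decide whether the requirements are complete enough to build.
        verdict = ask_model("supervisor", f"Are these requirements complete enough?\n{ticket}")
        if verdict.lower().startswith("no"):
            return f"Blocked, needs clarification: {verdict}"

        code = ask_model("builder", f"Implement this ticket:\n{ticket}")

        # Code-review layer: trigger refinements automatically until approved.
        for _ in range(max_reviews):
            review = ask_model("reviewer", f"Review this change against the ticket:\n{ticket}\n\n{code}")
            if "approved" in review.lower():
                break
            code = ask_model("builder", f"Revise per this review:\n{review}\n\n{code}")
        return code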


The question is how autonomous decision-making works. Nobody argues that an LLM can finish any sentence, but can it push a red button?


Of course it can push a red button. Trivially, with MCP.

Setting up a system to make decisions autonomously is technically easy. Ensuring that it makes the right decisions, though, is a far harder task.
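
For illustration, a minimal sketch of exposing a "red button" as an MCP tool, using the FastMCP helper from the MCP Python SDK (the exact API surface may differ by version, and press_red_button is obviously made up):

    # Minimal sketch: a "red button" the model can invoke as an MCP tool.
    # press_red_button is a made-up example; the SDK surface may vary by version.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("red-button")

    @mcp.tool()
    def press_red_button(reason: str) -> str:
        """Press the red button. The hard part isn't this call; it's deciding
        when pressing it is actually the right decision."""
        return f"Button pressed because: {reason}"

    if __name__ == "__main__":
        mcp.run()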


So it can push _a_ red button, but not necessarily the _right_ red button


LLMs do no software engineering at all, and that can be fine, because you don't actually need software engineering to create successful programs. Some applications will not even need software engineering for their entire life cycles, because nobody is really paying attention to efficiency in the ocean of poor cloud management anyway.

I actually imagine it's the opposite of what you say here. I think technically inclined "IT business partners" will be capable of creating applications entirely without software engineers... because I see that happen every day in the world of green energy. The issues come later, when things have to be maintained, scale, or become efficient. This is where the software engineering comes in, because it actually matters whether you used a list or a generator in your Python app when it iterates over millions of items and not just a few hundred.
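
To make the list-vs-generator point concrete (toy numbers, but the memory behaviour is the whole story):

    rows = range(10_000_000)  # stand-in for millions of items

    squares_list = [r * r for r in rows]   # materialises a 10M-element list up front
    total_from_list = sum(squares_list)

    squares_gen = (r * r for r in rows)    # yields one value at a time
    total_from_gen = sum(squares_gen)      # same result, near-constant memory

    assert total_from_list == total_from_gen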


That's the thing too, right? The vast majority of software out there barely needs to scale or be super efficient.

It does need to be reliable, though. LLMs have proven very bad at that.


> the vast majority of software out there barely needs to scale or be super efficient

That was the way I saw it for a while. In recent months I've begun to wonder if I need to reevaluate that, because it's become clear to me that scaling doesn't actually start from zero. By zero I mean that I was naive enough to think that all programs, even the most Google-programmed one by a completely new junior, would at least have some efficiency... but some of these LLM services I get to work on today are so bad they didn't start at zero but at some negative number. It would have been less of an issue if our non-developer-developers didn't use Python (or at least used Python with ruff/pyrefly/whatever you like), but some of the things they write can't even scale to do minimal BI reporting.


Maybe automated testing of all forms will just become much more ubiquitous as a safeguard against the worst of AI hallucinations? I feel that would solve a lot of people's worries about LLMs. I'm imagining a world where a software developer is a person who gathers requirements, writes some tests, asks the AI to modify the codebase, ensures the tests still pass, makes sure they, as a human, understand the change the AI just made, and then continues with the next requirement.


Yeah, this is why I don't buy the "all developers will disappear" claim. Will I write a lot less code in 5 years (maybe almost none)? Sure, I already type a lot less now than a year ago. But that is just a small part of the process.


No. The scope will just increase to occupy the space left by LLMs. We will never be allowed to retire.


Exactly. Also, today I can actually believe I could finish a game that might have taken much longer before LLMs, just because now I can be pretty sure I won't get stuck on some feature simply because I've never done it before.


I think LLMs are better at requirement elicitation than they are at actually writing code.


It actually comes down to feedback loops, which means iterating on software as it is being used, or attempted to be used, by the customer.

Chat UIs are an excellent customer feedback loop. Agents develop new iterations very quickly.

LLMs can absolutely handle abstractions and different kinds of component systems and overall architecture design.

They can also handle requirements analysis. But it comes back to iteration for the bottom line which means fast turnaround time for changes.

The robustness and IQ of the models continue to improve. All of software engineering is well on its way to being automated.

Probably five years max where un-augmented humans are still generally relevant for most work. You are going to need deep integration of AI into your own cognition somehow in order to avoid just being a bottleneck.


The thing is, it is replacing _coders_ in a way. There are millions of people who do (or did) the work that LLMs excel at. Coders who are given a ticket that says "Write this API taking this input and giving this output" who are so far down the chain they don't even get involved in things like requirements analysis, or even interact with customers.

Software engineering is a different thing, and I agree you're right (for now at least) about that, but don't underestimate the sheer number of brainless coders out there.


That sounds more like a case against a highly ossified waterfall development process than anything.

I would argue it’s a good thing to replace the actual brainless activities.


> One major aspect of software engineering is social, requirements analysis and figuring out what the customer actually wants, they often don't know.

It really depends on the organization. In many places product owners and product managers do this nowadays.


> If a human engineer struggles to figure out what a customer wants and a customer struggles to specify it, how can an LLM be expected to?

Presumably, they're trained on a ton of requirements docs, as well as a huge number of customer support conversations. I'd expect them to do this at least as well as coding, and probably better.



