
That's the thing too, right? The vast majority of software out there barely needs to scale or be super efficient.

It does need to be reliable, though. LLMs have proven very bad at that.



> the vast majority of software out there barely needs to scale or be super efficient

That was the way I saw it for a while too. In recent months I've begun to wonder if I need to reevaluate that, because it's become clear to me that scaling doesn't actually start from zero. By zero I mean that I was naive enough to think that every program, even one googled together by a completely new junior, would have at least *some* efficiency... but some of the LLM services I get to work on today didn't start at zero but at some negative number. It would be less of an issue if our non-developer-developers didn't use Python (or at least used Python with ruff/pyrefly/whatever you like), but some of the things they write can't even scale to do minimal BI reporting.
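
To make "negative number" concrete, here's a hypothetical sketch of the kind of pattern I mean; the functions and column names are invented for illustration, not taken from any real service:

    import pandas as pd

    # Quadratic-time anti-pattern: pd.concat inside the loop copies
    # the whole accumulated frame on every iteration.
    def summarize_slow(rows):
        out = pd.DataFrame(columns=["region", "revenue"])
        for r in rows:
            out = pd.concat([out, pd.DataFrame([r])], ignore_index=True)
        return out.groupby("region")["revenue"].sum()

    # Linear-time version: build the frame once, then aggregate.
    def summarize_fast(rows):
        return pd.DataFrame(rows).groupby("region")["revenue"].sum()

Both produce the same report, but the first one falls over long before the data is big enough to call it "scale".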


Maybe automated testing of all forms will just become much more ubiquitous as a safeguard against the worst AI hallucinations? I feel that would address a lot of people's worries about LLMs. I'm imagining a world where a software developer is someone who gathers requirements, writes some tests, asks the AI to modify the codebase, checks that the tests still pass, makes sure they're a human who understands the change the AI just made, and moves on to the next requirement.
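
A minimal sketch of what that could look like in practice, assuming an invented pricing module with an apply_discount function: the human owns the tests, the AI owns the implementation.

    # test_pricing.py -- written by the human before asking the AI
    # to implement (or change) apply_discount in pricing.py.
    from pricing import apply_discount

    def test_discount_is_applied():
        assert apply_discount(price=100.0, percent=10) == 90.0

    def test_discount_never_goes_negative():
        assert apply_discount(price=5.0, percent=200) == 0.0

    def test_zero_discount_is_identity():
        assert apply_discount(price=42.0, percent=0) == 42.0

If those still pass after the AI's edit, the change at least honors the contract the human wrote down.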



