Niklaus Wirth wrote about this in 1995, in his essay "A Plea for Lean Software":
> About 25 years ago, an interactive text editor could be designed with as little as 8,000 bytes of storage. (Modern program editors request 100 times that much). An operating system had to manage with 8,000 bytes, and a compiler had to fit into 32 Kbytes, whereas their modern descendants require megabytes. Has all this inflated software become any faster? On the contrary, were it not for a thousand times faster hardware, modern software would be utterly unusable.
https://www.computer.org/csdl/magazine/co/1995/02/r2064/13rR...
That said, as someone fairly young, I still don't think that makes it wrong or something only an old man would think. Software seems to perform exactly as well as it needs to and no more, which is why hardware advances don't make our computers run software much faster.
Aside from slowness, feature creep leads to poor quality: tons of bugs, and user confusion from ever-changing graphical interfaces.
If software were simpler, we could afford to offer some formal guarantees of correctness: model check protocols, verify pre- and postconditions à la Dafny, and so on.
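Not that runtime assertions are anywhere near what Dafny proves statically, but to make the shape of the idea concrete, here's a rough sketch in Rust, with assertions standing in for verified contracts (the function and its conditions are made up for illustration):

    /// Hypothetical example: binary search with Dafny-style pre/postconditions,
    /// approximated here with runtime checks rather than static verification.
    fn binary_search(xs: &[i32], target: i32) -> Option<usize> {
        // Precondition: the slice must be sorted (Dafny would prove callers satisfy this).
        debug_assert!(
            xs.windows(2).all(|w| w[0] <= w[1]),
            "precondition: input must be sorted"
        );

        let (mut lo, mut hi) = (0usize, xs.len());
        while lo < hi {
            // Loop invariant: any match lies within xs[lo..hi].
            let mid = lo + (hi - lo) / 2;
            if xs[mid] < target {
                lo = mid + 1;
            } else {
                hi = mid;
            }
        }

        let result = if lo < xs.len() && xs[lo] == target { Some(lo) } else { None };
        // Postcondition: if we claim an index, it really holds the target.
        debug_assert!(result.map_or(true, |i| xs[i] == target));
        result
    }

    fn main() {
        let xs = [1, 3, 5, 7, 9];
        assert_eq!(binary_search(&xs, 7), Some(3));
        assert_eq!(binary_search(&xs, 4), None);
    }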
+1 to this. Like a lot of issues, I think the root is ideological, but this one in particular very clearly manifests organizationally.
The companies building everyday software are ever bigger, full of software engineers, designers and various kinds of managers who are asked to justify their generous salaries. At an individual level I'm sure there are all sorts of cases, but at a general level there's often almost no other option but to introduce change for the sake of change.
I once asked a man who worked in marketing why Oreo keeps making crazy new flavors like "Sour Patch Kids Oreos" when the normal kind is great and clearly has no trouble selling. I could see some upside: it gets people talking about the brand, it's fun, it reinforces the normal flavor as the best chocolate cookie, etc. But I was still dubious that those benefits outweighed the cost of developing new flavors in a lab, paperwork for food safety, a new manufacturing process, new ads, new packaging, etc., especially for something temporary.
He said it's often just that some new marketing exec wants to put something on their resume, and the metrics they target don't necessarily align with the company's long-term profits.
> +1 to this. Like a lot of issues, I think the root is ideological, but this one in particular very clearly manifests organizationally.
> The companies building everyday software are ever bigger, full of software engineers, designers and various kinds of managers who are asked to justify their generous salaries. At an individual level I'm sure there are all sorts of cases, but at a general level there's often almost no other option but to introduce change for the sake of change.
At a general level, I believe there are other options: changes/features need to meet some level of usage or they get scrapped, out of recognition that supporting all these features makes bugs more likely, degrades performance, makes it harder to add new features, makes the product more difficult to use, etc.
> Software seems to perform exactly as well as it needs to and no more
The cynical spin I would put on this idea is that software performs as poorly as it can get away with. MSFT is feeling the need/pressure to have Office load faster, and they will try to get away with preloading it.
Otherwise, there is a strong pull towards bloat that different people will try to take credit for as features even if the cumulative experience of all these "features" is actually a worse user-experience.
Software authors who don't care about performance annoy me (and I am an old man).
The amount of work a computer can do in a single thread is amazing, and computers now have a dozen or more threads to do work. If developers cared about performance, things would easily be 20x as performant as they are today.
I'm not talking about "write in assembly, duh" I'm talking about just doing things intelligently instead of naively. The developers I support often simply are not thinking about the problem they're solving and they solve the problem in the simplest way (for them) and not the simplest way for a computer.
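To make up a concrete (and entirely hypothetical) example of what I mean, in Rust: finding which IDs from one list also appear in another. The first version is the "simplest for the developer" one; the second takes thirty extra seconds of thought and is the "simplest for the computer":

    use std::collections::HashSet;

    // Naive version: for each id, scan the whole other list. O(n * m), and the
    // repeated linear scans do the same work over and over on large inputs.
    fn common_ids_naive(a: &[u64], b: &[u64]) -> Vec<u64> {
        a.iter().copied().filter(|id| b.contains(id)).collect()
    }

    // "Think about the machine" version: build a hash set once, then do O(1)
    // membership checks. O(n + m), dramatically faster as the lists grow.
    fn common_ids_fast(a: &[u64], b: &[u64]) -> Vec<u64> {
        let lookup: HashSet<u64> = b.iter().copied().collect();
        a.iter().copied().filter(|id| lookup.contains(id)).collect()
    }

    fn main() {
        let a: Vec<u64> = (0..10_000).collect();
        let b: Vec<u64> = (5_000..15_000).collect();
        assert_eq!(common_ids_naive(&a, &b), common_ids_fast(&a, &b));
    }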
Software is an inefficiency amplifier, because the number of developers for a piece of code is much smaller than the number of computers that run that code; how much coal has been burned solely because of shitty implementations? I'd wager that the answer is "a LOT!"
Even if you don't care about coal usage, think about how much happier your users would be if your application were suddenly 5x faster than before. Now think of how many customers want their software to be slow (outside of TheDailyWTF): zero.
Languages like JavaScript and Python remove you so far from the CPU and the cache that even if you were thinking of those things, you can't do anything about it. JS and Electron are great for developers and horrible for users because of that amplification I described above.
I am dead tired of seeing hustle culture overtake everything in this field, and of watching the things that matter to me, like quality, performance, and support, fall straight down the toilet simply because executives want to release features faster.
Things like Copilot could help with this, I hope. Presumably Copilot will help introduce better code into applications than a daydreaming developer would, though the existence of vibe coding sort of nulls that out, probably.
One thing AI will do quite soon is dramatically increase the amount of software that exists, and I am kinda concerned about the possibility that it's all going to suck horribly.
I share your frustration with developers writing things suboptimally all too often. However, I disagree with the assumption that it's a JS/Python vs. C issue.
Example: when VS Code came out, it was much, much faster, more responsive, and more stable than Visual Studio at the time. Despite being based on Electron, it apparently had much better architecture, algorithms, and multithreading than VS with its legacy C++ and .NET codebase. That really impressed me, as a C++ programmer.
Overall, it feels like folks who idealize bygone eras of computing didn't witness or have forgotten how slow Windows, VS, Office etc. used to feel in the 90s.
> Overall, it feels like folks who idealize bygone eras of computing didn't witness or have forgotten how slow Windows, VS, Office etc. used to feel in the 90s.
Let’s normalize speed over time like we do dollars, so we are talking about the same thing.
Given the enormous multiplier in CPU and storage hardware speeds and parallelism today vs. say 1995, any “slow” application then should be indistinguishable from instant today.
“Slow” in the 90s and “slow” in 2025 are essentially different words. Using them interchangeably, without clarification, sweeps several orders of magnitude of speed (or inefficiency) difference under the rug.
The promise of computing was that what was slow in the 1960s and 1970s would be instant by 1990. And those things were instant, but they weren’t what people did with computers anymore.
New software that did more than before, but less efficiently, came around, so everything felt the same. Developers didn’t have to focus on performance so much, so they didn’t.
Developers are lazy sacks who are held skyward because of hardware designers alone. And software developers are just getting heavier and heavier all the time, but the hardware people can’t hold them forever.
This cannot continue forever. Run software from the 1990s or 2000s on modern hardware. It is unbelievably fast.
Maybe it was slow in the 1990s, sure. I ask why we can’t (or won’t) write software that performs like that today.
The Turbo Pascal compiler could compile on the order of a million lines per minute in the early 1990s. We have regressed to waiting for 60+ minute C++ compile times today, on even moderate project sizes.
Debugging in Visual Studio used to be instant when you did things like Step Over. You could hold the function key down and just eyeball your watch variables to see what was going on. The UI would update at 60 FPS the entire time. Now if I hold down that key, the UI freezes, and when I let go of the key it takes time to catch up. Useless. All so Microsoft could write the front end in .NET. Ruin a product so it is easier to write… absolute nonsense decision.
All software is like that today. It’s all slow because developers are lazy sacks who will only do the minimum necessary so they can proceed to the next thing. I am ashamed of my industry because of things like this.
“Developers are lazy sacks who are held skyward because of hardware designers alone”
As a programmer who studied computer and electrical engineering in university, never before have I been so offended by something I one hundred percent agree with
Counterpoint: single threaded performance hasn't improved much in the past 20 years. Maybe 5x at best. And virtually every UI programming environment still has problems with work done on the main thread.
Parallel RAM bandwidth, more cache levels and larger caches, better cache policies, instruction reordering, branch prediction, register optimization, vector instructions... there have been many advances in single-thread execution since the 90s, beyond any clock speed gains.
> The amount of work a computer can do in a single thread is amazing, and computers now have a dozen or more threads to do work. If developers cared about performance, things would easily be 20x as performant as they are today.
Why? A good portion of programs are still single-threaded, and often that's the correct choice. Even in games a single-threaded main thread or logic thread may be the only choice. Where multi-threading makes sense it should be employed, but it's difficult to do well.
Otherwise, it's up to the OS to balance threads appropriately. All major OSes do this well today.
I think what the author wanted to say is that because computers are very fast today, developers have no incentive to write optimized code.
Nowadays you just "scale horizontally" by the magic of whatever orchestration platform you happen to use, which is the modern equivalent of throwing hardware at the problem the way we did in the vertical-scaling days.
It’s not about programs being multithreaded. It’s about computers running multiple programs at once on different threads and they all perform well.
One can write software that uses the CPU cache in non-dumb ways no matter how many threads your program has. You can craft your structs so that they take less space in RAM, meaning you can fit more of them in cache at once. You can have structs of arrays instead of arrays of structs if that helps your application. Few people think of things like this today; they just go for the most naive implementation possible, so the branch predictor can't work well and everything has to be fetched from RAM every time, instead of building things so that the branch predictor and the cache are helping you rather than impeding you. People just do the bare minimum so the PM says the card is complete, and then they never think of it again. It's depressing.
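A toy sketch of the struct-of-arrays idea (the Particle fields are invented; the point is only the memory layout):

    // Array of structs: each Particle drags every field through the cache,
    // even when a pass only needs one of them.
    #[allow(dead_code)]
    struct Particle {
        x: f32,
        y: f32,
        z: f32,
        velocity: [f32; 3],
        alive: bool,
    }

    // Struct of arrays: each field lives in its own contiguous slab, so a pass
    // over `alive` (or over positions) streams through memory linearly.
    #[allow(dead_code)]
    #[derive(Default)]
    struct Particles {
        x: Vec<f32>,
        y: Vec<f32>,
        z: Vec<f32>,
        velocity: Vec<[f32; 3]>,
        alive: Vec<bool>,
    }

    impl Particles {
        fn count_alive(&self) -> usize {
            // Touches 1 byte per particle instead of size_of::<Particle>() bytes.
            self.alive.iter().filter(|&&a| a).count()
        }
    }

    fn main() {
        println!("AoS element size: {} bytes", std::mem::size_of::<Particle>());
        let mut p = Particles::default();
        p.alive = vec![true, false, true];
        println!("alive: {}", p.count_alive());
    }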
The tools to write fast software are at our fingertips, already installed on our computers. And I have had zero success in getting people to believe that they should develop with performance in mind.
So your assertion is that developers should get in a big huddle to decide how they’re going to consume L1 between applications? Which of course no dev has control over since the OS determines what runs and when.
You can make your time in the CPU more efficient by thinking of the cache and the branch predictor, or you can say “nothing I do matters because the OS schedules things how it wants.” Up to you I guess, but I know which of those approaches performs significantly better.
My standard is that software should appear to work instantly to me, a human. Then it is fast enough. No pressing a button and waiting. That would be great.
That is probably the correct measure. If “The Promise of Computing” is ever to come true, people must never wait on computers when interacting with them.
Waiting is ok when it comes to sending batches of data to be transformed or rendered or processed or whatever. I’m talking about synchronous stuff; when I push a key on my keyboard the computer should be done with what I told it to do before I finish pushing the button all the way down. Anything less is me waiting on the computer and that slows the user down.
Businesses should be foaming at the mouth about performance: every second spent by a user waiting on a computer to do work locally, multiplied by the number of users who wait, multiplied by the number of times this happens per day, multiplied by the number of work days in a year… it’s not a small amount of money lost. Every more efficient piece of code means users can get by with lighter devices. Lambda is billed by CPU and RAM usage, and inefficient code there directly translates into higher bills. But everyone still writes code that stores a Boolean value as a 32-bit integer and where every number is 8 bytes wide.
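To put rough numbers on that last point, a tiny sketch with invented field names; the exact sizes depend on the target, but the ratio is the point:

    use std::mem::size_of;

    // "Everything is a number" style: booleans as 32-bit ints, every count an f64.
    #[allow(dead_code)]
    struct RecordWide {
        is_active: i32,
        retries: f64,
        age_years: f64,
        score_percent: f64,
    }

    // Sized to the data actually stored.
    #[allow(dead_code)]
    struct RecordTight {
        is_active: bool, // 1 byte
        retries: u8,     // realistically < 256
        age_years: u8,
        score_percent: u8,
    }

    fn main() {
        // On a typical 64-bit target this prints roughly 32 vs 4 bytes,
        // i.e. about 2 records per 64-byte cache line versus 16.
        println!("wide:  {} bytes", size_of::<RecordWide>());
        println!("tight: {} bytes", size_of::<RecordTight>());
    }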
What. The. Fuck.
People already go on smoke breaks and long lunches and come in late and leave early; do we want them waiting on their computers all of the time, too? Apparently so, because I’ve never once heard anyone complain to a vendor that their software is so slow that it’s costing money, but almost all of those vendor products are that slow.
I’m old enough that I’m almost completely sick of the industry I once loved.
Software developers used to be people who really wanted to write software, and wanted to write it well. Now, it’s just a stepping stone on the way to a few VP positions at a dozen failed startups and thousands of needlessly optimistic posts on LinkedIn. There’s almost no craft here anymore. Businessmen have taken everything good about this career and flushed it down the toilet and turned teams into very unhappy machines. And if you don’t pretend you’re happy, you’re “not a good fit” anymore and you’re fired. All because you want to do your job well and it’s been made too difficult to reliably do anything well.
> Languages like JavaScript and Python remove you so far from the CPU and the cache that even if you were thinking of those things, you can't do anything about it.
Even operating systems don't get direct access to the hardware these days. Instead a bunch of SoC middlemen handle everything however they like.
Wait…those dastardly systems architecture engineers with their decadent trusted platform modules, each with an outrageous number of kilobytes of ROM. They are the true villains of software performance?
That doesn't matter; if you make your cache usage smart and your branches predictable, the CPU will take advantage of that and your program will run faster. It is in the interest of the system and CPU designers to make sure this is the case, and it is.
If you do the things which make your code friendly to the CPU cache and the branch predictor, when it comes time for your code to run on the CPU, it will run faster than it would if you did not do those things.
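The classic demonstration of the branch-predictor half of this is summing values above a threshold over sorted versus shuffled data. A sketch (note an optimizing compiler may turn the branch into branchless code, in which case the gap shrinks or disappears):

    use std::time::Instant;

    // Same work, same data; only the *order* differs. With the branch kept,
    // sorted input makes `v >= 128` almost perfectly predictable, while
    // shuffled input makes it a coin flip the predictor keeps losing.
    fn sum_above_threshold(data: &[u8]) -> u64 {
        let mut sum = 0u64;
        for &v in data {
            if v >= 128 {
                sum += v as u64;
            }
        }
        sum
    }

    fn main() {
        // Deterministic pseudo-random bytes so the example has no dependencies.
        let mut state = 0x1234_5678_u32;
        let mut data: Vec<u8> = (0..10_000_000)
            .map(|_| {
                state = state.wrapping_mul(1_664_525).wrapping_add(1_013_904_223);
                (state >> 24) as u8
            })
            .collect();

        let t = Instant::now();
        let shuffled_sum = sum_above_threshold(&data);
        let shuffled_time = t.elapsed();

        data.sort_unstable();
        let t = Instant::now();
        let sorted_sum = sum_above_threshold(&data);
        let sorted_time = t.elapsed();

        assert_eq!(shuffled_sum, sorted_sum);
        println!("shuffled: {:?}, sorted: {:?}", shuffled_time, sorted_time);
    }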
What's your proposal for a "compromise" language between programmer productivity and performance, especially for multiple threads and CPUs? Go, Rust, a BEAM language?
I don't think the tools are the issue here; they are all tools you can do a good or a bad job with. What is lacking are the right incentives. The tech market has never been as anti-competitive as it is today. Let's repeal DMCA 1201 and go from there.
I wasn't asked for examples of software that is congruent to whatever definition you want. I was asked for a proposal of a "compromise" language, and I answered that question.
> Presumably Copilot will help introduce better code into applications than a daydreaming developer would
Copilot is trained on GitHub (and probably other Git forges without permission, because OpenAI and Microsoft are run by greedy sociopaths).
I'd wager that the majority of fleshed-out repositories on these sites contain projects written at the "too-high level" you describe. This certainly seems to be true based on how these models perform ("good" results for web development and scripting, awful results for C/C++/Rust/assembly...), so I wouldn't get your hopes up, unfortunately.
That probably plays into it as well. I have yet to see any convincing evidence that contradicts LLMs being mere pattern parrots.
My personal benchmark for these models is writing a simple socket BPF in a Rust program. Even the latest and greatest hosted frontier models (with web search and reasoning enabled!) can only ape the structure. The substance is inevitably wanting, with invalid BPF instructions and hallucinated/missing imports.
IMHO these tools are great if you know what you're doing, because you know how to smell-test the output, but a footgun otherwise.
It works great for me, but it is necessarily more a learning aid than a full-on replacement; someone's still gotta do the thinking part, even if the LLMs can cosplay "reasoning" now.
I'm also young and heavily favor simple, quality software. Age is a highly coarse proxy for one's experiences, but in this case I think it has more to do with my personality. I have enough experience in computing that I don't think I'm making demands that are unrealistic, although they are certainly unrealistic if we maintain current incentives and motives.