My theory is not that individual genius accomplishes more than a group, but that whether a field is currently more suited for individual single-brain productivity or assembly-line cooperation is a signal of whether that field is nascent, at its peak, or in decline. I genuinely think software technology has not advanced in a long time. Computers got really fast, which allowed us to build a mountain of abstractions on top, and now you can throw together web apps, and there is no shortage of companies running around with that hammer and banging on everything. But the actual boundaries of what is possible with software have not moved in a while.
Most of the software nowadays (in addition to being terribly slow and broken, by the way) feels like stuff that doesn't actually need to exist. It doesn't advance the boundaries of technology or humanity's skill tree in any way. It's a "nice to have" — it might have some social value or make some business process more efficient or whatever — but no one looks at it and goes, wow, we couldn't do that before.
Someone is going to bring up AI and LLMs, and about that, I will just say that ChatGPT was a real accomplishment... but since then, I have not seen any use cases of LLMs that represent a genuine step forward. Like smart contracts, it's a solution in desperate search of a problem, a technology in search of a breakout app. You feel like there's just no way it doesn't have wide-reaching implications, and everyone around you (especially VCs) talks about the "possibilities unlocked" — but two years in, we haven't seen a single actually interesting use case. (Again, of course LLMs are being incorporated to optimize some business process or other. But who cares?)
Meanwhile, as far as the day-to-day reality of software engineering goes, if you work professionally as an engineer, your job is more or less wrangling microservices in one form or another. If you're at a big company this is probably literal, but even if you're at a startup, the bigco transplants start doing microservice stuff basically on arrival, and a bunch of the "best practices" and frameworks and tools are meant to mimic microservices-style thinking (making things more complicated rather than less).
As a final note, I grew up being a giant fan of YC and PG's essays, so I will make a comment about the latest batch of YC startups. It seems to me that a bunch of them basically exist to make some process more efficient. But then you wonder: why does that process need to exist in the first place? For example, there was one startup helping you optimize your AWS costs — but why is software built in such a way to begin with that you end up with all these microservices and towers of abstractions and untrackable infrastructure? You look at these startups solving a problem of solving a problem... and then you get to the root problem, and the whole thing is just a giant castle in the sky. I think this describes much of the motivating thrust of software today.
Theorem provers are a rapidly developing software innovation that is invading a field that has existed since Euclid. I would consider Bitcoin an advancement of boundaries as well. Possibly even Kubernetes, VR, etc.
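To make the theorem-prover point concrete, here is roughly what machine-checked mathematics looks like in a system like Lean (an illustrative sketch I'm adding, not anything from the comment above; the lemma names are just examples):

```lean
-- Machine-checked proofs: a small trusted kernel verifies every step,
-- so a result can be accepted without re-reading the author's reasoning.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Proofs compose like ordinary programs: this reuses the lemma above,
-- which is what makes large-scale collaborative formalization workable.
example (a b c : Nat) : a + b + c = b + a + c :=
  congrArg (· + c) (add_comm_example a b)
```

The point is the workflow, not the toy lemmas: once proofs are checked mechanically, many contributors can build on each other's results without trusting each other, which is exactly the "assembly-line" mode of work being discussed.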
These boundary-pushing technologies look like a "decline," as you put it, because most of the software world has to worry about paying bills with CRUD apps instead of pushing technological boundaries. I would say this "project-managementization" of math, as Terence Tao put it, could push the boundaries of math to the point of a sort of revolution — but the sad truth is that it will lose its romantic qualities.
I'm sure there are plenty of mathematicians who enjoy the Euclid/Andrew Wiles way of doing things and will come to similar conclusions as you. It will be easy to view this shift as either a revolution or a decline, depending on whether you care more about advancing the field toward universal truths or about preserving the practice as it is.
Here's a potential analogy: AlphaGo made a major breakthrough when it beat Lee Sedol. For many of us, that was the end of it, but for professional Go players, it revolutionized the game in ways that no one outside the Go community can really appreciate. Likewise for AlphaFold and protein structure determination, where it has revolutionized a field of research — at this point you only see the change directly if you're using the tool day-to-day (in which case it may have dramatically altered the nature of your research).
For many software products I think you have a point: things like word processors are mostly just getting bloated these days. But for open-ended fields like mathematics, new tools can crack open deep problems that were unapproachable the old way. This new way of doing research may not turn out to be for everyone, but at the end of the day the field will explore previously unapproachable and undeniably interesting territory.
It might also make mathematics more accessible, and one outcome of that might be that more superficial mathematics gets done. But if it also enables more deep mathematics, then there's no point in being cynical about the former. Maybe there is a similar fallacy with respect to software development.