I would argue that the ratio of work to breakthroughs is not a form of inefficiency, but something inherent to the nature of breakthroughs.
In my opinion, a breakthrough is not the production of new knowledge; it is rather its adoption by the public (beginning with industry).
As such, the rate at which breakthroughs can emerge is bounded by factors external to the producers of breakthroughs, and these outside factors may already be the limiting ones.
Another point I would make is that what constitutes a breakthrough is not conditioned by how significant it is, only by whether it gets adopted as a change of processes or mental model. As such, more powerful tools can lead to larger leaps between breakthroughs, but not to a much higher rate of breakthroughs.
Even if tools become powerful enough to produce yesterday's year's worth of breakthroughs in a month, the general public and industry will still wait a year before adopting the new technology; they will simply see a larger jump from the previous iteration. This is in fact the case with LLMs: even on an avant-garde forum like HN, a very common opinion is "I'm waiting for things to stabilize before I adopt."
As an oversimplification, count as breakthroughs only those that come to have widespread commercial application. If we had an oracle that could produce arbitrarily many of today's breakthroughs as fast as desired, we would still be limited by our ability to put them into practice. Work must be allocated and carried out over time, and each new breakthrough requires changing processes and requires the people involved to learn new things, which takes time and energy.
I think this human resistance to change is fundamentally what determines the achievable rate of breakthroughs. As the name implies, a breakthrough is a rupture. It is highly inefficient to upend one's methods every month, and it can be outright impossible to keep up with all the theoretical advances before they have crystallized and been digested into accessible expositions, unless doing so is one's profession (i.e. all of one's time is devoted to it).
In my applied-sciences field, industry lags some 20 years behind research, and we ourselves are perhaps a century late to some theoretical advances (I can think of one off the top of my head). At the lowest level, resistance to change comes from the fact that carrying an idea to a working prototype takes much longer than having the idea in the first place. Someone who constantly hops to new ideas is therefore guaranteed to make no progress; by necessity, some stubbornness is selected for.

Once things are fleshed out (a multi-year endeavour), you still have to convince the broader community (same subfield, but not direct collaborators) that your idea has merits surpassing theirs, a problem best solved one retirement, and one past-mentee hire, at a time. And ultimately you have to convince industrial actors to dump millions into industrializing these novel methods, when none of their competitors have been doing it (hence it is urgent to wait), when the viability (robustness, scalability) of the idea remains to be seen, and when the benefits must be weighed against the risk that their practitioner user base won't grasp the full scope of the progress or see the need to invest time in learning new things and devising new processes (all of which costs time and money, and makes them dependent on this pioneering supplier). And, lastly, there are three other approaches claiming to be better alternatives.
I don't see a way around this pipeline. More powerful tools can indeed accelerate some of its stages, but some delays are incompressible: ideas need time to be diffused and understood, all the more so if they are advancing at the rapid pace enabled by powerful AIs.