Yeah, where do people think technology growth comes from? The GOP is all "growth, growth, growth will save us," but hey, let's poison the well from which it flows.
All AI growth today is actually from academia from 20 years ago.
Growth comes from many sources. The supply-side economics wing of the GOP would claim that lower taxes and smaller, less intrusive government will allow for a higher private sector growth rate. There may be some truth to that, although the effects are probably limited compared to the development of disruptive new technologies.
Just yesterday I was talking with some friends about the disaster that neoliberalism has been.
I see billionaires as "water-balloon" (wealth) hoarders, and I see taxes on the rich as thorns on bushes. If politicians ever wanted to make "trickle down" work, we'd need thornier bushes, and to make it impossible for rich people to avoid going through them.
But the whole deregulation craze has made it so that the billionaires don't even need people to help them protect their "water-balloons"...
Debt has always existed, and "LLMs are making it worse."
Yes, I think the point is, LLMs are making it a lot worse.
And compounding that, in 10 years no senior devs will have been created, so nobody will be around to fix it. That's the extreme case, of course; there will be devs, they'll just be under-water, piled on with trying to debug the LLM stuff.
>they'll just be under-water, piled on with trying to debug the LLM stuff.
So by that theory, the senior devs of those days will still be able to command large salaries if they know their stuff, specifically how to untangle the mess of LLM code.
Good point. Maybe it will circle around, and the few devs who like to dig through this stuff will be in high demand. It would be like earlier cycles when, for example, only a few people really liked working with bits and Boolean logic, and they were paid well.
I could also argue that 20 years ago EJBs made it a lot worse, ORMs made it massively worse, heck Rails made it worse, and don't even get me started on JavaScript frameworks, which are the epitome of dead programs and technical debt. I guarantee there were assembly programmers shouting about Visual Basic back in the day. These are all just abstractions, as is AI IMO, and some are worse than others.
If and when technical debt becomes a paralyzing problem, we'll come up with solutions. Probably agents with far better refactoring skills than we currently have (most are kind of bad at refactoring right now). What's crazy to me is how tolerant the consumer has become. We barely even blink when a program crashes. A successful AAA game these days is one that only crashes every couple hours.
I could show you a Java project from 20+ years ago and you'd have no idea wtf is going on, let alone why every object has 6 interfaces. Hey, why write SQL (a declarative, somewhat functional language, which you'd think would be in fashion today!), when you could instead write reams of Hibernate XML?! We've set the bar pretty low for AI slop.
An abstraction is somewhat reversible: I can take an EJB definition and then rummage around in the official J2EE & vendor appserver docs & identify what is supposed to happen. Similarly, for VB there is code that the IDE adds to a file that's marked "Don't touch" (at least for the early versions, ISTR VB6 did some magic).
Even were I to store the prompts & model parameters, I suspect I wouldn't get an exact duplicate of the code by running the LLM again.
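That irreproducibility lives in the sampling step itself: decoding draws each token from a temperature-scaled distribution, so the same prompt and parameters can yield different outputs unless every random seed (and the whole inference stack) is pinned. A toy sketch of that step, with made-up logits and a hypothetical `sample_next_token` helper, not any real inference API:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=random):
    """Sample one token id from logits via temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.9, 0.5, 0.1]  # hypothetical next-token scores

# Same "prompt" (same logits), same temperature, two unseeded runs:
run_a = [sample_next_token(logits, rng=random.Random()) for _ in range(20)]
run_b = [sample_next_token(logits, rng=random.Random()) for _ in range(20)]
# run_a and run_b will usually differ even though every input matched.

# Only pinning the seed makes the draws reproducible -- and real serving
# stacks add further nondeterminism (batching, GPU kernels, model updates).
rng_a = random.Random(42)
rng_b = random.Random(42)
fixed_a = [sample_next_token(logits, rng=rng_a) for _ in range(20)]
fixed_b = [sample_next_token(logits, rng=rng_b) for _ in range(20)]
```

So even "store the prompt and parameters" only gets you repeatability in a world where the seed and the entire decoding pipeline are frozen too.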
I see what you mean. The abstractions I mentioned are pretty much just translations / transformations (immutable) on their own. Keep in mind that most of these are also tied to a version (and versioning is not always clear, nor is the documentation around that version). The underlying bytecode translation could also change even without a language or framework version change.
Also, as soon as a human is involved in implementation, it becomes less clear. You often won't be able to assume intent correctly. There will also be long lived bugs, pointer references that are off, etc.
I concede that the opacity and inconsistency of LLMs is a big (and necessary) downside though for sure.
In which universe is an abstraction reversible? You can ask 10 people around you to make you a sandwich. You've abstracted away the work, but I'm willing to bet $10 that each person will not make the same sandwich (assuming an assortment of meats, veggies, and sauces)...
Yeah, maybe it is garbage. But it is still another milestone; if it can do this, then it probably does OK with the smaller things.
This keeps incrementing from "garbage" to "wow this is amazing" at each new level. We're already forgetting that this was unbelievable magic a couple years ago.
The problem will be the timing mismatch between when bots can replace humans and when humans actually leave the workforce. If bots come too soon, humans are out of jobs; if bots come too late, humans are overworked and economies crash.
Maybe when we get updating models. Right now they are trained and released, and we use that static model with a context window. At some point, when we have enough processing power for models that are always updating, that would be plastic. I'm supposing.
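The static-vs-plastic distinction can be caricatured in a few lines: a released model's weights never change between calls (only the context passed in does), while an always-updating model folds feedback back into the weights themselves. Everything here is hypothetical, a "model" reduced to a single weight for illustration:

```python
class StaticModel:
    """Trained once, then frozen: only the context window varies per call."""
    def __init__(self, weight):
        self.weight = weight  # fixed at release time

    def respond(self, context):
        return self.weight * sum(context)

class PlasticModel:
    """Hypothetical always-updating model: each interaction nudges weights."""
    def __init__(self, weight, lr=0.1):
        self.weight = weight
        self.lr = lr

    def respond(self, context, feedback=0.0):
        out = self.weight * sum(context)
        # Online update: feedback changes the model itself, not just context.
        self.weight += self.lr * feedback
        return out

static = StaticModel(weight=1.0)
plastic = PlasticModel(weight=1.0)

static.respond([1, 2])
static.respond([1, 2])
plastic.respond([1, 2], feedback=0.5)
plastic.respond([1, 2], feedback=0.5)
# static.weight is still 1.0; plastic.weight has drifted to 1.1.
```

The engineering question in the comment above is exactly whether we can afford that weight update on every interaction at scale.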