There are two main threads I keep coming back to when thinking about long-term AI and why so many investors/statespeople are all in:
1) the labor angle: many execs have stated plainly that the goal is to replace a double-digit percentage of their workforce with AI of some sort. Human wages being what they are, the savings there are meaningful and seemingly worth the gamble.
2) the military angle: the future of warfare seems to be autonomous weapons/vehicles of all sorts. Given the winner-takes-all nature of warfare, any edge you can get there is worth it. If underinvesting in AI means the US gets steamrolled by China in the Pacific (and other countries get steamrolled by whomever China sells/lends its tech to), then that seems to justify almost any investment, no matter how ridiculous the current returns look.
First of all, warfare is not winner-takes-all. That's a naive, video-game conception of it. Clausewitz's famous line that war is "the continuation of politics by other means" is much more accurate.
When armed conflicts happen, it's because the belligerents have specific objectives, and very rarely is that objective "the total obliteration of the enemy." It's usually something more concrete: territory, access to natural resources, the creation of a vassal state that can be exploited, or sometimes something purely ideological (nationalist notions growing into the idea that a people are entitled to an empire).
Anyhow, the point is that warfare is not a winner-takes-all game of obliteration.
But the idea that the future of warfare will be all autonomous weapons also massively overweights drone hype, and ignores that a lot of the fundamentals haven't changed since the days of Bismarck, despite the rise of drones, computer-vision algorithms, etc.
A simple example is Ukraine, where the battlefield is essentially defined by the combination of traditional artillery, mines and similar fortifications, and simple observation drones that don't have any particularly complex AI. Together these create a 20 km "no-go zone" that has nothing really to do with autonomy.
In fact, the more AI-centric loitering munitions provided by US/EU firms have performed quite poorly in Ukraine, which is why Ukrainian forces favor much simpler implementations: hobby FPV drone components, remote piloting over GSM modems, etc.
Will these technologies play an increasing role in future conflicts? Of course. But they're not going to completely upend things, or make more traditional platforms obsolete.
Heck, another example: simple hand-coded AIs have been better than humans in dogfights for decades now. And it has mattered exactly zero for real-world conflicts, because what fighter pilots actually do isn't a Top Gun movie.
Warfare isn't really a winner-takes-all affair. Unless you absolutely crush your enemy, most warfare ends in a stalemate of one form or another, with the victor gaining an advantage over the loser. In many cases a medium tech advantage can be countered with better logistics, a willingness to trade losses, or the quality of weapons.
On 1, the railroads had a better case than the AI companies do. They really did allow dispersed industry to integrate, did multiply their countries' GDP by a sizeable amount, and went bankrupt anyway.
If those companies replace a low double-digit percentage of developers, and capture their entire salary, it's still not enough to reach the depreciation numbers in the article.
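A back-of-envelope version of that claim (every input here is an illustrative assumption of mine, not a figure from the article):

```python
# Back-of-envelope: value captured by replacing developers with AI.
# All inputs are illustrative assumptions, not figures from the article.

developers = 4_000_000    # assumed: rough count of US software developers
avg_salary = 130_000      # assumed: average developer salary, USD/yr
replaced_fraction = 0.15  # "a low double-digit percentage"

# Upper bound: the AI vendor captures 100% of every replaced salary.
captured = developers * avg_salary * replaced_fraction
print(f"Captured salaries: ${captured / 1e9:.0f}B/yr")
```

Even under the generous assumption that vendors capture the entire salary of every replaced developer, the result is on the order of tens of billions per year, which is the point being made against capex/depreciation figures several times that size.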
On 2, that could justify it... except that we are talking about fucking LLMs. What does anybody expect LLMs to do in a war that will completely obliterate some country?
I think the AI angle for warfare is overhyped. Most of the autonomous drone stuff happening in Ukraine is not running on bleeding-edge nodes; it's Radxa SBCs with process nodes from 10 years ago.
Right, the vast majority of compute for autonomous warfare will be at the edge. The latency/jammability of having to communicate with a massive datacenter halfway around the world running bleeding-edge models is a nonstarter. Not to mention that these models are overkill for something like an autonomous suicide drone that just needs a relatively simple CNN trained to recognize enemy uniforms/materiel/buildings/etc.
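To put rough numbers on "overkill": a sketch comparing the parameter count of a small image-classifier CNN against a datacenter-scale LLM. The layer shapes below are hypothetical, chosen only to illustrate the difference in scale.

```python
# Rough parameter count of a small CNN classifier vs. a 7B-parameter LLM.
# The CNN architecture below is hypothetical, picked only to show scale.

def conv_params(in_ch, out_ch, k=3):
    """Weights plus biases for one k x k convolution layer."""
    return in_ch * out_ch * k * k + out_ch

cnn = (
    conv_params(3, 32)       # RGB input -> 32 feature maps
    + conv_params(32, 64)
    + conv_params(64, 128)
    + conv_params(128, 128)
    + 128 * 10 + 10          # small linear head, 10 target classes
)

llm = 7_000_000_000          # a "small" modern LLM

print(f"CNN params: {cnn:,}")
print(f"LLM is roughly {llm // cnn:,}x larger")
```

A classifier in this size range runs comfortably on a decade-old SBC at the edge, which is the point: the model that needs to ride on the drone is four to five orders of magnitude smaller than the models driving datacenter buildouts.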
> many execs have stated plainly that the goal is to replace a double-digit percentage of their workforce with AI of some sort
Even if we grant that this is possible, have any of these execs actually thought through what happens when their competitors also replace large chunks of their workforce with AI and then begin undercutting them on price? The idea that "our prices will stay exactly the same, but our salary costs will go to zero and become pure profit instead!" is delusional even if AI can actually replace large numbers of people, which itself is quite doubtful.
Presumably, if your competitors get to zero workers before you do, they win, but in practice that's unlikely to work. Most companies would be better off buying mature tech once clear savings opportunities materialize.
This is the main thing that's been bugging me about the AI discussion. People seem to forget that capitalism is competitive, and if everyone gains the same advantage, then it's not an advantage. If the cost of labor goes down, it means companies will either need to lower their prices or increase their investment in other areas (e.g. hiring even more people now that they're cheaper).
Unless you're a monopoly, I don't see how AI will lead to these massive cost savings everyone is hoping for.
> Unless you're a monopoly, I don't see how AI will lead to these massive cost savings everyone is hoping for
"If the cost of labor goes down" and "companies...lower their prices," that means cost savings for every one of their customers. If they "increase their investment in other areas," that means lower costs of capital for all of their investments.
You're arguing that the gains from AI don't look likely to be concentrated. That's good! It's not an argument that AI won't be economically revolutionary (and value-adding).
> AI companies are competing with each other for that revenue so total spend will go down
You're describing elasticity. None of this is particularly novel. If there is sufficient demand, the thesis is met: returns may not be astronomical, but they'll be positive for at least some of the major players. (Those with the most efficient operations or ability to command a price premium.)