That's a surprisingly large overhead. I've not measured that large an impact on AMD, particularly for compute-heavy workloads.
Did you profile at all? And have you checked whether the workload is compute-bound? If it's memory- or IO-bound, the slowdown can come from other virtualization overheads, such as memory encryption.
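One quick way to separate the two cases is to compare a compute-bound kernel against a bandwidth-bound one, both on bare metal and inside the guest. A minimal sketch, assuming NumPy is available (the sizes are arbitrary):

    # Run on bare metal and inside the VM, then compare the ratios.
    # If the bandwidth-bound test degrades much more than the
    # compute-bound one, suspect memory-side overheads (e.g. SEV
    # memory encryption) rather than raw compute virtualization cost.
    import time
    import numpy as np

    def bench(fn, repeats=5):
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            fn()
            best = min(best, time.perf_counter() - t0)
        return best

    a = np.random.rand(2048, 2048)
    b = np.random.rand(2048, 2048)
    big = np.random.rand(200_000_000)  # ~1.6 GB, far larger than any cache

    print("compute-bound:", bench(lambda: a @ b))
    print("memory-bound: ", bench(lambda: big.copy()))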
The manufacturing cost is emphatically not only 1% of the total development cost. Particularly for GPUs, the high bandwidth memory and manufacturing costs are a significant portion of the product price.
> The manufacturing cost is emphatically not only 1% of the total development cost.
I have no idea what the manufacturing cost of an 800 mm^2 die is, but I am sure it is lower than the development cost.
> Particularly for GPUs, the high bandwidth memory and manufacturing costs are a significant portion of the product price.
HBM is not manufactured by the GPU vendor; it is an off-the-shelf component that AMD buys like any other company can. Thus, the cost of HBM is tallied in the BOM and integration costs (interposer, packaging, etc.).
An 800 mm^2 die would cost roughly $300-350 according to [1]. That's the Taiwan price, and for N4. It doesn't include the memory or the package. The silicon cost for N3 is close to 2x.
Well, it seems a 20% increase for US-based manufacturing on a base cost of $350 is $70. The MSRP of the 5090 is $1,999, so onshoring would result in a 3.5% increase in MSRP, which is nothing.
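The arithmetic checks out (all figures are the ones quoted above; the 20% premium is the assumption under discussion):

    # Back-of-the-envelope check; die cost and MSRP are quoted above.
    die_cost = 350.0         # upper end of the N4 Taiwan estimate, USD
    onshore_premium = 0.20   # assumed US-manufacturing cost premium
    msrp = 1999.0            # RTX 5090 MSRP, USD

    extra = die_cost * onshore_premium         # $70
    print(f"MSRP impact: {extra / msrp:.1%}")  # ~3.5%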
It’s misleading to cite two government-funded supercomputers as evidence that NVIDIA lacks monopoly power in HPC and AI:
- Government-funded outliers don’t disprove monopoly behavior. The two AMD-powered systems on the TOP500 list—both U.S. government funded—are exceptions driven by procurement constraints, not market dynamics. NVIDIA’s pricing is often prohibitive, and its dominance gives it the power to walk away from bids that don’t meet its margins. That’s not competition—it’s monopoly leverage.
- Market power isn't disproven by isolated wins. Monopoly status isn’t defined by having every win, but by the lack of viable alternatives in most of the market. In commercial AI, research, and enterprise HPC workloads, NVIDIA owns an overwhelming share—often >90%. That kind of dominance is monopoly-level control.
- AMD’s affordability is a symptom, not a sign of strength. AMD's lower pricing reflects its underdog status in a market it struggles to compete in—largely because NVIDIA has cornered not just the hardware but the entire CUDA software stack, developer ecosystem, and AI model compatibility. You don't need 100% market share to be a monopoly—you need control. NVIDIA has it.
In short: pointing to a couple of symbolic exceptions doesn’t change the fact that NVIDIA’s grip on the GPU compute stack—from software to hardware to developer mindshare—is monopolistic in practice.
I do not find her critique of argument #2 compelling [1]. Monetization of AI is key to economic growth. She's focused on the democratic aspects of AI, which frankly aren't pertinent. The real "race" in AI is between economic and financial forces, with huge infrastructure investments requiring a massive return on investment to justify the expense. From this perspective, increasing the customer base and revenue of the company is the objective. Without this success, investment in AI will drop, and with it, company valuations.
The essay attempted to mitigate this by noting OAI is nominally a non-profit. But it's clear the actions of the leadership are firmly aligned with traditional capitalism. That's perhaps the only interesting subtlety of the issue, but the essay missed it entirely. The omission could not have been intentional, because it provides a complete motivation for item #2.
[1] #2 is 'The US is a democracy and China isn’t, so anything that helps the US “win” the AI “race” is good for democracy.'
That is, "the ends justifies the means"? Yep, seems like we are already at war. What happened to the project of adapting nonzero sum games to reality??
The U.S. may be a nominal democracy, but the governed have no influence over the oligarchy. For example, they will not be able to stop "AI" even though large corporations steal their output and try to make their jobs obsolete or more boring.
Real improvements are achieved in the real world, and building more houses or high speed trains does not require "AI". "AI" will just ruin the last remaining attractive jobs, and China can win that race if they want to, which isn't clear yet at all. They might be more prudent and let the West reduce its collective IQ by taking instructions from computers hosted by mega corporations.
> And when I reach the part where the AI, having copied itself all over the Internet and built robot factories, then invents and releases self-replicating nanotechnology that gobbles the surface of the earth in hours or days, a large part of me still screams out that there must be practical bottlenecks that haven’t been entirely accounted for here.
This is the crux of the issue. There are simply no clearly articulated doom scenarios that don't involve massive leaps in capabilities, explained away by the 'singularity' being essentially magic. The entire approach is a doomed version of deus ex machina.
It also seems quite telling that the traditional approach focuses on exotic technologies, such as nanotech, and not on ICBMs. That's also magical thinking.
We literally spent trillions in the past century building doomsday machines -- hydrogen bombs and ICBMs -- to literally, intentionally destroy humanity as part of the MAD defensive strategy in the Cold War. That stuff is largely still out there. If anything suddenly kills humanity, that's high on the list of possibilities.
The other huge existential risk is someone intentionally creating a doomsday bug. Think airborne HIV with a long incubation period, or an airborne cancer causing virus. Something that would spread far and wide and cause enough debilitation and death that it leads to the collapse of civilization, then continues to hang around and kill people post-collapse (with no health care) to the point that the human race is in long term danger of extinction.
Both of those are extremely plausible to the point that the explanation for why they haven't happened yet is "nobody with the means has been that evil yet."
>There are simply no clearly articulated doom scenarios that don't involve massive leaps in capabilities
Haven't there already been a couple of massive leaps in AI capabilities (AlexNet in 2012, then transformers in 2017)?
Is it not the publicly-stated goal of the leaders of most of the AI labs to make further massive leaps?
Aren't drastic improvements what happens in fields that humanity is just starting to understand?
Wasn't there for example a drastic improvement in humanity's ability to manufacture things starting in 1750 (which led to a massive increase in fossil-fuel use, which led to climate change and other adverse effects like "killer smog")?
This. The Ivy Leagues will still be here in 3.5 years when Trump is not. The EU announced 500 million euros for research, while Harvard alone has an endowment of almost $60 billion.
That endowment is not their budget for scientific research. With that €500 million, the EU basically matched what Harvard spends of its own money.
> In fiscal year 2024 (FY24), research at Harvard was supported by more than $1 billion of sponsored funds from federal, foundation, and industry sponsors, alongside an additional $526 million funded directly by the University.
And the kicker is, they're actually planning to exceed Harvard's $526 million investment by approximately three orders of magnitude.
> Von der Leyen announced the 500 million euros ($566.6 million) incentive package and said she also wanted EU member states to invest 3% of gross domestic product in research and development by 2030.
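A rough sanity check on the "three orders of magnitude" claim (the EU GDP figure is my assumption, and this ignores the EUR/USD difference):

    # Rough sanity check; EU GDP (~17 trillion euros) is an assumed figure.
    eu_gdp = 17e12              # euros, approximate
    rd_target = 0.03 * eu_gdp   # 3% of GDP => roughly 510 billion euros/year
    harvard = 526e6             # Harvard's own-money research spend (quoted above)
    print(f"{rd_target / harvard:,.0f}x")  # ~970x, i.e. ~three orders of magnitude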
> If governments haven't learned to properly tax corporations already, what makes you think they'll suddenly figure it out now?
"Ground-breaking new EU rules come into effect today introducing a minimum rate of effective taxation of 15% for multinational companies active in EU Member States"
People said that the last time he was elected. Be wary of treating him as an aberration. He’s not magic, he’s exactly what people voted for. What makes you think they won’t vote for the same again with a new face?
He is unusual in that he's boosted by decades of pop-culture fame. Other MAGA candidates haven't been able to recreate the same level of success. You're right that there's a lot of stupidity that won't magically go away when he does, but the environment won't be as hard for an adult (of either party) to win in.
> Amazon gets a lot of flak for totally bungling their internal AI model development, squandering massive amounts of internal compute resources on models that ultimately are not competitive, but the custom silicon is another matter
Juicy. Anyone have a link or context for this? I hadn't heard of this reception to Nova and related models.
I think Nova may have changed things here. Prior to Nova their LLMs were pretty rubbish - Nova only came out in December but seems a whole lot better, at least from initial impressions: https://simonwillison.net/2024/Dec/4/amazon-nova/