Hacker News | new | past | comments | ask | show | jobs | submit | login

On the flip side (and where most institutional investors are mentally) is the view that if OpenAI is ever to achieve AGI, it must invest nearly a trillion dollars toward that effort. We all know LLMs have their limitations, but the next phase of AI growth is going to come from OpenAI, Anthropic, Google, maybe even Microsoft, and not some stealth startup. In other words, only Big Tech can get us to AGI, because of the sheer scale of investment required, not a traditional Silicon Valley garage startup looking for its Series A. So institutional investors have no choice but to keep throwing money at Big Tech hoping for the Big Payoff, rather than investing in VC funds like they did 10 years ago.

AMD did this deal because it's effectively offering financing to OpenAI, which doesn't have access to capital markets the way AMD does. So AMD is selling off shares of its own stock to finance OpenAI's purchase of billions of dollars' worth of GPUs. And the trick appears to be working: the stock is up 30% today, meaning the deal has paid for itself and then some.
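The "paid for itself" claim can be checked with back-of-envelope arithmetic. Only the 30% move comes from the comment; every other input below is a hypothetical placeholder, not an actual deal term:

```python
# Back-of-envelope on the "paid for itself" claim.
# Only the 30% pop is from the comment; the other inputs are made up.
market_cap_before = 270e9   # hypothetical pre-deal market cap, USD
pop = 0.30                  # today's ~30% move, per the comment
warrant_fraction = 0.10     # hypothetical dilution from shares/warrants granted

# Value added to the company by the pop, vs. value handed over at the new price.
value_created = market_cap_before * pop
cost_of_grant = market_cap_before * (1 + pop) * warrant_fraction

print(value_created > cost_of_grant)  # under these inputs, the pop exceeds the grant
```

Under these (made-up) inputs the market-cap gain is roughly twice the value of the equity granted, which is the sense in which the deal "pays for itself"; with a smaller pop or a larger grant, the inequality flips.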



That “only big tech can solve AGI” bit doesn’t make sense to me - the scale argument was made back when people thought just more scale and more training was gonna keep yielding results.

Now it seems clear that what’s missing is another architectural leap like transformers, likely many different ones. That could come from almost anywhere? Or what makes this something where big tech is the only potential source of innovation?


Yup. LLMs can get arbitrarily good at anything with enough RL, but RL produces spiky capabilities, and getting LLMs arbitrarily good at things they're not designed for (like reasoning, which is absurd to do in natural language) is very expensive due to the domain mismatch, as we're seeing in real time.

Neurosymbolic architectures are the future, but I think LLMs have a place as orchestrators and translators from natural language -> symbolic representation. I'm working on an article that lays out a pretty strong case for a lot of this based on ~30 studies; hopefully I can tighten it up and publish soon.
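The translator pattern described above can be sketched in a few lines: a language model (stubbed here with canned outputs, no real model call) maps a natural-language question to a symbolic expression, and a deterministic engine does the actual reasoning. All names are hypothetical:

```python
# Hypothetical sketch of the "LLM as translator" pattern: the model only
# translates natural language into a formal expression; a symbolic engine
# evaluates it. The LLM is stubbed with canned outputs for illustration.

def llm_translate(question: str) -> str:
    # Stand-in for an LLM call; a real system would prompt a model to emit
    # a formal expression instead of answering in free text.
    canned = {
        "What is 12% of 250?": "250 * 12 / 100",
        "Sum the integers from 1 to 10.": "sum(range(1, 11))",
    }
    return canned[question]

def symbolic_solve(expr: str):
    # The symbolic engine: here just Python's evaluator over a whitelisted
    # namespace; a real system might use a CAS, SAT/SMT solver, or prover.
    return eval(expr, {"__builtins__": {}}, {"sum": sum, "range": range})

def answer(question: str):
    return symbolic_solve(llm_translate(question))

print(answer("What is 12% of 250?"))          # 30.0
print(answer("Sum the integers from 1 to 10."))  # 55
```

The point of the split is that the arithmetic never happens inside the model: the LLM's only job is translation, and correctness comes from the deterministic back end.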


The barrier to entry is too high for traditional SV startups or a group of folks with a good research idea like transformers. You now need hundreds of billions, if not trillions, to get access to compute. OpenAI alone has cornered 40% of global DRAM output. This isn't 2012, when you could walk into your local Best Buy, get a laptop, open an AWS account, and start a SaaS over a weekend. Even the AI researchers themselves are commanding 7- and 8-figure salaries that rival NFL players'.

At best, they can sell their IP to BigTech, who will then commercialize it.


Sorry I still don’t understand.

Are you saying you disagree that a new architectural leap is needed, and that just more compute for training is enough? Or are you saying a new architectural leap is needed, and that those new architectures will only be trainable with insane amounts of compute?

If the latter, I don't understand how you could know that about an innovation that hasn't been made yet.


I'm saying it is highly likely that the next leap in AI technology will require massive amounts of compute, on the order of tens of billions of dollars per year. I'm also saying that only a small number of companies have access to that level of compute (or the financial capital to get it).

In other words, it is MORE likely that an OpenAI/Google/Microsoft/Grok/Anthropic gets us closer to AGI than a startup we haven't heard of yet, simply because BigTech has cornered the market and has a de facto monopoly on compute itself. Even if you had raised $10 billion in VC funding, you literally could not buy the GPUs, because there is not enough manufacturing capacity in the world to fill your order. Investors know this, so capital is flowing to BigTech rather than VC funds, which creates the cycle of BigTech getting bigger and squeezing out VC money for startups.


If it comes from anywhere else but needs a lot of capital to execute, big tech will just acquire them, right? They'll have all the data centers and compute contracts locked up, I guess.


> And the trick appears to be working since the stock is up 30% today, meaning it has paid for itself and then some.

It's a bubble. The tricks keep working until they suddenly don't, and then all the prior tricks unwind themselves.


No amount of investment is going to make AGI just appear. It's looking more and more like current architectures are a dead end, and then it's back to the AI drawing board, just like the past 30 years.


There's a phrase for this: financial engineering.


The difference this time is that it's global coordinated collusion, and it's not just the superwealthy, it's states that are willing to go all in on this. If you thought the banks were too big to fail, the result here is going to be a nationalization of AI resources and doubling down.


In other words they are stealing capital from the rest of the economy. Starving it.



