If inference costs 17x revenue you'd better raise prices because bringing in customers is just going to increase costs. If training costs 17x revenue you'd better bring in some customers, but you might be alright in the long term if you've trained something useful.
There's nothing especially remarkable about taking some risk - everything from drug discovery to drilling for oil involves large up-front costs with uncertain returns.
It sounds a bit like Cognition Labs can afford to take a few losing bets costing tens of millions (based on the $2bn valuation), but investors seem to be speculating that they'll find something that works in the end.
The portion of the article the previous “17x” title referenced:
“In a presentation earlier this month, the venture-capital firm Sequoia estimated that the AI industry spent $50 billion on the Nvidia chips used to train advanced AI models last year, but brought in only $3 billion in revenue.”
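For what it's worth, that quote appears to be where the figure in the title comes from; a trivial sanity check (assuming those two numbers are indeed the ones behind the headline):

    # Sequoia estimate quoted above: ~$50B spent on Nvidia chips vs. ~$3B in revenue
    print(50 / 3)  # ≈ 16.7, i.e. roughly the "17x" in the submitted title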
What's wrong with them buying you coding assistants? Sounds to me like more time on hand to prolong those movie nights.
In seriousness, the upside of code assistants is increasingly clear. And maybe the bet that these will continue to improve quadratically is worth taking if you're a VC. In fact, PG argues that buying into hype cycles is the default: https://paulgraham.com/herd.html
> So far, the utility on existing codebases is less than zero.
I'm curious what definition of "Utility" you are going by here - at the most basic level of "smarter, more context-aware autocomplete", does it have zero or negative value to a developer? Would you disagree that even an experienced developer could save some amount of time, at the current state of the technology, even if only at the stages where you're just writing code you already have a pretty clear concept of in your head?
> Sounds to me like more time on hand to prolong those movie nights.
Unfortunately that's not how things work under capitalism. To the extent that coding assistants make a difference in productivity (in my experience quite limited gains), that will just mean that you get more work done for the same time and salary.
Even amortized over a few years, they clearly expect to be able to raise prices soon. If Microsoft stops laundering revenue through OpenAI, and as Stability switches away from free stuff, the best AI tools are about to get a lot more expensive.
Raise prices on what? There are barely any revenue-generating AI products with a line of paying customers. I look around and I still don’t see anything. At the end of the day there have to be viable business models at the end of the line to support all this, otherwise this too is just a house of cards.
I think they will have to build actual products instead of just making a model available and having customers figure out something useful to do with it.
My understanding is that the APIs are profitable, and the best alternative (spinning up Mixtral on a cloud provider) isn't that expensive. So there's some downward pressure there.
I expect ChatGPT is priced right where Anthropic and OpenAI want. It's generating cash, and if you're a heavy user they can rate-limit you or offer a higher tier.
Nah, it’s the opposite. At the moment there’s a lot of low-hanging fruit: AI companies can make money from the difference between GPT-4 and free models.
Once we get free models that are as good as GPT-4, while there will still be a market for a “GPT-5”, the current applications will experience serious price competition.
It doesn’t matter, is what I’m saying, if you can run Llama3/4 on commodity hardware.
If I can fulfill my objectives with a commodity model that has GPT-4 capabilities, then I’m not going to pay any $$$ for GPT-5, because my needs are already met. So OpenAI will lose customers at the same time as gaining them.
The price at the top end will go up, but the price at the bottom end (which is currently the top end) will fall.
I think Meta is the last major player that's only releasing free generative AI tools, and so has no incentive to kneecap their best free stuff to avoid cannibalizing their own products. AI companies will have to build actual products on top of their models to produce more value, so I want to see if Meta will release whole free products to keep the pressure on. Maybe "Segment Anything" is already a step in that direction.
Hopefully until then we will have stuff that runs easily on home computers, without requiring specific hardware or hardware from specific vendors. Ideally with the ability to let the models relearn from chosen sources of knowledge.
Your first concern makes no sense. You can have specific hardware that is widely available in home computers. In fact, you soon won't be able to buy a computer without it.
Even that isn't going to hold forever, as Nvidia's current valuation assumes that demand for their chips will only keep growing at the same rate or more.
How much revenue does Cognition Labs have to justify this valuation?
Even if we forecast this, Devin is using GPT-4. That is going to be very expensive for Cognition Labs given the number of API calls / credits they are making, and it will be a while before they can roll their own model that surpasses GPT-4, for Devin to 'replace' developers.
Otherwise, this tells us that we are at the very peak of another VC-fuelled AI bubble, which will end with lots of losers unable to generate meaningful revenue and the VCs pushing and running this startup into the ground.
Tangent, one of the funnier stats I've heard recently: the price/revenue ratio for Nvidia stock is about 37; the price/revenue ratio for Truth Social is over 1,000.
Does this surprise anyone with a modicum of intelligence?
In the past 40 years we went from:
1. PCs (real productivity gain), as now every home and office could afford a computer running important things like spreadsheets and word processors.
To
2. The world wide web. Huge economic and productivity gain from making information acquisition and communication cheaper and more readily available. The apogee of the world wide web was probably in the early 2000s. Google search worked. You could find what you were looking for. eBay and Amazon worked.
To accelerating bullshit based on advertising.
3. Facebook, Google SEO spam, Twitter, Reddit. An increasing number of programming frameworks, all of negligible marginal productivity and economic value.
4. Cryptocurrency. For some reason, governments worldwide turned a blind eye to the creation of private tokens whose only real use cases are speculation (gambling, a distributed Ponzi), money laundering, and/or other black- and grey-market activities of negligible or negative real economic value.
To now
5. "AI" trained on Reddit posts, Wiki articles, stack posts, and Google SEO blog spam. "AI" is basically a glorified, hallucinating chat bot at this stage. Nothing like real intelligence or capable of producing trustworthy reliable output at par with a human expert or a Google search from 20 years ago, before the degradation.
The sigmoid curve strikes again.
Silicon Valley made its name bringing the world microprocessors, PCs, and the commercial WWW. Since then the ratio of bullshit to real innovation has increased at an accelerating rate. Eventually there will be a profound economic restructuring of the area, more in line with reality.
The only reason it hasn't happened yet, from 2009 to the present, while bullshit proliferated, was low interest rates and too much money chasing gains with delusional business propositions.
I don't think anyone is questioning the impact and future of ML/AI products and services.
But as things are right now, the only AI companies with any serious impact are the very companies that train and release the huge and extremely expensive models.
Everyone else seems to be a layer on top of those products.
I'm getting immediate flashbacks to the era of "Google killed my startup/product", where Google could simply either acquire companies or implement its own solutions, which essentially killed off the smaller competition.
But seriously, most of that stuff is completely useless for anything but machine learning tasks, which you'll find some hobbyists for. But for everything that is floating around right now? As a hobbyist you need one GPU.
17x is a big hurdle to overcome. Likely:
- models will get a lot more efficient for storing and retrieving information
- the marginal value created with this same hardware will increase
- the market price for this hardware will continue to fall, in line with historical norms
I expect this to set a floor on the crash.
We will continue to get better and better predictions at more affordable prices.
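Here's a back-of-the-envelope sketch of why those three trends could close the gap. Every rate below is a hypothetical assumption for illustration, not a figure from the article:

    # Purely illustrative: hypothetical rates, not numbers from the article.
    ratio = 17.0          # starting hardware-spend-to-revenue ratio from the headline
    cost_factor = 0.5     # assume cost per unit of output halves each year (efficiency + cheaper hardware)
    revenue_factor = 1.5  # assume revenue per unit of compute grows 50% per year

    years = 0
    while ratio > 1.0:
        ratio *= cost_factor / revenue_factor
        years += 1

    print(years, round(ratio, 2))  # prints "3 0.63": break-even in ~3 years under these made-up rates

The real dynamics are obviously messier (training vs. inference spend, model lifetimes, competition), but it shows how compounding efficiency gains plus growing marginal value could put a floor under a crash.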
I'm really excited for it. It's clear to me that the AI proponents are psychopaths trying to do scary and dangerous things to prove their tech is powerful. They should do something useful, like dig dirt.
For anyone confused about why so many of the early comments on this story seem kinda dumb it is a combination of two things:
1. It took about 50 minutes before a link to a non-paywalled version of the story was posted, so many early commenters were going just on the title, which normally works pretty well but can fail badly when the submitter does a poor job of picking a title.
2. The submitted title was "WSJ: AI industry spent 17x more on Nvidia chips than it brought in in revenue".
Yes. I had a mention of it already typed out but decided that it's not adding much to my point. IBM Watson was some mysterious thing that got shipped with Windows 95 or 98, and I (as a kid) didn't understand what it could do.
I meant the AI stunt IBM ran with a computer winning Jeopardy, which IBM followed by selling AI services that were, by all accounts, unrelated to the thing that won Jeopardy.
In the corporate world we had CIOs and Operations leaders naively but legitimately making decisions based on their belief that Watson was going to replace service workers. Now it’s OpenAI, Copilot, or Generative AI broadly.
Yet what the corporate world has to show for all the hype is the same administrative burden and many failed experiments, POCs, and strategic projects. Oh, and a new generation of believers and snake-oil sales teams.