Hacker News

Is there anyone who has successfully productized AI? ChatGPT isn’t a profitable product, at least not yet. Google Photos and Spotify recommendations are the best AI products I can think of with clear revenue, and in these examples AI is just a cherry on top of a product people would use anyway.


Github Copilot seems like a pretty clear example. They charge a subscription that's in excess of the marginal cost of inference.


I’d be astonished if they’re even close to breaking even on copilot. In its current incarnation it wouldn’t even lace the boots of what’s coming out of OpenAI.

CopilotX with its OpenAI collab will be the real winner - if it ever gets released to those on the waitlist. I’m not aware of anyone who got in yet, which leads me to believe it doesn’t yet exist.


Copilot is already "backed" by OpenAI – it uses Codex under the hood which is a fine-tuned GPT-3 model.

CopilotX is not a paradigm shift, it's a version change from GPT-3 to GPT-4.


GPT-4 is arguably much better at coding, hence the "paradigm shift".


> In its current incarnation it wouldn’t even lace the boots of what’s coming out of OpenAI.

I don’t know about you, but Copilot is part of my daily work, while OpenAI ChatGPT is still more or less a toy to me.


Copilot is and has always been OpenAI, so not sure what you're talking about.


I got access to the Copilot CLI, which is supposed to be part of the Copilot X package eventually. Dunno anyone who has gotten access to Copilot Chat yet, which I expect is what everyone really wants.


That’s a good point. I forgot about Copilot.


Midjourney, NaturalReaders (text to speech reader)


curious if/when MS will get desirable returns on the significant investment needed to run/train copilot


The question there is how many of the estimated 100,000 software engineers at Microsoft are using Copilot, and what has been their (I'm sure measured) productivity boost. Microsoft does derive some benefit from Copilot being accessible to the paying subscribers of the world from having them give feedback and in free (and paid) press. But the internal use numbers probably easily justify its initial cost to train.

Say Copilot makes an engineer 2x as productive, their all-in salary is $500k to make the math easy, 1,000 MS sw engineers are using it, and Copilot took $5mm to train (GPT-3 took $4.6mm). Those 1k MS employees now being twice as productive are doing the work of an extra 1k people at $500k, or $500 million's worth in a year. That means $5 million in Copilot training costs are paid for in... 4 days. I have no inside information, so those numbers are all made up, but I'm pretty sure the initial training costs have already been paid off internally.

We also don't know how many multiples of $5mm it took to produce the initial version of Copilot, nor how many subsequent training runs there have been.

Point is, any significant productivity gains made, across an organization the size of Microsoft engineering, easily pays for big expensive training runs.
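A sketch of that back-of-envelope payback math in Python (every figure is the made-up assumption from the comment above, not real Microsoft data):

```python
# Back-of-envelope payback calculation using the comment's assumed figures:
# 2x productivity, $500k all-in salary, 1,000 engineers, $5M training cost.
# All numbers are illustrative assumptions, not real Microsoft data.

engineers = 1_000              # assumed MS engineers using Copilot
salary = 500_000               # assumed all-in annual cost per engineer
productivity_multiplier = 2.0  # assumed productivity boost
training_cost = 5_000_000      # assumed one-time training cost

# Extra output per year: each engineer does the work of
# (multiplier - 1) additional people.
extra_value_per_year = engineers * salary * (productivity_multiplier - 1)

days_to_payback = training_cost / (extra_value_per_year / 365)
print(f"extra value per year: ${extra_value_per_year:,.0f}")
print(f"training cost paid back in {days_to_payback:.1f} days")
```

With these inputs the extra value is $500M/year, and the $5M training run pays for itself in under 4 days, matching the comment's conclusion.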


SOME parts of an engineer's work are 2x faster, but not all. Generating code - yes, writing tests and docs - yes, designing the system - no, debugging - no, attending meetings - no, getting your data faster, or moving the other team to finish integration sooner - no help. So it's going to be 10% boost overall, not 100%.

The nice effect is that the AI makes people more confident to try things and go out of their comfort zone. Maybe the quality of the end product will be higher.
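The "faster at some tasks, ~10% overall" argument is essentially Amdahl's law applied to an engineer's week. A quick sketch with assumed time fractions (the fractions are purely illustrative, not measured data):

```python
# Amdahl's-law-style estimate: overall speedup when only some tasks
# get faster. Time fractions and speedups are illustrative assumptions.

tasks = {
    # task: (fraction of work time, assumed speedup with AI assistance)
    "writing code":      (0.15, 2.0),
    "tests & docs":      (0.05, 2.0),
    "design":            (0.15, 1.0),
    "debugging":         (0.25, 1.0),
    "meetings":          (0.20, 1.0),
    "waiting on others": (0.20, 1.0),
}

# New total time: each task's share of the week divided by its speedup.
new_time = sum(frac / speedup for frac, speedup in tasks.values())
overall_speedup = 1 / new_time
print(f"overall speedup: {overall_speedup:.2f}x")
```

With these fractions the result is about a 1.11x overall boost, in the same ballpark as the ~10% estimate above.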


It makes you nowhere near 2x as productive. Useful, yes, but 2x is a dream.


It's a good question, but also helpful to point out that one of the beauties of these models is that you can train them once and deploy to many use cases. The same model can be used by Github, Bing, Office 365, Azure and so on.

And as for the big multi-billion investment in OpenAI, they may have more than made that back on their valuation already. Plus the deal was structured so that OpenAI would pay its revenues into Microsoft until the investment was paid back, and MS would still end up with a 49% stake.

All in all, sounds like a smart investment from MS and, cherry on top, managed to majorly embarrass a main rival.


agree on it being a good play by MS. will be interesting to see if they do spin it out to their other realms


If the returns are computed by increase in market cap, they may have already gotten that.


How do you want to define AI? Google's been using ML models to power various Google products for years. Translate was a big switch over, back in 2016, and it powers the Google Search answer box. I have no idea how profitable that answer box is, but I'm fairly certain that Google Search is profitable. The Google Home speaker voice recognition is also powered by ML models.


Assistant however never made money.


OpenAI is growing revenues from ~0 in 2022 to $300M in 2023 to $1B in 2024. That sounds like a product to me.


It is currently April 2023. “~0 in 2022” is the only part of that that seems credible. I'm not convinced by OpenAI's rosy predictions of future explosive growth.


The fact that Microsoft is baking GPT into all of their products guarantees explosive growth.

ChatGPT is also one of the fastest growing consumer products in history by number of users. At $20 a month for plus, it could be a significant revenue stream.

Then add all the companies like Duolingo and Snapchat that are using GPT as well.

If you don’t see this as explosive growth, then I don’t know what to tell you.


> The fact that Microsoft is baking GPT into all of their products guarantees explosive growth.

Why?


Because Microsoft is the largest software company in the world. Their products are in wide use by virtually all businesses and schools in the country. Judging by the popularity of ChatGPT, these features will be very popular and heavily used.


"Hopes to grow" revenues. Current estimates put hardware costs alone at $700k/day, so even if they hit $300M in 2023 that won't make them profitable. This isn't even counting the people costs and other operation costs required to run a company.

edit: order of magnitude was wrong on costs per day.


And yet, $300M is only 1.25M subscribers at the current $20/mo rate. If we say that they need a $1B / year to be comfortably profitable, that's ~4.2M subscribers. A good rule of thumb is that you can hope to convert about 10% of your free user base to paid; one random source says they have 100M monthly active users - which at 10% conversion, is $2.4B / year. I think they'll be fine.
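The subscriber arithmetic above checks out; a quick sketch (the 100M MAU figure and 10% conversion rate are the comment's assumptions, not disclosed numbers):

```python
# Quick check of the subscriber math in the comment above.
# 100M MAU and 10% conversion are the comment's assumptions.

price_per_month = 20
annual_price = price_per_month * 12         # $240/yr per subscriber

# Subscribers needed to hit a given annual revenue target.
subs_for_300m = 300_000_000 / annual_price  # 1.25M subscribers
subs_for_1b = 1_000_000_000 / annual_price  # ~4.17M subscribers

# Revenue if 10% of 100M monthly active users convert to paid.
mau = 100_000_000
conversion = 0.10
revenue_at_conversion = mau * conversion * annual_price  # $2.4B/yr

print(f"subs for $300M: {subs_for_300m:,.0f}")
print(f"subs for $1B: {subs_for_1b:,.0f}")
print(f"revenue at 10% conversion: ${revenue_at_conversion:,.0f}/yr")
```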


10% is insanely high conversion for B2C freemium. It’s closer to 1% for most products.


Does GPT-4 look like 'most products'?

You are talking about a revolutionary product that literally dominates mindshare from consumers to students to CEOs to governments. It's a pure monopoly with insanely wide utility and an instantly obvious value proposition.


Do you know anything about B2C freemium products? 10% conversion over 10s-100s of millions of users is literally insane.

It doesn't matter how revolutionary GPT-4 is, people's willingness to pay for anything is generally very low. ChatGPT premium is also a very expensive subscription for a consumer product!


People are willing to pay $10k a year for college, a very high conversion rate. People are willing to pay for tutoring, at a rate that's probably about 10% of the student base, despite its very high costs ($50/h, not $20/m).

At the very minimum, I can already see every university and high school student paying for GPT-4. It's a way way way more powerful essay writer and personal tutor than GPT-3.5; that alone is incentive to upgrade. For only $20 a month.

You know what, GPT-4 is currently insanely capacity-limited. So far, the limitation is not on the demand side, but the supply side.

Getting 100 million users in 3 months with 0 marketing or network effects already annihilates existing records; getting 10% conversion is nothing special.


You could have saved yourself and everyone else the time by instead writing “I have no idea what I’m talking about and I don’t understand constraints”.

https://twitter.com/visakanv/status/1113154447050334208


The cheating sites I see on the first page of Google seem to charge around $10-$30/page. GPT is cheaper, but lower quality. I don't think the market for tens of dollars cheaper but shittier quality than buyessayfriend.com is anywhere near every highschool and college student.


Look what happened to image generators, no one talks about DALL-E or whatever that was.


How much money will they spend servicing requests for those 4.2M subscribers?


Do you have the order of magnitude right at $700/day? That's not much at all.


It's not 2024 yet, and 2023 just started.


Revenues mean nothing if your expenses outpace them. What's the net profit?


I've seen MidJourney's estimated revenue at about $750k/month. Not bad.


This seems likely lower than their costs. Is there a breakdown somewhere?


The GPT API is a successful product. All those startups that are just a thin layer over GPT, funded by Y Combinator, are paying for API use, and that's profitable for OpenAI.


> and that's profitable for OpenAI.

Reference?

"Profitable" means they're making more money than they're using, at this moment.


Are you implying openai is selling access to their API at a loss?


No, I would like facts, not assumptions. It's definitely not safe to assume they are making a profit, as a whole, or per transaction. It's more complicated than that.

Profit has a strict definition of $revenue - $cost, for a business operation as a whole, which leaves money in the bank at the end of the month.

They could be making more money for a single query than the cost of compute time for that single query, but that may not cover the engineering and idle servers. They could be running at a loss with the assumption that they can improve efficiency per transaction soon. They could be running at a "loss" because they're giving some of the compute away for free right now, to improve the training with the user responses. Or maybe they are making fistfuls of money. "Profitable" has a strict meaning, shouldn't be assumed, and definitely isn't required, at this point in their operation.

I'm very interested to know if they are profitable, at the moment, but I don't think that's been publicly disclosed yet, and I can't find anything. A reference is required.


I don't have a reference. I'm making the very reasonable assumption that OpenAI is making money on API calls based on how much they charge compared to others in this space, the favorable pricing they receive from Microsoft, their ability to constantly bring down costs and pass the savings to their consumers, and their unwillingness to lower the cost of DALL-E even though it's more expensive than its competitors.

Very reasonable assumptions. You will never get certainty; even if they say they're profitable, maybe they're just lying for investors. If you see their bank account total go up every month, maybe it's a Ponzi scheme.

By my heuristics, they're profitable or at least close, and definitely a major success in acquiring market share and customer mindshare.


So, because fraud is possible, unfounded speculation is the best we have?


I've given plenty of evidence as to why it's a reasonable assumption. But doesn't seem like you have much access to nuance in your thinking.


There is a world in between "we cannot be sure of anything because of fraud" and "here is some rough guesswork."

If you want to claim the latter as evidence then fair enough; I would call it speculation.

In either case there is no need to resort to petty insults.


It was not a petty insult. The options you gave are "fraud" or "unfounded speculation." Literally lacks any kind of nuance. What sort of nuance would you say you contributed?


I think it would be better to make it clear that something is an assumption rather than stating it as fact, to not add to the noise. In the world of tech (and any R&D heavy group), initially running at a loss is the norm, not the exception.


I use GPT about 100x more than search now for my day to day work. I pay them $20/mo. I’d likely pay $100 or more for the value I’m getting.

They’ll figure it out.


TikTok uses a bunch of AI: the algorithm for the FYP, vision models for classifying videos, and sound processing to bucket sounds/music. This feeds into their rec engine as well as their safety engine.


if you consider Spotify recommendations as AI, then you should also consider YouTube and every social network with a non-chronological timeline and ads, no?



