Hacker News

The price feels outrageous, but I think the unspoken truth here is that they think o1 is good enough to replace employees. For example, if it's really as good at coding as they say, I could see this being the point where some people decide that a team of 5 devs with o1 pro can do the work of 6 or 7 devs without it.


No, o1 is definitely not good enough to replace employees.

The reason we're launching o1 pro is that we have a small slice of power users who want max usage and max intelligence, and this is just a way to supply that option without making them resort to annoying workarounds like buying 10 accounts and rotating through their rate limits. Really it's just an option for those who'd want it; definitely not trying to push a super expensive subscription onto anyone who wouldn't get value from it.

(I work at OpenAI, but I am not involved in o1 pro)


I wish the second paragraph had been the launch announcement


My intern, on their third day, still couldn't write a script that o1-preview could produce in fewer than 25 prompts.

OBVIOUSLY a smart OAI employee wouldn't want the public to think they are already replacing high-level humans.

And OBVIOUSLY OAI senior management will want to convince AI engineers who might have second thoughts about their work that they aren't developing a replacement for human beings.

But they are.


> 25 prompts

Interested to learn more: is that the usual break-even point?


25 prompts is the daily limit on o1-preview. And I wrote that script in just one day.


Good enough to replace very junior employees.

But, then again, how are companies going to get senior employees if the world stops producing juniors?


Indeed, I'm very concerned about this, though I think it's a case of the tragedy of the commons. Every company individually optimizes for itself, fucking us over in the aggregate. But any executive arguing for training juniors would have to be at a pretty big company, with an internal pipeline and promotion from within, to justify it, especially since everyone else will just poach your cultivated talent, and employees aren't loyal anymore (nor should they be, but that's a different discussion).


Maybe someone at OAI should have considered the optics of leading the "12 days of product releases" with this, then.


> The reason we're launching o1 pro is that we have a small slice of power users who want max usage and max intelligence

I'd settle for knowing what level of usage and intelligence I'm getting, instead of feeling gaslit by models whose capabilities seem to vary with the time of day, the number of days since release, and whatnot


> No, o1 is definitely not good enough to replace employees.

You should meet some of my colleagues...


Yeah, to be fair, there exist employees (some of whom are managers) whose absence would actually improve productivity. So the bar for "can this replace any employees at all?" is potentially so low that, technically, cat'ing from /dev/null can clear it, if you must have a computerized solution.

Companies won’t be able to figure those cases out, though, because if they could they’d already have gotten rid of those folks and replaced them with nothing.


Unfortunately I'm seeing that at my company already. They are forcing AI tools down our throats, and execs are vastly misinterpreting stats like "20% of our code is coming from AI".

What that means is that the simple, boilerplate, repetitive stuff is being generated by LLMs, but for anything complex, or involving more than a single simple problem, LLMs often create more problems than they solve. Effective devs are using them to handle the simple stuff, while execs are thinking "the team can be reduced by x", when in reality you can get rid of, at best, your most junior and least trained people without losing key abilities.

Watching companies try to sell their AIs and "agents" as having the ability to reason is also absurd, but the non-technical managers and execs are eating it up...


>The price feels outrageous,

I haven't used ChatGPT enough to judge what a "fair price" is but $200/month seems to be in the ballpark of other "software-tools-for-highly-paid-knowledge-workers" with premium pricing:

- mathematicians: Wolfram Mathematica is $154/month

- attorneys: WestLaw legal research service is ~$200/month with common options added

- engineers for printed circuit boards : Altium Designer is $355/month

- CAD/CAM designers: Siemens NX base subscription is $615/month

- financial traders : Bloomberg Terminal is ~$2100/month

It will be interesting to see if OpenAI can maintain $200/month pricing power like the sustainable examples above. Those other industries have sustained their premium prices even though cheaper, less-featured alternatives exist (sometimes including open source). Indeed, they often raise their prices each year instead of discounting them.

One difference from them is that OpenAI has much more intense competition than those older businesses.


This is a really interesting take. I don't think individuals pay for these subscriptions, though; it's usually an organizational license.

They also come with extensive support and documentation, and people have vast experience using them. They are also integrated very well with all the other tools of the field. This makes them very entrenched. I am not sure OpenAI has any of those things, and I also don't know what they would even entail for LLMs.

Maybe they need to add modes that are good for certain tasks or integrate with tools that their users most commonly use like email, document processors.


That'll work out nicely when you have 5 people learning nothing and just asking GPT to do everything and then you have a big terrible codebase that GPT can't effectively operate on, and a team that doesn't know how to do anything.

Bullish


Sounds like a great market opportunity for consulting gigs to clean up the aftermath at medium size companies.


This is how I have made my living for years, and that was before AI


I'm rooting for this to happen at scale.

It'll be an object lesson in short-termism.

(and provide some job security, perhaps)


No lessons will be learned, but it’ll provide for some sweet, if unpleasant, contract gigs.


I think that would be a great outcome - more well paid work for everyone cleaning up the mess


Suppose an employee costs a business, say, $10k/month; that's 50 subscriptions. Can giving the AI to, say, 40 employees improve their performance enough to avoid the need to hire another employee? This does not sound outlandish to me, at least in certain industries.


That’s the wrong question. The only question is “is this price reflective of 10x performance over the competition?”. The answer is almost definitely no.


It doesn't have to be 10x.

Imagine you have two options:

A) A $20/month service which provides you with $100/month of value.

B) A $200/month service which provides you with $300/month of value.

A nets you $80, but B nets you $100. So you should pick B.
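The comparison above can be sketched as a tiny calculation (the dollar figures are the commenter's hypotheticals, not real measured value):

```python
def net_value(cost_per_month: float, value_per_month: float) -> float:
    """Monthly value delivered minus subscription cost."""
    return value_per_month - cost_per_month

option_a = net_value(20, 100)   # $80/month net
option_b = net_value(200, 300)  # $100/month net

# Despite costing 10x more, option B nets more per month.
assert option_b > option_a
```

The point being that the rational comparison is on net value, not price ratio.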


Consider a $350k/year engineer.

If Claude increases their productivity 5% ($17.5k/yr) but CGPT Pro adds 7% ($24.5k/yr), that's an extra $7k in productivity, which more than makes up for the $2,400 annual cost. 10x the price and only 40% better, but still worth it.
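Spelling out the arithmetic above (the salary and percentage gains are the commenter's hypotheticals):

```python
salary = 350_000                     # $/year engineer (hypothetical)
claude_gain = 0.05 * salary          # 5% productivity gain -> $17,500/year
pro_gain = 0.07 * salary             # 7% productivity gain -> $24,500/year

extra_gain = pro_gain - claude_gain  # $7,000/year of additional productivity
pro_cost = 200 * 12                  # $2,400/year subscription

# The incremental gain comfortably exceeds the subscription cost.
assert extra_gain > pro_cost
```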


If I’m understanding their own graphs correctly, it’s not even 10x their own next lowest pricing tier.


In a hypothetical world where this was integrated with code reviews, and minimized developer time (writing valid/useful comments), and minimized bugs by even a small percentage... $200/m is a no-brainer.

The question is - how good is it really.


That sounds very much like the first-order reaction they'd expect from upper and middle management. Artificially high prices can give the buyer the feeling that they're getting more than they really are, as a consequence of the sunk cost fallacy. You can't rule out that they want to dazzle with this impression even if eval metrics remain effectively the same.


I think the key is to have a strong goal. If the developer knows what they want but can't quite get there, then even when the model gives a wrong answer you can catch it, and then use the resulting code to improve your productivity.

Last week I was using Jetpack Compose (a React-like framework). A cardinal sin in Jetpack Compose is to change a State variable from within a composable in response to something other than user/UI action, when the composable also mutates that variable. This is easy enough to understand for toy examples, but in more complex systems one can make this mistake. o1-preview made it last week, and I caught it. When I prompted it with the stack trace, it did not immediately catch on and recommended a solution that committed the same error. When I actually gave it the documentation on the issue, it caught on and made the variable a user preference instead. I used the user-preference code in my app instead of writing it myself. It worked well.


It is not good enough to replace workers of a skill level I would hire. But that won't stop people doing it.


I am not so sure about "replace". At least at my company we are always short-staffed (mostly because we can't find people fast enough, given how long the whole interview cycle takes). It might actually free some people up to do more interviews.


That's a great point actually. Nearly everywhere (us included) is short-staffed (and by that I mean we don't have the bandwidth to build everything we want to build), so perhaps it's not a "reduce the team size" but rather a "reduce the level of deficit."


And the fact that ordinary people sanction this by supporting OpenAI is outrageous.



