Hacker News | why_only_15's comments

I'm not really sure but my recollection from talking to them in 2019 was that it was quite difficult to get features shipped because of e.g. hacking risk.


It's certainly true that iOS's strict sandboxing and aggressive resource management probably made life harder for them, but that doesn't excuse the lack of deep integration for 1p automation. That's the kind of stuff AppleScript allowed two decades prior without any background runtime.


taking into account all the impacts on society, uber is a substantial improvement on what came before. sometimes laws are bad and it is good when you break them


doing a u32 compare instead of an f32 compare is not rust-specific or indeed CPU-specific.


This trick is very useful on Nvidia GPUs for calculating mins and maxes in some cases, e.g. atomic mins (better u32 support than f32) or warp-wide mins with `redux.sync` (only supports u32, not f32).
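To make the trick concrete, here is a minimal CUDA-flavored sketch (my own illustration, not from the parent comments; the function names are made up, and it assumes IEEE-754 binary32 values with no NaNs in play, plus sm_80+ for the warp reduce):

    // Map a float to a u32 key whose unsigned ordering matches the float
    // ordering, so integer-only primitives (atomicMin on unsigned,
    // __reduce_min_sync) can compute float mins.
    __device__ unsigned int float_to_ordered_u32(float f) {
        unsigned int u = __float_as_uint(f);
        // Non-negative: set the sign bit so these sort above all negatives.
        // Negative: flip all bits so "more negative" becomes "smaller".
        return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
    }

    __device__ float ordered_u32_to_float(unsigned int k) {
        unsigned int u = (k & 0x80000000u) ? (k ^ 0x80000000u) : ~k;
        return __uint_as_float(u);
    }

    // Float atomic min via the u32 atomic; *slot should be initialized
    // to 0xFFFFFFFFu (>= any float's key).
    __device__ void atomic_min_float(unsigned int* slot, float v) {
        atomicMin(slot, float_to_ordered_u32(v));
    }

    // Warp-wide float min; __reduce_min_sync lowers to redux.sync.min.u32.
    __device__ float warp_min_float(float v) {
        unsigned int k = __reduce_min_sync(0xFFFFFFFFu, float_to_ordered_u32(v));
        return ordered_u32_to_float(k);
    }

The max case works the same way with atomicMax / __reduce_max_sync (initialize the slot to 0 instead).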


Probably more like the richest 10% globally, which includes most people in the US


Random unrelated point: in a 100km radius circle between Atlanta and Augusta there are ~2,000,000 people (calculated using https://www.tomforth.co.uk/circlepopulations/ )


Haha, thank you for doing the math! I was lazy and just added up the city populations and stuck a plus sign on the end.


I'd be pretty curious to get patio11's opinion on why #5 happened.


Assuming the 10M records come to ~2000M input tokens + 200M output tokens, this would cost ~$300 to classify using llama-3.3-70b[1]. If using Llama lets you do this in, say, one day instead of two days for a traditional NLP pipeline, it's worthwhile.

[1]: https://openrouter.ai/meta-llama/llama-3.3-70b-instruct
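For concreteness, the arithmetic with illustrative per-million-token rates (assumed here for the sake of the example; actual OpenRouter provider pricing varies):

    2,000M input tokens  x ~$0.12/M ~= $240
      200M output tokens x ~$0.30/M ~=  $60
                                       -----
                                       ~$300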


> ...two days for a traditional NLP pipeline

Why 2 days? Machine learning took over the NLP space 10-15 years ago, so the comparison is between small, performant task-specific models and LLMs. There is no reason to believe "traditional" NLP pipelines are inherently slower than large language models, and they aren't.


my claim is not that such a pipeline would take two days to run, but that it would take two days to build an NLP pipeline, whereas an LLM pipeline would be faster to put together.


how is this a plateau since gpt-4? this is significantly better


First, this model is yet to be released; this is a momentum "announcement". When o1 was announced, it was billed as a "breakthrough", but I use Claude and o1 daily and 80% of the time Claude beats it. I also see it as a highly fine-tuned/targeted GPT-4 rather than something with complex understanding.

So we'll find out within 2-3 months whether this model is real or not. My guess is that it'll turn out to be another flop like o1. They needed to release something big because they are momentum-based and their ability to raise funding is contingent on their AGI claims.


I thought o1 was a fine-tune of GPT-4o. I don't think o3 is though. Likely using the same techniques on what would have been the "GPT-5" base model.


Intelligence has not been LLMs' major limiting factor since GPT-4. The original GPT-4 reports in late 2022 & 2023 already established that it's well beyond an average human in professional fields: https://www.microsoft.com/en-us/research/publication/sparks-.... They have failed to outright replace humans at work, but not because of lacking intelligence.

We may have progressed from a 99%-accurate chatbot to one that's 99.9% accurate, and you'd have a hard time telling them apart in normal real-world (dumb) applications. A paradigm shift is needed from the current chatbot interface to a long-lived stream-of-consciousness model (e.g. a brain that constantly reads input and produces thoughts at a 10 ms refresh rate, remembers events for years while keeping the context window from exploding, and is paired with a cerebellum to drive robot motors at even higher refresh rates).

As long as we're stuck at chatbots, LLM's impact on the real world will be very limited, regardless of how intelligent they become.


o3 is multiple orders of magnitude more expensive for a marginal performance gain. You could hire 50 full-time PhDs for the cost of using o3. You're witnessing the blowoff top of the scaling hype bubble.


What they’ve proven here is that it can be done.

Now they just have to make it cheap.

Tell me, what has this industry been good at since its birth? Driving down the cost of compute and making things more efficient.

Are you seriously going to assume that won’t happen here?


>> Now they just have to make it cheap.

Like they've been making it all this time? Cheaper and cheaper? Less data, less compute, fewer parameters, but the same or improved performance? That's not what we observe.

>> Tell me, what has this industry been good at since its birth? Driving down the cost of compute and making things more efficient.

No; actually, the cheaper compute gets, the more of it they need to use, or their progress stalls.


> Like they've been making it all this time?

Yes, exactly like they've been doing this whole time, with the cost of running each model dropping massively, sometimes quite rapidly, after release.


No, the cost of training is the one that isn't dropping any time soon. When data, compute, and parameters increase, the cost increases, yes?


Do you understand the difference between training and inference?

Yes, it costs a lot to train a model. Those costs go up. But once you've trained it, it's done. At that point inference — the actual execution/usage of the model — is the cost you worry about.

Inference cost drops rapidly after a model is released as new optimizations and more efficient compute come online.


That’s precisely what’s different about this approach. Now the inference itself is expensive because the system spends far more time coming up with potential solutions and searching for the optimal one.


I feel like I’m taking crazy pills.

Inference always starts expensive. It comes down.


And again, no. The cost of inference is a function of the size of the model, and if models keep getting bigger, deeper, badder, the cost of inference will keep going up. And if models stop getting bigger because improved performance can be achieved just by scaling inference without scaling the model, well, that's still more inference. Even if the per-unit cost falls, keeping AI companies competitive with each other will require so much more inference that the money they have to spend keeps increasing. In other words, it's not how much it costs but how much you need to buy.

This is a thing, you should know. It's called the Jevons paradox:

In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes Jevons effect) occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces increases in demand enough that resource use is increased, rather than reduced.[1][2][3][4]

https://en.wikipedia.org/wiki/Jevons_paradox

Better check those pills then.

Oh but, you know, merry chrimbo to you too.


>> Do you understand the difference between training and inference?

Oh yes indeed-ee-o, and I'm referring to training, not inference, because the big problem is the cost of training. The cost of training has increased steeply with every new generation of models because it has to in order to improve performance. That process has already reached the point where training ever-larger models is prohibitively expensive even for companies with the resources of OpenAI. For example, the following is from an article posted on HN a couple of days ago that is basically all about the overwhelming cost of training GPT-5:

In mid-2023, OpenAI started a training run that doubled as a test for a proposed new design for Orion. But the process was sluggish, signaling that a larger training run would likely take an incredibly long time, which would in turn make it outrageously expensive. And the results of the project, dubbed Arrakis, indicated that creating GPT-5 wouldn’t go as smoothly as hoped.

(...)

Altman has said training GPT-4 cost more than $100 million. Future AI models are expected to push past $1 billion. A failed training run is like a space rocket exploding in the sky shortly after launch.

(...)

By May, OpenAI’s researchers decided they were ready to attempt another large-scale training run for Orion, which they expected to last through November.

Once the training began, researchers discovered a problem in the data: It wasn’t as diversified as they had thought, potentially limiting how much Orion would learn.

The problem hadn’t been visible in smaller-scale efforts and only became apparent after the large training run had already started. OpenAI had spent too much time and money to start over.

From:

https://archive.ph/L7fOF

HN discussion:

https://news.ycombinator.com/item?id=42485938

"Once you trained it it's done" - no. First, because you need to train new models continuously so that they pick up new information (e.g. the name of the President of the US). Second because companies are trying to compete with each other and to do that they have to train bigger models all the time.

Bigger models mean more parameters and more data (assuming there is enough, which is a whole other can of worms); more parameters and data mean more compute, and more compute means more millions, or even billions. Nothing in all this suggests that costs are coming down in any way, shape or form, and yep, that's absolutely about training and not inference. You can't do inference before you do training, and you need to train continuously, so you can't ignore the cost of training and consider only the cost of inference. Inference is not the problem.


> What they’ve proven here is that it can be done.

No, they haven't; these results do not generalize, as mentioned in the article:

"Furthermore, early data points suggest that the upcoming ARC-AGI-2 benchmark will still pose a significant challenge to o3, potentially reducing its score to under 30% even at high compute"

Meaning, they haven't solved AGI, and the tasks themselves do not represent programming well; these models do not perform that well on engineering benchmarks.


Sure, AGI hasn’t been solved today.

But what they’ve done is show that progress isn’t slowing down. In fact, it looks like things are accelerating.

So sure, we’ll be splitting hairs for a while about when we reach AGI. But the point is that just yesterday people were still talking about a plateau.


About 10,000 times the cost for twice the performance sure looks like progress is slowing to me.


Just to be clear — your position is that the cost of inference for o3 will not go down over time (which would be the first time that has happened for any of these models).


Even if compute costs drop by 10X a year (which seems like a gross overestimate IMO), you're still looking at 1000X the cost for a 2X annual performance gain. Costs outpacing progress is the very definition of diminishing returns.


From their charts, o3 mini outperforms o1 using less energy. I don’t see the diminishing returns you’re talking about. Improvement outpacing cost. By your logic, perhaps the very definition of progress?

You can also use the full o3 model, consume insane power, and get insane results. Sure, it will probably take longer to drive down those costs.

You’re welcome to bet against them succeeding at that. I won’t be.


Yes, that's exactly what I'm implying, otherwise they would have done it a long time ago, given that the fundamental transformer architecture hasn't changed since 2017. This bubble is like watching first-year CS students trying to brute-force homework problems.


> Yes, that's exactly what I'm implying, otherwise they would have done it a long time ago

They’ve been doing it literally this entire time. o3-mini, according to the charts they’ve released, is less expensive than o1 but performs better.

The cost of running these models has been falling precipitously.


I would agree if the cost of AI compute per unit of performance hadn't been dropping by more than 90-99% per year since GPT-3 launched.

This type of compute will be cheaper than Claude 3.5 within 2 years.

It's kinda nuts. Give these models tools to navigate and build on the internet and they'll be building companies and selling services.


That's a very static view of affairs. Once you have a master AI, at a minimum you can use it to train cheaper, slightly less capable AIs. At the other end, the master AI can train to become even smarter.


The high-efficiency version got 75% at just $20/task. When you count the time to fill in the squares, that doesn't sound far off from what a skilled human would charge.


People act as if GPT-4 came out 10 years ago.


> how is this a plateau since gpt-4? this is significantly better

Significantly better at what? A benchmark? That isn't necessarily progress. Many report preferring GPT-4 to the newer o1 models with their hidden reasoning text. Hidden reasoning makes the model more reliable, but more reliable is bad if it is reliably wrong at something, since then you can't ask it over and over to find what you want.

I don't feel it is significantly smarter; it's more like having the same dumb person spend more time thinking than the model actually getting smarter.


Fantastic article. I didn't realize dairy cows lactated ~60lbs/day or ~3.5% of their body weight. Totally insane. Chickens appear to be this way too -- from some quick research Rhode Island Reds are ~3kg, lay ~300 eggs/year and each egg is ~62g.
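Spelling out the chicken comparison from those numbers: 300 eggs/year x ~62 g is about 18.6 kg of egg per year, or roughly 51 g/day, which works out to ~1.7% of a 3 kg hen's body weight every day.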

The graph at the end ("US milk yield continues to grow, but falls short of its genetic potential") is interesting. If I saw that graph, I would interpret it as dairy farmers overfitting on the genetic yield potential measure, not as something that needs explanations like climate change.


I think the second paragraph is poorly written:

> America’s cows are now extraordinarily productive. In 2024, just 9.3 million cows will produce 226 billion pounds of milk (about 100 million tons) – enough milk to provide ten percent of 333 million insatiable Americans’ diets, and export for good measure.

Is that all the cows in the US? Why tell us how many cows produce 10 percent of demand?


What he means is that those ~9.3M cows, essentially all the dairy cows in the US, produce dairy products that make up 10% of the calories in US diets.


> I didn't realize dairy cows lactated ~60lbs/day or ~3.5% of their body weight. Totally insane.

I'd temper that a little bit by noting that milk is largely (~90%) water by weight.


True, though it's still enough nutrients to feed three people's entire caloric needs.


Imagine the suffering on the animals' side...

