Hacker News

> bigger models trained on bigger data with bigger reasoning posttraining and better distillation will push the horizons further and further

There is no evidence this is the case.

We could be in an era of diminishing returns where bigger models do not yield substantial improvements in quality, but instead become faster, cheaper, and more resource-efficient.




I would claim that o1 -> o3 is evidence of exactly that, and supposedly in half a year we will have even better reasoning models (a further complexity horizon). What could that be besides what I am describing?


Is there some breakthrough in reasoning between o1 and o3 that we are all missing?

And no one cares what we may have in the future. OpenAI etc. already have a credibility problem.


No breakthrough, it's just better in some quantitatively measurable way.


The empirical scaling laws are evidence. They're not deductive evidence, but still evidence.

The scaling laws themselves advertise diminishing returns, something like a natural log. This was never debated by AI optimists, so it's odd to suggest otherwise as if it contradicts anything the AI optimists have been saying.

The scaling laws are kind of a worst case scenario, anyway. They assume no paradigm shift in methodology. As we saw when the test-time scaling law was discovered, you can't bet on stasis here.
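To make the "diminishing returns" shape concrete: empirical scaling laws are usually reported as power laws in model size, loss(N) = (Nc / N)^alpha, which look roughly logarithmic when plotted against compute on a linear axis. Here's a minimal sketch; the constants `n_c` and `alpha` are illustrative placeholders, not fitted values from any particular paper.

```python
# Hypothetical illustration of diminishing returns under a
# power-law scaling law: loss(N) = (n_c / N) ** alpha.
# The constants below are made up for illustration only.

def loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Power-law loss fit as a function of parameter count; smaller is better."""
    return (n_c / n_params) ** alpha

# Each doubling of parameters buys a smaller absolute loss reduction.
sizes = [1e9 * 2**k for k in range(5)]  # 1B, 2B, 4B, 8B, 16B params
losses = [loss(n) for n in sizes]
gains = [a - b for a, b in zip(losses, losses[1:])]

print([round(l, 4) for l in losses])
print([round(g, 5) for g in gains])
# The gain per doubling is a constant *fraction* of the current loss
# (constant in log space), so the absolute improvement keeps shrinking --
# exactly the diminishing-returns curve the scaling laws advertise.
```

Note that none of this bakes in a paradigm shift: a new axis like test-time compute effectively starts a fresh curve rather than extending this one.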





