
I would claim that o1 -> o3 is evidence of exactly that, and supposedly in half a year we will have even better reasoning models (a further complexity horizon). So what could that be, besides what I am describing?





Is there some breakthrough in reasoning between o1 and o3 that we are all missing?

And no one cares what we may have in the future. OpenAI et al. already have a credibility problem.


No breakthrough; it's just better in some quantitatively measurable ways.


