
I would claim that o1 -> o3 is evidence of exactly that, and supposedly in half a year we will have even better reasoning models (a further complexity horizon), so what could that be besides what I am describing?


Is there some breakthrough in reasoning between o1 and o3 that we are all missing?

And no one cares what we may have in the future. OpenAI et al. already have a credibility problem.


No breakthrough; it's just better in some quantitatively measurable ways.



