Hacker News

The paper shows that reasoning beats no reasoning, that reasoning spends more tokens than simple tasks need, and that models fall apart when things get too complicated. Nothing interesting, on the level of what an undergrad would write for a side project. If it weren’t “from Apple” no one would be mentioning it.





They cite https://arxiv.org/abs/2503.23829, which is interesting though: if you have lots of tokens to burn, just attempt the task many times and keep the best result. That can find better solutions than a reasoning model would on its first try. Only tested on small models, though.
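The repeated-sampling idea is essentially best-of-N with a verifier. A minimal sketch, assuming you have some `generate` step and a `score` function to rank attempts (both names are hypothetical stand-ins, not from the paper):

```python
import random

def best_of_n(generate, score, n, seed=0):
    """Repeated sampling: draw n independent attempts, keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n):
        candidate = generate(rng)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Toy stand-ins for the model and the verifier:
# guess an integer, score by closeness to a hidden target of 42.
guess = lambda rng: rng.randint(0, 100)
closeness = lambda x: -abs(x - 42)

one_try, _ = best_of_n(guess, closeness, n=1)
many_tries, _ = best_of_n(guess, closeness, n=100)
```

With the same seed, the 100-sample run starts from the same first draw as the single-sample run, so its best result is never worse; that monotonicity is the whole appeal of the approach when tokens are cheap and a verifier is available.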

I think these kinds of papers are necessary to ground people back in reality - the hype machine is too strong to be left unchecked.

All the papers I’ve seen show models have limits. This is just an attention grab by lazy “researchers” cashing in on their Apple credentials.

Samy Bengio is a co-author of Torch. His credentials speak for themselves.


