
This actually set back my expectations of AI / LLMs / LRMs by at least 5, if not 10, years. But someone please correct me if I am wrong.

My view was that, up to a few years ago, while AI / LLMs were good at being conversational and dishing out results in a language we understand, they still didn't "understand" anything, and much of the time conjured up whatever seemed remotely correct: pattern matching over a very large data set that could be correct 70%, and increasingly 80%+, of the time. More accurate predictions, however, would require orders of magnitude more computing resources.

But pattern matching is still pattern matching. There is no reasoning behind it. 1 + 1 will never equal 11, but a model may skew towards that result because of JavaScript. When fundamental logic isn't behind any of this progress and process, the very bottom layer of any conversation / information / result is fragile.
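The JavaScript quirk alluded to here is real: the `+` operator is overloaded, so numeric addition gives 2 while string concatenation gives "11". Training data full of such code could plausibly skew a pure pattern matcher. A minimal illustration:

```javascript
// In JavaScript, `+` adds numbers but concatenates strings.
console.log(1 + 1);       // 2
console.log("1" + "1");   // "11"
console.log("1" + 1);     // "11" — the number is coerced to a string
```

A model that has only matched surface patterns has no principled way to tell these cases apart; one that actually applied arithmetic rules would.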

So I have been skeptical of AI progress and LLMs. That was until LRMs, or, as the title says, Reasoning LLMs. I thought we had somehow managed to programme critical thinking into them, or some sort of reflection / fact checking / rationale / basic logic as a fundamental principle. And while I can tell LRMs aren't and won't be perfect, and may never quite reach AGI, the layer will improve over time until we find different ways to progress. And we will have something I call Assisted Intelligence, which is what a lot of people use for AI programming today.

Instead, what this shows is that LRMs aren't reasoning at all. It is an LLM conjuring up excuses to make it look like it is reasoning: another set of pattern matching, made up specifically so the output looks like reasoning. It is basically a kid making up a clever-sounding explanation for how he got the results, without thinking, because he just wants to get out of class or homework.

Maybe the title gave it away, and maybe we got tricked. It was always an LLM specifically trained to showcase "reasoning". The actual reasoning behind the scenes is never done. Hence the title, "The Illusion of Thinking".





