> The issue is people who say "see, the AI makes mistakes at very complex reasoning problems, so their 'thinking is an illusion'". That's the title of the paper.
That's not what the paper proposes (i.e. "it commits errors => thinking is an illusion"). It actually looks at the failure modes and argues that, because of HOW the models fail and in which contexts/conditions, their thinking may be "illusory" (not that the word "illusory" matters that much; papers of this calibre always strive for interesting-sounding titles). Hell, they even gave the exact algo to the LRM; it probably can't get more enabling than that.
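For context, and I'm paraphrasing the setup rather than quoting it: one of the paper's puzzles is Tower of Hanoi, and the "exact algo" handed to the LRM is essentially the textbook recursive procedure, something like this sketch in Python (names and output format are mine, not the paper's):

```python
# Hypothetical sketch of the kind of algorithm the paper supplies in the prompt.
# Standard recursive Tower of Hanoi: produces the optimal 2**n - 1 move sequence.
def hanoi(n, source, target, auxiliary, moves=None):
    """Return the list of (from_peg, to_peg) moves for an n-disk instance."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
        return moves
    hanoi(n - 1, source, auxiliary, target, moves)  # park the n-1 smaller disks
    moves.append((source, target))                  # move the largest disk
    hanoi(n - 1, auxiliary, target, source, moves)  # stack the smaller disks back on top
    return moves

print(len(hanoi(10, "A", "C", "B")))  # 1023 moves, i.e. 2**10 - 1
```

The point is that executing this is pure bookkeeping, no search or insight required, which is why failures even with the algorithm in hand are the interesting part.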
Humans are lossy thinkers and error-prone biological "machines", but an educated+aligned+incentivized one shouldn't have problems following complex instructions/algos (not in a zero-errors way, but in a self-correcting way). We thought LRMs did that too, but the paper shows they actually start using fewer "thinking" tokens past a complexity threshold, and that's terribly worrisome, akin to someone getting frustrated and giving up once a problem gets too difficult, which runs contrary to the idea that these machines can run laboratories by themselves. It's not the last nail in the coffin, because more evidence is needed as always, but taken together with other papers it points towards the limitations of LLMs/LRMs and suggests those limitations may not be solvable with more compute/tokens, but rather by exploring new paradigms (long overdue in my opinion; the industry usually forces one paradigm as a panacea during hype cycles in the name of hypergrowth/sales).
In short, the argument you say the paper and posters ITT are making is very different from what they are actually saying, so beware of the logical leap you are making.
> There is this armchair philosophical idea that a human can simulate any Turing machine and thus our reasoning is "maximally general", and anything that can't do this is not general intelligence. But this is the complete opposite of reality. In our world, anything we know that can perfectly simulate a Turing machine is not general intelligence, and vice versa.
That's typical goalpost moving, and it happens in both directions when talking about "general intelligence" as you say, and has since the dawn of AI and the first neural networks. I'm not following why this is relevant to the discussion, though.