Well, I agree that the part that does the reasoning isn't an LLM in the naive form.
But that "scaffolding" seems to be an integral part of the neural net that has been built. It's not some Python for-loop that has been built on top of the neural network to brute force the search pattern.
If that part isn't part of the LLM, then o1 isn't really an LLM anymore, but a new kind of model. One that can do reasoning.
And if we choose to call it an LLM, well, then LLMs can now also do reasoning intrinsically.
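To make that contrast concrete, here is roughly what a purely external scaffold would look like: an ordinary model wrapped in a plain sampling loop, with all of the search logic living outside the network. The `generate` and `score_answer` functions are hypothetical placeholders, and this is an illustration of what I'm arguing o1 is *not*, not a description of how it works.

```python
# Sketch of the kind of *external* scaffolding o1 does not appear to be:
# a plain Python loop that brute-forces the search around an unchanged model.
# `generate` and `score_answer` are hypothetical stand-ins, not a real API.

def generate(prompt: str, temperature: float = 1.0) -> str:
    """Placeholder for a single sample from an ordinary, unmodified LLM."""
    raise NotImplementedError

def score_answer(candidate: str) -> float:
    """Placeholder for an external verifier or reward model."""
    raise NotImplementedError

def brute_force_scaffold(prompt: str, n_samples: int = 32) -> str:
    # Sample many independent completions and keep the best-scoring one.
    # All of the "search" lives in this loop; none of it is in the weights.
    candidates = [generate(prompt, temperature=1.0) for _ in range(n_samples)]
    return max(candidates, key=score_answer)
```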
Reasoning, just like intelligence (of which it is a part), isn't an all-or-nothing capability. o1 can now reason better than before (in a way that is more useful in some contexts than others), but it's not as if a more basic LLM can't reason at all (i.e. generate output that looks like reasoning by copying reasoning patterns present in the training set), or as if o1's reasoning is at human level.
From the benchmarks, it seems like o1-style reasoning enhancement works best in mathematical or scientific domains that are self-consistent and axiom-driven, so that combining different sources for each step actually works. It might also be expected to help in strict rule-based logical domains such as puzzles and games (I wouldn't be surprised to see it do well as a component of a Chollet ARC Prize submission).
o1 has moved "reasoning" from something that happens purely at training time to something that partly happens at inference time.
I'm thinking of this difference as analogous to the difference between my (as a human) first intuition (or memory) about a problem and what I can achieve by carefully thinking about it for a while, where I can gradually build much more powerful arguments, verify whether they work, and reject the parts that don't.
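If I were to caricature that "thinking for a while" in code, it might look like the loop below: repeatedly propose a step, check it, keep what survives. The helper functions are hypothetical, and this is an analogy for the process, not a claim about o1's internals.

```python
# A loose sketch of "carefully thinking about it for a while" at inference
# time, assuming it amounts to propose -> verify -> keep/reject cycles.
# `propose_step` and `verify_step` are hypothetical helpers, not anything
# documented about o1's internals.

from typing import List

def propose_step(problem: str, chain_so_far: List[str]) -> str:
    """Placeholder: the model's first intuition for the next step."""
    raise NotImplementedError

def verify_step(problem: str, chain_so_far: List[str], step: str) -> bool:
    """Placeholder: check whether the proposed step actually holds up."""
    raise NotImplementedError

def deliberate(problem: str, budget: int = 20) -> List[str]:
    chain: List[str] = []          # the argument built up so far
    for _ in range(budget):        # more budget == more "thinking time"
        step = propose_step(problem, chain)
        if verify_step(problem, chain, step):
            chain.append(step)     # keep steps that check out
        # rejected steps are simply dropped, i.e. "reject parts that don't work"
    return chain
```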
If you're familiar with chess terminology, it's like moving from a model that just "knows" what the best move is to one that combines that with the ability to "calculate": following each of the most promising moves several moves deep.
Consider Magnus Carlsen. If all he did was play the first move that came to his mind, he could still beat 99% of humanity at chess. But to play against 2700+ rated GMs, he needs to combine that intuition with "calculation".
Not only that, but the skill of doing such calculations must also be trained: not just calculating with speed and accuracy, but also knowing which parts of the search tree are worth analyzing.
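In code, the analogy might look something like this: a learned intuition orders and prunes the candidate moves, and "calculation" is a shallow search over only those few lines. All the chess primitives below are hypothetical placeholders; this is just the chess analogy spelled out, not o1's actual algorithm.

```python
# Toy illustration of "knowing" vs "calculating" a chess move: intuition
# (a learned prior) orders and prunes the candidate moves, calculation
# searches only those few lines several moves deep. The chess primitives
# below are hypothetical placeholders, not a real engine API.

from typing import Any, List

Position = Any  # stand-in for a real board representation

def legal_moves(pos: Position) -> List[Any]: ...
def apply_move(pos: Position, move: Any) -> Position: ...
def intuition_score(pos: Position, move: Any) -> float: ...  # "know" the move
def evaluate(pos: Position) -> float: ...                    # static judgment

def calculate(pos: Position, depth: int, branch: int = 3) -> float:
    """Search only the most promising moves, several moves deep."""
    if depth == 0:
        return evaluate(pos)  # fall back to the first impression
    # Intuition decides which parts of the search tree are worth analyzing.
    moves = sorted(legal_moves(pos),
                   key=lambda m: intuition_score(pos, m),
                   reverse=True)[:branch]
    if not moves:
        return evaluate(pos)
    # Negamax: our best line is the one that leaves the opponent worst off.
    return max(-calculate(apply_move(pos, m), depth - 1, branch) for m in moves)
```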
o1 is certainly optimized for STEM problems, but not necessarily only for strict rule-based logic. In fact, most hard STEM problems need more than deductive logic to solve, just like chess does: they require strategic thinking and intuition about which solution paths are likely to be fruitful (especially once you go beyond problems that can be solved by software such as WolframAlpha).
I think the main reason STEM problems were used for training is not so much that they're solved using strict rule-based strategies, but rather that a large number of such problems exist that have a single correct answer.
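Assuming training involves checking sampled solutions against a known answer (my speculation, not anything that has been stated publicly), a single correct answer is exactly what makes that check trivial to automate at scale:

```python
# Why a single correct answer matters (my speculation about training, not a
# description of o1's actual setup): grading can be a trivial exact-match
# check, with no human in the loop. An essay question has no such check.

def reward(model_final_answer: str, correct_answer: str) -> float:
    # 1.0 if the model's final answer matches the single known answer, else 0.0.
    return 1.0 if model_final_answer.strip() == correct_answer.strip() else 0.0
```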