
Reasoning models are just wrappers over the base model. It was pretty obvious they weren't actually reasoning but rather refining the results using some kind of reasoning-like heuristic. At least that's what I assumed when they were released and you couldn't modify the system prompt.
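A minimal sketch of what such a "wrapper" could look like, assuming a generic callable base model; `base_model`, `reasoning_wrapper`, and the prompt format are all illustrative, not any vendor's actual implementation:

```python
# Toy sketch of the "wrapper" idea: repeatedly feed the base model's own
# draft back as context and ask it to refine. `base_model` is a stand-in
# callable, not a real API.
def reasoning_wrapper(base_model, prompt, rounds=3):
    draft = base_model(prompt)
    for _ in range(rounds):
        # Refinement heuristic: re-prompt with the previous draft included.
        draft = base_model(
            f"{prompt}\nDraft answer:\n{draft}\nRevise and improve:"
        )
    return draft

# Usage with a dummy model that just echoes the first line of its prompt:
answer = reasoning_wrapper(lambda p: "draft of: " + p.splitlines()[0], "Q?")
print(answer)
```

The base model never changes here; only the prompt around it does, which is the distinction the comment is drawing.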



I don't understand why this comes as a surprise to so many people. Underlying all of this are sequences of text tokens converted to semantic vectors, with some positional encoding, run through matrix multiplications to compute the probability of the next token. That probability is a function of all the text corpora the model has previously consumed. You can run this process multiple times, chaining one output into the next one's input, but reasoning as humans know it is unlikely to emerge.
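The pipeline described above (token vectors, matrix multiplications, next-token probabilities, chained autoregressively) can be sketched as a toy, assuming a randomly initialized stand-in for a trained model; every name and shape here is illustrative:

```python
# Toy autoregressive next-token loop (not a real LM): fixed random matrices
# stand in for trained weights, and positional encoding is collapsed to a
# simple mean over the context vectors.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 50, 16
embed = rng.normal(size=(VOCAB, DIM))   # token id -> semantic vector
W = rng.normal(size=(DIM, VOCAB))       # "transformer" reduced to one matmul

def next_token(tokens):
    ctx = embed[tokens].mean(axis=0)    # crude stand-in for attention + position
    logits = ctx @ W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                # softmax -> probability of next token
    return int(probs.argmax())          # greedy decode

def generate(prompt, n):
    tokens = list(prompt)
    for _ in range(n):
        tokens.append(next_token(tokens))  # chain output into the next input
    return tokens

out = generate([1, 2, 3], 5)
print(out)
```

However many times you chain the loop, it is still the same probability computation being reapplied, which is the commenter's point.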


Not just wrappers. Some models are fine-tuned with reasoning traces.
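For what fine-tuning on reasoning traces can look like at the data level, here is a sketch; the `<think>` tags and field names are assumptions for illustration, not any lab's actual training format:

```python
# Illustrative formatting of one supervised fine-tuning example that
# includes a reasoning trace; the tag names are hypothetical.
def format_trace(question, trace_steps, answer):
    trace = "\n".join(f"- {s}" for s in trace_steps)
    return (f"Question: {question}\n"
            f"<think>\n{trace}\n</think>\n"
            f"Answer: {answer}")

example = format_trace(
    "What is 12 * 13?",
    ["12 * 13 = 12 * 10 + 12 * 3", "120 + 36 = 156"],
    "156",
)
print(example)
```

Training on strings like this changes the model's weights, not just its prompt, which is why "just wrappers" undersells some of these models.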





