> So, so weird that they still don't want you to see their models' reasoning process
It's not weird at all. The R1 distills have shown that you can get pretty close to the real thing by post-training on enough completions. I believe Gemini has also stopped showing its thinking steps (apparently the GLM series of open-access models was heavily trained on Gemini data).
ToS violations can't be enforced in any effective way, and certainly not across borders. The only way these labs can maintain whatever moat thinking models give them is to simply not show the thinking parts.