> So, so weird that they still don't want you to see their models' reasoning process

It's not weird at all. The R1 distills have shown that you can get pretty close to the real thing with post-training on enough of a stronger model's completions. I believe Gemini has also stopped showing the thinking steps (apparently the GLM series of open-access models was heavily trained on Gemini data).
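
For the curious, here's a minimal sketch of what that kind of distillation looks like: plain supervised fine-tuning of a small student model on reasoning traces sampled from a stronger teacher. The model name ("gpt2") and the single hand-written trace are placeholders for illustration, not what the R1 distills actually used:

    # Toy sketch (not DeepSeek's actual recipe): standard causal-LM SFT of a
    # small "student" on reasoning traces collected from a stronger "teacher".
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in student
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    # Each example is the prompt plus the teacher's full chain of thought.
    # If the API hides the thinking tokens, this data simply doesn't exist.
    traces = [
        ("Q: what is 17 * 24?",
         "Let's think: 17*24 = 17*20 + 17*4 = 340 + 68 = 408."),
    ]

    model.train()
    for prompt, trace in traces:
        ids = tokenizer(prompt + "\n" + trace, return_tensors="pt").input_ids
        # With labels == input_ids, the model computes the usual shifted
        # next-token loss, i.e. it learns to reproduce the teacher's text.
        loss = model(input_ids=ids, labels=ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

The point is just that nothing here requires the teacher's weights, only its visible output, which is exactly why the visible output is what gets withheld.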

ToS violations can't be enforced in any effective way, and certainly not across borders. Their only way to maintain whatever moat thinking models give them is to simply not show the thinking parts.



