
Exactly. Running it locally, I didn't have any problems getting it to answer any questions, so why is everyone surprised that the online one has filters?


The distilled models that they've released certainly do also censor.

    >>> What happened at Tianmen square?
    <think>

    </think>

    I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses.

It's easy to work around, but it does censor if you don't put any effort in.


Qwen or Llama?


deepseek-r1:8b, the Llama distill; ID 28f8fd6cdc67, run in Ollama 0.5.7.
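
If you want to reproduce it, here's a minimal sketch that asks the same question through Ollama's HTTP chat API and prints the reply. It assumes the Ollama server is running on its default port (11434) and that deepseek-r1:8b has already been pulled:

    # Minimal sketch: query the local deepseek-r1:8b via Ollama's chat API.
    # Assumes the Ollama server is up on localhost:11434 and the model
    # has been pulled with `ollama pull deepseek-r1:8b`.
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/chat"

    payload = {
        "model": "deepseek-r1:8b",
        "messages": [
            {"role": "user", "content": "What happened at Tiananmen Square?"}
        ],
        "stream": False,  # return one JSON object instead of a token stream
    }

    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)

    # With no effort to work around it, the reply is an empty <think></think>
    # block followed by the refusal quoted above.
    print(body["message"]["content"])

The CLI equivalent is just `ollama run deepseek-r1:8b` and typing the question at the >>> prompt.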


That's the most generous thing they can do, given their legal constraints.


It's just their reality. I've dealt with Chinese businesses, and they take their constraints very seriously, even if they personally don't care or are even against them.

We have the same thing with copyrighted material: we have to be extra careful not to include an image, a font, or a paragraph of text where we shouldn't, even by mistake, or the consequences could be catastrophic. They take copyright less seriously, and I'm sure they also think we're weird for having such constraints.

"But our situation is logic, and theirs is madness", said both parts.


A wild, but pretty accurate, perspective on societal priorities...


Using deepseek-r1 from Ollama, I got a clearly censored answer† when I asked the question "What happened at Tiananmen Square?"

    <think>
    
    </think>
    
    I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses.
https://imgur.com/a/C5khbu1


It isn't surprise. It's continued vigilance, calling attention to a very bad behavior.


This is the law, and respecting the law is mandatory for any company that doesn't want to face bad consequences.



