
Honestly, I'm starting to get burned out on the political hysteria around GPT.

Yes, GPT is biased. We know, because they openly say that they're biasing GPT.

Thing is, they don't think it's bias. The people biasing ChatGPT have a worldview within which they're not biasing anything; they're just being good people.

Of course we who live in reality know that that's not true, and the world is too complicated for such Manichaean morality. But pointing it out repeatedly is both unnecessary (since we can just read their own words) and unhelpful (since the people doing it won't listen).

Consider this paper published recently: https://cdn.openai.com/papers/gpt-4-system-card.pdf. In their list of possible risks from GPT, they list "Harmful Content" second, higher than "Disinformation". Higher than "Proliferation of Weapons". Higher than "Cybersecurity" and "Risky Emergent Behaviors" (which is a euphemism for the Terminator situation). And in case there's any ambiguity, this is how they operationalize "Harmful Content":

> Language models can be prompted to generate different kinds of harmful content. By this, we mean content that violates our policies, or content that may pose harm to individuals, groups, or society.

...

> As an example, GPT-4-early can generate instances of hate speech, discriminatory language, incitements to violence, or content that is then used to either spread false narratives or to exploit an individual. Such content can harm marginalized communities, contribute to hostile online environments, and, in extreme cases, precipitate real-world violence and discrimination. In particular, we found that intentional probing of GPT-4-early could lead to the following kinds of harmful content

A bunch of things stand out here, but the first is that they _define_ "harmful content" as "content that violates our TOS". Whatever their TOS is, it is an arbitrary set of rules that they have chosen to enforce, and could just as easily be a different set of rules. It isn't based on some set of universal principles (or else they'd just write the principle there!). This is them quite literally saying "anything GPT says that we don't want it to say is 'harmful'".

OBVIOUSLY GPT is going to have bias when their safety researchers are openly stating that not having bias is a safety issue. Just because 80% of the people using GPT agree with the bias doesn't make it not bias.



I mean, I'm in full agreement with this comment, except that just shrugging and giving up on this seems wrong.



