> an (actual no-foolin' not just political-pejorative) neo-nazi
Could you give me an example from the USA?
> your product is that the ad you just sold is not going to sit [near]
The evolution of personalized feeds makes this less important. It's not Ford gracing a page in NeoNazi's Monthly magazine with their ad, it's your nazi-laden feed that happens to get a truck ad in passing.
> Twitter does content moderation.
Not well. And not usefully. They tended to block speech they didn't like and leave worse from their friends untouched. Blocking scams, bots, and actual harm seemed to take a backseat to political stunts.
To be useful it will need to be transparent and configurable, and so far Twitter has focused on making it hidden and based on their views, not the users' views.
> It's not Ford gracing a page in NeoNazi's Monthly magazine with their ad, it's your nazi-laden feed that happens to get a truck ad in passing.
Do you think that makes Ford feel better or worse, as a brand that’s mostly tried to avoid Nazi connections over the last 70-odd years?
> They tended to block speech they didn't like and leave worse from their friends untouched.
Two points here:
First, I think you’d be shocked at how many people are super OK with blocking the speech Twitter blocked, and indeed prefer it. As a corporate money-making entity, Twitter is concerned about maximizing its user base, and if blocking the neo-nazi means 3 other people stay on the platform, by golly that’s the move Twitter Inc the business will make. It turns out speech is sometimes zero-sum or worse: one person’s instance of free speech makes another person feel like their life is in danger.
Second, Twitter’s customer is not you, nor the public, nor democracy. Twitter’s customer is Ford, and if you’ve noticed from advertising generally in this country lately, most brands have decided that embracing things like LGBTQ rights and immigration, at least at a surface level, is better for their global brand than not, so Twitter’s content moderation strategy will reflect that. Most of the people complaining about Twitter’s content moderation strategy were already mad at Cheerios for running ads celebrating gay marriage, so it’s hard to say they’ve really missed the mark here.
Again, Twitter is an advertising company whose supply is “users”, and they need the maximal supply of palatable users (read: users without paired lightning bolts in their bio) for their customers, Advertisers. That’s what the platform is, that’s what their business is, and that’s who they’re moderating content for.
> Do you think that makes Ford feel better or worse
Vastly better. Because it's not a choice they made but a choice the user made. Someone reading a nazi blog won't be upset, and someone not reading about nazis won't see the ad running alongside that content. You'd have to go looking to create this problem situation and then screenshot it for evidence, because it wouldn't show up for normal people using the system.
> I think you’d be shocked at how many people are super OK with blocking the speech Twitter blocked, and indeed prefer it
I think most people would be happy that nazis and child-abuse photos would be blocked, but most people don't know that Twitter seemed to put more time into political censorship. It was easier for a feminist to get banned from Twitter for saying women need sex-based spaces than for the accounts sending death threats to the feminist for saying it. Child sex-abuse material and accounts trading it remained up for months, if not years.
> if you’ve noticed from advertising generally in this country lately, most brands have decided that embracing things
Not most by far. Some brands have decided to play very virtue-forward; most just made sure they cut out unintentional offense. No need to show your truck being driven to a Redskins game when you could show it at a Huskies game.
>> To be useful it will need to be transparent and configurable, and so far Twitter has focused on making it hidden and based on their views, not the users' views
> Advertisers. that’s who they’re moderating content for
No, they didn't block the Hunter Biden laptop story because of advertiser pressure. They've been mixing their own personal views with what the advertisers supposedly want.
Besides, what I'm talking about would serve advertisers just as well as the current system. I'm not saying Twitter shouldn't moderate, just that users and advertisers would both be better served by a configurable categorizing system where they (and you if you choose, or a public blocklist you subscribe to) can assign scores based on nazi words, dogwhistles, etc., and users can choose whether or not to block that content.
Advertisers would end up with better-categorized posts, because Twitter and the users would be collaborating on this instead of being at odds like now, and they could block more effectively: not just on whatever Twitter considers ad-worthy today, but with their own list of never-show keywords as well. And users would finally end up with something trustworthy, because instead of "bad content" being deleted it would simply be hidden - shadow banned - and available to be audited.
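To make that concrete, here's a rough sketch of what I mean by a configurable scoring system. All the names and the keyword-counting are mine and purely illustrative (a real system would use classifiers, not raw keyword matches), but the shape is the point: scores per category from merged lists, per-viewer thresholds, advertiser never-show terms, and hiding instead of deleting.

```python
# Illustrative sketch only: per-category keyword scoring with per-viewer
# thresholds. "Blocked" content is merely hidden, never deleted, so it
# remains auditable.
from dataclasses import dataclass, field

@dataclass
class ScoringConfig:
    # category -> flagged keywords (platform, user, and public lists merged)
    keyword_lists: dict[str, list[str]]
    # category -> score above which this viewer hides the post
    thresholds: dict[str, int] = field(default_factory=dict)
    # advertiser-style hard filter: never show a post containing these words
    never_show: set[str] = field(default_factory=set)

def score_post(text: str, config: ScoringConfig) -> dict[str, int]:
    """Count keyword hits per category (a real system would use classifiers)."""
    words = text.lower().split()
    return {
        category: sum(words.count(kw.lower()) for kw in keywords)
        for category, keywords in config.keyword_lists.items()
    }

def is_visible(text: str, config: ScoringConfig) -> bool:
    """Hide (don't delete) posts that trip the never-show list or exceed thresholds."""
    words = set(text.lower().split())
    if words & {w.lower() for w in config.never_show}:
        return False
    scores = score_post(text, config)
    return all(scores.get(cat, 0) <= limit for cat, limit in config.thresholds.items())

# Example: a user subscribes to a public keyword list and hides any hit,
# while an advertiser adds its own never-show terms on top.
config = ScoringConfig(
    keyword_lists={"hate": ["dogwhistle-term"], "scam": ["crypto-giveaway"]},
    thresholds={"hate": 0, "scam": 0},
    never_show={"brand-unsafe-term"},
)
print(is_visible("great truck launch today", config))       # True
print(is_visible("join this crypto-giveaway now", config))  # False (hidden, not deleted)
```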