The old tenet of the internet was “if you don’t like something, block it”. All of these people saying that this doesn’t work in practice on their social media sites haven’t explained why that is.
No, the old tenet of the internet was "if you don't like something, don't subscribe to it". Usenet, Web 1.0 forums, news feeds, and mailing lists didn't have block features; if you wanted to see something, you subscribed to it. If you didn't, you unsubscribed. Even if someone on a mailing list was shit, you didn't (and couldn't) block them; you'd either filter out their messages or bail.
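For anyone who never did it, here's roughly what that looked like in practice, sketched in Python against a local mbox archive (the path and the address are made up, and in reality most people did this in their newsreader or with procmail rather than a script):

```python
import mailbox
import os

# Hypothetical sketch: skim a local mailing-list archive while skipping
# one noisy sender. The path and address below are placeholders.
NOISY_SENDER = "noisy.person@example.com"

archive = mailbox.mbox(os.path.expanduser("~/mail/some-list.mbox"))
for message in archive:
    if NOISY_SENDER in (message.get("From") or ""):
        continue  # the old-school "killfile": their posts simply never render
    print(message.get("Subject") or "(no subject)")
```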
That's the actual problem with algorithmic feeds; they want to find ways to put things you don't subscribe to into your view and the views of people you follow. You can't opt in to the content you see. Even if you studiously avoid algorithmic feeds, the people you follow won't, and they'll share that content onto your feed anyway. And even if you and the people you follow all avoid them, nothing's stopping the algorithm from putting you into others' views and effectively inviting them into your feed.
Thus blocking/muting has gone from being primarily a self-moderation tool against abuse to being a necessity to stop the endless stream of algorithmic content and commentary coming from people you don't subscribe to, or who don't subscribe to you.
If you're in a marginalised community then the objectionable content comes to you and can't be avoided. We simply aren't provided with the tools to block this.
There is a fundamental asymmetry in harassment.
My account is important to me; I don't want to abandon it or give it up. If my account is under attack I cannot continue to use the site as I would like.
The accounts used for harassment are often disposable, so it doesn't matter to the harasser whether they get blocked or banned. Non-disposable harassing accounts that get blocked can either just move on to the next target, or continue to direct the harassment through screenshots etc. that encourage the disposable accounts to do the dirty work.
The cost to the victim can be meaningful, but the cost to the harasser is non-existent.
And this isn't just applicable to concerted harassment campaigns.
There is also a lot of "drive-by" harassment from accounts that will just reply to any black/trans/queer person or woman who posts online.
It's a question of scale. On the old internet, I subscribed to a few newsgroups that got tens to hundreds of posts a day. You could easily plonk the few people you didn't like.
It's also bullshit. In the old days people would file complaints with your ISP or university, or get you removed from distribution at the server. But usually newcomers were brought in line by the community. Now we have Eternal September, and it's simply not feasible to educate the hundreds of millions of people who don't know how to behave.
Well, unfortunately we aren't on the old internet though, are we?
Before we talk about this approach we would need the tools to take timeline and personal content moderation into our own hands. That, however, isn't in the interest of Twitter, algorithmic timelines, and ad/outrage shoveling in general, and thus it probably won't happen.
I wouldn't even be surprised if excessive blocking led to your account getting flagged.
You are basically criticizing people for not building their own functional shack while all they have at their disposal is a bunch of timber of varying quality and merely a few rocks as "tools".
Massive social media sites are affecting public discourse, elections, international politics. I don't have a Twitter account, but I don't have any choice but to participate in a society that has polarizing hot takes boiled down to 280 characters.
Simple asymmetry. Scammers can generate junk faster than I as an individual can block it. My choices are to either use a moderated platform or abandon social media altogether.
Because community is defined by people, not content. A person who posts overtly racist content probably also posts about visiting Disney with their kid, or how much they love Spider-Man. You can't filter out certain aspects of a person's personality within a community. Either you want to share a space with people or you don't; you can't share a space with people without knowing they're there.
Why not? It seems like some kind of categorization system (even a really naïve keyword-based one) is already used for things like advertisements. If such a system were surfaced, and you could filter out specific categories, you'd then be able to see what the hypothetical racist $relation posts about the family without having to see their hot political takes.
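Something like this would be enough to mute a topic client-side (pure sketch; the categories, keywords, and posts are all invented, and a real classifier would need to be far less naïve than substring matching):

```python
# Hypothetical sketch of a keyword-based category filter for a timeline.
# The categories, keywords, and example posts are invented for illustration.
MUTED_CATEGORIES = {"politics"}

CATEGORY_KEYWORDS = {
    "politics": {"election", "senate", "border"},
    "family": {"kids", "disney", "birthday"},
    "fandom": {"spider-man", "marvel", "comics"},
}

def categories_for(post: str) -> set[str]:
    """Return every category whose keywords appear in the post."""
    text = post.lower()
    return {
        category
        for category, keywords in CATEGORY_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    }

def visible(post: str) -> bool:
    """Hide a post only if it matches a muted category."""
    return not (categories_for(post) & MUTED_CATEGORIES)

timeline = [
    "Took the kids to Disney, best birthday ever",
    "Another unhinged take on the election and the border",
    "The new Spider-Man comics run is fantastic",
]
for post in timeline:
    if visible(post):
        print(post)
```

The point isn't the keyword matching; it's that the filter operates on categories of content rather than on whole people.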
Remember that we are talking about words on a screen 99% of the time. I would be willing to bet most of us are reading on a post-by-post basis and don't know the full spectrum of any given person's beliefs (in fact, I'm not sure that would be healthy or desirable).