Bluesky has an algorithmic feed even though people think it doesn't.
I am annoyed by emotionally negative political content on Mastodon and have had to set up a number of filter rules (no "fascist", "republican", "Trump", ...) to make it tolerable.
I found Bluesky's algorithmic feed eliminated about 75% of the emotionally negative content, and I thought it was a good feed without any rules (though I was careful about who I followed). Then the inauguration happened and there was a huge exodus of people from X, and the feed either got overwhelmed by them, or Bluesky was deliberately giving activists and journalists a lot of visibility so they could build up followers quickly.
I've been wanting to build a "negative people" filter to make it easier to choose who to follow. The idea is a ModernBERT + LSTM classifier, which I think will outperform my current miniLM + pooling + SVM classifier. I also need something that can look at images and detect "text in images", such as screenshots and image memes, so I can block those too.
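Roughly what I have in mind for the text side, as a minimal sketch: ModernBERT token embeddings feeding an LSTM head that scores a post as negative or not. The checkpoint name, the frozen encoder, and the two-label setup are assumptions, not a trained pipeline (and ModernBERT needs a recent transformers version).

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "answerdotai/ModernBERT-base"  # assumed checkpoint name

class NegativityClassifier(nn.Module):
    def __init__(self, lstm_hidden=256, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL_ID)
        # LSTM over the token embeddings instead of mean/CLS pooling
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, lstm_hidden,
                            batch_first=True)
        self.head = nn.Linear(lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Encoder frozen to keep the sketch cheap; fine-tuning it end to end
        # is the obvious next step.
        with torch.no_grad():
            tokens = self.encoder(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
        out, _ = self.lstm(tokens)
        # Read the LSTM state at the last non-padding token of each post.
        last = attention_mask.sum(dim=1) - 1
        final = out[torch.arange(out.size(0)), last]
        return self.head(final)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = NegativityClassifier()
batch = tokenizer(["example post text"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape (1, 2)
```

And a rough first pass at the "text in images" screen, assuming pytesseract with a local Tesseract install; the character threshold is just a guess:

```python
from PIL import Image
import pytesseract

def looks_like_text_image(path, min_chars=40):
    # Flag screenshots and image memes: if OCR finds a lot of text, block it.
    text = pytesseract.image_to_string(Image.open(path))
    return len(text.strip()) >= min_chars
```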
So from my viewpoint people can post what they want; I just won't read it. Threads is probably using technology like this to suppress politics and negativity in general.
To tell if something is true, you'd need to build a god. Politics and negativity aren't quite the same thing, but they are fellow travelers.
A lot of memes have text in them, so you'd potentially remove ones where context matters. A lot of people abuse these to redirect traffic, from what I have seen.