
From my first-hand experience designing these algorithms at Meta, running experiments, and listening to feedback from accounts, I would guess that you are not getting reach because your followers do not interact with what you post. In almost every case that people call a "shadow ban", the cause was occasionally an actual bug (code doing something undesired and unexpected), but far more often it was simply people posting bad content.

The ML is good enough to know whether your followers want to engage with political content. If they don't, Facebook will show them less of it.



> The ML is good enough to know whether your followers want to engage with political content. If they don't, Facebook will show them less of it.

This is a very simplistic explanation. If the GP is a political activist and "always comments on draft laws", I would expect their followers to be willing to engage with political content.


It would explain a lot about how Meta operates if the engineers didn't factor things like this in.


We did, and they do. When I was there, people ran "calibration studies" for certain sensitive categories to understand and fix any model biases that would rank content higher for undesirable or unexpected reasons.


I feel a deep discomfort with algorithms optimizing for "engaging" content like this. It feels to me like a form of algorithmic brainwashing, capturing people in filter bubbles. I would prefer it if there were laws against platforms making visibility decisions for the user. If people choose to be in a bubble, that's a form of personal freedom; if an unknowable algorithm chooses for them, not so much.


Most people on Meta's products follow too many accounts to see everything everyone posts. In an alternate reality where ranking is illegal, the game would be different and worse. Accounts would just repost the same content every few minutes to stay at the top of the feed, or adopt some other strategy to get distribution.
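To make the contrast concrete, here is a toy sketch (all names, scores, and the decay formula are hypothetical, not Meta's actual system) of why a purely chronological feed rewards frequent reposting while an engagement-based ranking does not:

```python
# Toy illustration: chronological feed vs. engagement-ranked feed.
# With chronological ordering, whoever posts most recently (or most
# often) wins the top slots; a predicted-engagement score decouples
# feed position from posting frequency.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_minutes: float           # time since posting
    predicted_engagement: float  # hypothetical ML output, 0..1

def chronological(posts):
    # Newest first: trivially gamed by reposting every few minutes.
    return sorted(posts, key=lambda p: p.age_minutes)

def ranked(posts):
    # Blend predicted engagement with a mild recency decay.
    return sorted(
        posts,
        key=lambda p: p.predicted_engagement / (1 + p.age_minutes / 60),
        reverse=True,
    )

posts = [
    Post("spammer", 1, 0.05),   # reposts constantly, low engagement
    Post("spammer", 5, 0.05),
    Post("friend", 120, 0.9),   # posts rarely, but followers engage
]

print([p.author for p in chronological(posts)])  # spammer holds the top slots
print([p.author for p in ranked(posts)])         # friend's post surfaces first
```

The spammer's reposting strategy dominates the chronological feed but gains nothing under the ranked one, since each repost carries the same low engagement score.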


Right, but users could be given prominent ways to deal with this, e.g. tools and buttons to unfollow accounts (in bulk, at first).


Well, it does not make sense, because: 1) I know the baseline of interactions I usually get; and 2) it's not something that happened to my account specifically. It happens to everybody, including community managers, for whom posting content is a monetizable skill, and who resort to obfuscation when they use the word "Palestine".



