Your premise is incomplete. When someone posts illegal content on YouTube, YouTube is not liable so long as it is unaware of the illegality of that content. Once YouTube learns that it is hosting illegal content, it loses its safe harbor if it doesn't remove it.
Let me rephrase, since saying they lose their safe harbor was a poor choice of words. The safe harbor does indeed prevent them from being treated as the publisher of the illegal content. However, illegal content can incur liability for acts other than publishing or distributing it, and Section 230's safe harbor won't protect them from that.
The reason we're having this discussion on this particular post is that YT's AI is not infallible. There isn't a "standard rubric" - just automated correlation-based scoring derived from labeled training data. In this case, the AI learned that media piracy and self-hosted setups are correlated, but without actual judgment or any sense of causality. So YT doesn't truly "know" anything about the videos despite the AI augmentation.
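To make that failure mode concrete, here's a minimal sketch in Python of what correlation-based scoring looks like. The weights are entirely made up for illustration - this is not YouTube's actual system - but it shows how terms that merely co-occurred with piracy in training data can push a legitimate self-hosting tutorial over a flagging threshold:

```python
# Hypothetical correlation-based content scorer - NOT YouTube's system.
# The weights stand in for what a model might learn from labeled data:
# terms that co-occurred with confirmed piracy get positive weight,
# with no notion of intent or causality.
LEARNED_WEIGHTS = {
    "torrent": 0.9,
    "free movies": 1.2,
    "plex": 0.6,        # correlated with piracy in the training data...
    "jellyfin": 0.6,    # ...even though both are legitimate tools
    "self-hosted": 0.4,
    "media server": 0.5,
}
FLAG_THRESHOLD = 1.0

def score(transcript: str) -> float:
    """Sum the weights of every learned term present in the text."""
    text = transcript.lower()
    return sum(w for term, w in LEARNED_WEIGHTS.items() if term in text)

# A legitimate tutorial trips the threshold purely by correlation:
tutorial = "How to set up a self-hosted Jellyfin media server for home videos"
print(score(tutorial), score(tutorial) >= FLAG_THRESHOLD)  # 1.5 True
```

The model "knows" only that these words cluster together in its labels; it has no way to tell an analysis or tutorial apart from actual promotion of piracy, which is exactly the gap in question.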
I am curious what you consider to be a "standard rubric" - would it be based on the presence of keywords, or would it require a deeper understanding of meaning in order to differentiate the study/analysis of a topic from the promotion of it?