Um, no. I don't want to see pics of NSFL gore before the userbase has had a chance to remove them. That's exactly what FB moderators spend most of their time removing, to the point where it psychologically traumatizes them.
You don't have to. That's actually a place where automation could help. You could use image detection to auto-tag content based on what the model thinks it contains. Then, have a list of sensitive tags that are automatically blurred out in the feed (and let users customize the list as they see fit).
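Roughly something like this, as a sketch; the tag names and the per-user preference shape are invented for illustration, and the classifier itself is whatever vision model you point at the upload:

```python
# Sketch of "auto-tag, then blur by preference".
# Tag names and the user_prefs structure are placeholders, not any real system.

DEFAULT_SENSITIVE_TAGS = {"gore", "violence", "nudity", "self-harm"}

def effective_blur_list(user_prefs: dict) -> set[str]:
    """Start from a sane default list and let each user add or remove tags."""
    tags = set(DEFAULT_SENSITIVE_TAGS)
    tags |= set(user_prefs.get("also_blur", []))
    tags -= set(user_prefs.get("never_blur", []))
    return tags

def should_blur(image_tags: set[str], user_prefs: dict) -> bool:
    """Blur the post in this user's feed if any of its tags are on their list."""
    return bool(image_tags & effective_blur_list(user_prefs))

# should_blur({"war", "violence"}, {}) -> True
# should_blur({"war", "violence"}, {"never_blur": ["violence"]}) -> False
```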
If it's something trending towards illegal, toss it into an "emergency" queue for moderators to hand-verify and don't make it visible until it's been checked.
So in your example, if someone uploads war imagery, it would be tagged as "war," "violence," "gore" and be auto-blurred for users. That doesn't mean the post or account needs to be outright nuked, just treated differently from SFW stuff.
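The routing is then just a small decision step on top of the tags. The tag sets below are made up purely to show the shape of it; the real lists would come from policy, not code:

```python
from enum import Enum, auto

class Action(Enum):
    PUBLISH = auto()          # normal SFW post
    PUBLISH_BLURRED = auto()  # visible, but behind a blur/click-through
    HOLD_FOR_REVIEW = auto()  # not visible until a moderator hand-checks it

# Placeholder tag sets for illustration only.
LIKELY_ILLEGAL = {"csam", "terrorism"}
SENSITIVE = {"gore", "violence", "war", "nudity"}

def route(image_tags: set[str]) -> Action:
    if image_tags & LIKELY_ILLEGAL:
        return Action.HOLD_FOR_REVIEW   # "emergency" queue
    if image_tags & SENSITIVE:
        return Action.PUBLISH_BLURRED   # e.g. war imagery: posted, but blurred
    return Action.PUBLISH

# route({"war", "violence", "gore"}) -> Action.PUBLISH_BLURRED
```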
Those are subjective classifications, and so will differ from person to person. And models are pre-trained to recognize these classifications.
Since you mentioned war, I'm reminded of the Black Mirror episode "Men Against Fire", where soldiers have eye implants that make them see enemy combatants as monstrous. (My point being this is effectively what Facebook can do.)
Automation + human intervention, yes. In the setup I described, the worst-case scenario is that something benign gets blurred out, but it doesn't create a press/support nightmare for Meta.
Considering they've open-sourced one of their image detection APIs [1], I'd imagine it's more a problem of accuracy and implementation at scale than a serious technical hurdle.
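The linked API isn't shown here, so purely to illustrate the tagging step with a generic off-the-shelf classifier (Hugging Face's transformers pipeline, not Meta's tooling; the model choice and the 0.3 threshold are arbitrary, and a real deployment would use a model trained on actual moderation categories):

```python
from transformers import pipeline

# Generic image classifier as a stand-in for whatever detection model you'd use.
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

def auto_tags(image_path: str, threshold: float = 0.3) -> set[str]:
    """Return the labels the model is reasonably confident about."""
    preds = classifier(image_path)  # list of {"label": ..., "score": ...}
    return {p["label"] for p in preds if p["score"] >= threshold}

# auto_tags("upload.jpg") -> e.g. {"rifle", "military uniform"} with a suitable model
```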