This assumes no human verification of the flagged video. Maybe the bag DID look like a gun. We'll never know, because modern journalism has no interest in such things. They obtained the required emotional quotes and moved on.
A human verified the video -> that human was the decision-maker. No human verified the video -> the human who gave the AI system a blank check was the decision-maker. It's not really about the quality of journalism here.
We're talking about who should be charged with a crime. I sincerely hope we're going to do more discovery than "ask Dexerto to summarize what WBAL-TV 11 News said".
The superintendent approved a system that they 100% knew would hallucinate guns on students. You assert that if the superintendent required a human in the loop before calling the police, then the superintendent is absolved of responsibility for deploying that system on students.
You are wrong. The superintendent is the person who decided to deploy a system that would lead to swatting kids, and they knew it before they spent taxpayer dollars on that system.
The superintendent also knew that there is no way a school administrator is going to reliably NOT dial SWAT when the AI hallucinates a gun. No administrator is going to err on the side of "I did not see an actual firearm, so everything is fine, even though the AI warned me that it exists." Human-in-the-loop is completely useless in this scenario, and the superintendent knew that too.
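To put rough numbers on why (a back-of-the-envelope sketch in Python; every figure below is a made-up assumption for illustration, not data from this district):

    # How often does the human reviewer actually see a false alarm?
    # All numbers are illustrative assumptions, not measurements.
    students = 1500                  # assumed enrollment under surveillance
    scored_events_per_student = 20   # assumed frames/events scored per student per day
    false_positive_rate = 0.0001     # assumed 0.01% per-event false alarm rate (generous)

    events_per_day = students * scored_events_per_student
    false_alarms_per_day = events_per_day * false_positive_rate
    print(f"events scored per day: {events_per_day:,}")                  # 30,000
    print(f"expected false alarms per day: {false_alarms_per_day:.1f}")  # 3.0

That's roughly three false alarms a day, hundreds per school year, and since real guns on campus are vanishingly rare, essentially every alert the administrator ever reviews is false. You are asking a human to overrule the machine hundreds of times in a row, with a career-ending (or kid-ending) downside if they are wrong once. Of course they dial.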
In your lifetime, there will be more dead kids from bad AI/police combinations than <some cause of death we all think is too high>. We are not close to safely betting lives on it, but people will do it immediately anyway.
So are you implying that if humans surveil kids at random and call the SWAT team whenever a frame of video seems to show one kid with a gun, then it's all OK?
Those journalists, just trying to get (unjustified, dude, unjustified!!) emotes from kids mistakenly held at gunpoint, boy they are terrible... They're just covering up how necessary those mistakes are in our pursuit of teh crime...
If security sees someone carrying a gun in surveillance video on a gun-free campus, and police verify it, then yes, that's justified by all aspects of the law. There are countless examples of surveillance of illegal activity resulting in police action.
Nobody saw a gun in a video. Nobody even saw something that looked like a gun. A chip bag, at most, is going to produce a bulge. No reasonable human is going to look at a kid with a random bulge in their pocket and assume gun. Otherwise we might as well start sending our kids to school naked; this is the kind of paranoia that brought us the McMartin Preschool nonsense.
The presence or absence of human review is irrelevant. A system with stupid humans is just as bad as a system with stupid machines. They are complementary, really.
Nobody saw a gun. We know this because there was no gun.
They didn't see that, though. They saw a kid with a bulge over their pants pocket, suggesting that something was in the pocket. The idea that any kind of algorithm can accurately predict that an amorphous pocket bulge is a gun is just bonkers stupid.
(Ok, ok, with thin, skin-tight, light-colored pants, maybe -- maybe -- it could work. But if it mistook a crumpled-up Doritos bag for a gun, clearly that was not the case here.)
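The base-rate arithmetic backs this up. A quick Bayes sketch, with deliberately generous, made-up numbers:

    # Bayes' rule on a pocket-bulge "gun" alert. All inputs are hypothetical.
    p_gun = 1e-6            # assumed prior: a random scanned bulge is actually a gun
    sensitivity = 0.99      # assumed P(alert | gun) -- generous to the vendor
    false_positive = 0.001  # assumed P(alert | no gun) -- also generous

    p_alert = sensitivity * p_gun + false_positive * (1 - p_gun)
    p_gun_given_alert = sensitivity * p_gun / p_alert
    print(f"P(gun | alert) = {p_gun_given_alert:.4%}")  # ~0.0989%

Even granting the detector 99% sensitivity and a one-in-a-thousand false positive rate, over 99.9% of its gun alerts are false. That is the wall any "accurately predict a gun from a bulge" claim has to climb.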
Decision-maker accountability is the only thing that halts bad decision-making.