> Regardless of what Apple’s long term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content.
> That’s the message they’re sending to governments, competing services, China, you.
Is it? That’s just something the tweets have read into it.
The message could equally well be: ‘We won’t become an easy political target by ignoring a problem most people care about, like child porn, but we are going to build a point solution to that problem so the public doesn’t force us to bow to government surveillance requests.’
It’s easy to nod along with an anti-Apple slogan, but we need to consider what would happen if they didn’t do this.
If Apple thought this kind of dragnet was a losing political fight that tells me they've become too weak to stand up to unreasonable government demands. Where is the company that won the public opinion battle over unlocking a mass shooter's phone?
This isn’t a government demand. This is something the public cares deeply about, and Apple is solving it their own way.
Public opinion is not in favor of giving safe harbor to pedophiles and child pornographers, and I can’t see why anyone would even want Apple to fight that battle.
Not sure where you got that information. I haven't seen any official announcements, so I assumed, based on it being US-only and rolled out with no fanfare (except critical press stories citing unnamed sources), that it's something the FBI asked for.
Not sure where you got that - seems like you’ve just made up an explanation that it’s an FBI demand out of whole cloth.
What we do know is that it is a proprietary solution using a proprietary hash which only applies to Apple products and that Apple has always presented themselves as a family friendly company that doesn’t support criminal use cases.
Everything points to this being something Apple thinks needs to be solved before the public asks why they haven’t.
If it was a government demand, we’d presumably see Google responding to it too.
You still haven’t explained why not working to deter pedophiles and child pornographers is an important battle for them to fight.
They might think it’s a problem they want to solve on their own terms, and that would seem to be what they have done here.
Not solving a problem people care about just because the government also cares about it seems illogical. I think a lot of people like the idea of corporations taking responsibility for the social problems they cause without needing to be forced to do so by the government.
False positives, what if someone can poison the set of hashes, engineered collisions, etc. And what happens when you come up positive - does the local sheriff just get a warrant and SWAT you at that point? Is the detection of a hash prosecutable? Is it enough to get your teeth kicked in, or get you informally labeled a pedo by your local police? On the flip side, since it's running on the client, could actual pedophiles use it to mutate their images until they can evade the hashing algorithm?
False positives are clearly astronomically unlikely. Not a real issue.
Engineered collisions seem unlikely too. Not impossible, but unless there is a straight-up cryptographic defect in the hash algorithm, it's hard to see how engineered collisions could be made to happen at any scale.
At Apple scale, a once-in-a-million issue is going to ruin the lives of 2000 people. A false positive here is not a mild inconvenience. It means police raiding their house, potentially damaging it, seizing all of their technology for months while it is analyzed, and leaving these people highly stressed while they try to put their lives back together.
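The arithmetic behind that 2000 figure can be made explicit. The two-billion user count is an assumption, roughly in line with Apple's publicly reported active-device numbers around this time:

```python
# Back-of-envelope estimate: a "once in a million" false-positive rate
# applied across Apple's user base. Both inputs are assumptions for
# illustration, not official figures.
users = 2_000_000_000        # assumed active users/devices
false_positive_rate = 1e-6   # "once in a million"

affected = int(users * false_positive_rate)
print(affected)  # 2000 people falsely flagged
```

Even a small shift in either assumption moves the result linearly, which is why the actual collision rate matters so much.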
This isn't some web tech startup where a mistake means someone's t-shirt got sent to the wrong address. People's lives will quite literally be ruined over mistakes here.
Is it a once-in-a-million issue? The collision rate matters. It could easily be much higher, and then it wouldn't matter that it was being used at Apple's scale.
If this were the kind of hash where flipping one bit of the input completely scrambles the output, the bad guys would just flip one bit of the input to evade it. Obviously a PhotoDNA-type hash is going to be easier to cause a collision with, because it averages out a ton of the input data. According to Wikipedia, the classic way to do it is to convert the image to monochrome, divide it into a grid, and average the shade of each cell. If they're doing that, you could probably just pass in that intermediate grid and it would "hash" to the same result as the original picture, with no porn present.
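The grid-averaging scheme described above is essentially what's often called an "average hash." Here's a minimal pure-Python sketch, where the image is just a 2D list of grayscale values and the 8×8 grid size is an assumption (real systems like PhotoDNA are more sophisticated, but the downsampling idea is the same):

```python
# Sketch of the grid-averaging "perceptual hash" described above.
# The image is a plain 2D list of grayscale values 0-255.

def average_hash(pixels, grid=8):
    """Downsample the image to grid x grid cells by averaging, then emit
    one bit per cell: 1 if the cell is brighter than the overall mean."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            # Average the shade of one grid cell.
            ys = range(gy * h // grid, (gy + 1) * h // grid)
            xs = range(gx * w // grid, (gx + 1) * w // grid)
            total = sum(pixels[y][x] for y in ys for x in xs)
            cells.append(total / (len(ys) * len(xs)))
    mean = sum(cells) / len(cells)
    return tuple(int(c > mean) for c in cells)

# A single flipped pixel barely moves any cell average, so the hash
# is unchanged -- which is exactly why bit-flipping doesn't evade it,
# and also why collisions are easier to engineer than with SHA-256.
img = [[0] * 32 + [255] * 32 for _ in range(64)]  # left black, right white
tweaked = [row[:] for row in img]
tweaked[0][0] = 255                               # flip one pixel
print(average_hash(img) == average_hash(tweaked))  # True
```

Note the flip side too: because the hash only sees those 64 averages, any input that reproduces the same cell averages collides with the original, supporting the intermediate-grid point above.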
Why do you think that? There are plenty of whitepapers on fooling NNs by changing random pixels by a bit, so that the picture is not meaningfully changed for a person, but the computer will label it very differently. Do note that these are not cryptographic hashes because they have to recognize the picture even when compressed differently, cropped a bit, etc.
We know perceptual hashing and cryptography have incompatible requirements. Think of three images: an image, the same image with one pixel changed, and a very different image. A perceptual hash should say images 1 and 2 are related and image 3 is not. In cryptographic terms, that is collision resistance failing by design.
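That three-image thought experiment is easy to demonstrate. Below, SHA-256 treats a one-pixel nudge as a brand new input, while even a crude stand-in for a perceptual hash (just a coarse mean-brightness bucket, an assumption purely for illustration) groups images 1 and 2 together and not 3:

```python
import hashlib

img1 = bytes([10] * 1024)            # original "image"
img2 = bytes([10] * 1023 + [11])     # same image, one pixel nudged
img3 = bytes([240] * 1024)           # a very different image

def crypto_hash(img):
    # Cryptographic: any change scrambles the whole digest.
    return hashlib.sha256(img).hexdigest()

def toy_perceptual_hash(img):
    # Toy "perceptual" summary: mean brightness in buckets of 32.
    return sum(img) // len(img) // 32

print(crypto_hash(img1) == crypto_hash(img2))                  # False
print(toy_perceptual_hash(img1) == toy_perceptual_hash(img2))  # True
print(toy_perceptual_hash(img1) == toy_perceptual_hash(img3))  # False
```

The two functions are optimizing for opposite goals: one deliberately destroys similarity, the other deliberately preserves it.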
The hashes will, of course, be provided by local governments, who have the ultimate authority (because they can forbid Apple from selling there, and Tim Cook never says no to money).