The abandoned plan was perceptual hashing, which should return the same hash for very similar photos, while the new one is a checksum, which should return the same hash only for identical photos. I don’t think that invalidates the point, but it does seem relevant. It certainly makes it much less useful for CSAM scanning or enforcing local dictator whims, since it’s now trivial to defeat if you actually try to.
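To make the difference concrete, here's a toy Python sketch (an average-hash, not Apple's NeuralHash, which is a neural-network-based perceptual hash): flip one pixel or re-save the JPEG and the SHA-256 checksum changes completely, while the perceptual hash usually doesn't.

```python
import hashlib
from PIL import Image  # pip install pillow

def checksum(path: str) -> str:
    """Cryptographic checksum: any single-byte change yields a totally different digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash (aHash): downscale, grayscale, threshold on the mean.
    Small edits, recompression, or resizing usually leave the 64-bit hash unchanged."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance means 'very similar photo'."""
    return bin(a ^ b).count("1")

# Usage sketch: photo_b.jpg is photo_a.jpg re-saved at a different JPEG quality.
# checksum("photo_a.jpg") != checksum("photo_b.jpg")          -> always differs
# hamming(average_hash("photo_a.jpg"),
#         average_hash("photo_b.jpg")) <= 5                    -> usually still "matches"
```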
The big difference is that with photos end-to-end encrypted, Apple can't (by choice or by force) have human "content reviewers" inspect them for unlawful content, as was the intention under Apple's 2021 plan [1] once a threshold of 30 hash matches was met.
Although it was to start with CSAM material, it wasn't clear which other illegal activities Apple would assist governments in tracking. In countries in which [being gay is illegal](https://www.humandignitytrust.org/lgbt-the-law/map-of-crimin...), having Apple employees aid law enforcement by pointing out photographic evidence of unlawful behaviour (for example, a man hugging his husband) would have been a recipe for grotesque human rights abuses.
With photos encrypted, Apple can't be pressured to hire human reviewers to inspect them, and therefore can't be compelled by governments enforcing absurd laws to pass on information about who might be engaging in "unlawful" activities.
>The abandoned plan was perceptual hashing, which should return the same hash for very similar photos . . .
Is there any proof they actually abandoned this? NeuralHash seems alive and well in iOS 16 [1]. Supposedly the rest of the machinery around comparing these hashes against a blinded database, encrypting those matches, and sending them to Apple et al. to be reviewed has all been axed. However, that's not exactly trivial to verify, since Photos is closed source.
Anything sent over a network can be decrypted and inspected with a MITM proxy (by manually adding its root certificate to the device's trust store), as long as only TLS (and no application-level encryption) is in use.
There are a multitude of ways to inspect the decrypted traffic of your own device, whether it's a jailbroken iPhone provided by Apple to the security community or a non-kosher jailbroken device. People inspect this traffic all the time.
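To illustrate that application-level-encryption caveat (illustrative Python using the `cryptography` package, not any real client code): if the body is sealed with a key only the endpoints hold before it ever touches the TLS stack, a TLS-terminating proxy still only sees ciphertext.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Key held only by the endpoints (e.g. derived on-device); the MITM proxy never sees it.
app_key = AESGCM.generate_key(bit_length=256)

def build_request_body(photo_bytes: bytes) -> bytes:
    """Encrypt the payload *before* it is handed to the TLS stack."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(app_key).encrypt(nonce, photo_bytes, None)
    return nonce + ciphertext

body = build_request_body(b"...jpeg bytes...")

# What a MITM proxy with a trusted root cert recovers after stripping TLS:
# the HTTP envelope (headers, URL, lengths) in the clear, but the body is
# still an opaque blob -- reading it requires app_key, not the TLS session keys.
print(body.hex()[:64], "...")
```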
> . . . as long as only TLS (no application-level encryption) is being used.
Therein lies the rub: the payload itself is protected by an encryption scheme in which the keys are intentionally withheld by one party or the other. In the case of Apple's proposed CSAM detection, Apple would be withholding the secret in the form of the unblinded database's derivation key; in the case of Advanced Data Protection, the user's key lives in the SEP, unknown to Apple.
By design the interior of the "safety vouchers" cannot be inspected, supposedly not even by Apple, unless you are in possession of (a) dozens of matching vouchers and (b) the unblinded database. So on the wire you're just going to see opaque encrypted containers representing a photo destined for iCloud.
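A drastically simplified sketch of that idea in Python (the real design is a private-set-intersection construction plus threshold secret sharing, and the client never actually holds the server's derivation secret, so treat the key handling here as purely illustrative):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

SERVER_DERIVATION_KEY = os.urandom(32)  # stand-in for the unblinded database's secret

def voucher_key(neural_hash: bytes) -> bytes:
    """A key reconstructible only by a party holding the server secret *and*
    a matching database entry -- a stand-in for the real PSI construction."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"safety-voucher-demo",
    ).derive(SERVER_DERIVATION_KEY + neural_hash)

def make_voucher(neural_hash: bytes, visual_derivative: bytes) -> bytes:
    """What leaves the device: an opaque blob, even to a network observer."""
    nonce = os.urandom(12)
    return nonce + AESGCM(voucher_key(neural_hash)).encrypt(nonce, visual_derivative, None)

# On the wire you only ever see make_voucher(...) output. Without the derivation
# secret (and, in the real scheme, the ~30 matching vouchers needed to reassemble
# a threshold-shared key) the interior stays sealed.
```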