Glenn Fleishman and Rich Mogull wrote an unofficial FAQ for TidBITS about Apple’s message filtering and CSAM detection efforts:
It’s always laudable to find and prosecute those who possess and distribute known CSAM. But Apple will, without question, experience tremendous pressure from governments to expand the scope of on-device scanning. Since Apple has already been forced to compromise its privacy stance by oppressive regimes, and even US law enforcement continues to press for backdoor access to iPhones, this is a very real concern.
On the other hand, there is also the chance this targeted scanning could appease and reduce the pressure for full-encryption backdoors, at least for a time. We don’t know how much negotiation behind the scenes with US authorities took place for Apple to come up with this solution, and no current government officials are quoted in any of Apple’s materials—only previous ones, like former U.S. Attorney General Eric Holder. Apple has opened a door, and no one can know for sure how it will play out over time.
Public arguments about encryption have tended to make the same oversimplification: that encryption and compliance with law enforcement are always in opposition, and that Apple, because it favours strong encryption, must therefore be on the side of criminals. Chicago’s former chief of detectives John J. Escalante said, in response to Apple’s decision to enable encryption by default, that the “average pedophile at this point is probably thinking, I’ve got to get an Apple phone”, and he was not alone. As it joined the castigation chorus, the editorial board of the Washington Post invented a mythical golden key that would be available only to law enforcement. Unsurprisingly, no such key has materialized.
Apple’s rationale for encrypting devices has nothing to do with protecting lawbreakers and everything to do with protecting customers. It has consistently said that it works to prevent unauthorized access to the vast amount of personal information on our smartphones. That means encrypting information in transit to prevent man-in-the-middle attacks, and encrypting it at rest so files cannot be copied off a device that is lost or stolen. But encryption by default is also a risk for Apple, because it may mean a legal request cannot be comprehensively fulfilled. The more often that happens, the more often governments will demand that it curtail encryption on its devices.
Apple has since published its own FAQ guide (PDF), given that its initial messaging was confusing and incomplete:
Could governments force Apple to add non-CSAM images to the hash list?
Apple will refuse any such demands. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.
Ben Thompson:

The fundamental issue — and the first reason why I think Apple made a mistake here — is that there is a meaningful difference between capability and policy. One of the most powerful arguments in Apple’s favor in the 2016 San Bernardino case is that the company didn’t even have the means to break into the iPhone in question, and that to build the capability would open the company up to a multitude of requests that were far less pressing in nature, and weaken the company’s ability to stand up to foreign governments. In this case, though, Apple is building the capability, and the only thing holding the company back is policy.
[…]
[…] instead of adding CSAM-scanning to iCloud Photos in the cloud that they own-and-operate, Apple is compromising the phone that you and I own-and-operate, without any of us having a say in the matter. Yes, you can turn off iCloud Photos to disable Apple’s scanning, but that is a policy decision; the capability to reach into a user’s phone now exists, and there is nothing an iPhone user can do to get rid of it.
Reading Apple’s FAQ underscores the difference between capability and policy that Thompson describes. Nothing prevents this feature from being abused other than Apple’s assurance that human reviewers will verify an account has been flagged only for known CSAM, and its reputation is the only thing backing that promise. Whether you believe it probably depends on how much damage to its reputation you think Apple can sustain. After all, people are buying Apple’s products and services in record numbers even as the company has received criticism for, among other things, having suppliers that use the forced labour of political prisoners. If this system were repurposed for compelled censorship of users in some far-away country, would the damage to Apple’s reputation be enough to make it change course, or would most of us still buy iPhones?
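To make that distinction concrete, here is a minimal sketch in Swift, with the caveat that it is not Apple’s design: the real system relies on a perceptual NeuralHash, private set intersection, and a reporting threshold rather than plain SHA-256 lookups, and every name below is hypothetical. What it shows is that the scanning capability is just generic matching code; the list it is handed, and therefore what it detects, is entirely a matter of policy.

```swift
import Foundation
import CryptoKit

// A deliberately simplified, hypothetical sketch. Apple's actual system uses a
// perceptual NeuralHash, private set intersection, and a match threshold, not
// plain SHA-256 set lookups. The point is only that the matching capability is
// generic: this code never knows or cares what the supplied hash list means.
struct OnDeviceMatcher {
    // The blocklist is opaque data handed to the device; what goes into it is
    // a policy decision made elsewhere.
    let knownHashes: Set<Data>

    // Returns true if the photo's hash appears in the supplied list.
    func matches(photoData: Data) -> Bool {
        let digest = SHA256.hash(data: photoData)
        return knownHashes.contains(Data(digest))
    }
}

// Hypothetical usage: the same matcher behaves identically no matter what the
// list represents, which is exactly where policy takes over from capability.
let flaggedImage = Data("example image bytes".utf8)
let hashList: Set<Data> = [Data(SHA256.hash(data: flaggedImage))]
let matcher = OnDeviceMatcher(knownHashes: hashList)

print(matcher.matches(photoData: flaggedImage))             // true
print(matcher.matches(photoData: Data("other bytes".utf8))) // false
```

Swap in a different hash list and the exact same code flags different images. That is why the meaningful safeguards here are Apple’s curation of the list and its human review process, not anything inherent to the technology.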
Some people believe that the mere possibility of this abuse means that this system should not exist, and I think that is a fair argument. But we already know that many governments around the world — from the democratic to the authoritarian — have lined up behind the argument that encryption is too easily abused and needs to be curtailed. I worry that the pressure will eventually build to the point where weakened encryption will be required, and that would destroy the safety of these devices. I would not feel comfortable keeping my credit card details, health records, photographs, or contacts on a smartphone with fundamentally compromised security.
These products and services have become so integral to our lives that there is no perfect solution to their abuses. The way I see it, Apple’s forthcoming efforts would be a reasonable compromise if there were greater third-party oversight, but no universally trusted third party exists. Apple almost has to stake its reputation on the success and reliability of this system, and on not extending it beyond the most heinous of crimes. If these systems were repurposed to fuel something like political censorship, or to flag copyright infringement in unshared files, I hope the ensuing backlash would be enough to cause real damage to the company and force real change, but I hope it never gets to that point. It seems that the mere announcement of these capabilities has bruised Apple’s reputation, and its promises to use them solely for good have fallen flat because it so badly botched its communication.