In Internal Memo, Apple Addresses Concerns Around New Child Safety Initiatives ⇥ 9to5mac.com
Chance Miller, 9to5Mac:
In an internal memo distributed to the teams that worked on this project and obtained by 9to5Mac, Apple acknowledges the “misunderstandings” around the new features, but doubles down on its belief that these features are part of an “important mission” for keeping children safe.
[…]
The memo, which was distributed late last night and obtained by 9to5Mac, was written by Sebastien Marineau-Mes, a software VP at Apple. Marineau-Mes says that Apple will continue to “explain and detail the features” included in this suite of Expanded Protections for Children.
There are legitimate concerns about the technologies Apple announced yesterday, but there is also plenty of confusion. Sometimes, that is the result of oversimplified summaries, as with this tweet from Edward Snowden:
Apple says to “protect children,” they’re updating every iPhone to continuously compare your photos and cloud storage against a secret blacklist. If it finds a hit, they call the cops.
iOS will also tell your parents if you view a nude in iMessage.
Scanning only applies to photos stored in iCloud; Apple was already checking for known images (Update: Apple was not checking. I regret the error.); similar “blacklists” are used by many tech companies; the threshold is not public, but it is not one match; Apple does not call the cops; and the optional parent alert feature only applies to accounts belonging to children under the age of 13. Unfortunately, those caveats do not fit into a tweet, so Snowden was apparently obliged to spread misinformation.
But he is right to be worried about what is not known about this system, and what could be done with these same technologies. The hash databases used by CSAM scanning methods have little oversight. The fuzzy hashing used to match images that have been resized, compressed, or altered means that there is an error rate that, however small, may lead to false positives. And there are understandable worries about how systems like these could be expanded beyond these noble goals.
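To make the fuzzy matching concrete, here is a minimal sketch in Swift of how perceptual hashes are typically compared. The hash values and the tolerance are made up for illustration, and this is not Apple’s NeuralHash or its matching procedure; the point is that visually similar images produce hashes that differ in only a few bits, and matching within a small distance is what lets altered copies match — and what makes occasional false positives possible.

```swift
// A toy perceptual hash comparison. The values and tolerance are invented for
// illustration; this is not Apple's NeuralHash or its matching procedure.

/// Number of differing bits between two 64-bit perceptual hashes.
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}

/// Treat two images as the same if their hashes are within a small distance.
/// The tolerance is what lets resized or recompressed copies still match,
/// and also what makes occasional false positives possible.
func isLikelyMatch(_ photoHash: UInt64, against knownHash: UInt64, tolerance: Int = 5) -> Bool {
    hammingDistance(photoHash, knownHash) <= tolerance
}

// Example: a recompressed copy whose hash differs from the original in three bits.
let original: UInt64 = 0b1011_0010_1110_0001_0101_1100_0011_1010
let recompressed = original ^ 0b0000_0000_0000_0000_0000_0100_0001_0010
print(isLikelyMatch(recompressed, against: original)) // true
```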
Some other confusion came from early reports published before Apple’s announcement. New York Times reporter Daisuke Wakabayashi on Twitter:
Like many parents, I have pictures of my kids without clothes on (try getting a 4 year old to wear clothes). How are they going to differentiate this from legitimate abuse? Are we to trust AI to understand the difference? Seems fraught.
Wakabayashi was far from the only person conflating the two image detection systems. The confusion even after Apple’s announcement shows that many people on Twitter simply do not read, but it also indicates that Apple’s explanations were perhaps unclear. Here is my attempt at differentiating them:
For devices signed into a U.S. Apple ID that is associated with an iCloud Family and has a birthdate less than 18 years ago, Messages will attempt to intercept nude or sexually explicit images. It uses automated categorization that runs entirely on the device. Messages will display warnings in two cases: before an image sent to the device is viewed, and before an image taken with the device is sent. If the account belongs to a child under 13, parents can also be notified.
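If it helps, here is a rough sketch in Swift of that decision flow. The names, types, and placeholder classifier are mine, not Apple’s; it is meant only to show when the warning appears and when the optional parent alert can fire.

```swift
import Foundation

// A hypothetical sketch of the flow described above. The names, types, and
// classifier are illustrative, not Apple's implementation.

struct MessagesAccount {
    let age: Int                         // derived from the Apple ID birthdate
    let isInICloudFamily: Bool
    let parentNotificationsEnabled: Bool
}

enum ImageDirection { case received, aboutToSend }

/// Placeholder for the on-device classifier; the analysis never leaves the device.
func classifiesAsSexuallyExplicit(_ imageData: Data) -> Bool {
    false
}

func handleImage(_ imageData: Data, direction: ImageDirection, account: MessagesAccount) {
    // Only accounts under 18 that are part of an iCloud Family are affected.
    guard account.isInICloudFamily, account.age < 18,
          classifiesAsSexuallyExplicit(imageData) else {
        return
    }

    // Warn before a received image is viewed, or before a taken image is sent.
    showWarning(for: direction)

    // Parents can only be notified for accounts belonging to children under 13.
    if account.age < 13, account.parentNotificationsEnabled {
        notifyParents()
    }
}

func showWarning(for direction: ImageDirection) { /* blur the image and explain the warning */ }
func notifyParents() { /* send the optional alert to the parent account */ }
```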
Before images are uploaded to iCloud Photos, they will be checked against hashes of known CSAM. The images do not leave the device for this comparison, and it is not a check against all apparently nude images. The matching happens on the device using a local copy of a hash database, and the result of each match test is attached to the file when it is encrypted and uploaded to iCloud Photos. When the number of positive matches crosses a threshold, only the matched files and their test results will be decrypted for further analysis and possible referral to the NCMEC.
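Again purely as an illustration, here is what that threshold behaviour looks like in Swift. The voucher structure, the names, and the threshold value are all assumptions of mine; Apple has not published the real number, only that it is more than one.

```swift
import Foundation

// An illustrative sketch of threshold-gated review. The names, structure, and
// threshold value are assumptions; Apple has not published the real number.

struct SafetyVoucher {
    let photoID: UUID
    let matchedKnownCSAMHash: Bool   // match result computed on-device before upload
}

let reviewThreshold = 30   // placeholder; the actual threshold is not public

/// Below the threshold, nothing is decrypted or reviewed. Above it, only the
/// matched items, never the rest of the library, become eligible for review.
func vouchersEligibleForReview(_ vouchers: [SafetyVoucher]) -> [SafetyVoucher] {
    let matches = vouchers.filter { $0.matchedKnownCSAMHash }
    guard matches.count >= reviewThreshold else { return [] }
    return matches
}
```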
The Messages system may flag an innocent photo of a nude child, as in Wakabayashi’s example, if it was sent to or from a device signed into a minor’s Apple ID. The iCloud Photos system should, theoretically, not match such a photo because it is an original image. No matter how much it resembles an image catalogued as part of a broader CSAM investigation, it should register as an entirely unique image.
In any case, all of this requires us to place trust in automated systems using unproven machine learning magic, run by technology companies, and given little third-party oversight. I am not surprised to see people worried by even this limited scope, never mind the possibilities of its expansion.
I hope this is all very clever and works damn near perfectly, but I am skeptical. I tried to sync my iPhone with my Mac earlier today. After twenty minutes of connecting it, disconnecting it, fiddling with settings, and staring at an indeterminate progress bar, I gave up — and that was just copying some files from one device to another over a cable, with every stage made by Apple. I find it hard to believe the same company can promise that it has written some automated systems that can accurately find collections of CSAM without impacting general privacy. I know that these are separate teams and separate people writing separate things. But it is still true that Apple is asking for a great deal of trust that it can get this critically important software right, and I have doubts that any company can.