Katie McQue, the Guardian:
The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) accuses Apple of vastly undercounting how often child sexual abuse material (CSAM) appears in its products. In a year, child predators used Apple’s iCloud, iMessage and Facetime to store and exchange CSAM in a higher number of cases in England and Wales alone than the company reported across all other countries combined, according to police data obtained by the NSPCC.
Through data gathered via freedom of information requests and shared exclusively with the Guardian, the children’s charity found Apple was implicated in 337 recorded offenses of child abuse images between April 2022 and March 2023 in England and Wales. In 2023, Apple made just 267 reports of suspected CSAM on its platforms worldwide to the National Center for Missing & Exploited Children (NCMEC), which is in stark contrast to its big tech peers, with Google reporting more than 1.47m and Meta reporting more than 30.6m, per NCMEC’s annual report.
Reactions to statistics about this particularly revolting crime are similar to those for any crime figures: higher and lower numbers alike can be read as either positive or negative. More reports could mean better detection or greater awareness, but they could also mean more instances; it is hard to know. Fewer reports might reflect less activity, a smaller platform, or, indeed, undercounting. In Apple’s case, it is likely the latter. It is neither a small platform nor one which prohibits the kinds of channels through which CSAM is distributed.
NCMEC addresses both of these problems, and I think its complaints are valid:
U.S.-based ESPs are legally required to report instances of child sexual abuse material (CSAM) to the CyberTipline when they become aware of them. However, there are no legal requirements regarding proactive efforts to detect CSAM or what information an ESP must include in a CyberTipline report. As a result, there are significant disparities in the volume, content and quality of reports that ESPs submit. For example, one company’s reporting numbers may be higher because they apply robust efforts to identify and remove abusive content from their platforms. Also, even companies that are actively reporting may submit many reports that don’t include the information needed for NCMEC to identify a location or for law enforcement to take action and protect the child involved. These reports add to the volume that must be analyzed but don’t help prevent the abuse that may be occurring.
Not only are many reports not useful, they also form part of an overwhelming caseload which law enforcement struggles to turn into charges. Proposed U.S. legislation is designed to improve the state of CSAM reporting. Unfortunately, the wrong bill is moving forward.
The next paragraph in the Guardian story:
All US-based tech companies are obligated to report all cases of CSAM they detect on their platforms to NCMEC. The Virginia-headquartered organization acts as a clearinghouse for reports of child abuse from around the world, viewing them and sending them to the relevant law enforcement agencies. iMessage is an encrypted messaging service, meaning Apple is unable to see the contents of users’ messages, but so is Meta’s WhatsApp, which made roughly 1.4m reports of suspected CSAM to NCMEC in 2023.
I wish there were more information here about this vast discrepancy: nearly 1.4 million reports from just one of Meta’s businesses compared to just 267 reports from Apple to NCMEC for all of its online services. The most probable explanation, I think, can be found in a 2021 ProPublica investigation by Peter Elkind, Jack Gillum, and Craig Silverman, on which I previously commented. The reporters revealed WhatsApp moderators’ heavy workloads, writing:
Their jobs differ in other ways. Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.
WhatsApp allows users to report any message at any time. Apple’s Messages app, on the other hand, only lets users flag a sender as junk and, even then, only if the sender is not in the user’s contacts and the user has not already replied a few times. Once a conversation is under way, there is no reporting mechanism within the app, as far as I can tell.
The same is true of shared iCloud Photo albums. It should be easy and obvious how to report illicit materials to Apple, but I cannot find a clear mechanism for doing so, either within a shared iCloud photo album or anywhere prominent on Apple’s website. As noted in Section G of the iCloud terms of use, reports must be sent via email to abuse@icloud.com. iCloud albums use long, unguessable URLs, so the likelihood of unintentionally stumbling across CSAM or other criminal materials is low. Nevertheless, it seems to me that notifying Apple of abuse of its services should be much clearer.
Back to the Guardian article:
Apple’s June announcement that it will launch an artificial intelligence system, Apple Intelligence, has been met with alarm by child safety experts.
“The race to roll out Apple AI is worrying when AI-generated child abuse material is putting children at risk and impacting the ability of police to safeguard young victims, especially as Apple pushed back embedding technology to protect children,” said [the NSPCC’s Richard] Collard. Apple says the AI system, which was created in partnership with OpenAI, will customize user experiences, automate tasks and increase privacy for users.
The Guardian ties Apple’s forthcoming service to models able to generate CSAM, which it then connects to models being trained on CSAM. But we do not know what Apple Intelligence is capable of doing because it has not yet been released, nor do we know what it has been trained on. This is not me giving Apple the benefit of the doubt. I think we should know more about how these systems are trained.
We also do not yet know what limitations Apple will set on prompts. It is unclear to me what Collard means when he says the company “pushed back embedding technology to protect children”.
One more little thing: Apple does not say Apple Intelligence was created in partnership with OpenAI; the OpenAI integration is basically a plugin. It also does not say Apple Intelligence will increase privacy for users, only that it is more private than competing services.
I am, for the record, not particularly convinced by any of Apple’s statements or claims. Everything is firmly in “we will see” territory right now.