One of the curious side effects of a sprawling lawsuit like Epic Games v. Apple is that documents surface which can clarify past reporting.
Take, for example, a 2010 email from Steve Jobs to Apple’s executive team that was first disclosed during Apple’s lawsuit against Samsung. It described the agenda for Apple’s 2011 “Top 100” meeting. In the version released then, one bullet point was redacted, though its second-level items, like “cost goal” and “show model”, were visible. We now know that line read “iPhone Nano plan”.
The sheer number of exhibits released can also create newsworthy items of its own. Sean Hollister at The Verge assembled a large list of interesting tidbits. One of those items was a February 2020 iMessage discussion between Eric Friedman and Herve Sibert. Friedman is responsible for Apple’s Fraud Engineering Algorithms and Risk team, while Sibert manages Security and Fraud. From that discussion, Hollister snipped this back-and-forth:
Friedman The spotlight at Facebook etc is all on trust and safety (fake accounts, etc). In privacy, they suck.
Friedman Our priorities are the inverse.
Friedman Which is why we are the greatest platform for distributing child porn, etc.
Sibert Really? I mean, is there a lot of this in our ecosystem? I thought there were even more opportunities for bad actors on other file sharing systems.
Friedman Yes
The snippet Hollister posted ends there, and it formed the basis for articles by John Koetsier at Forbes and Ben Lovejoy at 9to5Mac. Both writers seized on the third text Friedman sent and quoted it in their headlines.
But this is clearly only a segment of a conversation: a single-page glimpse into a much longer iMessage discussion. Page 17 of 31, as it turns out, in this exhibit document. Given how incendiary Friedman’s statement was, even in a casual chat, I think it is worth being precise about its context.
In preceding messages, Friedman writes about a presentation the two managers have been working on to be shown to Eddy Cue later that morning. Friedman shows a slide describing features within iOS that have revealed fraud and safety issues. The two relevant concerns are reports of child grooming in social features — like iMessages and in-app chat — and in App Store reviews, of all places. Subsequent messages indicate that this is partly what Friedman was referring to.
Here’s the transcript beginning immediately after Friedman responded “Yes” in the above quote:
Friedman But — and here’s the key — we have chosen to not know in enough places where we really cannot say.
Friedman The NYTimes published a bar graph showing how companies are doing in this area. We are on it, but I think it’s an undererport. [sic]
Friedman Also, we KNOW that developers on our platform are running social media integrations that are inherently unsafe. We can do things in our ecosystem to help with that. For example “ask to chat” is a feature we could require developers to adopt and use for U13 accounts.
Sibert There are also lots of rapidly changing trends in public focus
Friedman Let the parents make a decision
Sibert Yes
Friedman We could introduce a fine distinction between malware and software that is behaviorally fraught, guiding parents to have a conversation with kids about their choices.
Friedman discusses how this could be implemented through families set up in iCloud, which sounds similar to one of Apple’s child safety initiatives. But this discussion is not limited to Apple’s first-party features; it appears to cover a range of vectors through which children’s safety could be at risk.
I raise this subtle distinction because the simplified, headline-friendly version gave rise to a bizarre line of questioning in Lovejoy’s article:
Eric Friedman stated, in so many words, that “we are the greatest platform for distributing child porn.” The revelation does, however, raise the question: How could Apple have known this if it wasn’t scanning iCloud accounts… ?
One possibility not raised in Lovejoy’s article is that Friedman was typing imprecisely in a casual iMessage thread, and that his claim was an educated guess rather than a finding drawn from scanning. That seems to me like a reasonable guess for the head of fraud and risk at Apple to make, given that Apple is one of the world’s biggest providers of cloud storage and the maker of some of the most popular third-party developer platforms. Even though Apple has not been checking iCloud Photos or iCloud Drive against a CSAM hash list, it is reasonable to speculate that, sad to say, a billion active devices will involve a lot of CSAM in those cloud services.
But Friedman is right: Apple has almost certainly been underreporting because of the current design of its systems. According to the National Center for Missing and Exploited Children, many companies made millions of reports of CSAM uploaded by users, but Apple does not even appear on the chart the Times created. Given the types of services Apple offers, this is certainly a lack of detection rather than a lack of material.
But Apple does make some reports to NCMEC. So, if it is not scanning its cloud storage services — yet — where are those reports coming from?
Thomas Brewster, writing for Forbes in February 2020:
But in Apple’s case, its staff is clearly being more helpful, first by stopping emails containing abuse material from being sent. A staff member then looks at the content of the files and analyzes the emails. That’s according to a search warrant in which the investigating officer published an Apple employee’s comments on how they first detected “several images of suspected child pornography” being uploaded by an iCloud user and then looked at their emails. (As no charges have been filed against that user, Forbes has chosen to publish neither his name nor the warrant.)
Apple also confirmed to Lovejoy this week that it has automatically checked email attachments against hashes of known CSAM since 2019.
I was able to find the warrant referenced here by Brewster (though, for the same reasons, I will not link to it) and I was struck by the similarities between Apple’s existing CSAM protocol and the description of its forthcoming child safety projects. In both cases, when there is a hash match, someone at Apple verifies the legitimacy of the match before submitting a report.
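To make the shape of that pipeline concrete, here is a minimal, purely illustrative sketch of a hash-match-then-human-review flow. Everything in it, from the use of SHA-256 to the function names and data structures, is an assumption for demonstration only; Apple’s production system is not public, and its announced photo-scanning proposal relies on perceptual hashing (NeuralHash) rather than plain cryptographic hashes.

```python
# Illustrative sketch only, not Apple's implementation: match an attachment's
# fingerprint against a database of known-CSAM hashes, and hold any match for
# human review instead of reporting it automatically.
import hashlib

# Hypothetical database of hashes for known material, as supplied by a
# clearinghouse such as NCMEC. Real systems distribute these in opaque form.
KNOWN_HASHES: set[str] = set()


def attachment_hash(data: bytes) -> str:
    """Compute a fingerprint for an attachment (SHA-256 here for simplicity)."""
    return hashlib.sha256(data).hexdigest()


def screen_attachment(data: bytes, review_queue: list[bytes]) -> bool:
    """Return True if the attachment matches a known hash.

    A match does not trigger an automatic report: the item is queued so a
    human reviewer can verify it is a true positive before anything is filed.
    """
    if attachment_hash(data) in KNOWN_HASHES:
        review_queue.append(data)  # held for manual verification
        return True
    return False
```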
I hope Apple does not offload this emotionally damaging work onto some minimum-wage contractor.
Apple’s announcement two weeks ago set up a high-stakes reorientation of the balance between the privacy of its users and the risks created by its hardware, software, and services. Those risks were also identified by Friedman and Sibert in the iMessage chat above, along with some loose ideas for countermeasures. Whether Apple’s recently proposed projects are a good compromise is still the topic of rigorous debate. But it seems that some of the exhibits exposed in this lawsuit, combined with great reporting from Brewster, create a fuller picture of the nascent days of these child safety efforts, and of how Apple’s current processes might scale.