Everybody Panic: A Finder Bug, Since Fixed, Was Sending Empty API Calls to Apple

Security researcher Jeffrey Paul was using a Mac running MacOS Ventura 13.1 without signing into iCloud, and with many internet-connected services blocked using Little Snitch:

Imagine my surprise when browsing these images in the Finder, Little Snitch told me that macOS is now connecting to Apple APIs via a program named mediaanalysisd (Media Analysis Daemon – a background process for analyzing media files).

That sure is surprising. The daemon in question is associated with Visual Look Up, which can be turned off by unchecking Siri Suggestions in System Settings. Given that Paul has switched it off, mediaanalysisd should not be sending any network requests at all. That is a privacy violation, and it surely needs to be fixed.

So Paul has found a MacOS bug, and he has a couple of options. He could research it further to understand what information is being sent to Apple and publish a thorough, if perhaps dry, report. Or he could stop at the Little Snitch notification and spin stories.

Which do you think he did?

Here is how Paul frames it:

It’s very important to contextualize this. In 2021 Apple announced their plan to begin clientside scanning of media files, on device, to detect child pornography (“CSAM”, the term of art used to describe such images), so that devices that end users have paid for can be used to provide police surveillance in direct opposition to the wishes of the owner of the device. CP being, of course, one of the classic Four Horsemen of the Infocalypse trotted out by those engaged in misguided attempts to justify the unjustifiable: violations of our human rights.

I think you can probably see where this is going.

Paul:

Some weeks later, in an apparent (but not really) capitulation, Apple published the following statement:

Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.

The media erroneously reported this as Apple reversing course.

Read the statement carefully again, and recognize that at no point did Apple say they reversed course or do not intend to proceed with privacy-violating scanning features. As a point of fact, Apple said they still intend to release the features and that they consider them “critically important”.

That was certainly true of the first statement Apple provided in response to criticism of its CSAM detection plans, in September 2021, which media outlets accurately reported as a “delay” or “pause”. But the claim that the media erred, and that Apple intends to continue building the feature, is contradicted by a statement Apple provided to Wired in December 2022:

“After extensive consultation with experts to gather feedback on child protection initiatives we proposed last year, we are deepening our investment in the Communication Safety feature that we first made available in December 2021,” the company told WIRED in a statement. “We have further decided to not move forward with our previously proposed CSAM detection tool for iCloud Photos. Children can be protected without companies combing through personal data, and we will continue working with governments, child advocates, and other companies to help protect young people, preserve their right to privacy, and make the internet a safer place for children and for us all.”

It is fair to report, as Wired and others did, that this constitutes Apple ending development of on-device CSAM detection for iCloud Photos.

Paul does not stop there. He implies Apple has lied about stopping development, and that this bug, in which Quick Look previews in the Finder trigger the Visual Look Up process, is proof it has quietly launched that scanning in MacOS. That is simply untrue. Howard Oakley reproduced the bug in a virtual machine and saw nothing relevant in the logs, and when Mysk monitored the activity, they found the API request was entirely empty. The issue appeared in MacOS 13.1, and Apple fixed it in 13.2, released earlier this month.
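This kind of verification is more approachable than it sounds: the logs Oakley combed through are part of the unified log, which is readable with the log command or programmatically. Here is a minimal sketch, mine rather than anything Oakley or Mysk published, using the OSLogStore API available since macOS 12; reading the system-wide store requires root or a logging entitlement, and checking the contents of the network request itself still means intercepting the traffic, as Mysk did.

```swift
import Foundation
import OSLog

// Rough sketch: read the local unified log store (macOS 12+, needs root or a
// logging entitlement) and print recent entries written by mediaanalysisd.
do {
    let store = try OSLogStore.local()

    // Start reading from ten minutes ago.
    let start = store.position(date: Date().addingTimeInterval(-600))

    for entry in try store.getEntries(at: start) {
        // Keep only log messages emitted by the mediaanalysisd process.
        if let logEntry = entry as? OSLogEntryLog,
           logEntry.process == "mediaanalysisd" {
            print(logEntry.date, logEntry.composedMessage)
        }
    }
} catch {
    print("Could not read the log store: \(error)")
}
```

Run something like this while previewing a photo in the Finder and you at least have actual log entries to reason about, which is a step beyond a Little Snitch alert.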

But if Paul is going to speculate, he may as well take those conclusions as far as his imagination will go:

Who knows what types of media governments will legally require Apple to scan for in the future? Today it’s CP, tomorrow it’s cartoons of the prophet (PBUH please don’t decapitate me). One thing you can be sure of is that this database of images for which your hardware will now be used to scan will regularly be amended and updated by people who are not you and are not accountable to you.

Nothing about these images was being sent to Apple when this bug was present in MacOS 13.1, despite what Paul suggested throughout his article. A technically savvy security researcher like him could have figured this out instead of rushing to conclusions. But, granted, there is still reason to be skeptical; even if nothing about users’ images has been sent to Apple by this bug, there is no way to know whether the company has some secret database of red-flag files. This bug violated users’ trust. The last time something like this happened was the OCSP fiasco, after which Apple promised a way to opt out of Gatekeeper checks by the end of 2021. As of this writing, no such option is available.

However, it is irresponsible of Paul to post such alarmist claims based on a tiny shred of evidence. Yes, mediaanalysisd was making an empty API call despite Siri Suggestions being switched off, and that is not good. But veering into speculation to fill in missing information is not productive, and neither is misrepresenting what little information has been provided. Paul says “Apple PR exploits poor reading comprehension ability”, yet his own incuriosity has produced a widely shared conspiracy theory with no basis in fact. If you do not trust Apple’s statements or behaviour, I understand that perspective; I do not think blanket trust is helpful. But it is unwise to put blanket trust in alarmist reports like these, either. These are extraordinary claims made without evidence, and they can be dismissed until evidence is provided.