Ambiguity and Trust in Apple Intelligence

Spencer Ackerman has been a national security reporter for over twenty years, and was partially responsible for the Guardian’s coverage of NSA documents leaked by Edward Snowden. He has good reason to be skeptical of privacy claims in general, and his experience updating his iPhone made him worried:

Recently, I installed Apple’s iOS 18.1 update. Shame on me for not realizing sooner that I should be checking app permissions for Siri — which I had thought I disabled as soon as I bought my device — but after installing it, I noticed this update appeared to change Siri’s defaults.

Apple has a history of changing preferences after updates, and of dark patterns. This is particularly relevant in the case of the iOS 18.1 update because it is the one that introduced Apple Intelligence, which creates new ambiguity between what happens on-device and what is sent to a server farm somewhere.

Allen Pike:

While easy tasks are handled by their on-device models, Apple’s cloud is used for what I’d call moderate-difficulty work: summarizing long emails, generating patches for Photos’ Clean Up feature, or refining prose in response to a prompt in Writing Tools. In my testing, Clean Up works quite well, while the other server-driven features are what you’d expect from a medium-sized model: nothing impressive.

Users shouldn’t need to care whether a task is completed locally or not, so each feature just quietly uses the backend that Apple feels is appropriate. The relative performance of these two systems over time will probably lead to some features being moved from cloud to device, or vice versa.

It would be nice if it truly did not matter, and, for many users, the blurry line between the two is probably fine. Private Cloud Compute seems to be trustworthy. But I fully appreciate Ackerman’s worries. Someone in his position must understand what is being stored and processed, and in which context.

However, Ackerman appears to have interpreted this setting change incorrectly:

I was alarmed to see that even my secure communications apps, like Proton and Signal, were toggled by default to “Learn from this App” and enable some subsidiary functions. I had to swipe them all off.

This setting was, to Ackerman, evidence of Apple “uploading your data to its new cloud-based AI project”, which is a reasonable assumption at a glance. Apple, like every technology company in the past two years, has decided to loudly market everything as being connected to its broader A.I. strategy. Because these features are launching piecemeal, though, it is not clear to a layperson which parts of iOS are related to Apple Intelligence, let alone where those interactions take place.

However, this particular setting is nearly three years old and unrelated to Apple Intelligence. It controls the Siri Suggestions that appear throughout the system. For example, the widget stack on my home screen suggests my alarm clock app when I charge my iPhone at night. It suggests I open the Microsoft Authenticator app on weekday mornings. When I do not answer the phone for what is clearly a scammer, it suggests I return the missed call. It is not all going to be gold.
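For the curious, suggestions like these are powered by the App Intents framework: an app describes the actions it can perform, and the on-device suggestions engine learns when to surface them. What an app exposes might look something like this minimal sketch, where the alarm-clock intent is a hypothetical stand-in, not any real app’s code:

```swift
import AppIntents

// A hypothetical alarm-clock action exposed to the system.
// Declaring an intent like this is what lets Siri Suggestions
// offer it at contextually relevant moments.
struct StartBedtimeAlarmIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Bedtime Alarm"
    static var description = IntentDescription("Arms tomorrow's wake-up alarm.")

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would schedule the alarm here; this is a stub.
        return .result(dialog: "Alarm set for the morning.")
    }
}

// Registering a shortcut makes the action discoverable by Siri
// and the suggestions system.
struct AlarmAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StartBedtimeAlarmIntent(),
            phrases: ["Set my bedtime alarm in \(.applicationName)"],
            shortTitle: "Bedtime Alarm",
            systemImageName: "alarm"
        )
    }
}
```

Nothing in a declaration like this hands over an app’s contents; the system learns patterns of when an action is used, which is the sort of signal the “Learn from this App” toggle appears to govern.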

Even at the time of this setting’s launch, its wording had the potential for confusion, something Apple has not clarified within the Settings app in the intervening years, and it seems to have been enabled by default. While this data may play a role in establishing the “personal context” Apple talks about, since both are part of the App Intents framework, I do not believe it is used to train off-device Apple Intelligence models. However, Apple says this data may leave the device:

Your personal information — which is encrypted and remains private — stays up to date across all your devices where you’re signed in to the same Apple Account. As Siri learns about you on one device, your experience with Siri is improved on your other devices. If you don’t want Siri personalization to update across your devices, you can disable Siri in iCloud settings. See Keep what Siri knows about you up to date on your Apple devices.

While I believe Ackerman is incorrect about the setting’s function and how Apple handles its data, I can see how he interpreted it that way. The company is aggressively marketing Apple Intelligence, even though it is entirely unclear which parts of it are available, how it is integrated throughout the company’s operating systems, and which parts are dependent on off-site processing. There are people who really care about these details, and they should be able to get answers to these questions.

All of this stuff may seem wonderful and novel to Apple and, likely, many millions of users. But others have reasonable concerns. As with any new technology, there are questions that can only be answered by those who created it. Only Apple can clear up the uncertainty around Apple Intelligence, and I believe it should. A cynical explanation is that this ambiguity is deliberate: Apple’s A.I. approach is so much slower than its competitors’ that it is disincentivized from setting clear boundaries. That is possible, but there is plenty of trust to be gained by being upfront now. Americans polled by Pew Research and Gallup have concerns about these technologies. Apple has repeatedly emphasized its privacy bona fides. But these features remain mysterious and suspicious to many people regardless of how much a giant corporation swears it delivers “stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency”.

All of that is nice, I am sure. Perhaps someone at Apple can start the trust-building by clarifying what the Siri switch does in the Settings app, though.