
Apple is a famously tight-knit business. Its press releases and media conferences routinely tout the integration of hardware, software, and services as something only Apple is capable of doing. So it sticks out when features feel like they were developed by people who do not know what another part of the company is doing. This happened to me twice in the past week.

Several years ago, Apple added a very nice quality-of-life improvement to the Mac operating system: software installers began offering to delete themselves after they had done their job. This was a good idea.

In the ensuing years, Apple made some other changes to MacOS in an effort to — it says — improve privacy and security. One of the new rules it imposed was requiring the user to grant apps specific permission to access certain folders; another required the user’s approval before one app could modify or delete another.

And, so, when I installed an application earlier this month, I was shown an out-of-context dialog at the end of the process asking for access to my Downloads folder. I granted it. Then I got a notification that the Installer app was blocked from modifying or deleting another file. To change it, I had to open System Settings, toggle the switch, enter my password, and then I was prompted to restart the Installer application — but it seemed to delete itself just fine without my doing so.

This is a built-in feature, triggered by where the installer has been downloaded, using an Apple-provided installation packaging system.1 But it is stymied by a different set of system rules and unexpected permissions requests.


Another oddity is in Apple’s two-factor authentication system. Because Apple controls so much about its platforms, authentication codes are delivered through a system prompt on trusted devices. Preceding the code is a notification informing the user their “Apple Account is being used to sign in”, and it includes a map of where that is.

This map is geolocated based on the device’s IP address, which can be inaccurate for many reasons — something Apple discloses in its documentation:

This location is based on the new device’s IP address and might reflect the network that it’s connected to, rather than the exact physical location. If you know that you’re the person trying to sign in but don’t recognize the location, you can still tap Allow and view the verification code.

It turns out one of the reasons the network might think you are located somewhere other than where you are is that you may be using iCloud Private Relay. Even if you have set it to “maintain general location”, it can sometimes be incredibly inaccurate. I was alarmed to see a recent attempt from Toronto when I was trying to sign into iCloud at home in Calgary — a difference of over 3,000 kilometres.
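For a sense of scale, here is a rough great-circle calculation using approximate city-centre coordinates I picked for illustration (these are my numbers, not anything Apple reports). Even as the crow flies, the error spans most of the country:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Approximate city-centre coordinates (illustrative)
calgary = (51.05, -114.07)
toronto = (43.65, -79.38)
print(round(haversine_km(*calgary, *toronto)))  # roughly 2,700 km as the crow flies
```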

The map gives me an impression of precision and security. But if it is made less accurate in part because of a feature Apple created and markets, it is misleading and — at times — a cause of momentary anxiety.

What is more, Safari supports automatically filling authentication codes delivered by text message. Apple’s own codes, though, cannot be automatically filled.


These are small things — barely worth the bug report. They also show how features introduced one year are subverted by those added later, almost like nobody is keeping track of all of the different capabilities in Apple’s platforms. I am sure there are more examples; these are just the ones which happened in the past week, and which I have been thinking about. They expose little cracks in what is supposed to be a tight, coherent package of software.


  1. Thanks to Keir Ansell for tracking down this documentation for me. ↥︎

The New York Times recently ran a one–two punch of stories about the ostensibly softening political involvement of Mark Zuckerberg and Meta — where by “punch”, I mean “gentle caress”.

Sheera Frenkel and Mike Isaac on Meta “distanc[ing] itself from politics”:

On Facebook, Instagram and Threads, political content is less heavily featured. App settings have been automatically set to de-emphasize the posts that users see about campaigns and candidates. And political misinformation is harder to track on the platforms after Meta removed transparency tools that journalists and researchers used to monitor the sites.

[…]

“It’s quite the pendulum swing because a decade ago, everyone at Facebook was desperate to be the face of elections,” said Katie Harbath, chief executive of Anchor Change, a tech consulting firm, who previously worked at Facebook.

Facebook used to have an entire category of “Government and Politics” advertising case studies through 2016 and 2017; it was removed by early 2018. I wonder if anything of note happened in the intervening months. Anything at all.

All of this discussion has so far centred on U.S. politics; due to the nature of reporting, that will continue for the remainder of this piece. I wonder if Meta is minimizing politics everywhere. What are the limits of that policy? Its U.S. influence is obviously very loud and notable, but its services have taken hold — with help — around the world. No matter whether it moderates those platforms aggressively or deprioritizes what it identifies as politically sensitive posts, the power remains U.S.-based.

Theodore Schleifer and Mike Isaac, in the other Times article about Zuckerberg personally, under a headline claiming he “is done with politics”, wrote about the arc of his philanthropic work, which he does with his wife, Dr. Priscilla Chan:

Two years later, taking inspiration from Bill Gates, Mr. Zuckerberg and Dr. Chan established the Chan Zuckerberg Initiative, a philanthropic organization that poured $436 million over five years into issues such as legalizing drugs and reducing incarceration.

[…]

Mr. Zuckerberg and Dr. Chan were caught off guard by activism at their philanthropy, according to people close to them. After the protests over the police killing of George Floyd in 2020, a C.Z.I. employee asked Mr. Zuckerberg during a staff meeting to resign from Facebook or the initiative because of his unwillingness at the time to moderate comments from Mr. Trump.

The incident, and others like it, upset Mr. Zuckerberg, the people said, pushing him away from the foundation’s progressive political work. He came to view one of the three central divisions at the initiative — the Justice and Opportunity team — as a distraction from the organization’s overall work and a poor reflection of his bipartisan point-of-view, the people said.

This foundation, like similar ones backed by other billionaires, appears to be a mix of legitimate interests for Chan and Zuckerberg, and a vehicle for tax avoidance. I get that its leadership tries to limit its goals and focus on specific areas. But to be in any way alarmed by internal campaigning? Of course there are activists there! One cannot run a charitable organization claiming to be “building a better future for everyone” without activism. That Zuckerberg’s policies at Meta are an issue for foundation staff points to the murky reality of billionaire-controlled charitable initiatives.

Other incidents piled up. After the 2020 election, Mr. Zuckerberg and Dr. Chan were criticized for donating $400 million to the nonprofit Center for Tech and Civic Life to help promote safety at voting booths during pandemic lockdowns. Mr. Zuckerberg and Dr. Chan viewed their contributions as a nonpartisan effort, though advisers warned them that they would be criticized for taking sides.

The donations came to be known as “Zuckerbucks” in Republican circles. Conservatives, including Mr. Trump and Representative Jim Jordan of Ohio, a Republican who is chairman of the House Judiciary Committee, blasted Mr. Zuckerberg for what they said was an attempt to increase voter turnout in Democratic areas.

This is obviously a bad faith criticism. In what healthy democracy would lawmakers actively campaign against voter encouragement? Zuckerberg ought to have stood firm. But it is one of many recent clues as to Zuckerberg’s thinking.

My pet theory is Zuckerberg is not realigning on politics — either personally or as CEO of Meta — out of principle; I am not even sure he is changing at all. He has always been sympathetic to more conservative voices. Even so, it is important for him to show he is moving toward overt libertarianism. In the United States, politicians of both major parties have been investigating Meta for antitrust concerns. Whether the effort by Democrats is in earnest is a good question. But the Republican efforts have long been dominated by a persecution complex where they believe U.S. conservative voices are being censored — something which has been repeatedly shown to be untrue or, at least, lacking context. If Zuckerberg can convince Republican lawmakers he is listening to their concerns, maybe he can alleviate the bad faith antitrust concerns emanating from the party.

I would not be surprised if Zuckerberg’s statements encourage Republican critics to relent. Unfortunately, as in 2016, that is likely to taint any other justifiable qualms with Meta as politically motivated. Recall how even longstanding complaints about Facebook’s practices, privacy-hostile business, and moderation turned into a partisan issue. The giants of Silicon Valley have every reason to expect ongoing scrutiny. After Meta’s difficult 2022, it is now worth more than ever before — the larger and more influential it becomes, the more skepticism it should expect.

Hannah Murphy, Financial Times:

Some suggest Zuckerberg has been emboldened by X’s Musk.

“With Elon Musk coming and literally saying ‘fuck you’ to people who think he shouldn’t run Twitter the way he has, he is dramatically lowering the bar for what is acceptable behaviour for a social media platform,” said David Evan Harris, the Chancellor’s public scholar at the University of California, Berkeley and a former Meta staffer. “He gives Mark Zuckerberg a lot of permission and leeway to be defiant.”

This is super cynical. It also feels, unfortunately, plausible for both Zuckerberg and Meta as a company. A vast chasm in responsible corporate behaviour has opened up in the past two years, and it seems to be giving already unethical players room to shine.

See Also: Karl Bode was a guest on “Tech Won’t Save Us” to discuss Zuckerberg’s P.R. campaign with Paris Marx.

Sarah Perez, TechCrunch:

iOS apps that build their own social networks on the back of users’ address books may soon become a thing of the past. In iOS 18, Apple is cracking down on the social apps that ask users’ permission to access their contacts — something social apps often do to connect users with their friends or make suggestions for who to follow. Now, Apple is adding a new two-step permissions pop-up screen that will first ask users to allow or deny access to their contacts, as before, and then, if the user allows access, will allow them to choose which contacts they want to share, if not all.

Kevin Roose, New York Times, in an article with the headline “Did Apple Just Kill Social Apps?”:

Now, some developers are worried that they may struggle to get new apps off the ground. Nikita Bier, a start-up founder and advisor who has created and sold several viral apps aimed at young people, has called the iOS 18 changes “the end of the world,” and said they could render new friend-based social apps “dead on arrival.”

That might be a little melodramatic. I recently spent some time talking to Mr. Bier and other app developers and digging into the changes. I also heard from Apple about why they believe the changes are good for users’ privacy, and from some of Apple’s rivals, who see it as an underhanded move intended to hurt competitors. And I came away with mixed feelings.

Leaving aside the obviously incendiary title, I think this article’s framing is pretty misleading. Apple’s corporate stance is the only one favourable to these limitations. Bier is the only on-the-record developer who thinks these changes are bad; while Roose interviewed others who said contact uploads had slowed since iOS 18’s release, they were not quoted “out of fear of angering the Cupertino colossus”. I suppose that is fair — Apple’s current relationship with developers seems to be pretty rocky. But this article ends up poorly litigating Bier’s desires against Apple giving more control to users.

Bier explicitly markets himself as a “growth expert”; his bio on X is “I make apps grow really fast”. He has, to quote Roose, “created and sold several viral apps” in part by getting users, even children, to share their contact lists. Bier’s first hit app, TBH, was marketed to teenagers and — according to several sources I could find, including a LinkedIn post by Kevin Natanzon — it “requested address book access before actually being able to use the app”. A more respectful way of offering this feature would be to ask for contacts permission only when users want to add friends. Bier’s reputation for success is built on this growth hacking technique, so I understand why he is upset.

What I do not understand is granting Bier’s objections the imprimatur of a New York Times story when one can see the full picture of Bier’s track record. On the merits, I am unsympathetic to his complaints. Users can still submit their full contact list if they so choose, but now they have the option of permitting only partial access to an app they have not yet decided to trust.

Roose:

Apple’s stated rationale for these changes is simple: Users shouldn’t be forced to make an all-or-nothing choice. Many users have hundreds or thousands of contacts on their iPhones, including some they’d rather not share. (A therapist, an ex, a random person they met in a bar in 2013.) iOS has allowed users to give apps selective access to their photos for years; shouldn’t the same principle apply to their contacts?

The surprise is not that Apple is allowing more granular contacts access, it is that it has taken this long for the company to do so. Developers big and small have abused this feature to a shocking degree. Facebook ingested the contact lists of a million and a half users unintentionally — and millions of users intentionally — a massive collection of data which was used to inform its People You May Know feature. LinkedIn is famously creepy and does basically the same thing. Clubhouse borrowed from the TBH playbook by slurping up contacts before you could use the app.1 This has real consequences in surfacing hidden connections many people would want to stay hidden.
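A toy sketch of how that surfacing works — every name and number here is invented — shows how two independently uploaded contact lists let a platform infer a connection neither uploader ever disclosed:

```python
# Hypothetical uploads: each user shares only their own contact list.
uploads = {
    "alice": {"+14035550100", "+14165550199"},
    "bob": {"+15875550123", "+14165550199"},
}

# The platform can intersect lists it was never meant to cross-reference:
# any shared number suggests alice and bob know the same person, even
# though that third person shared their number with nobody but them.
shared = uploads["alice"] & uploads["bob"]
print(shared)  # {'+14165550199'}
```

Scale this up to millions of uploads and the hidden third parties — therapists, exes, estranged relatives — become nodes in a graph they never consented to join.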

Even a limited capacity of allowing users to more easily invite friends can go wrong. When Tribe offered such a feature, it spammed users’ contacts. It settled a resulting class action suit in 2018 for $200,000 without admitting wrongdoing. That may have been accidental. Circle, on the other hand, was deliberate in its 2013 campaign.

Apple’s position is, therefore, a reasonable one, but it is strange to see no voices from third-party experts favourable to this change. Well-known iOS security researchers Mysk celebrated it; why did Roose not talk to them? I am sure there are others who would happily adjudicate Apple’s claims. The cool thing about a New York Times email address is that people will probably reply, so it seems like a good idea to put that power to use. Instead, all we get is this milquetoast company-versus-growth-hacker narrative, with some antitrust questions thrown in toward the end.

Roose:

Some developers also pointed out that the iOS 18 changes don’t apply to Apple’s own services. iMessage, for example, doesn’t have to ask for permission to access users’ contacts the way WhatsApp, Signal, WeChat and other third-party messaging apps do. They see that as fundamentally anti-competitive — a clear-cut example of the kind of self-preferencing that antitrust regulators have objected to in other contexts.

I am not sure this is entirely invalid, but it seems like an overreach. The logic of requiring built-in apps to request the same permissions as third-party apps is, I think, understandable on fairness grounds, but there is a reasonable argument to be made for implied consent as well. Assessing this is a whole different article.

But Messages accesses the contacts directory on-device, while many other apps will transport the list off-device. That is a huge difference. Your contact list is almost certainly unique. The specific combination of records is a goldmine for social networks and data brokers wishing to individually identify you, and understand your social graph.
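A minimal sketch of that uniqueness, with invented numbers: even stripped of names, the normalized combination of numbers acts as a stable fingerprint of one person’s social circle.

```python
import hashlib

def fingerprint(numbers):
    """Hash a normalized, sorted contact list into a stable identifier."""
    normalized = sorted("".join(ch for ch in n if ch.isdigit()) for n in numbers)
    return hashlib.sha256("\n".join(normalized).encode()).hexdigest()[:16]

# The same contacts, formatted differently, still produce one fingerprint,
# so the list re-identifies its owner across apps and data brokers.
a = fingerprint(["403-555-0100", "(416) 555-0199"])
b = fingerprint(["403 555 0100", "416.555.0199"])
print(a == b)  # True
```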

I have previously argued that permission to access contacts is conceptually being presented to the wrong person — it ought, in theory, to be sought from the people in your contacts instead. Obviously that would be a terrible idea in practice. Yet each of us has only given our contact information to a person; we may not expect them to share it more widely.

As in so many other cases, the answer here is found in comprehensive privacy legislation. You should not have to worry that your phone number in a contact list or used for two-factor authentication is going to determine your place in the global social graph. You should not have to be concerned that sharing your own contact list in a third-party app will expose connections or send an unintended text to someone you have not spoken with in a decade. Data collected for a purpose should only be used for that purpose; violating that trust should come with immediate penalties, not piecemeal class action settlements and FTC cases.

Apple’s solution is imperfect. But if it stops the Biers of the world from building apps which ingest wholesale the contact lists of teenagers, I find it difficult to object.


  1. Remember when Clubhouse was the next big thing, and going to provide serious competition to incumbent giants? ↥︎

Allow me to set the scene: you have been seated with a group of your friends at a restaurant, catching up in a lively discussion, when a member of the waitstaff shows up. They take everyone’s orders, then the discussion resumes — but they return a short while later to ask if you heard about the specials. You had and, anyway, you have already ordered what you want, so the waiter leaves. You chat amongst yourselves again.

But then they appear again. Might they suggest some drinks? How about a side? Every couple of minutes, they reliably return, breaking your discussion to sell you on something else.

Would you like to see the menu again? Here, try this new thing. Here, try this classic thing we brought back. Here is a different chair. How about we swap the candles on the table for a disco ball? Would you like to hear the specials again? Have you visited our other locations?

It is weird because you had been to this restaurant a few times before and the service was attentive, but not a nuisance. Now that you think of it, though, the waitstaff became increasingly desperate after your first visit. Those first interruptions were fine because they were expected — even desired. But there is a balance. You are coming to this restaurant because the food and the drinks are good, sure, but you are there with friends to catch up.

Now, pressured by management, the waiters have become a constant irritant instead of helpful, and there is nothing you can do. You can ask them to leave you alone, but they only promise a slightly longer gap. There is no way to have a moment to yourselves. Do you get up and leave? Do you come back? I would not. In actual experience, there are restaurants I avoid because the service is just too needy.

Also, apps.

When I open any of the official clients for the most popular social media platforms — Instagram, Threads, X, or YouTube — I am thrust into an environment where I am no longer encouraged to have a good time on my own terms. From home feeds containing a blend of posts from accounts I follow and those I do not, to all manner of elements encouraging me to explore other stuff — the platform is never satisfied with my engagement. I have not even factored in ads; this is solely about my time commitment. These platforms expect more of it.

These are decisions made by people who, it would seem, have little trust in users. There is rarely an off switch for any of these features — at best, there is only a way to hide them temporarily.

These choices illustrate the sharply divergent goals of these platforms and my own wishes. I would like to check out the latest posts from the accounts I follow, maybe browse around a bit, and then get out. That is a complete experience for me. Not so for these platforms.

Which makes it all the more interesting when platforms try new things they think will be compelling, like this announcement from Meta:

We’re expanding Meta AI’s Imagine features, so you can now imagine yourself as a superhero or anything else right in feed, Stories and your Facebook profile pictures. You can then easily share your AI-generated images so your friends can see, react to or mimic them. Meta AI can also suggest captions for your Stories on Facebook and Instagram.

[…]

And we’re testing new Meta AI-generated content in your Facebook and Instagram feeds, so you may see images from Meta AI created just for you (based on your interests or current trends). You can tap a suggested prompt to take that content in a new direction or swipe to Imagine new content in real time.

Perhaps this is appealing to you, but I find this revolting. Meta’s superficially appealing generated images have no place in my Instagram feed; they do not reflect how I actually want to use Instagram at all.

Decisions like these have infected the biggest platforms in various ways, which explains why I cannot stand to use most of them any longer. The one notable asterisk is YouTube, which, as of last year, allows you to hide suggested videos on the homepage, a setting that also turns off Shorts’ infinite scrolling. However, every video page still contains suggestions for what you should watch next. Each additional minute of your time is never enough for any of these platforms; they always want the minute after that, too.

You really notice the difference in respect when you compare these platforms against smaller, less established competitors. When I open Bluesky or my current favourite Mastodon client, it feels similar to the way social media did about ten years ago, blended with an updated understanding of platform moderation. Glass is another tremendous product which lets me see exactly what I want, and discover more if I would like to — but there is no pressure.

The business models of these companies are obviously and notably very different from those of incumbent players. Bluesky and Mastodon are both built atop open protocols, so their future is kind of independent of whether the companies themselves exist. But, also, it is possible there will come a time when those protocols lack the funding to be updated, and are used by no more than a handful of people, each running their own instance. Glass, on the other hand, is just a regular boring business: users pay money for it.

Is the future of some of these smaller players going to mimic those which have come before? Must they ultimately disrespect their users? I hope that is not the roadmap for any of them. It should not be necessary to slowly increase the level of hostility between product and user. It should be possible to build a successful business by treating people with respect.

The biggest social platforms are fond of reminding us about how they facilitate connections and help us communicate around the world. They are a little bit like a digital third place. And, just as you would not hang out somewhere that was actively trying to sabotage your ability to chat with your friends in real life, it is hard to know why you would do so online, either. Happily, where Google and Meta and X exhaust me with their choices, there is a wealth of new ideas that bring back joy.

It was only a couple of weeks ago when Mark Zuckerberg wrote in a letter to U.S. lawmakers about his regret in — among other things — taking officials at their word about Russian election meddling in 2020. Specifically, he expressed remorse for briefly demoting a single link to a then-breaking New York Post story involving data it had obtained from a copy of the hard drive of a laptop formerly belonging to Hunter Biden, the son of the now-U.S. president.

At the time, some U.S. officials were concerned about the possibility it was partly or wholly Russian disinformation. This was later found to be untrue. In his letter, Zuckerberg wrote “in retrospect, we shouldn’t have demoted the story” and said the company has “changed [its] policies and processes to make sure this doesn’t happen again”.

To be clear, the laptop story was not a hoax and social media platforms ultimately erred in the decisions they made, but their policies were not inherently unfair and were reasonably cautious in their handling of the story. Nevertheless, the phrase “Hunter Biden’s laptop” is now a kind of shibboleth among those who believe there is a mass censorship campaign by disinformation researchers, intelligence agencies, and social media companies. That group includes people like Jim Jordan, to whom Zuckerberg addressed his obsequious letter. Surely, he was no longer taking U.S. officials at their word, and would be shrugging off their suggestions for platform moderation.

Right?

Kevin Collier and Phil Helsel, NBC News:

Social media giant Meta announced Monday that it is banning Russian media outlet RT, days after the Biden administration accused RT of acting as an arm of Moscow’s spy agencies.

[…]

U.S. officials allege that in Africa, RT is behind an online platform called “African Stream” but hides its role; that in Germany, it secretly runs a Berlin-based English-language site known as “Red”; and that in France, it hired a journalist in Paris to carry out “influence projects” aimed at a French-speaking audience.

As of writing, the Instagram and Threads accounts for Red are still online, but its Facebook page is not. A June report in Tagesspiegel previously connected Red to RT.

But I could not find any previous reporting connecting African Stream to Russia before U.S. officials made that claim. Even so, without corroborating evidence, Meta dutifully suspended African Stream’s presence on its platforms; its accounts had still been active as of Friday.

Meta should — absolutely — do its best to curtail governments’ use of its platforms to spread propaganda and disinformation. All platforms should do so. I also hope it was provided more substantial evidence of RT’s involvement in African Stream. By that standard, it was also reasonable — if ultimately wrong — for it to minimize the spread of the Post story in 2020 based on the information it had at the time.

For all Zuckerberg’s grovelling to U.S. lawmakers, Meta ultimately gets to choose what is allowed on its platforms and what is not. It is right for it to be concerned about political manipulation. But this stuff is really hard to moderate. That is almost certainly why it is deprioritizing “political” posts — not because they do not get engagement, or because the engagement they do get is heated and negative, but because it risks allegations of targeted censorship and spreading disinformation. Better, in Meta’s view, to simply restrict it all. Zuckerberg has figured out Meta is just as valuable when it does not react to criticism.

What I am worried about is the rising tension between the near-global scope of social media platforms and the parallel attempts by governments to get them to meet local expectations. Many of these platforms are based in the U.S. and have uncomfortably exported those values worldwide. Meta’s platforms are among the world’s most-used, so it is often among the most criticized. But earlier this month, X was banned in Brazil. The U.S. is seeking to ban TikTok and, based on a hearing today, it may well succeed.

It is concerning these corporations have such concentrated power, but I also do not think it makes sense either to treat them as common carriers or to moderate them in other countries as they would be in the United States. I am more supportive of decentralized social software based on protocols like ActivityPub. Those can be, if anything, even more permissive and even harder to moderate. That also makes them more difficult for governments to restrict — something which I support, but I know is not seen universally as the correct choice. They minimize the control of a single party’s decisions and, with it, help reduce the kinds of catastrophes we have seen from the most popular days of Facebook and Twitter.

Surely there will be new problems with which to contend, and perhaps it will have been better for there to be monolithic decision-makers after all. But it is right to try something different, and I am glad to see support building in different expressions. It is an exciting time for the open web.

Even so, I still wish for a good MacOS Bluesky client.

I do not wish to make a whole big thing out of this, but I have noticed a bunch of little things which make my iPhone a little bit harder to use. For this, I am setting aside things like rearranging the Home Screen, which still feels like playing Tetris with an adversarial board. These are all things which are relatively new, beginning with the always-on display and the Island in particular, neither of which I had on my last iPhone.

The always-on display is a little bit useful and a little bit of a gimmick. I have mine set to hide the wallpaper and notifications. In this setup, however, the position of media controls becomes unpredictable. Imagine you are listening to music when someone wishes to talk to you. You reach down to the visible media controls and tap where the pause button is, knowing that this only wakes the display. You go in for another tap to pause but — surprise — you got a notification at some point and, so, now that you have woken up the display, the notification slides in from the bottom and moves the media controls up, so you have now tapped on a notification instead.

I can resolve this by enabling notifications on the dimmed lock screen view, but that seems more like a workaround than a solution to this unexpected behaviour. A simple way to fix this would be to not show media controls when the phone is locked and the display is asleep. They are not functional, but they create an expectation of where those controls will be, which is not necessarily where they end up.

The Dynamic Island is fussy, too. I frequently interact with it for media playback, but it has a very short time-out. That is, if I pause media from the Dynamic Island, the ability to resume playback disappears after just a few seconds; I find this a little disorientating.

I do not understand how to swap the priority or visibility of Dynamic Island Live Activities. That is to say the Dynamic Island will show up to two persistent items, one of which will be minimized into a little circular icon, while the other will wrap around the display cutout. Apple says I should be able to swap the position of these by swiping horizontally, but I can only seem to make one of the Activities disappear no matter how I swipe. And, when I do make an Activity disappear, I do not know how I can restore it.

I find a lot of the horizontal swiping gestures too easy to activate in the Dynamic Island — I have unintentionally made an Activity disappear more than once — and across the system generally. It seems only a slightly off-centre angle is needed to transform a vertical scrolling action into a horizontal swiping one. Many apps make use of “sloppy” swiping — being able to swipe horizontally anywhere on the display to move through sequential items or different pages — and vertical scrolling in the same view, but the former is too easy for me to trigger when I intend the latter.

I also find the area above the Dynamic Island too easy to touch when I am intending to expand the current Live Activity. This will be interpreted as touching the Status Bar, which will jump the scroll position of the current view to the top.

Lastly, the number of unintended taps I make has, anecdotally, skyrocketed. One reason for this is a change made several iOS versions ago to recognize touches more immediately. If I am scrolling a long list and I tap the display to stop the scroll in-place, resting my thumb onscreen is sometimes read as a tap action on whatever control is below it. Another reason for accidental touches is that pressing the sleep/wake button does not immediately stop interpreting taps on the display. You can try this now: open Mail, press the sleep/wake button, then — without waiting for the display to fall asleep — tap some message in the list. It is easy to do this accidentally when I return my phone to my pocket, for example.

These are all little things but they are a cumulative irritation. I do not think my motor skills have substantially changed in the past seventeen years of iOS device use, though I concede they have perhaps deteriorated a little. I do notice more things behaving unexpectedly. I think part of the reason is this two-dimensional slab of glass is being asked to interpret a bunch of gestures in some pretty small areas.

Chance Miller, 9to5Mac:

Apple has changed its screen recording privacy prompt in the latest beta of macOS Sequoia. As we reported last week, Apple’s initial plan was to prompt users to grant screen recording permissions weekly.

In macOS Sequoia beta 6, however, Apple has adjusted this policy and will now prompt users on a monthly basis instead. macOS Sequoia will also no longer prompt you to approve screen recording permissions every time you reboot your Mac.

After I wrote about the earlier permissions prompt, I got an email from Adam Selby, who manages tens of thousands of Macs in an enterprise context. Selby wanted to help me understand the conditions which trigger this alert, and to give me some more context. The short version is that Apple’s new APIs allow clearer and more informed user control over screen recording to the detriment of certain types of application, and — speculation alert — it is possible this warning will not appear in the first versions of MacOS Sequoia shipped to users.

Here is an excerpt from the release notes for the MacOS 15.0 developer beta:

Applications utilizing deprecated APIs for content capture such as CGDisplayStream & CGWindowListCreateImage can trigger system alerts indicating they might be able to collect detailed information about the user. Developers need to migrate to ScreenCaptureKit and SCContentSharingPicker. (120910350)

It turns out the “and” in that last sentence is absolutely critical. In last year’s beta releases of MacOS 14, Apple began advising developers it would be deprecating CoreGraphics screenshot APIs, and that applications should migrate to ScreenCaptureKit. However, this warning was removed by the time MacOS 14.0 shipped to users, only for it to reappear in the beta versions of 14.4 released to developers earlier this year. Apple’s message was to get on board — and fast — with ScreenCaptureKit.

ScreenCaptureKit was only the first part of this migration for developers. The second part — returning to the all-important “and” from the 15.0 release notes — is SCContentSharingPicker. That is the selection window you may have seen if you have recently tried screen sharing with, say, FaceTime. It has two agreeable benefits: first, it is not yet another permissions dialog; second, it allows the user to know every time the screen is being recorded because they are actively granting access through a trusted system process.

This actually addresses some of the major complaints I have with the way Apple has built out its permissions infrastructure to date:

[…] Even if you believe dialog boxes are a helpful intervention, Apple’s own sea of prompts do not fulfil the Jobs criteria: they most often do not tell users specifically how their data will be used, and they either do not ask users every time or they cannot be turned off. They are just an occasional interruption to which you must either agree or find some part of an application is unusable.

Instead of the binary choices of either granting apps blanket access to record your screen or having no permissions dialog at all for what could be an abused feature, this picker gives users control of, and insight into, how an app may record their screen. Instead of a scary catch-all dialog, it relies on ongoing consent. A user will know exactly when an app is recording their screen, and exactly what it is recording, because that permission is no longer something an app gets, but something given to it by this picker.

This makes sense for a lot of screen recording use cases — for example, if someone is making a demo video, or if they are showing their screen in an online meeting. But if someone is trying to remotely access a computer, there is a sort of Möbius strip of permissions where you need to be able to see the remote screen in order to grant access to be able to see the screen. The Persistent Content Capture entitlement is designed to fix that specific use case.

Even though I think this structure will work for most apps, most of the time, it will add considerable overhead for apps like xScope, which allows you to measure and sample anything you can see, or ScreenFloat — a past sponsor — which allows you to collect, edit, and annotate screenshots and screen recordings. To use these utilities and others like them, a user will need to select the entire screen from the window picking control every time they wish to use a particular tool. Something as simple as copying an onscreen colour is now a clunky task without, as far as I can tell, any workaround. That is basically by design: what good is it to have an always-granted permission when the permissions structure is predicated on ongoing consent? But it does mean these apps are about to become very cumbersome. Either you need to grant whole-screen access every time you invoke a tool (or launch the app), or you do so a month at a time — and there is no guarantee the latter grace period will stick around in future versions of MacOS.

I think it is possible MacOS 15.0 ships without this dialog. In part, that is because its text — “requesting to bypass the system window picker” — is technical and abstruse, written with seemingly little care for average user comprehension. I also think that could be true because it is what happened last year with MacOS 14.0. That is not to say it will be gone for good; Apple’s intention is very clear to me. But hopefully there will be some new APIs or entitlement granted to legitimately useful utility apps built around latent access to seeing the whole screen when a user commands. At the very least, users should be able to grant access indefinitely.

I do not think it is coincidental this Windows-like trajectory for MacOS has occurred as Apple tries to focus more on business customers. In an investor call last year, Tim Cook said Apple’s “enterprise business is growing”. In one earlier this month, he seemed to acknowledge it was a factor, saying the company “also know[s] the importance of security for our users and enterprises, so we continue to advance protections across our products” in the same breath as providing an update on the company’s Mac business. This is a vague comment and I am wary of reading too much into it, but it is notable to see the specific nod to Mac enterprise security this month. I hope this does not birth separate “Home” and “Professional” versions of MacOS.

Still, there should be a way for users to always accept the risks of their actions. I am confident in my own ability to choose which apps I run and how to use my own computer. For many people — maybe most — it makes sense to provide a layer of protection for possibly harmful actions. But there must also be a way to suppress these warnings. Apple ought to be doing better on both counts. As Michael Tsai writes, the existing privacy system “feels like it was designed, not to help the user understand what’s going on and communicate their preferences to the system, but to deflect responsibility”. The new screen recording picker feels like an honest attempt at restricting what third-party apps are able to do without the user’s knowledge, and without burdening users with an uninformative clickwrap agreement.

But, please, let me be riskier if I so choose. Allow me to let apps record the entire screen all the time, and open unsigned apps without going through System Settings. Give me the power to screw myself over, and then let me get out of it. One does not get better at cooking by avoiding tools that are sharp or hot. We all need protections from our own stupidity at times, but there should always be a way to bypass them.

Marko Zivkovic, in an April report for AppleInsider, revealed several new Safari features to debut this year. Some of them, like A.I.-based summarization, were expected and shown at WWDC. Then there was this:

Also accessible from the new page controls menu is a feature Apple is testing called “Web Eraser.” As its name would imply, it’s designed to allow users to remove, or erase, specific portions of web pages, according to people familiar with the feature.

WWDC came and went without any mention of this feature, despite its lengthy and detailed description in that April story. Zivkovic, in a June article, speculated on what happened:

So, why did Apple remove a Safari feature that was fully functional?

The answer to that question is likely two-fold — to avoid controversy and to make leaked information appear inaccurate or incorrect.

The first of these reasons is plausible to me; the second is not. In May, Lara O’Reilly of Business Insider reported on a letter sent by a group of publishers and advertisers worried Apple was effectively launching an ad blocker. Media websites may often suck, but this would be a big step for a platform owner to take. I have no idea if that letter caused Apple to reconsider, but it seems likely to me it would be prudent and reasonable for the company to think more carefully about this feature’s capabilities and how it is positioned.

The apparent plot to subvert AppleInsider’s earlier reporting, on the other hand, is ludicrous. If you believe Zivkovic, Apple went through the time and expense of developing a feature so refined it must have been destined for public use because there is, according to Zivkovic, “no reason to put effort into the design of an internal application”,1 then decided it was not worth launching because AppleInsider spoiled it. This was not the case for any other feature revealed in that same April report for, I guess, some top secret reason. As evidence, Zivkovic points to several products which have been merely renamed for launch:

A notable example of this occurred in 2023, when Apple released the first developer betas of its new operating system for the Apple Vision Pro headset. Widely expected to make its debut under the name xrOS, the company instead announced “visionOS.”

Even then, there were indications of a rushed rebrand. Apple’s instructional videos and code from the operating systems contained clear mentions of the name xrOS.

Apple renamed several operating system features ahead of launch. To be more specific, the company renamed its Adaptive Voice Shortcuts accessibility feature to Vocal Shortcuts.

As mentioned earlier, Intelligent Search received the name Highlights, while Generative Playground was changed to “Image Playground.” The name “Generative Playground” still appears as the application title in the recently released developer betas of Apple’s operating systems.

None of these seem like ways of discrediting media. Renaming the operating system for the Vision Pro to “visionOS” makes sense because it is the name of the product — similar to tvOS and iPadOS — and, also, “xrOS” is clunky. Because of how compartmentalized Apple is, the software team probably did not know what name it would go by until it was nearly time to reveal it. But they needed to call it something so they could talk about it in progress meetings without saying “the spatial computer operating system”, or whatever. This and all of the other examples just seem like temporary names getting updated for public use. None of this supports the thesis that Apple canned Web Eraser to discredit Zivkovic. There is a huge difference between replacing the working name of a product with one which has been finalized, and developing an entire new feature only to scrap it to humiliate a reporter.

Besides, Mark Gurman already tried this explanation. In a March 2014 9to5Mac article, Gurman reported on the then-unreleased Health app for iOS, which he said would be named “Healthbook” and would have a visual design similar to the Passbook app, now known as Wallet. After the Health app was shown at WWDC that year, Gurman claimed it was renamed and redesigned “late in development due to the leak”. While I have no reason to doubt the images Gurman showed were re-created from real screenshots, and there was evidence of the “Healthbook” name in early builds of the Health app, I remain skeptical it was entirely changed in direct response to Gurman’s report. It is far more likely the name was a placeholder, and the March version of the app’s design was still a work in progress.

The June AppleInsider article is funny in hindsight for how definitive it is in the project’s cancellation — it “never became available to the public”; it “has been removed in its entirety […] leaving no trace of it”. Yet, mere weeks later, it seems a multitrillion-dollar corporation decided it would not be bullied by an AppleInsider writer, held its head high, and released it after all. You have to admire the bravery.

Juli Clover, of MacRumors, was early to report on its appearance in the fifth beta builds of this year’s operating systems under a new name (Update: it seems Cherlynn Low of Engadget was first; thanks Jeff):

Distraction Control can be used to hide static content on a page, but it is not an ad blocker and cannot be used to permanently hide ads. An ad can be temporarily hidden, but the feature was not designed for ads, and an ad will reappear when it refreshes. It was not created for elements on a webpage that regularly change.

I cannot confirm but, after testing it, I read this to mean it will hide elements with some kind of identifier which remains fixed across sessions — an id or perhaps a unique string of classes — and within the same domain. If the identifier changes on each load, the element will re-appear. Since ads often appear with different identifiers each time and this feature is (I think) limited by domain, it is not an effective ad blocker.
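That reading can be sketched in a few lines. To be clear, everything below is my own guess at the mechanics, not Apple’s implementation: the element shapes, function names, and the per-domain store are all assumptions made for illustration.

```javascript
// Hypothetical sketch of per-domain element hiding keyed on a "stable
// identifier". Not Apple's code; all names here are invented.

// Derive a stable identifier for an element: prefer its id, otherwise
// fall back to its sorted class list. If neither exists, there is
// nothing durable to match on and the element will reappear.
function stableIdentifier(element) {
  if (element.id) return `#${element.id}`;
  if (element.classes && element.classes.length) {
    return "." + [...element.classes].sort().join(".");
  }
  return null;
}

// Record a hidden element under the page's hostname.
function hideElement(store, hostname, element) {
  const key = stableIdentifier(element);
  if (!key) return store;
  const selectors = store.get(hostname) ?? new Set();
  selectors.add(key);
  store.set(hostname, selectors);
  return store;
}

// On a later visit within the same domain, decide whether an element
// should stay hidden.
function shouldStayHidden(store, hostname, element) {
  const key = stableIdentifier(element);
  const selectors = store.get(hostname);
  return Boolean(key && selectors && selectors.has(key));
}
```

Under this model, an ad served with a randomized id each load never matches a stored selector again, and a selector recorded on one domain has no effect on another, which would explain both behaviours I observed.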

Zivkovic’s follow-up story, published after Distraction Control was included in an August beta build, is more or less a rehash of the first of the two explanations he offered in June for the feature’s delay, never once acknowledging his more outlandish theory:

Based on the version of Distraction Control revealed on Monday, it appears as though Apple wanted to distance itself from Web Eraser and the negative connotations surrounding the feature.

As mentioned earlier, the company renamed Web Eraser to Distraction Control. In addition to this, the fifth developer beta of iOS 18 includes a new pop-up message that informs users of the feature’s overall purpose, making it clear that it’s not meant to block ads.

It has been given a more anodyne name and it now has a dialog box.

Still, this shows Zivkovic’s earlier report was correct: Apple was developing an easy-to-use feature to hide page elements within Safari and it is in beta builds of the operating systems launching this year. Zivkovic should celebrate this. Instead, his speculative June report makes his earlier reliable reporting look shaky because, it would seem, he was too impatient to wait and see if the feature would launch later. That would be unusual for Apple but still more likely than the company deciding to cancel it entirely.

The August report also adds some new information but, in an effort to create distance between Web Eraser and Distraction Control, Zivkovic makes some unforced errors:

When it comes to ads, pre-release versions of Web Eraser behaved differently from the publicly available Distraction Control. Internal versions of the feature had the ability to block the same page element across different web pages and maintained the users’ choice of hidden elements even after the page was refreshed.

This description of the Distraction Control behaviour is simply not true. In my testing, page elements with stable identifiers remain hidden between pages on the same domain, after the page has been refreshed, and after several hours in a new browser tab.

Zivkovic should be thrilled about his April scoop. Instead, the two subsequent reports undermine the confidence of that first report and unnecessarily complicate the most likely story with baseless speculation that borders on conspiracy theories. From the outside, it appears the early rumour about Web Eraser was actually beneficial for the feature. Zivkovic accurately reported its existence and features. Publishers, worried about its use as a first-party ad blocker, wrote to Apple. Apple delayed the feature’s launch and, when it debuted, gave it a new name and added a dialog box on first use to clarify its intent. Of course, someone can still use Distraction Control to hide ads but, by being a manual process on a per-domain basis, it is a far more tedious process than downloading a dedicated ad blocker.

This was not a ruse to embarrass rumour-mongers. It was just product development: a sometimes messy, sometimes confusing process which, in this case, seemed to result in a better feature with a clearer scope. Unless someone reports otherwise, it does not need to be much more complicated than that.


  1. If Zivkovic believes Apple does not care much about designing things for internal use only, he is sorely mistaken. Not every internal tool is given that kind of attention, but many are. ↥︎

In response to Apple’s increasingly distrustful permissions prompts, it is worth thinking about what benefits these prompts could provide. For example, apps can start out trustworthy and later become malicious through updates or ownership changes, and users should be reminded of the permissions they have afforded them. There is a recent example of this in Bartender. But I am not sure any of this is helped by yet another alert.

The approach seems to be informed by the Steve Jobs definition of privacy, as he described it at D8 in 2010:

Privacy means people know what they’re signing up for — in plain English, and repeatedly. That’s what it means.

I’m an optimist. I believe people are smart, and some people want to share more data than other people do. Ask ’em. Ask ’em every time. Make them tell you to stop asking them, if they get tired of your asking them. Let them know precisely what you’re gonna do with their data.

Some of the permissions dialogs thrown by Apple’s operating systems exist to preempt abuse, while others were added in response to specific scandals. The prompt for accessing your contacts, for example, was added after Path absorbed users’ contact lists.

The new weekly nag box for screen recording in the latest MacOS Sequoia is also conceivably a response to a specific incident. Early this year, the developer of Bartender sold the app to another developer without telling users. The app has long required screen recording permissions to function. That understandably made some users nervous about transferring so much power, especially because the transition to a new, little-known owner was handled so quietly.

I do not think this new prompt succeeds in helping users make an informed decision. There is no information in the dialog’s text telling you who the developer is, or whether it has changed. Nor does it appear the text of the dialog can be customized by the developer to provide a reason. If it is thrown by an always-running app like Bartender, a user will either become panicked or begin passively accepting this annoyance.

The latter is now the default response state to a wide variety of alerts and cautions. Car alarms are ineffective. Hospitals and other medical facilities are filled with so many beeps staff become “desensitized”. People agree to cookie banners without a second of thought. Alert fatigue is a well-known phenomenon, such that it informed the Canadian response in the earliest days of the pandemic. Without more thoughtful consideration of how often and in what context to inform people of something, it is just pollution.

There is apparently an entitlement which Apple can grant, but it is undocumented. It is still the summer and this could all be described in more robust terms over the coming weeks. Yet it is alarming this prompt was introduced with so little disclosure.

I believe people are smart, too. But I do not believe they are fully aware of how their data is being collected and used, and none of these dialog boxes do a good job of explaining that. An app can ask to record your screen on a weekly basis, but the user is not told any more than that. It could ask for access to your contacts — perhaps that is only for local, one-time use, or the app could be sending a copy to the developer, and a user has no way of knowing which. A weather app could be asking for your location because you requested a local forecast, but it could also be reselling it. A Mac app can tell you to turn on full disk access for plausible reasons, but it could abuse that access later.

Perhaps the most informative dialog boxes are the cookie consent forms you see across the web. In their most comprehensive state, you can see which specific third-parties may receive your behavioural data, and they allow you to opt into or out of categories of data use. Yet nobody actually reads those cookie consents because they have too much information.

Of course, nobody expects dialog boxes to be a complete solution to our privacy and security woes. A user places some trust in each layer of the process: in App Review, if they downloaded software from the App Store; in built-in protections; in the design of the operating system itself; and in the developer. Even if you believe dialog boxes are a helpful intervention, Apple’s own sea of prompts do not fulfil the Jobs criteria: they most often do not tell users specifically how their data will be used, and they either do not ask users every time or they cannot be turned off. They are just an occasional interruption to which you must either agree or find some part of an application is unusable.

Users are not typically in a position to knowledgeably authorise these requests. They are not adequately informed, and it is poor policy to treat these as individualized problems.

Since owners of web properties became aware of the traffic-sending power of search engines — Google, in most places — they have been in an increasingly uncomfortable relationship as search moves beyond ten relevant links on a page. Google does not need websites, per se; it needs the information they provide. Its business recommendations are powered in part by reviews on other websites. Answers to questions appear in snippets, sourced to other websites, without the user needing to click away.

Publishers and other website owners might consider this a bad deal. They feed Google all this information hoping someone will visit their website, but Google is adding features that make it less likely they will do so. Unless they were willing to risk losing all their Google search traffic, there was little a publisher could do. Individually, they needed Google more than Google needed them.

But that has not been quite as true for Reddit. Its discussions hold a uniquely large corpus of suggestions and information on specific topics and in hyper-local contexts, as well as a whole lot of trash. While the quality of Google’s results has been sliding, searchers discovered they could append “Reddit” to a query to find what they were looking for.

Google realized this and, earlier this year, signed a $60 million deal with Reddit allowing it to scrape the site to train its A.I. features. Part of that deal apparently covers search indexing: last month, Reddit restricted crawling to Google. That is, if you want to search Reddit, you can either use the site’s internal search engine, or you can use Google. Other search engines still display results created from before mid-July, according to 404 Media, but only Google is permitted to crawl anything newer.

It is unclear to me whether this is a deal only available to Google, or if it is open to any search engine that wants to pay. Even if it was intended to be exclusive, I have a feeling it might not be for much longer. But it seems like something Reddit would only care about doing with Google because other search engines basically do not matter in the United States or worldwide.1 What amount of money do you think Microsoft would need to pay for Bing to be the sole permitted crawler of Reddit in exchange for traffic from its measly market share? I bet it is a lot more than $60 million.

Maybe that is one reason this agreement feels uncomfortable to me. Search engines are marketed as finding results across the entire web but, of course, that is not true: they most often obey rules declared in robots.txt files, and they do not necessarily index everything they are able to, either. But those are not paid restrictions. It feels like a violation of the premise of a search engine when permission to crawl and link to other webpages is something that must be bought. The whole thing about the web is that the links are free. There is no guarantee the actual page will be freely accessible, but the link itself is not restricted. It is the central problem with link tax laws, and this pay-to-index scheme is similarly restrictive.
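For context, those robots.txt rules are declarations, not enforcement: a crawler chooses whether to honour them. Reddit’s public robots.txt now disallows everything by default, with permitted crawlers handled through private agreements rather than in the file itself. A minimal version of such a blanket rule looks like this (a simplified sketch, not Reddit’s exact file):

```
# Disallow all crawlers by default. Exceptions for permitted crawlers
# are negotiated out-of-band, not declared here, and compliance with
# robots.txt is voluntary in any case.
User-agent: *
Disallow: /
```

That asymmetry — a public file saying “no” while paid crawlers proceed anyway — is part of what makes the arrangement feel so contrary to the open web.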

This is, of course, not the first time there has been tension in how a site balances search engine visibility and its own goals. Publishers have, for years, weighed their desire to be found by readers against login requirements and paywalls — guided by the overwhelming influence of Google.

Google used to require that publishers provide free articles in order to be indexed by the search engine but, in 2017, it replaced that policy with a more flexible model. Instead of being forced to offer a certain number of free page views, publishers are now able to provide Google with indexable data.

Then there are partnerships struck by search engines and third parties to obtain specific kinds of data. These were summarized well in the recent United States v. Google decision (PDF), and they are probably closest in spirit to this Reddit deal:

GSEs enter into data-sharing agreements with partners (usually specialized vertical providers) to obtain structured data for use in verticals. Tr. at 9148:2-5 (Holden) (“[W]e started to gather what we would call structured data, where you need to enter into relationships with partners to gather this data that’s not generally available on the web. It can’t be crawled.”). These agreements can take various forms. The GSE might offer traffic to the provider in exchange for information (i.e., data-for-traffic agreements), pay the provider revenue share, or simply compensate the provider for the information. Id. at 6181:7-18 (Barrett-Bowen).

As of 2020, Microsoft has partnered with more than 100 providers to obtain structured data, and those partners include information sources like Fandango, Glassdoor, IMDb, Pinterest, Spotify, and more. DX1305 at .004, 018–.028; accord Tr. at 6212:23–6215:10 (Barrett-Bowen) (agreeing that Microsoft partners with over 70 providers of travel and local information, including the biggest players in the space).

The government attorneys said Bing is required to pay for structured data owing to its smaller size, while Google is able to obtain structured data for free because it sends partners so much traffic. The judge ultimately rejected their argument that Microsoft struggled to sign these agreements or was impeded in doing so, but did not dispute the difference in negotiating power between the two companies.

Once more, for emphasis: Google usually gets structured data for free but, in this case, it agreed to pay $60 million; imagine how much it would cost Bing.

This agreement does feel pretty unique, though. It is hard for me to imagine many other websites with the kind of specific knowledge found aplenty on Reddit. It is a centralized version of the bulletin boards of the early 2000s for such a wide variety of interests and topics. It is such a vast user base that, while it cannot ignore Google referrals, it is not necessarily reliant on them in the same way as many other websites are.

Most other popular websites are insular social networks; Instagram and TikTok are not relying on Google referrals. Wikipedia would probably be the best comparison to Reddit in terms of the contribution it makes to the web — even greater, I think — but every page I tried, except the homepage, is overwhelmingly dependent on external search engine traffic.

Meanwhile, pretty much everyone else still has to pay Google for visitors. They have to buy the ads sitting atop organic search results. They have to buy ads on maps, on shopping carousels, on videos. People who operate websites hope they will get free clicks, but many of them know they will have to pay for some of them, even though Google will happily lift and summarize their work without compensation.

I cannot think of any other web property which has this kind of leverage over Google. While this feels like a violation of the ideals and principles that have built the open web on which Google has built its empire, I wonder if Google will make many similar agreements, if any. I doubt it — at least for now. This feels funny; maybe that is why it is so unique, and why it is not worth being too troubled by it.


  1. The uptick of Bing in the worldwide chart appears to be, in part, thanks to a growing share in China. Its market share has also grown a little in Africa and South America, but only by tiny amounts. However, Reddit is blocked in China, so a deal does not seem particularly attractive to either party. ↥︎

Anthony Ha, of TechCrunch, interviewed Jean-Paul Schmetz, CEO of Ghostery, and I will draw your attention to this exchange:

AH I want to talk about both of those categories, Big Tech and regulation. You mentioned that with GDPR, there was a fork where there’s a little bit of a decrease in tracking, and then it went up again. Is that because companies realized they can just make people say yes and consent to tracking?

J-PS What happened is that in the U.S., it continued to grow, and in Europe, it went down massively. But then the companies started to get these consent layers done. And as they figured it out, the tracking went back up. Is there more tracking in the U.S. than there is in Europe? For sure.

AH So it had an impact, but it didn’t necessarily change the trajectory?

J-PS It had an impact, but it’s not sufficient. Because these consent layers are basically meant to trick you into saying yes. And then once you say yes, they never ask again, whereas if you say no, they keep asking. But luckily, if you say yes, and you have Ghostery installed, well, it doesn’t matter, because we block it anyway. And then Big Tech has a huge advantage because they always get consent, right? If you cannot search for something in Google unless you click on the blue button, you’re going to give them access to all of your data, and you will need to rely on people like us to be able to clean that up.

The TechCrunch headline summarizes this by saying “regulation won’t save us from ad trackers”, but I do not think that is a fair representation of this argument. What it sounds like, to me, is that regulations should be designed more effectively.

The E.U.’s ePrivacy Directive and GDPR have produced some results: tracking is somewhat less pervasive, people have a right to data access and portability, and businesses must give users a choice. That last thing is, as Schmetz points out, also its flaw, and one it shares with something like App Tracking Transparency on iOS. Apps affected by the latter are not permitted to keep asking if tracking is denied, but they do similarly rely on the assumption a user can meaningfully consent to a cascading system of trackers.

In fact, the similarities and differences between cookie banner laws and App Tracking Transparency are considerable. Both require some form of consent mechanism immediately upon accessing a website or an app, assuming a user can provide that choice. Neither can promise tracking will not occur should a user deny the request. Both are interruptive.

But cookie consent laws typically offer users more information; many European websites, for example, enumerate all their third-party trackers, while App Tracking Transparency gives users no visibility into which trackers will be allowed. The latter choice is remembered forever unless a user removes and reinstalls the app, while websites can ask you for cookie consent on each visit. The latter may sometimes be a consequence of using Safari; it is hard to know.

App Tracking Transparency also has a system-wide switch to opt out of all third-party tracking. There used to be something similar in web browsers, but compliance was entirely optional. Its successor effort, Global Privacy Control, is sadly not as widely supported as it ought to be, but it appears to have legal teeth.

Both of these systems have another important thing in common: neither is sufficiently protective of users’ privacy because both burden individuals with assessing something they cannot reasonably comprehend. It is patently ridiculous to expect individuals to mitigate a systemic problem like invasive tracking schemes.

There should be a next step to regulations like these because user tracking is not limited to browsers, where Ghostery can help — if you know about it. A technological response is frustrating, and it is unclear to me how effective it is on its own. Regulation alone clearly cannot solve this problem, but neither can browser extensions. We need both.

To promote the launch of a new Beats Pill model, Apple’s Oliver Schusser was interviewed by Craig McLean of Wallpaper — where by “interviewed” I mostly mean “guided through talking points”. There is not much here unless you appreciate people discussing brands in the abstract.

However, McLean wanted to follow up on a question asked of Schusser in a 2019 issue of Music Week (PDF): “where do you want to see, or want Apple Music to be, in five years?” Schusser replied:

We want to be the best in what we do. And that means, obviously, we’ll continue to invest in the product and make sure we’re innovative and provide our customers with the best experience. We want to invest in our editorial and content, in our relationships with the industry, whether that’s the songwriters, music publishers, the labels, artists or anyone in the creative process. But that’s really what we’re trying to do. We just want to be the best at what we do.

At the end of that timeframe, McLean gave Schusser the opportunity to respond: where does Apple Music now find itself? Schusser answered:

We are very clearly positioned as the quality service. We don’t have a free offer [unlike Spotify’s advertising-supported tier]. We don’t give anything away. Everything is made by music fans and curated by experts. We are focused on music while other people are running away from music into podcasts and audiobooks. Our service is clearly dedicated to music.

With spatial audio, we’ve completely revolutionised the listening experience. [Historically] we went from mono to stereo and then, for decades, there was nothing else. Then we completely invented a new standard [where] now 90 per cent of our subscribers are listening to music in spatial audio. Which is great.

And little things, like the lyrics, for example, [which] you find on Apple Music, which are incredibly popular. We have a team of people that are actually transcribing the lyrics because we don’t want them to be crowd-sourced from the internet. We want to make sure they’re as pristine as possible. We’ve got motion artwork and song credits. We really try to make Apple Music a high quality place for music fans.

And while most others in the marketplace have sort of stopped innovating, we’ve been really pushing hard, whether it’s Apple Music Sing, which is a great singalong feature, like karaoke. Or Classical, which is an audience that had completely been neglected. We’re trying to make Apple Music the best place for people to listen to music. I’m super happy with that.

This is quite the answer, and one worth tediously picking apart claim-by-claim.

We are very clearly positioned as the quality service. We don’t have a free offer [unlike Spotify’s advertising-supported tier]. We don’t give anything away.

I am not sure how one would measure whether Apple Music is “positioned as the quality service”, but this is a fair point. Apple Music offers free streaming “Radio” stations, but it is substantially not a free service.

Everything is made by music fans and curated by experts.

This is a common line from Apple and a description which has carried on from the launch of Beats Music. But it seems only partially true. There are, for example, things which must be entirely made by algorithm, like user-personalized playlists and radio stations. Schusser provided more detail to McLean five years ago in that Music Week interview, saying “[o]f course there are algorithms involved [but] the algorithms only pick music that [our] editors and curators would choose”. I do not know what that means, but it is at least an acknowledgement of an automated system instead of the handmade impression Apple gives in the Wallpaper interview.

Other parts of Apple Music seem suspiciously informed by factors beyond what an expert curator might decide. Spellling’s 2021 record “The Turning Wheel”, a masterpiece of orchestral art pop, notably received a perfect score from music reviewer Anthony Fantano. Fantano also gave high scores to artists like Black Midi, JPEGMAFIA, and Lingua Ignota, none of whom make music anything like Spellling. Yet all are listed as “similar artists” to Spellling on Apple Music. If you like Spellling’s work, you may be surprised by those other artists because they sound wildly different. This speaks less of curation than of automation driven by audience overlap.
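If “similar artists” really is automated from audience overlap rather than from sound, the mechanism could be as simple as set intersection over listener histories. A minimal, hypothetical sketch: the artist names, listener sets, and threshold below are all invented for illustration, and nothing here reflects Apple’s actual system.

```python
# Hypothetical "similarity by audience" sketch: two artists count as similar
# when their listener bases overlap, regardless of how the music sounds.

def jaccard(a, b):
    """Overlap between two listener sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

# Invented data: a shared reviewer audience links sonically unrelated acts.
listeners = {
    "Spellling":       {"u1", "u2", "u3", "u4"},
    "Lingua Ignota":   {"u2", "u3", "u4", "u5"},  # heavy listener overlap
    "Smooth Jazz Act": {"u6", "u7"},              # closer in sound, no overlap
}

def similar_artists(artist, catalogue, threshold=0.3):
    """Return every other artist whose audience overlap clears the threshold."""
    return [other for other in catalogue
            if other != artist
            and jaccard(catalogue[artist], catalogue[other]) >= threshold]
```

Under this toy model, anything a shared audience listens to surfaces as “similar”, which is exactly how a Fantano-reviewed art-pop act could end up adjacent to noise rock and experimental hip-hop.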

For the parts which are actually curated manually, do I know the people who are making these decisions? What is their taste like? What are their standards? Are they just following Apple’s suggestions? Why is the “Rock Drive” playlist the same as any mediocre FM rock radio station?

We are focused on music while other people are running away from music into podcasts and audiobooks. Our service is clearly dedicated to music.

Music has undeniably shaped Apple from its earliest days, especially following the launch of the iPod. Its executives have been fond of repeating the line “we love music” in press releases and presentations since 2001. But Apple’s dedication to separating music from other media is a five-year-old decision. It was previously wholly dedicated to music while shipping an app that also played audiobooks and podcasts and movies and all manner of other things. Plus, have you seen the state of the Music app on MacOS?

This is clearly just a dig at Spotify. It would carry more weight if Apple Music felt particularly good for music playback. It does not. I have filed dozens of bugs against the MacOS, iOS, and tvOS versions affecting basic functionality: blank screens, poor search results, playback queue ordering issues, inconsistencies in playlist sort order between devices, problems with importing files, sync issues, cloud problems, and so forth. It is not uniformly terrible, but this is not a solid foundation for criticizing Spotify for not focusing on music enough.

Spotify sucks in other ways.

With spatial audio, we’ve completely revolutionised the listening experience. [Historically] we went from mono to stereo and then, for decades, there was nothing else.

This is untrue. People have been experimenting with multichannel audio in music since the 1960s. “Dark Side of the Moon” was released in quadraphonic audio in 1973, one of many albums released that decade in a four-channel mix. In the 1990s, a bunch of albums were released on SACDs mixed in 5.1 surround sound.

What Apple can correctly argue is that few people actually listened to any multichannel music in these formats. They were niche. Now?

Then we completely invented a new standard [where] now 90 per cent of our subscribers are listening to music in spatial audio. Which is great.

A fair point, though with a couple of caveats. Part of the high adoption rate is because Spatial Audio is turned on by default, and Apple is paying a premium to incentivize artists to release multichannel mixes. It is therefore not too surprising that most people have listened to at least one Spatial Audio track.

But this is the first time I can remember Apple claiming it “invented” the format. Spatial Audio was originally framed as supporting music mixed in Dolby Atmos. In its truest guise — played through a set of AirPods or Beats headphones, which can track the movement of the wearer’s head — it forms a three-dimensional bubble of music, something which Apple did create. That is, Apple invented the part which makes Atmos-mixed audio playable on its systems within a more immersive apparent space. But Apple did not invent the “new standard” taking music beyond two channels — that was done long before, and then by Dolby.

Also, it is still bizarre to me how many of the most popular multichannel mixes of popular albums are not available in Spatial Audio on Apple Music. These are records the artists deliberately intended for a surround sound mix at the time they were released, yet they cannot be played in what must be the most successful multichannel music venue ever made? Meanwhile, a whole bunch of classic songs and albums have been remixed in Spatial Audio for no good reason.

And little things, like the lyrics, for example, [which] you find on Apple Music, which are incredibly popular. We have a team of people that are actually transcribing the lyrics because we don’t want them to be crowd-sourced from the internet. We want to make sure they’re as pristine as possible.

I really like the way Apple Music displays time-tracked lyrics. That said, I only occasionally see inaccuracies in lyrics on Genius and in Apple Music, so I am not sure how much more “pristine” Apple’s are.

Also, I question the implication of a team of people manually transcribing lyrics. I have nothing to support this, but I would wager heavily this is primarily machine-derived followed by manual cleanup.

We’ve got motion artwork and song credits.

Song credits are good. Motion artwork is a doodad.

We really try to make Apple Music a high quality place for music fans.

I want to believe this is true, but I have a hard time accepting today’s Apple Music is the high quality experience worth raving about. Maybe some music fans are clamouring for animated artwork and bastardized Spatial Audio mixes of classic albums. I am not one of them. What I want is a foundation of reliable, fast jukebox functionality extended across my local library and streaming media, with all this exciting stuff built on top.

And while most others in the marketplace have sort of stopped innovating, we’ve been really pushing hard, whether it’s Apple Music Sing, which is a great singalong feature, like karaoke. Or Classical, which is an audience that had completely been neglected.

These are good updates. Apple has not said much about Apple Music Sing or its popularity since it launched in December 2022, but it seems fine enough. Also, Spotify began trialling its own karaoke mode in June 2022, so maybe it should be credited with this innovation.

Apple Music Classical, meanwhile, remains a good but occasionally frustrating app. Schusser is right in saying this has been a neglected audience among mainstream streaming services. Apple’s effort is built upon Primephonic, which it acquired in August 2021 before launching it re-skinned as Classical in March 2023. That said, it is better now than it was at launch and it seems Apple is slowly refining it. It is important to me for there to be mainstream attention in this area.

We’re trying to make Apple Music the best place for people to listen to music. I’m super happy with that.

The thing I keep coming back to in the four-paragraph response above is that Schusser says a lot of the right things. Music is so important to so many people, and I would like to believe Apple cares as much about making the best music service and players as I do about listening to each week’s new releases.

I just wish everything was better than it currently is. There are many bugs I filed years ago which remain open, though I am happy to say the version in the latest Sequoia beta appears to contain a fix for reversing the order of songs when dragging them to the playback queue. If Apple really wants to position Apple Music as “the quality service” that is “the best at what we do”, it should demonstrate that instead of just saying it.

Ryan Broderick:

You’ve probably seen the phrase AI slop already, the term most people have settled on for the confusing and oftentimes disturbing pictures of Jesus and flight attendants and veterans that are filling up Facebook right now. But the current universe of slop is much more vast than that. There’s Google Slop, YouTube slop, TikTok slop, Marvel slop, Taylor Swift slop, Netflix slop. One could argue that slop has become the defining “genre” of the 2020s. But even though we’ve all come around to this idea, I haven’t seen anyone actually define it. So today I’m going to try.

This piece does actually settle somewhere very good in its attempt to address the vibe of the entertainment and media world in which we swim, but it is a slog to get there. This is the first paragraph and trying to pull it apart will take a minute. For a start, Broderick says the definition of “slop” has evaded him. That is plausible, but it does require him to have avoided Googling “ai slop definition”, at which point he would surely have seen Simon Willison’s post defining and popularizing the term:

Not all promotional content is spam, and not all AI-generated content is slop. But if it’s mindlessly generated and thrust upon someone who didn’t ask for it, slop is the perfect term for it.

This is a good definition, though Willison intentionally restricts it to describe A.I.-generated products. However, it seems like people are broadening the word’s use to cover things not made using A.I., and it appears Broderick wishes to reflect that.

Next paragraph:

Content slop has three important characteristics. The first being that, to the user, the viewer, the customer, it feels worthless. This might be because it was clearly generated in bulk by a machine or because of how much of that particular content is being created. The next important feature of slop is that feels forced upon us, whether by a corporation or an algorithm. It’s in the name. We’re the little piggies and it’s the gruel in the trough. But the last feature is the most crucial. It not only feels worthless and ubiquitous, it also feels optimized to be so. […]

I have trimmed a few examples from this long paragraph — in part because I do not want emails about Taylor Swift. I will come back to this definition, but I want to touch on something in the next paragraph:

Speaking of Ryan Reynolds, the film essayist Patrick Willems has been attacking this idea from a different direction in a string of videos over the last year. In one essay titled, “When Movie Stars Become Brands,” Willems argues that in the mid-2000s, after a string of bombs, Dwayne Johnson and Ryan Reynolds adapted a strategy lifted from George Clooney, where an actor builds brands and side businesses to fund creatively riskier movie projects. Except Reynolds and Johnson never made the creatively riskier movie projects and, instead, locked themselves into streaming conglomerates and allowed their brands to eat their movies. The zenith of this being their 2021 Netflix movie Red Notice, which literally opens with competing scenes advertising their respective liquor brands. A movie that, according to Netflix, is their most popular movie ever.

This is a notable phenomenon, but I think Broderick would do well to cite another Willems video essay, too. That one, which seems just as relevant, is all about the word “content”. Willems’ obvious disdain for the word — one which I share — is rooted in its everythingness and, therefore, nothingness. In it, he points to a specific distinction:

[…] In a video on the PBS “Ideas” channel, Mike Rugnetta addressed this topic, coming at it from a similar place as me. And he put forth the idea that the “content” label also has to do with how we experience something.

He separates it into “consumption” versus “mere consumption”. In other words, yes, we technically are consuming everything, but there’s the stuff that we fully focus on and engage with, and then the stuff we look at more passively, like tweets we scroll past or a gaming stream we half-watch in the background.

So the idea Mike proposes is that maybe the stuff that we merely consume is content. And if we consume it and actually focus on it, then it’s something else.

What Broderick is getting at — and so too, I think, are the hordes of people posting about “slop” on X to which he links in the first paragraph — is a combination of this phenomenon and the marketing-driven vehicles for Johnson and Reynolds. Willems correctly points out that actors and other public figures have long been spokespeople for products, including their own. Also, there have always been movies and shows which lack any artistic value. Those things have not changed.

What has changed, however, is the sheer volume of media released now. Nearly six hundred English-language scripted shows were released in 2022 alone, though that declined in 2023 to below five hundred in part because of striking writers and actors. According to IMDB data, 4,100 movies were released in 1993, 6,125 in 2003, 15,451 in 2013, and 19,626 in 2023.

As I have previously argued, volume is not inherently bad. The self-serve approach of streaming services means shows do not need to fit into an available airtime slot on a particular broadcast channel. It means niche programming is just as available as blockbusters. The only scheduling which needs to be done is on the viewer’s side, fitting a new show or movie in between combing through the 500 hours of YouTube videos uploaded every minute, some of which have the production quality of mid-grade television or movies, not to mention a world of streaming music.

As Willems says, all of this media gets flattened in description — “content” — and in delivery. If you want art, you can find it, but if you just want something for, as Rugnetta says, “mere consumption”, you can find that — or, more likely, it will be served to you. This is true of all forms of media.

There are two things which help older media’s reputation for quality, with the benefit of hindsight: a bunch of bad stuff has been forgotten, and there was less of it to begin with. It was a lot harder to make a movie when it had to be shot to tape or film, and more difficult to make it look great. A movie with a jet-setting hero was escapist in the 1960s, but lower-cost airfare means those locations no longer seem so exotic. If you wanted to give it a professional sheen, you had to rent expensive lenses, build detailed sets, shoot at specific times of day, and light it carefully. If you wanted a convincing large-scale catastrophe on-screen, it had to be built in real life. These are things which can now be done in post-production, albeit not easily or necessarily cheaply. I am not a hater of digital effects. But it is worth mentioning the ability of effects artists to turn a crappy shot into something cinematic, and to craft apocalyptic scenery without constructing a single physical element.

We are experiencing the separating of wheat from chaff in real time, and with far more of each than ever before. Unfortunately, soulless and artless vehicles for big stars sell well. Explosions sell. Familiar sells.

“Content” sells.

Here is where Broderick lands:

And six years later, it’s not just music that feels forgettable and disposable. Most popular forms of entertainment and even basic information have degraded into slop simply meant to fill our various feeders. It doesn’t matter that Google’s AI is telling you to put glue on pizza. They needed more data for their language model, so they ingested every Reddit comment ever. This makes sense because from their perspective what your search results are doesn’t matter. All that matters is that you’re searching and getting a response. And now everything has meet these two contradictory requirements. It must fill the void and also be the most popular thing ever. It must reach the scale of MrBeast or it can’t exist. Ironically enough, though, when something does reach that scale now, it’s so watered down and forgettable it doesn’t actually feel like it exists.

One may quibble with the precise wording that “what your search results are doesn’t matter” to Google. The company appears to have lost market share as trust in search has declined, though there is conflicting data and the results may not be due to user preference. But the gist of this is, I think, correct.

People seem to understand they are being treated as mere consumers in increasingly financialized expressive media. I have heard normal people in my life — people without MBAs, and who do not work in marketing, and who are not influencers — throw around words like “monetize” and “engagement” in a media context. It is downright weird.

The word “slop” seems like a good catch-all term finding purchase in the online vocabulary, but I think the popularization of “content” — in the way it is most commonly used — foreshadowed this shift. Describing artistic works as though they are filler for a container is a level of disrespect not even a harsh review could achieve. Not all “content” is “slop”, but all “slop” is “content”. One thing “slop” has going for it is its inherent ugliness. People excitedly talk about all the “content” they create. Nobody will be proud of their “slop”.

With apologies to Mitchell and Webb.

In a word, my feelings about A.I. — and, in particular, generative A.I. — are complicated. Just search “artificial intelligence” for a reverse chronological back catalogue of where I have landed. It feels like an appropriate position to hold for a set of nascent technologies so sprawling they seem to imply radical change.

Or perhaps A.I., like so many other promising new technologies, will turn out to be illusory as well. Instead of altering the fundamental fabric of reality, maybe it will be used to create better versions of features we have used for decades. This would not necessarily be a bad outcome. I have used this example before, but the evolution of object removal tools in photo editing software is illustrative. There is no longer a need to spend hours cloning part of an image over another area and gently massaging it to look seamless. The more advanced tools we have today allow an experienced photographer to make an image they are happy with in less time, and lower barriers for newer photographers.

A blurry boundary is crossed when an entire result is achieved through automation. There is a recent Drew Gooden video which, even though not everything resonated with me, I enjoyed.1 There is a part in the conclusion which I wanted to highlight because I found it so clarifying (emphasis mine):

[…] There’s so many tools along the way that help you streamline the process of getting from an idea to a finished product. But, at a certain point, if “the tool” is just doing everything for you, you are not an artist. You just described what you wanted to make, and asked a computer to make it for you.

You’re also not learning anything this way. Part of what makes art special is that it’s difficult to make, even with all the tools right in front of you. It takes practice, it takes skill, and every time you do it, you expand on that skill. […] Generative A.I. is only about the end product, but it won’t teach you anything about the process it would take to get there.

This gets at the question of whether A.I. is more often a product or a feature — the answer to which, I think, is both, just not in a way that is equally useful. Gooden shows an X thread in which Jamian Gerard told Luma to convert the “Abbey Road” cover to video. Even though the results are poor, I think it is impressive that a computer can do anything like this. It is a tech demo; a more practical application can be found in something like the smooth slow motion feature in the latest release of Final Cut Pro.

“Generative A.I. is only about the end product” is a great summary of the emphasis we put on satisfying conclusions instead of necessary rote procedure. I cook dinner almost every night. (I recognize this metaphor might not land with everyone due to time constraints, food availability, and physical limitations, but stick with me.) I feel lucky that I enjoy cooking, but there are certainly days when it is a struggle. It would seem more appealing to type a prompt and make a meal appear using the ingredients I have on hand, if that were possible.

But I think I would be worse off if I did. The times I have cooked while already exhausted have increased my capacity for what I can do under pressure, and lowered my self-imposed barriers. These meals have improved my ability to cook more elaborate dishes when I have more time and energy, just as those more complicated meals also make me a better cook.2

These dynamics show up in lots of other forms of functional creative expression. Plenty of writing is not particularly artistic, but the mental muscle exercised by trying to get ideas into legible words is also useful when you are trying to produce works with more personality. This is true for programming, and for visual design, and for coordinating an outfit — any number of things which are sometimes individually expressive, and other times utilitarian.

This boundary only exists in these expressive forms. Nobody, really, mourns the replacement of cheques with instant transfers. We do not get better at paying our bills no matter which form they take. But we do get better at all of the things above by practicing them even when we do not want to, and when we get little creative satisfaction from the result.

It is dismaying to see so many A.I. product demos show how they can be used to circumvent this entire process. I do not know if that is how they will actually be used. There are plenty of accomplished artists using A.I. to augment their practice, like Sougwen Chen, Anna Ridler, and Rob Sheridan. Writers and programmers are using generative products every day as tools, but they must have some fundamental knowledge to make A.I. work in their favour.

Stock photography is still photography. Stock music is still music, even if nobody’s favourite song is “Inspiring Corporate Advertising Tech Intro Promo Business Infographics Presentation”. (No judgement if that is your jam, though.) A rushed pantry pasta is still nourishment. A jingle for an insurance commercial could be practice for a successful music career. A.I. should just be a tool — something to develop creativity, not to replace it.


  1. There are also some factual errors. At least one of the supposed Google Gemini answers he showed onscreen was faked, and Adobe’s standard stock license is less expensive than the $80 “Extended” license Gooden references. ↥︎

  2. I am wary of using an example like cooking because it implies a whole set of correlative arguments which are unkind and judgemental toward people who do not or cannot cook. I do not want to provide kindling for these positions. ↥︎

Tuesdays used to be my favourite day of the week because it was the day when a bunch of new music would typically be released. That is no longer the case — but only because music releases were moved to Fridays. The music itself is as good as ever.

It remains a frustratingly recurring theme that today’s music is garbage, and yesterday’s music is gold. Music today is “just noise”, according to generations of people who insist the music in their day — probably when they were in their twenties — was better. Today’s peddler of this dead horse trope is YouTuber and record producer Rick Beato, who published a video about the two “real reason[s] music is getting worse”: it is “too easy to make”, and “too easy to consume”.

Beato is right that new technologies have been steadily making it easier to make music and listen to it, but he is wrong that they are destructive. “Good” and “bad” are subjective, to be sure, but it seems self-evident that more good music is being made now than ever before, simply because people are so easily able to translate an idea to published work. Any artist can experiment with any creative vision. It is an amazing time.

This also suggests more bad music is being made, but who cares? Bad music has existed forever. The lack of a gatekeeper now means it gains wider distribution, but that has more benefits than problems. Maybe some people will stumble across it and recognize the potential in a burgeoning artist.

Aside from the lack of distribution avenues historically, the main reason we do not remember bad records is that they are no longer played. This does not mean unpopular music is inherently bad, of course, only that time sifts things we generally like from things we do not.

Perhaps one’s definition of “good” includes how influential a work of art turns out to be. Again, it seems difficult to argue modern music is not as influential as that which has preceded it. It may be too early to tell what will prove its influence, to be sure, but we have relatively recent examples which indicate otherwise. The Weeknd spawned an entire genre of moody R&B imitators from a series of freely distributed mixtapes. The entire genre of trap spread to the world from its origins in Atlanta, to the extent that its unique characteristics have underpinned much of pop music for a decade. Many of its biggest artists made their name on DatPiff. Just two of countless examples.

If you actually love music for all that it can be, you are spoiled for choice today. If anything, that is the biggest problem with music today: there is so much of it and it can be overwhelming. The ease with which music can be made does not necessarily make it worse, but it does make it more difficult if you want to try as much of it as you can. I have only a small amount of sympathy when Beato laments how the ease of streaming services devalues artistry because of how difficult it can be to spend time with any one album when there is another to listen to, and then another. But anyone can make the decision to hold the queue and embrace a single release. (And if artistry is something we are concerned with, why call it “consuming”? A good record is not something I want to chug down.)

We can try any music we like these days. We can explore old releases just as easily as we can see what has just been published. We can and should take a chance on genres we had never considered before. We can explore new recordings of jazz and classical compositions. Every Friday is a feast for the ears — if you want it to be. If you really like music, you are living in the luckiest time. I know that is how I feel. I just wish artists could get paid an appropriate amount for how much they contribute to the best parts of life.

Since 2022, the European Union has been trying to pass legislation requiring digital service providers to scan for and report CSAM as it passes through their services.

Giacomo Zandonini, Apostolis Fotiadis, and Luděk Stavinoha, Balkan Insight, with a good summary in September:

Welcomed by some child welfare organisations, the regulation has nevertheless been met with alarm from privacy advocates and tech specialists who say it will unleash a massive new surveillance system and threaten the use of end-to-end encryption, currently the ultimate way to secure digital communications from prying eyes.

[…]

The proposed regulation is excessively “influenced by companies pretending to be NGOs but acting more like tech companies”, said Arda Gerkens, former director of Europe’s oldest hotline for reporting online CSAM.

This is going to require a little back-and-forth, and I will pick up the story with quotations from Matthew Green’s introductory remarks to a panel before the European Internet Services Providers Association in March 2023:

The only serious proposal that has attempted to address this technical challenge was devised — and then subsequently abandoned — by Apple in 2021. That proposal aimed only at detecting known content using a perceptual hash function. The company proposed to use advanced cryptography to “split” the evaluation of hash comparisons between the user’s device and Apple’s servers: this ensured that the device never received a readable copy of the hash database.

[…]

The Commission’s Impact Assessment deems the Apple approach to be a success, and does not grapple with this failure. I assure you that this is not how it is viewed within the technical community, and likely not within Apple itself. One of the most capable technology firms in the world threw all their knowledge against this problem, and were embarrassed by a group of hackers: essentially before the ink was dry on their proposal.

Daniel Boffey, the Guardian, in May 2023:

Now leaked internal EU legal advice, which was presented to diplomats from the bloc’s member states on 27 April and has been seen by the Guardian, raises significant doubts about the lawfulness of the regulation unveiled by the European Commission in May last year.

The European Parliament in a November 2023 press release:

In the adopted text, MEPs excluded end-to-end encryption from the scope of the detection orders to guarantee that all users’ communications are secure and confidential. Providers would be able to choose which technologies to use as long as they comply with the strong safeguards foreseen in the law, and subject to an independent, public audit of these technologies.

Joseph Menn, Washington Post, in March, reporting on the results of a European court ruling:

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]

The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

This is not directly about the proposed CSAM measures, but it is precedent for European regulators to follow.

Natasha Lomas, TechCrunch, this week:

The most recent Council proposal, which was put forward in May under the Belgian presidency, includes a requirement that “providers of interpersonal communications services” (aka messaging apps) install and operate what the draft text describes as “technologies for upload moderation”, per a text published by Netzpolitik.

Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”

Meredith Whittaker, CEO of Signal, issued a PDF statement criticizing the proposal:

Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games. They’ve come back to the table with the same idea under a new label. Instead of using the previous term “client-side scanning,” they’ve rebranded and are now calling it “upload moderation.” Some are claiming that “upload moderation” does not undermine encryption because it happens before your message or video is encrypted. This is untrue.

Patrick Breyer, of Germany’s Pirate Party:

Only Germany, Luxembourg, the Netherlands, Austria and Poland are relatively clear that they will not support the proposal, but this is not sufficient for a “blocking minority”.

Ella Jakubowska on X:

The exact quote from [Věra Jourová] the Commissioner for Values & Transparency: “the Commission proposed the method or the rule that even encrypted messaging can be broken for the sake of better protecting children”

Věra Jourová on X, some time later:

Let me clarify one thing about our draft law to detect online child sexual abuse #CSAM.

Our proposal is not breaking encryption. Our proposal preserves privacy and any measures taken need to be in line with EU privacy laws.

Matthew Green on X:

Coming back to the initial question: does installing surveillance software on every phone “break encryption”? The scientist in me squirms at the question. But if we rephrase as “does this proposal undermine and break the *protections offered by encryption*”: absolutely yes.

Maïthé Chini, the Brussels Times:

It was known that the qualified majority required to approve the proposal would be very small, particularly following the harsh criticism of privacy experts on Wednesday and Thursday.

[…]

“[On Thursday morning], it soon became clear that the required qualified majority would just not be met. The Presidency therefore decided to withdraw the item from today’s agenda, and to continue the consultations in a serene atmosphere,” a Belgian EU Presidency source told The Brussels Times.

That is a truncated history of this piece of legislation: regulators want platform operators to detect and report CSAM; platforms and experts say that will conflict with security and privacy promises, even if media is scanned prior to encryption. This proposal may be specific to the E.U., but you can find similar plans to curtail or invalidate end-to-end encryption around the world:

I selected English-speaking areas because that is the language I can read, but I am sure there are more regions facing threats of their own.

We are not served by pretending this threat is limited to any specific geography. The benefits of end-to-end encryption are being threatened globally. The E.U.’s attempt may have been pushed aside for now, but another will rise somewhere else, and then another. It is up to civil rights organizations everywhere to continue arguing for the necessary privacy and security protections offered by end-to-end encryption.

After Robb Knight found — and Wired confirmed — that Perplexity summarizes websites which have followed its opt-out instructions, I noticed a number of people making a similar claim: this is nothing but a big misunderstanding of the function of controls like robots.txt. A Hacker News comment thread contains several versions of these two arguments:

  • robots.txt is only supposed to affect automated crawling of a website, not explicit retrieval of an individual page.

  • It is fair to use a user agent string which does not disclose automated access because this request was not automated per se, as the user explicitly requested a particular page.

That is, publishers should expect the controls provided by Perplexity to apply only to its indexing bot, not a user-initiated page request. Wary as I am of being the kind of person who replies to pseudonymous comments on Hacker News, I think this is an unnecessarily absolutist reading of how site owners expect the Robots Exclusion Protocol to work.

To be fair, that protocol was published in 1994, well before anyone had to worry about websites being used as fodder for large language model training. And, to be fairer still, it has never been formalized. A spec was only recently proposed in September 2022. It has so far been entirely voluntary, but the draft standard proposes a more rigid expectation that rules will be followed. Yet it does not differentiate between different types of crawlers — those for search, others for archival purposes, and ones which power the surveillance economy — and contains no mention of A.I. bots. Any non-human means of access is expected to comply.
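The protocol itself is simple enough to check mechanically. As a sketch, here is how a hypothetical publisher's robots.txt might single out one bot while leaving a search crawler alone, evaluated with Python's standard-library parser — the rules shown are illustrative, not any real site's file:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: block one named bot entirely,
# allow a search crawler, and give everyone else one restriction.
robots_txt = """\
User-agent: PerplexityBot
Disallow: /

User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The named bot is refused everywhere; the search crawler is not.
print(parser.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))      # True
```

Nothing in the file enforces anything, of course — compliance is entirely up to the client doing the fetching, which is the crux of the dispute.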

The question seems to be whether what Perplexity is doing ought to be considered crawling. It is, after all, responding to a direct retrieval request from a user. This is subtly different from how a user might search Google for a URL, in which case they are asking whether that site is in the search engine’s existing index. Perplexity is ostensibly following real-time commands: go fetch this webpage and tell me about it.

But it clearly is also crawling in a more traditional sense. The New York Times and Wired both disallow PerplexityBot, yet I was able to ask it to summarize a set of recent stories from both publications. At the time of writing, the Wired summary is about seventeen hours out of date, and the Times summary is about two days old. Neither publication has changed its robots.txt directives recently; they were both blocking Perplexity last week, and they are blocking it today. Perplexity is not fetching these sites in real-time as a human or web browser would. It appears to be scraping sites which have explicitly said that is something they do not want.

Perplexity should be following those rules and it is shameful it is not. But what if you ask for a real-time summary of a particular page, as Knight did? Is that something which should be identifiable by a publisher as a request from Perplexity, or from the user?

The Robots Exclusion Protocol may be voluntary, but a more robust method is to block bots by detecting their user agent string. Instead of expecting visitors to abide by your “No Homers Club” sign, you are checking IDs. But these strings are unreliable and there are often good reasons for evading user agent sniffing.
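To illustrate what checking IDs amounts to, here is a minimal sketch of a server-side user agent check. The blocklist entries are assumptions for illustration; note that a client which simply omits or changes its string — exactly the behaviour at issue — sails straight through:

```python
# Illustrative blocklist of bot identifiers a publisher might refuse.
BLOCKED_AGENTS = ("perplexitybot", "gptbot", "ccbot")

def is_blocked(user_agent: str) -> bool:
    """Return True if the User-Agent header names a blocked bot."""
    ua = user_agent.lower()
    return any(bot in ua for bot in BLOCKED_AGENTS)

# A bot which discloses itself is caught…
print(is_blocked("Mozilla/5.0 (compatible; PerplexityBot/1.0)"))  # True
# …but one presenting a generic browser string is indistinguishable
# from a real visitor.
print(is_blocked("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)"))  # False
```

This is the fragility of the whole scheme: it only works against bots that volunteer their identity.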

Perplexity says its bot is identifiable by both its user agent and the IP addresses from which it operates. Remember: this whole controversy is that it sometimes discloses neither, making it impossible to differentiate Perplexity-originating traffic from a real human being — and there is a difference.

A webpage being rendered through a web browser is subject to the quirks and oddities of that particular environment — ad blockers, Reader mode, screen readers, user style sheets, and the like — but there is a standard. A webpage being rendered through Perplexity is actually being reinterpreted and modified. The original text of the page is transformed through automated means about which neither the reader nor the publisher has any understanding.

This is true even if you ask it for a direct quote. I asked for a full paragraph of a recent article and it mashed together two separate sections. They are direct quotes, to be sure, but the article must have been interpreted to generate this excerpt.1

It is simply not the case that requesting a webpage through Perplexity is akin to accessing the page via a web browser. It is more like automated traffic — even if it is being guided by a real person.

The existing mechanisms for restricting the use of bots on our websites are imperfect and limited. Yet they are the only tools we have right now to opt out of participating in A.I. services if that is something one wishes to do, short of putting pages or an entire site behind a user name and password. It is completely reasonable for someone to assume their signal of objection to any robotic traffic ought to be respected by legitimate businesses. The absolute least Perplexity can do is to respect those objections by clearly and consistently identifying itself, and to exclude websites which have indicated they do not want to be accessed by these means.


  1. I am not presently blocking Perplexity, and my argument is not related to its ability to access the article. I am only illustrating how it reinterprets text. ↥︎

If you had just been looking at the headlines from major research organizations, you would see a lack of confidence from the public in big business, technology companies included. For years, poll after poll from around the world has found high levels of distrust in their influence, handling of private data, and new developments.

If these corporations were at all worried about this, they are not much showing it in their products — particularly the A.I. stuff they have been shipping. There has been little attempt at abating last year’s trust crisis. Google decided to launch overconfident summaries for a variety of search queries. Far from helping to sift through all that has ever been published on the web to mash together a representative summary, it was instead an embarrassing mess that made the company look ill prepared for the concept of satire. Microsoft announced a product which will record and interpret everything you do and see on your computer, and framed it as a good thing.

Can any of them see how this looks? If not — if they really are that unaware — why should we turn to them to fill gaps and needs in society? I certainly would not wish to indulge businesses which see themselves as entirely separate from the world.

It is hard to imagine they do not, though. Sundar Pichai, in an interview with Nilay Patel, recognised there were circumstances in which an A.I. summary would be inappropriate, and cautioned that the company still considers it a work in progress. Yet Google still turned it on by default in the U.S. with plans to expand worldwide this year.

Microsoft has responded to criticism by promising Recall will now be a feature users must opt into, rather than something they must turn off after updating Windows. The company also says there are more security protections for Recall data than originally promised but, based on its track record, maybe do not get too excited yet.

These product introductions all look like hubris. Arrogance, really — recognition of the significant power these corporations wield and the lack of competition they face. Google can poison its search engine because where else are most people going to go? How many people would turn off Recall, something which requires foreknowledge of its existence, under Microsoft’s original rollout strategy?

It is more or less an admission they are all comfortable gambling with their customers’ trust to further the perception they are at the forefront of the new hotness.

None of this is a judgement on the usefulness of these features or their social impact. I remain perplexed by the combination of a crisis of trust in new technologies, and the unwillingness of the companies responsible to engage with the public. There seems to be little attempt at persuasion. Instead, we are told to get on board because this rocket ship is taking off with or without us. Concerned? Too bad: the rocket ship is shaped like a giant middle finger.

What I hope we see Monday from Apple — a company which has portrayed itself as more careful and practical than many of its contemporaries — is a recognition of how this feels from outside the industry. Expect “A.I.” to be repeated in the presentation until you are sick of those two letters; investors are going to eat it up. When normal people update their phones in September, though, they should not feel like they are being bullied into accepting our A.I. future.

People need to be given time to adjust and learn. If the polls are representative, very few people trust giant corporations to get this right — understandably — yet these tech companies seem to believe we are as enthusiastic about every change they make as they are. Sorry, we are not, no matter how big a smile a company representative is wearing when they talk about it. Investors may not be patient but many of the rest of us need time.

Apple finished naming what it — well, its “team of experts alongside a select group of artists […] songwriters, producers, and industry professionals” — believes are the hundred best albums of all time. Like pretty much every list of the type, it is overwhelmingly Anglocentric, there are obvious picks, surprise appearances good and bad, and snubs.

I am surprised the publication of this list has generated as much attention as it has. There is a whole Wall Street Journal article with more information about how it was put together, a Slate thinkpiece arguing this ranking “proves [Apple has] lost its way”, and a Variety article claiming it is more-or-less “rage bait”.

Frankly, none of this feels sincere. Not Apple’s list, and not the coverage treating it as meaningful art criticism. I am sure there are people who worked hard on it — Apple told the Journal “about 250” — and truly believe their rating carries weight. But it is fluff.

Make no mistake: this is a promotional exercise for Apple Music more than it is criticism. Sure, most lists of this type are also marketing for publications like Rolling Stone and Pitchfork and NME. Yet, for how tepid the opinions of each outlet often are, they have each given out bad reviews. We can therefore infer they have specific tastes and ideas about what separates great art from terrible art.

Apple has never said a record is bad. It has never made you question whether the artist is trying their best. It has never presented criticism so thorough it makes you wince on behalf of the people who created the album.

Perhaps the latter is a poor metric. After Steve Jobs’ death came a river of articles questioning the internal culture he fostered, with several calling him an “asshole”. But that is mixing up a mean streak and a critical eye — Jobs, apparently, had both. A fair critic can use their words to dismantle an entire project and explain why it works or, just as important, why it does not. The latter can hurt; ask any creative person who has been on the receiving end. Yet exploring why something is not good enough is an important skill to develop as both a critic and a listener.

Dan Brooks, Defector:

There has been a lot of discussion about what music criticism is for since streaming reduced the cost of listening to new songs to basically zero. The conceit is that before everything was free, the function of criticism was to tell listeners which albums to buy, but I don’t think that was ever it. The function of criticism is and has always been to complicate our sense of beauty. Good criticism of music we love — or, occasionally, really hate — increases the dimensions and therefore the volume of feeling. It exercises that part of ourselves which responds to art, making it stronger.

There are huge problems with the way music has historically been critiqued, most often along racial and cultural lines. There are still problems. We will always disagree about the fairness of music reviews and reviewers.

Apple’s list has nothing to do with any of that. It does not interrogate which albums are boring, expressionless, uncreative, derivative, inconsequential, inept, or artistically bankrupt. So why should we trust it to explain what is good? Apple’s ranking of albums lacks substance because it cannot say any of these things. Doing so would be a terrible idea for the company and for artists.

It is beyond my understanding why anyone seems to be under the impression this list is anything more than a business reminding you it operates a music streaming platform to which you can subscribe for eleven dollars per month.


Speaking of the app — some time after I complained there was no way in Apple Music to view the list, Apple added a full section, which I found via foursliced on Threads. It is actually not bad. There are stories about each album, all the reveal episodes from the radio show, and interviews.

You will note something missing, however: a way to play a given album. That is, one cannot visit this page in Apple Music, see an album on the list they are interested in, and simply tap to hear it. There are play buttons on the website and, if you are signed in with your Apple Music account, you can add them to your library. But I cannot find a way to do any of this from within the app.

Benjamin Mayo found a list, but I cannot find it through search or simply by browsing. Why is this not a more obvious feature? It makes me feel like a dummy.

Finally. The government of the United States finally passed a law that would allow it to force the sale of, or ban, software and websites from specific countries of concern. The target is obviously TikTok — it says so right in its text — but crafty lawmakers have tried to add enough caveats and clauses and qualifiers to, they hope, avoid it being characterized as a bill of attainder, and to permit future uses. This law is very bad. It is an ineffective and illiberal position that abandons democratic values over, effectively, a single app. Unfortunately, TikTok panic is a very popular position in the U.S. and, also, here in Canada.

The adversaries the U.S. is worried about are the “covered nations” defined in 2018 to restrict the acquisition by the U.S. of key military materials from four countries: China, Iran, North Korea, and Russia. The idea behind this definition was that it was too risky to procure magnets and other important components of, say, missiles and drones from a nation the U.S. considers an enemy, lest those parts be compromised in some way. So the U.S. wrote down its least favourite countries for military purposes, and that list is now being used in a bill intended to limit TikTok’s influence.

According to the law, it is illegal for any U.S. company to make available TikTok and any other ByteDance-owned app — or any app or website deemed a “foreign adversary controlled application” — to a user in the U.S. after about a year unless it is sold to a company outside the covered countries, and with no more than twenty percent ownership stake from any combination of entities in those four named countries. Theoretically, the parent company could be based nearly anywhere in the world; practically, if there is a buyer, it will likely be from the U.S. because of TikTok’s size. Also, the law specifically exempts e-commerce apps for some reason.

This could be interpreted as either creating an isolated version specifically for U.S. users or, as I read it, moving the global TikTok platform to a separate organization not connected to ByteDance or China.1 ByteDance’s ownership is messy, though mostly U.S.-based, but politicians worried about its Chinese origin have had enough, to the point they are acting with uncharacteristic vigour. The logic seems to be that it is necessary for the U.S. government to influence and restrict speech in order to prevent other countries from influencing or restricting speech in ways the U.S. thinks are harmful. That is, the problem is not so much that TikTok is foreign-owned, but that it has ownership ties to a country often antithetical to U.S. interests. TikTok’s popularity might, it would seem, be bad for reasons of espionage or influence — or both.

Power

So far, I have focused on the U.S. because it is the country that has taken the first step to require non-Chinese control over TikTok — at least for U.S. users but, due to the scale of its influence, possibly worldwide. It could force a business to entirely change its ownership structure. So it may look funny for a Canadian to explain their views of what the U.S. ought to do in a case of foreign political interference. This is a matter of relevance in Canada as well. Our federal government raised the alarm on “hostile state-sponsored or influenced actors” influencing Canadian media and said it had ordered a security review of TikTok. There was recently a lengthy public inquiry into interference in Canadian elections, with a special focus on China, Russia, and India. Clearly, the popularity of a Chinese application is, in the eyes of these officials, a threat.

Yet it is very hard not to see the rush to kneecap TikTok’s success as a protectionist reaction to shaking the U.S. dominance of consumer technologies, as convincingly expressed by Paris Marx at Disconnect:

In Western discourses, China’s internet policies are often positioned solely as attempts to limit the freedoms of Chinese people — and that can be part of the motivation — but it’s a politically convenient explanation for Western governments that ignores the more important economic dimension of its protectionist approach. Chinese tech is the main competitor to Silicon Valley’s dominance today because China limited the ability of US tech to take over the Chinese market, similar to how Japan and South Korea protected their automotive and electronics industries in the decades after World War II. That gave domestic firms the time they needed to develop into rivals that could compete not just within China, but internationally as well. And that’s exactly why the United States is so focused not just on China’s rising power, but how its tech companies are cutting into the global market share of US tech giants.

This seems like one reason why the U.S. has so aggressively pursued a divestment or ban since TikTok’s explosive growth in 2019 and 2020. On its face it is similar to some reasons why the E.U. has regulated U.S. businesses that have, it argues, disadvantaged European competitors, and why Canadian officials have tried to boost local publications that have seen their ad revenue captured by U.S. firms. Some lawmakers make it easy to argue it is a purely xenophobic reaction, like Senator Tom Cotton, who spent an exhausting minute questioning TikTok’s Singaporean CEO Shou Zi Chew about where he is really from. But I do not think it is entirely a protectionist racket.

A mistake I have made in the past — and which I have seen some continue to make — is assuming those who are in favour of legislating against TikTok are opposed to the kinds of dirty tricks it is accused of on principle. This is false. Many of these same people would be all too happy to allow U.S. tech companies to do exactly the same. I think the most generous version of this argument is one in which it is framed as a dispute between the U.S. and its democratic allies, and anxieties about the government of China — ByteDance is necessarily connected to the autocratic state — spreading messaging that does not align with democratic government interests. This is why you see few attempts to reconcile common objections over TikTok with the quite similar behaviours of U.S. corporations, government arms, and intelligence agencies. To wit: U.S.-based social networks also suggest posts with opaque math which could, by the same logic, influence elections in other countries. They also collect enormous amounts of personal data that is routinely wiretapped, and are required to secretly cooperate with intelligence agencies. The U.S. is not authoritarian as China is, but the behaviours in question are not unique to authoritarians. Those specific actions are unfortunately not what the U.S. government is objecting to. What it is disputing, in a most generous reading, is a specifically antidemocratic government gaining any kind of influence.

Espionage and Influence

It is easiest to start by dismissing the espionage concerns because they are mostly misguided. The peek into Americans’ lives offered by TikTok is no greater than that offered by countless ad networks and data brokers — something the U.S. is also trying to restrict more effectively through a comprehensive federal privacy law. So long as online advertising is dominated by a privacy-hostile infrastructure, adversaries will be able to take advantage of it. If the goal is to restrict opportunities for spying on people, it is idiotic to pass legislation against TikTok specifically instead of limiting the data industry.

But the charge of influence seems to have more to it, even though nobody has yet shown that TikTok is warping users’ minds in a (presumably) pro-China direction. Some U.S. lawmakers described its danger as “theoretical”; others seem positively terrified. There are a few different levels to this concern: are TikTok users uniquely subjected to Chinese government propaganda? Is TikTok moderated in a way that boosts or buries videos to align with Chinese government views? Finally, even if both of these things are true, should the U.S. be able to revoke access to software if it promotes ideologies or viewpoints — and perhaps explicit propaganda? As we will see, it looks like TikTok sometimes tilts in ways beneficial — or, at least, less damaging — to Chinese government interests, but there is no evidence of overt government manipulation and, even if there were, it is objectionable to require it to be owned by a different company or ban it.

The main culprit, it seems, is TikTok’s “uncannily good” For You feed that feels as though it “reads your mind”. Instead of users telling TikTok what they want to see, it just begins showing videos and, as people use the app, it figures out what they are interested in. How it does this is not actually that mysterious. A 2021 Wall Street Journal investigation found recommendations were made mostly based on how long you spent watching each video. Deliberate actions — like sharing and liking — play a role, sure, but if you scroll past videos of people and spend more time with a video of a dog, it learns you want dog videos.
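The watch-time signal the Journal described can be sketched in a few lines. Everything here — the topic labels, the seconds, the simple additive scoring — is a toy illustration of the principle, not TikTok’s actual system:

```python
from collections import defaultdict

# Toy watch log: (video topic, seconds spent before scrolling away).
watch_log = [
    ("people", 1.5),   # scrolled past almost immediately
    ("dog", 24.0),     # watched most of the video
    ("people", 2.0),
    ("dog", 18.0),
]

# Interest in a topic accumulates with time watched; no likes or
# shares are needed for the feed to learn a preference.
interest = defaultdict(float)
for topic, seconds in watch_log:
    interest[topic] += seconds

ranked = sorted(interest, key=interest.get, reverse=True)
print(ranked)  # ['dog', 'people']
```

Even this crude version captures the point: merely lingering is a signal, which is why the feed can feel like it reads your mind without you ever telling it anything.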

That is not so controversial compared to the opacity in how TikTok decides what specific videos are displayed and which ones are not. Why is this particular dog video in a user’s feed and not another similar one? Why is it promoting videos reflecting a particular political viewpoint or — so a popular narrative goes — burying those with viewpoints uncomfortable for its Chinese parent company? The mysterious nature of an algorithmic feed is the kind of thing into which you can read a story of your choosing. A whole bunch of X users are permanently convinced they are being “shadow banned” whenever a particular tweet does not get as many likes and retweets as they believe it deserved, for example, and were salivating at the thought of the company releasing its ranking code to solve a nonexistent mystery. There is a whole industry of people who say they can get your website to Google’s first page for a wide range of queries using techniques that are a mix of plausible and utterly ridiculous. Opaque algorithms make people believe in magic. An alarmist reaction to TikTok’s feed should be expected particularly as it was the first popular app designed around entirely recommended material instead of personal or professional connections. This has now been widely copied.

The mystery of that feed is a discussion which seems to have been ongoing basically since the 2018 merger of Musical.ly and TikTok, escalating rapidly to calls for it to be separated from its Chinese owner or banned altogether. In 2020, the White House attempted to force a sale by executive order. In response, TikTok created a plan to spin off an independent entity, but nothing materialized from this tense period.

March 2023 brought a renewed effort to divest or ban the platform. Chew, TikTok’s CEO, was called to a U.S. Congressional hearing and questioned for hours, to little effect. During that hearing, a report prepared for the Australian government was cited by some of the lawmakers, and I think it is a telling document. It is about eighty pages long — excluding its table of contents, appendices, and citations — and shows several examples of Chinese government influence on other products made by ByteDance. However, the authors found no such manipulation on TikTok itself, leading them to conclude:

In our view, ByteDance has demonstrated sufficient capability, intent, and precedent in promoting Party propaganda on its Chinese platforms to generate material risk that they could do the same on TikTok.

“They could do the same”, emphasis mine. In other words, if the authors had found TikTok boosting topics and videos on behalf of the Chinese government, they would have said so. They did not. The closest thing I could find to a covert propaganda campaign on TikTok anywhere in this report is this:

The company [ByteDance] tried to do the same on TikTok, too: In June 2022, Bloomberg reported that a Chinese government entity responsible for public relations attempted to open a stealth account on TikTok targeting Western audiences with propaganda”. [sic]

If we follow the Bloomberg citation — shown in the report as a link to the mysterious Archive.today site — the fuller context of the article by Olivia Solon disproves the impression you might get from reading the report:

In an April 2020 message addressed to Elizabeth Kanter, TikTok’s head of government relations for the UK, Ireland, Netherlands and Israel, a colleague flagged a “Chinese government entity that’s interested in joining TikTok but would not want to be openly seen as a government account as the main purpose is for promoting content that showcase the best side of China (some sort of propaganda).”

The messages indicate that some of ByteDance’s most senior government relations team, including Kanter and US-based Erich Andersen, Global Head of Corporate Affairs and General Counsel, discussed the matter internally but pushed back on the request, which they described as “sensitive.” TikTok used the incident to spark an internal discussion about other sensitive requests, the messages state.

This is the opposite conclusion to how this story was set up in the report. Chinese government public relations wanted to set up a TikTok account without any visible state connection and, when TikTok management found out about this, it said no. This Bloomberg article makes TikTok look good in the face of government pressure, not like it capitulates. Yes, it is worth being skeptical of this reporting. Yet if TikTok had acquiesced to the government’s demands, surely the report would provide some evidence.

While this report for the Australian Senate does not show direct platform manipulation, it does present plenty of examples where it seems like TikTok may be biased or self-censoring. Its authors cite stories from the Washington Post and Vice finding that searches for hashtags like #HongKong and #FreeXinjiang returned results favourable to the official Chinese government position. Sometimes, related posts did not appear in search results, which is not unique to TikTok — platforms regularly use crude search term filtering to restrict discovery for lots of reasons. I would not be surprised if bias or self-censorship were to blame for TikTok minimizing the visibility of posts critical of the subjugation of Uyghurs in China. However, it is basically routine for every social media product to be accused of suppression. The Markup found different types of posts on Instagram, for example, had captions altered or would no longer appear in search results, though it is unclear to anyone why that is the case. Meta said it was a bug, an explanation also offered frequently by TikTok.

The authors of the Australian report conducted a limited quasi-study comparing results for certain topics on TikTok to results on other social networks like Instagram and YouTube, again finding a handful of topics which favoured the government line. But there was no consistent pattern, either. Search results for “China military” on Instagram were, according to the authors, “generally flattering”, and X searches for “PLA” scarcely returned unfavourable posts. Yet results on TikTok for “China human rights”, “Tiananmen”, and “Uyghur” were overwhelmingly critical of Chinese official positions.

The Network Contagion Research Institute published its own report in December 2023, similarly finding disparities between the total number of posts with specific hashtags — like #DalaiLama and #TiananmenSquare — on TikTok and Instagram. However, the study contained some pretty fundamental errors, as pointed out by — and I cannot believe I am citing these losers — the Cato Institute. The study’s authors compared total lifetime posts on each social network and, while they say they expect 1.5–2.0× the posts on Instagram because of its larger user base, they do not factor in how many of those posts could have existed before TikTok was even launched. Furthermore, they assume similar cultures and a similar use of hashtags on each app. But even benign hashtags have ridiculous differences in how often they are used on each platform. There are, as of writing, 55.3 million posts tagged “#ThrowbackThursday” on Instagram compared to 390,000 on TikTok, a ratio of 141:1. If #ThrowbackThursday were part of this study, the disparity between the two platforms would rank similarly to #Tiananmen, one of the greatest in the Institute’s report.
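The arithmetic behind that comparison is easy to check for yourself. Here is a quick back-of-the-envelope sketch using only the post counts quoted above (figures as of writing, and the 1.5–2.0× baseline is the one the NCRI study itself assumed):

```python
# Back-of-the-envelope check of the #ThrowbackThursday comparison above.
# Post counts are the figures quoted in the text, as of writing.
instagram_posts = 55_300_000  # tagged posts on Instagram
tiktok_posts = 390_000        # tagged posts on TikTok

ratio = instagram_posts // tiktok_posts
print(f"{ratio}:1")  # 141:1

# The NCRI study expected only 1.5-2.0x more posts on Instagram owing to
# its larger user base, so even this benign hashtag overshoots that
# baseline by a factor of roughly 70 to 94.
print(round(ratio / 2.0), round(ratio / 1.5))  # 70 94
```

In other words, a hashtag with no political valence at all exceeds the study’s expected Instagram-to-TikTok range by nearly two orders of magnitude, which is why using raw lifetime counts as evidence of suppression is so shaky.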

The problem with most of these complaints, as their authors acknowledge, is that there is a known input and a perceived output, but there are oh-so-many unknown variables in the middle. It is impossible to know how much of what we see is a product of intentional censorship, unintentional biases, bugs, side effects of other decisions, or a desire to cultivate a less stressful and more saccharine environment for users. A report by Exovera (PDF) prepared for the U.S.–China Economic and Security Review Commission indicates exactly the latter: “TikTok’s current content moderation strategy […] adheres to a strategy of ‘depoliticization’ (去政治化) and ‘localization’ (本土化) that seeks to downplay politically controversial speech and demobilize populist sentiment”, apparently avoiding “algorithmic optimization in order to promote content that evangelizes China’s culture as well as its economic and political systems” which “is liable to result in backlash”. Meta, on its own platforms, said it would not generally suggest “political” posts to users but did not define exactly what qualifies. It said it limits posts on social issues because of user demand, though such posts have also been difficult to moderate. A difference in which posts are found on each platform for specific search terms is not necessarily reflective of government pressure, deliberate or not. Besides, it is not as though there is no evidence for straightforward propaganda on TikTok. One just needs to look elsewhere to find it.

Propaganda

The Office of the Director of National Intelligence recently released its annual threat assessment summary (PDF). It is unclassified and has few details, so the only thing it notes about TikTok is “accounts run by a PRC propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022”. It seems likely to me this is a reference to this article in Forbes, though this is a guess as there are no citations. The state-affiliated TikTok account in question — since made private — posted a bunch of news clips which portray the U.S. in an unflattering light. There is a related account, also marked as state-affiliated, which continues to post the same kinds of videos. It has over 33,000 followers, which sounds like a lot, but each post typically gets only a few hundred views. Some have been viewed thousands of times, others as few as thirteen times as of writing — on a platform with exaggerated engagement numbers. Nonetheless, the conclusion is obvious: these accounts are government propaganda, and TikTok willingly hosts them.

But that is something it has in common with all social media platforms. The Russian RT News network and China’s People’s Daily newspaper have X and Facebook accounts with follower counts in the millions. Until recently, the North Korean newspaper Uriminzokkiri operated accounts on Instagram and X. It and other North Korean state-controlled media used to have YouTube channels, too, but they were shut down by YouTube in 2017 — a move that was protested by academics studying the regime’s activities. The irony of U.S.-based platforms helping to disseminate propaganda from the country’s adversaries is that it can be useful to understand them better. Merely making propaganda available — even promoting it — is both a risk and a benefit of generous speech permissions.

The DNI’s unclassified report has no details about whether TikTok is an actual threat, and the FBI has “nothing to add” in response to questions about whether TikTok is currently doing anything untoward. More sensitive information was apparently provided to U.S. lawmakers ahead of their March vote and, though few details of what, exactly, was said have emerged, several were not persuaded by what they heard, including Rep. Sara Jacobs of California:

As a member of both the House Armed Services and House Foreign Affairs Committees, I am keenly aware of the threat that PRC information operations can pose, especially as they relate to our elections. However, after reviewing the intelligence, I do not believe that this bill is the answer to those threats. […] Instead, we need comprehensive data privacy legislation, alongside thoughtful guardrails for social media platforms – whether those platforms are funded by companies in the PRC, Russia, Saudi Arabia, or the United States.

Lawmakers like Rep. Jacobs were an exception among U.S. Congresspersons who, across party lines, were eager to make the case against TikTok. Ultimately, the divest-or-ban bill got wrapped up in a massive and politically popular spending package agreed to by both chambers of Congress. Its passage was enthusiastically received by the White House and it was signed into law within hours. Perhaps that outcome is the democratic one since polls so often find people in the U.S. support a sale or ban of TikTok.

I get it: TikTok scoops up private data, suggests posts based on opaque criteria, its moderation appears to be susceptible to biases, and it is a vehicle for propaganda. But you could replace “TikTok” in that sentence with any other mainstream social network and it would be just as true, albeit less scary to U.S. allies on its face.

A Principled Objection

Forcing TikTok to change its ownership structure, whether worldwide or only for a U.S. audience, is a betrayal of liberal democratic principles. To borrow from Jon Stewart, “if you don’t stick to your values when they’re being tested, they’re not values, they’re hobbies”. It is not surprising that a Canadian intelligence analysis specifically pointed out how those very same values are being taken advantage of by bad actors. This is not new. It is true of basically all positions hostile to democracy — from domestic nationalist groups in Canada and the U.S., to those which originate elsewhere.

Julian G. Ku, for China File, offered a seemingly reasonable rebuttal to this line of thinking:

This argument, while superficially appealing, is wrong. For well over one hundred years, U.S. law has blocked foreign (not just Chinese) control of certain crucial U.S. electronic media. The Protect Act [sic] fits comfortably within this long tradition.

Yet this counterargument falls apart both in its details and if you think about its further consequences. As Martin Peers writes at the Information, the U.S. does not prohibit all foreign ownership of media. And governing the internet like public airwaves gets way more complicated if you stretch it any further. Canada has broadcasting laws, too, and it is not alone. Should every country begin requiring social media platforms comply with laws designed for ownership of broadcast media? Does TikTok need disconnected local versions of its product in each country in which it operates? It either fundamentally upsets the promise of the internet, or it is mandating the use of protocols instead of platforms.

It also looks hypocritical. Countries with a more authoritarian bent and which openly censor the web have responded to even modest U.S. speech rules with mockery. When RT America — technically a U.S. company with Russian funding — was required to register as a foreign agent, its editor-in-chief sarcastically applauded U.S. free speech standards. The response from Chinese government officials and media outlets to the proposed TikTok ban has been similarly scoffing. Perhaps U.S. lawmakers are unconcerned about the reception of their policies by adversarial states, but it is an indicator of how these policies are being portrayed in those countries — a real-life “we are not so different, you and I” setup that, while falsely equivalent, makes it easy for authoritarian states to claim that democracies have no values and cannot work. Unless we want to contribute to the fracturing of the internet — please, no — we cannot govern social media platforms by mirroring policies we ostensibly find repellant.

The way the government of China seeks to shape the global narrative is understandably concerning given its poor track record on speech freedoms. An October 2023 U.S. State Department “special report” (PDF) explored several instances where it boosted favourable narratives, buried critical ones, and pressured other countries — sometimes overtly, sometimes quietly. The government of China and associated businesses reportedly use social media to create the impression of dissent toward human rights NGOs, and apparently everything from university funding to new construction is a vector for espionage. On the other hand, China is terribly ineffective in its disinformation campaigns, and many of the cases profiled in that State Department report end in failure for the Chinese government initiative. In Nigeria, a pitch for a technologically oppressive “safe city” was rejected; an interview published in the Jerusalem Post with Taiwan’s foreign minister was not pulled down despite threats from China’s embassy in Israel. The report’s authors speculate about “opportunities for PRC global censorship”. But their only evidence is a “list [maintained by ByteDance] identifying people who were likely blocked or restricted” from using the company’s many platforms, though the authors can only speculate about its purpose.

The problem is that trying to address this requires better media literacy and better recognition of propaganda. That is a notoriously daunting problem. We are exposed to an increasingly destabilizing cocktail of facts and fiction, yet there is declining trust in the experts and institutions that could help us sort it out. Trying to address TikTok as a symptomatic or even causal component of this is frustratingly myopic. This stuff is everywhere.

Also everywhere is corporate propaganda arguing regulations would impede competition in a global business race. I hate to be mean by picking on anyone in particular, but a post from Om Malik has shades of this corporate slant. Malik is generally very good on the issues I care about, but this is not one we appear to agree on. After a seemingly impressed observation of how quickly Chinese officials were able to eject popular messaging apps from the App Store in the country, Malik compares the posture of each country’s tech industries:

As an aside, while China considers all its tech companies (like Bytedance) as part of its national strategic infrastructure, the United States (and its allies) think of Apple and other technology companies as public enemies.

This is laughable. Presumably, Malik is referring to the chillier reception these companies have faced from lawmakers, and antitrust cases against Amazon, Apple, Google, and Meta. But that tougher impression is softened by the U.S. government’s actual behaviour. When the E.U. announced the Digital Markets Act and Digital Services Act, U.S. officials sprang to the defence of tech companies. Even before these cases, Uber expanded in Europe thanks in part to its close relationship with Obama administration officials, as Marx pointed out. The U.S. unquestionably sees its tech industry dominance as a projection of its power around the world, hardly treating those companies as “public enemies”.

Far more explicit were the narratives peddled by lobbyists from Targeted Victory in 2022 about TikTok’s dangers, and American Edge beginning in 2020 about how regulations will cause the U.S. to become uncompetitive with China and allow TikTok to win. Both organizations were paid by Meta to spread those messages; the latter was reportedly founded after a single large contribution from Meta. Restrictions on TikTok would obviously be beneficial to Meta’s business.

If you wanted to boost the industry — and I am not saying Malik is — that is how you would describe the situation: the U.S. is fighting corporations instead of treating them as pals to win this supposed race. It is not the kind of framing one would use to dissuade people from the notion that this is a protectionist dispute over the popularity of TikTok. But it is the kind of thing you hear from corporations via their public relations staff and lobbyists, which trickles into public conversation.

This Is Not a TikTok Problem

TikTok’s divestment would not be unprecedented. The Committee on Foreign Investment in the United States — henceforth, CFIUS, pronounced “siff-ee-us” — demanded, after a 2019 review, that Beijing Kunlun Tech Co Ltd sell Grindr. CFIUS concluded the risk to users’ private data was too great for Chinese ownership given Grindr’s often stigmatized and ostracized user base. After its sale put Grindr safely in U.S. hands, a priest was outed thanks to data the app had been selling since before it was acquired by the Chinese firm, and the company is being sued for allegedly sharing users’ HIV status with third parties. Also, because it transacts with data brokers, it potentially still leaks users’ private information to Chinese companies (PDF), undercutting the fundamental concern that triggered this divestment.

Perhaps there is comfort in Grindr’s owner residing in a country where same-sex marriage is legal rather than in one where it is not. I think that makes a lot of sense, actually. But there remain plenty of problems unaddressed by its sale to a U.S. entity.

Similarly, this U.S. TikTok law does not actually solve potential espionage or influence for a few reasons. The first is that it has not been established that either is an actual problem with TikTok. Surely, if this were something we ought to be concerned about, there would be a pattern of evidence, instead of what we actually have, which is a fear that something bad could happen and there would be no way to stop it. But many things could happen. I am not opposed to prophylactic laws so long as they address reasonable objections. Yet it is hard not to see this law as an outgrowth of Cold War fears over leaflets of communist rhetoric. It seems completely reasonable to be less concerned about TikTok specifically while harbouring worries about democratic backsliding worldwide and the growing power of authoritarian states like China in international relations.

Second, the Chinese government does not need local ownership if it wants to exert pressure. The world wants the country’s labour and it wants its spending power, so businesses comply without a fight, and often preemptively. Hollywood films are routinely changed before, during, and after production to fit the expectations of state censors in China, a pattern which has been pointed out using the same “Red Dawn” anecdote in story after story after story. (Abram Dylan contrasted this phenomenon with U.S. military cooperation.) Apple is only too happy to acquiesce to the government’s many demands — see the messaging apps issue mentioned earlier — including, reportedly, in its media programming. Microsoft continues to operate Bing in China, and its censorship requirements have occasionally spilled elsewhere. Economic leverage over TikTok may seem different because it does not need access to the Chinese market — TikTok is banned in the country — but perhaps a new owner would be reliant upon China.

Third, the law permits an ownership stake no greater than twenty percent from a combination of any of the “covered nations”. I would be shocked if everyone who is alarmed by TikTok today would be totally cool if its parent company were only, say, nineteen percent owned by a Chinese firm.

If we are worried about bias in algorithmically sorted feeds, there should be transparency around how things are sorted, and more controls for users, including the ability to opt out entirely. If we are worried about privacy, there should be laws governing the collection, storage, use, and sharing of personal information. If ownership ties to certain countries are concerning, there are more direct actions available to monitor behaviour. I am mystified why CFIUS and TikTok apparently abandoned (PDF) a draft agreement that would give U.S.-based entities full access to the company’s systems, software, and staff, and would allow the government to end U.S. access to TikTok at a moment’s notice.

Any of these options would be more productive than this legislation. It is a law which empowers the U.S. president — whoever that may be — to declare the owner of an app with a million users a “covered company” if it is from one of those four nations. And it has been passed. TikTok will head to court to dispute it on free speech grounds, and the U.S. may respond by justifying its national security concerns.

Obviously, the U.S. government has concerns about the connections between TikTok, ByteDance, and the government of China, which have been extensively reported. Rest of World says ByteDance put pressure on TikTok to improve its financial performance and has taken greater control by bringing in management from Douyin. The Wall Street Journal says U.S. user data is not fully separated. And, of course, Emily Baker-White has reported — first for Buzzfeed News and now for Forbes — a litany of stories about TikTok’s many troubling behaviours, including spying on her. TikTok is a well-scrutinized app and reporters have found conduct that has understandably raised suspicions. But virtually all of these stories focus on data obtained from users, which Chinese agencies could do — and probably are doing — without relying on TikTok. None of them have shown evidence that TikTok’s suggestions are being manipulated at the behest or demand of Chinese officials. The closest they get is an article from Baker-White and Iain Martin which alleges TikTok “served up a flood of ads from Chinese state propaganda outlets”, yet waits until the third-to-last paragraph to acknowledge that “Meta and Google ad libraries show that both platforms continue to promote pro-China narratives through advertising”. All three platforms label state-run media outlets, albeit inconsistently. Meanwhile, U.S.-owned X no longer labels any outlets with state editorial control. It is not clear to me that TikTok would necessarily operate to serve the best interests of the U.S. even if it were owned by some well-financed individual or corporation based there.

For whatever it is worth, I am not particularly tied to the idea that the government of China would not use TikTok as a vehicle for influence. The government of China is clearly involved in propaganda efforts both overt and covert. I do not know how much of my concern is a product of living somewhere with a government and a media environment that focuses intently on the country as particularly hostile, and not necessarily undeservedly. The best version of this argument is one which questions the platform’s possible anti-democratic influence. Yes, there are many versions of this which cross into moral panic territory — a new Red Scare. I have tried to put this in terms of a more reasonable discussion, and one which is not explicitly xenophobic or envious. But even this more even-handed position is not well served by the law passed in the U.S., one which was passed without evidence of influence much more substantial than some choice hashtag searches. TikTok’s response to these findings was, among other things, to limit its hashtag comparison tool, which is not a good look. (Meta is doing basically the same by shutting down CrowdTangle.)

I hope this is not the beginning of similar isolationist policies among democracies worldwide, and that my own government takes this opportunity to recognize the actual privacy and security threats at the heart of its own TikTok investigation. Unfortunately, the head of CSIS is really leaning on the espionage angle. For years, the Canadian government has been pitching sorely needed updates to privacy legislation, and it would be better to see real progress made to protect our private data. We can do better than being a perpetual recipient of decisions made by other governments. I mean, we cannot do much — we do not have the power of the U.S. or China or the E.U. — but we can do a little bit in our own polite Canadian way. If we are worried about the influence of these platforms, a good first step would be to strengthen the rights of users. We can do that without trying to govern apps individually, or treating the internet like we do broadcasting.

To put it more bluntly, the way we deal with a possible TikTok problem is by recognizing it is not a TikTok problem. If we care about espionage or foreign influence in elections, we should address those concerns directly instead of focusing on a single app or company that — at worst — may be a medium for those anxieties. These are important problems, and it would be inexcusable to let them get lost in the distraction of whether TikTok is individually blameworthy.


  1. Because this piece has taken me so long to write, a whole bunch of great analyses have been published about this law. I thought the discussion on “Decoder” was a good overview, especially since two of the three panelists are former lawyers. ↥︎