To promote the launch of a new Beats Pill model, Apple’s Oliver Schusser was interviewed by Craig McLean of Wallpaper — where by “interviewed” I mostly mean “guided through talking points”. There is not much here unless you appreciate people discussing brands in the abstract.

However, McLean wanted to follow up on a question asked of Schusser in a 2019 issue of Music Week (PDF): “where do you want to see, or want Apple Music to be, in five years?” Schusser replied:

We want to be the best in what we do. And that means, obviously, we’ll continue to invest in the product and make sure we’re innovative and provide our customers with the best experience. We want to invest in our editorial and content, in our relationships with the industry, whether that’s the songwriters, music publishers, the labels, artists or anyone in the creative process. But that’s really what we’re trying to do. We just want to be the best at what we do.

Now, at the end of that timeframe, McLean had the opportunity to follow up: where does Apple Music find itself? Schusser answered:

We are very clearly positioned as the quality service. We don’t have a free offer [unlike Spotify’s advertising-supported tier]. We don’t give anything away. Everything is made by music fans and curated by experts. We are focused on music while other people are running away from music into podcasts and audiobooks. Our service is clearly dedicated to music.

With spatial audio, we’ve completely revolutionised the listening experience. [Historically] we went from mono to stereo and then, for decades, there was nothing else. Then we completely invented a new standard [where] now 90 per cent of our subscribers are listening to music in spatial audio. Which is great.

And little things, like the lyrics, for example, [which] you find on Apple Music, which are incredibly popular. We have a team of people that are actually transcribing the lyrics because we don’t want them to be crowd-sourced from the internet. We want to make sure they’re as pristine as possible. We’ve got motion artwork and song credits. We really try to make Apple Music a high quality place for music fans.

And while most others in the marketplace have sort of stopped innovating, we’ve been really pushing hard, whether it’s Apple Music Sing, which is a great singalong feature, like karaoke. Or Classical, which is an audience that had completely been neglected. We’re trying to make Apple Music the best place for people to listen to music. I’m super happy with that.

This is quite the answer, and one worth tediously picking apart claim-by-claim.

We are very clearly positioned as the quality service. We don’t have a free offer [unlike Spotify’s advertising-supported tier]. We don’t give anything away.

I am not sure how one would measure whether Apple Music is “positioned as the quality service”, but this is a fair point. Apple Music offers free streaming “Radio” stations, but it is substantially not a free service.

Everything is made by music fans and curated by experts.

This is a common line from Apple and a description which has carried on from the launch of Beats Music. But it seems only partially true. There are, for example, things which must be entirely made by algorithm, like user-personalized playlists and radio stations. Schusser provided more detail to McLean five years ago in that Music Week interview, saying “[o]f course there are algorithms involved [but] the algorithms only pick music that [our] editors and curators would choose”. I do not know what that means, but it is at least an acknowledgement of an automated system instead of the handmade impression Apple gives in the Wallpaper interview.

Other parts of Apple Music suspiciously seem informed by factors beyond what an expert curator might decide. Spellling’s 2021 record “The Turning Wheel”, a masterpiece of orchestral art pop, notably received a perfect score from music reviewer Anthony Fantano. Fantano also gave high scores to artists like Black Midi, JPEGMAFIA, and Lingua Ignota, none of whom make music anything like Spellling. Yet all are listed as “similar artists” to Spellling on Apple Music, and a fan of her work might be baffled by how different they sound. This speaks less of curation than of automation driven by overlapping audiences.

For the parts which are actually curated manually, do I know the people who are making these decisions? What is their taste like? What are their standards? Are they just following Apple’s suggestions? Why is the “Rock Drive” playlist the same as any mediocre FM rock radio station?

We are focused on music while other people are running away from music into podcasts and audiobooks. Our service is clearly dedicated to music.

Music has undeniably shaped Apple from its earliest days and, especially, since the launch of the iPod. Its executives have been fond of repeating the line “we love music” in press releases and presentations since 2001. But Apple’s dedication to separating music from other media is a five-year-old decision. It previously professed to be wholly dedicated to music while shipping an app that also played audiobooks and podcasts and movies and all manner of other things. Plus, have you seen the state of the Music app on MacOS?

This is clearly just a dig at Spotify. It would carry more weight if Apple Music felt particularly good for music playback. It does not. I have filed dozens of bugs against the MacOS, iOS, and tvOS versions affecting basic functionality: blank screens, poor search results, playback queue ordering issues, inconsistencies in playlist sort order between devices, problems with importing files, sync issues, cloud problems, and so forth. It is not uniformly terrible, but this is not a solid foundation for criticizing Spotify for not focusing on music enough.

Spotify sucks in other ways.

With spatial audio, we’ve completely revolutionised the listening experience. [Historically] we went from mono to stereo and then, for decades, there was nothing else.

This is untrue. People have been experimenting with multichannel audio in music since the 1960s. “Dark Side of the Moon” was released in quadraphonic audio in 1973, one of many albums issued that decade in a four-channel mix. In the 1990s, a bunch of albums were released on SACDs mixed in 5.1 surround sound.

What Apple can correctly argue is that few people actually listened to any multichannel music in these formats. They were niche. Now?

Then we completely invented a new standard [where] now 90 per cent of our subscribers are listening to music in spatial audio. Which is great.

A fair point, though with a couple of caveats. Part of the high adoption rate is because Spatial Audio is turned on by default, and Apple is paying a premium to incentivize artists to release multichannel mixes. It is therefore not too surprising that most people have listened to at least one Spatial Audio track.

But this is the first time I can remember Apple claiming it “invented” the format. Spatial Audio was originally framed as supporting music mixed in Dolby Atmos. In its truest guise — played through a set of AirPods or Beats headphones, which can track the movement of the wearer’s head — it forms a three-dimensional bubble of music, something which Apple did create. That is, Apple invented the part which makes Atmos-mixed audio playable on its systems within a more immersive apparent space. But Apple did not invent the “new standard” taking music beyond two channels — that was done long before, and then by Dolby.

Also, it is still bizarre to me how many of the most popular multichannel mixes of popular albums are not available in Spatial Audio on Apple Music. These are records the artists deliberately intended for a surround sound mix at the time they were released, yet they cannot be played in what must be the most successful multichannel music venue ever made? Meanwhile, a whole bunch of classic songs and albums have been remixed in Spatial Audio for no good reason.

And little things, like the lyrics, for example, [which] you find on Apple Music, which are incredibly popular. We have a team of people that are actually transcribing the lyrics because we don’t want them to be crowd-sourced from the internet. We want to make sure they’re as pristine as possible.

I really like the way Apple Music displays time-tracked lyrics. That said, I only occasionally see inaccuracies in lyrics on Genius and in Apple Music, so I am not sure how much more “pristine” Apple’s are.

Also, I question the implication of a team of people manually transcribing lyrics. I have nothing to support this, but I would wager heavily this is primarily machine-derived followed by manual cleanup.

We’ve got motion artwork and song credits.

Song credits are good. Motion artwork is a doodad.

We really try to make Apple Music a high quality place for music fans.

I want to believe this is true, but I have a hard time accepting today’s Apple Music is the high quality experience worth raving about. Maybe some music fans are clamouring for animated artwork and bastardized Spatial Audio mixes of classic albums. I am not one of them. What I want is a foundation of reliable, fast jukebox functionality extended to my local library and streaming media, with all this exciting stuff built on top.

And while most others in the marketplace have sort of stopped innovating, we’ve been really pushing hard, whether it’s Apple Music Sing, which is a great singalong feature, like karaoke. Or Classical, which is an audience that had completely been neglected.

These are good updates. Apple has not said much about Apple Music Sing or its popularity since it launched in December 2022, but it seems fine enough. Also, Spotify began trialling its own karaoke mode in June 2022, so maybe it should be credited with this innovation.

Apple Music Classical, meanwhile, remains a good but occasionally frustrating app. Schusser is right in saying this has been a neglected audience among mainstream streaming services. Apple’s effort is built upon Primephonic, which it acquired in August 2021 and relaunched, re-skinned, as Classical in March 2023. The app is better now than it was at launch, and it seems Apple is slowly refining it. It is important to me for there to be mainstream attention in this area.

We’re trying to make Apple Music the best place for people to listen to music. I’m super happy with that.

The thing I keep thinking about with the four-paragraph response above is that Schusser says a lot of the right things. Music is so important to so many people, and I would like to believe Apple cares as much about making the best music service and players as I do about listening to each week’s new releases.

I just wish everything was better than it currently is. There are many bugs I filed years ago which remain open, though I am happy to say the latest Sequoia beta appears to fix the bug that reversed the order of songs dragged to the playback queue. If Apple really wants to position Apple Music as “the quality service” that is “the best at what we do”, it should demonstrate that instead of just saying it.

Ryan Broderick:

You’ve probably seen the phrase AI slop already, the term most people have settled on for the confusing and oftentimes disturbing pictures of Jesus and flight attendants and veterans that are filling up Facebook right now. But the current universe of slop is much more vast than that. There’s Google Slop, YouTube slop, TikTok slop, Marvel slop, Taylor Swift slop, Netflix slop. One could argue that slop has become the defining “genre” of the 2020s. But even though we’ve all come around to this idea, I haven’t seen anyone actually define it. So today I’m going to try.

This piece does actually settle somewhere very good in its attempt to address the vibe of the entertainment and media world in which we swim, but it is a slog to get there. This is the first paragraph, and pulling it apart will take a minute. For a start, Broderick says the definition of “slop” has evaded him. That is plausible, but it does require him to have avoided Googling “ai slop definition”, at which point he would have surely seen Simon Willison’s post defining and popularizing the term:

Not all promotional content is spam, and not all AI-generated content is slop. But if it’s mindlessly generated and thrust upon someone who didn’t ask for it, slop is the perfect term for it.

This is a good definition, though Willison intentionally restricts it to describe A.I.-generated products. However, it seems like people are broadening the word’s use to cover things not made using A.I., and it appears Broderick wishes to reflect that.

Next paragraph:

Content slop has three important characteristics. The first being that, to the user, the viewer, the customer, it feels worthless. This might be because it was clearly generated in bulk by a machine or because of how much of that particular content is being created. The next important feature of slop is that feels forced upon us, whether by a corporation or an algorithm. It’s in the name. We’re the little piggies and it’s the gruel in the trough. But the last feature is the most crucial. It not only feels worthless and ubiquitous, it also feels optimized to be so. […]

I have trimmed a few examples from this long paragraph — in part because I do not want emails about Taylor Swift. I will come back to this definition, but I want to touch on something in the next paragraph:

Speaking of Ryan Reynolds, the film essayist Patrick Willems has been attacking this idea from a different direction in a string of videos over the last year. In one essay titled, “When Movie Stars Become Brands,” Willems argues that in the mid-2000s, after a string of bombs, Dwayne Johnson and Ryan Reynolds adapted a strategy lifted from George Clooney, where an actor builds brands and side businesses to fund creatively riskier movie projects. Except Reynolds and Johnson never made the creatively riskier movie projects and, instead, locked themselves into streaming conglomerates and allowed their brands to eat their movies. The zenith of this being their 2021 Netflix movie Red Notice, which literally opens with competing scenes advertising their respective liquor brands. A movie that, according to Netflix, is their most popular movie ever.

This is a notable phenomenon, but I think Broderick would do well to cite another Willems video essay, too. This one, which seems just as relevant, is all about the word “content”. Willems’ obvious disdain for the word — one which I share — is rooted in its everythingness and, therefore, nothingness. In it, he points to a specific distinction:

[…] In a video on the PBS “Ideas” channel, Mike Rugnetta addressed this topic, coming at it from a similar place as me. And he put forth the idea that the “content” label also has to do with how we experience something.

He separates it into “consumption” versus “mere consumption”. In other words, yes, we technically are consuming everything, but there’s the stuff that we fully focus on and engage with, and then the stuff we look at more passively, like tweets we scroll past or a gaming stream we half-watch in the background.

So the idea Mike proposes is that maybe the stuff that we merely consume is content. And if we consume it and actually focus on it, then it’s something else.

What Broderick is getting at — and so too, I think, are the hordes of people posting about “slop” on X to which he links in the first paragraph — is a combination of this phenomenon and the marketing-driven vehicles for Johnson and Reynolds. Willems correctly points out that actors and other public figures have long been spokespeople for products, including their own. Also, there have always been movies and shows which lack any artistic value. Those things have not changed.

What has changed, however, is the sheer volume of media released now. Nearly six hundred English-language scripted shows were released in 2022 alone, though that declined in 2023 to below five hundred in part because of striking writers and actors. According to IMDb data, 4,100 movies were released in 1993, 6,125 in 2003, 15,451 in 2013, and 19,626 in 2023.

As I have previously argued, volume is not inherently bad. The self-serve approach of streaming services means shows do not need to fit into an available airtime slot on a particular broadcast channel. It means niche programming is just as available as blockbusters. The only scheduling which needs to be done is on the viewer’s side, fitting a new show or movie in between combing through the 500 hours of YouTube videos uploaded every minute, some of which have the production quality of mid-grade television or movies, not to mention a world of streaming music.

As Willems says, all of this media gets flattened in description — “content” — and in delivery. If you want art, you can find it, but if you just want something for, as Rugnetta says, “mere consumption”, you can find that — or, more likely, it will be served to you. This is true of all forms of media.

There are two things which help older media’s reputation for quality, with the benefit of hindsight: a bunch of bad stuff has been forgotten, and there was less of it to begin with. It was a lot harder to make a movie when it had to be shot to tape or film, and more difficult to make it look great. If you wanted to give it a professional sheen, you had to rent expensive lenses, build detailed sets, shoot at specific times of day, and light it carefully. If you wanted a convincing large-scale catastrophe on-screen, it had to be built in real life. These are things which can now be done in post-production, albeit not easily or necessarily cheaply. (A movie with a jet-setting hero was escapist in the 1960s, too, but lower-cost airfare means those locations no longer seem so exotic.) I am not a hater of digital effects. But it is worth mentioning the ability of effects artists to turn a crappy shot into something cinematic, and to craft apocalyptic scenery without constructing a single physical element.

We are experiencing the separating of wheat and chaff in real time, and with far more of each than ever before. Unfortunately, soulless and artless vehicles for big stars sell well. Explosions sell. Familiar sells.

“Content” sells.

Here is where Broderick lands:

And six years later, it’s not just music that feels forgettable and disposable. Most popular forms of entertainment and even basic information have degraded into slop simply meant to fill our various feeders. It doesn’t matter that Google’s AI is telling you to put glue on pizza. They needed more data for their language model, so they ingested every Reddit comment ever. This makes sense because from their perspective what your search results are doesn’t matter. All that matters is that you’re searching and getting a response. And now everything has [to] meet these two contradictory requirements. It must fill the void and also be the most popular thing ever. It must reach the scale of MrBeast or it can’t exist. Ironically enough, though, when something does reach that scale now, it’s so watered down and forgettable it doesn’t actually feel like it exists.

One may quibble with the precise wording that “what your search results are doesn’t matter” to Google. The company appears to have lost market share as trust in search has declined, though there is conflicting data and the results may not be due to user preference. But the gist of this is, I think, correct.

People seem to understand they are being treated as mere consumers in increasingly financialized expressive media. I have heard normal people in my life — people without MBAs, and who do not work in marketing, and who are not influencers — throw around words like “monetize” and “engagement” in a media context. It is downright weird.

The word “slop” seems like a good catch-all term finding purchase in the online vocabulary, but I think the popularization of “content” — in the way it is most commonly used — foreshadowed this shift. Describing artistic works as though they are filler for a container is a level of disrespect not even a harsh review could achieve. Not all “content” is “slop”, but all “slop” is “content”. One thing “slop” has going for it is its inherent ugliness. People excitedly talk about all the “content” they create. Nobody will be proud of their “slop”.

With apologies to Mitchell and Webb.

In a word, my feelings about A.I. — and, in particular, generative A.I. — are complicated. Just search “artificial intelligence” for a reverse chronological back catalogue of where I have landed. It feels like an appropriate position to hold for a set of nascent technologies so sprawling and therefore implying radical change.

Or perhaps A.I., like so many other promising new technologies, will turn out to be illusory as well. Instead of altering the fundamental fabric of reality, maybe it will be used to create better versions of features we have used for decades. This would not necessarily be a bad outcome. I have used this example before, but the evolution of object removal tools in photo editing software is illustrative. There is no longer a need to spend hours cloning part of an image over another area and gently massaging it to look seamless. The more advanced tools we have today allow an experienced photographer to make an image they are happy with in less time, and lower the barriers for newer photographers.

A blurry boundary is crossed when an entire result is achieved through automation. There is a recent Drew Gooden video which, even though not everything resonated with me, I enjoyed.1 There is a part in the conclusion which I wanted to highlight because I found it so clarifying (emphasis mine):

[…] There’s so many tools along the way that help you streamline the process of getting from an idea to a finished product. But, at a certain point, if “the tool” is just doing everything for you, you are not an artist. You just described what you wanted to make, and asked a computer to make it for you.

You’re also not learning anything this way. Part of what makes art special is that it’s difficult to make, even with all the tools right in front of you. It takes practice, it takes skill, and every time you do it, you expand on that skill. […] Generative A.I. is only about the end product, but it won’t teach you anything about the process it would take to get there.

This gets at the question of whether A.I. is more often a product or a feature — the answer to which, I think, is both, just not in a way that is equally useful. Gooden shows an X thread in which Jamian Gerard told Luma to convert the “Abbey Road” cover to video. Even though the results are poor, I think it is impressive that a computer can do anything like this. It is a tech demo; a more practical application can be found in something like the smooth slow motion feature in the latest release of Final Cut Pro.

“Generative A.I. is only about the end product” is a great summary of the emphasis we put on satisfying conclusions instead of necessary rote procedure. I cook dinner almost every night. (I recognize this metaphor might not land with everyone due to time constraints, food availability, and physical limitations, but stick with me.) I feel lucky that I enjoy cooking, but there are certainly days when it is a struggle. It would seem more appealing to type a prompt and make a meal appear using the ingredients I have on hand, if that were possible.

But I think I would be worse off if I did. The times I have cooked while already exhausted have increased my capacity for what I can do under pressure, and lowered my self-imposed barriers. These meals have improved my ability to cook more elaborate dishes when I have more time and energy, just as those more complicated meals also make me a better cook.2

These dynamics show up in lots of other forms of functional creative expression. Plenty of writing is not particularly artistic, but the mental muscle exercised by trying to get ideas into legible words is also useful when you are trying to produce works with more personality. This is true for programming, and for visual design, and for coordinating an outfit — any number of things which are sometimes individually expressive, and other times utilitarian.

This boundary only exists in these expressive forms. Nobody, really, mourns the replacement of cheques with instant transfers. We do not get better at paying our bills no matter which form they take. But we do get better at all of the things above by practicing them even when we do not want to, and when we get little creative satisfaction from the result.

It is dismaying that so many A.I. product demos show how these tools can be used to circumvent this entire process. I do not know if that is how they will actually be used. There are plenty of accomplished artists using A.I. to augment their practice, like Sougwen Chen, Anna Ridler, and Rob Sheridan. Writers and programmers are using generative products every day as tools, but they must have some fundamental knowledge to make A.I. work in their favour.

Stock photography is still photography. Stock music is still music, even if nobody’s favourite song is “Inspiring Corporate Advertising Tech Intro Promo Business Infographics Presentation”. (No judgement if that is your jam, though.) A rushed pantry pasta is still nourishment. A jingle for an insurance commercial could be practice for a successful music career. A.I. should just be a tool — something to develop creativity, not to replace it.


  1. There are also some factual errors. At least one of the supposed Google Gemini answers he showed onscreen was faked, and Adobe’s standard stock license is less expensive than the $80 “Extended” license Gooden references. ↥︎

  2. I am wary of using an example like cooking because it implies a whole set of correlative arguments which are unkind and judgemental toward people who do not or cannot cook. I do not want to provide kindling for these positions. ↥︎

Tuesdays used to be my favourite day of the week because it was the day when a bunch of new music would typically be released. That is no longer the case — but only because music releases were moved to Fridays. The music itself is as good as ever.

It remains a frustratingly recurring theme that today’s music is garbage, and yesterday’s music is gold. Music today is “just noise”, according to generations of people who insist the music in their day — probably when they were in their twenties — was better. Today’s peddler of this dead horse trope is YouTuber and record producer Rick Beato, who published a video about the two “real reason[s] music is getting worse”: it is “too easy to make”, and “too easy to consume”.

Beato is right that new technologies have been steadily making it easier to make music and listen to it, but he is wrong that they are destructive. “Good” and “bad” are subjective, to be sure, but it seems self-evident that more good music is being made now than ever before, simply because people are so easily able to translate an idea to published work. Any artist can experiment with any creative vision. It is an amazing time.

This also suggests more bad music is being made, but who cares? Bad music has existed forever. The lack of a gatekeeper now means it gains wider distribution, but that has more benefits than problems. Maybe some people will stumble across it and recognize the potential in a burgeoning artist.

Aside from the lack of distribution avenues historically, the main reason we do not remember bad records is because they are no longer played. This does not mean unpopular music is inherently bad, of course, only that time sifts things we generally like from things we do not.

Perhaps one’s definition of “good” includes how influential a work of art turns out to be. Again, it seems difficult to argue modern music is not as influential as that which has preceded it. It may be too early to tell what will prove its influence, to be sure, but we have relatively recent examples which indicate otherwise. The Weeknd spawned an entire genre of moody R&B imitators from a series of freely distributed mixtapes. The entire genre of trap spread to the world from its origins in Atlanta, to the extent that its unique characteristics have underpinned much of pop music for a decade. Many of its biggest artists made their name on DatPiff. Just two of countless examples.

If you actually love music for all that it can be, you are spoiled for choice today. If anything, that is the biggest problem with music today: there is so much of it and it can be overwhelming. The ease with which music can be made does not necessarily make it worse, but it does make it more difficult if you want to try as much of it as you can. I have only a small amount of sympathy when Beato laments how the ease of streaming services devalues artistry because of how difficult it can be to spend time with any one album when there is another to listen to, and then another. But anyone can make the decision to hold the queue and embrace a single release. (And if artistry is something we are concerned with, why call it “consuming”? A good record is not something I want to chug down.)

We can try any music we like these days. We can explore old releases just as easily as we can see what has just been published. We can and should take a chance on genres we had never considered before. We can explore new recordings of jazz and classical compositions. Every Friday is a feast for the ears — if you want it to be. If you really like music, you are living in the luckiest time. I know that is how I feel. I just wish artists could get paid an appropriate amount for how much they contribute to the best parts of life.

Since 2022, the European Parliament has been trying to pass legislation requiring digital service providers to scan for and report CSAM as it passes through their services.

Giacomo Zandonini, Apostolis Fotiadis, and Luděk Stavinoha, Balkan Insight, with a good summary in September:

Welcomed by some child welfare organisations, the regulation has nevertheless been met with alarm from privacy advocates and tech specialists who say it will unleash a massive new surveillance system and threaten the use of end-to-end encryption, currently the ultimate way to secure digital communications from prying eyes.

[…]

The proposed regulation is excessively “influenced by companies pretending to be NGOs but acting more like tech companies”, said Arda Gerkens, former director of Europe’s oldest hotline for reporting online CSAM.

This is going to require a little back-and-forth, and I will pick up the story with quotations from Matthew Green’s introductory remarks to a panel before the European Internet Services Providers Association in March 2023:

The only serious proposal that has attempted to address this technical challenge was devised — and then subsequently abandoned — by Apple in 2021. That proposal aimed only at detecting known content using a perceptual hash function. The company proposed to use advanced cryptography to “split” the evaluation of hash comparisons between the user’s device and Apple’s servers: this ensured that the device never received a readable copy of the hash database.

[…]

The Commission’s Impact Assessment deems the Apple approach to be a success, and does not grapple with this failure. I assure you that this is not how it is viewed within the technical community, and likely not within Apple itself. One of the most capable technology firms in the world threw all their knowledge against this problem, and were embarrassed by a group of hackers: essentially before the ink was dry on their proposal.

Daniel Boffey, the Guardian, in May 2023:

Now leaked internal EU legal advice, which was presented to diplomats from the bloc’s member states on 27 April and has been seen by the Guardian, raises significant doubts about the lawfulness of the regulation unveiled by the European Commission in May last year.

The European Parliament in a November 2023 press release:

In the adopted text, MEPs excluded end-to-end encryption from the scope of the detection orders to guarantee that all users’ communications are secure and confidential. Providers would be able to choose which technologies to use as long as they comply with the strong safeguards foreseen in the law, and subject to an independent, public audit of these technologies.

Joseph Menn, Washington Post, in March, reporting on the results of a European court ruling:

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]

The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

This is not directly about the proposed CSAM measures, but it is precedent for European regulators to follow.

Natasha Lomas, TechCrunch, this week:

The most recent Council proposal, which was put forward in May under the Belgian presidency, includes a requirement that “providers of interpersonal communications services” (aka messaging apps) install and operate what the draft text describes as “technologies for upload moderation”, per a text published by Netzpolitik.

Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”

Meredith Whittaker, CEO of Signal, issued a PDF statement criticizing the proposal:

Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games. They’ve come back to the table with the same idea under a new label. Instead of using the previous term “client-side scanning,” they’ve rebranded and are now calling it “upload moderation.” Some are claiming that “upload moderation” does not undermine encryption because it happens before your message or video is encrypted. This is untrue.

Patrick Breyer, of Germany’s Pirate Party:

Only Germany, Luxembourg, the Netherlands, Austria and Poland are relatively clear that they will not support the proposal, but this is not sufficient for a “blocking minority”.

Ella Jakubowska on X:

The exact quote from [Věra Jourová] the Commissioner for Values & Transparency: “the Commission proposed the method or the rule that even encrypted messaging can be broken for the sake of better protecting children”

Věra Jourová on X, some time later:

Let me clarify one thing about our draft law to detect online child sexual abuse #CSAM.

Our proposal is not breaking encryption. Our proposal preserves privacy and any measures taken need to be in line with EU privacy laws.

Matthew Green on X:

Coming back to the initial question: does installing surveillance software on every phone “break encryption”? The scientist in me squirms at the question. But if we rephrase as “does this proposal undermine and break the *protections offered by encryption*”: absolutely yes.

Maïthé Chini, the Brussels Times:

It was known that the qualified majority required to approve the proposal would be very small, particularly following the harsh criticism of privacy experts on Wednesday and Thursday.

[…]

“[On Thursday morning], it soon became clear that the required qualified majority would just not be met. The Presidency therefore decided to withdraw the item from today’s agenda, and to continue the consultations in a serene atmosphere,” a Belgian EU Presidency source told The Brussels Times.

That is a truncated history of this piece of legislation: regulators want platform operators to detect and report CSAM; platforms and experts say that will conflict with security and privacy promises, even if media is scanned prior to encryption. This proposal may be specific to the E.U., but you can find similar plans to curtail or invalidate end-to-end encryption around the world:

I selected English-speaking areas because that is the language I can read, but I am sure there are more regions facing threats of their own.

We are not served by pretending this threat is limited to any specific geography. The benefits of end-to-end encryption are being threatened globally. The E.U.’s attempt may have been pushed aside for now, but another will rise somewhere else, and then another. It is up to civil rights organizations everywhere to continue arguing for the necessary privacy and security protections offered by end-to-end encryption.

After Robb Knight found — and Wired confirmed — that Perplexity summarizes websites which have followed its opt-out instructions, I noticed a number of people making a similar claim: this is nothing but a big misunderstanding of the function of controls like robots.txt. A Hacker News comment thread contains several versions of these two arguments:

  • robots.txt is only supposed to affect automated crawling of a website, not explicit retrieval of an individual page.

  • It is fair to use a user agent string which does not disclose automated access because this request was not automated per se, as the user explicitly requested a particular page.

That is, publishers should expect the controls provided by Perplexity to apply only to its indexing bot, not a user-initiated page request. At the risk of being the kind of person who replies to pseudonymous comments on Hacker News, I think this is an unnecessarily absolutist reading of how site owners expect the Robots Exclusion Protocol to work.

To be fair, that protocol was published in 1994, well before anyone had to worry about websites being used as fodder for large language model training. And, to be fairer still, it has never been formalized. A spec was only recently proposed in September 2022. It has so far been entirely voluntary, but the draft standard proposes a more rigid expectation that rules will be followed. Yet it does not differentiate between different types of crawlers — those for search, others for archival purposes, and ones which power the surveillance economy — and contains no mention of A.I. bots. Any non-human means of access is expected to comply.
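For a sense of how a compliant crawler is expected to behave under that draft standard, here is a minimal sketch using Python's standard-library robots.txt parser. The rules and URLs are hypothetical, and nothing about the protocol enforces them; a bot simply chooses whether to check.

```python
from urllib import robotparser

# A hypothetical robots.txt, as a publisher might write it. The draft
# standard expects any automated client, whether for search, archival,
# or A.I. training, to honour these rules.
ROBOTS_TXT = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A well-behaved bot asks before fetching; the protocol itself is voluntary.
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```

Note that the parser has no concept of crawler purpose: a search indexer, an archiver, and an A.I. scraper all match the same `User-agent` records.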

The question seems to be whether what Perplexity is doing ought to be considered crawling. It is, after all, responding to a direct retrieval request from a user. This is subtly different from how a user might search Google for a URL, in which case they are asking whether that site is in the search engine’s existing index. Perplexity is ostensibly following real-time commands: go fetch this webpage and tell me about it.

But it clearly is also crawling in a more traditional sense. The New York Times and Wired both disallow PerplexityBot, yet I was able to ask it to summarize a set of recent stories from both publications. At the time of writing, the Wired summary is about seventeen hours out of date, and the Times summary is about two days old. Neither publication has changed its robots.txt directives recently; they were both blocking Perplexity last week, and they are blocking it today. Perplexity is not fetching these sites in real-time as a human or web browser would. It appears to be scraping sites which have explicitly said that is something they do not want.

Perplexity should be following those rules and it is shameful it is not. But what if you ask for a real-time summary of a particular page, as Knight did? Is that something which should be identifiable by a publisher as a request from Perplexity, or from the user?

The Robots Exclusion Protocol may be voluntary, but a more robust method is to block bots by detecting their user agent string. Instead of expecting visitors to abide by your “No Homers Club” sign, you are checking IDs. But these strings are unreliable and there are often good reasons for evading user agent sniffing.
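A sketch of what that "checking IDs" approach looks like server-side. The bot names here are illustrative assumptions, not any publisher's real configuration, and the central weakness is visible in the logic itself: it only works when a bot tells the truth about who it is.

```python
# Hypothetical denylist of crawler user agent substrings.
BLOCKED_AGENTS = ("PerplexityBot", "GPTBot", "CCBot")

def should_block(user_agent: str) -> bool:
    """Return True if the User-Agent header names a blocked bot.

    A client sending a generic browser string sails straight through,
    which is exactly the controversy described above.
    """
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in BLOCKED_AGENTS)

print(should_block("Mozilla/5.0 (compatible; PerplexityBot/1.0)"))   # True
print(should_block("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15)")) # False
```

IP range checks are the sturdier complement to this, but both depend on the operator publishing accurate information about its bots.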

Perplexity says its bot is identifiable by both its user agent and the IP addresses from which it operates. Remember: this whole controversy is that it sometimes discloses neither, making it impossible to differentiate Perplexity-originating traffic from a real human being — and there is a difference.

A webpage being rendered through a web browser is subject to the quirks and oddities of that particular environment — ad blockers, Reader mode, screen readers, user style sheets, and the like — but there is a standard. A webpage being rendered through Perplexity is actually being reinterpreted and modified. The original text of the page is transformed through automated means about which neither the reader nor the publisher has any understanding.

This is true even if you ask it for a direct quote. I asked for a full paragraph of a recent article and it mashed together two separate sections. They are direct quotes, to be sure, but the article must have been interpreted to generate this excerpt.1

It is simply not the case that requesting a webpage through Perplexity is akin to accessing the page via a web browser. It is more like automated traffic — even if it is being guided by a real person.

The existing mechanisms for restricting the use of bots on our websites are imperfect and limited. Yet they are the only tools we have right now to opt out of participating in A.I. services if that is something one wishes to do, short of putting pages or an entire site behind a user name and password. It is completely reasonable for someone to assume their signal of objection to any robotic traffic ought to be respected by legitimate businesses. The absolute least Perplexity can do is to respect those objections by clearly and consistently identifying itself, and by excluding websites which have indicated they do not want to be accessed by these means.


  1. I am not presently blocking Perplexity, and my argument is not related to its ability to access the article. I am only illustrating how it reinterprets text. ↥︎

If you had just been looking at the headlines from major research organizations, you would see a lack of confidence from the public in big business, technology companies included. For years, poll after poll from around the world has found high levels of distrust in their influence, handling of private data, and new developments.

If these corporations were at all worried about this, they are not much showing it in their products — particularly the A.I. stuff they have been shipping. There has been little attempt at abating last year’s trust crisis. Google decided to launch overconfident summaries for a variety of search queries. Far from helping to sift through all that has ever been published on the web to mash together a representative summary, it was instead an embarrassing mess that made the company look ill prepared for the concept of satire. Microsoft announced a product which will record and interpret everything you do and see on your computer, and pitched it as a good thing.

Can any of them see how this looks? If not — if they really are that unaware — why should we turn to them to fill gaps and needs in society? I certainly would not wish to indulge businesses which see themselves as entirely separate from the world.

It is hard to imagine they do not, though. Sundar Pichai, in an interview with Nilay Patel, recognised there were circumstances in which an A.I. summary would be inappropriate, and cautioned that the company still considers it a work in progress. Yet Google still turned it on by default in the U.S. with plans to expand worldwide this year.

Microsoft has responded to criticism by promising Recall will now be a feature users must opt into, rather than something they must turn off after updating Windows. The company also says there are more security protections for Recall data than originally promised but, based on its track record, maybe do not get too excited yet.

These product introductions all look like hubris. Arrogance, really — recognition of the significant power these corporations wield and the lack of competition they face. Google can poison its search engine because where else are most people going to go? How many people would turn off Recall, something which requires foreknowledge of its existence, under Microsoft’s original rollout strategy?

It is more or less an admission they are all comfortable gambling with their customers’ trust to further the perception they are at the forefront of the new hotness.

None of this is a judgement on the usefulness of these features or their social impact. I remain perplexed by the combination of a crisis of trust in new technologies, and the unwillingness of the companies responsible to engage with the public. There seems to be little attempt at persuasion. Instead, we are told to get on board because this rocket ship is taking off with or without us. Concerned? Too bad: the rocket ship is shaped like a giant middle finger.

What I hope we see Monday from Apple — a company which has portrayed itself as more careful and practical than many of its contemporaries — is a recognition of how this feels from outside the industry. Expect “A.I.” to be repeated in the presentation until you are sick of those two letters; investors are going to eat it up. When normal people update their phones in September, though, they should not feel like they are being bullied into accepting our A.I. future.

People need to be given time to adjust and learn. If the polls are representative, very few people trust giant corporations to get this right — understandably — yet these tech companies seem to believe we are as enthusiastic about every change they make as they are. Sorry, we are not, no matter how big a smile a company representative is wearing when they talk about it. Investors may not be patient but many of the rest of us need time.

Apple finished naming what it — well, its “team of experts alongside a select group of artists […] songwriters, producers, and industry professionals” — believes are the hundred best albums of all time. Like pretty much every list of the type, it is overwhelmingly Anglocentric, there are obvious picks, surprise appearances good and bad, and snubs.

I am surprised the publication of this list has generated as much attention as it has. There is a whole Wall Street Journal article with more information about how it was put together, a Slate thinkpiece arguing this ranking “proves [Apple has] lost its way”, and a Variety article claiming it is more-or-less “rage bait”.

Frankly, none of this feels sincere. Not Apple’s list, and not the coverage treating it as meaningful art criticism. I am sure there are people who worked hard on it — Apple told the Journal “about 250” — and truly believe their rating carries weight. But it is fluff.

Make no mistake: this is a promotional exercise for Apple Music more than it is criticism. Sure, most lists of this type are also marketing for publications like Rolling Stone and Pitchfork and NME. Yet, for how tepid the opinions of each outlet often are, they have each given out bad reviews. We can therefore infer they have specific tastes and ideas about what separates great art from terrible art.

Apple has never said a record is bad. It has never made you question whether the artist is trying their best. It has never presented criticism so thorough it makes you wince on behalf of the people who created the album.

Perhaps the latter is a poor metric. After Steve Jobs’ death came a river of articles questioning the internal culture he fostered, with several calling him an “asshole”. But that is mixing up a mean streak and a critical eye — Jobs, apparently, had both. A fair critic can use their words to dismantle an entire project and explain why it works or, just as important, why it does not. The latter can hurt; ask any creative person who has been on the receiving end. Yet exploring why something is not good enough is an important skill to develop as both a critic and a listener.

Dan Brooks, Defector:

There has been a lot of discussion about what music criticism is for since streaming reduced the cost of listening to new songs to basically zero. The conceit is that before everything was free, the function of criticism was to tell listeners which albums to buy, but I don’t think that was ever it. The function of criticism is and has always been to complicate our sense of beauty. Good criticism of music we love — or, occasionally, really hate — increases the dimensions and therefore the volume of feeling. It exercises that part of ourselves which responds to art, making it stronger.

There are huge problems with the way music has historically been critiqued, most often along racial and cultural lines. There are still problems. We will always disagree about the fairness of music reviews and reviewers.

Apple’s list has nothing to do with any of that. It does not interrogate which albums are boring, expressionless, uncreative, derivative, inconsequential, inept, or artistically bankrupt. So why should we trust it to explain what is good? Apple’s ranking of albums lacks substance because it cannot say any of these things. Doing so would be a terrible idea for the company and for artists.

It is beyond my understanding why anyone seems to be under the impression this list is anything more than a business reminding you it operates a music streaming platform to which you can subscribe for eleven dollars per month.


Speaking of the app — some time after I complained there was no way in Apple Music to view the list, Apple added a full section, which I found via foursliced on Threads. It is actually not bad. There are stories about each album, all the reveal episodes from the radio show, and interviews.

You will note something missing, however: a way to play a given album. That is, one cannot visit this page in Apple Music, see an album on the list they are interested in, and simply tap to hear it. There are play buttons on the website and, if you are signed in with your Apple Music account, you can add them to your library. But I cannot find a way to do any of this from within the app.

Benjamin Mayo found a list, but I cannot find it through search or simply by browsing. Why is this not a more obvious feature? It makes me feel like a dummy.

Finally. The government of the United States finally passed a law that would allow it to force the sale of, or ban, software and websites from specific countries of concern. The target is obviously TikTok — it says so right in its text — but crafty lawmakers have tried to add enough caveats and clauses and qualifiers to, they hope, avoid it being characterized as a bill of attainder, and to permit future uses. This law is very bad. It is an ineffective and illiberal position that abandons democratic values over, effectively, a single app. Unfortunately, TikTok panic is a very popular position in the U.S. and, also, here in Canada.

The adversaries the U.S. is worried about are the “covered nations” defined in 2018 to restrict the acquisition by the U.S. of key military materials from four countries: China, Iran, North Korea, and Russia. The idea behind this definition was that it was too risky to procure magnets and other important components of, say, missiles and drones from a nation the U.S. considers an enemy, lest those parts be compromised in some way. So the U.S. wrote down its least favourite countries for military purposes, and that list is now being used in a bill intended to limit TikTok’s influence.

According to the law, it is illegal for any U.S. company to make available TikTok and any other ByteDance-owned app — or any app or website deemed a “foreign adversary controlled application” — to a user in the U.S. after about a year unless it is sold to a company outside the covered countries, and with no more than twenty percent ownership stake from any combination of entities in those four named countries. Theoretically, the parent company could be based nearly anywhere in the world; practically, if there is a buyer, it will likely be from the U.S. because of TikTok’s size. Also, the law specifically exempts e-commerce apps for some reason.

This could be interpreted as either creating an isolated version specifically for U.S. users or, as I read it, moving the global TikTok platform to a separate organization not connected to ByteDance or China.1 ByteDance’s ownership is messy, though mostly U.S.-based, but politicians worried about its Chinese origin have had enough, to the point they are acting with uncharacteristic vigour. The logic seems to be that it is necessary for the U.S. government to influence and restrict speech in order to prevent other countries from influencing or restricting speech in ways the U.S. thinks are harmful. That is, the problem is not so much that TikTok is foreign-owned, but that it has ownership ties to a country often antithetical to U.S. interests. TikTok’s popularity might, it would seem, be bad for reasons of espionage or influence — or both.

Power

So far, I have focused on the U.S. because it is the country that has taken the first step to require non-Chinese control over TikTok — at least for U.S. users but, due to the scale of its influence, possibly worldwide. It could force a business to entirely change its ownership structure. So it may look funny for a Canadian to explain their views of what the U.S. ought to do in a case of foreign political interference. This is a matter of relevance in Canada as well. Our federal government raised the alarm on “hostile state-sponsored or influenced actors” influencing Canadian media and said it had ordered a security review of TikTok. There was recently a lengthy public inquiry into interference in Canadian elections, with a special focus on China, Russia, and India. Clearly, the popularity of a Chinese application is, in the eyes of these officials, a threat.

Yet it is very hard not to see the rush to kneecap TikTok’s success as a protectionist reaction to shaking the U.S. dominance of consumer technologies, as convincingly expressed by Paris Marx at Disconnect:

In Western discourses, China’s internet policies are often positioned solely as attempts to limit the freedoms of Chinese people — and that can be part of the motivation — but it’s a politically convenient explanation for Western governments that ignores the more important economic dimension of its protectionist approach. Chinese tech is the main competitor to Silicon Valley’s dominance today because China limited the ability of US tech to take over the Chinese market, similar to how Japan and South Korea protected their automotive and electronics industries in the decades after World War II. That gave domestic firms the time they needed to develop into rivals that could compete not just within China, but internationally as well. And that’s exactly why the United States is so focused not just on China’s rising power, but how its tech companies are cutting into the global market share of US tech giants.

This seems like one reason why the U.S. has so aggressively pursued a divestment or ban since TikTok’s explosive growth in 2019 and 2020. On its face it is similar to some reasons why the E.U. has regulated U.S. businesses that have, it argues, disadvantaged European competitors, and why Canadian officials have tried to boost local publications that have seen their ad revenue captured by U.S. firms. Some lawmakers make it easy to argue it is a purely xenophobic reaction, like Senator Tom Cotton, who spent an exhausting minute questioning TikTok’s Singaporean CEO Shou Zi Chew about where he is really from. But I do not think it is entirely a protectionist racket.

A mistake I have made in the past — and which I have seen some continue to make — is assuming those who are in favour of legislating against TikTok are opposed to the kinds of dirty tricks it is accused of on principle. This is false. Many of these same people would be all too happy to allow U.S. tech companies to do exactly the same. I think the most generous version of this argument is one in which it is framed as a dispute between the U.S. and its democratic allies, and anxieties about the government of China — ByteDance is necessarily connected to the autocratic state — spreading messaging that does not align with democratic government interests. This is why you see few attempts to reconcile common objections over TikTok with the quite similar behaviours of U.S. corporations, government arms, and intelligence agencies. To wit: U.S.-based social networks also suggest posts with opaque math which could, by the same logic, influence elections in other countries. They also collect enormous amounts of personal data that is routinely wiretapped, and are required to secretly cooperate with intelligence agencies. The U.S. is not authoritarian as China is, but the behaviours in question are not unique to authoritarians. Those specific actions are unfortunately not what the U.S. government is objecting to. What it is disputing, in a most generous reading, is a specifically antidemocratic government gaining any kind of influence.

Espionage and Influence

It is easiest to start by dismissing the espionage concerns because they are mostly misguided. The peek into Americans’ lives offered by TikTok is no greater than that offered by countless ad networks and data brokers — something the U.S. is also trying to restrict more effectively through a comprehensive federal privacy law. So long as online advertising is dominated by a privacy-hostile infrastructure, adversaries will be able to take advantage of it. If the goal is to restrict opportunities for spying on people, it is idiotic to pass legislation against TikTok specifically instead of limiting the data industry.

But the charge of influence seems to have more to it, even though nobody has yet shown that TikTok is warping users’ minds in a (presumably) pro-China direction. Some U.S. lawmakers described its danger as “theoretical”; others seem positively terrified. There are a few different levels to this concern: are TikTok users uniquely subjected to Chinese government propaganda? Is TikTok moderated in a way that boosts or buries videos to align with Chinese government views? Finally, even if both of these things are true, should the U.S. be able to revoke access to software if it promotes ideologies or viewpoints — and perhaps explicit propaganda? As we will see, it looks like TikTok sometimes tilts in ways beneficial — or, at least, less damaging — to Chinese government interests, but there is no evidence of overt government manipulation and, even if there were, it is objectionable to require it to be owned by a different company or ban it.

The main culprit, it seems, is TikTok’s “uncannily good” For You feed that feels as though it “reads your mind”. Instead of users telling TikTok what they want to see, it just begins showing videos and, as people use the app, it figures out what they are interested in. How it does this is not actually that mysterious. A 2021 Wall Street Journal investigation found recommendations were made mostly based on how long you spent watching each video. Deliberate actions — like sharing and liking — play a role, sure, but if you scroll past videos of people and spend more time with a video of a dog, it learns you want dog videos.

That is not so controversial compared to the opacity in how TikTok decides what specific videos are displayed and which ones are not. Why is this particular dog video in a user’s feed and not another similar one? Why is it promoting videos reflecting a particular political viewpoint or — so a popular narrative goes — burying those with viewpoints uncomfortable for its Chinese parent company? The mysterious nature of an algorithmic feed is the kind of thing into which you can read a story of your choosing. A whole bunch of X users are permanently convinced they are being “shadow banned” whenever a particular tweet does not get as many likes and retweets as they believe it deserved, for example, and were salivating at the thought of the company releasing its ranking code to solve a nonexistent mystery. There is a whole industry of people who say they can get your website to Google’s first page for a wide range of queries using techniques that are a mix of plausible and utterly ridiculous. Opaque algorithms make people believe in magic. An alarmist reaction to TikTok’s feed should be expected particularly as it was the first popular app designed around entirely recommended material instead of personal or professional connections. This has now been widely copied.

The mystery of that feed is a discussion which seems to have been ongoing basically since the 2018 merger of Musical.ly and TikTok, escalating rapidly to calls for it to be separated from its Chinese owner or banned altogether. In 2020, the White House attempted to force a sale by executive order. In response, TikTok created a plan to spin off an independent entity, but nothing materialized from this tense period.

March 2023 brought a renewed effort to divest or ban the platform. Chew, TikTok’s CEO, was called to a U.S. Congressional hearing and questioned for hours, to little effect. During that hearing, a report prepared for the Australian government was cited by some of the lawmakers, and I think it is a telling document. It is about eighty pages long — excluding its table of contents, appendices, and citations — and shows several examples of Chinese government influence on other products made by ByteDance. However, the authors found no such manipulation on TikTok itself, leading them to conclude:

In our view, ByteDance has demonstrated sufficient capability, intent, and precedent in promoting Party propaganda on its Chinese platforms to generate material risk that they could do the same on TikTok.

“They could do the same”, emphasis mine. In other words, if they had found TikTok was boosting topics and videos on behalf of the Chinese government, they would have said so — so they did not. The closest thing I could find to a covert propaganda campaign on TikTok anywhere in this report is this:

The company [ByteDance] tried to do the same on TikTok, too: In June 2022, Bloomberg reported that a Chinese government entity responsible for public relations attempted to open a stealth account on TikTok targeting Western audiences with propaganda”. [sic]

If we follow the Bloomberg citation — shown in the report as a link to the mysterious Archive.today site — the fuller context of the article by Olivia Solon disproves the impression you might get from reading the report:

In an April 2020 message addressed to Elizabeth Kanter, TikTok’s head of government relations for the UK, Ireland, Netherlands and Israel, a colleague flagged a “Chinese government entity that’s interested in joining TikTok but would not want to be openly seen as a government account as the main purpose is for promoting content that showcase the best side of China (some sort of propaganda).”

The messages indicate that some of ByteDance’s most senior government relations team, including Kanter and US-based Erich Andersen, Global Head of Corporate Affairs and General Counsel, discussed the matter internally but pushed back on the request, which they described as “sensitive.” TikTok used the incident to spark an internal discussion about other sensitive requests, the messages state.

This is the opposite conclusion to how this story was set up in the report. Chinese government public relations wanted to set up a TikTok account without any visible state connection and, when TikTok management found out about this, it said no. This Bloomberg article makes TikTok look good in the face of government pressure, not like it capitulates. Yes, it is worth being skeptical of this reporting. Yet if TikTok acquiesced to the government’s demands, surely the report would provide some evidence.

While this report for the Australian Senate does not show direct platform manipulation, it does present plenty of examples where TikTok seems biased or self-censoring. Its authors cite stories from the Washington Post and Vice finding posts containing hashtags like #HongKong and #FreeXinjiang returned results favourable to the official Chinese government position. Sometimes, related posts did not appear in search results, which is not unique to TikTok — platforms regularly use crude search term filtering to restrict discovery for lots of reasons. I would not be surprised if bias or self-censorship were to blame for TikTok minimizing the visibility of posts critical of the subjugation of Uyghurs in China. However, it is basically routine for every social media product to be accused of suppression. The Markup found different types of posts on Instagram, for example, had captions altered or no longer appeared in search results, though it is unclear why. Meta said it was a bug, an explanation also offered frequently by TikTok.

The authors of the Australian report conducted a limited quasi-study comparing results for certain topics on TikTok to results on other social networks like Instagram and YouTube, again finding a handful of topics which favoured the government line. But there was no consistent pattern, either. Search results for “China military” on Instagram were, according to the authors, “generally flattering”, and X searches for “PLA” scarcely returned unfavourable posts. Yet results on TikTok for “China human rights”, “Tiananmen”, and “Uyghur” were overwhelmingly critical of Chinese official positions.

The Network Contagion Research Institute published its own report in December 2023, similarly finding disparities between the total number of posts with specific hashtags — like #DalaiLama and #TiananmenSquare — on TikTok and Instagram. However, the study contained some pretty fundamental errors, as pointed out by — and I cannot believe I am citing these losers — the Cato Institute. The study’s authors compared total lifetime posts on each social network and, while they say they expect 1.5–2.0× the posts on Instagram because of its larger user base, they do not factor in how many of those posts could have existed before TikTok was even launched. Furthermore, they assume similar cultures and a similar use of hashtags on each app. But even benign hashtags have ridiculous differences in how often they are used on each platform. There are, as of writing, 55.3 million posts tagged “#ThrowbackThursday” on Instagram compared to 390,000 on TikTok, a ratio of 141:1. If #ThrowbackThursday were part of this study, the disparity on the two platforms would rank similarly to #Tiananmen, one of the greatest in the Institute’s report.

The problem with most of these complaints, as their authors acknowledge, is that there is a known input and a perceived output, but there are oh-so-many unknown variables in the middle. It is impossible to know how much of what we see is a product of intentional censorship, unintentional biases, bugs, side effects of other decisions, or a desire to cultivate a less stressful and more saccharine environment for users. A report by Exovera (PDF) prepared for the U.S.–China Economic and Security Review Commission indicates exactly the latter: “TikTok’s current content moderation strategy […] adheres to a strategy of ‘depoliticization’ (去政治化) and ‘localization’ (本土化) that seeks to downplay politically controversial speech and demobilize populist sentiment”, apparently avoiding “algorithmic optimization in order to promote content that evangelizes China’s culture as well as its economic and political systems” which “is liable to result in backlash”. Meta, on its own platforms, said it would not generally suggest “political” posts to users but did not define exactly what qualifies. It said it limits posts about social issues in response to user demand, though these types of posts have also been difficult to moderate. A difference in which posts are found on each platform for specific search terms is not necessarily reflective of government pressure, deliberate or not. Besides, it is not as though there is no evidence for straightforward propaganda on TikTok. One just needs to look elsewhere to find it.

Propaganda

The Office of the Director of National Intelligence recently released its annual threat assessment summary (PDF). It is unclassified and has few details, so the only thing it notes about TikTok is “accounts run by a PRC propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022”. It seems likely to me this is a reference to this article in Forbes, though this is a guess as there are no citations. The state-affiliated TikTok account in question — since made private — posted a bunch of news clips which portray the U.S. in an unflattering light. There is a related account, also marked as state-affiliated, which continues to post the same kinds of videos. It has over 33,000 followers, which sounds like a lot, but each post is typically getting only a few hundred views. Some have been viewed thousands of times, others as few as thirteen times as of writing — on a platform with exaggerated engagement numbers. Nonetheless, the conclusion is obvious: these accounts are government propaganda, and TikTok willingly hosts them.

But that is something it has in common with all social media platforms. The Russian RT News network and China’s People’s Daily newspaper have X and Facebook accounts with follower counts in the millions. Until recently, the North Korean newspaper Uriminzokkiri operated accounts on Instagram and X. It and other North Korean state-controlled media used to have YouTube channels, too, but they were shut down by YouTube in 2017 — a move that was protested by academics studying the regime’s activities. The irony of U.S.-based platforms helping to disseminate propaganda from the country’s adversaries is that it can be useful for understanding them better. Merely making propaganda available — even promoting it — is both a risk and a benefit of generous speech permissions.

The DNI’s unclassified report has no details about whether TikTok is an actual threat, and the FBI has “nothing to add” in response to questions about whether TikTok is currently doing anything untoward. More sensitive information was apparently provided to U.S. lawmakers ahead of their March vote and, though few details have emerged of what, exactly, was said, several were not persuaded by what they heard, including Rep. Sara Jacobs of California:

As a member of both the House Armed Services and House Foreign Affairs Committees, I am keenly aware of the threat that PRC information operations can pose, especially as they relate to our elections. However, after reviewing the intelligence, I do not believe that this bill is the answer to those threats. […] Instead, we need comprehensive data privacy legislation, alongside thoughtful guardrails for social media platforms – whether those platforms are funded by companies in the PRC, Russia, Saudi Arabia, or the United States.

Lawmakers like Rep. Jacobs were an exception among U.S. Congresspersons who, across party lines, were eager to make the case against TikTok. Ultimately, the divest-or-ban bill got wrapped up in a massive and politically popular spending package agreed to by both chambers of Congress. Its passage was enthusiastically received by the White House and it was signed into law within hours. Perhaps that outcome is the democratic one since polls so often find people in the U.S. support a sale or ban of TikTok.

I get it: TikTok scoops up private data, suggests posts based on opaque criteria, its moderation appears to be susceptible to biases, and it is a vehicle for propaganda. But you could replace “TikTok” in that sentence with any other mainstream social network and it would be just as true, albeit less scary to U.S. allies on its face.

A Principled Objection

Forcing TikTok to change its ownership structure, whether worldwide or only for a U.S. audience, is a betrayal of liberal democratic principles. To borrow from Jon Stewart, “if you don’t stick to your values when they’re being tested, they’re not values, they’re hobbies”. It is not surprising that a Canadian intelligence analysis specifically pointed out how those very same values are being taken advantage of by bad actors. This is not new. It is true of basically all positions hostile to democracy — from domestic nationalist groups in Canada and the U.S., to those which originate elsewhere.

Julian G. Ku, for China File, offered a seemingly reasonable rebuttal to this line of thinking:

This argument, while superficially appealing, is wrong. For well over one hundred years, U.S. law has blocked foreign (not just Chinese) control of certain crucial U.S. electronic media. The Protect Act [sic] fits comfortably within this long tradition.

Yet this counterargument falls apart both in its details and in its further consequences. As Martin Peers writes at the Information, the U.S. does not prohibit all foreign ownership of media. And governing the internet like public airwaves gets way more complicated if you stretch it any further. Canada has broadcasting laws, too, and it is not alone. Should every country begin requiring that social media platforms comply with laws designed for ownership of broadcast media? Does TikTok need disconnected local versions of its product in each country in which it operates? This either fundamentally upsets the promise of the internet, or it mandates the use of protocols instead of platforms.

It also looks hypocritical. Countries with a more authoritarian bent and which openly censor the web have responded to even modest U.S. speech rules with mockery. When RT America — technically a U.S. company with Russian funding — was required to register as a foreign agent, its editor-in-chief sarcastically applauded U.S. free speech standards. The response from Chinese government officials and media outlets to the proposed TikTok ban has been similarly scoffing. Perhaps U.S. lawmakers are unconcerned about the reception of their policies by adversarial states, but it is an indicator of how these policies are being portrayed in those countries: a real-life “we are not so different, you and I” setup which, while falsely equivalent, makes it easy for authoritarian states to claim that democracies have no values and cannot work. Unless we want to contribute to the fracturing of the internet — please, no — we cannot govern social media platforms by mirroring policies we ostensibly find repellant.

The way the government of China seeks to shape the global narrative is understandably concerning given its poor track record on speech freedoms. An October 2023 U.S. State Department “special report” (PDF) explored several instances in which the Chinese government boosted favourable narratives, buried critical ones, and pressured other countries — sometimes overtly, sometimes quietly. The government of China and associated businesses reportedly use social media to create the impression of dissent toward human rights NGOs, and apparently everything from university funding to new construction is a vector for espionage. On the other hand, China is terribly ineffective in its disinformation campaigns, and many of the cases profiled in that State Department report end in failure for the Chinese government initiative. In Nigeria, a pitch for a technologically oppressive “safe city” was rejected; an interview published in the Jerusalem Post with Taiwan’s foreign minister was not pulled down despite threats from China’s embassy in Israel. The report’s authors speculate about “opportunities for PRC global censorship”. But their only evidence is a “list [maintained by ByteDance] identifying people who were likely blocked or restricted” from using the company’s many platforms, though the authors can only speculate about its purpose.

The problem is that addressing this requires better media literacy and better recognition of propaganda, which is a notoriously daunting problem. We are exposed to an increasingly destabilizing cocktail of facts and fiction, while trust in the experts and institutions that could help us sort it out is declining. Trying to address TikTok as a symptomatic or even causal component of this is frustratingly myopic. This stuff is everywhere.

Also everywhere is corporate propaganda arguing regulations would impede competition in a global business race. I hate to be mean by picking on anyone in particular, but a post from Om Malik has shades of this corporate slant. Malik is generally very good on the issues I care about, but this is not one we appear to agree on. After observing, seemingly impressed, how quickly Chinese officials were able to eject popular messaging apps from the App Store in the country, Malik compares the posture of each country’s tech industries:

As an aside, while China considers all its tech companies (like Bytedance) as part of its national strategic infrastructure, the United States (and its allies) think of Apple and other technology companies as public enemies.

This is laughable. Presumably, Malik is referring to the chillier reception these companies have faced from lawmakers, and antitrust cases against Amazon, Apple, Google, and Meta. But that tougher impression is softened by the U.S. government’s actual behaviour. When the E.U. announced the Digital Markets Act and Digital Services Act, U.S. officials sprang to the defence of tech companies. Even before these cases, Uber expanded in Europe thanks in part to its close relationship with Obama administration officials, as Marx pointed out. The U.S. unquestionably sees its tech industry dominance as a projection of its power around the world, hardly treating those companies as “public enemies”.

Far more explicit were the narratives peddled by lobbyists from Targeted Victory in 2022 about TikTok’s dangers, and American Edge beginning in 2020 about how regulations will cause the U.S. to become uncompetitive with China and allow TikTok to win. Both organizations were paid by Meta to spread those messages; the latter was reportedly founded after a single large contribution from Meta. Restrictions on TikTok would obviously be beneficial to Meta’s business.

If you wanted to boost the industry — and I am not saying Malik is — that is how you would describe the situation: the U.S. is fighting corporations instead of treating them as pals to win this supposed race. It is not the kind of framing one would use to dissuade people from the notion that this is a protectionist dispute over the popularity of TikTok. But it is the kind of thing you hear from corporations via their public relations staff and lobbyists, which trickles into public conversation.

This Is Not a TikTok Problem

TikTok’s divestment would not be unprecedented. The Committee on Foreign Investment in the United States — henceforth, CFIUS, pronounced “siff-ee-us” — demanded, after a 2019 review, that Beijing Kunlun Tech Co Ltd sell Grindr. CFIUS concluded the risk to users’ private data was too great for Chinese ownership given Grindr’s often stigmatized and ostracized user base. After its sale, now safe in U.S. hands, a priest was outed thanks to data Grindr had been selling since before it was acquired by the Chinese firm, and it is being sued for allegedly sharing users’ HIV status with third parties. Also, because it transacts with data brokers, it potentially still leaks users’ private information to Chinese companies (PDF), apparently recreating the very concern that triggered this divestment.

Perhaps there is comfort in Grindr’s owner residing in a country where same-sex marriage is legal rather than in one where it is not. I think that makes a lot of sense, actually. But there remain plenty of problems unaddressed by its sale to a U.S. entity.

Similarly, this U.S. TikTok law does not actually solve potential espionage or influence for a few reasons. The first is that it has not been established that either is an actual problem with TikTok. Surely, if this were something we ought to be concerned about, there would be a pattern of evidence, instead of what we actually have: a fear that something bad could happen and there would be no way to stop it. But many things could happen. I am not opposed to prophylactic laws so long as they address reasonable objections. Yet it is hard not to see this law as an outgrowth of Cold War fears over leaflets of communist rhetoric. It seems completely reasonable to be less concerned about TikTok specifically while harbouring worries about democratic backsliding worldwide and the growing power of authoritarian states like China in international relations.

Second, the Chinese government does not need local ownership if it wants to exert pressure. The world wants the country’s labour and it wants its spending power, so businesses comply without a fight, and often preemptively. Hollywood films are routinely changed before, during, and after production to fit the expectations of state censors in China, a pattern which has been pointed out using the same “Red Dawn” anecdote in story after story after story. (Abram Dylan contrasted this phenomenon with U.S. military cooperation.) Apple is only too happy to acquiesce to the government’s many demands — see the messaging apps issue mentioned earlier — including, reportedly, in its media programming. Microsoft continues to operate Bing in China, and its censorship requirements have occasionally spilled elsewhere. Economic leverage over TikTok may seem different because it does not need access to the Chinese market — TikTok is banned in the country — but perhaps a new owner would be reliant upon China.

Third, the law permits an ownership stake no greater than twenty percent from a combination of any of the “covered nations”. I would be shocked if everyone who is alarmed by TikTok today would be totally cool if its parent company were only, say, nineteen percent owned by a Chinese firm.

If we are worried about bias in algorithmically sorted feeds, there should be transparency around how things are sorted, and more controls for users, including the ability to opt out entirely. If we are worried about privacy, there should be laws governing the collection, storage, use, and sharing of personal information. If ownership ties to certain countries are concerning, there are more direct actions available to monitor behaviour. I am mystified why CFIUS and TikTok apparently abandoned (PDF) a draft agreement that would give U.S.-based entities full access to the company’s systems, software, and staff, and would allow the government to end U.S. access to TikTok at a moment’s notice.

Any of these options would be more productive than this legislation. It is a law which empowers the U.S. president — whoever that may be — to declare the owner of an app with a million users a “covered company” if it is from one of those four nations. And it has been passed. TikTok will head to court to dispute it on free speech grounds, and the U.S. may respond by justifying its national security concerns.

Obviously, the U.S. government has concerns about the connections between TikTok, ByteDance, and the government of China, which have been extensively reported. Rest of World says ByteDance put pressure on TikTok to improve its financial performance and has taken greater control by bringing in management from Douyin. The Wall Street Journal says U.S. user data is not fully separated. And, of course, Emily Baker-White has reported — first for Buzzfeed News and now for Forbes — a litany of stories about TikTok’s many troubling behaviours, including spying on her. TikTok is a well-scrutinized app, and reporters have found conduct that has understandably raised suspicions. But virtually all of these stories focus on data obtained from users, which Chinese agencies could do — and probably are doing — without relying on TikTok. None of them has shown evidence that TikTok’s suggestions are being manipulated at the behest or demand of Chinese officials. The closest they get is an article from Baker-White and Iain Martin which alleges TikTok “served up a flood of ads from Chinese state propaganda outlets”, yet waits until the third-to-last paragraph to acknowledge that “Meta and Google ad libraries show that both platforms continue to promote pro-China narratives through advertising”. All three platforms label state-run media outlets, albeit inconsistently. Meanwhile, U.S.-owned X no longer labels any outlets with state editorial control. It is not clear to me that TikTok would necessarily operate to serve the best interests of the U.S. even if it were owned by some well-financed individual or corporation based there.

For whatever it is worth, I am not particularly tied to the idea that the government of China would not use TikTok as a vehicle for influence. The government of China is clearly involved in propaganda efforts both overt and covert. I do not know how much of my concern is a product of living somewhere with a government and a media environment that focuses intently on the country as particularly hostile, and not necessarily undeservedly. The best version of this argument is one which questions the platform’s possible anti-democratic influence. Yes, there are many versions of this which cross into moral panic territory — a new Red Scare. I have tried to put this in terms of a more reasonable discussion, and one which is not explicitly xenophobic or envious. But even this more even-handed position is not well served by the U.S. law, which was passed without evidence of influence much more substantial than some choice hashtag searches. TikTok’s response to these findings was, among other things, to limit its hashtag comparison tool, which is not a good look. (Meta is doing basically the same by shutting down CrowdTangle.)

I hope this is not the beginning of similar isolationist policies among democracies worldwide, and that my own government takes this opportunity to recognize the actual privacy and security threats at the heart of its own TikTok investigation. Unfortunately, the head of CSIS is really leaning on the espionage angle. For years, the Canadian government has been pitching sorely needed updates to privacy legislation, and it would be better to see real progress made to protect our private data. We can do better than being a perpetual recipient of decisions made by other governments. I mean, we cannot do much — we do not have the power of the U.S. or China or the E.U. — but we can do a little bit in our own polite Canadian way. If we are worried about the influence of these platforms, a good first step would be to strengthen the rights of users. We can do that without trying to govern apps individually, or treating the internet like we do broadcasting.

To put it more bluntly, the way we deal with a possible TikTok problem is by recognizing it is not a TikTok problem. If we care about espionage or foreign influence in elections, we should address those concerns directly instead of focusing on a single app or company that — at worst — may be a medium for those anxieties. These are important problems, and it would be inexcusable to let them get lost in the distraction of whether TikTok is individually blameworthy.


  1. Because this piece has taken me so long to write, a whole bunch of great analyses have been published about this law. I thought the discussion on “Decoder” was a good overview, especially since two of the three panelists are former lawyers. ↥︎

The new A.I. Pin from Humane is, according to those who have used one, bad. Even if you accept the premise of wearing a smart speaker and use it to do a bunch of the stuff for which you used to rely on your phone, it is not good at those things — again, according to those who have used one, and I have not. Why is it apparently controversial to say so plainly?

Cherlynn Low, of Engadget, “cannot recommend anyone spend this much money for the one or two things it does adequately”. David Pierce, of the Verge, says it is “so thoroughly unfinished and so totally broken in so many unacceptable ways”. Arun Maini said the “total amount of effort required to perform any given action is just higher with the Pin”. Raymond Wong, of Inverse, wrote the most optimistic review of all those I saw but, after needing a factory reset of his review unit and then a wind gust blowing it off his shirt, it sounds like he is only convinced by the prospect of future versions, not the “textbook […] first-generation product” he is actually using.

It was Marques Brownlee’s blunt review title — “The Worst Product I’ve Ever Reviewed… For Now” — which caught the attention of a moderately popular Twitter user. The review itself was more like Wong’s, seeing some promise in the concept while dismissing this implementation, but the tweet itself courted controversy. Is the role of a reviewer to be kind to businesses even if their products suck, or is it to be honest?

I do not think it makes sense to dwell on an individual tweet. What is more interesting to me is how generous all of the reviewers have been so far, even while reaching such bleak conclusions. Despite a list of cons including “unreliable” and “slow”, and despite Low saying she burned herself “several times” because it ran so hot, Engadget still gave it a score of 50 out of 100. The Verge gave it a 4 out of 10, and compared the product’s reception to that of the “dumpster fire” Nexus Q of 2012, which it gave a score of 5 out of 10.

That last review is a relevant historic artifact. The Nexus Q was a $300 audio and video receiver which users would, in theory, connect to a television or a Hi-Fi speaker system. It was controlled through software on an Android phone, and its standout feature was collaborative playlists. But the Verge found it had “connectivity problems” with different phones and different Nexus Q review units, videos looked “noticeably poor”, it was under-featured, and different friends adding music to the playback queue worked badly. Aside from the pretty hardware, there simply was no there there, and it was canned before a wide release.

But that was from Google, an established global corporation. Humane may have plenty of ex-Apple staff and lots of venture capital money, but it is still a new company. I have no problem grading on a reasonable curve. But how in the world is the Humane A.I. Pin getting 40% or 50% of a perfect grade when every reviewer seems to think this product is bad and advises people not to buy one?

Even so, all of them seem compelled to give it the kind of tepid score you would expect for something that is flawed, but not a disaster. Some of the problems do not seem to be a direct fault of Humane; they are a consequence of the technological order. But that does not justify spending $700 plus a $24 per month subscription which you will need to keep paying in perpetuity to prevent your A.I. Pin from becoming a fridge magnet.

Maybe this is just a problem with trying to assign numerical scores. I have repeatedly complained about this because I think it sends mixed messages. What people need to know is whether something is worth buying, which comes down to two factors: whether it addresses an actual problem, and whether it is effective at solving that problem. It appears the answer to the first is “maybe”, and the answer to the second is “hell no”. It does not matter how nice the hardware may be, or how interesting the laser-projected screen is. It apparently burns you while you barely use it.

In that light, giving this product an even tepid score is misleading. It is not respectful of potential buyers nor of the team which helped make it. It seems there are many smart people at Humane who thought they had a very good idea, and many people were intrigued. If a reviewer’s experience was poor, it is not cruel for them to be honest and say that it is, in a word, bad.

In the 1970s and 1980s, in-house researchers at Exxon began to understand how crude oil and its derivatives were leading to environmental devastation. They were among the first to comprehensively connect the use of their company’s core products to the warming of the Earth, and they predicted some of the harms which would result. But their research was treated as mere suggestion by Exxon because the obvious legislative response would “alter profoundly the strategic direction of the energy industry”. It would be a business nightmare.

Forty years later, the world has concluded its warmest year in recorded history by starting another. Perhaps we would have been more able to act if businesses like Exxon had equivocated less all these years. Instead, they publicly created confusion and kept lawmakers under-informed. The continued success of their industry lay in keeping these secrets.


“The success lies in the secrecy” is a shibboleth of the private surveillance industry, as described in Byron Tau’s new book, “Means of Control”. It is easy to find parallels to my opening anecdote throughout, though, to be clear, a direct comparison to human-led ecological destruction is a knowingly exaggerated metaphor. Still, the erosion of privacy and civil liberties is horrifying in its own right, and it shares key attributes with the Exxon story: those in the industry knew what they were doing and allowed it to persist because it was lucrative and, in a post-9/11 landscape, ostensibly justified.

Tau’s byline is likely familiar to anyone interested in online privacy. For several years at the Wall Street Journal, he produced dozens of deeply reported articles about the intertwined businesses of online advertising, smartphone software, data brokers, and intelligence agencies. Tau no longer writes for the Journal, but “Means of Control” is an expansion of that earlier work and carefully arranged into a coherent set of stories.

Tau’s book, like so many others describing the current state of surveillance, begins with the terrorist attacks of September 11, 2001. Those were the early days, when Acxiom realized it could connect its consumer data set to flight and passport records. The U.S. government ate it up and its appetite proved insatiable. Tau documents the growth of an industry that did not exist — could not exist — before the invention of electronic transactions, targeted advertising, virtually limitless digital storage, and near-universal smartphone use. This rapid transformation occurred not only with little regulatory oversight, but with government encouragement, including through investments in startups like Dataminr, GeoIQ, PlaceIQ, and PlanetRisk.

In near-chronological order, Tau tells the stories which have defined this era. Remember when documentation released by Edward Snowden showed how data created by mobile ad networks was being used by intelligence services? Or how a group of Colorado Catholics bought up location data for outing priests who used gay-targeted dating apps? Or how a defence contractor quietly operates nContext, an adtech firm, which permits the U.S. intelligence apparatus to effectively wiretap the global digital ad market? Regarding the latter, Tau writes of a meeting he had with a source who showed him a “list of all of the advertising exchanges that America’s intelligence agencies had access to”, and who told him American adversaries were doing the exact same thing.

What impresses most about this book is not the volume of specific incidents — though it certainly delivers on that front — but the way they are all woven together into a broader narrative perhaps best summarized by Tau himself: “classified does not mean better”. That can be true for volume and variety, and it is also true for the relative ease with which it is available. Tracking someone halfway around the world no longer requires flying people in or even paying off people on the ground. Someone in a Virginia office park can just make that happen and likely so, too, can other someones in Moscow and Sydney and Pyongyang and Ottawa, all powered by data from companies based in friendly and hostile nations alike.

The tension running through Tau’s book is in the compromise I feel he attempts to strike between acknowledging the national security utility of a surveillance state while describing how the U.S. has abdicated the standards of privacy and freedom it has long claimed are foundational rights. His reporting often reads as an understandable combination of awe and disgust. The U.S. has, it seems, slid in the direction of the kinds of authoritarian states its administration routinely criticizes. But Tau is right to clarify in the book’s epilogue that the U.S. is not, for example, China, separated from the standards of the latter by “a thin membrane of laws, norms, social capital, and — perhaps most of all — a lingering culture of discomfort” with concentrated state power. However, the preceding chapters of the book show questions about power do not fully extend into the private sector, where there has long been pride in the scale and global reach of U.S. businesses but concern about their influence. Tau’s reporting shows how U.S. privacy standards have been exported worldwide. For a more pedestrian example, consider the frequent praise–complaint sandwiches of Amazon, Meta, Starbucks, and Walmart, to throw a few names out there.

Corporate self-governance is an entirely inadequate response. Just about every data broker and intermediary from Tau’s writing which I looked up promised it was “privacy-first” or used similar language. Every business insists in its marketing literature that it is concerned about privacy and careful about how it collects and uses information, and businesses have been saying so for decades — yet here we are. Entire industries have been built on the backs of tissue-thin user consent and a flexible definition of “privacy”.

When polled, people say they are concerned about how corporations and the government collect and use data. Still, when lawmakers mandate choices for users about their data collection preferences, the results do not appear to show a society that cares about personal privacy.

In response to the E.U.’s General Data Protection Regulation, websites decided they wanted to continue collecting and sharing loads of data with advertisers, so they created the now-ubiquitous cookie consent sheet. The GDPR does not explicitly mandate this mechanism, and many of these sheets fail to comply with the rules and intention of the law, but they have become a particularly common form of user consent. However, if you arrive at a website and it asks you whether you are okay with it sharing your personal data with hundreds of ad tech firms, are you providing meaningful consent with a single button click? Hardly.

Similarly, something like 10–40% of iOS users agree to allow apps to track them. In the E.U., the cost of opting out of Meta’s tracking will be €6–10 per month which, I assume, few people will pay.

All of these examples illustrate how inadequately we assess cost, utility, and risk. It is tempting to think of this as a personal responsibility issue akin to cigarette smoking but, as we are so often reminded, none of this data is particularly valuable in isolation — it must be aggregated in vast amounts. It is therefore much more like an environmental problem.

As with global warming, exposé after exposé after exposé is written about how our failure to act has produced extraordinary consequences. All of the technologies powering targeted advertising have enabled grotesque and pervasive surveillance as Tau documents so thoroughly. Yet these are abstract concerns compared to a fee to use Instagram, or the prospect of reading hundreds of privacy policies with a lawyer and negotiating each of them so that one may have a smidge of control over their private information.

There are technical answers to many of these concerns, and there are also policy answers. There is no reason both should not be used.

I have become increasingly convinced the best legal solution is one which creates a framework limiting the scope of data collection, restricting it to only that which is necessary to perform user-selected tasks, and preventing mass retention of bulk data. Above all, users should not be able to choose a model that puts them in obvious future peril. Many of you probably live in a society where so much is subject to consumer choice, so what I wrote sounds pretty drastic, but it is not. If anything, it is substantially less radical than the status quo that permits such expansive surveillance on the basis that we “agreed” to it.

Any such policy should also be paired with something like the Fourth Amendment is Not For Sale Act in the U.S. — similar legislation is desperately needed in Canada as well — to prevent sneaky exclusions from longstanding legal principles.

Last month, Wired reported that Near Intelligence — a data broker you can read more about in Tau’s book — was able to trace dozens of individual trips to Jeffrey Epstein’s island. That could be a powerful investigative tool. It is also very strange and pretty creepy that all that information was held by some random company you probably have not heard of or thought about outside stories like these. I am obviously not defending the horrendous shit Epstein and his friends did. But it is really, really weird that Near is capable of producing this data set. When interviewed by Wired, Eva Galperin, of the Electronic Frontier Foundation, said “I just don’t know how many more of these stories we need to have in order to get strong privacy regulations.”

Exactly. Yet I have long been convinced an effective privacy bill could not be implemented in either the United States or the European Union, and certainly not with any degree of urgency. And, no, Matt Stoller: de facto rules on the backs of specific FTC decisions do not count. Real laws are needed. But the products and services which would be affected are too popular and too powerful. The E.U. is home to dozens of ad tech firms that promise full identity resolution. The U.S. would not want to destroy such an important economic sector, either.

Imagine my surprise when, while I was in the middle of writing this review, U.S. lawmakers announced the American Privacy Rights Act (PDF). If passed, it would give individuals more control over how their information — including biological identifiers — may be collected, used, and retained. Importantly, it requires data minimization by default. It would be the most comprehensive federal privacy legislation in the U.S., and it also promises various security protections and remedies, though I think lawmakers’ promise to “prevent data from being hacked or stolen” might be a smidge unrealistic.

Such rules would more-or-less match the GDPR in setting a global privacy regime that other countries would be expected to meet, since so much of the world’s data is processed in the U.S. or otherwise under U.S. legal jurisdiction. The proposed law borrows heavily from the state-level California Consumer Privacy Act, too. My worry is that it will be treated by corporations similarly to the GDPR and CCPA by continuing to offload decision-making to users while taking advantage of a deliberate imbalance of power. Still, any progress on this front is necessary.

So, too, is it useful for anyone to help us understand how corporations and governments have jointly benefitted from privacy-hostile technologies. Tau’s “Means of Control” is one such example. You should read it. It is a deep exploration of one specific angle of how data flows from consumer software to surprising recipients. You may think you know this story, but I bet you will learn something. Even if you are not a government target — I cannot imagine I am — it is a reminder that the global private surveillance industry only functions because we all participate, however unwillingly. People get tracked based on their own devices, but also those around them. That is perhaps among the most offensive conclusions of Tau’s reporting. We have all been conscripted for any government buying this data. It only works because it is everywhere and used by everybody.

For all they have erred, democracies are not authoritarian societies. Without reporting like Tau’s, we would be unable to see what our own governments are doing and — just as important — how that differs from actual police states. As Tau writes, “in China, the state wants you to know you’re being watched. In America, the success lies in the secrecy”. Well, the secret is out. We now know what is happening despite the best efforts of an industry to keep it quiet, just like we know the Earth is heating up. Both problems massively affect our lived environment. Nobody — least of all me — would seriously compare the two. But we can say the same about each of them: now we know. We have the information. Now comes the hard part: regaining control.

Earlier this week, Dave Kendall of documentary production company Prairie Hollow, and formerly of a Topeka, Kansas PBS station, wrote an article in the Kansas Reflector criticizing Meta. Kendall says he tried to promote posts on Facebook for a screening of “Hot Times in the Heartland” but was prevented from doing so. A presumably automated message said the post was not compliant with Meta’s political ads policy.

I will note Meta’s ambiguous and apparently fluid definition of which posts count as political. But Kendall comes to the ridiculous conclusion that “Meta deems climate change too controversial for discussion” based solely on his inability to “boost” an existing post. Being pedantic but correct, that means that Meta did not prohibit discussion generally, just the ad.

I cannot fault Kendall’s frustration, however, as he correctly describes the non-specific support page and nonexistent support:

But in the Meta-verse, where it seems virtually impossible to connect with a human being associated with the administration of the platform, rules are rules, and it appears they would prefer to suppress anything that might prove problematic for them.

Exactly. This accurately describes the imbalanced power of even buying ads on Meta’s platforms. Advertisers are Meta’s customers and, unless one is a big spender, they receive little to no guidance. There are only automated checks and catch-all support contacts, neither of which are particularly helpful for anything other than obvious issues.

A short while later in the editorial, however, things take a turn for the wrong again:

The implications of such policies for our democracy are alarming. Why should corporate entities be able to dictate what type of speech or content is acceptable?

In a centralized social network like Facebook, the same automated technologies which flagged this post also flag and remove posts which contribute to a poor community. We already know how lax policies turn out and why those theories do not last in the real world.

Of course, in a decentralized social network, it is possible to create communities with different policies. The same spec that underpins Mastodon, for example, also powers Gab and Truth Social. Perhaps that is more similar to the system which Kendall would prefer — but that is not how Facebook is built.

Whatever issue Facebook flagged regarding those ads — Kendall is not clear, and I suspect that is because Facebook is not clear either — the problems of its poor response intensified later that day.

Clay Wirestone and Sherman Smith, opinion editor and editor-in-chief, respectively, of the Kansas Reflector:

This morning, sometime between 8:20 and 8:50 a.m. Thursday, Facebook removed all posts linking to Kansas Reflector’s website.

This move not only affected Kansas Reflector’s Facebook page, where we link to nearly every story we publish, but the pages of everyone who has ever shared a story from us.

[…]

Coincidentally, the removals happened the same day we published a column from Dave Kendall that is critical of Facebook’s decision to reject certain types of advertising: “When Facebook fails, local media matters even more for our planet’s future.”

Marisa Kabas, writing in the Handbasket:

Something strange started happening Thursday morning: Facebook users who’d at some point in the past posted a link to a story from the Kansas Reflector received notifications that their posts had violated community standards on cybersecurity. “It looks like you tried to gather sensitive information, or shared malicious software,” the alert said.

[…]

Shortly after 4, it appeared most links to the site were posting properly on Meta properties — Facebook, Instagram, Threads — except for one: Thursday’s column critical of Facebook.

If you wanted to make a kind-of-lame modern conspiracy movie, this is where the music swells and it becomes a fast-paced techno-thriller. Kabas followed this article with one titled “Here’s the Column Meta Doesn’t Want You to See”, republishing Kendall’s full article “in an attempt to sidestep Meta’s censorship”.

While this interpretation of a deliberate effort by Facebook to silence critical reporting is kind of understandable, given its poor communication and the lack of adequate followup, it hardly strikes me as realistic. In what world would Meta care so much about tepid criticism published by a small news operation that it would take deliberate manual actions to censor it? Even if you believe Meta would be more likely to kneecap a less visible target than, say, a national news outlet, it does not make sense for Facebook to be this actively involved in hiding any of the commentary I have linked to so far.

Facebook’s explanation sounds more plausible to me. Sherman Smith, Kansas Reflector:

Facebook spokesman Andy Stone in a phone call Friday attributed the removal of those posts, along with all Kansas Reflector posts the day before, to “a mistaken security issue that popped up.” He wouldn’t elaborate on how the mistake happened and said there would be no further explanation.

[…]

“It was a security issue related to the Kansas Reflector domain, along with the News From The States domain and The Handbasket domain,” Stone added. “It was not this particular story. It was at the domain level.”

If some system at Meta erroneously flagged Kendall’s original attempt to boost a post as a threat, it makes sense that related stories and domains would also be flagged. Consider how beneficial this same chain of effects could be if there were actually a malicious link: not only does it block the main offending link, but also any adjacent links that look similar, and any copycats or references. That is an entirely fair way to prevent extreme platform abuse. In this case, with large numbers of people trying to post one link that had already been flagged, alongside other similar links, it is easy to see how Meta’s systems might see suspicious behaviour.

For an even simpler example, consider how someone forgetting a password for their account looks exactly the same as someone trying to break into it. On any website worth its salt, you will be slowed down or prevented from trying more than some small number of password attempts, even if you are the actual account owner. This is common security behaviour; Meta’s is merely more advanced.

This is not to say Meta got this right — not even a little bit. I have no reason to carry water for Meta and I have plenty to criticize; more on that later. Unfortunately, the coverage of this non-story has been wildly disproportionate and misses the actual problems. CNN reported that Meta was “accused of censoring” the post. The Wrap said definitively that it “block[ed] Kansas Reflector and MSNBC columnist over op-ed criticizing Facebook”. An article in PC Magazine claimed “Facebook really, really doesn’t want you to read” Kendall’s story.

This is all nonsense.

What is true and deeply frustrating is the weak approach of companies like Meta and Google toward customer service. Both have offloaded the administrative work of approving or rejecting ads to largely automated systems, with often vague and unhelpful responses, because they have prioritized scale above quality from their earliest days.

For contrast, consider how apps made available in Apple’s App Store have always received human review. There are plenty of automated processes, too, which can detect obvious problems like the presence of known malware — but if an app passes those tests, a person sees it before approving or rejecting it. Of course, this system is also deeply flawed; see the vast number of articles and links I have posted over the years about the topic. Any developer can tell you that Apple’s support has problems, too. But you can see a difference in approaches between companies which have scaled with human intervention, and those which have avoided it.

Criticism of Meta in this case is absolutely warranted. It should be held to a higher standard, with more options available for disputing its moderation judgements, and its pathetic response in this case deserves the scrutiny and scorn it is receiving. This is particularly true as it rolls out its de-prioritization of “political” posts in users’ feeds, while continuing to dodge meaningful explanations of what will be affected.

Dion Lefler, the Wichita Eagle:

Both myself and Eagle investigative reporter Chance Swaim have tried to contact Facebook/Meta — although we knew before we started that it’s a waste of time and typing.

Their corporate phone number is a we-don’t-give-a-bleep recording that hangs up on you after two repeats. And their so-called media relations department is where press emails go to die.

Trying to understand how these megalithic corporations make decisions is painful enough, and their ability to dodge the press gives the impression they are not accountable to anybody. They may operate our social spaces and digital marketplaces, but they are oftentimes poor stewards. There will always be problems at this scale. Yet, it often seems as though public-facing tech businesses, in particular, behave as though they are still scrappy upstarts with little responsibility to a larger public. Meta is proud to say its products “empower more than 3 billion people around the world”. I cannot imagine what it is like to design systems which affect that many people. But it is important to criticize the company when it messes up this badly without resorting to conspiracy theories or misleading narratives. The press can do better. But Meta also needs to be more responsive, less hostile, and offer better explanations of how these systems work because, like just about any massive entity, nobody should be trusting it at its word.

What follows is a short complaint about a couple of things I have written about occasionally over the past couple of years: proprietary chargers and Amazon’s rapid decline in trustworthiness. I am prefacing it with this disclaimer because perhaps you do not want to read a complaint today or, very likely, ever. But maybe you do.

The proprietary charger in question is for a Garmin watch, and I really did hope to find one in just two days because I am meeting up with a traveler. I checked a couple of electronics and sporting goods stores here but because Garmin does not have the presence of, say, Apple, only a handful of places seem to stock their unique cables, and none that I could get to before Sunday afternoon.

But, hey, we have an Amazon Prime account — mostly, it seems, for “Jack Reacher”. Prime is a modern human-powered logistics marvel. If ever there was a time when it would save the day, it would be in unique circumstances like these, right?

Well, only if you have a certain level of comfort with pseudo-branded products of questionable quality. I tried, but I could not find a “Garmin” cable, only a bunch from companies that do not really exist. Do any of them have a warranty? Will the off-brand business be around a couple of months from now? The cables are less expensive, to be sure, almost to a point where there is no need to make a quality product because they can be treated as disposable. I do not like that; it feels rather wasteful. And even if I felt comfortable buying one of these cables for someone else, it would not have arrived in time, so even the Prime promise did not work out.

To be clear, my problem is not that Amazon could not serve me up a cable in a matter of hours. It is that Garmin’s choice to use a proprietary charger created complication, and that the best marketplace solution is so sketchy. Virtually all of these cables were marked “Amazon’s Choice”, which does not mean what it implies.

This is far from a one-time problem with Amazon which, not so long ago, was a perfectly reputable online store. Not any longer. I just recently noticed the Frigidaire water filters I bought from Amazon have typos on the packaging and on the filters themselves that make me wonder if they are real. I thought I had found an authentic part after dodging obvious knock-offs and plenty of questionable ads, but it seems I may have been conned.

Everything about this feels dirty. I just wanted a charging cable, and I found myself annoyed by one brand’s protectionism and another’s self-destruction. Neither of those things help me in the moment, and they do not make me feel good as a little peek into the broader context they represent.

Jim Dalrymple:

Siri has done what no person could for 30 years: Make me stop using an Apple product.

I am giving up on my 8 HomePods/minis out of the sheer frustration of trying to use Siri.

I’ve been in tech for 30 years and this is one of the worst technologies ever and only getting worse

Via Michael Tsai, who has collected recent quotes from other critics.

You could go back one, five, or ten years and find people complaining about pretty much the same problems. The ways it fails me today are pretty much the same as the ways it has failed me for years, and there is no excuse: I speak English with a mild Canadian accent, I do not have a stutter or any other disability, and I am using the latest version of iOS on the newest-model iPhone. Of course, that is not what trips Siri up — it transcribes me perfectly most of the time. But it delivers utter nonsense.

Sometimes, after I ask Siri to reply to a message, it will ask which contact details to use instead of just sending the message to the phone number or email address from which it came. Just now, I asked Siri how much three tablespoons of butter weighs, and it responded in litres. This is basic shit.

A voice interface is such a difficult interaction model to get right because there is no predictable boundary. A user must trust the computer to interpret and execute each command accurately and, if it fails them once, why would they attempt to do the same thing in the future? They know it does not work.

Sean Sperte, in response to a joke:

This is why Siri’s ineptitude is a branding problem for Apple more than anything else. (I believe it’s also the reason HomePod isn’t a bigger hit.)

To interpolate one of the few good moments from a bad show, Apple has a P.R. problem because it has an actual problem in Siri.

John Gruber:

First impressions really matter, but in Siri’s case, it’s over a decade of lived experience. If I were at Apple and believed the company finally had a good voice assistant experience, I’d push for a new brand.

I would not be surprised if Apple used a complete rearchitecting of Siri to change its name.

Something I cannot help but wonder is whether Siri would still be so bad if users could pick something else. That goes for any platform and any product, by the way — what if you could pick Google’s assistant on an Amazon device, or Siri on a Google device? I am not suggesting this is how it ought to be. But what if these voice assistants actually had to compete with each other directly instead of in the context of the products in which they are sold? Would that inspire more rapid development, higher quality, and more confidence from users?

Instead, here we are: Apple may as well give up on Siri as it is currently envisioned. It seems many users do not really trust it. I have given up on it for anything more complicated than setting a timer.

The U.S. antitrust case against Apple was not a closely guarded secret. Stories in the New York Times and Bloomberg spoiled not just the general timing of the case, but its contours as well. That gave Adam Kovacevich, of Chamber of Progress, the confidence to dispute the government’s arguments before the lawsuit was filed — a risky choice, I think.

Kovacevich is the CEO and co-founder of Chamber of Progress, a nominally progressive lobbying organization for large technology companies. It was launched in 2021, and is funded by corporations you know like Amazon, Apple, Google, and Uber; Kovacevich used to work on public policy at Google. The Chamber uses its support of progressive causes like voter rights and universal health care as cover for its main activity, which is reflecting the priorities of its funders. The Chamber routinely argues on its blog and in legal filings in defence of big business as usual.

Kovacevich begins his attempt at front-running the government’s arguments by transforming a possibility into a certainty:

This suit has been rumored for months, so we have a good idea of what it will include. It will likely force iPhones to work more like Android devices.

If you’re among the millions of Americans who have purchased an iPhone because of integrated features like Find My Phone, Apple Pay, iMessage, or integration with Airpods and Apple Watch, you better hope that this lawsuit fails.

Because if it succeeds, there will no longer be any difference between your iPhone and an Android device.

This is flagrantly untrue. Maybe you are willing to cut Kovacevich some slack because this article was written before the complaint was filed, but I am not, because Kovacevich could have just waited one extra day to see if he was right. But, even on Wednesday, it would have been a stretch to think the case would demand enough changes that, if successful, it would remove “any difference between” iPhones and Android phones. Even the E.U.’s Digital Markets Act, for how comprehensive it is, will not have that result.

In a press release published after the suit was filed — otherwise known as the correct time to react to something: after it has happened — Kovacevich did pull back to a more cautious position of saying it “would make iPhones more like Androids”, emphasis mine. But that is so vague, even in its full context of “forc[ing] Apple to open up its software and hardware”, it is almost meaningless. Is a private API for the NFC chip really part of what makes an iPhone so different from an Android phone? That seems like a pretty flimsy argument when there is so much about iOS that is actually meaningfully different from Android and not for reasons hostile to competition.

Kovacevich:

This lawsuit wasn’t spurred by consumer or voter complaints. Instead, companies like Tile, Beeper, Spotify, Match Group (a former client of DOJ Antitrust Chief Jonathan Kanter), banks, and payment apps have all spent months pushing the DOJ to bring this lawsuit. They would be the largest beneficiaries of the lawsuit.

Whether Americans’ complaints “spurred” the Department of Justice to act is a good question, but it is untrue to argue there have been no complaints. Most people in the U.S. have, for years, responded favourably to polls asking if they support regulating the largest technology firms, though they have not ranked it as a top priority. Even the Chamber of Progress’ own polling found support for regulations, somewhat undermined by the specific examples of consequences.

It is probably true that business complaints were the primary drivers of the DoJ’s action, though. An annotation I wrote for one part about payment apps in my copy of the complaint reads “sounds like a bank wrote this”. But protesting this on the grounds of corporate involvement is pretty rich coming from the guy who runs a lobbying firm arguing for the positions of even bigger corporations. Are we really supposed to be mad if Tile benefits?

Kovacevich:

More than 135 million Americans own an iPhone. And for many of them, the ease and simplicity of iPhone’s integrated experience is why they purchased the device in the first place.

I have owned Tile tracking devices. Apple’s AirTags and Find My Phone work much better.

I have owned Android Watches. But the connection between my Apple Watch and my iPhone is seamless.

When I pop in my AirPods, my iPhone recognizes them right away. And iMessage just works across my phone, computer, and iPad.

When I purchase an app on my phone, it’s automatically available on my iPad too.

Despite years of hype over “mobile payments,” I never even considered leaving my traditional wallet at home until I started using Apple Pay.

Why, specifically, are these third-party products less capable on an iPhone compared to first-party options, Adam?

More to the point, what is the goal here? The government’s position is not that Apple should reduce the capabilities of its own products, but that Apple should not so aggressively restrict third-party capabilities. What if other smartwatches or tracking devices or headphones worked better with iPhones? Maybe not entirely to Apple’s first-party standards but, you know, better. That sounds like a preferable situation to one in which consumers are compelled to remain within the confines of first-party products allegedly because of deliberate attempts to avoid competition.

Kovacevich:

I understand fully why Tile, Beeper, and Match Group have agitated for this lawsuit. It would surely benefit them. But US competition law is designed to help consumers, not competitors. And this suit will force Apple to break the seamless experience that millions of customers have chosen.

That is one perspective on U.S. competition law. But it is not an argument shared by everybody, and it is disingenuous to claim that is how the law has been “designed” so much as how it has been shaped since the 1970s.

The argument in favour of also balancing a desire for competition has been criticized by lobbyists for large technology firms, but it is a discussion worth having: what problems are created by the mere existence of uniquely large businesses? The Chamber and the CCIA say their size is what lets them offer things like comprehensive services and free shipping, which consumers like and, therefore, there is no need to intervene. But are there negative outcomes, too, especially if smaller businesses struggle to compete due to those apparently inherent advantages of being big? That is a core question of newer perspectives on antitrust.

Kovacevich then takes on the question of whether the iPhone has “market power” or “monopoly power”, which are different things that he seems to conflate. The title of this section is “Courts Have Found that iOS Doesn’t have Market Power”, and I wanted to focus on this:

Furthermore, Judge Yvonne Gonzalez Rogers found in the Epic v. Apple case that:

Apple’s market share is below the general ranges of where courts found monopoly power under Section 2…[the] Court cannot conclude that Apple’s market power reaches the status of monopoly power in the mobile gaming market.

I am always suspicious when I see mashed-together quotes like these. Indeed, the first part of the quote comes from two pages before the second. While it was fair to eliminate some of the discussion and assessment of the market, this mashup eliminates significant context from before and after.

For background, on page 87, the judge notes that this is a calculation of the global mobile gaming market, of which Apple’s share is apparently nearly 60% by dollar value despite the iPhone’s 16% share of global devices. Whether this global share will be relevant to the 2024 trial is a question for the courts.

Immediately before the first part of that mashup quotation, the judge writes on page 137:

[…] That Apple has more than a majority in a mostly duopolistic, and otherwise highly concentrated, market indicates that Apple has considerable market power.

So to Kovacevich’s section title — “Courts Have Found that iOS Doesn’t have Market Power” — I would note that courts have also found iOS does have market power. And here is what the judge wrote immediately following the second part of that mashed-up quote, as it appears on page 139:

That said, the evidence does suggest that Apple is near the precipice of substantial market power, or monopoly power, with its considerable market share. Apple is only saved by the fact that its share is not higher, that competitors from related submarkets are making inroads into the mobile gaming submarket, and, perhaps, because plaintiff did not focus on this topic.

The impression you might get if you read Kovacevich’s summary is that Apple is definitely not a monopoly. But the actual argument made by the judge in this case is that if Apple’s share grows only a little more, it may have a monopoly position.

Kovacevich wraps by comparing the duopoly of device options to Disneyland and Yosemite National Park:

It’s great for consumers that we have these two alternative models of mobile devices — one closed and integrated, one open and flexible. People vote with their pocketbooks — and have switched back and forth between Androids and iPhones.

So why should the government force iPhones to look more like Androids?

I enjoy visiting the safe, sanitized environment of Disneyland and the wild of Yosemite National Park. But I would hate to see the government force Disneyland to look more like Yosemite (or vice versa).

Tourist attractions are a poor analogy for owning a smartphone. A better one, if you want an analogy, is something like a really powerful company town compared to a normal city. Everything you can buy and do is filtered through a paternalistic owner, there are seemingly arbitrary rules, and, despite all the bureaucracy, it is unwise for businesses not to set up shop there because its residents seem to spend more money.

People make all kinds of trade-offs when they buy something as complex and convergent as a smartphone, and it is difficult to know how much of that is a fair vote with their wallet and how much of it is a side effect of the platform owner’s impositions.

We saw this play out before the iPhone 6 was introduced. Apple still sold plenty of iPhones even though its models had smaller displays than competing products, and it was unclear whether people were buying iPhones because they were small or in spite of their size. The still-unbeaten unit sales of the iPhone 6 models show lots of people wanted a bigger iPhone. Some of those buyers formerly used an Android phone, but others were existing iPhone customers who bought previous models even though they wished those phones were bigger. Still others were like me: people who still bought an iPhone because of other factors, even though they were now — and remain — too big.

Questions like these are far too complicated to simplify into the catchy but wrong claim that “government [will] force iPhones to look more like Androids”. There are undoubtedly some — many, probably — who really like the way their iPhone works today. But I know people who have other smartwatches who wish they worked better with their iPhone. There are iPhone features which I bet would work better if Apple had meaningful competition within its own platform.

That lots of people buy iPhones is not inherently a vote of confidence in each detail of the entire package. If some of those things changed a little bit — the U.S. government’s suit is not a massive overhaul of the way the iPhone works — I doubt people would stop liking or trusting the product.

Whether they will like or trust their bank’s attempt at a wallet app is another discussion entirely.

During a White House press briefing on March 12, CBS News’ Ed O’Keefe asked press secretary Karine Jean-Pierre if photos of the president or other members of the White House are ever digitally altered. Jean-Pierre laughed and asked, in response, “why would we digitally alter photos? Are you comparing us to what’s going on in the U.K.?” O’Keefe said he was just doing due diligence. Jean-Pierre said, regarding digital photo manipulation, “that is not something that we do here”.

It is unclear to me whether Jean-Pierre was specifically declining the kind of multi-frame stacking apparent in the photo of the Princess of Wales and her children, or digital alterations more broadly. But it got me thinking — there is a good-faith question to be asked here: are public bodies meeting the standards of editorial photography?

Well, first, it depends on which standards one refers to. There are many — the BBC has its own, as does NPR, the New York Times, and the National Press Photographers Association. Oddly, I could not find comparable documentation for the expectations of the official White House photographer. But it is the standards of the Associated Press which are the subject of the Princess of Wales photo debacle, and they are both representative and comprehensive:

Minor adjustments to photos are acceptable. These include cropping, dodging and burning, conversion into grayscale, elimination of dust on camera sensors and scratches on scanned negatives or scanned prints and normal toning and color adjustments. These should be limited to those minimally necessary for clear and accurate reproduction and that restore the authentic nature of the photograph. Changes in density, contrast, color and saturation levels that substantially alter the original scene are not acceptable. Backgrounds should not be digitally blurred or eliminated by burning down or by aggressive toning. The removal of “red eye” from photographs is not permissible.

If I can summarize these rules: changes should minimize the influence of the camera on how the scene was captured, and represent the scene as true to how it would be seen in real life. Oh, and photographers cannot remove red eye. Those are the standards I expect the White House photographer to meet in order to claim photos are not digitally “altered”.

Happily, we can find out if those expectations are met even from some JPEG exports. Images edited using Adobe Lightroom carry metadata describing the edits made in surprising detail, and you can view that data using Photoshop or ExifTool. I opened a heavily manipulated photo of my own — the JPEG, not the original RAW file — and found in its metadata a record of colour and light correction, adjustment masks, perspective changes, and data about how much I healed and cloned. It was a lot; to be clear, that photo would not meet editorial standards.
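If you want to see this metadata without dedicated software, the XMP packet can often be pulled straight out of a JPEG file, because it sits in an APP1 segment whose payload begins with a fixed namespace identifier. Here is a minimal Python sketch; it ignores extended multi-segment XMP and other edge cases real tools like ExifTool handle, and the sample bytes are fabricated for illustration:

```python
def extract_xmp(jpeg_bytes: bytes):
    """Return the XMP packet embedded in a JPEG, or None if absent.

    Standard XMP lives in an APP1 segment prefixed with the namespace
    identifier below, so a simple byte search is enough for a sketch.
    """
    marker = b"http://ns.adobe.com/xap/1.0/\x00"
    start = jpeg_bytes.find(marker)
    if start == -1:
        return None
    payload = start + len(marker)
    close = b"</x:xmpmeta>"
    end = jpeg_bytes.find(close, payload)
    if end == -1:
        return None
    return jpeg_bytes[payload:end + len(close)].decode("utf-8", "replace")

# A tiny fabricated JPEG-like byte string, not a real photo:
fake = (b"\xff\xd8\xff\xe1\x00\x40http://ns.adobe.com/xap/1.0/\x00"
        b"<x:xmpmeta><crs:RetouchInfo/></x:xmpmeta>\xff\xd9")
print(extract_xmp(fake))
```

Searching the extracted packet for tags like `crs:RetouchInfo` is then a matter of ordinary string or XML handling.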

To find out what was done by the White House, I downloaded the original-sized JPEG copies of many images from the Flickr accounts of the last three U.S. presidents. Then I examined the metadata. Even though O’Keefe’s question pertained specifically to the president, vice president, and other people in the White House, I broadened my search to include any photo. Surely all photos should meet editorial standards. I narrowed my attention to the current administration and the previous one because the Obama administration covered two terms, and that is a lot of pictures to go through.

We will start with an easy one. Remember that picture from the Osama Bin Laden raid? It is obviously manipulated and it says so right there in the description: “a classified document seen in this photograph has been obscured”. I think most people would believe that is a fair alteration.

But the image’s metadata reveals several additional spot exposure adjustments throughout the image. I am guessing some people in the back were probably under-exposed in the original.

This kind of exposure adjustment is acceptable by editorial standards — it is the digital version of dodging and burning. It is also pretty standard across administrations. A more stylized version was used during the Trump administration on pictures like this one to make some areas more indigo, and the Biden administration edited parts of this picture to make the lights bluer.

All administrations have turned some colour pictures greyscale, and have occasionally overdone it. The Trump administration increased the contrast and crushed the black levels in parts of this photo, and I wonder if that would be up to press standards.

There are lots more images across all three accounts which have gradient adjustments, vignettes, and other stylistic changes. These are all digital alterations to photos which are, at most, aesthetic choices that do not meaningfully change the scene or the way the image is interpreted.

But I also found images which had more than those simple adjustments. The Biden administration published a photo of a lone officer in the smoke of a nineteen-gun salute. Its metadata indicates the healing brush tool was used in a few places (line breaks added to fit better inline):

<crs:RetouchInfo>
    <rdf:Seq>
        <rdf:li>
                centerX = 0.059098, 
                centerY = 0.406924, 
                radius = 0.011088, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.037496, 
                sourceY = 0.387074, 
                spotType = heal
        </rdf:li>
        <rdf:li>
                centerX = 0.432986, 
                centerY = 0.119173, 
                radius = 0.010850, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.460986, 
                sourceY = 0.106420, 
                spotType = heal
        </rdf:li>
        <rdf:li>
                centerX = 0.622956, 
                centerY = 0.430625, 
                radius = 0.010763, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.652456, 
                sourceY = 0.430625, 
                spotType = heal
        </rdf:li>
        <rdf:li>
                centerX = 0.066687, 
                centerY = 0.104860, 
                radius = 0.011204, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.041687, 
                sourceY = 0.104860, 
                spotType = heal
        </rdf:li>
    </rdf:Seq>
</crs:RetouchInfo>

I am not sure exactly what was removed from the image, but there appears to be enough information here to indicate where the healing brush was used. Unfortunately, I cannot find any documentation about how to read these tags. (My guess is that these are percent coordinates and that 0,0 is the upper-left corner.) If all that was removed is lens or sensor crud, it would probably be acceptable. But if objects were removed, it would not meet editorial standards.
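If my guess about the coordinate system is right, these entries are easy to turn into approximate pixel positions. A hypothetical Python sketch, using the first heal spot from the listing above; the parsing follows the "key = value" format of those tags, while the 5184 by 3456 frame size and the upper-left origin are assumptions on my part, not documented behaviour:

```python
def parse_retouch_entry(entry: str) -> dict:
    """Parse one RetouchInfo list item ("key = value, ...") into a dict,
    converting numeric fields to floats and leaving the rest as strings."""
    fields = {}
    for pair in entry.split(","):
        key, _, value = pair.partition("=")
        key, value = key.strip(), value.strip()
        if not key:
            continue
        try:
            fields[key] = float(value)
        except ValueError:
            fields[key] = value
    return fields

def to_pixels(fields: dict, width: int, height: int):
    """Convert normalized centre coordinates to pixel positions,
    assuming values are fractions of the frame with 0,0 at the upper left."""
    return (round(fields["centerX"] * width), round(fields["centerY"] * height))

# First heal spot from the Biden-administration photo's metadata:
entry = ("centerX = 0.059098, centerY = 0.406924, radius = 0.011088, "
         "sourceState = sourceSetExplicitly, sourceX = 0.037496, "
         "sourceY = 0.387074, spotType = heal")
spot = parse_retouch_entry(entry)
print(spot["spotType"], to_pixels(spot, 5184, 3456))
```

Under those assumptions, this spot sits roughly a twentieth of the way across the frame and a bit below vertical centre, which is at least enough to eyeball the published image for what might have been removed.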

The Trump administration also has photos that have been retouched (line breaks added to fit better inline):

<crs:RetouchInfo>
    <rdf:Seq>
        <rdf:li>
                centerX = 0.451994, 
                centerY = 0.230277, 
                radius = 0.009444, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.431994, 
                sourceY = 0.230277, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.471218, 
                centerY = 0.201147, 
                radius = 0.009444, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.417885, 
                sourceY = 0.264397, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.912961, 
                centerY = 0.220015, 
                radius = 0.009444, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.904794, 
                sourceY = 0.254265, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.097888, 
                centerY = 0.603009, 
                radius = 0.009444, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.069790, 
                sourceY = 0.606021, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.044445, 
                centerY = 0.443587, 
                radius = 0.009444, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.076612, 
                sourceY = 0.451837, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.388536, 
                centerY = 0.202074, 
                radius = 0.009444, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.274036, 
                sourceY = 0.201324, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.744251, 
                centerY = 0.062064, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.794084, 
                sourceY = 0.158064, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.715719, 
                centerY = 0.155432, 
                radius = 0.012959, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.782736, 
                sourceY = 0.190757, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.667622, 
                centerY = 0.118204, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.659455, 
                sourceY = 0.078204, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.631788, 
                centerY = 0.082258, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.643121, 
                sourceY = 0.120008, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.768446, 
                centerY = 0.089400, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.786446, 
                sourceY = 0.124150, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.805172, 
                centerY = 0.059118, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.810672, 
                sourceY = 0.100618, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.525624, 
                centerY = 0.138548, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.482791, 
                sourceY = 0.162548, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.509623, 
                centerY = 0.182811, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.482790, 
                sourceY = 0.175061, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.417535, 
                centerY = 0.076733, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.373202, 
                sourceY = 0.076483, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.223111, 
                centerY = 0.275574, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.256444, 
                sourceY = 0.275574, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.201020, 
                centerY = 0.239967, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.216353, 
                sourceY = 0.204467, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.097134, 
                centerY = 0.132270, 
                radius = 0.010959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.121134, 
                sourceY = 0.138270, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.045526, 
                centerY = 0.096486, 
                radius = 0.010959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.020859, 
                sourceY = 0.137486, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.062159, 
                centerY = 0.113695, 
                radius = 0.010959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.039326, 
                sourceY = 0.140945, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.058762, 
                centerY = 0.134971, 
                radius = 0.010959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.042762, 
                sourceY = 0.161471, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.413132, 
                centerY = 0.425824, 
                radius = 0.010959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.439799, 
                sourceY = 0.425824, 
                spotType = clone
        </rdf:li>
    </rdf:Seq>
</crs:RetouchInfo>

Even though there are lots more edits to this photo, it seems plausible they were made to remove lens or sensor dust made more obvious by the heavy use of the dehaze (+14), contrast (+50), and clarity (+2) adjustments.

For what it is worth, this does not seem like a scandal to me — at least, not unless it can be shown edits to White House photos were made to alter what was actually in the frame. But, to review: does the White House digitally alter images? Yes, at least a little. Does the White House conform to accepted editorial standards? I am not sure. Should it? In my view, yes, always — and so should the products of any government photographer. Has the White House done anything remotely close to that Princess of Wales image? Not that I have seen. Should I stop writing this as a series of rhetorical questions? Oh, hell, yes.

Big news out of Brussels:

The European Commission has fined Apple over €1.8 billion for abusing its dominant position on the market for the distribution of music streaming apps to iPhone and iPad users (‘iOS users’) through its App Store. In particular, the Commission found that Apple applied restrictions on app developers preventing them from informing iOS users about alternative and cheaper music subscription services available outside of the app (‘anti-steering provisions’). This is illegal under EU antitrust rules.

Margrethe Vestager, executive vice president of the European Commission, in the transcript of a speech announcing the Commission’s findings and penalty:

Let me give you three examples of Apple’s anti-steering obligations:

  • First, music streaming developers were not allowed to inform their users, inside their own apps, of cheaper prices for the same subscription on the internet.

  • Second, they were also not allowed to include links in their apps to lead consumers to their websites and pay lower prices there.

  • And third, they were also not allowed to contact their own newly acquired users, for instance by email, to inform them about pricing options after they set up an account.

These anti-steering rules have been among the most aggressively policed of all the App Store policies. They have snared apps for violations like having a link buried in some documentation, requiring even large developers to create special pages — perhaps because Apple saw even small transgressions as opening the door to loopholes. Better be as tedious and cautious as possible.

Nevertheless, a few years ago, the Commission started looking into complaints that streaming music services — specifically — were disadvantaged by these policies. One could argue its interest in this specific category is because it is one area where European developers have some clout: in addition to Spotify, Deezer and SoundCloud are also European products. That is not a criticism: it should be unsurprising for European regulators to investigate an area where they have the grounds to do so. Alas, this is a relatively narrow investigation ahead of the more comprehensive enforcement of the Digital Markets Act, so treat this as a preview of what is to come for non-compliant companies.

The Commission has illustrated this in its press release with an image that features the icons of — among other apps — Beats Music, which Apple acquired in 2014 and turned into Apple Music, and Rdio, which was shut down in 2015.

Aside from the curious infographic, the Commission released this decision without much supporting documentation, as usual. It promises more information is to come after it removes confidential details. That is awkward if you are used to reading legal opinions from regulatory bodies elsewhere, many of which post the opinion alongside the decision so it is possible to work through the reasoning. Here, you get a press release and a speech — that is all.

Apple’s response to this decision is barely restrained and looks, frankly, terrible for one of the world’s largest and most visible corporations. There is no friendly soft-touch language here, nor is it a zesty spare statement. This is a press release seasoned with piss and vinegar:

The primary advocate for this decision — and the biggest beneficiary — is Spotify, a company based in Stockholm, Sweden. Spotify has the largest music streaming app in the world, and has met with the European Commission more than 65 times during this investigation.

[…]

Despite that success, and the App Store’s role in making it possible, Spotify pays Apple nothing. That’s because Spotify — like many developers on the App Store — made a choice. Instead of selling subscriptions in their app, they sell them on their website. And Apple doesn’t collect a commission on those purchases.

[…]

When it comes to doing business, not everyone’s going to agree on the best deal. But it sure is hard to beat free.

Strictly speaking — and we all know how much Apple likes that — Spotify pays more than “nothing” to distribute its app on iOS because a developer membership is not free.

But — point taken. Apple is making its familiar claim that iOS software which avoids its in-app purchase model is basically freeloading, but it is very happy for any developer’s success. Happy, happy, happy. Real fuckin’ happy. Left unsaid is how much of this infrastructure — hosting, updates, developer tooling, and so on — is required by Apple’s policies to be used by third-party developers. It has the same condescending vibe as the letter sent to Basecamp in 2020 amidst the Hey app fiasco. At the time, the App Review Board wrote “[t]hese apps do not offer in-app purchase — and, consequently, have not contributed any revenue to the App Store over the last eight years”, as though it is some kind of graceful obligation for Apple to support applications that do not inflate its own services income.

Nevertheless, Apple is standing firm. One might think it would reconsider its pugilism after facing this €1.8 billion penalty, investigations on five continents specifically regarding its payment policies, new laws written to address them, and flagging developer relations — but no. It wants to fight and it does not seem to care how that looks.

Today, Spotify has a 56 percent share of Europe’s music streaming market — more than double their closest competitor’s — […]

Apple does not state Spotify’s closest European competitor but, according to an earlier media statement, it is Amazon Music, followed by Apple Music. This is a complicated comparison: Spotify has a free tier, and Amazon bundles a version of its service with a Prime membership. Apple Music’s free tier is a radio-only service.

On that basis, it does seem odd from this side of the Atlantic if the Commission concluded Apple’s in-app payment policies were responsible for increased prices if the leading service is available free. But that is not what the Commission actually found. It specifically says the longtime policies “preventing [apps] from informing iOS users about alternative and cheaper music subscription services available outside of the app” are illegal, especially when combined with Apple’s first-party advantages. One effect among many could be higher prices paid by consumers. In the cases of Deezer and SoundCloud, for example, that is true: both apps charge more for in-app purchased subscriptions, compared to those purchased from the web, to cover Apple’s commission. But that is only one factor.

Carrying on:

[…] and pays Apple nothing for the services that have helped make them one of the most recognizable brands in the world. A large part of their success is due to the App Store, along with all the tools and technology that Spotify uses to build, update, and share their app with Apple users around the world.

This model has certainly played a role in Apple’s own success, according to an Apple-funded study (PDF): “Apple benefits as well, when the ecosystem it established expands and grows, either directly through App Store commissions or indirectly as the value users get from their iPhones increases”. Apple seems fixated on the idea that many apps of this type have their own infrastructure and, therefore, have little reason to get on board with Apple’s policies other than to the extent required. Having a universal software marketplace is probably very nice, but having each Spotify bug fix vetted by App Review probably provides less value than Apple wants to believe.

Like many companies, Spotify uses emails, social media, text messages, web ads, and many other ways to reach potential customers. Under the App Store’s reader rule, Spotify can also include a link in their app to a webpage where users can create or manage an account.

We introduced the reader rule years ago in response to feedback from developers like Spotify. And a lot of reader apps use that option to link users to a webpage — from e-readers to video streaming services. Spotify could too — but they’ve chosen not to.

About that second paragraph:

  • This change was not made because of developer requests. It was agreed to as part of a settlement with authorities in Japan in September 2021.

    Meanwhile, the European Commission says it began investigating Apple in June 2020, and informed the company of its concerns in April 2021, then narrowed them last year. I mention this in case there was any doubt this policy change was due to regulatory pressure.

  • This rule change may have been “introduced” in September 2021, but it was not implemented until the end of March 2022. It has been in effect for less than two years — hardly the “years ago” timeframe Apple says.

  • For clarification, external account management links are subject to strict rules and Apple approval. Remember how Deezer and SoundCloud offer in-app purchases? Apple’s policies say that means they cannot offer an account management link in their apps.

    This worldwide policy is specific to “reader” apps and is different from region-specific external purchase capabilities for non-“reader” apps. It only permits a single external link — one specific URL — which is only capable of creating and managing accounts, not individually purchased items. Still, it is weird how Spotify does not take advantage of this permission.

  • Spotify, a “reader” app, nevertheless attempted to ship app updates which included a way to get an email with information about buying audiobooks. These updates were rejected because Spotify is only able to email customers in ways that do not circumvent in-app purchases for specific items.

You can quibble with Spotify’s attempts to work around in-app purchase rules — it is obviously trying to challenge them in a very public way — but it is Apple which has such restrictive policies around external links, down to how they may be described. It is a by-the-letter reading to be as strict as possible, lest any loopholes be exploited. This inflexibility would surely be explained by Apple as its “level playing field”, but we all know that is not entirely true.

Instead, Spotify wants to bend the rules in their favor by embedding subscription prices in their app without using the App Store’s In-App Purchase system. They want to use Apple’s tools and technologies, distribute on the App Store, and benefit from the trust we’ve built with users — and to pay Apple nothing for it.

It is not entirely clear Spotify actually wants to do any of these things; it is more honest to say it has to do them if it wants to have an iPhone app. Spotify has routinely disputed various iOS policies only to turn around and reject Apple’s solutions. Spotify complained that users could not play music natively through the HomePod, but has not taken advantage of third-party music app support on the device added in 2020. Instead, it was Apple’s Siri improvements last year that brought Spotify to the HomePod, albeit in an opaque way.

If we accept Apple’s premise, however, it remains a mystery why Apple applies its platform monetization policy to iOS and the related operating systems it has spawned, but not to MacOS. By what criteria, other than Apple’s policy choices, are Mac developers able to sell digital goods however they want — unless they use the Mac App Store — but iOS developers must ask Apple’s permission to include a link to an external payment flow? And that is the conceded, enhanced freedom version of this policy.

There is little logic to the iOS in-app purchase rules, which do not apply equally to physical goods, reader apps, or even some à la carte digital goods. Nobody has reason to believe this façade any longer.

Apple obviously believes the Mac is a different product altogether with different policies, and that is great. The relatively minor restrictions it has imposed still permit a level of user control unimaginable on iOS, and Apple does not seem to have an appetite to further lock it down to iOS levels. But the differences are a matter of policy, not technology.

Apple justifies its commission by saying it “reflects the value Apple provides developers through ongoing investments in the tools, technologies, and services”. That is a new standard which apparently applies only to its iOS-derived platforms, compared to the way it invested in tools for Mac development. Indeed, Apple used to charge more for developer memberships when third-party software was only for the Mac, but even the top-of-the-line $3,500 Premier membership was probably not 30% of most developers’ revenue. Apple also charged for new versions of Mac OS X at this time. Now, it distributes all that for free; developers pay a small annual fee, and a more substantial rate to use the only purchasing mechanism they can use for most app types in most parts of the world.

For whatever reason — philosophical or financial — Apple’s non-Mac platforms are restricted and it will defend that stance until it is unable to do so. And, no matter how bad that looks, I kind of get it. I believe there could be a place for a selective and monitored software distribution system, where some authority figure has attested to the safety and authenticity of an app. That is not so different conceptually from how Apple’s notarization policies will work in Europe.

I oscillate between appreciating and detesting an app store model, even if the App Store is a mess. Even when I am in a better mood, however, it seems crystal clear that such a system would be far better if it were not controlled by the platform owner. The conflict of interest is simply too great. It would be better if some arm’s-length party, perhaps spiritually similar to Meta’s Oversight Board, controlled software and developer policies. I doubt that would fix every complaint with the App Store and App Review process, but I bet it would be a good start.

Being so pugnacious for over fifteen years of the App Store model has, I think, robbed Apple of the chance to set things right. Regulators around the world are setting new, inconsistent standards based on fights between large corporations and gigantic ones, with developers of different sizes lobbying for their own wish lists. Individual people have no such influence, but all of these corporations likely believe they are doing what is right and best for their users.

As the saying goes, pressure makes diamonds, and Apple’s policies are being tested. I hope it can get this right, yet press releases like this one give me little reason to believe in positive results from Apple’s forcibly loosened grip on its most popular platform. And with the Digital Markets Act now in effect, the stakes are high. I never imagined Apple would be thrilled for the rules of its platform to be upended by courts and lawmakers, nor excited by a penalty in the billions, but it sure seems like it would be better for everybody if Apple embraced reality.

Even though it has only been a couple of days since word got out that Apple was cancelling development of its long-rumoured though never confirmed car project, there has been a wave of takes explaining what this means, exactly. The project was intriguing in its uniqueness because it seemed completely out of left field. Apple makes computers of different sizes, sure, but the largest surface you would need for any of them is a desk. And now the company was working on a car?

Much reporting during its development was similarly bizarre due to the nature of the project. Instead of leaks from within the technology industry, sources were found in auto manufacturing. Public records requests were used by reporters at the Guardian, IEEE Spectrum, and Business Insider — among others — to get a peek at its development in a way that is not possible for most of Apple’s projects. I think the unusual nature of it has broken some brains, though, and we can see that in coverage of its apparent cancellation.

Mark Gurman, of Bloomberg, in an analysis supplementing the news he broke of Project Titan’s demise, writes that Apple will now focus its development efforts on generative “A.I.” products:

The big question is how soon AI might make serious money for Apple. It’s unlikely that the company will have a full-scale AI lineup of applications and features for a few years. And Apple’s penchant for user privacy could make it challenging to compete aggressively in the market.

For now, Apple will continue to make most of its money from hardware. The iPhone alone accounts for about half its revenue. So AI’s biggest potential in the near term will be its ability to sell iPhones, iPads and other devices.

These paragraphs, from perhaps the highest-profile reporter on the Apple beat, present the company’s usual strategy for pretty much everything it makes as a temporary measure until it can — uhh — do what, exactly? What is the likelihood that Apple sells access to generative services to people who do not have its hardware products? Those odds seem very, very poor to me, and I do not understand why Gurman is framing this in the way he is.

While it is true a few Apple services are available to people who do not use the company’s hardware products, they are mostly media subscriptions. It does not make sense to keep people from legally watching the expensive shows it makes for Apple TV Plus. iCloud features are also available outside the hardware ecosystem but, again, that seems like a pragmatic choice for syncing. Generative “A.I.” does not fit either model and it is not, so far, a profit-making endeavour. Microsoft and OpenAI are both losing money every time their products are used, even by paying customers.

I could imagine some generative features coming to Pages or Keynote at iCloud.com, but only because they were also added to native applications available solely on Apple’s platforms. Apple still makes the vast majority of its money by selling computers to people; its services business is mostly built on those customers adding subscriptions to their Apple-branded hardware.

“A.I.” features are likely just that: features, existing in a larger context. If Apple wants, it can use them to make editing pictures better in Photos, or make Siri somewhat less stupid. It could also use trained models to make new products; Gurman nods toward the Vision Pro’s Persona feature as something which uses “artificial intelligence”. But the likelihood of Apple releasing high-profile software features separate and distinct from its hardware seems impossibly low. It has built its SoCs specifically for machine learning, after all.

Speaking of new products, Brian X. Chen and Tripp Mickle, of the New York Times, wrote a decent insiders’ narrative of the car’s development and cancellation. But this paragraph seems, quite simply, wrong:

The car project’s demise was a testament to the way Apple has struggled to develop new products in the years since Steve Jobs’s death in 2011. The effort had four different leaders and conducted multiple rounds of layoffs. But it festered and ultimately fizzled in large part because developing the software and algorithms for a car with autonomous driving features proved too difficult.

I do not understand on what basis Apple “has struggled to develop new products” in the last thirteen years. Since 2011, Apple has introduced the Apple Watch, AirPods, and the Vision Pro; migrated Macs to in-house SoCs, causing an industry-wide reckoning; and added a bevy of services. And those are just the headlining products. There are also HomePods and AirTags, Macs with Retina displays, iPhones with facial recognition, and a range of iPads that support the Apple Pencil (itself a new product). None of those things existed before 2011.

These products are not all wild success stories, and some of them need a lot of work to feel great. But that list disproves the idea that Apple has “struggled” with launching new things. If anything, there has been a steady narrative over that same period that Apple has too many products. The rest of this Times report seems fine, but this one paragraph — and, really, just the first sentence — is simply incorrect.

These are all writers who cover Apple closely. They are familiar with the company’s products and strategies. These takes feel like they were written without any of that context or understanding, and it truly confuses me how any of them finished writing these paragraphs and thought they accurately captured a business they know so much about.

What people with Big Business Brains often like to argue about the unethical but wildly successful ad tech industry is that it is not as bad as it looks because your individual data does not have any real use or value. Ad tech vendors would not bother retaining such granular details because it is beneficial, they say, only in a more aggregated and generalized form.

The problem with this argument is that it keeps getting blown up by their demonstrable behaviour.1 For a recent example, consider Avast, an antivirus and security software provider, which installed on users’ computers a web browser toolbar that promised to protect against third-party tracking but, in actual fact, was collecting browsing history for — and you are not going to believe this — third-party tracking and advertising companies on behalf of the Avast subsidiary Jumpshot. The data was supposed to be anonymized but, according to the U.S. Federal Trade Commission, the “proprietary algorithm” responsible was so ineffective that Avast managed to collect six petabytes of revealing browsing history between 2014 and 2020. Then, it sold access (PDF):

[…] For example, from May 2017 to April 2019, Jumpshot granted LiveRamp, a data company that specializes in various identity services, a “world-wide license” to use consumers’ granular browsing information, including all clicks, timestamps, persistent identifiers, and cookie values, for a number of specified purposes. […]

One agreement between LiveRamp and Jumpshot stated that Jumpshot would use two services: first, “ID Syncing Services,” in which “LiveRamp and [Jumpshot] will engage in a synchronization and matching of identifiers,” and second, “Data Distribution Services,” in which “LiveRamp will ingest online Client Data and facilitate the distribution of Client’s Data (i.e., data segments and attributes of its users associated with Client IDs) to third-party platforms for the purpose of performing ad targeting and measurement.” These provisions permit the targeting of Avast consumers using LiveRamp’s ability to match Respondents’ persistent identifiers to LiveRamp’s own persistent identifiers, thereby associating data collected from Avast users with LiveRamp’s data.

We know these allegations due to the FTC’s settlement — though, I should say, these claims have not been proven, because Avast paid a $16.5 million penalty and said it would not use any of the data it collected “for advertising purposes”. The caveat makes this settlement feel a little incomplete to me. While there are other ways aggregated personal data can be used, like in market research, it does not seem Avast and Jumpshot were all that careful about obtaining consent when this software was first rolled out. When they did, the results were predictable (PDF):

Respondents had direct evidence that many consumers did not want their browsing information to be sold to third parties, even when they were told that the information would only be shared in de-identified form. In 2019, when Avast asked users of other Avast antivirus software to opt-in to the collection and sale of de-identified browsing information, fewer than 50% of consumers did so.

I am interpreting “fewer than 50%” as somewhere between 40% and 49%; if 18% of users had opted in, I expect the FTC would have said “fewer than 20%”. Most people do not want to be tracked. For comparison, this seems to be at the upper end of App Tracking Transparency opt-in rates.

I noted the LiveRamp connection when I first linked to investigations of Avast’s deceptive behaviour, though it seems Wolfie Christl beat me to the punch in December 2019. Christl also pointed out Jumpshot’s supply of data to Lotame, something the FTC also objected to. LiveRamp’s whole thing is resolving audiences based on personal information, though it says it will not return this information directly. Still, this granular identity resolution is not the kind of thing most people would like to participate in. Even if they consent, it is unclear if they are fully aware of the consequences.

This is just one settlement but it helps illustrate the distribution and mingling of granular user data. Marketers may be restricted to larger audiences and it may not be possible to directly extract users’ personally identifiable information — though it is often trivial to do so. But it is not comforting to be told collected data is only useful as part of a broader set. First of all, it is not: there are existing albeit limited ways it is possible to target small numbers of people. Even if that were true, though, this highly specific data is the foundation of larger sets. Ad tech companies want to follow you as specifically and closely as they can, and there are only nominal safeguards because collecting it all is too damn valuable.


  1. Well, and also how weird it is to be totally okay with collecting a massive amount of data with virtually no oversight or regulations so long as industry players pinky promise to only use some of it. ↥︎

When I bought my mid-2017 iMac, I had assumed I would get eight to ten years of updates from it, similar to my mid-2012 MacBook Air. Alas, just four years after it landed on my desk, Apple deemed it unworthy of running MacOS Sonoma, which means I have begun looking at desktop replacements on a slightly more urgent timetable. Not today, mind you, and hopefully not for a while — but my desk will need something new eventually.

And it will be very different because Apple now only makes one iMac. The 27-inch model used to fill an in-between prosumer role for those who needed more power, but could not afford or justify something like the Mac Pro.1 It has been an ideal computer for me, and I want to at least match it spec-for-spec: a 27-inch display, top-of-the-line CPU, 1 TB of internal storage, and 64 GB of RAM. Mine cost CAD $3,750, with two caveats:

  1. I bought it refurbished, which saved me CAD $350.

  2. I got the best spec I could in every way except storage — a terabyte is fine for me — and RAM, which I left at the base 16 GB configuration. I then paid CAD $346 from Amazon for 64 GB of RAM, which I was able to install myself.

    One might protest, saying this is an unfair comparison, to which I would respond: yes, that is kind of the point. There is no longer an option to install aftermarket upgrades of any kind, which means Apple should give users a reason to trust its pricing.

For complete fairness, however, I will compare only new non-refurbished prices, and I will use U.S. dollars to prevent currency conversion issues. If I had bought this computer in this spec from Apple in a not-refurbished state, in the United States, it would have cost me USD $4,500. (For the record, $1,400 of that cost is from upgrading the stock 16 GB of RAM to 64 GB. This was robbery even by 2017 standards.)

Ideally, I will be able to match the price I paid for my iMac and, to be even fairer, I will adjust for inflation: about USD $5,500 is my target. So let us start with the simplest issue: the display.

Since the iMac of today is no longer viable due to its single size, my contenders are the Mac Mini, the Mac Studio, and the Mac Pro. All of these require an external display, and if I want to match my iMac’s 5K Retina display, the choices are infamously poor. Aside from Apple’s Studio Display, there are two other options: LG’s UHD UltraFine and Samsung’s ViewFinity S9. The LG monitor is $1,300 and, as I understand it, still unreliable, while the Samsung matches Apple’s price at $1,600. Since I would likely end up with either the Apple or the Samsung display, my target computer costs $3,900 or less.

I can write off the Mac Pro because it starts at $7,000, even though its base spec satisfies my requirements. With a Studio Display, the total bill is nearly double what my iMac would have cost. The Mac Mini is no good, either, because its RAM ceiling is just 32 GB. Please do not send me email about how 32 GB of Apple’s special memory is equivalent to 64 GB of standard RAM.

That leaves the Mac Studio. The model with the best Ultra system-on-a-chip comes standard with the RAM and storage spec I want, but it is $5,000. With a display, it would be over a thousand dollars above my inflation-adjusted target. But hang on, because the CPU upgrade is $1,000 on its own; with the base Ultra SoC, I am just above the inflation-adjusted budget. That is close enough in my books, and a surprising result: you can now get the second-best SoC available on any Mac, plus a display, for basically the same price as the highest-end 2017 iMac.

Remember, too, that the iMac I bought was nowhere near the fastest model Apple introduced in 2017 — that was the iMac Pro, which started at $5,000, but with 32 GB of RAM. Upgrading that to 64 GB would have cost $800, and I have not even factored in inflation. The spiritual successor to the iMac Pro is probably the Mac Studio with an Ultra SoC, and it is less expensive at the same spec — including a display — than the iMac Pro used to be.

Perhaps that makes the Mac Studio with the Max SoC the successor to the 27-inch high-spec iMac models. As of writing, a Mac Studio configured with 64 GB of RAM, a 1 TB SSD, and the best Max SoC available is $2,800. Add a display, and you are looking at a setup $100 less expensive than the non-inflation-adjusted list price of my iMac.
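For anyone who wants to check my arithmetic, here is a small sketch of the comparisons above. All figures are the USD list prices quoted in this post; the variable names are mine, and the inflation-adjusted target is my rough $5,500 figure for the 2017 iMac that listed at $4,500 new.

```python
# USD list prices quoted in the text above.
imac_2017_list = 4500        # my iMac spec, new, non-refurbished
inflation_target = 5500      # roughly the same figure adjusted for inflation
display = 1600               # Studio Display (Samsung's ViewFinity S9 matches it)

# Mac Studio with the base Ultra SoC: the top-Ultra model is $5,000,
# and the CPU upgrade alone accounts for $1,000 of that.
studio_base_ultra = 5000 - 1000

# Mac Studio with the best Max SoC, configured to 64 GB RAM / 1 TB SSD.
studio_max = 2800

print(studio_base_ultra + display)   # 5600 — just above the $5,500 target
print(studio_max + display)          # 4400 — $100 under the iMac's list price
```

Running it confirms the two claims: the base-Ultra Studio plus a display lands just over the inflation-adjusted budget, and the Max-SoC Studio plus a display comes in $100 under what my iMac would have cost new.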

These are all expensive computers, and I still think Apple charges way too much for RAM — though at least upgrading from 32 to 64 GB is now just $400. But this is a way better situation than I had expected. I thought I would be in a very difficult buying situation when it comes time to replace my beloved iMac without a direct equivalent. But writing this article as a way of working out my options has me feeling pleasantly surprised.

Now just wait a moment as I take a sip of water and look at the pricing in the Canadian store.


  1. If you are a little bit old, you may remember a time when the performance Mac tower was almost affordable. The Power Mac G5, for example, started at USD $2,000, and the highest standard configuration was $3,000. Adjusted for inflation, that is under $5,000 for Apple’s highest-performance Mac. ↥︎