
Four years ago this week, social media companies decided they would stop platforming then-outgoing president Donald Trump after he celebrated seditionists who had broken into the U.S. Capitol Building in a failed attempt to invalidate the election and allow Trump to stay in power. After two campaigns and a presidency in which he tested the limits of what those platforms would allow, enthusiasm for a violent attack on government was apparently one step too far. At the time, Mark Zuckerberg explained:

Over the last several years, we have allowed President Trump to use our platform consistent with our own rules, at times removing content or labeling his posts when they violate our policies. We did this because we believe that the public has a right to the broadest possible access to political speech, even controversial speech. But the current context is now fundamentally different, involving use of our platform to incite violent insurrection against a democratically elected government.

Zuckerberg, it would seem, now has regrets — not about doing too little over those and the subsequent years, but about doing too much. For Zuckerberg, the intervening four years have been stifled by “censorship” on Meta’s platforms; so, this week, he announced a series of sweeping changes to their governance. He posted a summary on Threads but the five-minute video is far more loaded, and it is what I will be referring to. If you do not want to watch it — and I do not blame you — the transcript at Tech Policy Press is useful. The key changes:

  1. Replace fact-checking with a Community Notes feature, similar to the one on X.

  2. Change the Hateful Conduct policies to be more permissive about language used in discussions about immigration and gender.

  3. Make automated violation detection tools more permissive and focus them on “high-severity” problems, relying on user reports for material the company thinks is of a lower concern.

  4. Roll back restrictions on the visibility and recommendation of posts related to politics.

  5. Relocate the people responsible for moderating Meta’s products from California to another location — Zuckerberg does not specify — and move the U.S.-focused team to Texas.

  6. Work with the incoming administration on concerns about governments outside the U.S. pressuring them to “censor more”.

Regardless of whether you feel each of these is a good or bad idea, I do not think you should take Zuckerberg’s word for why the company is making these changes. Meta’s decision to stop working directly with fact-checkers, for example, is just as likely a reaction to the demands of FCC commissioner Brendan Carr, who has a bananas view (PDF) of how the First Amendment to the U.S. Constitution works. According to Carr, social media companies should be forbidden from contributing their own speech to users’ posts based on the rankings of organizations like NewsGuard. According to both Carr and Zuckerberg, fact-checkers demand “censorship” in some way. This is nonsense: they were not responsible for the visibility of posts. I do not think much of this entire concept, but surely they only create more speech by adding context, much as Meta hopes will still happen with Community Notes. Since Carr will likely be Trump’s nominee to run the FCC, it is important for Zuckerberg to get his company in line.

Meta’s overhaul of its Hateful Conduct policies also shows the disparity between what Zuckerberg says and what the company does. Removing rules that are “out of touch with mainstream discourse” sounds fair. What it means in practice, though, is allowing people to be more racist about COVID-19, demean women, and — of course — discriminate against LGBTQ people in more vicious ways. I understand the argument for why these things should be allowed by law, but there is no obligation for Meta to carry this speech. If Meta’s goal is to encourage a “friendly and positive” environment, why increase its platforms’ permissiveness to assholes? Perhaps the answer is in the visibility of these posts — maybe Meta is confident it can demote harmful posts while still technically allowing them. I am not.

We can go through each of these policy changes, dissect them, and consider the actual reasons for each, but I truly believe that is a waste of time compared to looking at the sum of what they accomplish. Conservatives, particularly in the U.S., have complained for years about bias against their views by technology companies, an updated version of similar claims about mass media. Despite no evidence for this systemic bias, the myth stubbornly persists. Political strategists even have a cute name for it: “working the refs”. Jeff Cohen and Norman Solomon, Creators Syndicate, August 1992:

But in a moment of candor, [Republican Party Chair Rich] Bond provided insight into the Republicans’ media-bashing: “There is some strategy to it,” he told the Washington Post. “I’m the coach of kids’ basketball and Little League teams. If you watch any great coach, what they try to do is ‘work the refs.’ Maybe the ref will cut you a little slack next time.”

Zuckerberg and Meta have been worked — heavily so. The playbook of changes Meta outlined this week is a logical response, an attempt to court scorned users, and not just through the policy changes here. On Monday, Meta announced Dana White, UFC president and thrice-endorser of Trump, would be joining its board. Last week, it promoted Joel Kaplan, a former Republican political operative, to run its global policy team. Last year, Meta hired Dustin Carmack who, according to his LinkedIn, directs the company’s policy and outreach for nineteen U.S. states, and previously worked for the Heritage Foundation, the Office of the Director of National Intelligence, and Ron DeSantis. These are among the people forming the kinds of policies Meta is now prescribing.

This is not a problem solved through logic. If it were, studies showing a lack of political bias in technology company policy would change more minds. My bet is that these changes will not have what I assume is the desired effect of improving the company’s standing with far-right conservatives or the incoming administration. If Meta becomes more permissive for bigots, it will encourage more of that behaviour. If Meta does not sufficiently suggest those kinds of posts because it wants “friendly and positive” platforms, the bigots will cry “shadowban”. Meta’s products will corrode. That does not mean they will no longer be influential or widely used, however; as with its forthcoming A.I. profiles, Meta is surely banking that its dominant position and a kneecapped TikTok will continue driving users and advertisers to its products, however frustratedly.

Zuckerberg appears to think little of those who reject the new policies:

[…] Some people may leave our platforms for virtue signaling, but I think the vast majority and many new users will find that these changes make the products better.

I am allergic to the phrase “virtue signalling” but I am willing to try getting through this anyway. This has been widely interpreted as because of their virtue signalling, but I think it is just as accurate if you think of it as because of our virtue signalling. Zuckerberg has complained about media and government “pressure” to more carefully moderate Meta’s platforms. But he cannot ignore how this week’s announcement also seems tied to implicit pressure. Trump is not yet the president, true, but Zuckerberg met with him shortly after the election and, apparently, the day before these changes were announced. This is just as much “virtue signalling” — particularly moving some operations to Texas for reasons even Zuckerberg says are about optics.

Perhaps you think I am overreading this, but Zuckerberg explicitly said in his video introducing the changes that “recent elections also feel like a cultural tipping point towards once again prioritizing speech”. If he means elections other than those which occurred in the U.S. in November, I am not sure which. These are changes made from a uniquely U.S. perspective. To wit, the final commitment in the list above as explained by Zuckerberg (via the Tech Policy Press transcript):

Finally, we’re going to work with President Trump to push back on governments around the world. They’re going after American companies and pushing to censor more. The US has the strongest constitutional protections for free expression in the world. Europe has an ever-increasing number of laws, institutionalizing censorship, and making it difficult to build anything innovative there. Latin American countries have secret courts that can order companies to quietly take things down. China has censored our apps from even working in the country. The only way that we can push back on this global trend is with the support of the US government, and that’s why it’s been so difficult over the past four years when even the US government has pushed for censorship.

For its part, the E.U. rejected Zuckerberg’s characterization of its policies, and Brazilian officials are not thrilled, either.

These changes — and particularly this last one — are illustrative of the devil’s bargain of large U.S.-based social media companies: they export their policies and values worldwide following whatever whims and trends are politically convenient at the time. Right now, it is important for Meta to avoid getting on the incoming Trump administration’s shit list, so they, like everyone, are grovelling. If the rest of the world is subjected to U.S.-style discussions, so be it; we have been for a long time already. What is extraordinary about Meta’s changes is how many people will be impacted: billions, plural. Something like one-quarter of the world’s population.

The U.S. is no stranger to throwing around its political and corporate power in a way few other nations can. Meta’s changes are another entry in that canon. There are people in some countries who will benefit from having more U.S.-centric policies, but most everyone else will find them discordant with local expectations. These new policies will not satisfy people everywhere around the world, but then, neither did the old ones.

It is unfair to expect any platform operator to get things right for every audience, especially at Meta’s scale. The options created by less centralized protocols like ActivityPub and AT Protocol are much more welcome. We should have more control over our experience than we are currently trusted with.

Zuckerberg begins his video introduction by referencing a 2019 speech he gave at Georgetown University. In it, he speaks of the internet creating “significantly broader power to call out things we feel are unjust”. “[G]iving people a voice and broader inclusion go hand in hand,” he said, “and the trend has been towards greater voice over time”. Zuckerberg naturally centred his company’s products. But you know what is even more powerful than one company at massive scale? It is when no company needs to act as the world’s communications hub. The internet is the infrastructure for that, and we would be better off if we rejected attempts to build moats.

The ads for Apple Intelligence have mostly been noted for what they show, but there is also something missing: in the fine print and in its operating systems, Apple still calls it a “beta” release, but not in its ads. Given the exuberance with which Apple is marketing these features, that label seems less like a way to inform users the software is unpolished, and more like an excuse for why it does not work as well as one might expect of a headlining feature from the world’s most valuable company.

“Beta” is a funny word when it comes to Apple’s software. The company often makes preview builds of upcoming O.S. releases available to users and developers for feedback, compatibility testing, and building with new APIs. This is voluntary and done with the understanding that the software is unfinished, and that bugs — even serious ones — are to be expected.

Apple has also, rarely, applied the “beta” label to features in regular releases which are distributed to all users, not just those who signed up. This type of “beta” seems less honest. Instead of communicating this feature is a work in progress, it seems to say we are releasing this before it is done. Maybe that is a subtle distinction, but it is there. One type of beta is testing; the other type asks users to disregard their expectations of polish, quality, and functionality so that a feature can be pushed out earlier than it should.

We have seen this on rare occasions: once with Portrait mode; more notably, with Siri. Mat Honan, writing for Gizmodo in December 2011:

Check out any of Apple’s ads for the iPhone 4S. They’re promoting Siri so hard you’d be forgiven for thinking Siri is the new CEO of Apple. And it’s not just that first wave of TV ads, a recent email Apple sent out urges you to “Give the phone that everyone’s talking about. And talking to.” It promises “Siri: The intelligent assistant you can ask to make calls, send texts, set reminders, and more.”

What those Apple ads fail to report — at all — is that Siri is very much a half-baked product. Siri is officially in beta. Go to Siri’s homepage on Apple.com, and you’ll even notice a little beta tag by the name.

This is familiar.

The ads for Siri gave the impression of great capability. It seemed like you could ask it how to tie a bowtie, what events were occurring in a town or city, and more. The response was not shown for these queries, but the implication was that Siri could respond. What became obvious to anyone who actually used Siri is that it would show web search results instead. But, hey, it was a “beta” — for two years.

The ads for Apple Intelligence do one better and show features still unreleased. The fine print does mention “some features and languages will be coming over the next year”, without acknowledging the very feature in this ad is one of them. And, when it does actually come out, it is still officially in “beta”, so I guess you should not expect it to work properly.

This all seems like a convoluted way to evade full responsibility for the Apple Intelligence experience which, so far, has been middling for me. Genmoji is kind of fun, but Notification Summaries are routinely wrong. Priority messages in Mail is helpful when it correctly surfaces an important email, and annoying when it highlights spam. My favourite feature — in theory — is the Reduce Interruptions Focus mode, which is supposed to only show notifications when they are urgent or important. It is the kind of thing I have been begging for to deal with the overburdened notifications system. But, while it works pretty well sometimes, it is not dependable enough to rely on. It will sometimes prioritize scam messages written with a sense of urgency, but fail to notify me when my wife messages me a question. It still necessitates that I occasionally review the notifications suppressed by this Focus mode. It is helpful, but not consistently enough to be confidence-inspiring.

Will users frustrated by the questionable reliability of Apple Intelligence routinely return to try again? If my own experience with Siri is any guide — and I am not sure it is, but it is all I have — I doubt it. If these features did not work on the first dozen attempts, why would they work any time after? This strategy, I think, teaches people to set their expectations low.

This beta-tinged rollout is not entirely without its merits. Apple is passively soliciting feedback within many of its Apple Intelligence features, at a scale far greater than it could by restricting testing to only its own staff and contractors. But it also means the public becomes unwitting testers. As with Siri before, Apple heavily markets this set of features as the defining characteristic of this generation of iPhones, yet we are all supposed to approach this as though we are helping Apple make sure its products are ready? Sorry, it does not work like that. Either something is shipping or it is not, and if it does not work properly, users will quickly learn not to trust it.

Wired has been publishing a series of predictions about the coming state of the world. Unsurprisingly, most concern artificial intelligence — how it might impact health, music, our choices, the climate, and more. It is an issue of the magazine Wired describes as its “annual trends briefing”, but it is also kind of like a hundred-page op-ed section. It is a mixed bag.

A.I. critic Gary Marcus contributed a short piece about what he sees as the pointlessness of generative A.I. — and it is weak. That is not necessarily because of any specific argument, but because of how unfocused it is despite its brevity. It opens with a short history of OpenAI’s models, with Marcus writing “Generative A.I. doesn’t actually work that well, and maybe it never will”. Thesis established, he begins building the case:

Fundamentally, the engine of generative AI is fill-in-the-blanks, or what I like to call “autocomplete on steroids.” Such systems are great at predicting what might sound good or plausible in a given context, but not at understanding at a deeper level what they are saying; an AI is constitutionally incapable of fact-checking its own work. This has led to massive problems with “hallucination,” in which the system asserts, without qualification, things that aren’t true, while inserting boneheaded errors on everything from arithmetic to science. As they say in the military: “frequently wrong, never in doubt.”

Systems that are frequently wrong and never in doubt make for fabulous demos, but are often lousy products in themselves. If 2023 was the year of AI hype, 2024 has been the year of AI disillusionment. Something that I argued in August 2023, to initial skepticism, has been felt more frequently: generative AI might turn out to be a dud. The profits aren’t there — estimates suggest that OpenAI’s 2024 operating loss may be $5 billion — and the valuation of more than $80 billion doesn’t line up with the lack of profits. Meanwhile, many customers seem disappointed with what they can actually do with ChatGPT, relative to the extraordinarily high initial expectations that had become commonplace.

Marcus’ financial figures here are bizarrely sourced and already outdated. He quotes a Yahoo News-syndicated copy of a PC Gamer article, which references a Windows Central repackaging of a paywalled report by the Information — a wholly unnecessary game of telephone when the New York Times obtained financial documents with the same conclusion, and which were confirmed by CNBC. The summary of that Information article — that OpenAI “may run out of cash in 12 months, unless they raise more [money]”, as Marcus wrote on X — is somewhat irrelevant now after OpenAI proceeded to raise $6.6 billion at a staggering valuation of $157 billion.

I will leave analysis of these financials to MBA types. Maybe OpenAI is like Amazon, which took eight years to turn its first profit, or Uber, which took fourteen years. Maybe it is unlike either and there is no way to make this enterprise profitable.

None of that actually matters, though, when considering Marcus’ actual argument. He posits that OpenAI is financially unsound as-is, and that Meta’s language models are free. Unless OpenAI “come outs [sic] with some major advance worthy of the name of GPT-5 before the end of 2025″, the company will be in a perilous state and, “since it is the poster child for the whole field, the entire thing may well soon go bust”. But hold on: we have gone from ChatGPT is disappointing “many customers” — no citation provided — to the entire concept of generative A.I. being a dead end. None of this adds up.

The most obvious problem is that generative A.I. is not just ChatGPT or other similar chat bots; it is an entire genre of features. I wrote earlier this month about some of the features I use regularly, like Generative Remove in Adobe Lightroom Classic. As far as I know, this is no different than something like OpenAI’s Dall‍-‍E in concept: it has been trained on a large library of images to generate something new. Instead of responding to a text-based prompt, it predicts how it should replicate textures and objects in an arbitrary image. It is far from perfect, but it is dramatically better than the healing brush tool before it, and clone stamping before that.

There are other examples of generative A.I. as features of creative tools. It can extend images and replace backgrounds pretty well. The technology may be mediocre at making video on its own terms, but it is capable of improving the quality of interpolated slow motion. In the technology industry, it is good at helping developers debug their work and generate new code.

Yet, if you take Marcus at his word, these things, and everything else generative A.I. can do, “might turn out to be a dud”. Why? Marcus does not say. He does, however, keep underscoring how shaky he finds OpenAI’s business situation. But this Wired article is ostensibly about generative A.I.’s usefulness — or, in Marcus’ framing, its lack thereof — which is completely irrelevant to this one company’s financials. Unless, that is, you believe the reason OpenAI will lose five billion dollars this year is because people are unhappy with it, which is not the case. It simply costs a fortune to train and run these models.

The one thing Marcus keeps coming back to is the lack of a “moat” around generative A.I., which is not an original position. Even if this is true, I do not see this as evidence of a generative A.I. bubble bursting — at least, not in the sense of how many products it is included in or what capabilities it will be trained on.

What this looks like, to me, is commoditization. If there is a financial bubble, this might mean it bursts, but it does not mean the field is wiped out. Adobe is not disappearing; neither are Google or Meta or Microsoft. While I have doubts about whether chat-like interfaces will continue to be a way we interact with generative A.I., it continues to find relevance in things many of us do every day.

A little over two years after OpenAI released ChatGPT upon the world, and about four years since Dall-E, the company’s toolset now — “finally” — makes it possible to generate video. Sora, as it is called, is not the first generative video tool to be released to the public; there are already offerings from Hotshot, Luma, Runway, and Tencent. OpenAI’s is the highest-profile so far, though: the one many people will use, and the products of which we will likely all be exposed to.

A generator of video is naturally best seen demonstrated in that format, and I think Marques Brownlee’s preview is a good place to start. The results are, as I wrote in February when Sora was first shown, undeniably impressive. No matter how complicated my views about generative A.I. — and I will get there — it is bewildering that a computer can, in a matter of seconds, transform noise into a convincing ten-second clip depicting whatever was typed into a text box. It can transform still images into video, too.

It is hard to see this as anything other than extraordinary. Enough has been written by now about “any sufficiently advanced technology [being] indistinguishable from magic” to bore, but this truly captures it in a Penn & Teller kind of way: knowing how it works only makes it somehow more incredible. Feed computers video on a vast scale — labelled partly by people, and partly by automated means which are reliant on this exact same training process — and they can average it into entirely new video that often appears plausible.1 I am basing my assessment on the results generated by others because Sora requires a paid OpenAI account, and because there is currently a waiting list.

There are, of course, limitations of both technology and policy. Sora has problems with physics, the placement of objects in space, and consistency between and within shots. Sora does not generate audio, even though OpenAI has the capability. Prompts in text and images are checked for copyright violations, public figures’ likenesses, criminal usage, and so forth. But there are no meaningful restrictions on the video itself. This is not how things must be; this is a design decision.

I keep thinking about the differences between A.I. features and A.I. products. I use very few A.I. products; an open-ended image generator, for example, is technically interesting but not very useful to me. Unlike a crop of Substack writers, I do not think pretending to have commissioned art lends me any credibility. But I now use A.I. features on a regular basis, in part because so many things are now “A.I. features” in name and by seemingly no other quality. Generative Remove in Adobe Lightroom Classic, for example, has become a terrific part of my creative workflow. There are edits I sometimes want to make which, if not for this feature, would require vastly more time which, depending on the job, I may not have. It is an image generator just like Dall-E or Stable Diffusion, but it is limited by design.

Adobe is not taking a principled stance; Photoshop contains a text-based image generator which, I think, does not benefit from being so open-ended. It would, for me, be improved if its functionality were integrated into more specific tools; for example, the crop tool could also allow generative reframing.

Sora, like ChatGPT and Dall-E, is an A.I. product. But I would find its capabilities more useful and compelling if they were a feature within a broader video editing environment. Its existence implies a set of tools which could benefit a video editor’s workflow. For example, the object removal and tracking features in Premiere Pro feel more useful to me than its ability to generate b-roll, which just seems like a crappy excuse to avoid buying stock footage or paying for a second unit.

Limiting generative A.I. in this manner would also make its products more grounded in reality and less likely to be abused. It would also mean withholding capabilities. Clearly, there are some people who see a demonstration of the power of generative A.I. as a worthwhile endeavour unto itself. As a science experiment, I get it, but I do not think these open-ended tools should be publicly available. Alas, that is not the future venture capitalists, and shareholders, and — I guess — the creators of these products have decided is best for us.

We are now living in a world of slop, and we have been for some time. It began as infinite reams of text-based slop intended to be surfaced in search results. It became image-based slop which paired perfectly with Facebook’s pivot to TikTok-like recommendations. Image slop and audio slop came together to produce image slideshow slop dumped into the pipelines of Instagram Reels, TikTok, and YouTube Shorts. Brace yourselves for a torrent of video slop about pyramids and the Bermuda Triangle and pyramids. None of these were made using Sora, as far as I know; at least some were generated by Hailuo from Minimax. I had to dig a little bit for these examples, but not too much, and it is only going to get worse.

Much has been written about how all this generative stuff has the capability of manipulating reality — and rightfully so. It lends credence to lies, and its mere existence can cause unwarranted doubt. But there is another problem: all of this makes our world a little bit worse because it is cheap to produce in volume. We are on the receiving end of a bullshit industry, and the toolmakers see no reason to slow it down. Every big platform — including the web itself — is full of this stuff, and it is worse for all of us. Cynicism aside, I cannot imagine the leadership at Google or Meta actually enjoys using their own products as they wade through generated garbage.

This is hitting each of us in similar ways. If you use a computer that is connected to the internet, you are likely running into A.I.-generated stuff all the time, perhaps without being fully aware of it. The recipe you followed, the repair guide you found, the code you copy-and-pasted, and the images in the video you watched? Any of them could have been generated in a data farm somewhere. I do not think that is inherently bad, though it is an unsettling feeling.

I am part of the millennial generation. I grew up at a time in which we were told we were experiencing something brand new in world history. The internet allowed anyone to publish anything, and it was impossible to verify this new flood of information. We were taught to think critically and be cautious, since we never knew who created anything. Now we have a different problem: we are unsure what created anything.


  1. If you do not think about why, it is curious that generative A.I. has no problem creating realistic-seeming text as text, yet struggles when that text appears within a generated image. With a little knowledge about how these things work, though, it makes sense.

Spencer Ackerman has been a national security reporter for over twenty years, and was partially responsible for the Guardian’s coverage of NSA documents leaked by Edward Snowden. He has good reason to be skeptical of privacy claims in general, and his experience updating his iPhone made him worried:

Recently, I installed Apple’s iOS 18.1 update. Shame on me for not realizing sooner that I should be checking app permissions for Siri — which I had thought I disabled as soon as I bought my device — but after installing it, I noticed this update appeared to change Siri’s defaults.

Apple has a history of quietly changing preferences, and of dark patterns generally. This is particularly relevant in the case of the iOS 18.1 update because it was the one with Apple Intelligence, which creates new ambiguity between what happens on-device and what goes to a server farm somewhere.

Allen Pike:

While easy tasks are handled by their on-device models, Apple’s cloud is used for what I’d call moderate-difficulty work: summarizing long emails, generating patches for Photos’ Clean Up feature, or refining prose in response to a prompt in Writing Tools. In my testing, Clean Up works quite well, while the other server-driven features are what you’d expect from a medium-sized model: nothing impressive.

Users shouldn’t need to care whether a task is completed locally or not, so each feature just quietly uses the backend that Apple feels is appropriate. The relative performance of these two systems over time will probably lead to some features being moved from cloud to device, or vice versa.

It would be nice if it truly did not matter — and, for many users, the blurry line between the two is probably fine. Private Cloud Compute seems to be trustworthy. But I fully appreciate Ackerman’s worries. Someone in his position necessarily must understand what is being stored and processed in which context.

However, Ackerman appears to have interpreted this setting change incorrectly:

I was alarmed to see that even my secure communications apps, like Proton and Signal, were toggled by default to “Learn from this App” and enable some subsidiary functions. I had to swipe them all off.

This setting was, to Ackerman, evidence of Apple “uploading your data to its new cloud-based AI project”, which is a reasonable assumption at a glance. Apple, like every technology company in the past two years, has decided to loudly market everything as being connected to its broader A.I. strategy. In launching these features in a piecemeal manner, though, it is not clear to a layperson which parts of iOS are related to Apple Intelligence, let alone where those interactions are taking place.

However, this particular setting is nearly three years old and unrelated to Apple Intelligence. It controls Siri Suggestions, which appear throughout the system. For example, the widget stack on my home screen suggests my alarm clock app when I charge my iPhone at night. It suggests I open the Microsoft Authenticator app on weekday mornings. When I do not answer the phone for what is clearly a scammer, it suggests I return the missed call. It is not all going to be gold.

Even at the time of its launch, its wording had the potential for confusion — something Apple has not clarified within the Settings app in the intervening years — and it seems to have been enabled by default. While this data may play a role in establishing the “personal context” Apple talks about — both are part of the App Intents framework — I do not believe it is used to train off-device Apple Intelligence models. However, Apple says this data may leave the device:

Your personal information — which is encrypted and remains private — stays up to date across all your devices where you’re signed in to the same Apple Account. As Siri learns about you on one device, your experience with Siri is improved on your other devices. If you don’t want Siri personalization to update across your devices, you can disable Siri in iCloud settings. See Keep what Siri knows about you up to date on your Apple devices.

While I believe Ackerman is incorrect about the setting’s function and how Apple handles its data, I can see how he interpreted it that way. The company is aggressively marketing Apple Intelligence, even though it is entirely unclear which parts of it are available, how it is integrated throughout the company’s operating systems, and which parts are dependent on off-site processing. There are people who really care about these details, and they should be able to get answers to these questions.

All of this stuff may seem wonderful and novel to Apple and, likely, many millions of users. But there are others who have reasonable concerns. As with any new technology, there are questions which can only be answered by those who created it. Only Apple is able to clear up the uncertainty around Apple Intelligence, and I believe it should. A cynical explanation is that this ambiguity is all deliberate because Apple’s A.I. approach is so much slower than its competitors’ and, so, it is disincentivized from setting clear boundaries. That is possible, but there is plenty of trust to be gained by being upfront now. Americans polled by Pew Research and Gallup have concerns about these technologies. Apple has repeatedly emphasized its privacy bona fides. But these features remain mysterious and suspicious for many people regardless of how much a giant corporation swears it delivers “stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency”.

All of that is nice, I am sure. Perhaps someone at Apple can start the trust-building by clarifying what the Siri switch does in the Settings app, though.

Brendan Nystedt, reporting for Wired on a new generation of admirers of crappy digital cameras from the early 2000s:

For those seeking to experiment with their photography, there’s an appeal to using a cheap, old digital model they can shoot with until it stops working. The results are often imperfect, but since the camera is digital, a photographer can mess around and get instant gratification. And for everyone in the vintage digital movement, the fact that the images from these old digicams are worse than those from a smartphone is a feature, not a bug.

Om Malik attributes it to wabi-sabi:

Retromania? Not really. It feels more like a backlash against the excessive perfection of modern cameras, algorithms, and homogenized modern image-making. I don’t disagree — you don’t have to do much to come up with a great-looking photo these days. It seems we all want to rebel against the artistic choices of algorithms and machines — whether it is photos or Spotify’s algorithmic playlists versus manually crafted mixtapes.

I agree, though I do not see why we need to find just one cause — an artistic decision, a retro quality, an aesthetic trend, a rejection of perfection — when it could be driven by any number of these factors. Nailing down exactly which of these is the most important factor is not of particular interest to me; certainly, not nearly as much as understanding that people, as a general rule, value feeling.

I have written about this before and it is something I wish to emphasize repeatedly: efficiency and clarity are necessary elements, but are not the goal. There needs to be space for how things feel. I wrote this as it relates to cooking and cars and onscreen buttons, and it is still something worth pursuing each and every time we create anything.

I thought about this with these two articles, but it first came to mind last week, when Wil Shipley announced the end of Delicious Library:

Amazon has shut off the feed that allowed Delicious Library to look up items, unfortunately limiting the app to what users already have (or enter manually).

I wasn’t contacted about this.

I’ve pulled it from the Mac App Store and shut down the website so nobody accidentally buys a non-functional app.

Delicious Library was many things: physical and digital asset management software, a kind of personal library, and a wish list. But it was also — improbably — fun. Little about cataloguing your CDs and books sounds like it ought to be enjoyable, but Shipley and Mike Matas made it feel like something you wanted to do. You wanted to scan items with your Mac’s webcam just because it felt neat. You wanted to see all your media on a digital wooden shelf, if for no other reason than it made those items feel as real onscreen as they are in your hands.

Delicious Library became known as the progenitor of the “delicious generation” of applications, which prioritized visual appeal as much as utility. It was not enough for an app to be functional; it needed to look and feel special. The Human Interface Guidelines were just that: guidelines. One quality of this era was the apparently fastidious approach to every pixel. Another quality is that these applications often had limited features, but were so much fun to use that it was possible to overlook their restrictions.

I do not need to relitigate the subsequent years of visual interfaces going too far, then being reeled in, and then settling in an odd middle ground where I am now staring at an application window with monochrome line-based toolbar icons, deadpan typography, and glassy textures, throwing a heavy drop shadow. None of the specifics matter much. All I care about is how these things feel to look at and to use, something which can be achieved regardless of how attached you are to complex illustrations or simple line work. Like many people, I spend hours a day staring at pixels. Which parts of that are making my heart as happy as my brain? Which mundane tasks are made joyful?

This is not solely a question of software; it has relevance in our physical environment, too, especially as seemingly every little thing in our world is becoming a computer. But it can start with pixels on a screen. We can draw anything on them; why not draw something with feeling? I am not sure we achieve that through strict adherence to perfection in design systems and structures.

I am reluctant to place too much trust in my incomplete understanding of a foreign-to-me concept rooted in another country’s very particular culture, but perhaps the sabi is speaking loudest to me. Our digital interfaces never achieve a patina; in fact, the opposite is more often true: updates seem to erase the passage of time. It is all perpetually new. Is it any wonder so many of us ache for things which seem to freeze the passage of time in a slightly hazier form?

I am not sure how anyone would go about making software feel broken-in, like a well-worn pair of jeans or a lounge chair. Perhaps that is an unattainable goal for something on a screen; perhaps we never really get comfortable with even our most favourite applications. I hope not. It would be a shame if we lose that quality as software eats our world.

Michael Liedtke, Associated Press:

The proposed breakup floated in a 23-page document filed late Wednesday by the U.S. Department of Justice calls for sweeping punishments that would include a sale of Google’s industry-leading Chrome web browser and impose restrictions to prevent Android from favoring its own search engine.

[…]

Although regulators stopped short of demanding Google sell Android too, they asserted the judge should make it clear the company could still be required to divest its smartphone operating system if its oversight committee continues to see evidence of misconduct.

Casey Newton:

In addition to requiring that Chrome be divested, the proposal calls for several other major changes that would be enforced over a 10-year period. They include:

  • Blocking Google from making deals like the one it has with Apple to be its default search engine.

  • Requiring it to let device manufacturers show users a “choice screen” with multiple search engine options on it.

  • Licensing data about search queries, results, and what users click on to rivals.

  • Blocking Google from buying or investing in advertising or search companies, including makers of AI chatbots. (Google agreed to invest up to $2 billion into Anthropic last year.)

The full proposal (PDF) is a pretty easy read. One of the weirder ideas pitched by the Colorado side is to have Google “fund a nationwide advertising and education program” which may, among other things, “include reasonable, short-term incentive payments to users” who pick a non-Google search engine from the choice screen.

I am guessing that is not going to happen, and not just because “Plaintiff United States and its Co-Plaintiff States do not join in proposing these remedies”. In fact, much of this wish list seems unlikely to be part of the final judgement expected next summer — in part because it is extensive, in part because of politics, and in part because some of it seems unrelated to the underlying conduct.

Deborah Mary Sophia, Akash Sriram, and Kenrick Cai, Reuters:

“DOJ will face substantial headwinds with this remedy,” because Chrome can run search engines other than Google, said Gus Hurwitz, senior fellow and academic director at University of Pennsylvania Carey Law School. “Courts expect any remedy to have a causal connection to the underlying antitrust concern. Divesting Chrome does absolutely nothing to address this concern.”

I — an effectively random Canadian with no expertise in this and, so, you should take my perspective with appropriate caveats — disagree.

The objective of disentangling Chrome from Google’s ownership, according to the executive summary (PDF) produced by the Department of Justice, is to remove “a significant challenge to effectuate a remedy that aims to ‘unfetter [these] market[s] from anticompetitive conduct’”:

A successful remedy requires that Google: stop third-party payments that exclude rivals by advantaging Google and discouraging procompetitive partnerships that would offer entrants access to efficient and effective distribution; disclose data sufficient to level the scale-based playing field it has illegally slanted, including, at the outset, licensing syndicated search results that provide potential competitors a chance to offer greater innovation and more effective competition; and reduce Google’s ability to control incentives across the broader ecosystem via ownership and control of products and data complementary to search.

The DOJ’s theory of growth reinforcing quality and market dominance is sound, from what I understand, and Google does advantage Chrome in some key ways. Most directly related to this case is whether Chrome activity is connected to Google Search. Despite company executives explicitly denying using Chrome browsing data for ranking, a leak earlier this year confirmed Google does, indeed, consider Chrome views in its rankings.

There is also a setting labelled “Make searches and browsing better”, which automatically “sends URLs of the pages you visit” to Google for users of Chromium-based browsers. Google says this allows the company to “predict what sites you might visit next and to show you additional info about the page you’re visiting” which allows users to “browse faster because content is proactively loaded”.

There is a good question as to how much Google Search would be impacted if Google could not own Chrome or operate its own browser for five years, as the remedy proposes. How much weight these features have in Google’s ranking system is something only Google knows. And the DOJ does not propose that Google Search cannot be preloaded in browsers whatsoever. Many users would probably still select Google as their browser’s search engine, too. But Google Search does benefit from Google’s ownership of Chrome itself, so perhaps it is worth putting barriers between the two.

I do not think Chrome can exist as a standalone company. I also do not think it makes sense for another company to own it, since any of those big enough to do so either have their own browsers — Apple’s Safari, Microsoft’s Edge — or would have the potential to create new anticompetitive problems, like if it were acquired by Meta.

What if the solution looks more like prohibiting Google from uniquely leveraging Chrome to benefit its other products? I do not know how that could be written in legal terms, but it appears to me this is one of the DOJ’s goals for separating Chrome and Google.

You might want to skip this one.

From the perspective of this outsider, the results of this year’s U.S. presidential election are stunning. I feel terrible for those within the U.S. who will endure another four years of having longtime institutions ripped apart by a criminal administration and its enablers in the legislative and judicial branches. This is true of just about everybody, but the brunt of the pain inflicted will — again — be directed toward the LGBTQ community, immigrants, visible minorities, and women.

Because the U.S. is the world’s sole superpower, however, the effects of its lawmaking will be felt everywhere. The incoming administration’s actions will, at best, disregard consequence. Again: at best. The rest of the world will attempt to govern itself around the whims of an unstable sex abuser, his dangerously feckless cabinet, and a host of grovelling billionaires whispering in his ear.

While the oligarchs and authoritarians of the world will have influence over what happens next in the U.S., we normal people will not. The best we can do is prevent a similar catastrophe befalling our communities. Democracies around the world have elected a raft of far-right ideologues and strongmen — in Austria, Belgium, France, Indonesia, Italy, and the Netherlands. Nationalist ideologies in Europe are now the “establishment”.

Here at home, Canada’s Conservative Party leader is more popular than his rivals and he is itching for an election. Though his is not our farthest-right party, his policies are of the slash-and-burn variety; his party uniformly voted against new privacy laws.

Closer still, our provincial government is enacting massive reforms aligned with some of the most conservative U.S. states. At their recent conference, they embraced carbon dioxide as a token principle. Like many conservative governments, they are targeting people who are transgender with restrictive legislation opposed by medical professionals. These policies got the attention of Amnesty International when they were announced.

A predictable response from centrist parties is that they will move rightward to present themselves as a moderating alternative to the more hardline conservatives. I am not a political scientist, but it does not seem that growing the size of the tent will be inviting to an electorate increasingly comfortable with far-right ideas. There are thankfully still places where democracies in recent elections have not embraced a nationalist agenda, and where elections are not between shades of conservatism. Our politicians would do well to learn from them.

We each get to choose our societal role. At the moment, for those of us who do not align with these dominant forces, it can feel pretty small. This is not an airport book; I am not ending this thing on a hopeful note and a list of to-dos. I am scared of what this U.S. election means for decades to come. I am just as worried about policies close to home, and those are the ones I can try to do something about.

I am not giving up. But I am overwhelmed by how far democratic countries around the world have regressed, and how much further they are likely to go.

In short.

In long:

Ten years ago, the USB Implementers Forum finalized the specification for USB-C 1.0, and the world rejoiced, for it would free us from the burden of remembering which was the correct orientation of the plug relative to the socket. And lo, it was good.

And then we all actually got around to using USB-C devices and realized this whole thing is a little bit messy. While there was now a universal connector, the capabilities of the cable can range from those which support only power with maybe a trickle of data, all the way up to others which carry data at USB4 speeds. But that is not all. It might also support various Thunderbolt standards — 3, 4, and now 5 — and DisplayPort. That is neat. Again, this is all done using the same connector size and shape, and with cables that look practically interchangeable.

Which brings us to Ian Bogost, writing in the Atlantic — a requisite destination for intellectualized lukewarm takes — about his cable woes:

I am unfortunately old enough to remember when the first form of USB was announced and then launched. The problem this was meant to solve was the same one as today’s: “A rat’s nest of cords, cables and wires,” as The New York Times described the situation in 1998. Individual gadgets demanded specific plugs: serial, parallel, PS/2, SCSI, ADB, and others. USB longed to standardize and simplify matters — and it did, for a time.

But then it evolved: USB 1.1, USB 2.0, USB 3.0, USB4, and then, irrationally, USB4 2.0. Some of these cords and their corresponding ports looked identical, but had different capabilities for transferring data and powering devices. I can only gesture to the depth of absurdity that was soon attained without boring you to tears or lapsing into my own despair. […]

Reader — and I mean this with respect — I am only too willing to bore you to tears with another article about USB-C. Bogost is right, though. The original USB standard unified the many different ports one was expected to use for peripherals. It basically succeeded for at least two of them: the keyboard and mouse. Both require minimal data, so they work fine regardless of whether the port supports USB 1.1 or USB 3.1. Such standardization came with loads more benefits, too, like reducing the setup and configuration once necessary for even basic peripherals.

Where things got complicated is when data transfer speeds actually matter. USB 1.1 — the first version most people actually used — topped out at 12 Mbits per second; USB 2.0 could do 480 Mbits per second. Even so, the ports and cables looked identical. If you plugged an external hard drive into your computer using the wrong cable, you would notice because it would crawl.
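The scale of that slowdown is easy to put numbers on. A rough calculation at each spec’s theoretical maximum — real-world throughput was lower still — for a 700 MB file, about the size of a CD image:

```python
def transfer_seconds(size_mb: float, rate_mbit_s: float) -> float:
    """Seconds to move size_mb megabytes at rate_mbit_s megabits per second."""
    return size_mb * 8 / rate_mbit_s

# USB 1.1 tops out at 12 Mbit/s; USB 2.0 at 480 Mbit/s.
usb_11 = transfer_seconds(700, 12)   # ≈ 467 seconds, nearly eight minutes
usb_20 = transfer_seconds(700, 480)  # ≈ 12 seconds
```

A forty-fold difference through two identical-looking ports: you noticed.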

This begat more specs allowing for higher speeds, requiring new cables and — sometimes — new connectors. And it was kind of a mess. So the USB-IF got together and created USB-C, which at least solves some of these problems. It is a more elegant connector and, so far, it has been flexible enough to support a wide range of uses.

That is kind of the problem with it, though: the connector can do everything, but there is no easy way to see what capabilities are supported by either the port or the cable. Put another way, if you connect a Thunderbolt 5 hard drive using the same cable as you use to charge the new Magic Mouse and Keyboard, you will notice, just as you did twenty years ago.
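One way to picture the mismatch is as a set of capability profiles hiding behind one plug. This toy model is mine, not anything from the USB-IF; the numbers are loosely based on the USB 2.0-speed data lines common in charge-oriented cables and Thunderbolt 5’s advertised 120 Gbit/s peak:

```python
from dataclasses import dataclass

@dataclass
class CableProfile:
    """Illustrative capability profile for a USB-C cable (invented fields)."""
    power_watts: int
    data_gbit_s: float
    thunderbolt: bool

# Two cables with identical connectors and very different insides.
CHARGE_CABLE = CableProfile(power_watts=60, data_gbit_s=0.48, thunderbolt=False)
TB5_CABLE = CableProfile(power_watts=240, data_gbit_s=120.0, thunderbolt=True)

def fast_enough(cable: CableProfile, needed_gbit_s: float) -> bool:
    """The drive still works on a slow cable; it is just painfully slow."""
    return cable.data_gbit_s >= needed_gbit_s
```

Nothing on a cable’s exterior exposes any of these fields, which is the whole complaint.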

Bogost, after describing his array of gadgets connected by USB-A, USB-C, and micro-HDMI:

This chaos was supposed to end, with USB-C as our savior. The European Union even passed a law to make that port the charging standard by the end of this year. […]

Hope persists that someday, eventually, this hell can be escaped — and that, given sufficient standardization, regulatory intervention, and consumer demand, a winner will emerge in the battle of the plugs. But the dream of having a universal cable is always and forever doomed, because cables, like humankind itself, are subject to the curse of time, the most brutal standard of them all. At any given moment, people use devices they bought last week alongside those they’ve owned for years; they use the old plugs in rental cars or airport-gate-lounge seats; they buy new gadgets with even better capabilities that demand new and different (if similar-looking) cables. […]

If the ultimate goal is a single cable and connector that can do everything from charge your bike light to connect a RAID array — do we still have RAID arrays? — I think that is foolish.

But I do not think that is the expectation. For one thing, note Bogost’s correctly chosen phrasing of what the E.U.’s standard entails. All devices have unified around a single charging standard, which does not demand any specialized cable. I use a Thunderbolt cable to sync my iPhone and charge my third-party keyboard, because I cannot be tamed.1 The same is true of my laptop and also my wife’s, the headphones I am wearing right now, a Bluetooth speaker we have kicking around, our Nintendo Switch, and my bicycle tire pump. Having one cable for all this stuff rules.

If you need higher speeds, though, I would bet you know that. If the difference between Thunderbolt 4 and Thunderbolt 5 matters to you, you are a different person than most. And, I would wager, you are probably happy that you can connect a fancy Thunderbolt drive to any old USB-C port and still read its contents, even if it is not as fast. That kind of compatibility is great.

Lookalike connectors are nothing new, however. PC users probably remember the days of PS/2 ports for the keyboard and mouse, which had the same plugs but were not interchangeable. 3.5mm circular ports were used for audio out, audio in, microphone — separate from audio in, for some reason — and individual speakers. This was such a mess that Microsoft and Intel decided PC ports needed colour-coding (PDF). Even proprietary connectors have this problem, as Apple demonstrated with some Lightning accessories.

We are doomed to repeat this so long as the same connectors and cables describe a wide range of capabilities. But solving that should never be the expectation. We should be glad to unify around standards for at least basic functions like charging and usable data transfer. USB-C faced an uphill battle because we probably had — and still have — devices which use other connectors. While my tire pump uses USB-C, my bike light charges using some flavour of mini-USB port. I do not know which. I have one cable that works and I dare not lose it.

Every newer standard is going to face an increasingly steep hill. USB-C now has a supranational government body mandating its use for wired charging in many devices which, for all its benefits, is also a hurdle if and when someone wants to build some device in which it would be difficult to accommodate a USB-C port. That I am struggling to think of a concrete example is perhaps an indicator of the specificity of such a product and, also, that I am not in the position of dreaming up such products.

But even without that regulatory oversight, any new standard will have to supplant a growing array of USB-C devices. We may not get another attempt at this kind of universality for a long time yet. It is a good thing USB-C is quite an elegant connector, and such a seemingly flexible set of standards.


  1. I still use a Lightning Magic Trackpad which means I used to charge it and sync my iPhone with the same cable, albeit more slowly. Apparently, the new USB-C Magic Trackpad is incompatible with my 2017 iMac, though I am not entirely sure why. Bluetooth, maybe? Standards! ↥︎

If software is judged by the difference between what it is actually capable of compared to what it promises, Siri is unquestionably the worst built-in iOS application. I cannot think of any other application which comes preloaded with a new iPhone that so greatly underdelivers, and has for so long.

Siri is thirteen years old, and we all know the story: beneath the more natural language querying is a fairly standard command-and-control system. In those years, Apple has updated the scope of its knowledge and responses, but because the user interface is not primarily a visual one, its outer boundaries are fuzzy. It has limits, but a user cannot know what they are until they try something and it fails. Complaining about Siri is both trite and evergreen. Yes, Siri has sucked forever, but maybe this time will be different.

At WWDC this year, Apple announced Siri would get a whole new set of powers thanks to Apple Intelligence. Users could, Apple said, speak with more natural phrasing. It also said Siri would understand the user’s “personal context” — their unique set of apps, contacts, and communications. All of that sounds great, but I have been down this road before. Apple has often promised improvements to Siri that have not turned it into the compelling voice-activated digital assistant it is marketed to be.

I was not optimistic — and I am glad — because Siri in iOS 18.1 is still pretty poor, with a couple of exceptions: its new visual presentation is fantastic, and type-to-Siri is nice. It is unclear exactly how Siri is enhanced with Apple Intelligence — more on this later — but this version is exactly as frustrating as those before it, in all the same ways.

As a reminder, Apple says users can ask Siri…

  • …to text a contact by using only their first name.

  • …for directions to locations using the place name.

  • …to play music by artist, album, or song.

  • …to start and stop timers.

  • …to convert from one set of units to another.

  • …to translate from one language to another.

  • …about Apple’s product features and documentation, new in iOS 18.1.

  • …all kinds of other stuff.

It continues to do none of these things reliably or predictably. Even Craig Federighi, when he was asked by Joanna Stern, spoke of his pretty limited usage:

I’m opening my garage, I’m closing my garage, I’m turning on my lights.

All kinds of things, I’m sending messages, I’m setting timers.

I do not want to put too much weight on this single response, but these are weak examples. This is what he could think of off the top of his head? That is all? I get it; I do not use it for much, either. And, as Om Malik points out, even the global metrics Federighi cites in the same answer do not paint a picture of success.

So, a refresh, and I will start with something positive: its new visual interface. Instead of a floating orb, the entire display warps and colour-shifts before being surrounded by a glowing border, as though enveloped in a dense magical vapour. Depending on how you activate Siri, the glow will originate from a different spot: from the power button, if you press and hold it; or from the bottom of the display, if you say “Siri” or “Hey, Siri”.

You can also now invoke text-based Siri — perfect for times when you do not want to speak aloud — by double-tapping the home bar. There has long been an option to type to Siri, but it has not been surfaced this easily, and I like it.

That is kind of where the good news stops, at least in my testing. I have rarely had a problem with Siri’s ability to understand what I am saying — I have a flat, Canadian accent, and I can usually speak without pauses or repeating words. There are writers better equipped than I am to test for improvements for people with disabilities.

No, the things which Siri has flubbed have always been, for me, in its actions. Some of those should be new in iOS 18.1, or at least newly refined, but it is hard to know which. While Siri looks entirely different in this release, it is unclear what new capabilities it possesses. The full release notes say it can understand spoken queries better, and it has product documentation, but it seems anything else will be coming in future updates. I know a feature Apple calls “onscreen awareness”, which can interpret what is displayed, is one of those. I also know some personal context features will be released later — Apple says a user “could ask, ‘When is Mom’s flight landing?’ and Siri will find the flight details” no matter how they were sent. This is all coming later and, presumably, some of it requires third-party developer buy-in.

But who reads and remembers the release notes? What we all see is a brand-new Siri, and what we hear about is Apple Intelligence. Surely there must be some improvements beyond being able to ask the Apple assistant about the company’s own products, right? Well, if there are, I struggled to find them. Here are the actual interactions I have had in beta versions of iOS 18.1 for each thing in the list above:

  • I asked Siri to text Ellis — not their real name — a contact I text regularly. It began a message to a different Ellis I have in my contacts, to whom I have not spoken in over ten years.

    Similarly, I asked it to text someone I have messaged on an ongoing basis for fifteen years. Their thread is pinned to the top of Messages. Before it would let me text them, it asked if I wanted it to send it to their phone number or their email address.

  • I was driving and I asked for directions to Walmart. Its first suggestion was farther away, and in the opposite direction from the one I was already travelling.

  • I asked Siri to “play the new album from Better Lovers”, an artist I have in my library and an album that I recently listened to in Apple Music. No matter my enunciation, it responded by playing an album from the Backseat Lovers, a band I have never listened to.

    I asked Siri to play an album which contains a song of the same name. This is understandably ambiguous if I do not explicitly state “play the album” or “play the song”. However, instead of asking for clarification when there is a collision like this, it just rolls the dice. Sometimes it plays the album, sometimes the song. But I am an album listener more often than I am a song listener, and my interactions with Siri and Apple Music should reflect that.

  • Siri starts timers without issue. It is one of few things which behaves reliably. But when I asked it to “stop the timer”, it asked me to clarify “which one?” between one active timer and two already-stopped timers. It should just stop the sole active timer; why would I ask it to stop a stopped timer?

  • I asked Siri “how much does a quarter cup of butter weigh?” and it converts that to litres or — because my device is set to U.S. localization for the purposes of testing Apple Intelligence — gallons. Those are volumetric measurements, not weight-based. If I ask Siri “what is the weight of a quarter cup of butter?”, it searches the web. I have to explicitly say “convert one quarter cup of butter to grams”.

  • I asked Siri “what is puente in English?” and it informed me I needed to use the Translate app. Apparently, you can only translate from Siri’s language — English, in this case — to another language when using Siri. Translating from a different language cannot be done with Siri alone.

  • I rarely see the Priority Messages feature in Mail, so I asked Siri about it. I tried different ways to phrase my question, like “what is the Priority Messages feature in Mail?”, but it would not return any documentation about this feature.
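The butter example above fails for a structural reason worth spelling out: cups measure volume and grams measure mass, so the conversion needs a density, not just unit math. A small sketch using the commonly printed figure of roughly 227 g per U.S. cup of butter — the constant and function names are mine, not anything Siri exposes:

```python
# Butter is commonly sold in 113.4 g sticks of half a cup each,
# which gives roughly 226.8 g per U.S. cup.
GRAMS_PER_CUP_BUTTER = 226.8

def butter_cups_to_grams(cups: float) -> float:
    """Convert a volume of butter in U.S. cups to a mass in grams."""
    return cups * GRAMS_PER_CUP_BUTTER

quarter_cup = butter_cups_to_grams(0.25)  # ≈ 56.7 g
```

An assistant that knew it was being asked about butter specifically could do this; one that only pattern-matches “convert” to generic unit tables reaches for litres and gallons instead.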

Maybe I am using Siri wrong, or expecting too much. Perhaps all of this is a beta problem. But, aside from the last bullet, these are the kinds of things Apple has said Siri can do for over a decade, and it does not do so predictably or reliably. I have had similar or identical problems with Siri in non-beta versions of iOS. Today, while using the released version of iOS 18.1, I asked it if a nearby deli was open. It gave me the hours for a deli in Spokane — hundreds of kilometres away, and in a different country.

This all feels like it may be, perhaps, a side effect of treating an iPhone as an entire widget with a governed set of software add-ons. The quality of the voice assistant is just one of a number of factors to consider when buying a smartphone, and the predictably poor Siri is probably not going to be a deciding factor for many.

But the whole widget has its advantages — you can find plenty of people discussing those, and Apple’s many marketing pieces will dutifully recite them. Since its debut in 2011, Apple has rarely put Siri front-and-centre in its iPhone advertising campaigns, but it is doing just that with the iPhone 16. It is showcasing features which rely on whole-device control — features that, admittedly, will not be shipping for many months. But the message is there: Siri has a deep understanding of your world and can present just the right information for you. Yet, as I continue to find out, it does not do that for me. It does not know who I text in the first-party Messages app or what music I listen to in Apple Music.

Would Siri be such a festering scab if it had competitors within iOS? Thanks to an extremely permissive legal environment around the world in which businesses scoop up vast amounts of behavioural data to make it slightly easier to market laundry detergent and dropshipped widgets, there is a risk to granting this kind of access to some third-party product. But if there were policies to make that less worrisome, and if Apple permitted it, there would be more intense pressure to improve Siri — and, for that matter, all voice assistants tied to specific devices.

The rollout of Apple Intelligence is uncharacteristically piecemeal and messy. Apple did not promise a big Siri overhaul in iOS 18.1. But by giving it a new design, Apple communicates something is different. It is not — at least, not yet. Maybe it will be one day. Nothing about Siri’s state over the past decade-plus gives me hope that it will, however. I have barely noticed improvements in the things Apple says it should do better in iOS 18.1, like preserving context and changing my mind mid-dictation.

Siri remains software I distrust. Like Federighi, I would struggle to list my usage beyond a handful of simple commands — timers, reminders, and the occasional message. Anything else, and it remains easier and more reliable to wash my hands if I am kneading pizza dough, or park the car if I am driving, and do things myself.

Apple is a famously tight-knit business. Its press releases and media conferences routinely tout the integration of hardware, software, and services as something only Apple is capable of doing. So it sticks out when features feel like they were developed by people who do not know what another part of the company is doing. This happened to me twice in the past week.

Several years ago, Apple added a very nice quality-of-life improvement to the Mac operating system: software installers began offering to delete themselves after they had done their job. This was a good idea.

In the ensuing years, Apple made some other changes to MacOS in an effort to — it says — improve privacy and security. One of the new rules it imposed was requiring the user to grant apps specific permission to access certain folders; another requires permission before one app can modify or delete another.

And, so, when I installed an application earlier this month, I was shown an out-of-context dialog at the end of the process asking for access to my Downloads folder. I granted it. Then I got a notification that the Installer app was blocked from modifying or deleting another file. To change it, I had to open System Settings, toggle the switch, enter my password, and then I was prompted to restart the Installer application — but it seemed to delete itself just fine without my doing so.

This is a built-in feature, triggered by where the installer has been downloaded, using an Apple-provided installation packaging system.1 But it is stymied by a different set of system rules and unexpected permissions requests.


Another oddity is in Apple’s two-factor authentication system. Because Apple controls so much about its platforms, authentication codes are delivered through a system prompt on trusted devices. Preceding the code is a notification informing the user their “Apple Account is being used to sign in”, and it includes a map of where that is.

This map is geolocated based on the device’s IP address, which can be inaccurate for many reasons — something Apple discloses in its documentation:

This location is based on the new device’s IP address and might reflect the network that it’s connected to, rather than the exact physical location. If you know that you’re the person trying to sign in but don’t recognize the location, you can still tap Allow and view the verification code.

It turns out one of the reasons the network might think you are located somewhere other than where you are is because you may be using iCloud Private Relay. Even if you have set it to “maintain general location”, it can sometimes be incredibly inaccurate. I was alarmed to see a recent attempt from Toronto when I was trying to sign into iCloud at home in Calgary — a difference of over 3,000 kilometres.

The map gives me an impression of precision and security. But if it is made less accurate in part because of a feature Apple created and markets, it is misleading and — at times — a cause of momentary anxiety.

What is more, Safari supports automatically filling authentication codes delivered by text message. Apple’s own codes, though, cannot be automatically filled.


These are small things — barely worth the bug report. They also show how features introduced one year are subverted by those added later, almost like nobody is keeping track of all of the different capabilities in Apple’s platforms. I am sure there are more examples; these are just the ones which happened in the past week, and which I have been thinking about. They expose little cracks in what is supposed to be a tight, coherent package of software.


  1. Thanks to Keir Ansell for tracking down this documentation for me. ↥︎

The New York Times recently ran a one-two punch of stories about the ostensibly softening political involvement of Mark Zuckerberg and Meta — where by “punch”, I mean “gentle caress”.

Sheera Frenkel and Mike Isaac on Meta “distanc[ing] itself from politics”:

On Facebook, Instagram and Threads, political content is less heavily featured. App settings have been automatically set to de-emphasize the posts that users see about campaigns and candidates. And political misinformation is harder to track on the platforms after Meta removed transparency tools that journalists and researchers used to monitor the sites.

[…]

“It’s quite the pendulum swing because a decade ago, everyone at Facebook was desperate to be the face of elections,” said Katie Harbath, chief executive of Anchor Change, a tech consulting firm, who previously worked at Facebook.

Facebook used to have an entire category of “Government and Politics” advertising case studies through 2016 and 2017; it was removed by early 2018. I wonder if anything of note happened in the intervening months. Anything at all.

All of this discussion has so far centred on U.S. politics and, given the nature of the reporting, that will continue for the remainder of this piece. But I wonder whether Meta is minimizing politics everywhere. What are the limits of that policy? Its U.S. influence is obviously very loud and notable, but its services have taken hold — with help — around the world. Whether it moderates those platforms aggressively or deprioritizes what it identifies as politically sensitive posts, the power remains U.S.-based.

Theodore Schleifer and Mike Isaac, in the other Times article about Zuckerberg personally, under a headline claiming he “is done with politics”, wrote about the arc of his philanthropic work, which he does with his wife, Dr. Priscilla Chan:

Two years later, taking inspiration from Bill Gates, Mr. Zuckerberg and Dr. Chan established the Chan Zuckerberg Initiative, a philanthropic organization that poured $436 million over five years into issues such as legalizing drugs and reducing incarceration.

[…]

Mr. Zuckerberg and Dr. Chan were caught off guard by activism at their philanthropy, according to people close to them. After the protests over the police killing of George Floyd in 2020, a C.Z.I. employee asked Mr. Zuckerberg during a staff meeting to resign from Facebook or the initiative because of his unwillingness at the time to moderate comments from Mr. Trump.

The incident, and others like it, upset Mr. Zuckerberg, the people said, pushing him away from the foundation’s progressive political work. He came to view one of the three central divisions at the initiative — the Justice and Opportunity team — as a distraction from the organization’s overall work and a poor reflection of his bipartisan point-of-view, the people said.

This foundation, like similar ones backed by other billionaires, appears to be a mix of legitimate interests for Chan and Zuckerberg, and a vehicle for tax avoidance. I get that its leadership tries to limit its goals and focus on specific areas. But to be in any way alarmed by internal campaigning? Of course there are activists there! One cannot run a charitable organization claiming to be “building a better future for everyone” without activism. That Zuckerberg’s policies at Meta are an issue for foundation staff points to the murky reality of billionaire-controlled charitable initiatives.

Other incidents piled up. After the 2020 election, Mr. Zuckerberg and Dr. Chan were criticized for donating $400 million to the nonprofit Center for Tech and Civic Life to help promote safety at voting booths during pandemic lockdowns. Mr. Zuckerberg and Dr. Chan viewed their contributions as a nonpartisan effort, though advisers warned them that they would be criticized for taking sides.

The donations came to be known as “Zuckerbucks” in Republican circles. Conservatives, including Mr. Trump and Representative Jim Jordan of Ohio, a Republican who is chairman of the House Judiciary Committee, blasted Mr. Zuckerberg for what they said was an attempt to increase voter turnout in Democratic areas.

This is obviously a bad faith criticism. In what healthy democracy would lawmakers actively campaign against voter encouragement? Zuckerberg ought to have stood firm. But it is one of many recent clues as to Zuckerberg’s thinking.

My pet theory is Zuckerberg is not realigning on politics — either personally or as CEO of Meta — out of principle; I am not even sure he is changing at all. He has always been sympathetic to more conservative voices. Even so, it is important for him to show he is moving toward overt libertarianism. In the United States, politicians of both major parties have been investigating Meta for antitrust concerns. Whether the effort by Democrats is in earnest is a good question. But the Republican efforts have long been dominated by a persecution complex where they believe U.S. conservative voices are being censored — something which has been repeatedly shown to be untrue or, at least, lacking context. If Zuckerberg can convince Republican lawmakers he is listening to their concerns, maybe he can alleviate the bad faith antitrust concerns emanating from the party.

I would not be surprised if Zuckerberg’s statements encourage Republican critics to relent. Unfortunately, as in 2016, that is likely to taint any other justifiable qualms with Meta as politically motivated. Recall how even longstanding complaints about Facebook’s practices, privacy-hostile business, and moderation turned into a partisan argument. The giants of Silicon Valley have every reason to expect ongoing scrutiny. After Meta’s difficult 2022, it is now worth more than ever before — the larger and more influential it becomes, the more skepticism it should expect.

Hannah Murphy, Financial Times:

Some suggest Zuckerberg has been emboldened by X’s Musk.

“With Elon Musk coming and literally saying ‘fuck you’ to people who think he shouldn’t run Twitter the way he has, he is dramatically lowering the bar for what is acceptable behaviour for a social media platform,” said David Evan Harris, the Chancellor’s public scholar at the University of California, Berkeley and a former Meta staffer. “He gives Mark Zuckerberg a lot of permission and leeway to be defiant.”

This is super cynical. It also feels, unfortunately, plausible for both Zuckerberg and Meta as a company. A vast chasm in responsible corporate behaviour has opened up in the past two years, and it seems to be giving already unethical players room to shine.

See Also: Karl Bode was a guest on “Tech Won’t Save Us” to discuss Zuckerberg’s P.R. campaign with Paris Marx.

Sarah Perez, TechCrunch:

iOS apps that build their own social networks on the back of users’ address books may soon become a thing of the past. In iOS 18, Apple is cracking down on the social apps that ask users’ permission to access their contacts — something social apps often do to connect users with their friends or make suggestions for who to follow. Now, Apple is adding a new two-step permissions pop-up screen that will first ask users to allow or deny access to their contacts, as before, and then, if the user allows access, will allow them to choose which contacts they want to share, if not all.

Kevin Roose, New York Times, in an article with the headline “Did Apple Just Kill Social Apps?”:

Now, some developers are worried that they may struggle to get new apps off the ground. Nikita Bier, a start-up founder and advisor who has created and sold several viral apps aimed at young people, has called the iOS 18 changes “the end of the world,” and said they could render new friend-based social apps “dead on arrival.”

That might be a little melodramatic. I recently spent some time talking to Mr. Bier and other app developers and digging into the changes. I also heard from Apple about why they believe the changes are good for users’ privacy, and from some of Apple’s rivals, who see it as an underhanded move intended to hurt competitors. And I came away with mixed feelings.

Leaving aside the obviously incendiary title, I think this article’s framing is pretty misleading. Apple’s corporate stance is the only one favourable to these limitations. Bier is the only on-the-record developer who thinks these changes are bad; while Roose interviewed others who said contact uploads had slowed since iOS 18’s release, they declined to be quoted “out of fear of angering the Cupertino colossus”. I suppose that is fair — Apple’s current relationship with developers seems to be pretty rocky. But this article ends up poorly litigating Bier’s desires against Apple giving more control to users.

Bier explicitly markets himself as a “growth expert”; his bio on X is “I make apps grow really fast”. He has, to quote Roose, “created and sold several viral apps” in part by getting users — even children — to share their contact lists. Bier’s first hit app, TBH, was marketed to teenagers and — according to several sources I could find, including a LinkedIn post by Kevin Natanzon — it “requested address book access before actually being able to use the app”. A more respectful way of offering this feature would be to ask for contacts permission only when users want to add friends. Bier’s reputation for success is built on this growth hacking technique, so I understand why he is upset.

What I do not understand is granting Bier’s objections the imprimatur of a New York Times story when one can see the full picture of Bier’s track record. On the merits, I am unsympathetic to his complaints. Users can still submit their full contact list if they so choose, but now they have the option of permitting only partial access to an app they have not yet decided to trust.

Roose:

Apple’s stated rationale for these changes is simple: Users shouldn’t be forced to make an all-or-nothing choice. Many users have hundreds or thousands of contacts on their iPhones, including some they’d rather not share. (A therapist, an ex, a random person they met in a bar in 2013.) iOS has allowed users to give apps selective access to their photos for years; shouldn’t the same principle apply to their contacts?

The surprise is not that Apple is allowing more granular contacts access, it is that it has taken this long for the company to do so. Developers big and small have abused this feature to a shocking degree. Facebook ingested the contact lists of a million and a half users unintentionally — and millions of users intentionally — a massive collection of data which was used to inform its People You May Know feature. LinkedIn is famously creepy and does basically the same thing. Clubhouse borrowed from the TBH playbook by slurping up contacts before you could use the app.1 This has real consequences, surfacing connections many people would prefer to keep hidden.

Even a limited capacity of allowing users to more easily invite friends can go wrong. When Tribe offered such a feature, it spammed users’ contacts. It settled a resulting class action suit in 2018 for $200,000 without admitting wrongdoing. That may have been accidental. Circle, on the other hand, was deliberate in its 2013 campaign.

Apple’s position is, therefore, a reasonable one, but it is strange to see no voices from third-party experts favourable to this change. Well-known iOS security researchers Mysk celebrated it; why did Roose not talk to them? I am sure there are others who would happily adjudicate Apple’s claims. The cool thing about a New York Times email address is that people will probably reply, so it seems like a good idea to put that power to use. Instead, all we get is this milquetoast company-versus-growth-hacker narrative, with some antitrust questions thrown in toward the end.

Roose:

Some developers also pointed out that the iOS 18 changes don’t apply to Apple’s own services. iMessage, for example, doesn’t have to ask for permission to access users’ contacts the way WhatsApp, Signal, WeChat and other third-party messaging apps do. They see that as fundamentally anti-competitive — a clear-cut example of the kind of self-preferencing that antitrust regulators have objected to in other contexts.

I am not sure this is entirely invalid, but it seems like an overreach. The logic of requiring built-in apps to request the same permissions as third-party apps is, I think, understandable on fairness grounds, but there is a reasonable argument to be made for implied consent as well. Assessing this is a whole different article.

But Messages accesses the contacts directory on-device, while many other apps will transport the list off-device. That is a huge difference. Your contact list is almost certainly unique. The specific combination of records is a goldmine for social networks and data brokers wishing to individually identify you, and understand your social graph.

I have previously argued that permission to access contacts is conceptually being presented to the wrong person — it ought to, in theory, be required by the people in your contacts instead. Obviously that would be a terrible idea in practice. Yet each of us has only given our contact information to a person; we may not expect them to share it more widely.

As in so many other cases, the answer here is found in comprehensive privacy legislation. You should not have to worry that your phone number in a contact list or used for two-factor authentication is going to determine your place in the global social graph. You should not have to be concerned that sharing your own contact list in a third-party app will expose connections or send an unintended text to someone you have not spoken with in a decade. Data collected for a purpose should only be used for that purpose; violating that trust should come with immediate penalties, not piecemeal class action settlements and FTC cases.

Apple’s solution is imperfect. But if it stops the Biers of the world from building apps which ingest wholesale the contact lists of teenagers, I find it difficult to object.


  1. Remember when Clubhouse was the next big thing, and going to provide serious competition to incumbent giants? ↥︎

Allow me to set the scene: you have been seated with a group of your friends at a restaurant, catching up in a lively discussion, when a member of the waitstaff shows up. They take everyone’s orders, then the discussion resumes — but they return a short while later to ask if you heard about the specials. You had and, anyway, you have already ordered what you want, so the waiter leaves. You chat amongst yourselves again.

But then they appear again. Might they suggest some drinks? How about a side? Every couple of minutes, they reliably return, breaking your discussion to sell you on something else.

Would you like to see the menu again? Here, try this new thing. Here, try this classic thing we brought back. Here is a different chair. How about we swap the candles on the table for a disco ball? Would you like to hear the specials again? Have you visited our other locations?

It is weird because you had been to this restaurant a few times before and the service was attentive, but not a nuisance. Now that you think of it, though, the waitstaff became increasingly desperate after your first visit. Those first interruptions were fine because they were expected — even desired. But there is a balance. You are coming to this restaurant because the food and the drinks are good, sure, but you are there with friends to catch up.

Now, pressured by management, the waiters have become a constant irritant instead of helpful, and there is nothing you can do. You can ask them to leave you alone, but they only promise a slightly longer gap. There is no way to have a moment to yourselves. Do you get up and leave? Do you come back? I would not. In my own experience, there are restaurants I avoid because the service is just too needy.

Also, apps.

When I open any of the official clients for the most popular social media platforms — Instagram, Threads, X, or YouTube — I am thrust into an environment where I am no longer encouraged to have a good time on my own terms. From home feeds containing a blend of posts from accounts I follow and those I do not, to all manner of elements encouraging me to explore other stuff — the platform is never satisfied with my engagement. I have not even factored in ads; this is solely about my time commitment. These platforms expect more of it.

These are decisions made by people who, it would seem, have little trust in users. There is rarely an off switch for any of these features — at best, there is only a way to temporarily hide them.

These choices illustrate the sharply divergent goals of these platforms and my own wishes. I would like to check out the latest posts from the accounts I follow, maybe browse around a bit, and then get out. That is a complete experience for me. Not so for these platforms.

Which makes it all the more interesting when platforms try new things they think will be compelling, like this announcement from Meta:

We’re expanding Meta AI’s Imagine features, so you can now imagine yourself as a superhero or anything else right in feed, Stories and your Facebook profile pictures. You can then easily share your AI-generated images so your friends can see, react to or mimic them. Meta AI can also suggest captions for your Stories on Facebook and Instagram.

[…]

And we’re testing new Meta AI-generated content in your Facebook and Instagram feeds, so you may see images from Meta AI created just for you (based on your interests or current trends). You can tap a suggested prompt to take that content in a new direction or swipe to Imagine new content in real time.

Perhaps this is appealing to you, but I find this revolting. Meta’s superficially appealing generated images have no place in my Instagram feed; they do not reflect how I actually want to use Instagram at all.

Decisions like these have infected the biggest platforms in various ways, which explains why I cannot stand to use most of them any longer. The one notable asterisk is YouTube: as of last year, it allows you to hide suggested videos on the homepage, which also turns off the infinite scrolling of Shorts. However, every video page still contains suggestions for what you should watch next. Each additional minute of your time is never enough for any of these platforms; they always want the minute after that, too.

You really notice the difference in respect when you compare these platforms against smaller, less established competitors. When I open Bluesky or my current favourite Mastodon client, it feels similar to the way social media did about ten years ago blended with an updated understanding of platform moderation. Glass is another tremendous product which lets me see exactly what I want, and discover more if I would like to — but there is no pressure.

The business models of these companies are notably different from those of the incumbent players. Bluesky and Mastodon are both built atop open protocols, so their future is somewhat independent of whether the companies themselves exist. But, also, it is possible there will come a time when those protocols lack the funding to be updated, and are used by no more than a handful of people, each running their own instance. Glass, on the other hand, is just a regular boring business: users pay money for it.

Is the future of some of these smaller players going to mimic those which have come before? Must they ultimately disrespect their users? I hope that is not the roadmap for any of them. It should not be necessary to slowly increase the level of hostility between product and user. It should be possible to build a successful business by treating people with respect.

The biggest social platforms are fond of reminding us about how they facilitate connections and help us communicate around the world. They are a little bit like a digital third place. And, just as you would not hang out somewhere that was actively trying to sabotage your ability to chat with your friends in real life, it is hard to know why you would do so online, either. Happily, where Google and Meta and X exhaust me with their choices, there is a wealth of new ideas that brings back joy.

It was only a couple of weeks ago when Mark Zuckerberg wrote in a letter to U.S. lawmakers about his regret in — among other things — taking officials at their word about Russian election meddling in 2020. Specifically, he expressed remorse for briefly demoting a single link to a then-breaking New York Post story involving data it had obtained from a copy of the hard drive of a laptop formerly belonging to Hunter Biden, the son of the now-U.S. president.

At the time, some U.S. officials were concerned about the possibility it was partly or wholly Russian disinformation. This was later found to be untrue. In his letter, Zuckerberg wrote “in retrospect, we shouldn’t have demoted the story” and said the company has “changed [its] policies and processes to make sure this doesn’t happen again”.

To be clear, the laptop story was not a hoax and social media platforms ultimately erred in the decisions they made, but their policies were not inherently unfair and were reasonably cautious in their handling of the story. Nevertheless, the phrase “Hunter Biden’s laptop” is now a kind of shibboleth among those who believe there is a mass censorship campaign by disinformation researchers, intelligence agencies, and social media companies. That group includes people like Jim Jordan, to whom Zuckerberg addressed his obsequious letter. Surely, he was no longer taking U.S. officials at their word, and would be shrugging off their suggestions for platform moderation.

Right?

Kevin Collier and Phil Helsel, NBC News:

Social media giant Meta announced Monday that it is banning Russian media outlet RT, days after the Biden administration accused RT of acting as an arm of Moscow’s spy agencies.

[…]

U.S. officials allege that in Africa, RT is behind an online platform called “African Stream” but hides its role; that in Germany, it secretly runs a Berlin-based English-language site known as “Red”; and that in France, it hired a journalist in Paris to carry out “influence projects” aimed at a French-speaking audience.

As of writing, the Instagram and Threads accounts for Red are still online, but its Facebook page is not. A June report in Tagesspiegel previously connected Red to RT.

But I could not find any previous reporting connecting African Stream to Russia before U.S. officials made that claim. Even so, without corroborating evidence, Meta dutifully suspended African Stream’s presence on its platforms, which appeared to be active as of Friday.

Meta should — absolutely — do its best to curtail governments’ use of its platforms to spread propaganda and disinformation. All platforms should do so. I also hope it was provided more substantial evidence of RT’s involvement in African Stream. By that standard, it was also reasonable — if ultimately wrong — for it to minimize the spread of the Post story in 2020 based on the information it had at the time.

For all Zuckerberg’s grovelling to U.S. lawmakers, Meta ultimately gets to choose what is allowed on its platforms and what is not. It is right for it to be concerned about political manipulation. But this stuff is really hard to moderate. That is almost certainly why it is deprioritizing “political” posts — not because they do not get engagement, or because the engagement they do get is heated and negative, but because it risks allegations of targeted censorship and spreading disinformation. Better, in Meta’s view, to simply restrict it all. Zuckerberg has figured out Meta is just as valuable when it does not react to criticism.

What I am worried about is the rising tension between the near-global scope of social media platforms and the parallel attempts by governments to get them to meet local expectations. Many of these platforms are based in the U.S. and have uncomfortably exported those values worldwide. Meta’s platforms are among the world’s most-used, so it is often among the most criticized. But earlier this month, X was banned in Brazil. The U.S. is seeking to ban TikTok and, based on a hearing today, it may well succeed.

It is concerning that these corporations have such concentrated power, but I also do not think it makes sense either to treat them as common carriers or to moderate them in other countries as they would be in the United States. I am more supportive of decentralized social software based on protocols like ActivityPub. Such software can be, if anything, even more permissive and even harder to moderate. That also makes it more difficult for governments to restrict — something I support, though I know it is not universally seen as the correct choice. Decentralized platforms minimize any single party’s control and, with it, help reduce the kinds of catastrophes we have seen from the most popular days of Facebook and Twitter.

Surely there will be new problems with which to contend, and perhaps it will have been better for there to be monolithic decision-makers after all. But it is right to try something different, and I am glad to see support building in different forms. It is an exciting time for the open web.

Even so, I still wish for a good MacOS Bluesky client.

I do not wish to make a whole big thing out of this, but I have noticed a bunch of little things which make my iPhone a little bit harder to use. For this, I am setting aside things like rearranging the Home Screen, which still feels like playing Tetris with an adversarial board. These are all things which are relatively new, beginning with the always-on display and the Island in particular, neither of which I had on my last iPhone.

The always-on display is a little bit useful and a little bit of a gimmick. I have mine set to hide the wallpaper and notifications. In this setup, however, the position of media controls becomes unpredictable. Imagine you are listening to music when someone wishes to talk to you. You reach down to the visible media controls and tap where the pause button is, knowing that this only wakes the display. You go in for another tap to pause but — surprise — you got a notification at some point and, so, now that you have woken up the display, the notification slides in from the bottom and moves the media controls up, so you have now tapped on a notification instead.

I can resolve this by enabling notifications on the dimmed lock screen view, but that seems more like a workaround than a solution to this unexpected behaviour. A simple fix would be to not show media controls at all while the phone is locked and the display is asleep. They are not functional in that state, yet they create an expectation of where those controls will be, and that expectation is not always met.

The Dynamic Island is fussy, too. I frequently interact with it for media playback, but it has a very short time-out. That is, if I pause media from the Dynamic Island, the ability to resume playback disappears after just a few seconds; I find this a little disorientating.

I do not understand how to swap the priority or visibility of Dynamic Island Live Activities. That is to say the Dynamic Island will show up to two persistent items, one of which will be minimized into a little circular icon, while the other will wrap around the display cutout. Apple says I should be able to swap the position of these by swiping horizontally, but I can only seem to make one of the Activities disappear no matter how I swipe. And, when I do make an Activity disappear, I do not know how I can restore it.

I find a lot of the horizontal swiping gestures too easy to activate in the Dynamic Island — I have unintentionally made an Activity disappear more than once — and across the system generally. It seems only a slightly off-centre angle is needed to transform a vertical scrolling action into a horizontal swiping one. Many apps make use of “sloppy” swiping — being able to swipe horizontally anywhere on the display to move through sequential items or different pages — and vertical scrolling in the same view, but the former is too easy for me to trigger when I intend the latter.

I also find the area above the Dynamic Island too easy to touch when I am intending to expand the current Live Activity. That touch is interpreted as touching the Status Bar, which jumps the scroll position of the current view to the top.

Lastly, the number of unintended taps I make has, anecdotally, skyrocketed. One reason for this is a change made several iOS versions ago to recognize touches more immediately. If I am scrolling a long list and I tap the display to stop the scroll in-place, resting my thumb onscreen is sometimes read as a tap action on whatever control is below it. Another reason for accidental touches is that pressing the sleep/wake button does not immediately stop interpreting taps on the display. You can try this now: open Mail, press the sleep/wake button, then — without waiting for the display to fall asleep — tap some message in the list. It is easy to do this accidentally when I return my phone to my pocket, for example.

These are all little things but they are a cumulative irritation. I do not think my motor skills have substantially changed in the past seventeen years of iOS device use, though I concede they have perhaps deteriorated a little. I do notice more things behaving unexpectedly. I think part of the reason is this two-dimensional slab of glass is being asked to interpret a bunch of gestures in some pretty small areas.

Chance Miller, 9to5Mac:

Apple has changed its screen recording privacy prompt in the latest beta of macOS Sequoia. As we reported last week, Apple’s initial plan was to prompt users to grant screen recording permissions weekly.

In macOS Sequoia beta 6, however, Apple has adjusted this policy and will now prompt users on a monthly basis instead. macOS Sequoia will also no longer prompt you to approve screen recording permissions every time you reboot your Mac.

After I wrote about the earlier permissions prompt, I got an email from Adam Selby, who manages tens of thousands of Macs in an enterprise context. Selby wanted to help me understand the conditions which trigger this alert, and to give me some more context. The short version is that Apple’s new APIs allow clearer and more informed user control over screen recording to the detriment of certain types of application, and — speculation alert — it is possible this warning will not appear in the first versions of MacOS Sequoia shipped to users.

Here is an excerpt from the release notes for the MacOS 15.0 developer beta:

Applications utilizing deprecated APIs for content capture such as CGDisplayStream & CGWindowListCreateImage can trigger system alerts indicating they might be able to collect detailed information about the user. Developers need to migrate to ScreenCaptureKit and SCContentSharingPicker. (120910350)

It turns out the “and” in that last sentence is absolutely critical. In last year’s beta releases of MacOS 14, Apple began advising developers it would be deprecating CoreGraphics screenshot APIs, and that applications should migrate to ScreenCaptureKit. However, this warning was removed by the time MacOS 14.0 shipped to users, only for it to reappear in the beta versions of 14.4 released to developers earlier this year. Apple’s message was to get on board — and fast — with ScreenCaptureKit.

ScreenCaptureKit was only the first part of this migration for developers. The second part — returning to the all-important “and” from the 15.0 release notes — is SCContentSharingPicker. That is the selection window you may have seen if you have recently tried screen sharing with, say, FaceTime. It has two agreeable benefits: first, it is not yet another permissions dialog; second, it allows the user to know every time the screen is being recorded because they are actively granting access through a trusted system process.

This actually addresses some of the major complaints I have with the way Apple has built out its permissions infrastructure to date:

[…] Even if you believe dialog boxes are a helpful intervention, Apple’s own sea of prompts do not fulfil the Jobs criteria: they most often do not tell users specifically how their data will be used, and they either do not ask users every time or they cannot be turned off. They are just an occasional interruption to which you must either agree or find some part of an application is unusable.

Instead of the binary choices of either granting apps blanket access to record your screen or having no permissions dialog at all for what could be an abused feature, this picker gives users control and knowledge over how an app may record their screen. It forgoes a scary catch-all dialog in favour of ongoing consent. A user will know exactly when an app is recording their screen, and exactly what it is recording, because that permission is no longer something an app gets, but something given to it by this picker.

This makes sense for a lot of screen recording use cases — for example, if someone is making a demo video, or if they are showing their screen in an online meeting. But if someone is trying to remotely access a computer, there is a sort of Möbius strip of permissions where you need to be able to see the remote screen in order to grant access to be able to see the screen. The Persistent Content Capture entitlement is designed to fix that specific use case.

Even though I think this structure will work for most apps, most of the time, it will add considerable overhead for apps like xScope, which allows you to measure and sample anything you can see, or ScreenFloat — a past sponsor — which allows you to collect, edit, and annotate screenshots and screen recordings. To use these utilities and others like them, a user will need to select the entire screen from the window picking control every time they wish to use a particular tool. Something as simple as copying an onscreen colour is now a clunky task without, as far as I can tell, any workaround. That is basically by design: what good is it to have an always-granted permission when the permissions structure is predicated on ongoing consent? But it does mean these apps are about to become very cumbersome. Either you need to grant whole-screen access every time you invoke a tool (or launch the app), or you do so a month at a time — and there is no guarantee the latter grace period will stick around in future versions of MacOS.

I think it is possible MacOS 15.0 ships without this dialog. In part, that is because its text — “requesting to bypass the system window picker” — is technical and abstruse, written with seemingly little care for average user comprehension. I also think that could be true because it is what happened last year with MacOS 14.0. That is not to say it will be gone for good; Apple’s intention is very clear to me. But hopefully there will be some new APIs or entitlement granted to legitimately useful utility apps built around latent access to seeing the whole screen when a user commands. At the very least, users should be able to grant access indefinitely.

I do not think it is coincidental this Windows-like trajectory for MacOS has occurred as Apple tries to focus more on business customers. In an investor call last year, Tim Cook said Apple’s “enterprise business is growing”. In one earlier this month, he seemed to acknowledge it was a factor, saying the company “also know[s] the importance of security for our users and enterprises, so we continue to advance protections across our products” in the same breath as providing an update on the company’s Mac business. This is a vague comment and I am wary of reading too much into it, but it is notable to see the specific nod to Mac enterprise security this month. I hope this does not birth separate “Home” and “Professional” versions of MacOS.

Still, there should be a way for users to always accept the risks of their actions. I am confident in my own ability to choose which apps I run and how to use my own computer. For many people — maybe most — it makes sense to provide a layer of protection for possibly harmful actions. But there must also be a way to suppress these warnings. Apple ought to be doing better on both counts. As Michael Tsai writes, the existing privacy system “feels like it was designed, not to help the user understand what’s going on and communicate their preferences to the system, but to deflect responsibility”. The new screen recording picker feels like an honest attempt at restricting what third-party apps are able to do without the user’s knowledge, and without burdening users with an uninformative clickwrap agreement.

But, please, let me be riskier if I so choose. Allow me to let apps record the entire screen all the time, and open unsigned apps without going through System Settings. Give me the power to screw myself over, and then let me get out of it. One does not get better at cooking by avoiding tools that are sharp or hot. We all need protections from our own stupidity at times, but there should always be a way to bypass them.

Marko Zivkovic, in an April report for AppleInsider, revealed several new Safari features to debut this year. Some of them, like A.I.-based summarization, were expected and shown at WWDC. Then there was this:

Also accessible from the new page controls menu is a feature Apple is testing called “Web Eraser.” As its name would imply, it’s designed to allow users to remove, or erase, specific portions of web pages, according to people familiar with the feature.

WWDC came and went without any mention of this feature, despite its lengthy and detailed description in that April story. Zivkovic, in a June article, speculated on what happened:

So, why did Apple remove a Safari feature that was fully functional?

The answer to that question is likely two-fold — to avoid controversy and to make leaked information appear inaccurate or incorrect.

The first of these reasons is plausible to me; the second is not. In May, Lara O’Reilly of Business Insider reported on a letter sent by a group of publishers and advertisers worried Apple was effectively launching an ad blocker. Media websites may often suck, but this would be a big step for a platform owner to take. I have no idea if that letter caused Apple to reconsider, but it seems likely to me it would be prudent and reasonable for the company to think more carefully about this feature’s capabilities and how it is positioned.

The apparent plot to subvert AppleInsider’s earlier reporting, on the other hand, is ludicrous. If you believe Zivkovic, Apple went through the time and expense of developing a feature so refined it must have been destined for public use because there is, according to Zivkovic, “no reason to put effort into the design of an internal application”,1 then decided it was not worth launching because AppleInsider spoiled it. This was not the case for any other feature revealed in that same April report for, I guess, some top secret reason. As evidence, Zivkovic points to several products which have been merely renamed for launch:

A notable example of this occurred in 2023, when Apple released the first developer betas of its new operating system for the Apple Vision Pro headset. Widely expected to make its debut under the name xrOS, the company instead announced “visionOS.”

Even then, there were indications of a rushed rebrand. Apple’s instructional videos and code from the operating systems contained clear mentions of the name xrOS.

Apple renamed several operating system features ahead of launch. To be more specific, the company renamed its Adaptive Voice Shortcuts accessibility feature to Vocal Shortcuts.

As mentioned earlier, Intelligent Search received the name Highlights, while Generative Playground was changed to “Image Playground.” The name “Generative Playground” still appears as the application title in the recently released developer betas of Apple’s operating systems.

None of these seem like ways of discrediting media. Renaming the operating system for the Vision Pro to “visionOS” makes sense because it is the name of the product — similar to tvOS and iPadOS — and, also, “xrOS” is clunky. Because of how compartmentalized Apple is, the software team probably did not know what name it would go by until it was nearly time to reveal it. But they needed to call it something so they could talk about it in progress meetings without saying “the spatial computer operating system”, or whatever. This and all of the other examples just seem like temporary names getting updated for public use. None of this supports the thesis that Apple canned Web Eraser to discredit Zivkovic. There is a huge difference between replacing the working name of a product with one which has been finalized, and developing an entire new feature only to scrap it to humiliate a reporter.

Besides, Mark Gurman already tried this explanation. In a March 2014 9to5Mac article, Gurman reported on the then-unreleased Health app for iOS, which he said would be named “Healthbook” and would have a visual design similar to the Passbook app, now known as Wallet. After the Health app was shown at WWDC that year, Gurman claimed it was renamed and redesigned “late in development due to the leak”. While I have no reason to doubt the images Gurman showed were re-created from real screenshots, and there was evidence of the “Healthbook” name in early builds of the Health app, I remain skeptical it was entirely changed in direct response to Gurman’s report. It is far more likely the name was a placeholder, and the March version of the app’s design was still a work in progress.

The June AppleInsider article is funny in hindsight for how definitive it is in the project’s cancellation — it “never became available to the public”; it “has been removed in its entirety […] leaving no trace of it”. Yet, mere weeks later, it seems a multitrillion-dollar corporation decided it would not be bullied by an AppleInsider writer, held its head high, and released it after all. You have to admire the bravery.

Juli Clover, of MacRumors, was early to report on its appearance in the fifth beta builds of this year’s operating systems under a new name (Update: it seems like Cherlynn Low of Engadget was first; thanks Jeff):

Distraction Control can be used to hide static content on a page, but it is not an ad blocker and cannot be used to permanently hide ads. An ad can be temporarily hidden, but the feature was not designed for ads, and an ad will reappear when it refreshes. It was not created for elements on a webpage that regularly change.

I cannot confirm but, after testing it, I read this to mean it will hide elements with some kind of identifier which remains fixed across sessions — an id or perhaps a unique string of classes — and within the same domain. If the identifier changes on each load, the element will re-appear. Since ads often appear with different identifiers each time and this feature is (I think) limited by domain, it is not an effective ad blocker.
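If my reading is right, the logic might resemble the following sketch. To be clear, this is my guess at the behaviour, not Apple’s actual implementation; the selector derivation, the per-domain storage, and the simplified element objects are all my assumptions:

```javascript
// Hypothetical sketch of how a "hide this element" feature could decide
// whether a previously hidden element can be re-hidden on a later visit.
// Elements are represented as plain objects (not real DOM nodes) with
// an id, a tag name, and a list of classes.

// Derive a stable identifier for an element: prefer its id, then fall
// back to its tag plus its sorted class list. An element with neither
// yields nothing persistent to match against on the next page load.
function stableIdentifier(el) {
  if (el.id) return `#${el.id}`;
  if (el.classes && el.classes.length > 0) {
    return `${el.tag}.${[...el.classes].sort().join(".")}`;
  }
  return null;
}

// Record a hidden element against the page's domain, so the rule only
// applies on that site. Returns false when nothing stable exists to
// remember — e.g. an ad slot with randomized class names.
function rememberHidden(store, domain, el) {
  const key = stableIdentifier(el);
  if (key === null) return false;
  (store[domain] ||= new Set()).add(key);
  return true;
}

// On a later page load, an element stays hidden only if its identifier
// still matches one remembered for this domain.
function shouldHide(store, domain, el) {
  const key = stableIdentifier(el);
  return key !== null && (store[domain]?.has(key) ?? false);
}
```

Under this model, a sidebar with a fixed `id` stays hidden across refreshes on the same domain, while an ad whose class names change on every load produces a different identifier each time and therefore reappears — which matches both Apple’s description and my testing.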

Zivkovic’s follow-up story, published after Distraction Control was included in an August beta build, is more or less a rehashing of only the first of the two explanations he offered in June for the feature’s delay, never once commenting on his more outlandish theory:

Based on the version of Distraction Control revealed on Monday, it appears as though Apple wanted to distance itself from Web Eraser and the negative connotations surrounding the feature.

As mentioned earlier, the company renamed Web Eraser to Distraction Control. In addition to this, the fifth developer beta of iOS 18 includes a new pop-up message that informs users of the feature’s overall purpose, making it clear that it’s not meant to block ads.

It has been given a more anodyne name and it now has a dialog box.

Still, this shows Zivkovic’s earlier report was correct: Apple was developing an easy-to-use feature to hide page elements within Safari and it is in beta builds of the operating systems launching this year. Zivkovic should celebrate this. Instead, his speculative June report makes his earlier reliable reporting look shaky because, it would seem, he was too impatient to wait and see if the feature would launch later. That would be unusual for Apple but still more likely than the company deciding to cancel it entirely.

The August report also adds some new information but, in an effort to create distance between Web Eraser and Distraction Control, Zivkovic makes some unforced errors:

When it comes to ads, pre-release versions of Web Eraser behaved differently from the publicly available Distraction Control. Internal versions of the feature had the ability to block the same page element across different web pages and maintained the users’ choice of hidden elements even after the page was refreshed.

This description of the Distraction Control behaviour is simply not true. In my testing, page elements with stable identifiers remain hidden between pages on the same domain, after the page has been refreshed, and after several hours in a new browser tab.

Zivkovic should be thrilled about his April scoop. Instead, the two subsequent reports undermine confidence in that first report and unnecessarily complicate the most likely story with baseless speculation that borders on conspiracy theory. From the outside, it appears the early rumour about Web Eraser was actually beneficial to the feature. Zivkovic accurately reported its existence and features. Publishers, worried about its use as a first-party ad blocker, wrote to Apple. Apple delayed the feature’s launch and, when it debuted, gave it a new name and added a dialog box on first use to clarify its intent. Of course, someone can still use Distraction Control to hide ads but, as a manual process on a per-domain basis, it is far more tedious than downloading a dedicated ad blocker.

This was not a ruse to embarrass rumour-mongers. It was just product development: a sometimes messy, sometimes confusing process which, in this case, seemed to result in a better feature with a clearer scope. Unless someone reports otherwise, it does not need to be much more complicated than that.


  1. If Zivkovic believes Apple does not care much about designing things for internal use only, he is sorely mistaken. Not every internal tool is given that kind of attention, but many are. ↥︎

In response to Apple’s increasingly distrustful permissions prompts, it is worth thinking about what benefits these prompts could provide. For example, apps can start out trustworthy and later become malicious through updates or ownership changes, and users should be reminded of the permissions they have afforded them. There is a recent example of this in Bartender. But I am not sure any of this is helped by yet another alert.

The approach seems to be informed by the Steve Jobs definition of privacy, as he described it at D8 in 2010:

Privacy means people know what they’re signing up for — in plain English, and repeatedly. That’s what it means.

I’m an optimist. I believe people are smart, and some people want to share more data than other people do. Ask ’em. Ask ’em every time. Make them tell you to stop asking them, if they get tired of your asking them. Let them know precisely what you’re gonna do with their data.

Some of the permissions dialogs thrown by Apple’s operating systems exist to preempt abuse, while others were added in response to specific scandals. The prompt for accessing your contacts, for example, was added after Path absorbed users’ contact lists.

The new weekly nag box for screen recording in the latest MacOS Sequoia is also conceivably a response to a specific incident. Early this year, the developer of Bartender sold the app to another developer without telling users. The app has long required screen recording permissions to function. The quiet transfer of that power to a new, shady owner understandably made some users nervous.

I do not think this new prompt succeeds in helping users make an informed decision. There is no information in the dialog’s text telling you who the developer is, or whether it has changed. It does not appear the text of the dialog can be customized for the developer to provide a reason. If this is thrown by an always-running app like Bartender, a user will either become panicked or begin passively accepting this annoyance.

The latter is now the default response state to a wide variety of alerts and cautions. Car alarms are ineffective. Hospitals and other medical facilities are filled with so many beeps staff become “desensitized”. People agree to cookie banners without a second of thought. Alert fatigue is a well-known phenomenon, such that it informed the Canadian response in the earliest days of the pandemic. Without more thoughtful consideration of how often and in what context to inform people of something, it is just pollution.

There is apparently an entitlement which Apple can grant, but it is undocumented. It is still the summer and this could all be described in more robust terms over the coming weeks. Yet it is alarming this prompt was introduced with so little disclosure.

I believe people are smart, too. But I do not believe they are fully aware of how their data is being collected and used, and none of these dialog boxes do a good job of explaining that. An app can ask to record your screen on a weekly basis, but the user is not told any more than that. It could ask for access to your contacts — perhaps that is only for local, one-time use, or the app could be sending a copy to the developer, and a user has no way of knowing which. A weather app could be asking for your location because you requested a local forecast, but it could also be reselling it. A Mac app can tell you to turn on full disk access for plausible reasons, but it could abuse that access later.

Perhaps the most informative dialog boxes are the cookie consent forms you see across the web. In their most comprehensive state, you can see which specific third-parties may receive your behavioural data, and they allow you to opt into or out of categories of data use. Yet nobody actually reads those cookie consents because they have too much information.

Of course, nobody expects dialog boxes to be a complete solution to our privacy and security woes. A user places some trust in each layer of the process: in App Review, if they downloaded software from the App Store; in built-in protections; in the design of the operating system itself; and in the developer. Even if you believe dialog boxes are a helpful intervention, Apple’s own sea of prompts do not fulfil the Jobs criteria: they most often do not tell users specifically how their data will be used, and they either do not ask users every time or they cannot be turned off. They are just an occasional interruption to which you must either agree or find some part of an application is unusable.

Users are not typically in a position to knowledgeably authorise these requests. They are not adequately informed, and it is poor policy to treat these as individualized problems.

Since owners of web properties became aware of the traffic-sending power of search engines — most often Google in most places — they have been in an increasingly uncomfortable relationship with them as search moves beyond ten relevant links on a page. Google does not need websites, per se; it needs the information they provide. Its business recommendations are powered in part by reviews on other websites. Answers to questions appear in snippets, sourced to other websites, without the user needing to click away.

Publishers and other website owners might consider this a bad deal. They feed Google all this information hoping someone will visit their website, but Google is adding features that make it less likely they will do so. Unless they were willing to risk losing all their Google search traffic, there was little a publisher could do. Individually, they needed Google more than Google needed them.

But that has not been quite as true for Reddit. Its discussions hold a uniquely large corpus of suggestions and information on specific topics and in hyper-local contexts, as well as a whole lot of trash. While the quality of Google’s results has been sliding, searchers discovered they could append “Reddit” to a query to find what they were looking for.

Google realized this and, earlier this year, signed a $60 million deal with Reddit allowing it to scrape the site to train its A.I. features. Part of that deal apparently involved indexing pages in search as, last month, Reddit restricted that capability to Google. That is: if you want to search Reddit, you can either use the site’s internal search engine, or you can use Google. Other search engines still display results created from before mid-July, according to 404 Media, but only Google is permitted to crawl anything newer.
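The mechanics of such an exclusivity arrangement are mundane: a robots.txt file can scope crawl permission per user agent. A hypothetical file expressing a Google-only crawl policy — this is illustrative, not Reddit’s actual file — would look something like:

```text
# Illustrative robots.txt for a Google-only crawl policy.
# Not Reddit's actual file; shown only to demonstrate the mechanism.

User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
```

It is worth remembering robots.txt is advisory: it works only because well-behaved crawlers choose to honour the most specific user-agent group that matches them, so everything downstream of this deal depends on other search engines continuing to comply.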

It is unclear to me whether this is a deal only available to Google, or if it is open to any search engine that wants to pay. Even if it was intended to be exclusive, I have a feeling it might not be for much longer. But it seems like something Reddit would only care about doing with Google because other search engines basically do not matter in the United States or worldwide.1 What amount of money do you think Microsoft would need to pay for Bing to be the sole permitted crawler of Reddit in exchange for traffic from its measly market share? I bet it is a lot more than $60 million.

Maybe that is one reason this agreement feels uncomfortable to me. Search engines are marketed as finding results across the entire web but, of course, that is not true: they most often obey rules declared in robots.txt files, and they do not necessarily index everything they are able to, either. Those are not explicit limitations, though. It feels like a violation of the premise of a search engine for a website to decide which ones will be permitted to crawl and link to it. The whole thing about the web is that the links are free. There is no guarantee the actual page will be freely accessible, but the link itself is not restricted. It is the central problem with link tax laws, and this pay-to-index scheme is similarly restrictive.

This is, of course, not the first time there has been tension in how a site balances search engine visibility and its own goals. Publishers have, for years, weighed their desire to be found by readers against login requirements and paywalls — guided by the overwhelming influence of Google.

Google used to require that publishers provide free articles to be indexed by the search engine but, in 2017, it replaced that requirement with a model that is more flexible for publishers. Instead of forcing a certain number of free page views, publishers are now able to provide Google with indexable data.

Then there are partnerships struck by search engines and third parties to obtain specific kinds of data. These were summarized well in the recent United States v. Google decision (PDF), and they are probably closest in spirit to this Reddit deal:

GSEs enter into data-sharing agreements with partners (usually specialized vertical providers) to obtain structured data for use in verticals. Tr. at 9148:2-5 (Holden) (“[W]e started to gather what we would call structured data, where you need to enter into relationships with partners to gather this data that’s not generally available on the web. It can’t be crawled.”). These agreements can take various forms. The GSE might offer traffic to the provider in exchange for information (i.e., data-for-traffic agreements), pay the provider revenue share, or simply compensate the provider for the information. Id. at 6181:7-18 (Barrett-Bowen).

As of 2020, Microsoft has partnered with more than 100 providers to obtain structured data, and those partners include information sources like Fandango, Glassdoor, IMDb, Pinterest, Spotify, and more. DX1305 at .004, 018–.028; accord Tr. at 6212:23–6215:10 (Barrett-Bowen) (agreeing that Microsoft partners with over 70 providers of travel and local information, including the biggest players in the space).

The government attorneys said Bing is required to pay for structured data owing to its smaller size, while Google is able to obtain structured data for free because it sends partners so much traffic. The judge ultimately rejected their argument that Microsoft struggled to sign these agreements or was impeded in doing so, but did not dispute the difference in negotiating power between the two companies.

Once more, for emphasis: Google usually gets structured data for free but, in this case, it agreed to pay $60 million; imagine how much it would cost Bing.

This agreement does feel pretty unique, though. It is hard for me to imagine many other websites with the kind of specific knowledge found aplenty on Reddit. It is a centralized version of the bulletin boards of the early 2000s for such a wide variety of interests and topics. Its user base is so vast that, while it cannot ignore Google referrals, it is not reliant on them in the same way many other websites are.

Most other popular websites are insular social networks; Instagram and TikTok are not relying on Google referrals. Wikipedia would probably be the best comparison to Reddit in terms of its contribution to the web — even greater, I think — but every page I tried except the homepage is overwhelmingly dependent on external search engine traffic.

Meanwhile, pretty much everyone else still has to pay Google for visitors. They have to buy the ads sitting atop organic search results. They have to buy ads on maps, on shopping carousels, on videos. People who operate websites hope they will get free clicks, but many of them know they will have to pay for some of them, even though Google will happily lift and summarize their work without compensation.

I cannot think of any other web property which has this kind of leverage over Google. While this feels like a violation of the ideals and principles that built the open web on which Google built its empire, I wonder if Google will make many similar agreements, if any. I doubt it — at least for now. This arrangement feels anomalous; maybe that is why it is so unique, and why it is not worth being too troubled by it.


  1. The uptick of Bing in the worldwide chart appears to be, in part, thanks to a growing share in China. Its market share has also grown a little in Africa and South America, but only by tiny amounts. However, Reddit is blocked in China, so a deal does not seem particularly attractive to either party. ↥︎