
There is a long line of articles questioning Apple’s ability to deliver on artificial intelligence because of its position on data privacy. Today, we got another in the form of a newsletter.

Reed Albergotti, Semafor:

Meanwhile, Apple was focused on vertically integrating, designing its own chips, modems, and other components to improve iPhone margins. It was using machine learning on small-scale projects, like improving its camera algorithms.

[…]

Without their ads businesses, companies like Google and Meta wouldn’t have built the ecosystems and cultures required to make them AI powerhouses, and that environment changed the way their CEOs saw the world.

Again, I will emphasize this is a newsletter. It may seem like an article from a prestige publisher that prides itself on “separat[ing] the facts from our views”, but you might notice how, aside from citing some quotes and linking to ads, none of Albergotti’s substantive claims are sourced. This is just riffing.

I remain skeptical. Albergotti frames this as both a mindset shift and a necessity for advertising companies like Google and Meta. But the company synonymous with the A.I. boom, OpenAI, does not have the same business model. Besides, Apple behaves like other A.I. firms by scraping the web and training models on massive amounts of data. The evidence for this theory seems pretty thin to me.

But perhaps a reluctance to be invasive and creepy is one reason why personalized Siri features have been delayed. I hope Apple does not begin to mimic its peers in this regard; privacy should not be sacrificed. I think it is silly to be dependent on corporate choices rather than legislation to determine this, but that is the world some of us live in.

Let us concede the point anyhow, since it suggests a role Apple could fill by providing an architecture for third-party A.I. on its products. It does not need to deliver everything to end users; it can focus on building a great platform. Albergotti might sneeze at “designing its own chips […] to improve iPhone margins”, which I am sure was one goal, but it has paid off in ridiculously powerful Macs perfect for A.I. workflows. And, besides, it has already built some kind of plugin architecture into Apple Intelligence because it has integrated ChatGPT. There is no way for other providers to add their own extension — not yet, anyhow — but the system is there.

Gus Mueller:

The crux of the issue in my mind is this: Apple has a lot of good ideas, but they don’t have a monopoly on them. I would like some other folks to come in and try their ideas out. I would like things to advance at the pace of the industry, and not Apple’s. Maybe with a blessed system in place, Apple could watch and see how people use LLMs and other generative models (instead of giving us Genmoji that look like something Fisher-Price would make). And maybe open up the existing Apple-only models to developers. There are locally installed image processing models that I would love to take advantage of in my apps.

Via Federico Viticci, MacStories:

Which brings me to my second point. The other feature that I could see Apple market for a “ChatGPT/Claude via Apple Intelligence” developer package is privacy and data retention policies. I hear from so many developers these days who, beyond pricing alone, are hesitant toward integrating third-party AI providers into their apps because they don’t trust their data and privacy policies, or perhaps are not at ease with U.S.-based servers powering the popular AI companies these days. It’s a legitimate concern that results in lots of potentially good app ideas being left on the table.

One of Apple’s specialties is improving the experience of using many of the same technologies as everyone else. I would like to see that in A.I., too, but I have been disappointed by its lacklustre efforts so far. Even long-running projects where it has had time to learn and grow have not paid off, as anyone can see in Siri’s legacy.

What if you could replace these features? What if Apple’s operating systems were great platforms by which users could try third-party A.I. services and find the ones that fit them best? What if Apple could provide certain privacy promises, too? I bet users would want to try alternatives in a heartbeat. Apple ought to welcome the challenge.

In just about every discussion concerning TikTok’s ability to operate within the United States, including my own, two areas of concern are cited: users’ data privacy, and the manipulation of public opinion through its feeds by a hostile foreign power. Regarding the first, no country, the U.S. and Canada included, is serious about the mishandling of private information unless it passes comprehensive data privacy legislation, so we can ignore that for now. The latter argument, however, merited my writing thousands of words in that single article. So let me dig into it again from a different angle.

In a 2019 speech at Georgetown University, Mark Zuckerberg lamented an apparently lost leadership by the U.S. in technology. “A decade ago, almost all of the major internet platforms were American,” he said. “Today, six of the top ten are Chinese”.

Incidentally, Zuckerberg gave this speech the same year in which his company announced, after five years of hard work and ingratiation, it was no longer pursuing an expansion into China which would necessarily require it to censor users’ posts. It instead decided to mirror the denigration of Chinese internet companies by U.S. lawmakers and lobbied for a TikTok ban. This does not suggest a principled opposition on the grounds of users’ free expression. Rather, it was seen as a good business move to expand into China until it became more financially advantageous to try to get Chinese companies banned stateside.

I do not know where Zuckerberg got his “six of the top ten” number. The closest I could find was five — based solely on total user accounts. Regardless of the actual number, Zuckerberg was correct to say Chinese internet companies have been growing at a remarkable rate, but it is a little misleading; aside from TikTok, these apps mostly remain a domestic phenomenon. WeChat’s user base, for example, is almost entirely within China, though it is growing internationally as one example of China’s “Digital Silk Road” initiative.

Tech companies from the U.S. still reign supreme nearly everywhere, however. The country exports the most popular social networks, messaging services, search engines, A.I. products, CDNs, and operating systems. Administration after administration has recognized the value to the U.S. of boosting the industry for domestic and foreign policy purposes. It has been a soft power masterstroke for decades.

In normal circumstances, this is moderately concerning for those of us in the rest of the world. Policies set in the U.S. — either those set by companies because of cultural biases or, in the case of something like privacy or antitrust, legal understanding — may not reflect expectations in other regions, and it is not ideal that so much of modern life depends so heavily on the actions of a single country.

These are not normal circumstances — especially here, in Canada. The president of the U.S. is deliberately weakening the Canadian economy in an attempt to force us to cede our sovereignty. Earlier this week, while he was using his extraordinary platform to boost the price of Tesla shares, the president reiterated this argument while also talking about increasing the size of the U.S. military. This is apparently all one big joke, in broadly the same way that pushing a live chicken into oncoming traffic is a joke. Many people have wasted many hours and words trying to understand why he is so focused on this fifty-first state nonsense — our vast natural resources, perhaps, or the potential for polar trade routes because of warming caused by those vast natural resources. But this is someone whose thoughts, in the words of David Brooks, “are often just six fireflies beeping randomly in a jar”. He said why he wants Canada in that Tesla infomercial. “When you take away that [border] and you look at that beautiful formation,” he said while gesticulating with his hands in a shape unlike the combined area of Canada and the United States but quite like how someone of his vibe might crassly describe a woman’s body, “there is no place anywhere in the world that looks like that”. We are nothing more than a big piece of land, and he would like to take it.

Someone — I believe it was Musk, standing just beside him — then reminded him of how he wants Greenland, too, which put a smile on his face as he said “if you add Greenland […] it’s gonna look beautiful”. In the Oval Office yesterday, he sat beside NATO’s Mark Rutte and said “we have a couple of [military] bases on Greenland already”, and “maybe you will see more and more soldiers go there, I don’t know”. It is all just a big, funny joke, from a superpower with the world’s best-funded military, overseen by a chaotic idiot. Ha ha ha.

The U.S. has become a hostile foreign power to Canada and, so, we should explore its dominance in technology under the same criteria the U.S. has applied to China’s purported control over TikTok and that control’s impact on U.S. sovereignty. If, for instance, it makes sense to be concerned about the obligation of Chinese companies to reflect ruling party ideology, it is perhaps more worrisome that U.S. tech companies are lining up to do so voluntarily. They have a choice.

Similarly, should we be suspicious that our Instagram feeds and Google searches are being tilted in a pro-U.S. direction? I am certain one could construct a study similar to those indicating a pro-China bias on TikTok (PDF) with U.S. platforms. Is YouTube pushing politically divisive videos to Canadians in an effort to weaken our country? Is Facebook suggesting pro-U.S. A.I. slop to Canadians something more than algorithmic noise?

This is before considering Elon Musk who, as both a special government employee and owner of X, is more directly controlling than Chinese officials are speculated to be over TikTok. X has become a solitary case study in state influence over social media. Are the feeds of Canadian users being manipulated? Is his platform a quasi-official propaganda outlet?

Without evidence, these ideas all strike me as conspiracy-brained nonsense. I imagine one could find just as much to support these ideas as is found in those TikTok studies, a majority of which observe the effects of select searches. The Network Contagion one (PDF), linked earlier, is emblematic of these kinds of reports, about which I wrote last year referencing two other examples — one written for the Australian government, and a previous Network Contagion report:

The authors of the Australian report conducted a limited quasi-study comparing results for certain topics on TikTok to results on other social networks like Instagram and YouTube, again finding a handful of topics which favoured the government line. But there was no consistent pattern, either. Search results for “China military” on Instagram were, according to the authors, “generally flattering”, and X searches for “PLA” scarcely returned unfavourable posts. Yet results on TikTok for “China human rights”, “Tiananmen”, and “Uyghur” were overwhelmingly critical of Chinese official positions.

The Network Contagion Research Institute published its own report in December 2023, similarly finding disparities between the total number of posts with specific hashtags — like #DalaiLama and #TiananmenSquare — on TikTok and Instagram. However, the study contained some pretty fundamental errors, as pointed out by — and I cannot believe I am citing these losers — the Cato Institute. The study’s authors compared total lifetime posts on each social network and, while they say they expect 1.5–2.0× the posts on Instagram because of its larger user base, they do not factor in how many of those posts could have existed before TikTok was even launched. Furthermore, they assume similar cultures and a similar use of hashtags on each app. But even benign hashtags have ridiculous differences in how often they are used on each platform. There are, as of writing, 55.3 million posts tagged “#ThrowbackThursday” on Instagram compared to 390,000 on TikTok, a ratio of 141:1. If #ThrowbackThursday were part of this study, the disparity on the two platforms would rank similarly to #Tiananmen, one of the greatest in the Institute’s report.
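The ratio arithmetic in that comparison is easy to retrace. Here is a quick sketch using only the post counts quoted in the paragraph above (the counts will, of course, drift over time):

```python
# Lifetime post counts for #ThrowbackThursday, as quoted above.
instagram_posts = 55_300_000
tiktok_posts = 390_000

# Truncating division reproduces the 141:1 figure cited in the text.
ratio = instagram_posts // tiktok_posts
print(f"{ratio}:1")  # 141:1
```

The point stands regardless of rounding: a two-order-of-magnitude gap on a wholly apolitical hashtag is the baseline against which the #Tiananmen disparity should have been judged.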

The problem with most of these complaints, as their authors acknowledge, is that there is a known input and a perceived output, but there are oh-so-many unknown variables in the middle. It is impossible to know how much of what we see is a product of intentional censorship, unintentional biases, bugs, side effects of other decisions, or a desire to cultivate a less stressful and more saccharine environment for users. […]

The more recent Network Contagion study is perhaps even less reliable. It comprises a similar exploration of search results, and surveys comparing TikTok users’ views to those of non-users. In the first case, the researchers only assessed four search terms: Tibet, Tiananmen, Uyghur, and Xinjiang. TikTok’s search results produced the fewest examples of “anti-China sentiment” in comparison with Instagram and YouTube, but the actual outcomes were not consistent. Results for “Uyghur” and “Xinjiang” on TikTok were mostly irrelevant; on YouTube, however, nearly half of a user’s journey would show videos supportive of China for both queries. Results for “Tibet” were much more likely to be “anti-China” on Instagram than the other platforms, though similarly “pro-China” as TikTok.

These queries are obviously sensitive in China, and I have no problem believing TikTok may be altering search results. But this study, like the others I have read, is not at all compelling if you start picking it apart. For the “Uyghur” and “Xinjiang” examples, researchers say the heavily “pro-China” results on YouTube are thanks to “pro-CCP media assets” and “official or semi-official CCP media sources” uploading loads of popular videos with a positive spin. Sometimes, TikTok is more likely to show irrelevant results; at other times, it shows “neutral” videos, which the researchers say are things like unbiased news footage. In some cases — as with results for “Tiananmen” and “Uyghur” — TikTok was similarly likely to show “pro-China” and “anti-China” results. The researchers hand-wave away these mixed outcomes by arguing “the TikTok search algorithm systematically suppresses undesirable anti-China content while flooding search results with irrelevant content”. Yet the researchers document no effort to compare the results of these search terms with anything else — controversial or politically sensitive terms outside China, for example, or terms which result in overwhelmingly dour results, or generic apolitical terms. In all cases, TikTok returns more irrelevant results than the other platforms; maybe it is just bad at search. We do not know because we have nothing to compare it to. Again, I have no problem believing TikTok may be suppressing results, but this study does not convince me it is uniformly reflecting official Chinese government lines.

As for the survey results, they show TikTok users had more favourable views of China as a travel destination and were less concerned about its human rights abuses. This could plausibly be explained by TikTok users skewing younger and, therefore, growing up seeing a much wealthier China than older generations. Younger people might simply be less aware of human rights abuses. For contrast, people who do not use TikTok are probably more likely to have negative views of China — not just because they are more likely to be older, but because they are suspicious of the platform. “When controlling for age,” the researchers say, “TikTok use significantly and uniquely predicted more positive perceptions of China’s human rights record” among video-based platforms, but Facebook users also had more positive perceptions, and nobody is claiming Facebook is in the bag for China. Perhaps there are other reasons — but they go unexplored.

This is a long digression, but it indicates to me just how possible it would be to create a similar understanding for social media’s impact on Canada. In my own experience on YouTube — admittedly different from a typical experience because I turned off video history — the Related Videos on just about everything I watch are packed with recommendations for Fox News, channels dedicated to people getting “owned”, and pro-Trump videos. I do not think YouTube is trying to sway me into a pro-American worldview and shed my critical thinking skills, but one could produce a compelling argument for it.

This is something we are going to need to pay increasing attention to. People formerly with Canadian intelligence are convinced the U.S. president is doing to Canada in public what so many before him have done to fair-weather friends in private. They believe his destabilization efforts may be supported by a propaganda strategy, particularly on Musk’s X. These efforts may not be unique to social media, either. Postmedia, the publisher of the National Post plus the most popular daily newspapers in nearly every major Canadian city, is majority U.S.-owned. This is not good.

Yet we should not treat social platforms the same as we do media organizations. We should welcome foreign-owned publications to cover our country, but the ownership of our most popular outlets should be primarily domestic. The internet does not work like that — for both good and bad — nor should we expect it to. Requiring Canadian subsidiaries of U.S. social media companies or banning them outright would continue the ongoing splintering of internet services with little benefit for Canadians or, indeed, the expectations of the internet. We should take a greater lead in determining our digital future without being so hostile to foreign services. That means things like favouring protocols over platforms, which give users more choice over their experience, and permit a level of autonomy and portability. It means ensuring a level of digital sovereignty with our most sensitive data.

It is also a reminder to question the quality of automated recommendations and search results. We do not know how any of them work — companies like Google often cite third-party manipulation as a reason to keep them secret — and I do not know that people would believe tech companies if they were more transparent in their methodology. To wit, digital advertisements often have a button or menu item explaining why you are seeing that particular ad, but it has not stopped many people from believing their conversations are picked up by a device’s microphone and used for targeting. If TikTok released the source for its recommendations engine, would anyone trust it? How about if Meta did the same for its products? I doubt it; nobody believes these companies anyway.

The tech industry is facing declining public trust. The United States’ reputation is sinking among allies and its domestic support for civil rights is in freefall. Its leader is waging economic war on the country where I live. CEOs lack specific values and are following the shifting tides. Yet our world relies on technologies almost entirely dependent on the stability of the U.S., which is currently in short supply. The U.S., as Paris Marx wrote, “needs to know that it cannot dominate the tech economy all on its own, and that the people of the world will no longer accept being subject to the whims of its dominant internet and tech companies”. The internet is a near-miraculous global phenomenon. Restricting companies based on their country of origin is not an effective way to correct this imbalance. But we should not bend to U.S. might, either. It is, after all, just one country of many. The rest of the world should encourage it to meet us at our level.

Apple spokesperson Jacqueline Roy, in a statement provided seemingly first to both Stephen Nellis, of Reuters, and John Gruber:

[…] We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.

Unsurprisingly, this news comes on a Friday and is announced via a carefully circulated statement instead of on Apple’s own website. It is a single feature, but it is a high-priority one showcased in its celebrity-infused ads for its flagship iPhone models. I think Apple ought to have published its own news instead of relying on other outlets to do its dirty work, but it fits a pattern. It happened with AirPods and again with AirPower; the former has become wildly successful, while the latter was canned.

This announcement reflects rumours of the feature’s fraught development. Mark Gurman, Bloomberg, in February:

The company first unveiled plans for a new AI-infused Siri at its developers conference last June and has even advertised some of the features to customers. But Apple is still racing to finish the software, said the people, who asked not to be identified because the situation is private. Some features, originally planned for April, may have to be postponed until May or later, they said.

Do note how “or later” is doing the bulk of the work in this paragraph. Nevertheless, he seems to have forecasted the delay announced today.

While it is possible to reconcile Apple’s “coming year” timeline with Gurman’s May-or-later availability while staying within the current release cycle, the statement is a tacit acknowledgement these features are now slated for the next major versions of Apple’s operating systems, perhaps no sooner than a release early next year. I do not see why Apple would have issued this statement if it were confident it could ship personalized Siri before September’s new releases. That is a long time between marketing and release for any company, and particularly so for Apple.

This is a risk of announcing something before it is ready, something the WWDC keynote is increasingly filled with. Instead of monolithic September releases with occasional tweaks throughout the year, Apple adopted a more incremental strategy. I would like to believe this has made Apple’s software more polished — or, less charitably, slowed its quality decline. What it has actually done is turn Apple’s big annual software presentation into a series of promises to be fulfilled throughout the year. To its credit, it has almost always delivered and, so, it has been easy to assume the hot streak will continue. This is a good reminder we should treat anything not yet shipping in a release or beta build as a potential feature only.

The delay may ultimately be good news. It is better for Apple to ship features that work well than it is to get things out the door quickly. Investors do not seem bothered; try spotting the point on today’s chart when Gruber and Reuters published the statements they received. And, anyway, most Apple Intelligence features released so far seem rushed and faulty. I would not want to see more of the same. Siri has little reputation to lose, so it makes sense to get this round of changes more right than not.

Besides, Apple only just began including the necessary APIs in the latest developer betas of iOS 18.4. No matter when Apple gets this out, the expansiveness of its functionality is dependent on third-party buy-in. There was a time when developer adoption of new features was a given. That is no longer the case.

According to Gurman as recently as earlier this week, a May release is possible (Update: Oops, I should have checked again.), but I would bet against it. If his reporting is to be believed, however, the key features announced as delayed today require a wholly different architecture which Apple was planning on merging with the existing Siri infrastructure midway through the iOS 19 cycle. It seems possible to me the timeline on both projects could now be interlinked. After all, why not? It is not like Siri is getting worse. No rush.

Remember when new technology felt stagnant? All the stuff we use — laptops, smartphones, watches, headphones — coalesced around a similar design language. Everything became iterative or, in more euphemistic terms, mature. Attempts to find a new thing to excite people mostly failed. Remember how everything would change with 5G? How about NFTs? How is your metaverse house? The world’s most powerful hype machine could not make any of these things stick.

This is not necessarily a problem in the scope of the world. There should be a point at which any technology settles into a recognizable form and function. These products are, ideally, utilitarian — they enable us to do other stuff.

But here we are in 2025 with breakthroughs in artificial intelligence and, apparently, quantum computing and physics itself. The former is something I have written about at length already because it has become adopted so quickly and so comprehensively — whether we like it or not — that it is impossible to ignore. But the news in quantum computers is different because it is much, much harder for me to grasp. I feel like I should be fascinated, and I suppose I am, but mainly because I find it all so confusing.

This is not an explainer-type article. This is me working things out for myself. Join me. I will not get far.

Hartmut Neven, of Google, in December:

Today I’m delighted to announce Willow, our latest quantum chip. Willow has state-of-the-art performance across a number of metrics, enabling two major achievements.

  • The first is that Willow can reduce errors exponentially as we scale up using more qubits. This cracks a key challenge in quantum error correction that the field has pursued for almost 30 years.

  • Second, Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10²⁵) years — a number that vastly exceeds the age of the Universe.

Catherine Bolgar, Microsoft:

Microsoft today introduced Majorana 1, the world’s first quantum chip powered by a new Topological Core architecture that it expects will realize quantum computers capable of solving meaningful, industrial-scale problems in years, not decades.

It leverages the world’s first topoconductor, a breakthrough type of material which can observe and control Majorana particles to produce more reliable and scalable qubits, which are the building blocks for quantum computers.

Microsoft says it created a new state of matter and observed a particular kind of particle, both for the first time. In a twelve-minute video, the company defines this new era — called the “quantum age” — as a literal successor to the Stone Age and the Bronze Age. Jeez.

There is hype, and then there is hype. This is the latter. Even if it is backed by facts — I have no reason to suspect Microsoft is lying in large part because, to reiterate, I do not know anything about this — and even if Microsoft deserves this much attention, it is a lot. Maybe I have become jaded by one too many ostensibly world-changing product launches.

There is good reason to believe the excitement shown by Google and Microsoft is not pure hyperbole. The problem is neither company is effective at explaining why. As of writing the first sentence of this piece, my knowledge of quantum computers was only that they can be much, much, much faster than any computer today, thanks to the unique properties of quantum mechanics and, specifically, quantum bits. That is basically all. But what does a wildly fast computer enable in the real world? My brain can only grasp the consumer-level stuff I use, so I am reminded of something I wrote when the first Mac Studio was announced a few years ago: what utility does speed have?

I am clearly thinking in terms far too small. Domenico Vicinanza wrote a good piece for the Conversation earlier this year:

Imagine being able to explore every possible solution to a problem all at once, instead of one at a time. It would allow you to navigate your way through a maze by simultaneously trying all possible paths at the same time to find the right one. Quantum computers are therefore incredibly fast at finding optimal solutions, such as identifying the shortest path, the quickest way.

This explanation helped me — not a lot, but a little bit. What I remain confused by are the examples in the announcements from Google and Microsoft. Why quantum computing could help “discover new medicines” or “lead to self-healing materials” seems like it should be obvious to anyone reading, but I do not get it.

I am suspicious in part because technology companies routinely draw links between some new buzzy thing they are selling and globally significant effects: alleviating hunger, reducing waste, fixing our climate crisis, developing alternative energy sources, and — most of all — revolutionizing medical care. Search the web for (hyped technology) cancer and you can find this kind of breathless revolutionary language drawing a clear line between cancer care and 5G, 6G, blockchain, DAOs, the metaverse, NFTs, and Web3 as a whole. This likely speaks as much to insidious industries that take advantage of legitimate qualms with the medical system and fears of cancer as it does to the technologies themselves, but it is nevertheless a pattern with these new technologies.

I am not even saying these promises are always wrong. Technological advancement has surely led to improving cancer care, among other kinds of medical treatments.

I have no big goal for this post — no grand theme or message. I am curious about the promises of quantum computers for the same reason I am curious about all kinds of inventions. I hope they work in the way Google, Microsoft, and other inventors in this space seem to believe. It would be great if some of the world’s neglected diseases can be cured and we could find ways to fix our climate.

But — and this is a true story — I read through Microsoft’s various announcement pieces and watched that video while I was waiting on OneDrive to work properly. I struggle to understand how the same company that makes a bad file syncing utility is also creating new states of matter. My brain is fully cooked.

In November 2023, two researchers at the University of California, Irvine, and their supervisor published “Dazed and Confused”, a working paper about Google’s reCAPTCHAv2 system. They write mostly about how irritating and difficult it is to use, and also explore its privacy and labour costs — and it is that last section about which I had some doubts when I first noticed the paper being passed around in July.

I was content to leave it there, assuming this paper would be chalked up as one more curiosity on a heap of others on arXiv. It has not been subjected to peer review at any journal, as far as I can figure out, nor can I find another academic article referencing it. (I am not counting the dissertation by one of the paper’s authors summarizing its findings.) Yet parts of it are on their way to becoming zombie statistics. Mike Elgan, writing in his October Computerworld column, repeated the paper’s claim that “Google might have profited as much as $888 billion from cookies created by reCAPTCHA sessions”. Ted Litchfield of PC Gamer included another calculation alleging solving CAPTCHAs “consum[ed] 7.5 million kWhs of energy[,] which produced 7.5 million pounds of CO2 pollution”; the article is headlined reCAPTCHAs “[…] made us spend 819 million hours clicking on traffic lights to generate nearly $1 trillion for Google”. In a Boing Boing article earlier this month, Mark Frauenfelder wrote:

[…] Through analyzing over 3,600 users, the researchers found that solving image-based challenges takes 557% longer than checkbox challenges and concluded that reCAPTCHA has cost society an estimated 819 million hours of human time valued at $6.1 billion in wages while generating massive profits for Google through its tracking capabilities and data collection, with the value of tracking cookies alone estimated at $888 billion.

I get why these figures are alluring. CAPTCHAs are heavily studied; a search of Google Scholar for “CAPTCHA” returns over 171,000 results. As you might expect, most are adversarial experiments, but there are several examining usability and others examining privacy. However, I could find just one previous paper correlating, say, emissions and CAPTCHA solving, and it was a joke paper (PDF) from the 2009 SIGBOVIK conference, “the Association for Computational Heresy Special Interest Group”. Choice excerpt: “CAPTCHAs were the very starting point for human computation, a recently proposed new field of Computer Science that lets computer scientists appear less dumb to the world”. Excellent.

So you can see why the claims of the U.C. Irvine researchers have resonated in the press. For example, here is what they — Andrew Searles, Renascence Tarafder Prapty, and Gene Tsudik — wrote in their paper (PDF) about emissions:

Assuming un-cached scenarios from our technical analysis (see Appendix B), network bandwidth overhead is 408 KB per session. This translates into 134 trillion KB or 134 Petabytes (194 x 1024 Terrabytes [sic]) of bandwidth. A recent (2017) survey estimated that the cost of energy for network data transmission was 0.06 kWh/GB (Kilowatt hours per Gigabyte). Based on this rate, we estimate that 7.5 million kWh of energy was used on just the network transmission of reCAPTCHA data. This does not include client or server related energy costs. Based on the rates provided by the US Environmental Protection Agency (EPA) and US Energy Information Administration (EIA), 1 kWh roughly equals 1-2.4 pounds of CO2 pollution. This implies that reCAPTCHA bandwidth consumption alone produced in the range of 7.5-18 million pounds of CO2 pollution over 9 years.

Obviously, any emissions are bad — but how much is 7.5–18 million pounds of CO2 over nine years in context? A 2024 working paper from the U.S. Federal Housing Finance Agency estimated residential properties each produce 6.8 metric tons of CO2 emissions from electricity and heating, or about 15,000 pounds. That means CAPTCHAs produced as much CO2 as providing utilities to 55–133 U.S. houses per year. Not good, sure, but not terrible — at least, not when you consider the 408 kilobyte session transfer against, say, Google’s homepage, which weighs nearly 2 MB uncached. Realistically, CAPTCHAs are not a meaningful burden on the web or our environment.
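For scale, the paper’s totals and my housing comparison can be reproduced with some quick arithmetic. This is my own sanity-check script, not anything from the paper; every input is a figure cited above:

```python
# Back-of-the-envelope check of the reCAPTCHA emissions figures discussed above.
# Inputs come from the "Dazed and Confused" paper and the FHFA working paper.

total_kb = 134e12             # 134 trillion KB of reCAPTCHA bandwidth, per the paper
kwh_per_gb = 0.06             # 2017 estimate of network transmission energy cost
lbs_co2_per_kwh = (1.0, 2.4)  # EPA/EIA range cited by the paper
years = 9                     # period the paper covers

total_gb = total_kb / 1024**2           # KB -> GB, using binary units as the paper does
total_kwh = total_gb * kwh_per_gb       # roughly the paper's "7.5 million kWh"
co2_range = tuple(total_kwh * r for r in lbs_co2_per_kwh)

house_lbs_per_year = 6.8 * 2204.6       # 6.8 metric tons of CO2 in pounds (~15,000)
houses = tuple(c / years / house_lbs_per_year for c in co2_range)

print(f"energy: {total_kwh / 1e6:.1f} million kWh")
print(f"CO2: {co2_range[0] / 1e6:.1f}-{co2_range[1] / 1e6:.1f} million lbs")
print(f"equivalent U.S. houses per year: {houses[0]:.0f}-{houses[1]:.0f}")
```

The script lands on roughly 7.7 million kWh and the equivalent of about 57–136 houses per year; the small differences from the figures above come from rounding the per-house emissions to 15,000 pounds.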

The numbers in this discussion section are suspect. From these CO2 figures to the value of reCAPTCHA cookies — apparently responsible for nearly half of Google’s revenue since it acquired the company — I find the evidence for them lacking. Yet they continue to circulate in print and, now, in a Vox-esque mini documentary.

The video, on the CHUPPL “investigative journalism” YouTube channel, was created by Jack Joyce. I found it via Frauenfelder, of Boing Boing, and it was also posted by Daniel Sims at TechSpot and Emma Roth at the Verge. The roughly 17-minute mini-doc has been watched nearly 200,000 times, and the CHUPPL channel has over 350,000 subscribers. Neither number is massive for YouTube, but it is not a small number of viewers, either. Four of the ten videos from CHUPPL have achieved over a million views apiece. This channel has a footprint. But watching the first half of its reCAPTCHA video is what got me to open BBEdit and start writing this thing. It is a masterclass in how the YouTube video essay format and glossy production can mask bad journalism. I asked CHUPPL several questions about this video and did not receive a response by the time I published this.

Let me begin at the beginning:

How does this checkbox know that I’m not a robot? I didn’t click any motorcycles or traffic lights. I didn’t even type in distorted words — and yet it knew. This infamous tech is called reCAPTCHA and, when it comes to reach, few tools rival its presence across the web. It’s on twelve and a half million websites, quietly sitting on pages that you visit every day, and it’s actually not very good at stopping bots.

While Joyce provides sources for most claims in this video, there is not one for this specific number. According to BuiltWith, which tracks technologies used on websites, the claim is pretty accurate — it sees reCAPTCHA used on about twelve million websites, and it is the most popular CAPTCHA script.

But Google has far more popular products than reCAPTCHA if it wants to track you across the web. Google Maps, for example, is on over 15 million live websites, Analytics is on around 31 million, and AdSense is on nearly 49 million. I am not saying that we should not be concerned about reCAPTCHA because it is on only twelve million sites, but that number needs context. If Google wants to track user activity across the web, AdSense is explicitly designed for that purpose. Yes, it is probably true that “few tools rival its presence across the web”, but you can say that of just about any technology from Google, Meta, Amazon, Cloudflare, and a handful of other giants — but, especially, Google.

Back to the intro:

It turns out reCAPTCHA isn’t what we think it is, and the public narrative around reCAPTCHA is an impossibly small sliver of the truth. And by accepting that sliver as the full truth, we’ve all been misled. For months, we followed the data, we examined glossed over research, and uncovered evidence that most people don’t know exists. This isn’t the story of an inconsequential box. It’s the story of a seemingly innocent tool and how it became a gateway for corporate greed and mass surveillance. We found buried lawsuits, whispers of the NSA, and echoes of Edward Snowden. This is the story of the future of the Internet and who’s trying to control it.

The claims in this introduction vastly oversell what will be shown in this video. The lawsuits are not “buried”; they were linked from the reCAPTCHA Wikipedia article as it appeared before the video was published. The “whispers” and “echoes” of mass surveillance disclosures will prove to be based on almost nothing. There are real concerns with reCAPTCHA, and this video does justice to almost none of them.

The main privacy problems with reCAPTCHA are found in its ubiquity and its ownership. Google swears up and down it collects device and user behaviour data through reCAPTCHA only for better bot detection. It issued a statement saying as much to Techradar in response to the “Dazed and Confused” paper circulating again. In a 2021 blog post announcing reCAPTCHA Enterprise — the latest version combining V2, V3, and the mobile SDKs under a single brand — Google says:

Today, reCAPTCHA Enterprise is a pure security product. Information collected is used to provide and improve reCAPTCHA Enterprise and for general security purposes. We don’t use this data for any other purpose.

[…] Additionally, none of the data collected can be used for personalized advertising by Google.

Google goes on to explain that it collects data as a user navigates through a website to help determine if they are a bot without having to present a challenge. Again, it is adamant none of this data is used to feed its targeted advertising machine.

There are a couple of problems with this. First, because Google does not disclose exactly how reCAPTCHA works, its promise requires that you trust the company. It is not a great idea to believe the word of corporations in general. Specifically, in Google’s case, a leak of its search ranking signals last year directly contradicted its public statements. But, even though Google was dishonest then, there is currently no evidence reCAPTCHA data is being misused in the way Joyce’s video suggests. Coyly asking questions with sinister-sounding music underneath is not a substitute for evidence.

The second problem is the way Google’s privacy policy can be interpreted, as reported by Thomas Claburn in 2020 in the Register:

Zach Edwards, co-founder of web analytics biz Victory Medium, found that Google’s reCAPTCHA’s JavaScript code makes it possible for the mega-corp to conduct “triangle syncing,” a way for two distinct web domains to associate the cookies they set for a given individual. In such an event, if a person visits a website implementing tracking scripts tied to either those two advertising domains, both companies would receive network requests linked to the visitor and either could display an ad targeting that particular individual.

You will hear from Edwards later in Joyce’s video making a similar argument. Just because Google can do this, it does not mean it is actually doing so. It has the far more popular AdSense for that.

ReCAPTCHA interacts with three Google cookies when it is present: AEC, NID, and OGPC. According to Google, AEC is “used to detect spam, fraud, and abuse” including for advertising click fraud. I could not find official documentation about OGPC, but it and NID appear to be used for advertising for signed-out users. Of these, NID is most interesting to me because it is also used to store Google Search preferences, so someone who uses Google’s most popular service is going to have it set regardless, and its value is fixed for six months. Therefore, it is possible to treat it as a unique identifier for that time.

I could not find a legal demand of Google specifically for reCAPTCHA history. But I did find a high-profile request to re-identify NID cookies. In 2017, the first Trump administration began seizing records from reporters, including those from the New York Times. The Times uses Google Apps for its email system. That administration and then the Biden one tried obtaining email metadata, too, while preventing Times executives from disclosing anything about it. In the warrant (PDF), the Department of Justice demands of Google:

PROVIDER is required to disclose to the United States the following records and other information, if available, for the Account(s) for the time period from January 14, 2017, through April 30, 2017, constituting all records and other information relating to the Account(s) (except the contents of communications), including:

[…]

Identification of any PROVIDER account(s) that are linked to the Account(s) by cookies, including all PROVIDER user IDs that logged into PROVIDER’s services by the same machine as the Account(s).

And by “cookies”, the government says that includes “[…] cookies related to user preferences (such as NID), […] cookies used for advertising (such as NID, SID, IDE, DSID, FLC, AID, TAID, and exchange_uid) […]” plus Google Analytics cookies. This is not the first time Google’s cookies have been used in intelligence or law enforcement matters — the NSA has, of course, been using them that way for years — but it is notable for being an explicit instance of tying the NID cookie, which is among those used with reCAPTCHA, to a user’s identity. (Google says site owners can use a different reCAPTCHA domain to disassociate its cookies.) Also, given the effort of the Times’ lawyers to release this warrant, it is not surprising I was unable to find another public document containing similar language. I could not find any other reporting on this cookie-based identification effort, so I think this is news. In this case, Google successfully fought the government’s request for email metadata.

Assuming Google retains these records, what the Department of Justice was demanding would be enough to connect a reCAPTCHA user to other Google product activity and a Google account holder using the shared NID cookie. Furthermore, it is a problem that so much of the web relies on a relative handful of companies. Google has long treated the open web as its de facto operating system, coercing site owners to use features like AMP or making updates to comply with new search ranking guidelines. It is not just Google that is overly controlling, to be fair — I regularly cannot access websites on my iMac because Cloudflare believes I am a robot and it will not let me prove otherwise — but it is the most significant example. Its fingers in every pie — from site analytics, to fonts, to advertising, to maps, to browsers, to reCAPTCHA — mean it has a unique vantage point from which to see how billions of people use the web.

These are actual privacy concerns, but you will learn none of them from Joyce’s video. You will instead be on the receiving end of a scatterbrained series of suggestions of reCAPTCHA’s singularly nefarious quality, driven by just-asking-questions conspiratorial thinking, without reaching a satisfying destination.

From here on, I am going to use timecodes as reference points. 1:56:

Journalists told you such a small sliver of the truth that I would consider it to be deceptive.

Bad news: Joyce is about to be fairly deceptive while relying on the hard work of journalists.

At 3:24:

Okay, you’re probably thinking “why does any of this matter?”, and I agree with you.

I did agree with you. I actually halted this investigation for a few weeks because I thought it was quite boring — until I went to renew my passport. (Passport status dot state dot gov.)

I got a CAPTCHA — not a checkbox, not fire hydrants, but the old one. And I clicked it. And it took me here.

The “here” Joyce mentions is a page at captcha.org, which is redirected from its original destination at captcha.com. The material is similar on both. The ownership of the .org domain is unclear, but the .com is run by Captcha, Inc., and it sells the CAPTCHA package used by the U.S. Department of State among other government departments. I have a sneaking suspicion the .org retains some ties to Captcha, Inc. given the DNS records of each. Also, the list of CAPTCHA software on the .org site begins with all the packages offered by Captcha, Inc., and its listing for reCAPTCHA is outdated — it does not display Google as its owner, for example — but the directory’s operators found time to add the recaptcha.sucks website.

About that. 4:07:

An entire page dedicated to documenting the horrors of reCAPTCHA: alleging national security implications for the U.S. and foreign governments, its ability to doxx users, mentioning secret FISA orders — the same type of orders that Edward Snowden risked his life to warn us about. […]

Who put this together? “Anonymous”.

if you are a web-native journalist, wishing to get in touch, we doubt you are going to have a hard-time figuring out who we are anyway.

This felt like a key left in plain sight, whispering there’s a door nearby and it’s meant to be opened. This is what we’re good at. This is what we do.

The U.S. “national security implications” are, as you can see on screen as these words are being said, not present: “stay tuned — it will be continued”, the message from ten years ago reads. The FISA reference, meanwhile, is a quote from Google’s national security requests page acknowledging the types of data it can disclose under these demands. It is a note that FISA exists and, under that law, Google can be compelled to disclose user data — a policy that applies to every company.

This all comes from the ReCAPTCHA Sucks website. On the About page, the site author acknowledges they are a competitor and maintains their anonymity is due to trademark concerns:

a free-speech / gripe-site on trademarked domains must not be used in a ‘bad faith’ — what includes promotion of competing products and services.

and under certain legal interpretations disclosing of our identity here might be construed as a promotion of our own competing captcha product or service.

it frustrates us indeed, but those are the rules of the game.

The page concludes, as Joyce quoted:

if you are a web-native journalist, wishing to get in touch, we doubt you are going to have a hard-time figuring out who we are anyway.

Joyce reads this as a kind of spooky challenge yet, so far as I can figure out, did not attempt to contact the site’s operators. I asked CHUPPL about this and I have not heard back. It is not very difficult to figure out who they are. The site has a shared technical infrastructure, including a historic Google Analytics account, with captcha.com. It feels less like the work of a super careful anonymous tipster, and more like an open secret from an understandably cheesed competitor.

5:05:

Okay, let’s get this out of the way: reCAPTCHA is not and really has never been very good at stopping bots.

Joyce points to the success rate of a couple of reCAPTCHA breakers here as evidence of its ineffectiveness, though he does not mention they were both used against the audio version. What Joyce does not establish is whether these programs were used much in the real world.

In 2023, Trend Micro published research into the way popular CAPTCHA solving services operate. Despite the seemingly high success rate of automated techniques, “they break CAPTCHAs by farming out CAPTCHA-breaking tasks to actual human solvers”, since these services must handle many more CAPTCHA schemes than just reCAPTCHA. That is exactly how many CAPTCHA solvers market their services, though some are now saying they use A.I. instead. Also, it is not as though other types of CAPTCHAs are immune to similar threats. In 2021, researchers solved hCAPTCHA (PDF) with a nearly 96% success rate. Being only okay at stopping bot traffic is not unique to reCAPTCHA, and these tools are just one of several technologies used to minimize automated traffic. And, true enough, none of these techniques is perfect, or even particularly successful. But that does not mean their purpose is nefarious, as Joyce suggests later in the video, at 11:45:

Google has said that they don’t use the data collected from reCAPTCHA for targeted advertising, which actually scares me a bit more. If not for targeted ads, which is their whole business model, why is Google acting like an intelligence agency?

Joyce does not answer this directly, instead choosing to speculate about a way reCAPTCHA data could be used to identify people who submit anonymous tips to the FBI — yes, really. More on that later.

5:49:

2018 was the launch of V3. According to researchers at U.C. Irvine, there’s practically no difference between V2 and V3.

Onscreen, Joyce shows an excerpt from the “Dazed and Confused” paper, and the sentence fragment “there is no discernable difference between reCAPTCHAv2 and reCAPTCHAv3” is highlighted. But just after that, you can see the sentence continues: “in terms of appearance or perception of image challenges and audio challenges”.

Screenshot from CHUPPL video showing excerpt from an academic paper.

Remember: these researchers were mainly studying the usability of these CAPTCHAs. This section is describing how users perceive the similar challenges presented by both versions. They are not saying V2 and V3 have “practically no difference” in general terms.

At 6:56:

ReCAPTCHA “takes a pixel-by-pixel fingerprint” of your browser. A real-time map of everything you do on the internet.

This part contains a quote from a 2015 Business Insider article by Lara O’Reilly. O’Reilly, in turn, cites research by AdTruth, then — as now — owned by Experian. I can find plenty of references to O’Reilly’s article but, try as I might, I have not been able to find a copy of the original report. But, as a 2017 report from Cracked Labs (PDF) points out, Experian’s AdTruth “provides ‘universal device recognition’”, “creat[ing] a ‘unique user ID’ for each device, by collecting information such as IP addresses, device models and device settings”. To the extent “pixel-by-pixel fingerprint” means anything in this context — it does not, but it misleadingly sounds to me like it is taking screenshots — Experian’s offering also fits that description. It is a problem that so many things quietly monitor user activity across their entire digital footprint.

Unfortunately, at 7:41, Joyce whiffs hard while trying to make this point:

If there’s any part of this video you should listen to, it’s this. Stop making dinner, stop scrolling on your phone, and please listen.

When I tell you that reCAPTCHA is watching you, I’m not saying that in some abstract, metaphorical way. Right now, reCAPTCHA is watching you. It knows that you’re watching me. And it doesn’t want you to know.

This stumbles in two distinct ways. First, reCAPTCHA is owned by Google, but so is YouTube. Google, by definition, knows what you are doing on YouTube. It does not need reCAPTCHA to secretly gather that information, too.

Second, the evidence Joyce presents for why “it doesn’t want you to know” is that Google has added some CSS to hide a floating badge, a capability it documents. This applies to one presentation of reCAPTCHAv2: invisible background validation, where a checkbox is shown only to suspicious users.

Screenshot from CHUPPL video.

I do not think Google “does not want you to know” about reCAPTCHA on YouTube. I think it thinks the badge is distracting. That Google products use other Google technologies has not been a unique concern since the company merged user data and privacy policies in 2012.

The second half of the video, following the sponsor read, is a jumbled mess of arguments. Joyce spends time on a 2015 class action lawsuit filed against Google in Massachusetts alleging completing the old-style word-based reCAPTCHA was unfairly using unpaid labour to transcribe books. It was tossed in 2016 because the plaintiff (PDF) “failed to identify any statute assigning value to the few seconds it takes to transcribe one word”, and “Google’s profit is not Plaintiff’s damage”.

Joyce then takes us on a meandering journey through the way Google’s terms of use document is written — this is where we hear from Edwards reciting the same arguments as appeared in that 2020 Register article — and he touches briefly on the U.S. v. Google antitrust trial, none of which concerned reCAPTCHA. There is a mention of a U.K. audit in 2015 specifically related to its 2012 privacy policy merger. This is dropped with no explanation into the middle of Edwards’ questioning of what Google deems “security related” in the context of its current privacy policy.

Then we get to the FBI stuff. Remember earlier when I told you Joyce has a theory about how Google uses reCAPTCHA to unmask FBI tipsters? Here is when that comes up again:

Check this out: if you want to submit a tip to the FBI, you’re met with this notice acknowledging your right to anonymity. But even though the State Department doesn’t use reCAPTCHA, the FBI and the NSA do. […] If they want to know who submitted the anonymous report, Google has to tell them.

This is quite the theory. There is video of Edward Snowden and clips from news reports about the mysteries of the FISA court. Dramatic music. A chart of U.S. government requests for user data from Google.

But why focus on reCAPTCHA when the FBI and NSA — and a whole bunch of other government sites — also use Google Analytics? Though Google says Analytics cookies are distinct from those used by its advertising services, site owners can link them together, which would not be obvious to users. There is no evidence the FBI or any other government agency is doing so. The actual problem here is that sensitive and ostensibly anonymous government sites are using any Google services whatsoever, probably because Google is a massive corporation with lots of widely used products and services.

Even so, many federal sites use the product offered by Captcha, Inc., which seems to respect privacy by being self-hosted. All of them should just use that. The U.S. government has its own analytics service; the stats are public. The reason for inconsistencies is probably the same reason any massive organization’s websites are fragmented: it is a lot of work to keep them unified.

Non-U.S. government sites are not much better. RCMP Alberta also uses Google Analytics, though not reCAPTCHA, as does London’s Metropolitan Police.

Joyce juxtaposes this with the U.S. Secret Service’s use of Babel Street’s Locate X data. He does not explain any direct connection to reCAPTCHA or Google, and there is a very good reason for this: there is none. Babel Street obtained some of its location data from Venntel, which is owned by Gravy Analytics, which obtained it from personalized ads.

Joyce ultimately settles on a good point near the end of the video, saying Google uses various browsing signals “before, during, and after” clicking the CAPTCHA to determine whether you are likely human. If it does not have enough information about you — “you clear your cookies, you are browsing Incognito, maybe you are using a privacy-focused browser” — it is more likely to challenge you.

None of this is actually news. It has all been disclosed by Google itself on its website and in a 2014 Wired article by Andy Greenberg, linked from O’Reilly’s Business Insider story. This is what Joyce refers to at 7:24 in the video in saying “reCAPTCHA doesn’t need to be good at stopping bots because it knows who you are. The new reCAPTCHA runs in the background, is invisible, and only shows challenges to bots or suspicious users”. But that is exactly how reCAPTCHA stops bots, albeit not perfectly: it either knows who you are and lets you through without a challenge, or it asks you for confirmation.

This is the very frustration I feel as I try to protect my privacy while still using the web. I hit reCAPTCHA challenges frequently, especially when working on something like this article, in which I often relied on Google’s superior historical index and advanced search operators to look up stories from ten years ago. As I wrote earlier, I run into Cloudflare’s bot wall constantly on one of my Macs but not the other, and I often cannot bypass it without restarting my Mac or, ironically enough, using a private browsing window. Because I use Safari, website data is deleted more frequently, which means I am constantly logging into services I use all the time. The web becomes more cumbersome to use when you want to be tracked less.

There are three things I want to leave you with. First, there is an interesting video to be made about the privacy concerns of reCAPTCHA, but this is not it. It is missing evidence, does not put findings in adequate context, and drifts conspiratorially from one argument to another while only gesturing at conclusions. Joyce is incorrect in saying “journalists told you such a small sliver of the truth that I would consider it to be deceptive”. In fact, they have done the hard work over many years to document Google’s many privacy failures — including in reCAPTCHA. That work should bolster understandable suspicions about massive corporations ruining our right to privacy. This video is technically well produced, but it is of shoddy substance. It does not do justice to the better journalists whose work it relies upon.

Second, CAPTCHAs offer questionable utility. As iffy as I find the data in the discussion section of the “Dazed and Confused” paper, its other findings seem solid: people find it irritating to label images or select boxes containing an object. A different paper (PDF) with two of the same co-authors and four other researchers found people most like reCAPTCHA’s checkbox-only presentation — the one that necessarily compromises user privacy — but also found some people will abandon tasks rather than solve a CAPTCHA. Researchers in 2020 (PDF) found CAPTCHAs were an impediment to people with visual disabilities. This is bad. Unfortunately, we are in a new era of mass web scraping — one reason I was able to so easily find many CAPTCHA solving services. Site owners wishing to control that kind of traffic have options like identifying user agents or I.P. address strings, but all of these can be defeated. CAPTCHAs can, too. Sometimes, all you can do is pile together a bunch of bad options and hope the result is passable.

Third, this is yet another illustration of how important it is for there to be strong privacy legislation. Nobody should have to question whether checking a box to prove they are not a robot is, even in a small way, feeding a massive data mining operation. We are never going to make progress on tracking as long as it remains legal and lucrative.

Do you remember the “Twitter Files”?

I completely understand if you do not. Announced with great fanfare by Elon Musk after his eager-then-reluctant takeover of the company, writers like Lee Fang, Michael Shellenberger, Rupa Subramanya, Matt Taibbi, and Bari Weiss were permitted access to internal records of historic moderation decisions. Each published long Twitter threads dripping in gravitas about their discoveries.

But after stripping away the breathless commentary and just looking at the documents as presented, Twitter’s actions did not look very evil after all. Clumsy at times, certainly, but not censorial — just normal discussions about moderation. Contrary to Taibbi’s assertions, the “institutional meddling” was research, not suppression.

Now, Musk works for the government’s DOGE temporary organization and has spent the past two weeks — just two weeks — creating chaos with vast powers and questionable legality. But that is just one of his many very real jobs. Another is his ownership of X, where he also holds an executive role. Today, he decided to accuse another user of committing a crime, and used his power to suspend their account.

What was their “crime”? They quoted a Wired story naming six very young people who apparently have key roles at DOGE despite their lack of experience. The full tweet read:1

Here’s a list of techies on the ground helping Musk gaining and using access to the US Treasury payment system.

Akash Bobba

Edward Coristine

Luke Farritor

Gautier Cole Killian

Gavin Kliger

Ethan Shaotran

I wonder if the fired FBI agents may want dox them and maybe pay them a visit.

In the many screenshots I have seen of this tweet, few seem to include the last line as it is cut off by the way X displays it. Clicking “Show more” would have displayed it. It is possible to interpret this as violative of X’s Abuse and Harassment rules, which “prohibit[s] behavior that encourages others to harass or target specific individuals or groups of people with abuse”, including “behavior that urges offline action”.

X, as Twitter before it, enforces these policies haphazardly. The same policy also “prohibit[s] content that denies that mass murder or other mass casualty events took place”, but searching “Sandy Hook” or “Building 7” turns up loads of tweets which would presumably also run afoul. Turns out moderation of a large platform is hard and the people responsible sometimes make mistakes.

But the ugly suggestion made in that user’s post might not rise to the level of a material threat — a “crime”, as it were — and, so, might still be legal speech. Musk’s X also suspended a user who just posted the names of public servants. And Musk is currently a government employee in some capacity. The “Twitter Files” crew, ostensibly concerned about government overreach at social media platforms, should be furious about this dual role and heavy-handed censorship.

It was at this point in drafting this article that Mike Masnick of Techdirt published his impressions much faster than I could turn it around. I have been bamboozled by my day job. Anyway:

Let’s be crystal clear about what just happened: A powerful government official who happens to own a major social media platform (among many other businesses) just declared that naming government employees is criminal (it’s not) and then used his private platform to suppress that information. These aren’t classified operatives — they’re public servants who, theoretically, work for the American people and the Constitution, not Musk’s personal agenda.

This doesn’t just “seem like” a First Amendment issue — it’s a textbook example of what the First Amendment was designed to prevent.

So far, however, we have seen from the vast majority of them no exhausting threads, no demands for public hearings — in fact, barely anything. To his extremely limited credit, Taibbi did acknowledge it is “messed up”, going on to write:

That new-car free speech smell is just about gone now.

“Now”?

Taibbi is the only one of those authors who has written so much as a tweet about Musk’s actions. Everyone else — Fang, Shellenberger, Subramanya, and Weiss — has moved on to unsubstantive commentary about newer and shinier topics.

This is not mere hypocrisy. What Musk is doing is a far more explicit blurring of the lines between government power and platform speech permissions. This could be an interesting topic that a writer on the free speech beat might want to explore. But for a lot of them, it would align them too similarly to mainstream reporting, and their models do not permit that.

It is one of the problems with being a shallow contrarian. These writers must position themselves as alternatives to mainstream news coverage — “focus[ing] on stories that are ignored or misconstrued in the service of an ideological narrative”, “for people who dare to think for themselves”. How original. They suggest they cannot cover the same news — or, at least, not from a similar perspective — as the mainstream. This is not actually true, of course: each of them frequently publishes hot takes about high-profile stories along their particular ideological bent, which often coincide with standard centre-right to right-wing thought. They are not unbiased. Yet this widely covered story has either escaped their attention, or they have mostly decided it is not worth mentioning.

I am not saying this is a conspiracy among these writers, or that they are lackeys for Musk or Trump. What I am saying is that their supposed principles are apparently only worth expressing when they are able to paint them as speaking truth to power, and their concept of power is warped beyond recognition. It goes like this: some misinformation researchers partially funded by government are “power”, but using the richest man in the world as a source is not. It also goes like this: when that same man works for the government in a quasi-official capacity and also owns a major social media platform, it is not worth considering those implications because Rolling Stone already has an article.

They can prove me wrong by dedicating just as much effort to exposing the blurrier-than-ever lines between a social media platform and the U.S. government. Instead, they are busy reposting glowing profiles of now-DOGE staff. They are not interested in standing for specific principles when knee-jerk contrarianism is so much more thrilling.


  1. There are going to be a lot of x.com links in this post, as it is rather unavoidable. ↥︎

The downfall of Quartz is really something to behold. It was launched in 2012 as a digital-only offshoot of the Atlantic specifically intended for business and economic news. It compared itself to esteemed publications like the Economist and Financial Times, and had a clever-for-early-2010s URL.1 It had an iPad-first layout. Six years later, it and “its own bot studio” were sold to Uzabase for a decent sum. But the good times did not last, and Quartz was eventually sold to G/O Media.

Riley MacLeod, Aftermath:

As of publishing, the “Quartz Intelligence Newsroom” has written 22 articles today, running the gamut from earnings reports to Reddit communities banning Twitter posts to the Sackler settlement to, delightfully, a couple articles about how much AI sucks. Quartz has been running AI-generated articles for months, but prior to yesterday, they appear to have been limited to summaries of earnings reports rather than news articles. Boilerplate at the bottom of these articles notes that “This is the first phase of an experimental new version of reporting. While we strive for accuracy and timeliness, due to the experimental nature of this technology we cannot guarantee that we’ll always be successful in that regard.”

MacLeod published this story last week, and I thought it would be a good time to check in on how it is going. So I opened the latest article from the “Quartz Intelligence Newsroom”, “Expected new tariffs will mean rising costs for everyday items”. It was published earlier today, and says at the top it “incorporates reporting from Yahoo, NBC Chicago and The Wall Street Journal on MSN.com”. The “Yahoo” story is actually a syndicated video from NBC’s Today Show, so that is not a great start as far as crediting sources goes.

Let us tediously dissect this article, beginning with the first paragraph:

As new tariffs are slated to take effect in early March, consumers in the U.S. can expect price increases on a variety of everyday items. These tariffs, imposed in a series of trade policy shifts, are anticipated to affect numerous sectors of the economy. The direct cost of these tariffs is likely to be passed on to consumers, resulting in higher prices for goods ranging from electronics to household items.

The very first sentence of this article appears to be wrong. The tariffs in question are supposed to be announced today, as stated in that Today Show clip, and none of the cited articles say anything about March. While a Reuters “exclusive” yesterday specified a March 1 enforcement date, the White House denied that report, with the president saying oil and gas tariffs would begin “around the 18 of February”.

To be fair to the robot writing the Quartz article, the president does not know what he is talking about. You could also see how a similar mistake could be made by a human being who read the Reuters story or has sources saying something similar. But the Quartz article does not cite Reuters — it, in fact, contains no links aside from those in the disclaimer quoted above — nor does it claim to have any basis for saying March.

The next paragraph is where things take a sloppier turn; see if you can spot it:

Data from recent analyses indicate that electronics, such as smartphones and laptops, will be among the most impacted by the new tariffs. Importers of these goods face increased costs, which they are poised to transfer to consumers. A report by the U.K.-based research firm Tech Analytics suggests that consumers might see price hikes of up to 15% on popular smartphone models and up to 10% on laptops. These increases are expected to influence consumer purchasing decisions, possibly leading to a decrease in sales volume.

If you are wondering why an article about U.S. tariffs published by a U.S. website is citing a U.K. source, you got the same weird vibe as I did. So I looked it up. As best I can tell, there is no U.K. research organization called “Tech Analytics” — none at all. There used to be one and, because it was only dissolved in October, it is possible it produced a report around then based on the president’s campaign statements. But I cannot find any record of Tech Analytics publishing anything whatsoever, or being cited in any news stories. This report does not exist.

I also could not find any source for the figures in this paragraph. Last month, the U.S. Consumer Technology Association published a report (PDF) exploring the effects of these tariffs on U.S. consumer goods. Analysis by Trade Partnership Worldwide indicated the proposed tariffs would raise the price of smartphones by 26–37%, and laptops by 46–68%. These figures assumed a rate of 70–100% on goods from China because that is what the president said he would do. He more recently said 10% tariffs should be expected, and that could mean smartphone prices really do increase by the amount in the Quartz article. However, there is again no (real) source or citation for those numbers.

As far as I can tell, Quartz, a business and financial news website, published a made-up source and some numbers in an article about a high-profile story. If a real person reviewed this story before publication, their work is not evident. Why should a reader trust anything from Quartz ever again?

Let us continue a couple of paragraphs later:

The automotive sector is also preparing for the impact of increased tariffs. Car manufacturers and parts suppliers are bracing for higher production costs as tariffs on imported steel and aluminum take hold. According to a February report from the Automobile Manufacturers Association of the U.S., vehicle prices might go up by an average of $1,500. This increase stems from the higher costs of materials that are critical to vehicle manufacturing and assembly.

Does the phrase “according to a February report” sound weird to you on the first of February? It does to me, too. Would it surprise you if I told you the “Automobile Manufacturers Association of the U.S.” does not exist? There was a U.S. trade group by the name of “Automobile Manufacturers Association” until 1999, according to Stan Luger in “Corporate Power, American Democracy, and the Automobile Industry”.2 There are also several current industry groups, none of which are named anything similar. This organization and its report do not exist. If they do, please tell me, but I found nothing relevant.

What about the figure itself, though — “vehicle prices might go up by an average of $1,500”? Again, I cannot find any supporting evidence. None of the sources cited in this article contain this number. A November Bloomberg story cites a Wolfe Research note in reporting new cars will be about $3,000 more expensive, not $1,500, at the same proposed rate as the White House is expected to announce today.

Again, I have to ask why anyone should trust Quartz with their financial news. I know A.I. makes mistakes and, as MacLeod quotes them saying, Quartz does too: “[w]hile we strive for accuracy and timeliness, due to the experimental nature of this technology we cannot guarantee that we’ll always be successful in that regard”.

This is the first article I checked, and I gave up after the fourth paragraph and two entirely fictional sources of information. Maybe the rest of the Quartz Intelligence Newsroom’s output is spotless and I got unlucky.

But — what a downfall for Quartz. Once positioning itself as the Economist for the 2010s, it is now publishing stuff that is made up by a machine and, apparently, is passed unchecked to the web for other A.I. scrapers to aggregate. G/O Media says it publishes “editorial content and conduct[s its] day-to-day business activities with the UTMOST INTEGRITY”. I disagree. I think we will struggle to understand for a long time how far and how fast standards have fallen. This is trash.


  1. Do not look at your address bar right now. ↥︎

  2. Yes, it is the citation on Wikipedia, but I looked it up for myself and confirmed it with a copy of the book. Pages 155–156. ↥︎

With this week’s public release of Apple’s operating system updates, Apple Intelligence is now on by default. More users will be discovering its “beta” features and Apple will, in theory, be collecting even more feedback about their quality. There are certainly issues with the output of Notification Summaries, Siri, and more.1 The flaws in results from Apple Intelligence’s many features are correctly scrutinized. Because of that, I think some people have overlooked the questionable user interface choices.

Not one of the features so far available through Apple Intelligence is particularly newsworthy from a user’s perspective. There are plenty of image generators, automatic summaries, and contextual response suggestions in other software. Apple is not breaking new ground in features, nor in strategy. It is rarely first to do anything. What it excels at is implementation. Apple often makes some feature or product, however time-worn by others, feel so well-considered it has reached its inevitable form. That is why it is so baffling to me to use features in the Apple Intelligence suite and feel like they are half-baked.

Consider, for example, Writing Tools. This is a set of features available on text in almost any application to proofread it, summarize it, and rewrite it in different styles. You may have seen it advertised. While its name implies the source text is editable, these tools will work on pretty much any non-U.I. text — they work on webpages and in PDF files, but I was not able to make them work with text detected in PNG screenshots.

What this looks like on my Mac, sometimes, is a blue button beside text I have highlighted. This is not consistent — this button appears in MarsEdit but not Pages; TextEdit but not BBEdit. These tools are also available from a contextual menu, which is the correct place in MacOS for taking actions upon a selection.

In any case, Writing Tools materializes in a popover. Despite my enabling of Reduce Transparency across the system, it launches with a subtle Apple Intelligence gradient background that makes it look translucent before it fades out. This popover works a little bit like a contextual menu and a little like a panel while doing the job of neither very successfully. Any action taken from this popover will spawn another popover. For example, selecting “Proofread” will close the Writing Tools popover and open a new, slightly wider one. After some calculation, the proofread selection will appear alongside buttons for “Replace”, “Copy”, and providing feedback. (I anticipate the latter is a function of the “beta” caveat and will eventually be removed.)

There are several problems with this, beginning with the choice to present this as a series of popovers. It is not entirely inappropriate; Apple says “[i]f you need content only temporarily, displaying it in a popover can help streamline your interface”. However, because popovers are intended for only brief interactions, they are designed to be easily dismissed, something Apple also acknowledges in its documentation. Popovers disappear if you click outside their bounds, if you switch to another window, or if you try to take an action after scrolling the highlighted text out of view. Apple has also chosen not to cache the results of one of these tools on a passage of selected text. What can easily happen, therefore, is that a user will select some text, run Proofread on it, and then — quite understandably — try to edit the text or switch to a different application, only to find the writing tool has disappeared, and that opening it again will necessitate processing the text again. To keep the output, a user must select the resulting text in the popover or use the “Replace” or “Copy” buttons.

Unlike some other popovers in MacOS — like when you edit an event in Calendar — Writing Tools cannot exist as a floating, disconnected panel. It remains stubbornly attached to the selected text.

As noted, the Writing Tools popover is not the same width as the other popovers it will spawn. By sheer luck, I had one of my test windows positioned in such a way that the Writing Tools popover had enough space to display on the lefthand side of the window, but the popovers it launched appeared on the right because they are a bit wider. This made for a confusing and discordant experience.

Choice of component aside, the way the results of Writing Tools are displayed is so obviously lacklustre I am surprised it shipped in its current state. Two of the features I assumed I would find useful — as I am one person shy of an editor — are “Proofread” and “Rewrite”. But they both have a critical flaw: neither shows the differences between the original text and the changed version. For very short passages, this is not much of a problem, but a tool like “Proofread” implies use on more substantial chunks, or even a whole document. A user must carefully review the rewritten text to discover what changes were made, or place their faith in Apple and click the “Replace” button hoping all is well.

Apple could correct for all of these issues. It could display Writing Tools in a panel instead of a popover or, at least, make it possible to disconnect the popover from the selected text and transform it into a panel. It should also make every popover the same width or require enough clearance for the widest popover spawned by Writing Tools so that they always open on the same side. It could bring to MacOS the same way of displaying differences in rewritten text as already exists on iOS but, for some reason, is not part of the Mac version. It could cache results so, if the text is unchanged, invoking the same tool again does not need to redo a successful action.

Writing Tools on MacOS is the most obviously flawed of the Apple Intelligence features suffering from weak implementation or questionable U.I. choices, but there are other examples, too. Some quick hits:

  • I could not figure out how to get Image Playground to generate an illustration of my dog, something I know is possible. On my iPhone, the toolbar in Image Playground shows a box to “describe an image”, a “People” button, and a plus button. The “People” button is limited to human beings detected in your photo library, even though Photos groups “People & Pets” together. Describing an image using my dog’s name also does not work. The way to do it is to tap the plus button — which contains a “Style” selector and buttons to choose or take a photo — then select “Choose Photo” to pick something from your library as a reference.

    This is somewhat more obvious in the Mac version because the toolbar is wide enough to fit the “Style” selector and, therefore, the plus button is labelled with a photo icon.

  • Also in Image Playground, I find the try-and-see approach as much fun as it is with Siri. I typed my dog’s breed into the image prompt, and it said it does not support the language. I then picked one photo of my dog from my photo library and it said it was “unable to use that description”. I wish the photo picker would not have shown me an option it was unable to use.

  • Automatic replies in Messages are unhelpful and, on MacOS, cannot be turned off without turning off Apple Intelligence altogether.

  • The settings for Apple Intelligence features are, by and large, not shown in the Apple Intelligence panel in Settings. That panel only contains a toggle for Apple Intelligence as a whole, a section for managing extensions — like ChatGPT — and Siri controls. Settings for individual features are instead placed in different parts of Settings or in individual apps.

    I think this is the correct choice overall, but it is peculiar to have everything Apple Intelligence branded across the system with its logo and gradient — and to advertise Apple Intelligence as its own software — only to have to find the menu in Notification settings for toggling summarization in different apps.

You will note that not a single one of these criticisms is related to the output of Apple Intelligence or a complaint about its limitations. These are all user interaction problems I have experienced. Perhaps this is the best Apple is able to do right now; perhaps it considered and rejected putting Writing Tools in a panel on MacOS for a good reason.

It is unfortunate these features feel almost undesigned — like engineers were responsible for building them, and someone with human interface knowledge was brought in only afterward to add some design. There are plenty of features that are more visually appealing and consistent with platform expectations, like Priority Inbox in Mail. Many of the features seem more polished on iOS than on MacOS.

Writing Tools, in particular, can and should be better. I write a little on my iPhone, but I write a lot on my Mac — not just posts here, but also emails, messages, and social media posts. A more advanced spelling and grammar checker that has at least some contextual awareness sounds very appealing to me. This is a letdown, and for such basic reasons. I do not need Apple Intelligence to be the apex of current technology. What I do expect, at the very least, is that it is user-friendly and feels at home on Apple’s own platforms. It needs work.


  1. In the public version of iOS 18.3, summaries are unavailable for apps from the News and Entertainment categories. ↥︎

Four years ago this week, social media companies decided they would stop platforming then-outgoing president Donald Trump after he celebrated seditionists who had broken into the U.S. Capitol Building in a failed attempt to invalidate the election and allow Trump to stay in power. After two campaigns and a presidency in which he tested the limits of what those platforms would allow, enthusiasm for a violent attack on government was apparently one step too far. At the time, Mark Zuckerberg explained:

Over the last several years, we have allowed President Trump to use our platform consistent with our own rules, at times removing content or labeling his posts when they violate our policies. We did this because we believe that the public has a right to the broadest possible access to political speech, even controversial speech. But the current context is now fundamentally different, involving use of our platform to incite violent insurrection against a democratically elected government.

Zuckerberg, it would seem, now has regrets — not about doing too little over those and the subsequent years, but about doing too much. For Zuckerberg, the intervening four years have been stifled by “censorship” on Meta’s platforms; so, this week, he announced a series of sweeping changes to their governance. He posted a summary on Threads but the five-minute video is far more loaded, and it is what I will be referring to. If you do not want to watch it — and I do not blame you — the transcript at Tech Policy Press is useful. The key changes:

  1. Replace fact-checking with a Community Notes feature, similar to the one on X.

  2. Change the Hateful Conduct policies to be more permissive about language used in discussions about immigration and gender.

  3. Make automated violation detection tools more permissive and focus them on “high-severity” problems, relying on user reports for material the company thinks is of a lower concern.

  4. Roll back restrictions on the visibility and recommendation of posts related to politics.

  5. Relocate the people responsible for moderating Meta’s products from California to another location — Zuckerberg does not specify — and move the U.S.-focused team to Texas.

  6. Work with the incoming administration on concerns about governments outside the U.S. pressuring them to “censor more”.

Regardless of whether you feel each of these are good or bad ideas, I do not think you should take Zuckerberg’s word for why the company is making these changes. Meta’s decision to stop working directly with fact-checkers, for example, is just as likely a reaction to the demands of FCC commissioner Brendan Carr, who has a bananas view (PDF) of how the First Amendment to the U.S. Constitution works. According to Carr, social media companies should be forbidden from contributing their own speech to users’ posts based on the rankings of organizations like NewsGuard. According to both Carr and Zuckerberg, fact-checkers demand “censorship” in some way. This is nonsense: they were not responsible for the visibility of posts. I do not think much of this entire concept, but surely they only create more speech by adding context in a similar way as Meta hopes will still happen with Community Notes. Since Carr will likely be Trump’s nominee to run the FCC, it is important for Zuckerberg to get his company in line.

Meta’s overhaul of its Hateful Conduct policies also shows the disparity between what Zuckerberg says and the company’s actions. Removing rules that are “out of touch with mainstream discourse” sounds fair. What it means in practice, though, is allowing people to be more racist about COVID-19, demean women, and — of course — discriminate against LGBTQ people in more vicious ways. I understand the argument for why these things should be allowed by law, but there is no obligation for Meta to carry this speech. If Meta’s goal is to encourage a “friendly and positive” environment, why increase its platforms’ permissiveness to assholes? Perhaps the answer is in the visibility of these posts — maybe Meta is confident it can demote harmful posts while still technically allowing them. I am not.

We can go through each of these policy changes, dissect them, and consider the actual reasons for each, but I truly believe that is a waste of time compared to looking at the sum of what they accomplish. Conservatives, particularly in the U.S., have complained for years about bias against their views by technology companies, an updated version of similar claims about mass media. Despite no evidence for this systemic bias, the myth stubbornly persists. Political strategists even have a cute name for it: “working the refs”. Jeff Cohen and Norman Solomon, Creators Syndicate, August 1992:

But in a moment of candor, [Republican Party Chair Rich] Bond provided insight into the Republicans’ media-bashing: “There is some strategy to it,” he told the Washington Post. “I’m the coach of kids’ basketball and Little League teams. If you watch any great coach, what they try to do is ‘work the refs.’ Maybe the ref will cut you a little slack next time.”

Zuckerberg and Meta have been worked — heavily so. The playbook of changes outlined by Meta this week is a logical response in an attempt to court scorned users, and not just the policy changes here. On Monday, Meta announced Dana White, UFC president and thrice-endorser of Trump, would be joining its board. Last week, it promoted Joel Kaplan, a former Republican political operative, to run its global policy team. Last year, Meta hired Dustin Carmack who, according to his LinkedIn, directs the company’s policy and outreach for nineteen U.S. states, and previously worked for the Heritage Foundation, the Office of the Director of National Intelligence, and Ron DeSantis. These are among the people forming the kinds of policies Meta is now prescribing.

This is not a problem solved through logic. If it were, studies showing a lack of political bias in technology company policy would change more minds. My bet is that these changes will not have what I assume is the desired effect of improving the company’s standing with far-right conservatives or the incoming administration. If Meta becomes more permissive for bigots, it will encourage more of that behaviour. If Meta does not sufficiently suggest those kinds of posts because it wants “friendly and positive” platforms, the bigots will cry “shadowban”. Meta’s products will corrode. That does not mean they will no longer be influential or widely used, however; as with its forthcoming A.I. profiles, Meta is surely banking that its dominant position and a kneecapped TikTok will continue driving users and advertisers to its products, however frustratedly.

Zuckerberg appears to think little of those who reject the new policies:

[…] Some people may leave our platforms for virtue signaling, but I think the vast majority and many new users will find that these changes make the products better.

I am allergic to the phrase “virtue signalling” but I am willing to try getting through this anyway. This has been widely interpreted as because of their virtue signalling, but I think it is just as accurate if you think of it as because of our virtue signalling. Zuckerberg has complained about media and government “pressure” to more carefully moderate Meta’s platforms. But he cannot ignore how this week’s announcement also seems tied to implicit pressure. Trump is not yet the president, true, but Zuckerberg met with him shortly after the election and, apparently, the day before these changes were announced. This is just as much “virtue signalling” — particularly moving some operations to Texas for reasons even Zuckerberg says are about optics.

Perhaps you think I am overreading this, but Zuckerberg explicitly said in his video introducing the changes that “recent elections also feel like a cultural tipping point towards once again prioritizing speech”. If he means elections other than those which occurred in the U.S. in November, I am not sure which. These are changes made from a uniquely U.S. perspective. To wit, the final commitment in the list above as explained by Zuckerberg (via the Tech Policy Press transcript):

Finally, we’re going to work with President Trump to push back on governments around the world. They’re going after American companies and pushing to censor more. The US has the strongest constitutional protections for free expression in the world. Europe has an ever-increasing number of laws, institutionalizing censorship, and making it difficult to build anything innovative there. Latin American countries have secret courts that can order companies to quietly take things down. China has censored our apps from even working in the country. The only way that we can push back on this global trend is with the support of the US government, and that’s why it’s been so difficult over the past four years when even the US government has pushed for censorship.

For their part, the E.U. rejected Zuckerberg’s characterization of its policies, and Brazilian officials are not thrilled, either.

These changes — and particularly this last one — are illustrative of the devil’s bargain of large U.S.-based social media companies: they export their policies and values worldwide following whatever whims and trends are politically convenient at the time. Right now, it is important for Meta to avoid getting on the incoming Trump administration’s shit list, so they, like everyone, are grovelling. If the rest of the world is subjected to U.S.-style discussions, so be it. But so have we been for a long time. What is extraordinary about Meta’s changes is how many people will be impacted: billions, plural. Something like one-quarter of the world’s population.

The U.S. is no stranger to throwing around its political and corporate power in a way few other nations can. Meta’s changes are another entry into that canon. There are people in some countries who will benefit from having more U.S.-centric policies, but most everyone elsewhere will find them discordant with more local expectations. These new policies are not satisfying for people everywhere around the world, but the old ones were not, either.

It is unfair to expect any platform operator to get things right for every audience, especially not at Meta’s scale. The options created by less centralized protocols like ActivityPub and AT Protocol are much more welcome. We should be able to have more control over our experience than we are trusted with.

Zuckerberg begins his video introduction by referencing a 2019 speech he gave at Georgetown University. In it, he speaks of the internet creating “significantly broader power to call out things we feel are unjust”. “[G]iving people a voice and broader inclusion go hand in hand,” he said, “and the trend has been towards greater voice over time”. Zuckerberg naturally centred his company’s products. But you know what is even more powerful than one company at massive scale? It is when no company needs to act as the world’s communications hub. The internet is the infrastructure for that, and we would be better off if we rejected attempts to build moats.

The ads for Apple Intelligence have mostly been noted for what they show, but there is also something missing: in the fine print and in its operating systems, Apple still calls it a “beta” release, but not in its ads. Given the exuberance with which Apple is marketing these features, that label seems less like a way to inform users the software is unpolished, and more like an excuse for why it does not work as well as one might expect of a headlining feature from the world’s most valuable company.

“Beta” is a funny word when it comes to Apple’s software. It often makes preview builds of upcoming O.S. releases available to users and developers for giving feedback, testing software compatibility, and building with new APIs. This is voluntary and done with the understanding that the software is unfinished, and bugs — even serious ones — can be expected.

Apple has also, rarely, applied the “beta” label to features in regular releases which are distributed to all users, not just those who signed up. This type of “beta” seems less honest. Instead of communicating this feature is a work in progress, it seems to say we are releasing this before it is done. Maybe that is a subtle distinction, but it is there. One type of beta is testing; the other type asks users to disregard their expectations of polish, quality, and functionality so that a feature can be pushed out earlier than it should.

We have seen this on rare occasions: once with Portrait mode; more notably, with Siri. Mat Honan, writing for Gizmodo in December 2011:

Check out any of Apple’s ads for the iPhone 4S. They’re promoting Siri so hard you’d be forgiven for thinking Siri is the new CEO of Apple. And it’s not just that first wave of TV ads, a recent email Apple sent out urges you to “Give the phone that everyone’s talking about. And talking to.” It promises “Siri: The intelligent assistant you can ask to make calls, send texts, set reminders, and more.”

What those Apple ads fail to report — at all — is that Siri is very much a half-baked product. Siri is officially in beta. Go to Siri’s homepage on Apple.com, and you’ll even notice a little beta tag by the name.

This is familiar.

The ads for Siri gave the impression of great capability. It seemed like you could ask it how to tie a bowtie, what events were occurring in a town or city, and more. The response was not shown for these queries, but the implication was that Siri could respond. What became obvious to anyone who actually used Siri is that it would show web search results instead. But, hey, it was a “beta” — for two years.

The ads for Apple Intelligence do one better and show features still unreleased. The fine print does mention “some features and languages will be coming over the next year”, without acknowledging the very feature in this ad is one of them. And, when it does actually come out, it is still officially in “beta”, so I guess you should not expect it to work properly.

This all seems like a convoluted way to evade full responsibility for the Apple Intelligence experience which, so far, has been middling for me. Genmoji is kind of fun, but Notification Summaries are routinely wrong. Priority messages in Mail is helpful when it correctly surfaces an important email, and annoying when it highlights spam. My favourite feature — in theory — is the Reduce Interruptions Focus mode, which is supposed to only show notifications when they are urgent or important. It is the kind of thing I have been begging for to deal with the overburdened notifications system. But, while it works pretty well sometimes, it is not dependable enough to rely on. It will sometimes prioritize scam messages written with a sense of urgency, but fail to notify me when my wife messages me a question. It still necessitates that I occasionally review the notifications suppressed by this Focus mode. It is helpful, but not consistently enough to be confidence-inspiring.

Will users frustrated by the questionable reliability of Apple Intelligence routinely return to try again? If my own experience with Siri is any guide — and I am not sure it is, but it is all I have — I doubt it. If these features did not work on the first dozen attempts, why would they work any time after? This strategy, I think, teaches people to set their expectations low.

This beta-tinged rollout is not entirely without its merits. Apple is passively soliciting feedback within many of its Apple Intelligence features, at a scale far greater than it could by restricting testing to only its own staff and contractors. But it also means the public becomes unwitting testers. As with Siri before, Apple heavily markets this set of features as the defining characteristic of this generation of iPhones, yet we are all supposed to approach this as though we are helping Apple make sure its products are ready? Sorry, it does not work like that. Either something is shipping or it is not, and if it does not work properly, users will quickly learn not to trust it.

Wired has been publishing a series of predictions about the coming state of the world. Unsurprisingly, most concern artificial intelligence — how it might impact health, music, our choices, the climate, and more. It is an issue of the magazine Wired describes as its “annual trends briefing”, but it is also kind of like a hundred-page op-ed section. It is a mixed bag.

A.I. critic Gary Marcus contributed a short piece about what he sees as the pointlessness of generative A.I. — and it is weak. That is not necessarily because of any specific argument, but because of how unfocused it is despite its brevity. It opens with a short history of OpenAI’s models, with Marcus writing “Generative A.I. doesn’t actually work that well, and maybe it never will”. Thesis established, he begins building the case:

Fundamentally, the engine of generative AI is fill-in-the-blanks, or what I like to call “autocomplete on steroids.” Such systems are great at predicting what might sound good or plausible in a given context, but not at understanding at a deeper level what they are saying; an AI is constitutionally incapable of fact-checking its own work. This has led to massive problems with “hallucination,” in which the system asserts, without qualification, things that aren’t true, while inserting boneheaded errors on everything from arithmetic to science. As they say in the military: “frequently wrong, never in doubt.”

Systems that are frequently wrong and never in doubt make for fabulous demos, but are often lousy products in themselves. If 2023 was the year of AI hype, 2024 has been the year of AI disillusionment. Something that I argued in August 2023, to initial skepticism, has been felt more frequently: generative AI might turn out to be a dud. The profits aren’t there — estimates suggest that OpenAI’s 2024 operating loss may be $5 billion — and the valuation of more than $80 billion doesn’t line up with the lack of profits. Meanwhile, many customers seem disappointed with what they can actually do with ChatGPT, relative to the extraordinarily high initial expectations that had become commonplace.
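Marcus’ “autocomplete on steroids” framing is easy to demonstrate with a toy sketch. The bigram counter below is my own illustration, not anything from Marcus or OpenAI: it tallies which word follows which in a tiny corpus, then greedily emits the most common continuation. Real language models predict tokens with neural networks at vast scale, but the core objective — fill in the next blank — is the same.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which,
# then always emit the most common continuation.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# follows[w] maps each word seen after w to its count.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, length=5):
    """Greedily extend a prompt one word at a time."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))  # "the cat sat on the cat"
```

The continuation is locally plausible but loops back on itself with no global sense, which is roughly the failure mode Marcus calls “frequently wrong, never in doubt”.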

Marcus’ financial figures here are bizarre and incorrect. He quotes a Yahoo News-syndicated copy of a PC Gamer article, which references a Windows Central repackaging of a paywalled report by the Information — a wholly unnecessary game of telephone when the New York Times obtained financial documents with the same conclusion, and which were confirmed by CNBC. The summary of that Information article — that OpenAI “may run out of cash in 12 months, unless they raise more [money]”, as Marcus wrote on X — is somewhat irrelevant now after OpenAI proceeded to raise $6.6 billion at a staggering valuation of $157 billion.

I will leave analysis of these financials to MBA types. Maybe OpenAI is like Amazon, which took eight years to turn its first profit, or Uber, which took fourteen years. Maybe it is unlike either and there is no way to make this enterprise profitable.

None of that actually matters, though, when considering Marcus’ actual argument. He posits that OpenAI is financially unsound as-is, and that Meta’s language models are free. Unless OpenAI “come outs [sic] with some major advance worthy of the name of GPT-5 before the end of 2025”, the company will be in a perilous state and, “since it is the poster child for the whole field, the entire thing may well soon go bust”. But hold on: we have gone from ChatGPT disappointing “many customers” — no citation provided — to the entire concept of generative A.I. being a dead end. None of this adds up.

The most obvious problem is that generative A.I. is not just ChatGPT or other similar chat bots; it is an entire genre of features. I wrote earlier this month about some of the features I use regularly, like Generative Remove in Adobe Lightroom Classic. As far as I know, this is no different than something like OpenAI’s Dall-E in concept: it has been trained on a large library of images to generate something new. Instead of responding to a text-based prompt, it predicts how it should replicate textures and objects in an arbitrary image. It is far from perfect, but it is dramatically better than the healing brush tool before it, and clone stamping before that.

There are other examples of generative A.I. as features of creative tools. It can extend images and replace backgrounds pretty well. The technology may be mediocre at making video on its own terms, but it is capable of improving the quality of interpolated slow motion. In the technology industry, it is good at helping developers debug their work and generate new code.

Yet, if you take Marcus at his word, these things, and everything else generative A.I., “might turn out to be a dud”. Why? Marcus does not say. He does, however, keep underscoring how shaky he finds OpenAI’s business situation. But this Wired article is ostensibly about generative A.I.’s usefulness — or, in Marcus’ framing, its lack thereof — which is completely irrelevant to this one company’s financials. Unless, that is, you believe the reason OpenAI will lose five billion dollars this year is that people are unhappy with it, which is not the case. It simply costs a fortune to train and run.

The one thing Marcus keeps coming back to is the lack of a “moat” around generative A.I., which is not an original position. Even if this is true, I do not see this as evidence of a generative A.I. bubble bursting — at least, not in the sense of how many products it is included in or what capabilities it will be trained on.

What this looks like, to me, is commoditization. If there is a financial bubble, this might mean it bursts, but it does not mean the field is wiped out. Adobe is not disappearing; neither are Google or Meta or Microsoft. While I have doubts about whether chat-like interfaces will continue to be a way we interact with generative A.I., it continues to find relevance in things many of us do every day.

A little over two years after OpenAI released ChatGPT upon the world, and about four years since Dall-E, the company’s toolset now — “finally” — makes it possible to generate video. Sora, as it is called, is not the first generative video tool to be released to the public; there are already offerings from Hotshot, Luma, Runway, and Tencent. OpenAI’s is the highest-profile so far, though: the one many people will use, and whose products we will likely all be exposed to.

A generator of video is naturally best seen demonstrated in that format, and I think Marques Brownlee’s preview is a good place to start. The results are, as I wrote in February when Sora was first shown, undeniably impressive. No matter how complicated my views about generative A.I. — and I will get there — it is bewildering that a computer can, in a matter of seconds, transform noise into a convincing ten-second clip depicting whatever was typed into a text box. It can transform still images into video, too.

It is hard to see this as anything other than extraordinary. Enough has been written by now about “any sufficiently advanced technology [being] indistinguishable from magic” to bore, but this truly captures it in a Penn & Teller kind of way: knowing how it works only makes it somehow more incredible. Feed computers vast quantities of labelled video — labelled partly by people, and partly by automated means reliant on this exact same training process — and they can average it into entirely new video that often appears plausible.1 I am basing my assessment on the results generated by others because Sora requires a paid OpenAI account, and because there is currently a waiting list.

There are, of course, limitations of both technology and policy. Sora has problems with physics, the placement of objects in space, and consistency between and within shots. Sora does not generate audio, even though OpenAI has the capability. Prompts in text and images are checked for copyright violations, public figures’ likenesses, criminal usage, and so forth. But there are no meaningful restrictions on the video itself. This is not how things must be; this is a design decision.

I keep thinking about the differences between A.I. features and A.I. products. I use very few A.I. products; an open-ended image generator, for example, is technically interesting but not very useful to me. Unlike a crop of Substack writers, I do not think pretending to have commissioned art lends me any credibility. But I now use A.I. features on a regular basis, in part because so many things are now “A.I. features” in name and by seemingly no other quality. Generative Remove in Adobe Lightroom Classic, for example, has become a terrific part of my creative workflow. There are edits I sometimes want to make which, if not for this feature, would require vastly more time which, depending on the job, I may not have. It is an image generator just like Dall-E or Stable Diffusion, but it is limited by design.

Adobe is not taking a principled stance; Photoshop contains a text-based image generator which, I think, does not benefit from being so open-ended. It would, for me, be improved if its functionality were integrated into more specific tools; for example, the crop tool could also allow generative reframing.

Sora, like ChatGPT and Dall-E, is an A.I. product. But I would find its capabilities more useful and compelling if they were a feature within a broader video editing environment. Its existence implies a set of tools which could benefit a video editor’s workflow. For example, the object removal and tracking features in Premiere Pro feel more useful to me than its ability to generate b-roll, which just seems like a crappy excuse to avoid buying stock footage or paying for a second unit.

Limiting generative A.I. in this manner would also make its products more grounded in reality and less likely to be abused. It would also mean withholding capabilities. Clearly, there are some people who see a demonstration of the power of generative A.I. as a worthwhile endeavour unto itself. As a science experiment, I get it, but I do not think these open-ended tools should be publicly available. Alas, that is not the future venture capitalists, and shareholders, and — I guess — the creators of these products have decided is best for us.

We are now living in a world of slop, and we have been for some time. It began as infinite reams of text-based slop intended to be surfaced in search results. It became image-based slop which paired perfectly with Facebook’s pivot to TikTok-like recommendations. Image slop and audio slop came together to produce image slideshow slop dumped into the pipelines of Instagram Reels, TikTok, and YouTube Shorts. Brace yourselves for a torrent of video slop about pyramids and the Bermuda Triangle. None of these were made using Sora, as far as I know; at least some were generated by Hailuo from Minimax. I had to dig a little bit for these examples, but not too much, and it is only going to get worse.

Much has been written about how all this generative stuff has the capability of manipulating reality — and rightfully so. It lends credence to lies, and its mere existence can cause unwarranted doubt. But there is another problem: all of this makes our world a little bit worse because it is cheap to produce in volume. We are on the receiving end of a bullshit industry, and the toolmakers see no reason to slow it down. Every big platform — including the web itself — is full of this stuff, and it is worse for all of us. Cynicism aside, I cannot imagine the leadership at Google or Meta actually enjoys using their own products as they wade through generated garbage.

This is hitting each of us in similar ways. If you use a computer that is connected to the internet, you are likely running into A.I.-generated stuff all the time, perhaps without being fully aware of it. The recipe you followed, the repair guide you found, the code you copy-and-pasted, and the images in the video you watched? Any of them could have been generated in a data farm somewhere. I do not think that is inherently bad, though it is an uncertain feeling.

I am part of the millennial generation. I grew up at a time in which we were told we were experiencing something brand new in world history. The internet allowed anyone to publish anything, and it was impossible to verify this new flood of information. We were taught to think critically and be cautious, since we never knew who created anything. Now we have a different problem: we are unsure what created anything.


  1. Without thinking about why it is the case, it is interesting how generative A.I. has no problem creating realistic-seeming text as text, but it struggles when it is an image containing text. But with a little knowledge about how these things work, that makes sense. ↥︎

Spencer Ackerman has been a national security reporter for over twenty years, and was partially responsible for the Guardian’s coverage of NSA documents leaked by Edward Snowden. He has good reason to be skeptical of privacy claims in general, and his experience updating his iPhone made him worried:

Recently, I installed Apple’s iOS 18.1 update. Shame on me for not realizing sooner that I should be checking app permissions for Siri — which I had thought I disabled as soon as I bought my device — but after installing it, I noticed this update appeared to change Siri’s defaults.

Apple has a history of changing preferences and of dark patterns. This is particularly relevant in the case of the iOS 18.1 update because it was the one with Apple Intelligence, which creates new ambiguity between what is happening on-device and what goes to a server farm somewhere.

Allen Pike:

While easy tasks are handled by their on-device models, Apple’s cloud is used for what I’d call moderate-difficulty work: summarizing long emails, generating patches for Photos’ Clean Up feature, or refining prose in response to a prompt in Writing Tools. In my testing, Clean Up works quite well, while the other server-driven features are what you’d expect from a medium-sized model: nothing impressive.

Users shouldn’t need to care whether a task is completed locally or not, so each feature just quietly uses the backend that Apple feels is appropriate. The relative performance of these two systems over time will probably lead to some features being moved from cloud to device, or vice versa.

It would be nice if it truly did not matter — and, for many users, the blurry line between the two is probably fine. Private Cloud Compute seems to be trustworthy. But I fully appreciate Ackerman’s worries. Someone in his position necessarily must understand what is being stored and processed in which context.

However, Ackerman appears to have interpreted this setting change incorrectly:

I was alarmed to see that even my secure communications apps, like Proton and Signal, were toggled by default to “Learn from this App” and enable some subsidiary functions. I had to swipe them all off.

This setting was, to Ackerman, evidence of Apple “uploading your data to its new cloud-based AI project”, which is a reasonable assumption at a glance. Apple, like every technology company in the past two years, has decided to loudly market everything as being connected to its broader A.I. strategy. In launching these features in a piecemeal manner, though, it is not clear to a layperson which parts of iOS are related to Apple Intelligence, let alone where those interactions are taking place.

However, this particular setting is nearly three years old and unrelated to Apple Intelligence. This is related to Siri Suggestions which appear throughout the system. For example, the widget stack on my home screen suggests my alarm clock app when I charge my iPhone at night. It suggests I open the Microsoft Authenticator app on weekday mornings. When I do not answer the phone for what is clearly a scammer, it suggests I return the missed call. It is not all going to be gold.

Even at the time of its launch, its wording had the potential for confusion — something Apple has not clarified within the Settings app in the intervening years — and it seems to have been enabled by default. While this data may play a role in establishing the “personal context” Apple talks about — both are part of the App Intents framework — I do not believe it is used to train off-device Apple Intelligence models. However, Apple says this data may leave the device:

Your personal information — which is encrypted and remains private — stays up to date across all your devices where you’re signed in to the same Apple Account. As Siri learns about you on one device, your experience with Siri is improved on your other devices. If you don’t want Siri personalization to update across your devices, you can disable Siri in iCloud settings. See Keep what Siri knows about you up to date on your Apple devices.

While I believe Ackerman is incorrect about the setting’s function and how Apple handles its data, I can see how he interpreted it that way. The company is aggressively marketing Apple Intelligence, even though it is entirely unclear which parts of it are available, how it is integrated throughout the company’s operating systems, and which parts are dependent on off-site processing. There are people who really care about these details, and they should be able to get answers to these questions.

All of this stuff may seem wonderful and novel to Apple and, likely, many millions of users. But there are others who have reasonable concerns. As with any new technology, there are questions which can only be answered by those who created it. Only Apple is able to clear up the uncertainty around Apple Intelligence, and I believe it should. A cynical explanation is that this ambiguity is all deliberate because Apple’s A.I. approach is so much slower than its competitors’ and, so, it is disincentivized from setting clear boundaries. That is possible, but there is plenty of trust to be gained by being upfront now. Americans polled by Pew Research and Gallup have concerns about these technologies. Apple has repeatedly emphasized its privacy bona fides. But these features remain mysterious and suspicious for many people regardless of how much a giant corporation swears it delivers “stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency”.

All of that is nice, I am sure. Perhaps someone at Apple can start the trust-building by clarifying what the Siri switch does in the Settings app, though.

Brendan Nystedt, reporting for Wired on a new generation of admirers of crappy digital cameras from the early 2000s:

For those seeking to experiment with their photography, there’s an appeal to using a cheap, old digital model they can shoot with until it stops working. The results are often imperfect, but since the camera is digital, a photographer can mess around and get instant gratification. And for everyone in the vintage digital movement, the fact that the images from these old digicams are worse than those from a smartphone is a feature, not a bug.

Om Malik attributes it to wabi-sabi:

Retromania? Not really. It feels more like a backlash against the excessive perfection of modern cameras, algorithms, and homogenized modern image-making. I don’t disagree — you don’t have to do much to come up with a great-looking photo these days. It seems we all want to rebel against the artistic choices of algorithms and machines — whether it is photos or Spotify’s algorithmic playlists versus manually crafted mixtapes.

I agree, though I do not see why we need to find just one cause — an artistic decision, a retro quality, an aesthetic trend, a rejection of perfection — when it could be driven by any number of these factors. Nailing down exactly which of these is the most important factor is not of particular interest to me; certainly, not nearly as much as understanding that people, as a general rule, value feeling.

I have written about this before and it is something I wish to emphasize repeatedly: efficiency and clarity are necessary elements, but are not the goal. There needs to be space for how things feel. I wrote this as it relates to cooking and cars and onscreen buttons, and it is still something worth pursuing each and every time we create anything.

I thought about this with these two articles, but first last week when Wil Shipley announced the end of Delicious Library:

Amazon has shut off the feed that allowed Delicious Library to look up items, unfortunately limiting the app to what users already have (or enter manually).

I wasn’t contacted about this.

I’ve pulled it from the Mac App Store and shut down the website so nobody accidentally buys a non-functional app.

Delicious Library was many things: physical and digital asset management software, a kind of personal library, and a wish list. But it was also — improbably — fun. Little about cataloguing your CDs and books sounds like it ought to be enjoyable, but Shipley and Mike Matas made it feel like something you wanted to do. You wanted to scan items with your Mac’s webcam just because it felt neat. You wanted to see all your media on a digital wooden shelf, if for no other reason than it made those items feel as real onscreen as they are in your hands.

Delicious Library became known as the progenitor of the “delicious generation” of applications, which prioritized visual appeal as much as utility. It was not enough for an app to be functional; it needed to look and feel special. The Human Interface Guidelines were just that: guidelines. One quality of this era was the apparently fastidious approach to every pixel. Another quality is that these applications often had limited features, but were so much fun to use that it was possible to overlook their restrictions.

I do not need to relitigate the subsequent years of visual interfaces going too far, then being reeled in, and then settling in an odd middle ground where I am now staring at an application window with monochrome line-based toolbar icons, deadpan typography, and glassy textures, throwing a heavy drop shadow. None of the specifics matter much. All I care about is how these things feel to look at and to use, something which can be achieved regardless of how attached you are to complex illustrations or simple line work. Like many people, I spend hours a day staring at pixels. Which parts of that are making my heart as happy as my brain? Which mundane tasks are made joyful?

This is not solely a question of software; it has relevance in our physical environment, too, especially as seemingly every little thing in our world is becoming a computer. But it can start with pixels on a screen. We can draw anything on them; why not draw something with feeling? I am not sure we achieve that through strict adherence to perfection in design systems and structures.

I am reluctant to place too much trust in my incomplete understanding of a foreign-to-me concept rooted in another country’s very particular culture, but perhaps the sabi is speaking loudest to me. Our digital interfaces never achieve a patina; in fact, the opposite is more often true: updates seem to erase the passage of time. It is all perpetually new. Is it any wonder so many of us ache for things which seem to freeze the passage of time in a slightly hazier form?

I am not sure how anyone would go about making software feel broken-in, like a well-worn pair of jeans or a lounge chair. Perhaps that is an unattainable goal for something on a screen; perhaps we never really get comfortable with even our most favourite applications. I hope not. It would be a shame if we lose that quality as software eats our world.

Michael Liedtke, Associated Press:

The proposed breakup floated in a 23-page document filed late Wednesday by the U.S. Department of Justice calls for sweeping punishments that would include a sale of Google’s industry-leading Chrome web browser and impose restrictions to prevent Android from favoring its own search engine.

[…]

Although regulators stopped short of demanding Google sell Android too, they asserted the judge should make it clear the company could still be required to divest its smartphone operating system if its oversight committee continues to see evidence of misconduct.

Casey Newton:

In addition to requiring that Chrome be divested, the proposal calls for several other major changes that would be enforced over a 10-year period. They include:

  • Blocking Google from making deals like the one it has with Apple to be its default search engine.

  • Requiring it to let device manufacturers show users a “choice screen” with multiple search engine options on it.

  • Licensing data about search queries, results, and what users click on to rivals.

  • Blocking Google from buying or investing in advertising or search companies, including makers of AI chatbots. (Google agreed to invest up to $2 billion into Anthropic last year.)

The full proposal (PDF) is a pretty easy read. One of the weirder ideas pitched by the Colorado side is to have Google “fund a nationwide advertising and education program” which may, among other things, “include reasonable, short-term incentive payments to users” who pick a non-Google search engine from the choice screen.

I am guessing that is not going to happen, and not just because “Plaintiff United States and its Co-Plaintiff States do not join in proposing these remedies”. In fact, much of this wish list seems unlikely to be part of the final judgement expected next summer — in part because it is extensive, in part because of politics, and also because it seems unrelated.

Deborah Mary Sophia, Akash Sriram, and Kenrick Cai, Reuters:

“DOJ will face substantial headwinds with this remedy,” because Chrome can run search engines other than Google, said Gus Hurwitz, senior fellow and academic director at University of Pennsylvania Carey Law School. “Courts expect any remedy to have a causal connection to the underlying antitrust concern. Divesting Chrome does absolutely nothing to address this concern.”

I — an effectively random Canadian with no expertise in this and, so, you should take my perspective with appropriate caveats — disagree.

The objective of disentangling Chrome from Google’s ownership, according to the executive summary (PDF) produced by the Department of Justice, is to remove “a significant challenge to effectuate a remedy that aims to ‘unfetter [these] market[s] from anticompetitive conduct'”:

A successful remedy requires that Google: stop third-party payments that exclude rivals by advantaging Google and discouraging procompetitive partnerships that would offer entrants access to efficient and effective distribution; disclose data sufficient to level the scale-based playing field it has illegally slanted, including, at the outset, licensing syndicated search results that provide potential competitors a chance to offer greater innovation and more effective competition; and reduce Google’s ability to control incentives across the broader ecosystem via ownership and control of products and data complementary to search.

The DOJ’s theory of growth reinforcing quality and market dominance is sound, from what I understand, and Google does advantage Chrome in some key ways. Most directly related to this case is whether Chrome activity is connected to Google Search. Despite company executives explicitly denying using Chrome browsing data for ranking, a leak earlier this year confirmed Google does, indeed, consider Chrome views in its rankings.

There is also a setting labelled “Make searches and browsing better”, which automatically “sends URLs of the pages you visit” to Google for users of Chromium-based browsers. Google says this allows the company to “predict what sites you might visit next and to show you additional info about the page you’re visiting” which allows users to “browse faster because content is proactively loaded”.

There is a good question as to how much Google Search would be impacted if Google could not own Chrome or operate its own browser for five years, as the remedy proposes. How much weight these features have in Google’s ranking system is something only Google knows. And the DOJ does not propose that Google Search cannot be preloaded in browsers whatsoever. Many users would probably still select Google as their browser’s search engine, too. But Google Search does benefit from Google’s ownership of Chrome itself, so perhaps it is worth putting barriers between the two.

I do not think Chrome can exist as a standalone company. I also do not think it makes sense for another company to own it, since any of those big enough to do so either have their own browsers — Apple’s Safari, Microsoft’s Edge — or would have the potential to create new anticompetitive problems, like if it were acquired by Meta.

What if the solution looks more like prohibiting Google from uniquely leveraging Chrome to benefit its other products? I do not know how that could be written in legal terms, but it appears to me this is one of the DOJ’s goals for separating Chrome and Google.

You might want to skip this one.

From the perspective of this outsider, the results of this year’s U.S. presidential election are stunning. I feel terrible for those within the U.S. who will endure another four years of having longtime institutions ripped apart by a criminal administration and its enablers in the legislative and judicial branches. This is true of just about everybody, but the brunt of the pain inflicted will — again — be directed toward the LGBTQ community, immigrants, visible minorities, and women.

Because the U.S. is the world’s sole superpower, however, the effects of its lawmaking will be felt everywhere. The incoming administration’s actions will, at best, disregard consequence. Again: at best. The rest of the world will attempt to govern itself around the whims of an unstable sex abuser, his dangerously feckless cabinet, and a host of grovelling billionaires whispering in his ear.

While the oligarchs and authoritarians of the world will have influence over what happens next in the U.S., we normal people will not. The best we can do is prevent a similar catastrophe from befalling our communities. Democracies around the world have elected a raft of far-right ideologues and strongmen — in Austria, Belgium, France, Indonesia, Italy, and the Netherlands. Nationalist ideologies in Europe are now the “establishment”.

Here at home, Canada’s Conservative Party leader is more popular than his rivals and he is itching for an election. Though his is not our farthest-right party, his policies are of the slash-and-burn variety; his party uniformly voted against new privacy laws.

Closer still, our provincial government is enacting massive reforms aligned with some of the most conservative U.S. states. At their recent conference, they embraced carbon dioxide as a token principle. Like many conservative governments, they are targeting people who are transgender with restrictive legislation opposed by medical professionals. These policies got the attention of Amnesty International when they were announced.

A predictable response from centrist parties is that they will move rightward to present themselves as a moderating alternative to the more hardline conservatives. I am not a political scientist, but it does not seem that growing the size of the tent will be inviting to an electorate increasingly comfortable with far-right ideas. There are thankfully still places where democracies have, in recent elections, not embraced a nationalist agenda, and where elections are not between shades of conservatism. Our politicians would do well to learn from them.

We each get to choose our societal role. At the moment, for those of us who do not align with these dominant forces, it can feel pretty small. This is not an airport book; I am not ending this thing on a hopeful note and a list of to-dos. I am scared of what this U.S. election means for decades to come. I am just as worried about policies close to home, and those are the ones I can try to do something about.

I am not giving up. But I am overwhelmed by how far democratic countries around the world have regressed, and how much further they are likely to go.

In short.

In long:

Ten years ago, the USB Implementers Forum finalized the specification for USB-C 1.0, and the world rejoiced, for it would free us from the burden of remembering which was the correct orientation of the plug relative to the socket. And lo, it was good.

And then we all actually got around to using USB-C devices and realized this whole thing is a little bit messy. While there was now a universal connector, the capabilities of the cable can range from those which support only power with maybe a trickle of data, all the way up to others which carry data at USB4 speeds. But that is not all. It might also support various Thunderbolt standards — 3, 4, and now 5 — and DisplayPort. That is neat. Again, this is all done using the same connector size and shape, and with cables that look practically interchangeable.

Which brings us to Ian Bogost, writing in the Atlantic — a requisite destination for intellectualized lukewarm takes — about his cable woes:

I am unfortunately old enough to remember when the first form of USB was announced and then launched. The problem this was meant to solve was the same one as today’s: “A rat’s nest of cords, cables and wires,” as The New York Times described the situation in 1998. Individual gadgets demanded specific plugs: serial, parallel, PS/2, SCSI, ADB, and others. USB longed to standardize and simplify matters — and it did, for a time.

But then it evolved: USB 1.1, USB 2.0, USB 3.0, USB4, and then, irrationally, USB4 2.0. Some of these cords and their corresponding ports looked identical, but had different capabilities for transferring data and powering devices. I can only gesture to the depth of absurdity that was soon attained without boring you to tears or lapsing into my own despair. […]

Reader — and I mean this with respect — I am only too willing to bore you to tears with another article about USB-C. Bogost is right, though. The original USB standard unified the many different ports one was expected to use for peripherals. It basically succeeded for at least two of them: the keyboard and mouse. Both require minimal data, so they work fine regardless of whether the port supports USB 1.1 or USB 3.1. Such standardization came with loads of other benefits, too, like reducing the setup and configuration once necessary for even basic peripherals.

Where things got complicated is when data transfer speeds actually matter. USB 1.1 — the first version most people actually used — topped out at 12 Mbits per second; USB 2.0 could do 480 Mbits per second. Even so, the ports and cables looked identical. If you plugged an external hard drive into your computer using the wrong cable, you would notice because it would crawl.
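To put those numbers in perspective, here is a back-of-the-envelope sketch in Python. It uses theoretical peak rates and decimal units, so real-world transfers would be slower still:

```python
# Best-case time to move a 1 GB file at each USB version's theoretical
# peak rate. Protocol overhead means real transfers never hit these numbers.
FILE_SIZE_BITS = 1 * 1000**3 * 8  # 1 GB, decimal units

speeds_mbps = {
    "USB 1.1": 12,
    "USB 2.0": 480,
    "USB 3.0": 5_000,
    "USB4": 40_000,
}

for name, mbps in speeds_mbps.items():
    seconds = FILE_SIZE_BITS / (mbps * 1_000_000)
    print(f"{name}: {seconds:,.1f} s")
```

At 12 Mbits per second, that file takes about eleven minutes; at 480 Mbits per second, under seventeen seconds — which is why the slow connection was so obvious.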

This begat more specs allowing for higher speeds, requiring new cables and — sometimes — new connectors. And it was kind of a mess. So the USB-IF got together and created USB-C, which at least solves some of these problems. It is a more elegant connector and, so far, it has been flexible enough to support a wide range of uses.

That is kind of the problem with it, though: the connector can do everything, but there is no easy way to see what capabilities are supported by either the port or the cable. Put another way, if you connect a Thunderbolt 5 hard drive using the same cable as you use to charge the new Magic Mouse and Keyboard, you will notice, just as you did twenty years ago.

Bogost, after describing his array of gadgets connected by USB-A, USB-C, and micro-HDMI:

This chaos was supposed to end, with USB-C as our savior. The European Union even passed a law to make that port the charging standard by the end of this year. […]

Hope persists that someday, eventually, this hell can be escaped — and that, given sufficient standardization, regulatory intervention, and consumer demand, a winner will emerge in the battle of the plugs. But the dream of having a universal cable is always and forever doomed, because cables, like humankind itself, are subject to the curse of time, the most brutal standard of them all. At any given moment, people use devices they bought last week alongside those they’ve owned for years; they use the old plugs in rental cars or airport-gate-lounge seats; they buy new gadgets with even better capabilities that demand new and different (if similar-looking) cables. […]

If the ultimate goal is a single cable and connector that can do everything from charge your bike light to connect a RAID array — do we still have RAID arrays? — I think that is foolish.

But I do not think that is the expectation. For one thing, note Bogost’s correctly chosen phrasing of what the E.U.’s standard entails. All devices have unified around a single charging standard, which does not demand any specialized cable. I use a Thunderbolt cable to sync my iPhone and charge my third-party keyboard, because I cannot be tamed.1 The same is true of my laptop and also my wife’s, the headphones I am wearing right now, a Bluetooth speaker we have kicking around, our Nintendo Switch, and my bicycle tire pump. Having one cable for all this stuff rules.

If you need higher speeds, though, I would bet you know that. If the difference between Thunderbolt 4 and Thunderbolt 5 matters to you, you are a different person than most. And, I would wager, you are probably happy that you can connect a fancy Thunderbolt drive to any old USB-C port and still read its contents, even if it is not as fast. That kind of compatibility is great.

Lookalike connectors are nothing new, however. P.C. users probably remember the days of PS/2 ports for the keyboard and mouse, which had the same plugs but were not interchangeable. 3.5mm circular ports were used for audio out, audio in, microphone — separate from audio in, for some reason — and individual speakers. This was such a mess that Microsoft and Intel decided PC ports needed colour-coding (PDF). Even proprietary connectors have this problem, as Apple demonstrated with some Lightning accessories.

We are doomed to repeat this so long as the same connectors and cables describe a wide range of capabilities. But solving that should never be the expectation. We should be glad to unify around standards for at least basic functions like charging and usable data transfer. USB-C faced an uphill battle because we probably had — and still have — devices which use other connectors. While my tire pump uses USB-C, my bike light charges using some flavour of mini-USB port. I do not know which. I have one cable that works and I dare not lose it.

Every newer standard is going to face an increasingly steep hill. USB-C now has a supranational government body mandating its use for wired charging in many devices — a mandate which, for all its benefits, is also a hurdle if and when someone wants to build a device in which it would be difficult to accommodate a USB-C port. That I am struggling to think of a concrete example is perhaps an indicator of the specificity of such a product — and, also, of the fact that I am not in the position of dreaming up such products.

But even without that regulatory oversight, any new standard will have to supplant a growing array of USB-C devices. We may not get another attempt at this kind of universality for a long time yet. It is a good thing USB-C is quite an elegant connector, and such a seemingly flexible set of standards.


  1. I still use a Lightning Magic Trackpad which means I used to charge it and sync my iPhone with the same cable, albeit more slowly. Apparently, the new USB-C Magic Trackpad is incompatible with my 2017 iMac, though I am not entirely sure why. Bluetooth, maybe? Standards! ↥︎

If software is judged by the gap between what it promises and what it is actually capable of, Siri is unquestionably the worst built-in iOS application. I cannot think of any other application preloaded on a new iPhone that so greatly underdelivers, and has for so long.

Siri is thirteen years old, and we all know the story: beneath the more natural language querying is a fairly standard command-and-control system. In those years, Apple has updated the scope of its knowledge and responses, but because the user interface is not primarily a visual one, its outer boundaries are fuzzy. It has limits, but a user cannot know what they are until they try something and it fails. Complaining about Siri is both trite and evergreen. Yes, Siri has sucked forever, but maybe this time will be different.

At WWDC this year, Apple announced Siri would get a whole new set of powers thanks to Apple Intelligence. Users could, Apple said, speak with more natural phrasing. It also said Siri would understand the user’s “personal context” — their unique set of apps, contacts, and communications. All of that sounds great, but I have been down this road before. Apple has often promised improvements to Siri that have not turned it into the compelling voice-activated digital assistant it is marketed to be.

I was not optimistic — and I am glad — because Siri in iOS 18.1 is still pretty poor, with a couple of exceptions: its new visual presentation is fantastic, and type-to-Siri is nice. It is unclear exactly how Siri is enhanced with Apple Intelligence — more on this later — but this version is exactly as frustrating as those before it, in all the same ways.

As a reminder, Apple says users can ask Siri…

  • …to text a contact by using only their first name.

  • …for directions to locations using the place name.

  • …to play music by artist, album, or song.

  • …to start and stop timers.

  • …to convert from one set of units to another.

  • …to translate from one language to another.

  • …about Apple’s product features and documentation, new in iOS 18.1.

  • …all kinds of other stuff.

It continues to do none of these things reliably or predictably. Even Craig Federighi, when he was asked by Joanna Stern, spoke of his pretty limited usage:

I’m opening my garage, I’m closing my garage, I’m turning on my lights.

All kinds of things, I’m sending messages, I’m setting timers.

I do not want to put too much weight on this single response, but these are weak examples. This is what he could think of off the top of his head? That is all? I get it; I do not use it for much, either. And, as Om Malik points out, even the global metrics Federighi cites in the same answer do not paint a picture of success.

So, a refresh, and I will start with something positive: its new visual interface. Instead of a floating orb, the entire display warps and colour-shifts before being surrounded by a glowing border, as though enveloped in a dense magical vapour. Depending on how you activate Siri, the glow will originate from a different spot: from the power button, if you press and hold it; or from the bottom of the display, if you say “Siri” or “Hey, Siri”.

You can also now invoke text-based Siri — perfect for times when you do not want to speak aloud — by double-tapping the home bar. There has long been an option to type to Siri, but it has not been surfaced this easily, and I like it.

That is kind of where the good news stops, at least in my testing. I have rarely had a problem with Siri’s ability to understand what I am saying — I have a flat, Canadian accent, and I can usually speak without pauses or repeating words. There are writers who are more capable of testing for improvements for people with disabilities.

No, the things which Siri has flubbed have always been, for me, in its actions. Some of those should be new in iOS 18.1, or at least newly refined, but it is hard to know which. While Siri looks entirely different in this release, it is unclear what new capabilities it possesses. The full release notes say it can understand spoken queries better, and it has product documentation, but it seems anything else will be coming in future updates. I know a feature Apple calls “onscreen awareness”, which can interpret what is displayed, is one of those. I also know some personal context features will be released later — Apple says a user “could ask, ‘When is Mom’s flight landing?’ and Siri will find the flight details” no matter how they were sent. This is all coming later and, presumably, some of it requires third-party developer buy-in.

But who reads and remembers the release notes? What we all see is a brand-new Siri, and what we hear about is Apple Intelligence. Surely there must be some improvements beyond being able to ask the Apple assistant about the company’s own products, right? Well, if there are, I struggled to find them. Here are the actual interactions I have had in beta versions of iOS 18.1 for each thing in the list above:

  • I asked Siri to text Ellis — not their real name — a contact I text regularly. It began a message to a different Ellis I have in my contacts, to whom I have not spoken in over ten years.

    Similarly, I asked it to text someone I have messaged on an ongoing basis for fifteen years. Their thread is pinned to the top of Messages. Before it would let me text them, it asked if I wanted it to send it to their phone number or their email address.

  • I was driving and I asked for directions to Walmart. Its first suggestion was not the nearest location; it was farther away and in the opposite direction to the one I was already travelling.

  • I asked Siri to “play the new album from Better Lovers”, an artist I have in my library and an album that I recently listened to in Apple Music. No matter my enunciation, it responded by playing an album from the Backseat Lovers, a band I have never listened to.

    I asked Siri to play an album which contains a song of the same name. This is understandably ambiguous if I do not explicitly state “play the album” or “play the song”. However, instead of asking for clarification when there is a collision like this, it just rolls the dice. Sometimes it plays the album, sometimes the song. But I am an album listener more often than I am a song listener, and my interactions with Siri and Apple Music should reflect that.

  • Siri starts timers without issue. It is one of few things which behaves reliably. But when I asked it to “stop the timer”, it asked me to clarify “which one?” between one active timer and two already-stopped timers. It should just stop the sole active timer; why would I ask it to stop a stopped timer?

  • I asked Siri “how much does a quarter cup of butter weigh?” and it converted that to litres or — because my device is set to U.S. localization for the purposes of testing Apple Intelligence — gallons. Those are volumetric measurements, not weights. If I asked Siri “what is the weight of a quarter cup of butter?”, it searched the web. I had to explicitly say “convert one quarter cup of butter to grams”.

  • I asked Siri “what is puente in English?” and it informed me I needed to use the Translate app. Apparently, you can only translate from Siri’s language — English, in this case — to another language when using Siri. Translating from a different language cannot be done with Siri alone.

  • I rarely see the Priority Messages feature in Mail, so I asked Siri about it. I tried different ways to phrase my question, like “what is the Priority Messages feature in Mail?”, but it would not return any documentation about this feature.
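For what it is worth, the conversion in the butter example is simple arithmetic. Here is a minimal sketch, assuming an approximate butter density of 0.911 g/mL; the common kitchen figure of 57 g per quarter cup comes from the half-stick weight instead:

```python
# Hypothetical helper illustrating the volume-to-weight conversion Siri fumbles.
US_CUP_ML = 236.588              # millilitres in one US cup
BUTTER_DENSITY_G_PER_ML = 0.911  # approximate; varies with temperature

def butter_cups_to_grams(cups: float) -> float:
    """Convert a volume of butter, in US cups, to a weight in grams."""
    return cups * US_CUP_ML * BUTTER_DENSITY_G_PER_ML

print(round(butter_cups_to_grams(0.25)))  # roughly 54 g
```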

Maybe I am using Siri wrong, or expecting too much. Perhaps all of this is a beta problem. But, aside from the last bullet, these are the kinds of things Apple has said Siri can do for over a decade, and it does not do so predictably or reliably. I have had similar or identical problems with Siri in non-beta versions of iOS. Today, while using the released version of iOS 18.1, I asked it if a nearby deli was open. It gave me the hours for a deli in Spokane — hundreds of kilometres away, and in a different country.

This all feels like it may be, perhaps, a side effect of treating an iPhone as an entire widget with a governed set of software add-ons. The quality of the voice assistant is just one of a number of factors to consider when buying a smartphone, and the predictably poor Siri is probably not going to be a deciding factor for many.

But the whole widget has its advantages — you can find plenty of people discussing those, and Apple’s many marketing pieces will dutifully recite them. Since Siri’s debut in 2011, Apple has rarely put it front-and-centre in its iPhone advertising campaigns, but it is doing just that with the iPhone 16. It is showcasing features which rely on whole-device control — features that, admittedly, will not be shipping for many months. But the message is there: Siri has a deep understanding of your world and can present just the right information for you. Yet, as I continue to find out, it does not do that for me. It does not know who I text in the first-party Messages app or what music I listen to in Apple Music.

Would Siri be such a festering scab if it had competitors within iOS? Thanks to an extremely permissive legal environment around the world in which businesses scoop up vast amounts of behavioural data to make it slightly easier to market laundry detergent and dropshipped widgets, there is a risk to granting this kind of access to some third-party product. But if there were policies to make that less worrisome, and if Apple permitted it, there would be more intense pressure to improve Siri — and, for that matter, all voice assistants tied to specific devices.

The rollout of Apple Intelligence is uncharacteristically piecemeal and messy. Apple did not promise a big Siri overhaul in this version of iOS 18.1. But by giving it a new design, Apple communicates something is different. It is not — at least, not yet. Maybe it will be one day. Nothing about Siri’s state over the past decade-plus gives me hope that it will, however. I have barely noticed improvements in the things Apple says it should do better in iOS 18.1, like preserving context and changing my mind mid-dictation.

Siri remains software I distrust. Like Federighi, I would struggle to list my usage beyond a handful of simple commands — timers, reminders, and the occasional message. Anything else, and it remains easier and more reliable to wash my hands if I am kneading pizza dough, or park the car if I am driving, and do things myself.

Apple is a famously tight-knit business. Its press releases and media conferences routinely drum the integration of hardware, software, and services as something only Apple is capable of doing. So it sticks out when features feel like they were developed by people who do not know what another part of the company is doing. This happened to me twice in the past week.

Several years ago, Apple added a very nice quality-of-life improvement to the Mac operating system: software installers began offering to delete themselves after they had done their job. This was a good idea.

In the ensuing years, Apple made some other changes to MacOS in an effort to — it says — improve privacy and security. One of the new rules it imposed was requiring the user to grant apps specific permission to access certain folders; another was a requirement to allow one app to modify or delete another.

And, so, when I installed an application earlier this month, I was shown an out-of-context dialog at the end of the process asking for access to my Downloads folder. I granted it. Then I got a notification that the Installer app was blocked from modifying or deleting another file. To change it, I had to open System Settings, toggle the switch, enter my password, and then I was prompted to restart the Installer application — but it seemed to delete itself just fine without my doing so.

This is a built-in feature, triggered by the location to which the installer was downloaded, using an Apple-provided installation packaging system.1 But it is stymied by a different set of system rules and unexpected permissions requests.


Another oddity is in Apple’s two-factor authentication system. Because Apple controls so much about its platforms, authentication codes are delivered through a system prompt on trusted devices. Preceding the code is a notification informing the user their “Apple Account is being used to sign in”, and it includes a map of where that is.

This map is geolocated based on the device’s IP address, which can be inaccurate for many reasons — something Apple discloses in its documentation:

This location is based on the new device’s IP address and might reflect the network that it’s connected to, rather than the exact physical location. If you know that you’re the person trying to sign in but don’t recognize the location, you can still tap Allow and view the verification code.

It turns out one of the reasons the network might think you are located somewhere other than where you are is because you may be using iCloud Private Relay. Even if you have set it to “maintain general location”, it can sometimes be incredibly inaccurate. I was alarmed to see a recent attempt from Toronto when I was trying to sign into iCloud at home in Calgary — a difference of over 3,000 kilometres.

The map gives me an impression of precision and security. But if it is made less accurate in part because of a feature Apple created and markets, it is misleading and — at times — a cause of momentary anxiety.

What is more, Safari supports automatically filling authentication codes delivered by text message. Apple’s own codes, though, cannot be automatically filled.


These are small things — barely worth the bug report. They also show how features introduced one year are subverted by those added later, almost like nobody is keeping track of all of the different capabilities in Apple’s platforms. I am sure there are more examples; these are just the ones which happened in the past week, and which I have been thinking about. They expose little cracks in what is supposed to be a tight, coherent package of software.


  1. Thanks to Keir Ansell for tracking down this documentation for me. ↥︎

The New York Times recently ran a one–two punch of stories about the ostensibly softening political involvement of Mark Zuckerberg and Meta — where by “punch”, I mean “gentle caress”.

Sheera Frenkel and Mike Isaac on Meta “distanc[ing] itself from politics”:

On Facebook, Instagram and Threads, political content is less heavily featured. App settings have been automatically set to de-emphasize the posts that users see about campaigns and candidates. And political misinformation is harder to track on the platforms after Meta removed transparency tools that journalists and researchers used to monitor the sites.

[…]

“It’s quite the pendulum swing because a decade ago, everyone at Facebook was desperate to be the face of elections,” said Katie Harbath, chief executive of Anchor Change, a tech consulting firm, who previously worked at Facebook.

Facebook used to have an entire category of “Government and Politics” advertising case studies through 2016 and 2017; it was removed by early 2018. I wonder if anything of note happened in the intervening months. Anything at all.

All of this discussion has so far centred on U.S. politics; due to the nature of reporting, that will continue for the remainder of this piece. I wonder if Meta is ostensibly minimizing politics everywhere. What are the limits of that policy? Its U.S. influence is obviously very loud and notable, but its services have taken hold — with help — around the world. No matter whether it moderates those platforms aggressively or it deprioritizes what it identifies as politically sensitive posts, the power remains U.S.-based.

Theodore Schleifer and Mike Isaac, in the other Times article about Zuckerberg personally, under a headline claiming he “is done with politics”, wrote about the arc of his philanthropic work, which he does with his wife, Dr. Priscilla Chan:

Two years later, taking inspiration from Bill Gates, Mr. Zuckerberg and Dr. Chan established the Chan Zuckerberg Initiative, a philanthropic organization that poured $436 million over five years into issues such as legalizing drugs and reducing incarceration.

[…]

Mr. Zuckerberg and Dr. Chan were caught off guard by activism at their philanthropy, according to people close to them. After the protests over the police killing of George Floyd in 2020, a C.Z.I. employee asked Mr. Zuckerberg during a staff meeting to resign from Facebook or the initiative because of his unwillingness at the time to moderate comments from Mr. Trump.

The incident, and others like it, upset Mr. Zuckerberg, the people said, pushing him away from the foundation’s progressive political work. He came to view one of the three central divisions at the initiative — the Justice and Opportunity team — as a distraction from the organization’s overall work and a poor reflection of his bipartisan point-of-view, the people said.

This foundation, like similar ones backed by other billionaires, appears to be a mix of legitimate interests for Chan and Zuckerberg, and a vehicle for tax avoidance. I get that its leadership tries to limit its goals and focus on specific areas. But to be in any way alarmed by internal campaigning? Of course there are activists there! One cannot run a charitable organization claiming to be “building a better future for everyone” without activism. That Zuckerberg’s policies at Meta are an issue for foundation staff points to the murky reality of billionaire-controlled charitable initiatives.

Other incidents piled up. After the 2020 election, Mr. Zuckerberg and Dr. Chan were criticized for donating $400 million to the nonprofit Center for Tech and Civic Life to help promote safety at voting booths during pandemic lockdowns. Mr. Zuckerberg and Dr. Chan viewed their contributions as a nonpartisan effort, though advisers warned them that they would be criticized for taking sides.

The donations came to be known as “Zuckerbucks” in Republican circles. Conservatives, including Mr. Trump and Representative Jim Jordan of Ohio, a Republican who is chairman of the House Judiciary Committee, blasted Mr. Zuckerberg for what they said was an attempt to increase voter turnout in Democratic areas.

This is obviously a bad faith criticism. In what healthy democracy would lawmakers actively campaign against voter encouragement? Zuckerberg ought to have stood firm. But it is one of many recent clues as to Zuckerberg’s thinking.

My pet theory is Zuckerberg is not realigning on politics — either personally or as CEO of Meta — out of principle; I am not even sure he is changing at all. He has always been sympathetic to more conservative voices. Even so, it is important for him to show he is moving toward overt libertarianism. In the United States, politicians of both major parties have been investigating Meta for antitrust concerns. Whether the effort by Democrats is in earnest is a good question. But the Republican efforts have long been dominated by a persecution complex where they believe U.S. conservative voices are being censored — something which has been repeatedly shown to be untrue or, at least, lacking context. If Zuckerberg can convince Republican lawmakers he is listening to their concerns, maybe he can alleviate the bad faith antitrust concerns emanating from the party.

I would not be surprised if Zuckerberg’s statements encourage Republican critics to relent. Unfortunately, as in 2016, that is likely to taint any other justifiable qualms with Meta as politically motivated. Recall how even longstanding complaints about Facebook’s practices, privacy-hostile business, and moderation turned into a partisan argument. The giants of Silicon Valley have every reason to expect ongoing scrutiny. After Meta’s difficult 2022, it is now worth more than ever before — the larger and more influential it becomes, the more skepticism it should expect.

Hannah Murphy, Financial Times:

Some suggest Zuckerberg has been emboldened by X’s Musk.

“With Elon Musk coming and literally saying ‘fuck you’ to people who think he shouldn’t run Twitter the way he has, he is dramatically lowering the bar for what is acceptable behaviour for a social media platform,” said David Evan Harris, the Chancellor’s public scholar at the University of California, Berkeley and a former Meta staffer. “He gives Mark Zuckerberg a lot of permission and leeway to be defiant.”

This is super cynical. It also feels, unfortunately, plausible for both Zuckerberg and Meta as a company. There is a vast chasm of responsible corporate behaviour which opened up in the past two years and it seems like it is giving room to already unethical players to shine.

See Also: Karl Bode was a guest on “Tech Won’t Save Us” to discuss Zuckerberg’s P.R. campaign with Paris Marx.