Following yesterday’s ruling finding Apple has disregarded a U.S. court’s instructions to permit links to external purchases from within iOS apps under reasonable terms, the publisher of MacDailyNews responded with the site’s take. In case you are not already familiar, MacDailyNews has a consistently right-libertarian editorial slant. It is not one I agree with, but that has only the tiniest bit of relevance to this commentary.
It’s too bad Gonzalez Rogers expected Apple to provide a service that she ordered for free, because it makes no sense for Apple to do such a thing. Gonzales ordered Apple to allow developers to advertise lower prices elsewhere within Apple’s App Store. It is Apple’s App Store. Despite what Epic Games wishes and misrepresents, the App Store is not a public utility. Apple built it. Apple maintains it. Apple owns it, not Epic Games or some ditzy U.S. District Judge. Advertising within Apple’s App Store has value, a fee for which its owner has every right to charge, regardless of whatever the blank-eyed Gonzalez Rogers, bless her heart, expected.
I am sure there are plenty of people out there who believe Apple is entitled to run the iOS App Store as it sees fit. It is an argument with which I have sympathies outweighed by disagreements, but I get it.
What I do not get is describing a U.S. district court judge as “ditzy”.
It is an adjective invoked by MacDailyNews to describe just two people: Gonzalez Rogers and former European Commissioner for Competition Margrethe Vestager. It is an inherently sexist term — a cheap shot thrown at women who have placed legal restrictions on the world’s most valuable corporation. Agree or disagree with their work, this kind of response is pathetic.
Anyone desperate to be a misogynist, however, had better be certain the rest of their argument is airtight. And MacDailyNews falls on its face.
Gonzalez Rogers has not demanded an entirely free ride. In fact, she gave Apple substantial opportunity to explain how it arrived at (PDF) a hefty 27% commission rate for external purchases. Apple did not do so. It took hearings this year to learn it went so far as to get the Analysis Group to produce a report which happened to find (PDF) Apple was responsible for “up to 30% of a developer’s revenue”. But, Gonzalez Rogers writes, this study was not the basis for Apple’s justification for a 27% cut for external purchases, nor could it have been, because it was produced after records show Apple had already decided on that rate. It was reverse-engineered to maintain Apple’s entirely unjustified high commission rate.
Apple was afforded ample opportunity to respond to the Injunction. It chose to defy this Court’s order and manufacture post hoc justifications for maintaining an anticompetitive revenue stream. Apple’s actions to misconstrue the Injunction continue to impede competition. This Court will not play “whack-a-mole,” nor will it tolerate further delay.
Apple could have handled this in a legally justifiable way that might plausibly have preserved some reasonable commission on some sales. It did not, so now the court says no commission whatsoever is permissible. Simple. Besides, developers pay for hardware, a developer membership, and plenty of Apple’s services. They are not getting a free ride just by linking to an external payment option.
Moreover, developers do not “advertise” in the App Store. They can, but that is not what is being adjudicated in this case.
Media commentators can disagree on this ruling, on the provisions of the Digital Markets Act, and on Apple’s treatment of developers. There are many legitimate views and angles, and I think it is great to see so much discussion about this leading up to WWDC. But we can all do this without resorting to lazy sexism. Do better.
In September 2021, U.S. judge Yvonne Gonzalez Rogers issued a judgement in Epic Games’ case against Apple. She mostly sided with Apple but, critically, ruled third-party developers must be permitted to link to external purchasing mechanisms from within their apps.
Even that barest of changes, however, has apparently been too onerous for Apple to comply with in the spirit the court intended. Instead of collecting a typical 30% commission on in-app purchases, Apple said it would take 27% of external purchases made within seven days of someone using an in-app link. This sucks. The various rules Apple implemented, including the different commission rate, have been a problem ever since. In a ruling today, Gonzalez Rogers finds Apple’s measures do not comply with the court’s expectations.
Apple willfully violated a 2021 injunction that came out of the Epic Games case, Judge Yvonne Gonzalez Rogers said in a court filing on Wednesday.
[…]
Rogers added that she referred the matter to U.S. attorneys to investigate whether to pursue criminal contempt proceedings on both [Apple executive Alex] Roman and Apple.
The judge’s order (PDF) is full of withering remarks and findings, like this footnote on the sixth page (citation removed):
Apple’s “entitlement” perspective and mantra persisted beyond the Injunction. For example, Apple’s Communications Director, Marni Goldberg, texted her colleague during the first evidentiary hearings, that “It’s Our F***ING STORE.” Not surprisingly (nor convincingly), she did not “recall” sending those messages.
There are several points like these where the judge makes clear she does not appreciate Apple’s obstinate approach. But the business-related findings are notable, too. For example, this passage on pages 17–18 (citations removed for clarity):
Further, in May 2023, Apple through Oliver and others received feedback from Bumble, a large, well-known developer on Apple’s and Google’s alternative billing programs. Bumble specifically advised Apple that “[p]ayment processing fees average out significantly higher than the 4% fee reduction currently offered by Google in the [user choice billing] program or [the] 3% fee in Apple’s … solution resulting in negative margin for developers.” In other words, Bumble explained to Apple that a “3% discount” was not economically viable for a developer because the external cost of payments exceeds 3%. Apple’s own internal assessment from February 2023 reflects data meriting the same conclusion — that the external costs of payments for developers on link-out purchases would exceed Apple’s 3% discount if it demanded a 27% rate.
The evidence uncovered in the 2025 hearing demonstrated Apple’s knowledge and expectation that the restrictions would effectively dissuade any real developer participation, to Apple’s economic advantage.
To all those who have said Apple’s regulatory and legal responses have amounted to malicious compliance, you appear to be correct. Stripping away the more formal language, as the judge has done here, reveals how fed up she is with Apple’s petulant misconduct.
Throughout this filing, Phil Schiller comes across very well, unlike fellow executives Luca Maestri, the aforementioned Alex Roman, and Tim Cook. In internal discussions, he consistently sounds like the most reasonable voice in the room — though Gonzalez Rogers still has stern words for him. (For example, Schiller claimed external purchasing links alongside in-app options would make users more susceptible to fraud, even though under Apple’s rules it must review and approve those links. The judge writes “[n]o real-time business documents credit that view”.)
Gonzalez Rogers also has critical words about Apple’s current visual interface design patterns. In a section on page 32 featuring screenshots of button styles developers would be permitted to use for external links, she writes of a “plain link or button style” not dissimilar to many post-iOS 7 “buttons”:
Nothing about either example appears to be a “button,” by the ordinary usage and understanding of the word. There is, certainly, an external-link icon next to the call to action and hyperlink, but Apple strains to call either of these strings of text a “button.”
Yet, of a subsequent screenshot featuring one button of this style and another with a rounded rectangle background:
The lower example is readily identifiable as a button.
A final set of passages I would like to point to in this filing concerns the court’s suspicion of Apple’s intellectual property justification for charging such onerous fees in the first place. Some of this is repeated from other judgements and filings in this case, but it is quite something to read it all together. For example, in a footnote on page 60 (citations removed for clarity):
[…] Apple also argues that the question of whether Apple’s commission appropriately reflects the value of its intellectual property is not an issue for injunction compliance, and that it is legitimate for a business to promote the value of its corporation for stockholders. Apple misses the point. The issue is that Apple flouted the Court’s order by designing a top-down anticompetitive system, in which its commission played a fundamental role.

For the same reasons, the Court disagrees that requiring Apple to set a commission of zero constitutes an unconstitutional taking. For instance, as described infra Section IV, in the trademark context, “a party who has once infringed is allowed less leniency for purposes of injunction enforcement than an innocent party.” Apple does not have an absolute right to the intellectual property that it wields as a shield to competition without adequate justification of its value. Apple was provided with an opportunity to value that intellectual property and chose not to do so.
On page 21, the judge cites an internal email on the topic:
[T]he team has discussed variations on the commission options with lower rates, but we struggled to land on ironclad pricing rationales that would (1) stand up to scrutinizing comparisons with defenses of the commission and existing discounting approaches in other jurisdictions and (2) that we could substantiate solidly on a bottoms up basis without implicitly devaluing our IP / proprietary technology.
The justification for Apple’s commission is entirely fictional. The company is not expected to, in its words, “give away [its] technology for free”, but it is clearly charging commissions like these simply because it can. It owns the platform and it believes it is entitled to run it in any way it chooses. At Apple’s scale, however, that argument does not fly.
Legal bodies around the world are requiring similar changes, and Apple’s reluctance to rip off the bandage and adopt a single global policy seems increasingly stupid. The longer it drags this stuff out, the worse it looks.
I am sure there are smart people at Apple who believe they are morally and legally correct to keep fighting. But Gonzalez Rogers accused an executive of lying under oath, seems to find the rest of the company’s executive team legally contemptible, and finds the behaviour of the world’s most valuable company pretty outrageous. All of this because, according to the company’s internal records on page 42, it might “lose over a billion [dollars] in revenue” if 25% of users chose to use external purchase links and the company collected no commission on them.
Alex Heath, of the Verge, spoke with Aravind Srinivas, CEO of Perplexity, earlier this week, and they had quite the conversation.
Many publishers have been upset with you for scraping their content. You’ve started cutting some of them checks. Do you feel like you’re in a good place with publishers now, or do you feel there’s still more work to be done?
I’m sure there’s more work to be done, but it’s in a way better place than it was last time we spoke. We are scraping but respecting robots.txt. We only use third-party data providers for anything that doesn’t allow us to scrape.
Heath has no followup, no request for clarification — nothing — so I am not sure if I am reading this right, but I think I am. Srinivas here says that Perplexity’s scraper itself respects website owners’ permissions but, if it is disallowed, the company gets the same data through third parties. If a website owner does not want their data used by Perplexity, they must disallow its own scraper plus every single scraper that might sell to Perplexity. That barely resembles “respecting robots.txt”.
Again, I could have this wrong, but Heath does not bother to clarify this response.
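To make the loophole concrete, here is a minimal sketch using Python’s standard urllib.robotparser module. The robots.txt rules and the third-party bot name are illustrative; PerplexityBot is the user agent Perplexity publicly documents for its crawler. The point is that a Disallow rule only binds the crawler it names:

from urllib import robotparser

# A hypothetical robots.txt that blocks Perplexity's documented crawler.
ROBOTS_TXT = """\
User-agent: PerplexityBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Perplexity's own crawler is disallowed...
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))     # False

# ...but an unrelated crawler sails through the same rules, and nothing
# stops it from reselling what it fetches to Perplexity afterward.
print(rp.can_fetch("SomeOtherCrawler", "https://example.com/article"))  # True

A robots.txt entry is a request addressed to a specific user agent, not a lock on the data, which is what makes the third-party arrangement Srinivas describes so slippery.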
Perplexity is currently working on its own web browser, Comet, and has signalled interest in buying Chrome should Google be forced to divest it. Srinivas calls it a “containerized operating system” and explains the company’s thinking in response to a question about ChatGPT’s Memory feature:
Our strategy is to allow people to stay logged in where they are. We’re going to build a browser, and that’s how we’ll access apps on behalf of the user on the client side.
I think memory will be won by the company that has the most context. ChatGPT knows nothing about what you buy on Instagram or Amazon. It also knows nothing about how much time you spend on different websites. You need to have all this data to deeply personalize for the user. It’s not about who rolls out memory based on the retrieval of past queries. That’s very simple to replicate.
If you are a money person, there is a logical next step to this, which Srinivas revealed on a small podcast with a couple of finance bros when they asked a question that just so happened to promote one of their sponsors: “how are you thinking about advertising in the context of search? […] Is there a future where, if you search for ‘what’s the best corporate card?’, Ramp is going to show up at the top if they bid on that?”, to which Srinivas responded “hopefully not” before going on to explain how Perplexity could eventually become ad-supported.
“On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you,” he explained.
Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them.
“We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there,” he said.
These are comments on a podcast and perhaps none of this will come to pass, but anyone can see how this is financially alluring. The “business friendly” but privacy hostile environment of the U.S. means companies like Perplexity can do this stuff with impunity. Its pitch sounds revolting now — exactly how Google’s behaviourally targeted ads sounded twenty years ago.
Perplexity is another careless business. It does not care if a website has specifically prohibited it from scraping; Perplexity will simply rely on a third-party scraper. Perplexity does not care about your privacy. I see no reason to treat this as a problem specific to individual companies, and these technologies do not respect geographic boundaries, either. We need better protections as users, which means governments that take privacy seriously enacting real policy.
But this industry is moving too fast. It is a “race”, after all, and any attempts to regulate it are either knocked down or compromised. There is a real need for lawmakers and regulators who care about privacy as a fundamental human right. These companies do not care and will not regulate themselves.
I would have loved to have been a fly on the wall when Mark Zuckerberg and the rest of Meta’s leadership team found out about Sarah Wynn-Williams’ book “Careless People”. This conversation could have taken place anywhere, but I imagine it was in a glass-walled meeting room which would allow one of these millionaires’ shouted profanities to echo. That feels right. Wynn-Williams, a former executive at Facebook, is well-placed to tell stories of the company’s indifference to user privacy, its growth-at-all-costs mentality, its willingness to comply with the surveillance and censorship requirements of operating in China, and alleged sexual harassment by Joel Kaplan. And, of course, its inaction in Myanmar that played a “determining role” in a genocide that killed thousands in one month in 2017 alone.
Based on some of the anecdotes in this book, I am guessing Zuckerberg, Kaplan, and others learned about this from a public relations staffer. That is how they seem to learn about a lot of pretty important things. The first public indication this book was about to be released came in the form of a preview in the Washington Post. Apparently, Meta had only found out about it days before.
It must have been a surprise as Meta’s preemptive response came in the form of a barely formatted PDF sent to Semafor, and it seems pretty clear the company did not have an advance copy of the book because all of its rebuttals are made in broad strokes against the general outline reported by the Post. Now that I have read the book, I thought it would be fun and educational to compare Meta’s arguments against the actual claims Wynn-Williams makes. I was only half right — reading about the company’s role in the genocide in Myanmar remains a chilling exercise.
A caveat: Wynn-Williams’ book is the work of a single source — it is her testimony. Though there are references to external documents, there is not a single citation anywhere in the thing. In an article critical of the book, Katie Harbath, one of Wynn-Williams’ former colleagues, observes how infrequently credit is given to others. And it seems that, as with most non-fiction books, it was not fact-checked. That is not to disparage the book or its author, but only to note that this is one person’s recollections, centred on her own actions and perspective.
One other caveat: I have my own biases against Meta. I think Mark Zuckerberg is someone who had one good idea and has since pretended to be a visionary while stumbling between acquisitions and bad ideas. Meta’s services are the product of bad incentives, an invasive business model, and a caustic relationship with users. I am predisposed to embracing a book validating those feelings. I will try to be fair.
Anyway, the first thing you will notice is that most of the points in Meta’s response do not dispute the stories Wynn-Williams tells; instead, the company wants everyone to think it is all “old news” and it resents all this stuff being dredged up again. Yet even though this is apparently a four-hundred-page rehash, Meta is desperate to silence Wynn-Williams in a masterful gambit. But of course Wynn-Williams is going to write about things we already know a little bit about; even so, there is much to learn.
Most of Meta’s document is dedicated to claims about the company’s history with China, beginning with the non-news that it wanted to expand into the country:
SWW’s “New” Claim:
Facebook Had A Desire To Operate In China.
Old News:
Zuckerberg Addressed This In 2019 Televised Speech. Mark himself said in a televised address in 2019, “[He] wanted our services in China … and worked hard to make this happen. But we could never come to agreement on what it would take for us to operate there. That is why we don’t operate our services in China today.”
No, it is not just you: the link in this section is, indeed, broken, and has been since this document was published, even though the page title suggests it was available six months prior. Meta’s communications staff must have known this because they include a link to the transcript, too. No, I cannot imagine why Meta thought it made sense to send people to an inactive video.
At any rate, Zuckerberg’s speech papers over the lengths to which the company, and Zuckerberg personally, went to ingratiate themselves with leaders in China. Wynn-Williams dedicates many pages of her book to examples, but there is only one I want to focus on for now.
But let me begin with the phrase “we could never come to agreement on what it would take for us to operate there”. In the context of this speech’s theme, the importance of free expression, this sounds like Meta had a principled disagreement with the censorship required of a company operating in China. This was not true. Which brings me to another claim Meta attempts to reframe:
SWW’s “New” Claim:
Facebook Developed Censorship Tools For Use By Chinese Officials.
Old News:
2016 New York Times Report On Potential Facebook Software Being Used By Facebook In Regard To China; Noted It “So Far Gone Unused, And There Is No Indication That Facebook Has Offered It To The Authorities In China.” […]
This is not a denial.
Wynn-Williams says she spent a great deal of time reading up on Facebook’s strategy in China after being told to take it over on a temporary basis in early 2017. Not only was the company okay with censoring users’ posts on behalf of the Chinese government, it viewed the capability as leverage and built software to help. She notices in one document that “the ‘key’ offer is that Facebook will help China ‘promote safe and secure social order’”:
I find detailed content moderation and censorship tools. There would be an emergency switch to block any specific region in China (like Xinjiang, where the Uighurs are) from interacting with Chinese and non-Chinese users. Also an “Extreme Emergency Content Switch” to remove viral content originating inside or outside China “during times of potential unrest, including significant anniversaries” (like the June 4 anniversary of the Tiananmen Square pro-democracy protests and subsequent repression).
Their censorship tools would automatically examine any content with more than ten thousand views by Chinese users. Once this “virality counter” got built, the documents say that Facebook deployed it in Hong Kong and Taiwan, where it’s been running on every post.
Far from being unable, on principle, to come to “agreement on what it would take for us to operate [in China]”, Facebook was completely fine with the requirements of the country’s censorial regime until it became more of a liability. Then, it realized it was a better deal to pay lobbyists to encourage the U.S. government to ban TikTok.
SWW’s “New” Claim:
Facebook Tested Stealth App In China.
Old News:
2017 New York Times Headline: “In China, Facebook Tests The Waters With A Stealth App”. “We have long said that we are interested in China, and are spending time understanding and learning more about the country in different ways,” Facebook said in a statement.
Wynn-Williams provides plenty more detail than is contained in the Times report. I wish I could quote many pages of the book without running afoul of copyright law or feeling like a horrible person, so here is a brief summary: Facebook created an American shell company, which spun up another shell company in China. Facebook moved one of its employees to an unnamed “Chinese human resources company” and made her the owner of its Chinese shell company. A subsidiary of that company then launched two lightly reskinned Facebook apps in China, which probably violated Chinese data localization laws. And I need to quote the next part:
“Are Mark and Sheryl okay with it?” I ask.
He [Kaplan] admits that they weren’t aware of it.
I am unsure I believe Zuckerberg did not know about this arrangement.
As far as I can find, most of these details are new. The name of the Chinese shell company’s subsidiary was published by the Times, but the only reference I can find — predating this book — to the other shell companies is in an article on Sohu. I cannot find any English-language coverage of this situation.
Nor, it should be said, can I find any discussion of the legality of Facebook’s Chinese operations, the strange leadership listed in the shell companies’ documentation, the other apps Facebook apparently released in China, or the fallout after the Times article was published. Meta’s anticipatory response to Wynn-Williams’ book pretends none of this is newsworthy. I found it captivating.
SWW’s “New” Claim:
Chinese Dissident Guo Wengui Was Removed Due To Pressure From The Chinese Government.
Old News:
The Reasons Facebook Removed Guo Wengui From The Platform Were Publicly Reported In 2017; Unpublished His Page And Suspended His Profile Because Of Repeated Violations Of Company’s Community Standards.
Wynn-Williams says this is false. She quotes the same testimony Facebook’s general counsel gave before a Senate Intelligence Committee hearing, but only after laying out the explicit discussions between Facebook and the Cyberspace Administration of China saying the page needed to be removed. At a time when Facebook was eager to expand into China, the company’s compliance was expected.
Then we get to Myanmar:
SWW’s “New” Claim:
Facebook Dragged Its Feet On Myanmar Services.
Old News:
Facebook Publicly Acknowledged Myanmar Response In 2018. The facts here have been public record since 2018, and we have said publicly we know we were too slow to act on abuse on our services in Myanmar […]
“Too slow to act” is one way to put it — phrasing that, while not absolving Meta, obscures a more offensive reality.
Myanmar, Wynn-Williams writes, was a particularly good market for Facebook’s Free Basics program, a partnership with local providers that gives people free access to services like Facebook and WhatsApp, plus a selection of other apps and websites. It is an obvious violation of net neutrality principles, the kind Zuckerberg framed in terms of civil rights in the United States, but seemed to find more malleable in India. Wynn-Williams’ description of the company’s campaign to save it in India, one that was ultimately unsuccessful, is worth reading on its own. Facebook launched Free Basics in Myanmar in 2016; two years later, the company pulled the plug.
From 2014 to 2017, Kevin Roose reported for the New York Times, Facebook’s user base in Myanmar — a country of around fifty million people — grew from two million to thirty million, surely accelerated by Free Basics. During that time period, Wynn-Williams writes, Facebook observed a dramatic escalation in hate speech directed toward the Rohingya group. As this began, Meta had a single contractor based in Dublin who was reviewing user reports. One person.
Also, according to Wynn-Williams, Facebook is not available in Burmese, local users “appear to be using unofficial Facebook apps that don’t offer a reporting function”, and there is an incompatibility between Unicode and Zawgyi. As a result, fewer reports are received, they are not necessarily in a readable format, and they are processed by one person several time zones away.
Leading up to the November 2015 election, Wynn-Williams says Facebook doubled the number of Burmese-speaking moderators — to two, total — plus another two for election week. She says she worked for more than a year to get management’s attention on Myanmar-specific policies, only for Kaplan to reject the idea in early 2017.
From the end of August to the end of September that year, 6,700 people were killed. Hundreds of thousands more were forced to leave the country. All of this was accelerated by Facebook’s inaction and ineptitude. “Too slow to act” hardly covers how negligent Facebook was at the scale of this atrocity.
SWW’s New Claim:
Facebook Offered Advertisers The Ability To Target Vulnerable 13-17 Year Olds.
Old News:
Claim Was Based On A 2017 Article By The Australian, Which Facebook Refuted. “On May 1, 2017, The Australian posted a story regarding research done by Facebook and subsequently shared with an advertiser. The premise of the article is misleading. Facebook does not offer tools to target people based on their emotional state.”
In fact, Wynn-Williams quotes the refutation, calling it a “flat-out lie”. She says this is one of at least two similar slide decks discussing targeting ads based on a teenager’s emotional state, and in an internal discussion, finds an instance where Facebook apparently helped target beauty ads to teenage girls when they deleted pictures of themselves from their accounts. She also writes that the deputy chief privacy officer confirmed it is entirely possible to target ads based on implied emotion.
After the “lie” was released as a statement anyway, Wynn-Williams says she got a phone call from a Facebook ad executive frustrated by how it undermined the company’s pitch to advertisers for precise targeting.
SWW’s New Claim:
Donald Trump Was Charged Less Money For Incendiary Adverts Which Reached More People.
The argument made by Meta’s Andrew Bosworth, in a since-deleted Twitter thread, was that Trump’s average CPM was almost always higher than that of the Clinton campaign. But while this is one interpretation of the Wired article — one the publication disputed — it is not the claim made by Wynn-Williams. She only says the high levels of engagement on Trump’s ads drove their prices down, not that they were necessarily cheaper than Clinton’s.
Those were all of the claims Meta chose to preemptively dispute or reframe. The problem is that Wynn-Williams did make news in this book. There are plenty of examples of high-level discussions, including quoted email messages, showing the contemporaneous mindset of building its user base and market share no matter the cost. It is ugly. Meta’s public relations team apparently thought it could get in front of this thing without having all the details, but its efforts were weak and in vain.
Following its publication, Meta not only sought and obtained a legal ruling that Wynn-Williams must stop talking about the book, it followed its “playbook for answering whistleblowers”. I am not a highly paid public relations person; I assume those who are might be able to explain whether this strategy is effective. It does not seem as though it is, however.
This is all very embarrassing for Meta, yet seems entirely predictable. Like I wrote, I have my own biases against the company. I already thought it was a slimy company and I hate how much this confirms my expectations. So I think it is important to stay skeptical. Perhaps there are other sides to these stories. But nothing in “Careless People” seems unbelievable — only unseemly and ugly.
A little over a month ago, I had the misfortune of breaking both a fifteen-year record of intact phone screens and, relatedly, my phone’s screen. This sucked in part because I can no longer be so smug about not using a case, and also because I do not have AppleCare. I do have insurance coverage on my credit card. I had never gone through this process before, but it would turn out to be perfectly adequate.
There was some paperwork involved, of course. There was a fair bit of waiting, too. But I got paid in full for the repair. That is, while I had to put $450 up front, I ultimately paid nothing to fix my own accidental damage. For comparison, if I had purchased AppleCare when I got my iPhone 15 Pro, at $13.50 per month, it would have cost me over $240, plus $40 for the screen replacement itself.
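For anyone who wants to check my arithmetic, here is the whole comparison in a few lines of Python. The eighteen-month ownership window is my assumption, counting roughly from the iPhone 15 Pro’s launch to the repair; the rest of the figures are from above:

months_owned = 18              # assumption: launch day to repair
applecare_monthly = 13.50      # AppleCare monthly price
applecare_deductible = 40.00   # screen replacement fee under AppleCare
repair_cost = 450.00           # out-of-warranty repair, paid up front
reimbursement = 450.00         # credit card insurance paid it all back

applecare_total = applecare_monthly * months_owned + applecare_deductible
card_total = repair_cost - reimbursement

print(f"AppleCare route: ${applecare_total:.2f}")      # $283.00
print(f"Card insurance route: ${card_total:.2f}")      # $0.00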
I am not saying you should not buy AppleCare or extended warranties in general. Go bananas. In my case, I do not think it would have been worth $280 to save some paperwork and time.
This is also an example of how having a lower income makes everything cost more. There are usually minimum financial requirements for having a card with insurance like this, a level I thankfully meet. It allowed me to briefly shoulder the out-of-warranty repair cost and, ultimately, gave me a free repair. If I did not meet that threshold, I would have had to pay monthly instalments of $13.50 just in case I dropped my phone. It does not seem right I paid nothing for this repair when someone earning less than I do would have paid so much.
There is a long line of articles questioning Apple’s ability to deliver on artificial intelligence because of its position on data privacy. Today, we got another in the form of a newsletter from Semafor’s Reed Albergotti.
Meanwhile, Apple was focused on vertically integrating, designing its own chips, modems, and other components to improve iPhone margins. It was using machine learning on small-scale projects, like improving its camera algorithms.
[…]
Without their ads businesses, companies like Google and Meta wouldn’t have built the ecosystems and cultures required to make them AI powerhouses, and that environment changed the way their CEOs saw the world.
Again, I will emphasize this is a newsletter. It may seem like an article from a prestige publisher that prides itself on “separat[ing] the facts from our views”, but you might notice how, aside from citing some quotes and linking to ads, none of Albergotti’s substantive claims are sourced. This is just riffing.
I remain skeptical. Albergotti frames this as both a mindset shift and a necessity for advertising companies like Google and Meta. But the company synonymous with the A.I. boom, OpenAI, does not have the same business model. Besides, Apple behaves like other A.I. firms by scraping the web and training models on massive amounts of data. The evidence for this theory seems pretty thin to me.
But perhaps a reluctance to be invasive and creepy is one reason why personalized Siri features have been delayed. I hope Apple does not begin to mimic its peers in this regard; privacy should not be sacrificed. I think it is silly to be dependent on corporate choices rather than legislation to determine this, but that is the world some of us live in.
Let us concede the point anyhow, since it suggests a role Apple could fill by providing an architecture for third-party A.I. on its products. It does not need to deliver everything to end users; it can focus on building a great platform. Albergotti might sneeze at “designing its own chips […] to improve iPhone margins”, which I am sure was one goal, but it has paid off in ridiculously powerful Macs perfect for A.I. workflows. And, besides, it has already built some kind of plugin architecture into Apple Intelligence because it has integrated ChatGPT. There is no way for other providers to add their own extension — not yet, anyhow — but the system is there.
The crux of the issue in my mind is this: Apple has a lot of good ideas, but they don’t have a monopoly on them. I would like some other folks to come in and try their ideas out. I would like things to advance at the pace of the industry, and not Apple’s. Maybe with a blessed system in place, Apple could watch and see how people use LLMs and other generative models (instead of giving us Genmoji that look like something Fisher-Price would make). And maybe open up the existing Apple-only models to developers. There are locally installed image processing models that I would love to take advantage of in my apps.
Which brings me to my second point. The other feature that I could see Apple market for a “ChatGPT/Claude via Apple Intelligence” developer package is privacy and data retention policies. I hear from so many developers these days who, beyond pricing alone, are hesitant toward integrating third-party AI providers into their apps because they don’t trust their data and privacy policies, or perhaps are not at ease with U.S.-based servers powering the popular AI companies these days. It’s a legitimate concern that results in lots of potentially good app ideas being left on the table.
One of Apple’s specialties is improving the experience of using many of the same technologies as everyone else. I would like to see that in A.I., too, but I have been disappointed by its lacklustre efforts so far. Even long-running projects where it has had time to learn and grow have not paid off, as anyone can see in Siri’s legacy.
What if you could replace these features? What if Apple’s operating systems were great platforms by which users could try third-party A.I. services and find the ones that fit them best? What if Apple could provide certain privacy promises, too? I bet users would want to try alternatives in a heartbeat. Apple ought to welcome the challenge.
In just about every discussion concerning TikTok’s ability to operate within the United States, including my own, two areas of concern are cited: users’ data privacy, and the manipulation of public opinion through its feeds by a hostile foreign power. Regarding the first, no country, the U.S. and Canada included, is serious about the mishandling of private information unless it passes comprehensive data privacy legislation, so we can ignore that for now. The latter argument, however, merited my writing thousands of words in that single article. So let me dig into it again from a different angle.
In a 2019 speech at Georgetown University, Mark Zuckerberg lamented an apparently lost leadership by the U.S. in technology. “A decade ago, almost all of the major internet platforms were American,” he said. “Today, six of the top ten are Chinese”.
Incidentally, Zuckerberg gave this speech the same year in which his company announced, after five years of hard work and ingratiation, it was no longer pursuing an expansion into China which would necessarily require it to censor users’ posts. It instead decided to mirror the denigration of Chinese internet companies by U.S. lawmakers and lobbied for a TikTok ban. This does not suggest a principled opposition on the grounds of users’ free expression. Rather, it was seen as a good business move to expand into China until it became more financially advantageous to try to get Chinese companies banned stateside.
I do not know where Zuckerberg got his “six of the top ten” number. The closest I could find was five — based solely on total user accounts. Regardless of the actual number, Zuckerberg was correct to say Chinese internet companies have been growing at a remarkable rate, but it is a little misleading; aside from TikTok, these apps mostly remain a domestic phenomenon. WeChat’s user base, for example, is almost entirely within China, though it is growing internationally as one example of China’s “Digital Silk Road” initiative.
Tech companies from the U.S. still reign supreme nearly everywhere, however. The country exports the most popular social networks, messaging services, search engines, A.I. products, CDNs, and operating systems. Administration after administration has recognized the value to the U.S. of boosting the industry for domestic and foreign policy purposes. It has been a soft power masterstroke for decades.
In normal circumstances, this is moderately concerning for those of us in the rest of the world. Policies made in the U.S. — whether set by companies because of cultural biases or, in areas like privacy and antitrust, shaped by its legal understanding — may not reflect expectations in other regions, and it is not ideal that so much of modern life depends so heavily on the actions of a single country.
These are not normal circumstances — especially here, in Canada. The president of the U.S. is deliberately weakening the Canadian economy in an attempt to force us to cede our sovereignty. Earlier this week, while he was using his extraordinary platform to boost the price of Tesla shares, the president reiterated this argument while also talking about increasing the size of the U.S. military. This is apparently all one big joke, in broadly the same way that pushing a live chicken into oncoming traffic is a joke. Many people have wasted many hours and words trying to understand why he is so focused on this fifty-first state nonsense — our vast natural resources, perhaps, or the potential for polar trade routes because of warming caused by those vast natural resources. But this is someone whose thoughts, in the words of David Brooks, “are often just six fireflies beeping randomly in a jar”. He said why he wants Canada in that Tesla infomercial. “When you take away that [border] and you look at that beautiful formation,” he said while gesticulating with his hands in a shape unlike the combined area of Canada and the United States but quite like how someone of his vibe might crassly describe a woman’s body, “there is no place anywhere in the world that looks like that”. We are nothing more than a big piece of land, and he would like to take it.
Someone — I believe it was Musk, standing just beside him — then reminded him of how he wants Greenland, too, which put a smile on his face as he said “if you add Greenland […] it’s gonna look beautiful”. In the Oval Office yesterday, he sat beside NATO’s Mark Rutte and said “we have a couple of [military] bases on Greenland already”, and “maybe you will see more and more soldiers go there, I don’t know”. It is all just a big, funny joke, from a superpower with the world’s best-funded military, overseen by a chaotic idiot. Ha ha ha.
The U.S. has become a hostile foreign power to Canada and, so, we should explore its dominance in technology under the same criteria as it has China’s purported control over TikTok and how that has impacted U.S. sovereignty. If, for instance, it makes sense to be concerned about the obligation of Chinese companies to reflect ruling party ideology, it is perhaps more worrisome U.S. tech companies are lining up to do so voluntarily. They have a choice.
Similarly, should we be suspicious that our Instagram feeds and Google searches are being tilted in a pro-U.S. direction? I am certain one could construct a study similar to those indicating a pro-China bias on TikTok (PDF) with U.S. platforms. Is YouTube pushing politically divisive videos to Canadians in an effort to weaken our country? Is the pro-U.S. A.I. slop Facebook suggests to Canadians something more than algorithmic noise?
This is before considering Elon Musk, who, as both a special government employee and the owner of X, is more directly controlling than Chinese officials are speculated to be over TikTok. X has become a case study all of its own in state influence over social media. Are the feeds of Canadian users being manipulated? Is his platform a quasi-official propaganda outlet?
Without evidence, these ideas all strike me as conspiracy-brained nonsense. I imagine one could find just as much to support these ideas as is found in those TikTok studies, a majority of which observe the effects of select searches. The Network Contagion one (PDF), linked earlier, is emblematic of these kinds of reports, about which I wrote last year referencing two other examples — one written for the Australian government, and a previous Network Contagion report:
The authors of the Australian report conducted a limited quasi-study comparing results for certain topics on TikTok to results on other social networks like Instagram and YouTube, again finding a handful of topics which favoured the government line. But there was no consistent pattern, either. Search results for “China military” on Instagram were, according to the authors, “generally flattering”, and X searches for “PLA” scarcely returned unfavourable posts. Yet results on TikTok for “China human rights”, “Tianamen”, and “Uyghur” were overwhelmingly critical of Chinese official positions.
The Network Contagion Research Institute published its own report in December 2023, similarly finding disparities between the total number of posts with specific hashtags — like #DalaiLama and #TiananmenSquare — on TikTok and Instagram. However, the study contained some pretty fundamental errors, as pointed out by — and I cannot believe I am citing these losers — the Cato Institute. The study’s authors compared total lifetime posts on each social network and, while they say they expect 1.5–2.0× the posts on Instagram because of its larger user base, they do not factor in how many of those posts could have existed before TikTok was even launched. Furthermore, they assume similar cultures and a similar use of hashtags on each app. But even benign hashtags have ridiculous differences in how often they are used on each platform. There are, as of writing, 55.3 million posts tagged “#ThrowbackThursday” on Instagram compared to 390,000 on TikTok, a ratio of 141:1. If #ThrowbackThursday were part of this study, the disparity on the two platforms would rank similarly to #Tiananmen, one of the greatest in the Institute’s report.
The problem with most of these complaints, as their authors acknowledge, is that there is a known input and a perceived output, but there are oh-so-many unknown variables in the middle. It is impossible to know how much of what we see is a product of intentional censorship, unintentional biases, bugs, side effects of other decisions, or a desire to cultivate a less stressful and more saccharine environment for users. […]
The more recent Network Contagion study is perhaps even less reliable. It comprises a similar exploration of search results, and surveys comparing TikTok users’ views to those of non-users. In the first case, the researchers only assessed four search terms: Tibet, Tiananmen, Uyghur, and Xinjiang. TikTok’s search results produced the fewest examples of “anti-China sentiment” in comparison with Instagram and YouTube, but the actual outcomes were not consistent. Results for “Uyghur” and “Xinjiang” on TikTok were mostly irrelevant; on YouTube, however, nearly half of a user’s journey would show videos supportive of China for both queries. Results for “Tibet” were much more likely to be “anti-China” on Instagram than the other platforms, though similarly “pro-China” as TikTok.
These queries are obviously sensitive in China, and I have no problem believing TikTok may be altering search results. But this study, like the others I have read, is not at all compelling if you start picking it apart. For the “Uyghur” and “Xinjiang” examples, researchers say the heavily “pro-China” results on YouTube are thanks to “pro-CCP media assets” and “official or semi-official CCP media sources” uploading loads of popular videos with a positive spin. Sometimes, TikTok is more likely to show irrelevant results; at other times, it shows “neutral” videos, which the researchers say are things like unbiased news footage. In some cases — as with results for “Tiananmen” and “Uyghur” — TikTok was similarly likely to show “pro-China” and “anti-China” results. The researchers hand-wave away these mixed outcomes by arguing “the TikTok search algorithm systematically suppresses undesirable anti-China content while flooding search results with irrelevant content”. Yet the researchers document no effort to compare the results of these search terms with anything else — controversial or politically sensitive terms outside China, for example, or terms which result in overwhelmingly dour results, or generic apolitical terms. In all cases, TikTok returns more irrelevant results than the other platforms; maybe it is just bad at search. We do not know because we have nothing to compare it to. Again, I have no problem believing TikTok may be suppressing results, but this study does not convince me it is uniformly reflecting official Chinese government lines.
As for the survey results, they show TikTok users had more favourable views of China as a travel destination and were less concerned about its human rights abuses. This could plausibly be explained by TikTok users skewing younger and, therefore, growing up seeing a much wealthier China than older generations. Younger people might simply be less aware of human rights abuses. For contrast, people who do not use TikTok are probably more likely to have negative views of China — not just because they are more likely to be older, but because they are suspicious of the platform. “When controlling for age,” the researchers say, “TikTok use significantly and uniquely predicted more positive perceptions of China’s human rights record” among video-based platforms, but Facebook users also had more positive perceptions, and nobody is claiming Facebook is in the bag for China. Perhaps there are other reasons — but they go unexplored.
This is a long digression, but it indicates to me just how easy it would be to construct a similar case about social media’s impact on Canada. In my own experience on YouTube — admittedly different from a typical experience because I turned off video history — the Related Videos on just about everything I watch are packed with recommendations for Fox News, channels dedicated to people getting “owned”, and pro-Trump videos. I do not think YouTube is trying to sway me into a pro-American worldview and shed my critical thinking skills, but one could produce a compelling argument for it.
This is something we will need to pay increasing attention to. People formerly with Canadian intelligence are convinced the U.S. president is doing to Canada in public what so many before him have done to fair-weather friends in private. They believe his destabilization efforts may be supported by a propaganda strategy, particularly on Musk’s X. These efforts may not be unique to social media, either. Postmedia, the publisher of the National Post plus the most popular daily newspapers in nearly every major Canadian city, is majority U.S.-owned. This is not good.
Yet we should not treat social platforms the same as we do media organizations. We should welcome foreign-owned publications to cover our country, but the ownership of our most popular outlets should be primarily domestic. The internet does not work like that — for both good and bad — nor should we expect it to. Requiring Canadian subsidiaries of U.S. social media companies or banning them outright would continue the ongoing splintering of internet services with little benefit for Canadians or, indeed, the expectations of the internet. We should take a greater lead in determining our digital future without being so hostile to foreign services. That means things like favouring protocols over platforms, which give users more choice over their experience, and permit a level of autonomy and portability. It means ensuring a level of digital sovereignty with our most sensitive data.
It is also a reminder to question the quality of automated recommendations and search results. We do not know how any of them work — companies like Google often cite third-party manipulation as a reason to keep them secret — and I do not know that people would believe tech companies if they were more transparent in their methodology. To wit, digital advertisements often have a button or menu item explaining why you are seeing that particular ad, but it has not stopped many people from believing their conversations are picked up by a device’s microphone and used for targeting. If TikTok released the source for its recommendations engine, would anyone trust it? How about if Meta did the same for its products? I doubt it; nobody believes these companies anyway.
The tech industry is facing declining public trust. The United States’ reputation is sinking among allies and its domestic support for civil rights is in freefall. Its leader is waging economic war on the country where I live. CEOs lack specific values and are following the shifting tides. Yet our world relies on technologies almost entirely dependent on the stability of the U.S., which is currently in short supply. The U.S., as Paris Marx wrote, “needs to know that it cannot dominate the tech economy all on its own, and that the people of the world will no longer accept being subject to the whims of its dominant internet and tech companies”. The internet is a near-miraculous global phenomenon. Restricting companies based on their country of origin is not an effective way to correct this imbalance. But we should not bend to U.S. might, either. It is, after all, just one country of many. The rest of the world should encourage it to meet us at our level.
Apple spokesperson Jacqueline Roy, in a statement provided seemingly first to both Stephen Nellis, of Reuters, and John Gruber:
[…] We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.
Unsurprisingly, this news comes on a Friday and is announced via a carefully circulated statement instead of on Apple’s own website. It is a single feature, but it is a high-priority one showcased in its celebrity-infused ads for its flagship iPhone models. I think Apple ought to have published its own news instead of relying on other outlets to do its dirty work, but it fits a pattern. It happened with AirPods and again with AirPower; the former has become wildly successful, while the latter was canned.
This announcement reflects rumours of the feature’s fraught development. Mark Gurman, Bloomberg, in February:
The company first unveiled plans for a new AI-infused Siri at its developers conference last June and has even advertised some of the features to customers. But Apple is still racing to finish the software, said the people, who asked not to be identified because the situation is private. Some features, originally planned for April, may have to be postponed until May or later, they said.
Do note how “or later” is doing the bulk of the work in this paragraph. Nevertheless, he seems to have forecast the delay announced today.
While it is possible to reconcile Apple’s “coming year” timeline with Gurman’s May-or-later availability while staying within the current release cycle, the statement is a tacit acknowledgement these features are now slated for the next major versions of Apple’s operating systems, perhaps no sooner than a release early next year. I do not see why Apple would have issued this statement if it were confident it could ship personalized Siri before September’s new releases. That is a long time between marketing and release for any company, and particularly so for Apple.
This is a risk of announcing something before it is ready, something the WWDC keynote is increasingly filled with. Instead of monolithic September releases with occasional tweaks throughout the year, Apple adopted a more incremental strategy. I would like to believe this has made Apple’s software more polished — or, less charitably, slowed its quality decline. What it has actually done is turn Apple’s big annual software presentation into a series of promises to be fulfilled throughout the year. To its credit, it has almost always delivered and, so, it has been easy to assume the hot streak will continue. This is a good reminder we should treat anything not yet shipping in a release or beta build as a potential feature only.
The delay may ultimately be good news. It is better for Apple to ship features that work well than it is to get things out the door quickly. Investors do not seem bothered; try spotting the point on today’s chart when Gruber and Reuters published the statements they received. And, anyway, most Apple Intelligence features released so far seem rushed and faulty. I would not want to see more of the same. Siri has little reputation to lose, so it makes sense to get this round of changes more right than not.
Besides, Apple only just began including the necessary APIs in the latest developer betas of iOS 18.4. No matter when Apple gets this out, the expansiveness of its functionality is dependent on third-party buy-in. There was a time when developer adoption of new features was a given. That is no longer the case.
According to Gurman as recently as earlier this week, a May release is possible (Update: Oops, I should have checked again.), but I would bet against it. If his reporting is to be believed, however, the key features announced as delayed today require a wholly different architecture which Apple was planning on merging with the existing Siri infrastructure midway through the iOS 19 cycle. It seems possible to me the timeline on both projects could now be interlinked. After all, why not? It is not like Siri is getting worse. No rush.
Remember when new technology felt stagnant? All the stuff we use — laptops, smartphones, watches, headphones — coalesced around a similar design language. Everything became iterative or, in more euphemistic terms, mature. Attempts to find a new thing to excite people mostly failed. Remember how everything would change with 5G? How about NFTs? How is your metaverse house? The world’s most powerful hype machine could not make any of these things stick.
This is not necessarily a problem in the scope of the world. There should be a point at which any technology settles into a recognizable form and function. These products are, ideally, utilitarian — they enable us to do other stuff.
But here we are in 2025 with breakthroughs in artificial intelligence and, apparently, quantum computing and physics itself. The former is something I have written about at length already because it has become adopted so quickly and so comprehensively — whether we like it or not — that it is impossible to ignore. But the news in quantum computers is different because it is much, much harder for me to grasp. I feel like I should be fascinated, and I suppose I am, but mainly because I find it all so confusing.
This is not an explainer-type article. This is me working things out for myself. Join me. I will not get far.
Today I’m delighted to announce Willow, our latest quantum chip. Willow has state-of-the-art performance across a number of metrics, enabling two major achievements.
The first is that Willow can reduce errors exponentially as we scale up using more qubits. This cracks a key challenge in quantum error correction that the field has pursued for almost 30 years.
Second, Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10²⁵) years — a number that vastly exceeds the age of the Universe.
Microsoft today introduced Majorana 1, the world’s first quantum chip powered by a new Topological Core architecture that it expects will realize quantum computers capable of solving meaningful, industrial-scale problems in years, not decades.
It leverages the world’s first topoconductor, a breakthrough type of material which can observe and control Majorana particles to produce more reliable and scalable qubits, which are the building blocks for quantum computers.
Microsoft says it created a new state of matter and observed a particular kind of particle, both for the first time. In a twelve-minute video, the company defines this new era — called the “quantum age” — as a literal successor to the Stone Age and the Bronze Age. Jeez.
There is hype, and then there is hype. This is the latter. Even if it is backed by facts — I have no reason to suspect Microsoft is lying in large part because, to reiterate, I do not know anything about this — and even if Microsoft deserves this much attention, it is a lot. Maybe I have become jaded by one too many ostensibly world-changing product launches.
There is good reason to believe the excitement shown by Google and Microsoft is not pure hyperbole. The problem is neither company is effective at explaining why. As of writing the first sentence of this piece, my knowledge of quantum computers was only that they can be much, much, much faster than any computer today, thanks to the unique properties of quantum mechanics and, specifically, quantum bits. That is basically all. But what does a wildly fast computer enable in the real world? My brain can only grasp the consumer-level stuff I use, so I am reminded of something I wrote when the first Mac Studio was announced a few years ago: what utility does speed have?
I am clearly thinking in terms far too small. Domenico Vicinanza wrote a good piece for the Conversation earlier this year:
Imagine being able to explore every possible solution to a problem all at once, instead of one at a time. It would allow you to navigate your way through a maze by simultaneously trying all possible paths at the same time to find the right one. Quantum computers are therefore incredibly fast at finding optimal solutions, such as identifying the shortest path, the quickest way.
This explanation helped me — not a lot, but a little bit. What I remain confused by are the examples in the announcements from Google and Microsoft. Why quantum computing could help “discover new medicines” or “lead to self-healing materials” seems like it should be obvious to anyone reading, but I do not get it.
I am suspicious in part because technology companies routinely draw links between some new buzzy thing they are selling and globally significant effects: alleviating hunger, reducing waste, fixing our climate crisis, developing alternative energy sources, and — most of all — revolutionizing medical care. Search the web for (hyped technology) cancer and you can find this kind of breathless revolutionary language drawing a clear line between cancer care and 5G, 6G, blockchain, DAOs, the metaverse, NFTs, and Web3 as a whole. This likely says as much about insidious industries that take advantage of legitimate qualms with the medical system and fears of cancer as it does about the technologies themselves, but it is nevertheless a pattern with these new technologies.
I am not even saying these promises are always wrong. Technological advancement has surely led to improving cancer care, among other kinds of medical treatments.
I have no big goal for this post — no grand theme or message. I am curious about the promises of quantum computers for the same reason I am curious about all kinds of inventions. I hope they work in the way Google, Microsoft, and other inventors in this space seem to believe. It would be great if some of the world’s neglected diseases can be cured and we could find ways to fix our climate.
But — and this is a true story — I read through Microsoft’s various announcement pieces and watched that video while I was waiting on OneDrive to work properly. I struggle to understand how the same company that makes a bad file syncing utility is also creating new states of matter. My brain is fully cooked.
In November 2023, two researchers at the University of California, Irvine, and their supervisor published “Dazed and Confused”, a working paper about Google’s reCAPTCHAv2 system. They write mostly about how irritating and difficult it is to use, and also explore its privacy and labour costs — and it is that last section that gave me some doubts when I first noticed the paper being passed around in July.
I was content to leave it there, assuming this paper would be chalked up as one more curiosity on a heap of others on arXiv. It has not been subjected to peer review at any journal, as far as I can figure out, nor can I find another academic article referencing it. (I am not counting the dissertation by one of the paper’s authors summarizing its findings.) Yet parts of it are on their way to becoming zombie statistics. Mike Elgan, writing in his October Computerworld column, repeated the paper’s claim that “Google might have profited as much as $888 billion from cookies created by reCAPTCHA sessions”. Ted Litchfield of PC Gamer included another calculation alleging solving CAPTCHAs “consum[ed] 7.5 million kWhs of energy[,] which produced 7.5 million pounds of CO2 pollution”; the article is headlined reCAPTCHAs “[…] made us spend 819 million hours clicking on traffic lights to generate nearly $1 trillion for Google”. In a Boing Boing article earlier this month, Mark Frauenfelder wrote:
[…] Through analyzing over 3,600 users, the researchers found that solving image-based challenges takes 557% longer than checkbox challenges and concluded that reCAPTCHA has cost society an estimated 819 million hours of human time valued at $6.1 billion in wages while generating massive profits for Google through its tracking capabilities and data collection, with the value of tracking cookies alone estimated at $888 billion.
I get why these figures are alluring. CAPTCHAs are heavily studied; a search of Google Scholar for “CAPTCHA” returns over 171,000 results. As you might expect, most are adversarial experiments, but there are several examining usability and others examining privacy. However, I could find just one previous paper correlating, say, emissions and CAPTCHA solving, and it was a joke paper (PDF) from the 2009 SIGBOVIK conference, “the Association for Computational Heresy Special Interest Group”. Choice excerpt: “CAPTCHAs were the very starting point for human computation, a recently proposed new field of Computer Science that lets computer scientists appear less dumb to the world”. Excellent.
So you can see why the claims of the U.C. Irvine researchers have resonated in the press. For example, here is what they — Andrew Searles, Renascence Tarafder Prapty, and Gene Tsudik — wrote in their paper (PDF) about emissions:
Assuming un-cached scenarios from our technical analysis (see Appendix B), network bandwidth overhead is 408 KB per session. This translates into 134 trillion KB or 134 Petabytes (194 x 1024 Terrabytes [sic]) of bandwidth. A recent (2017) survey estimated that the cost of energy for network data transmission was 0.06 kWh/GB (Kilowatt hours per Gigabyte). Based on this rate, we estimate that 7.5 million kWh of energy was used on just the network transmission of reCAPTCHA data. This does not include client or server related energy costs. Based on the rates provided by the US Environmental Protection Agency (EPA) and US Energy Information Administration (EIA), 1 kWh roughly equals 1-2.4 pounds of CO2 pollution. This implies that reCAPTCHA bandwidth consumption alone produced in the range of 7.5-18 million pounds of CO2 pollution over 9 years.
Obviously, any emissions are bad — but how much is 7.5–18 million pounds of CO2 over nine years in context? A 2024 working paper from the U.S. Federal Housing Finance Agency estimated residential properties each produce 6.8 metric tons of CO2 emissions from electricity and heating, or about 15,000 pounds. That means CAPTCHAs produced as much CO2 as providing utilities to 55–133 U.S. houses per year. Not good, sure, but not terrible — at least, not when you consider the 408 kilobyte session transfer against, say, Google’s homepage, which weighs nearly 2 MB uncached. Realistically, CAPTCHAs are not a meaningful burden on the web or our environment.
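Since these figures keep getting repeated without scrutiny, here is the arithmetic in one place — a quick back-of-the-envelope check in Python, using only the paper’s own inputs and the FHFA per-house estimate:

```python
# A back-of-the-envelope check of the figures above. All inputs are the
# paper's own numbers plus the FHFA per-house estimate; the rest is
# plain arithmetic.
bandwidth_gb = 134e12 / 1e6            # paper: 134 trillion KB ≈ 134 million GB
kwh = bandwidth_gb * 0.06              # paper's cited rate: 0.06 kWh per GB
print(f"{kwh / 1e6:.1f} million kWh")  # ≈ 8.0; the paper rounds to 7.5

co2_low_lb, co2_high_lb = 7.5e6, 18e6  # paper: 1–2.4 lb CO2 per kWh, over 9 years
house_lb_per_year = 15_000             # FHFA: 6.8 metric tons ≈ 15,000 lb per year
print(f"{co2_low_lb / 9 / house_lb_per_year:.1f}–"
      f"{co2_high_lb / 9 / house_lb_per_year:.1f} houses per year")  # ≈ 55.6–133.3
```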
The numbers in this discussion section are suspect. From these CO2 figures to the value of reCAPTCHA cookies — apparently worth nearly half of Google’s revenue in the years since it acquired the company — I find the evidence for them lacking. Yet they continue to circulate in print and, now, in a Vox-esque mini documentary.
The video, on the CHUPPL “investigative journalism” YouTube channel, was created by Jack Joyce. I found it via Frauenfelder, of Boing Boing, and it was also posted by Daniel Sims at TechSpot and Emma Roth at the Verge. The roughly 17-minute mini-doc has been watched nearly 200,000 times, and the CHUPPL channel has over 350,000 subscribers. Neither number is massive for YouTube, but it is not a small number of viewers, either. Four of the ten videos from CHUPPL have achieved over a million views apiece. This channel has a footprint. But watching the first half of its reCAPTCHA video is what got me to open BBEdit and start writing this thing. It is a masterclass in how the YouTube video essay format and glossy production can mask bad journalism. I asked CHUPPL several questions about this video and did not receive a response by the time I published this.
Let me begin at the beginning:
How does this checkbox know that I’m not a robot? I didn’t click any motorcycles or traffic lights. I didn’t even type in distorted words — and yet it knew. This infamous tech is called reCAPTCHA and, when it comes to reach, few tools rival its presence across the web. It’s on twelve and a half million websites, quietly sitting on pages that you visit every day, and it’s actually not very good at stopping bots.
While Joyce provides sources for most claims in this video, there is not one for this specific number. According to BuiltWith, which tracks technologies used on websites, the claim is pretty accurate — it sees reCAPTCHA used on about twelve million websites, and it is the most popular CAPTCHA script.
But Google has far more popular products than this if it wants to track you across the web. Google Maps, for example, is on over 15 million live websites, Analytics is on around 31 million, and AdSense is on nearly 49 million. I am not saying we should not be concerned about reCAPTCHA because it is on only twelve million sites — but that number needs context. If Google wants to track user activity across the web, AdSense is explicitly designed for that purpose. Yes, it is probably true that “few tools rival its presence across the web”, but you can say that of just about any technology from Google, Meta, Amazon, Cloudflare, and a handful of other giants — especially Google.
Back to the intro:
It turns out reCAPTCHA isn’t what we think it is, and the public narrative around reCAPTCHA is an impossibly small sliver of the truth. And by accepting that sliver as the full truth, we’ve all been misled. For months, we followed the data, we examined glossed over research, and uncovered evidence that most people don’t know exists. This isn’t the story of an inconsequential box. It’s the story of a seemingly innocent tool and how it became a gateway for corporate greed and mass surveillance. We found buried lawsuits, whispers of the NSA, and echoes of Edward Snowden. This is the story of the future of the Internet and who’s trying to control it.
The claims in this introduction vastly oversell what will be shown in this video. The lawsuits are not “buried”, they were linked from the reCAPTCHA Wikipedia article as it appeared before the video was published. The “whispers” and “echoes” of mass surveillance disclosures will prove to be based on almost nothing. There are real concerns with reCAPTCHA, and this video does justice to almost none of them.
The main privacy problems with reCAPTCHA are found in its ubiquity and its ownership. Google swears up and down it collects device and user behaviour data through reCAPTCHA only for better bot detection. It issued a statement saying as much to Techradar in response to the “Dazed and Confused” paper circulating again. In a 2021 blog post announcing reCAPTCHA Enterprise — the latest version combining V2, V3, and the mobile SDKs under a single brand — Google says:
Today, reCAPTCHA Enterprise is a pure security product. Information collected is used to provide and improve reCAPTCHA Enterprise and for general security purposes. We don’t use this data for any other purpose.
[…] Additionally, none of the data collected can be used for personalized advertising by Google.
Google goes on to explain that it collects data as a user navigates through a website to help determine if they are a bot without having to present a challenge. Again, it is adamant none of this data is used to feed its targeted advertising machine.
There are a couple of problems with this. First, because Google does not disclose exactly how reCAPTCHA works, its promise requires that you trust the company. It is not a great idea to believe the word of corporations in general. Specifically, in Google’s case, a leak of its search ranking signals last year directly contradicted its public statements. But, even though Google was dishonest then, there is currently no evidence reCAPTCHA data is being misused in the way Joyce’s video suggests. Coyly asking questions with sinister-sounding music underneath is not a substitute for evidence.
The second problem is the way Google’s privacy policy can be interpreted, as reported by Thomas Claburn in 2020 in the Register:
Zach Edwards, co-founder of web analytics biz Victory Medium, found that Google’s reCAPTCHA’s JavaScript code makes it possible for the mega-corp to conduct “triangle syncing,” a way for two distinct web domains to associate the cookies they set for a given individual. In such an event, if a person visits a website implementing tracking scripts tied to either those two advertising domains, both companies would receive network requests linked to the visitor and either could display an ad targeting that particular individual.
You will hear from Edwards later in Joyce’s video making a similar argument. Just because Google can do this, it does not mean it is actually doing so. It has the far more popular AdSense for that.
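For what it is worth, the technique Edwards describes is not magic. Here is a conceptual sketch — emphatically not Google’s code, with entirely invented log entries — of how two domains receiving requests from the same page load could associate their separate cookies:

```python
# A conceptual sketch of "triangle syncing" as Edwards describes it —
# not Google's code. Each domain sets its own cookie; because both
# receive a request during the same page load, their logs can be joined
# to link the two identifiers. All log entries here are made up.
logs_domain_a = [
    {"cookie": "A-123", "page": "https://example.com/", "ip": "203.0.113.7"},
]
logs_domain_b = [
    {"cookie": "B-987", "page": "https://example.com/", "ip": "203.0.113.7"},
]

for a in logs_domain_a:
    for b in logs_domain_b:
        # The same page and IP (plus, realistically, a timestamp) imply
        # the same visitor — so the two cookies are now linked.
        if (a["page"], a["ip"]) == (b["page"], b["ip"]):
            print(f"linked {a['cookie']} <-> {b['cookie']}")
```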
ReCAPTCHA interacts with three Google cookies when it is present: AEC, NID, and OGPC. According to Google, AEC is “used to detect spam, fraud, and abuse” including for advertising click fraud. I could not find official documentation about OGPC, but it and NID appear to be used for advertising for signed-out users. Of these, NID is most interesting to me because it is also used to store Google Search preferences, so someone who uses Google’s most popular service is going to have it set regardless, and its value is fixed for six months. Therefore, it is possible to treat it as a unique identifier for that time.
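You can get a rough sense of this yourself: a bare, signed-out request is enough to receive cookies, though what you get will vary by region, consent rules, and whatever Google is doing this week, so treat this as exploratory only:

```python
# Inspect which cookies Google sets for a signed-out visitor. Results
# vary by region and consent requirements; this is exploratory, not
# definitive.
import requests

session = requests.Session()
session.get(
    "https://www.google.com/",
    headers={"User-Agent": "Mozilla/5.0"},  # a bare client may get a stripped-down page
)
for cookie in session.cookies:
    # The expiry is the interesting part: a long-lived value like NID's
    # is what makes a cookie usable as a persistent identifier.
    print(cookie.name, cookie.expires)
```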
I could not find a legal demand of Google specifically for reCAPTCHA history. But I did find a high-profile request to re-identify NID cookies. In 2017, the first Trump administration began seizing records from reporters, including those from the New York Times. The Times uses Google Apps for its email system. That administration and then the Biden one tried obtaining email metadata, too, while preventing Times executives from disclosing anything about it. In the warrant (PDF), the Department of Justice demands of Google:
PROVIDER is required to disclose to the United States the following records and other information, if available, for the Account(s) for the time period from January 14, 2017, through April 30, 2017, constituting all records and other information relating to the Account(s) (except the contents of communications), including:
[…]
Identification of any PROVIDER account(s) that are linked to the Account(s) by cookies, including all PROVIDER user IDs that logged into PROVIDER’s services by the same machine as the Account(s).
And by “cookies”, the government says that includes “[…] cookies related to user preferences (such as NID), […] cookies used for advertising (such as NID, SID, IDE, DSID, FLC, AID, TAID, and exchange_uid) […]” plus Google Analytics cookies. This is not the first time Google’s cookies have been used in intelligence or law enforcement matters — the NSA has, of course, been using them that way for years — but it is notable for being an explicit instance of tying the NID cookie, which is among those used with reCAPTCHA, to a user’s identity. (Google says site owners can use a different reCAPTCHA domain to disassociate its cookies.) Also, given the effort of the Times’ lawyers to release this warrant, it is not surprising I was unable to find another public document containing similar language. I could not find any other reporting on this cookie-based identification effort, so I think this is news. In this case, Google successfully fought the government’s request for email metadata.
Assuming Google retains these records, what the Department of Justice was demanding would be enough to connect a reCAPTCHA user to other Google product activity and a Google account holder using the shared NID cookie. Furthermore, it is a problem that so much of the web relies on a relative handful of companies. Google has long treated the open web as its de facto operating system, coercing site owners to use features like AMP or making updates to comply with new search ranking guidelines. It is not just Google that is overly controlling, to be fair — I regularly cannot access websites on my iMac because Cloudflare believes I am a robot and it will not let me prove otherwise — but it is the most significant example. Its fingers in every pie — from site analytics, to fonts, to advertising, to maps, to browsers, to reCAPTCHA — mean it has a unique vantage point from which to see how billions of people use the web.
These are actual privacy concerns, but you will learn none of them from Joyce’s video. You will instead be on the receiving end of a scatterbrained series of suggestions of reCAPTCHA’s singularly nefarious quality, driven by just-asking-questions conspiratorial thinking, without reaching a satisfying destination.
From here on, I am going to use timecodes as reference points. 1:56:
Journalists told you such a small sliver of the truth that I would consider it to be deceptive.
Bad news: Joyce is about to be fairly deceptive while relying on the hard work of journalists.
At 3:24:
Okay, you’re probably thinking “why does any of this matter?”, and I agree with you.
I did agree with you. I actually halted this investigation for a few weeks because I thought it was quite boring — until I went to renew my passport. (Passport status dot state dot gov.)
I got a CAPTCHA — not a checkbox, not fire hydrants, but the old one. And I clicked it. And it took me here.
The “here” Joyce mentions is a page at captcha.org, which is redirected from its original destination at captcha.com. The material is similar on both. The ownership of the .org domain is unclear, but the .com is run by Captcha, Inc., and it sells the CAPTCHA package used by the U.S. Department of State among other government departments. I have a sneaking suspicion the .org retains some ties to Captcha, Inc. given the DNS records of each. Also, the list of CAPTCHA software on the .org site begins with all the packages offered by Captcha, Inc., and its listing for reCAPTCHA is outdated — it does not display Google as its owner, for example — but the directory’s operators found time to add the recaptcha.sucks website.
About that. 4:07:
An entire page dedicated to documenting the horrors of reCAPTCHA: alleging national security implications for the U.S. and foreign governments, its ability to doxx users, mentioning secret FISA orders — the same type of orders that Edward Snowden risked his life to warn us about. […]
Who put this together? “Anonymous”.
if you are a web-native journalist, wishing to get in touch, we doubt you are going to have a hard-time figuring out who we are anyway.
This felt like a key left in plain sight, whispering there’s a door nearby and it’s meant to be opened. This is what we’re good at. This is what we do.
The U.S. “national security implications” are, as you can see on screen as these words are being said, not present: “stay tuned — it will be continued”, the message from ten years ago reads. The FISA reference, meanwhile, is a quote from Google’s national security requests page acknowledging the types of data it can disclose under these demands. It is a note that FISA exists and, under that law, Google can be compelled to disclose user data — a policy that applies to every company.
This all comes from the ReCAPTCHA Sucks website. On the About page, the site author acknowledges they are a competitor and maintains their anonymity is due to trademark concerns:
a free-speech / gripe-site on trademarked domains must not be used in a ‘bad faith’ — what includes promotion of competing products and services.
and under certain legal interpretations disclosing of our identity here might be construed as a promotion of our own competing captcha product or service.
it frustrates us indeed, but those are the rules of the game.
The page concludes, as Joyce quoted:
if you are a web-native journalist, wishing to get in touch, we doubt you are going to have a hard-time figuring out who we are anyway.
Joyce reads this as a kind of spooky challenge yet, so far as I can figure out, did not attempt to contact the site’s operators. I asked CHUPPL about this and I have not heard back. It is not very difficult to figure out who they are: the site shares technical infrastructure, including a historic Google Analytics account, with captcha.com. It feels less like the work of a super careful anonymous tipster, and more like an open secret from an understandably cheesed competitor.
5:05:
Okay, let’s get this out of the way: reCAPTCHA is not and really has never been very good at stopping bots.
Joyce points to the success rate of a couple of reCAPTCHA breakers here as evidence of its ineffectiveness, though does not mention they were both against the audio version. What Joyce does not establish is whether these programs were used much in the real world.
In 2023, Trend Micro published research into the way popular CAPTCHA solving services operate. Despite the seemingly high success rate of automated techniques, “they break CAPTCHAs by farming out CAPTCHA-breaking tasks to actual human solvers” because there are a lot more services out there than reCAPTCHA. That is exactly how many CAPTCHA solvers market their services, though some are now saying they use A.I. instead. Also, it is not as though other types of CAPTCHAs are not subject to similar threats. In 2021, researchers solved hCAPTCHA (PDF) with a nearly 96% success rate. Being only okay at stopping bot traffic is not unique to reCAPTCHA, and these tools are just one of several technologies used to minimize automated traffic. And, true enough, none of these techniques is perfect, or even particularly successful. But that does not mean their purpose is nefarious, as Joyce suggests later in the video, at 11:45:
Google has said that they don’t use the data collected from reCAPTCHA for targeted advertising, which actually scares me a bit more. If not for targeted ads, which is their whole business model, why is Google acting like an intelligence agency?
Joyce does not answer this directly, instead choosing to speculate about a way reCAPTCHA data could be used to identify people who submit anonymous tips to the FBI — yes, really. More on that later.
5:49:
2018 was the launch of V3. According to researchers at U.C. Irvine, there’s practically no difference between V2 and V3.
Onscreen, Joyce shows an excerpt from the “Dazed and Confused” paper, and the sentence fragment “there is no discernable difference between reCAPTCHAv2 and reCAPTCHAv3” is highlighted. But just after that, you can see the sentence continues: “in terms of appearance or perception of image challenges and audio challenges”.
Screenshot from CHUPPL video.
Remember: these researchers were mainly studying the usability of these CAPTCHAs. This section is describing how users perceive the similar challenges presented by both versions. They are not saying V2 and V3 have “practically no difference” in general terms.
At 6:56:
ReCAPTCHA “takes a pixel-by-pixel fingerprint” of your browser. A real-time map of everything you do on the internet.
This part contains a quote from a 2015 Business Insider article by Lara O’Reilly. O’Reilly, in turn, cites research by AdTruth, then — as now — owned by Experian. I can find plenty of references to O’Reilly’s article but, try as I might, I have not been able to find a copy of the original report. But, as a 2017 report from Cracked Labs (PDF) points out, Experian’s AdTruth “provides ‘universal device recognition’”, “creat[ing] a ‘unique user ID’ for each device, by collecting information such as IP addresses, device models and device settings”. To the extent “pixel-by-pixel fingerprint” means anything in this context — it does not, but it misleadingly sounds to me like it is taking screenshots — Experian’s offering also fits that description. It is a problem that so many things quietly monitor user activity across a person’s entire digital footprint.
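To make “universal device recognition” concrete, here is a minimal sketch of how fingerprinting of this general sort works — the attribute set is illustrative, not a list of what AdTruth or Google actually collects:

```python
# A conceptual sketch of device fingerprinting: combining mundane
# attributes into a stable identifier. The attributes shown are
# illustrative, not what any particular vendor collects.
import hashlib

def fingerprint(attributes: dict) -> str:
    # Individually, none of these values identifies anyone; together,
    # they are often distinctive enough to recognize a returning device.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

print(fingerprint({
    "ip": "203.0.113.7",
    "device_model": "iMac20,1",
    "timezone": "America/Edmonton",
    "screen": "5120x2880",
    "language": "en-CA",
}))
```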
Unfortunately, at 7:41, Joyce whiffs hard while trying to make this point:
If there’s any part of this video you should listen to, it’s this. Stop making dinner, stop scrolling on your phone, and please listen.
When I tell you that reCAPTCHA is watching you, I’m not saying that in some abstract, metaphorical way. Right now, reCAPTCHA is watching you. It knows that you’re watching me. And it doesn’t want you to know.
This stumbles in two discrete ways. First, reCAPTCHA is owned by Google, but so is YouTube. Google, by definition, knows what you are doing on YouTube. It does not need reCAPTCHA to secretly gather that information, too.
Second, the evidence Joyce presents for why “it doesn’t want you to know” is that Google has added some CSS to hide a floating badge, a capability it documents. This is for one presentation of reCAPTCHAv2, which is as invisible background validation and where a checkbox is shown only to suspicious users.
Screenshot from CHUPPL video.
I do not think Google “does not want you to know” about reCAPTCHA on YouTube. I think it simply finds the badge distracting. Google products using other Google technologies has not been a unique concern since the company merged user data and privacy policies in 2012.
The second half of the video, following the sponsor read, is a jumbled mess of arguments. Joyce spends time on a 2015 class action lawsuit filed against Google in Massachusetts alleging completing the old-style word-based reCAPTCHA was unfairly using unpaid labour to transcribe books. It was tossed in 2016 because the plaintiff (PDF) “failed to identify any statute assigning value to the few seconds it takes to transcribe one word”, and “Google’s profit is not Plaintiff’s damage”.
Joyce then takes us on a meandering journey through the way Google’s terms of use document is written — this is where we hear from Edwards reciting the same arguments as appeared in that 2020 Register article — and he touches briefly on the U.S. v. Google antitrust trial, none of which concerned reCAPTCHA. There is a mention of a U.K. audit in 2015 specifically related to its 2012 privacy policy merger. This is dropped with no explanation into the middle of Edwards’ questioning of what Google deems “security related” in the context of its current privacy policy.
Then we get to the FBI stuff. Remember earlier when I told you Joyce has a theory about how Google uses reCAPTCHA to unmask FBI tipsters? Here is when that comes up again:
Check this out: if you want to submit a tip to the FBI, you’re met with this notice acknowledging your right to anonymity. But even though the State Department doesn’t use reCAPTCHA, the FBI and the NSA do. […] If they want to know who submitted the anonymous report, Google has to tell them.
This is quite the theory. There is video of Edward Snowden and clips from news reports about the mysteries of the FISA court. Dramatic music. A chart of U.S. government requests for user data from Google.
But why focus on reCAPTCHA when the FBI and NSA — and a whole bunch of other government sites — also use Google Analytics? Though Google says Analytics cookies are distinct from those used by its advertising services, site owners can link them together, which would not be obvious to users. There is no evidence the FBI or any other government agency is doing so. The actual problem here is that sensitive and ostensibly anonymous government sites are using any Google services whatsoever, probably because Google is a massive corporation with lots of widely-used products and services.
Even so, many federal sites use the product offered by Captcha, Inc. and it seems to respect privacy by being self-hosted. All of them should just use that. The U.S. government has its own analytics service; the stats are public. The reason for inconsistencies is probably the same reason any massive organization’s websites are fragmented: it is a lot of work to keep them unified.
Joyce juxtaposes this with the U.S. Secret Service’s use of Babel Street’s Locate X data. He does not explain any direct connection to reCAPTCHA or Google, and there is a very good reason for this: there is none. Babel Street obtained some of its location data from Venntel, which is owned by Gravy Analytics, which obtained it from personalized ads.
Joyce ultimately settles on a good point near the end of the video, saying Google uses various browsing signals “before, during, and after” clicking the CAPTCHA to determine whether you are likely human. If it does not have enough information about you — “you clear your cookies, you are browsing Incognito, maybe you are using a privacy-focused browser” — it is more likely to challenge you.
None of this is actually news. It has all been disclosed by Google itself on its website and in a 2014 Wired article by Andy Greenberg, linked from O’Reilly’s Business Insider story. This is what Joyce refers to at 7:24 in the video in saying “reCAPTCHA doesn’t need to be good at stopping bots because it knows who you are. The new reCAPTCHA runs in the background, is invisible, and only shows challenges to bots or suspicious users”. But that is exactly how reCAPTCHA stops bots, albeit not perfectly: it either knows who you are and lets you through without a challenge, or it asks you for confirmation.
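For a sense of the site owner’s side of this exchange: verification is a documented server-side call, sketched minimally below. The secret key and token are placeholders, and the 0.5 threshold is the site owner’s choice, not Google’s:

```python
# A minimal sketch of the server side of reCAPTCHA, using the documented
# siteverify endpoint. The secret key and token are placeholders; the
# 0.5 threshold is the site owner's choice, not Google's.
import requests

def verify_recaptcha(token: str, secret: str = "YOUR_SECRET_KEY") -> bool:
    response = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": secret, "response": token},
    )
    result = response.json()
    # v3 responses include a score from 0.0 (likely a bot) to 1.0 (likely
    # human). Everything feeding that score happens on Google's side; the
    # site owner only sees the verdict.
    return result.get("success", False) and result.get("score", 0.0) >= 0.5
```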
It is this very frustration I have as I try to protect my privacy while still using the web. I hit reCAPTCHA challenges frequently, especially when working on something like this article, in which I often relied on Google’s superior historical index and advanced search operators to look up stories from ten years ago. As I wrote earlier, I run into Cloudflare’s bot wall constantly on one of my Macs but not the other, and I often cannot bypass it without restarting my Mac or, ironically enough, using a private browsing window. Because I use Safari, website data is deleted more frequently, which means I am constantly logging into services I use all the time. The web becomes more cumbersome to use when you want to be tracked less.
There are three things I want to leave you with. First, there is an interesting video to be made about the privacy concerns of reCAPTCHA, but this is not it. It is missing evidence, does not put findings in adequate context, and drifts conspiratorially from one argument to another while only gesturing at conclusions. Joyce is incorrect in saying “journalists told you such a small sliver of the truth that I would consider it to be deceptive”. In fact, they have done the hard work over many years to document Google’s many privacy failures — including in reCAPTCHA. That work should bolster understandable suspicions about massive corporations ruining our right to privacy. This video is technically well produced, but it is of shoddy substance. It does not do justice to the better journalists whose work it relies upon.
Second, CAPTCHAs offer questionable utility. As iffy as I find the data in the discussion section of the “Dazed and Confused” paper, its other findings seem solid: people find it irritating to label images or select boxes containing an object. A different paper (PDF) with two of the same co-authors and four other researchers found people most like reCAPTCHA’s checkbox-only presentation — the one that necessarily compromises user privacy — but also found some people will abandon tasks rather than solve a CAPTCHA. Researchers in 2020 (PDF) found CAPTCHAs were an impediment to people with visual disabilities. This is bad. Unfortunately, we are in a new era of mass web scraping — one reason I was able to so easily find many CAPTCHA solving services. Site owners wishing to control that kind of traffic have options like identifying user agents or I.P. address strings, but all of these can be defeated. CAPTCHAs can, too. Sometimes, all you can do is pile together a bunch of bad options and hope the result is passable.
Third, this is yet another illustration of how important it is for there to be strong privacy legislation. Nobody should have to question whether checking a box to prove they are not a robot is, even in a small way, feeding a massive data mining operation. We are never going to make progress on tracking as long as it remains legal and lucrative.
I completely understand if you do not remember the “Twitter Files”. Announced with great fanfare by Elon Musk after his eager-then-reluctant takeover of the company, the project gave writers like Lee Fang, Michael Shellenberger, Rupa Subramanya, Matt Taibbi, and Bari Weiss access to internal records of historic moderation decisions. Each published long Twitter threads dripping in gravitas about their discoveries.
But after stripping away the breathless commentary and just looking at the documents as presented, Twitter’s actions did not look very evil after all. Clumsy at times, certainly, but not censorial — just normal discussions about moderation. Contrary to Taibbi’s assertions, the “institutional meddling” was research, not suppression.
Now, Musk works for the government’s DOGE temporary organization and has spent the past two weeks — just two weeks — creating chaos with vast powers and questionable legality. But that is just one of his many very real jobs. Another one is his ownership of X where he also has an executive role. Today, he decided to accuse another user of committing a crime, and used his power to suspend their account.
What was their “crime”? They quoted a Wired story naming six very young people who apparently have key roles at DOGE despite their lack of experience. The full tweet read:1
Here’s a list of techies on the ground helping Musk gaining and using access to the US Treasury payment system.
Akash Bobba
Edward Coristine
Luke Farritor
Gautier Cole Killian
Gavin Kliger
Ethan Shaotran
I wonder if the fired FBI agents may want dox them and maybe pay them a visit.
In the many screenshots I have seen of this tweet, few seem to include the last line as it is cut off by the way X displays it. Clicking “Show more” would have displayed it. It is possible to interpret this as violative of X’s Abuse and Harassment rules, which “prohibit[s] behavior that encourages others to harass or target specific individuals or groups of people with abuse”, including “behavior that urges offline action”.
X, as Twitter before it, enforces these policies haphazardly. The same policy also “prohibit[s] content that denies that mass murder or other mass casualty events took place”, but searching “Sandy Hook” or “Building 7” turns up loads of tweets which would presumably also run afoul. Turns out moderation of a large platform is hard and the people responsible sometimes make mistakes.
But the ugly suggestion made in that user’s post might not rise to the level of a material threat — a “crime”, as it were — and, so, might still be legal speech. Musk’s X also suspended a user who just posted the names of public servants. And Musk is currently a government employee in some capacity. The “Twitter Files” crew, ostensibly concerned about government overreach at social media platforms, should be furious about this dual role and heavy-handed censorship.
It was at this point in drafting this article that Mike Masnick of Techdirt published his impressions much faster than I could turn it around. I have been bamboozled by my day job. Anyway:
Let’s be crystal clear about what just happened: A powerful government official who happens to own a major social media platform (among many other businesses) just declared that naming government employees is criminal (it’s not) and then used his private platform to suppress that information. These aren’t classified operatives — they’re public servants who, theoretically, work for the American people and the Constitution, not Musk’s personal agenda.
This doesn’t just “seem like” a First Amendment issue — it’s a textbook example of what the First Amendment was designed to prevent.
So far, however, we have seen from the vast majority of them no exhausting threads, no demands for public hearings — in fact, barely anything. To his extremely limited credit, Taibbi did acknowledge it is “messed up”, going on to write:
That new-car free speech smell is just about gone now.
“Now”?
Taibbi is the only one of those authors who has written so much as a tweet about Musk’s actions. Everyone else — Fang, Shellenberger, Subramanya, and Weiss — has moved on to unsubstantive commentary about newer and shinier topics.
This is not mere hypocrisy. What Musk is doing is a far more explicit blurring of the lines between government power and platform speech permissions. This could be an interesting topic that a writer on the free speech beat might want to explore. But for a lot of them, it would align them too similarly to mainstream reporting, and their models do not permit that.
It is one of the problems with being a shallow contrarian. Because these writers must position themselves as alternatives to mainstream news coverage — “focus[ing] on stories that are ignored or misconstrued in the service of an ideological narrative”, “for people who dare to think for themselves”; how original — they suggest they cannot cover the same news, or at least not from a similar perspective, as the mainstream. This is not actually true, of course: each of them frequently publishes hot takes about high-profile stories along their particular ideological bent, which often coincides with standard centre-right to right-wing thought. They are not unbiased. Yet this widely covered story has either escaped their attention, or they have mostly decided it is not worth mentioning.
I am not saying this is a conspiracy among these writers, or that they are lackeys for Musk or Trump. What I am saying is that their supposed principles are apparently only worth expressing when they are able to paint them as speaking truth to power, and their concept of power is warped beyond recognition. It goes like this: some misinformation researchers partially funded by government are “power”, but using the richest man in the world as a source is not. It also goes like this: when that same man works for the government in a quasi-official capacity and also owns a major social media platform, it is not worth considering those implications because Rolling Stone already has an article.
They can prove me wrong by dedicating just as much effort to exposing the blurrier-than-ever lines between a social media platform and the U.S. government. Instead, they are busy reposting glowing profiles of now-DOGE staff. They are not interested in standing for specific principles when knee-jerk contrarianism is so much more thrilling.
There are going to be a lot of x.com links in this post, as it is rather unavoidable. ↥︎
The downfall of Quartz is really something to behold. It was launched in 2012 as a digital-only offshoot of the Atlantic specifically intended for business and economic news. It compared itself to esteemed publications like the Economist and Financial Times, and had a clever-for-early-2010s URL.1 It had an iPad-first layout. Six years later, it and “its own bot studio” were sold to Uzabase for a decent sum. But the good times did not last, and Quartz was eventually sold to G/O Media.
As of publishing, the “Quartz Intelligence Newsroom” has written 22 articles today, running the gamut from earnings reports to Reddit communities banning Twitter posts to the Sackler settlement to, delightfully, a couple articles about how much AI sucks. Quartz has been running AI-generated articles for months, but prior to yesterday, they appear to have been limited to summaries of earnings reports rather than news articles. Boilerplate at the bottom of these articles notes that “This is the first phase of an experimental new version of reporting. While we strive for accuracy and timeliness, due to the experimental nature of this technology we cannot guarantee that we’ll always be successful in that regard.”
MacLeod published this story last week, and I thought it would be a good time to check in on how it is going. So I opened the latest article from the “Quartz Intelligence Newsroom”, “Expected new tariffs will mean rising costs for everyday items”. It was published earlier today, and says at the top it “incorporates reporting from Yahoo, NBC Chicago and The Wall Street Journal on MSN.com”. The “Yahoo” story is actually a syndicated video from NBC’s Today Show, so that is not a great start as far as crediting sources goes.
Let us tediously dissect this article, beginning with the first paragraph:
As new tariffs are slated to take effect in early March, consumers in the U.S. can expect price increases on a variety of everyday items. These tariffs, imposed in a series of trade policy shifts, are anticipated to affect numerous sectors of the economy. The direct cost of these tariffs is likely to be passed on to consumers, resulting in higher prices for goods ranging from electronics to household items.
The very first sentence of this article appears to be wrong. The tariffs in question are supposed to be announced today, as stated in that Today Show clip, and none of the cited articles say anything about March. While a Reuters “exclusive” yesterday specified a March 1 enforcement date, the White House denied that report, with the president saying oil and gas tariffs would begin “around the 18 of February”.
To be fair to the robot writing the Quartz article, the president does not know what he is talking about. You could also see how a similar mistake could be made by a human being who read the Reuters story or has sources saying something similar. But the Quartz article does not cite Reuters — it, in fact, contains no links aside from those in the disclaimer quoted above — nor does it claim to have any basis for saying March.
The next paragraph is where things take a sloppier turn; see if you can spot it:
Data from recent analyses indicate that electronics, such as smartphones and laptops, will be among the most impacted by the new tariffs. Importers of these goods face increased costs, which they are poised to transfer to consumers. A report by the U.K.-based research firm Tech Analytics suggests that consumers might see price hikes of up to 15% on popular smartphone models and up to 10% on laptops. These increases are expected to influence consumer purchasing decisions, possibly leading to a decrease in sales volume.
If you are wondering why an article about U.S. tariffs published by a U.S. website is citing a U.K. source, you got the same weird vibe as I did. So I looked it up. And, as best I can tell, there is no U.K. research organization called “Tech Analytics” — none at all. There used to be one and, because it was only dissolved in October, it is conceivable a report appeared around then based on the president’s campaign statements. But I cannot find any record of Tech Analytics publishing anything whatsoever, or being cited in any news stories. This report does not exist.
I also could not find any source for the figures in this paragraph. Last month, the U.S. Consumer Technology Association published a report (PDF) exploring the effects of these tariffs on U.S. consumer goods. Analysis by Trade Partnership Worldwide indicated the proposed tariffs would raise the price of smartphones by 26–37%, and laptops by 46–68%. These figures assumed a rate of 70–100% on goods from China because that is what the president said he would do. He more recently said 10% tariffs should be expected, and that could mean smartphone prices really do increase by the amount in the Quartz article. However, there is again no (real) source or citation for those numbers.
As far as I can tell, Quartz, a business and financial news website, published a made-up source and some numbers in an article about a high-profile story. If a real person reviewed this story before publication, their work is not evident. Why should a reader trust anything from Quartz ever again?
Let us continue a couple of paragraphs later:
The automotive sector is also preparing for the impact of increased tariffs. Car manufacturers and parts suppliers are bracing for higher production costs as tariffs on imported steel and aluminum take hold. According to a February report from the Automobile Manufacturers Association of the U.S., vehicle prices might go up by an average of $1,500. This increase stems from the higher costs of materials that are critical to vehicle manufacturing and assembly.
Does the phrase “according to a February report” sound weird to you on the first of February? It does to me, too. Would it surprise you if I told you the “Automobile Manufacturers Association of the U.S.” does not exist? There was a U.S. trade group by the name of “Automobile Manufacturers Association” until 1999, according to Stan Luger in “Corporate Power, American Democracy, and the Automobile Industry”.2 There are also several current industry groups, none of which are named anything similar. This organization and its report do not exist. If they do, please tell me, but I found nothing relevant.
What about the figure itself, though — “vehicle prices might go up by an average of $1,500”? Again, I cannot find any supporting evidence. None of the sources cited in this article contain this number. A November Bloomberg story cites a Wolfe Research note in reporting new cars will be about $3,000 more expensive, not $1,500, at the same proposed rate as the White House is expected to announce today.
Again, I have to ask why anyone should trust Quartz with their financial news. I know A.I. makes mistakes and, as MacLeod quotes them saying, Quartz does too: “[w]hile we strive for accuracy and timeliness, due to the experimental nature of this technology we cannot guarantee that we’ll always be successful in that regard”.
This is the first article I checked, and I gave up after the fourth paragraph and two entirely fictional sources of information. Maybe the rest of the Quartz Intelligence Newsroom’s output is spotless and I got unlucky.
But — what a downfall for Quartz. Once positioning itself as the Economist for the 2010s, it is now publishing stuff that is made up by a machine and, apparently, is passed unchecked to the web for other A.I. scrapers to aggregate. G/O Media says it publishes “editorial content and conduct[s its] day-to-day business activities with the UTMOST INTEGRITY”. I disagree. I think we will struggle to understand for a long time how far and how fast standards have fallen. This is trash.
With this week’s public release of Apple’s operating system updates, Apple Intelligence is now on by default. More users will be discovering its “beta” features and Apple will, in theory, be collecting even more feedback about their quality. There are certainly issues with the output of Notification Summaries, Siri, and more.1 The flaws in results from Apple Intelligence’s many features are correctly scrutinized. Because of that, I think some people have overlooked the questionable user interface choices.
Not one of the features so far available through Apple Intelligence is particularly newsworthy from a user’s perspective. There are plenty of image generators, automatic summaries, and contextual response suggestions in other software. Apple is not breaking new ground here, in features or in strategy. It is rarely first to do anything. What it excels at is implementation. Apple often makes some feature or product, however time-worn by others, feel so well-considered it has reached its inevitable form. That is why it is so baffling to me to use features in the Apple Intelligence suite and feel like they are half-baked.
Consider, for example, Writing Tools. This is a set of features available on text in almost any application to proofread it, summarize it, and rewrite it in different styles. You may have seen it advertised. While its name implies the source text is editable, these tools work on pretty much any non-U.I. text — on webpages and in PDF files, for example — but I was not able to make them work with text detected in PNG screenshots.
Sometimes, this appears on my Mac as a blue button beside text I have highlighted. It is not consistent — the button appears in MarsEdit but not Pages; in TextEdit but not BBEdit. These tools are also available from a contextual menu, which is the correct place in MacOS for taking actions upon a selection.
In any case, Writing Tools materializes in a popover. Even though I have Reduce Transparency enabled across the system, it launches with a subtle Apple Intelligence gradient background that makes it look translucent before it fades out. This popover works a little bit like a contextual menu and a little like a panel, while doing the job of neither very successfully. Any action taken from this popover will spawn another popover. For example, selecting “Proofread” will close the Writing Tools popover and open a new, slightly wider one. After some calculation, the proofread selection will appear alongside buttons for “Replace”, “Copy”, and providing feedback. (I anticipate the latter is a function of the “beta” caveat and will eventually be removed.)
There are several problems with this, beginning with the choice to present this as a series of popovers. It is not entirely inappropriate; Apple says “[i]f you need content only temporarily, displaying it in a popover can help streamline your interface”. However, because popovers are intended for only brief interactions, they are designed to be easily dismissed, something Apple also acknowledges in its documentation. Popovers disappear if you click outside their bounds, if you switch to another window, or if you try to take an action after scrolling the highlighted text out of view. Apple has also made the choice to not cache the results of one of these tools on a passage of selected text. What can easily happen, therefore, is a user will select some text, run Proofread on it, and then — quite understandably — try to make edits to the text or perhaps switch to a different application, only to find the writing tool has disappeared, and that opening it again will necessitate processing the text again. To preserve a result, a user must select the text within the popover, or use the “Replace” or “Copy” buttons before dismissing it.
Unlike some other popovers in MacOS — like when you edit an event in Calendar — Writing Tools cannot exist as a floating, disconnected panel. It remains stubbornly attached to the selected text.
As noted, the Writing Tools popover is not the same width as the other popovers it will spawn. By sheer luck, I had one of my test windows positioned in such a way that the Writing Tools popover had enough space to display on the lefthand side of the window, but the popovers it launched appeared on the right because they are a bit wider. This made for a confusing and discordant experience.
Choice of component aside, the way the results of Writing Tools are displayed is so obviously lacklustre I am surprised it shipped in its current state. Two of the features I assumed I would find useful — as I am one person shy of an editor — are “Proofread” and “Rewrite”. But they both have a critical flaw: neither shows the differences between the original text and the changed version. For very short passages, this is not much of a problem, but a tool like “Proofread” implies use on more substantial chunks, or even a whole document. A user must carefully review the rewritten text to discover what changes were made, or place their faith in Apple and click the “Replace” button hoping all is well.
Apple could correct for all of these issues. It could display Writing Tools in a panel instead of a popover or, at least, make it possible to disconnect the popover from the selected text and transform it into a panel. It should also make every popover the same width, or require enough clearance for the widest popover spawned by Writing Tools so that they always open on the same side. It could bring to MacOS the same way of displaying differences in rewritten text as already exists on iOS but, for some reason, is not part of the Mac version. It could cache results so, if the text is unchanged, invoking the same tool again does not need to redo a successful action.
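None of this is exotic, either. Showing differences, for instance, is a well-understood problem. Here is a minimal sketch of a word-level diff using Python’s difflib — an illustration of the concept, not how Apple implements its iOS diff view:

```python
# A minimal sketch of the kind of word-level diff a proofreading tool
# could present, using Python's difflib. An illustration of the concept,
# not Apple's implementation.
import difflib

original = "Their going to the store tomorow.".split()
proofread = "They're going to the store tomorrow.".split()

matcher = difflib.SequenceMatcher(None, original, proofread)
for op, a1, a2, b1, b2 in matcher.get_opcodes():
    if op == "equal":
        print(" ".join(original[a1:a2]), end=" ")
    elif op in ("replace", "delete"):
        # Mark removals with [-...-] and insertions with {+...+}.
        print("[-" + " ".join(original[a1:a2]) + "-]", end=" ")
    if op in ("replace", "insert"):
        print("{+" + " ".join(proofread[b1:b2]) + "+}", end=" ")
print()
```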
Writing Tools on MacOS is the most obviously flawed of the Apple Intelligence features suffering from weak implementation or questionable U.I. choices, but there are other examples, too. Some quick hits:
I could not figure out how to get Image Playground to generate an illustration of my dog, something I know is possible. On my iPhone, the toolbar in Image Playground shows a box to “describe an image”, a “People” button, and a plus button. The “People” button is limited to human beings detected in your photo library, even though Photos groups “People & Pets” together. Describing an image using my dog’s name also does not work. The way to do it is to tap the plus button — which contains a “Style” selector and buttons to choose or take a photo — then select “Choose Photo” to pick something from your library as a reference.
This is somewhat more obvious in the Mac version because the toolbar is wide enough to fit the “Style” selector and, therefore, the plus button is labelled with a photo icon.
Also in Image Playground, I find the try-and-see approach as much fun as it is with Siri. I typed my dog’s breed into the image prompt, and it said it does not support the language. I then picked one photo of my dog from my photo library and it said it was “unable to use that description”. I wish the photo picker would not have shown me an option it was unable to use.
Automatic replies in Messages are unhelpful and, on MacOS, cannot be turned off without turning off Apple Intelligence altogether.
The settings for Apple Intelligence features are, by and large, not shown in the Apple Intelligence panel in Settings. That panel only contains a toggle for Apple Intelligence as a whole, a section for managing extensions — like ChatGPT — and Siri controls. Settings for individual features are instead placed in different parts of Settings or in individual apps.
I think this is the correct choice overall, but it is peculiar to have everything Apple Intelligence branded across the system with its logo and gradient — and to advertise Apple Intelligence as its own software — only to have to find the menu in Notification settings for toggling summarization in different apps.
You will note that not a single one of these criticisms is related to the output of Apple Intelligence or a complaint about its limitations. These are all user interaction problems I have experienced. Perhaps this is the best Apple is able to do right now; perhaps it considered and rejected putting Writing Tools in a panel on MacOS for a good reason.
It is unfortunate these features feel almost undesigned — like engineers were responsible for building them, and someone with human interface knowledge was only brought in afterward to sprinkle on some design. There are parts that are more visually appealing and consistent with platform expectations, like Priority Inbox in Mail. But many of the features seem more polished on iOS than they do on MacOS.
Writing Tools, in particular, can and should be better. I write a little on my iPhone, but I write a lot on my Mac — not just posts here, but also emails, messages, and social media posts. A more advanced spelling and grammar checker with at least some contextual awareness sounds very appealing to me. This is a letdown, and for such basic reasons. I do not need Apple Intelligence to be the apex of current technology. What I expect, at the very least, is that it is user-friendly and feels at home on Apple’s own platforms. It needs work.
In the public version of iOS 18.3, summaries are unavailable for apps from the News and Entertainment categories. ↥︎
Four years ago this week, social media companies decided they would stop platforming then-outgoing president Donald Trump after he celebrated seditionists who had broken into the U.S. Capitol Building in a failed attempt to invalidate the election and allow Trump to stay in power. After two campaigns and a presidency in which he tested the limits of what those platforms would allow, enthusiasm for a violent attack on government was apparently one step too far. At the time, Mark Zuckerberg explained:
Over the last several years, we have allowed President Trump to use our platform consistent with our own rules, at times removing content or labeling his posts when they violate our policies. We did this because we believe that the public has a right to the broadest possible access to political speech, even controversial speech. But the current context is now fundamentally different, involving use of our platform to incite violent insurrection against a democratically elected government.
Zuckerberg, it would seem, now has regrets — not about doing too little over those and the subsequent years, but about doing too much. For Zuckerberg, the intervening four years have been stifled by “censorship” on Meta’s platforms; so, this week, he announced a series of sweeping changes to their governance. He posted a summary on Threads but the five-minute video is far more loaded, and it is what I will be referring to. If you do not want to watch it — and I do not blame you — the transcript at Tech Policy Press is useful. The key changes:
Replace fact-checking with a Community Notes feature, similar to the one on X.
Change the Hateful Conduct policies to be more permissive about language used in discussions about immigration and gender.
Make automated violation detection tools more permissive and focus them on “high-severity” problems, relying on user reports for material the company thinks is of a lower concern.
Roll back restrictions on the visibility and recommendation of posts related to politics.
Relocate the people responsible for moderating Meta’s products from California to another location — Zuckerberg does not specify — and move the U.S.-focused team to Texas.
Work with the incoming administration on concerns about governments outside the U.S. pressuring them to “censor more”.
Regardless of whether you feel each of these is a good or bad idea, I do not think you should take Zuckerberg’s word for why the company is making these changes. Meta’s decision to stop working directly with fact-checkers, for example, is just as likely a reaction to the demands of FCC commissioner Brendan Carr, who has a bananas view (PDF) of how the First Amendment to the U.S. Constitution works. According to Carr, social media companies should be forbidden from contributing their own speech to users’ posts based on the rankings of organizations like NewsGuard. According to both Carr and Zuckerberg, fact-checkers demand “censorship” in some way. This is nonsense: they were not responsible for the visibility of posts. I do not think much of this entire concept, but surely they only create more speech by adding context, in a similar way as Meta hopes will still happen with Community Notes. Since Carr will likely be Trump’s nominee to run the FCC, it is important for Zuckerberg to get his company in line.
Meta’s overhaul of its Hateful Conduct policies also shows the disparity between what Zuckerberg says and the company’s actions. Removing rules that are “out of touch with mainstream discourse” sounds fair. What it means in practice, though, is allowing people to be more racist about COVID-19, demean women, and — of course — discriminate against LGBTQ people in more vicious ways. I understand the argument for why these things should be allowed by law, but there is no obligation for Meta to carry this speech. If Meta’s goal is to encourage a “friendly and positive” environment, why increase its platforms’ permissiveness to assholes? Perhaps the answer is in the visibility of these posts — maybe Meta is confident it can demote harmful posts while still technically allowing them. I am not.
We can go through each of these policy changes, dissect them, and consider the actual reasons for each, but I truly believe that is a waste of time compared to looking at the sum of what they accomplish. Conservatives, particularly in the U.S., have complained for years about bias against their views by technology companies, an updated version of similar claims about mass media. Despite no evidence for this systemic bias, the myth stubbornly persists. Political strategists even have a cute name for it: “working the refs”. Jeff Cohen and Norman Solomon, Creators Syndicate, August 1992:
But in a moment of candor, [Republican Party Chair Rich] Bond provided insight into the Republicans’ media-bashing: “There is some strategy to it,” he told the Washington Post. “I’m the coach of kids’ basketball and Little League teams. If you watch any great coach, what they try to do is ‘work the refs.’ Maybe the ref will cut you a little slack next time.”
Zuckerberg and Meta have been worked — heavily so. The playbook of changes outlined by Meta this week is a logical response, an attempt to court scorned users, and not just through the policy changes here. On Monday, Meta announced Dana White, UFC president and thrice-endorser of Trump, would be joining its board. Last week, it promoted Joel Kaplan, a former Republican political operative, to run its global policy team. Last year, Meta hired Dustin Carmack who, according to his LinkedIn, directs the company’s policy and outreach for nineteen U.S. states, and who previously worked for the Heritage Foundation, the Office of the Director of National Intelligence, and Ron DeSantis. These are among the people forming the kinds of policies Meta is now prescribing.
This is not a problem solved through logic. If it were, studies showing a lack of political bias in technology company policy would change more minds. My bet is that these changes will not have what I assume is the desired effect of improving the company’s standing with far-right conservatives or the incoming administration. If Meta becomes more permissive for bigots, it will encourage more of that behaviour. If Meta does not sufficiently suggest those kinds of posts because it wants “friendly and positive” platforms, the bigots will cry “shadowban”. Meta’s products will corrode. That does not mean they will no longer be influential or widely used, however; as with its forthcoming A.I. profiles, Meta is surely banking that its dominant position and a kneecapped TikTok will continue driving users and advertisers to its products, however frustratedly.
Zuckerberg appears to think little of those who reject the new policies:
[…] Some people may leave our platforms for virtue signaling, but I think the vast majority and many new users will find that these changes make the products better.
I am allergic to the phrase “virtue signalling” but I am willing to try getting through this anyway. This has been widely interpreted as because of their virtue signalling, but I think it is just as accurate if you think of it as because of our virtue signalling. Zuckerberg has complained about media and government “pressure” to more carefully moderate Meta’s platforms. But he cannot ignore how this week’s announcement also seems tied to implicit pressure. Trump is not yet the president, true, but Zuckerberg met with him shortly after the election and, apparently, the day before these changes were announced. This is just as much “virtue signalling” — particularly moving some operations to Texas for reasons even Zuckerberg says are about optics.
Perhaps you think I am overreading this, but Zuckerberg explicitly said in his video introducing the changes that “recent elections also feel like a cultural tipping point towards once again prioritizing speech”. If he means elections other than those which occurred in the U.S. in November, I am not sure which. These are changes made from a uniquely U.S. perspective. To wit, the final commitment in the list above as explained by Zuckerberg (via the Tech Policy Press transcript):
Finally, we’re going to work with President Trump to push back on governments around the world. They’re going after American companies and pushing to censor more. The US has the strongest constitutional protections for free expression in the world. Europe has an ever-increasing number of laws, institutionalizing censorship, and making it difficult to build anything innovative there. Latin American countries have secret courts that can order companies to quietly take things down. China has censored our apps from even working in the country. The only way that we can push back on this global trend is with the support of the US government, and that’s why it’s been so difficult over the past four years when even the US government has pushed for censorship.
For their part, the E.U. rejected Zuckerberg’s characterization of its policies, and Brazilian officials are not thrilled, either.
These changes — and particularly this last one — are illustrative of the devil’s bargain of large U.S.-based social media companies: they export their policies and values worldwide, following whatever whims and trends are politically convenient at the time. Right now, it is important for Meta to avoid getting on the incoming Trump administration’s shit list, so they, like everyone, are grovelling. If the rest of the world is subjected to U.S.-style discussions, so be it; we have been for a long time already. What is extraordinary about Meta’s changes is how many people will be impacted: billions, plural. Something like one-quarter of the world’s population.
The U.S. is no stranger to throwing around its political and corporate power in a way few other nations can. Meta’s changes are another entry in that canon. There are people in some countries who will benefit from having more U.S.-centric policies, but most people elsewhere will find them discordant with local expectations. These new policies will not satisfy people everywhere, but the old ones did not, either.
It is unfair to expect any platform operator to get things right for every audience, especially not at Meta’s scale. The options created by less centralized protocols like ActivityPub and AT Protocol are much more welcome. We should be given more control over our own experience than we are currently trusted with.
Zuckerberg begins his video introduction by referencing a 2019 speech he gave at Georgetown University. In it, he speaks of the internet creating “significantly broader power to call out things we feel are unjust”. “[G]iving people a voice and broader inclusion go hand in hand,” he said, “and the trend has been towards greater voice over time”. Zuckerberg naturally centred his company’s products. But you know what is even more powerful than one company at massive scale? It is when no company needs to act as the world’s communications hub. The internet is the infrastructure for that, and we would be better off if we rejected attempts to build moats.
The ads for Apple Intelligence have mostly been noted for what they show, but there is also something missing: in the fine print and in its operating systems, Apple still calls it a “beta” release, but not in its ads. Given the exuberance with which Apple is marketing these features, that label seems less like a way to inform users the software is unpolished, and more like an excuse for why it does not work as well as one might expect of a headlining feature from the world’s most valuable company.
“Beta” is a funny word when it comes to Apple’s software. The company often makes preview builds of upcoming O.S. releases available to users and developers for feedback, for testing software compatibility, and for building with new APIs. This is voluntary and done with the understanding that the software is unfinished, and bugs — even serious ones — can be expected.
Apple has also, rarely, applied the “beta” label to features in regular releases which are distributed to all users, not just those who signed up. This type of “beta” seems less honest. Instead of communicating this feature is a work in progress, it seems to say we are releasing this before it is done. Maybe that is a subtle distinction, but it is there. One type of beta is testing; the other type asks users to disregard their expectations of polish, quality, and functionality so that a feature can be pushed out earlier than it should.
We have seen this on rare occasions: once with Portrait mode; more notably, with Siri. Mat Honan, writing for Gizmodo in December 2011:
Check out any of Apple’s ads for the iPhone 4S. They’re promoting Siri so hard you’d be forgiven for thinking Siri is the new CEO of Apple. And it’s not just that first wave of TV ads, a recent email Apple sent out urges you to “Give the phone that everyone’s talking about. And talking to.” It promises “Siri: The intelligent assistant you can ask to make calls, send texts, set reminders, and more.”
What those Apple ads fail to report — at all — is that Siri is very much a half-baked product. Siri is officially in beta. Go to Siri’s homepage on Apple.com, and you’ll even notice a little beta tag by the name.
This is familiar.
The ads for Siri gave the impression of great capability. It seemed like you could ask it how to tie a bowtie, what events were occurring in a town or city, and more. The response was not shown for these queries, but the implication was that Siri could respond. What became obvious to anyone who actually used Siri is that it would show web search results instead. But, hey, it was a “beta” — for two years.
The ads for Apple Intelligence do one better and show features still unreleased. The fine print does mention “some features and languages will be coming over the next year”, without acknowledging the very feature in this ad is one of them. And, when it does actually come out, it is still officially in “beta”, so I guess you should not expect it to work properly.
This all seems like a convoluted way to evade full responsibility for the Apple Intelligence experience which, so far, has been middling for me. Genmoji is kind of fun, but Notification Summaries are routinely wrong. Priority Messages in Mail is helpful when it correctly surfaces an important email, and annoying when it highlights spam. My favourite feature — in theory — is the Reduce Interruptions Focus mode, which is supposed to only show notifications when they are urgent or important. It is the kind of thing I have been begging for to deal with the overburdened notifications system. But, while it works pretty well sometimes, it is not dependable enough to rely on. It will sometimes prioritize scam messages written with a sense of urgency, but fail to notify me when my wife messages me a question. It still requires me to occasionally review the notifications suppressed by this Focus mode. It is helpful, but not consistently enough to be confidence-inspiring.
Will users frustrated by the questionable reliability of Apple Intelligence routinely return to try again? If my own experience with Siri is any guidance — and I am not sure it is, but it is all I have — I doubt it. If these features did not work on the first dozen attempts, why would they work any time after? This strategy, I think, teaches people to set their expectations low.
This beta-tinged rollout is not entirely without its merits. Apple is passively soliciting feedback within many of its Apple Intelligence features, at a scale far greater than it could by restricting testing to only its own staff and contractors. But it also means the public becomes unwitting testers. As with Siri before, Apple heavily markets this set of features as the defining characteristic of this generation of iPhones, yet we are all supposed to approach this as though we are helping Apple make sure its products are ready? Sorry, it does not work like that. Either something is shipping or it is not, and if it does not work properly, users will quickly learn not to trust it.
Wired has been publishing a series of predictions about the coming state of the world. Unsurprisingly, most concern artificial intelligence — how it might impact health, music, our choices, the climate, and more. It is an issue of the magazine Wired describes as its “annual trends briefing”, but it is also kind of like a hundred-page op-ed section. It is a mixed bag.
A.I. critic Gary Marcus contributed a short piece about what he sees as the pointlessness of generative A.I. — and it is weak. That is not necessarily because of any specific argument, but because of how unfocused it is despite its brevity. It opens with a short history of OpenAI’s models, with Marcus writing “Generative A.I. doesn’t actually work that well, and maybe it never will”. Thesis established, he begins building the case:
Fundamentally, the engine of generative AI is fill-in-the-blanks, or what I like to call “autocomplete on steroids.” Such systems are great at predicting what might sound good or plausible in a given context, but not at understanding at a deeper level what they are saying; an AI is constitutionally incapable of fact-checking its own work. This has led to massive problems with “hallucination,” in which the system asserts, without qualification, things that aren’t true, while inserting boneheaded errors on everything from arithmetic to science. As they say in the military: “frequently wrong, never in doubt.”
Systems that are frequently wrong and never in doubt make for fabulous demos, but are often lousy products in themselves. If 2023 was the year of AI hype, 2024 has been the year of AI disillusionment. Something that I argued in August 2023, to initial skepticism, has been felt more frequently: generative AI might turn out to be a dud. The profits aren’t there — estimates suggest that OpenAI’s 2024 operating loss may be $5 billion — and the valuation of more than $80 billion doesn’t line up with the lack of profits. Meanwhile, many customers seem disappointed with what they can actually do with ChatGPT, relative to the extraordinarily high initial expectations that had become commonplace.
Marcus’ financial figures here are bizarre and incorrect. He quotes a Yahoo News-syndicated copy of a PC Gamer article, which references a Windows Central repackaging of a paywalled report by the Information — a wholly unnecessary game of telephone when the New York Times obtained financial documents with the same conclusion, and which were confirmed by CNBC. The summary of that Information article — that OpenAI “may run out of cash in 12 months, unless they raise more [money]”, as Marcus wrote on X — is somewhat irrelevant now after OpenAI proceeded to raise $6.6 billion at a staggering valuation of $157 billion.
I will leave analysis of these financials to MBA types. Maybe OpenAI is like Amazon, which took eight years to turn its first profit, or Uber, which took fourteen years. Maybe it is unlike either and there is no way to make this enterprise profitable.
None of that actually matters, though, when considering Marcus’ actual argument. He posits that OpenAI is financially unsound as-is, and that Meta’s language models are free. Unless OpenAI “come outs [sic] with some major advance worthy of the name of GPT-5 before the end of 2025”, the company will be in a perilous state and, “since it is the poster child for the whole field, the entire thing may well soon go bust”. But hold on: we have gone from ChatGPT disappointing “many customers” — no citation provided — to the entire concept of generative A.I. being a dead end. None of this adds up.
The most obvious problem is that generative A.I. is not just ChatGPT or other similar chat bots; it is an entire genre of features. I wrote earlier this month about some of the features I use regularly, like Generative Remove in Adobe Lightroom Classic. As far as I know, this is no different than something like OpenAI’s Dall-E in concept: it has been trained on a large library of images to generate something new. Instead of responding to a text-based prompt, it predicts how it should replicate textures and objects in an arbitrary image. It is far from perfect, but it is dramatically better than the healing brush tool before it, and clone stamping before that.
There are other examples of generative A.I. as features of creative tools. It can extend images and replace backgrounds pretty well. The technology may be mediocre at making video on its own terms, but it is capable of improving the quality of interpolated slow motion. In the technology industry, it is good at helping developers debug their work and generate new code.
Yet, if you take Marcus at his word, these things and everything else generative A.I. can do “might turn out to be a dud”. Why? Marcus does not say. He does, however, keep underscoring how shaky he finds OpenAI’s business situation. But this Wired article is ostensibly about generative A.I.’s usefulness — or, in Marcus’ framing, its lack thereof — which is completely irrelevant to this one company’s financials. Unless, that is, you believe the reason OpenAI will lose five billion dollars this year is because people are unhappy with it, which is not the case. It simply costs a fortune to train and run.
The one thing Marcus keeps coming back to is the lack of a “moat” around generative A.I., which is not an original position. Even if this is true, I do not see this as evidence of a generative A.I. bubble bursting — at least, not in the sense of how many products it is included in or what capabilities it will be trained on.
What this looks like, to me, is commoditization. If there is a financial bubble, this might mean it bursts, but it does not mean the field is wiped out. Adobe is not disappearing; neither are Google or Meta or Microsoft. While I have doubts about whether chat-like interfaces will continue to be a way we interact with generative A.I., it continues to find relevance in things many of us do every day.
A little over two years after OpenAI released ChatGPT upon the world, and about four years since Dall-E, the company’s toolset now — “finally” — makes it possible to generate video. Sora, as it is called, is not the first generative video tool to be released to the public; there are already offerings from Hotshot, Luma, Runway, and Tencent. OpenAI’s is the highest-profile so far, though: the one many people will use, and the products of which we will likely all be exposed to.
A generator of video is naturally best seen demonstrated in that format, and I think Marques Brownlee’s preview is a good place to start. The results are, as I wrote in February when Sora was first shown, undeniably impressive. No matter how complicated my views about generative A.I. — and I will get there — it is bewildering that a computer can, in a matter of seconds, transform noise into a convincing ten-second clip depicting whatever was typed into a text box. It can transform still images into video, too.
It is hard to see this as anything other than extraordinary. Enough has been written by now about “any sufficiently advanced technology [being] indistinguishable from magic” to bore, but this truly captures it in a Penn & Teller kind of way: knowing how it works only makes it somehow more incredible. Feed computers video on a vast scale — labelled partly by people, and partly by automated means which are reliant on this exact same training process — and they can average it into entirely new video that often appears plausible.1 I am basing my assessment on the results generated by others because Sora requires a paid OpenAI account, and because there is currently a waiting list.
There are, of course, limitations of both technology and policy. Sora has problems with physics, the placement of objects in space, and consistency between and within shots. Sora does not generate audio, even though OpenAI has the capability. Prompts in text and images are checked for copyright violations, public figures’ likenesses, criminal usage, and so forth. But there are no meaningful restrictions on the video itself. This is not how things must be; this is a design decision.
I keep thinking about the differences between A.I. features and A.I. products. I use very few A.I. products; an open-ended image generator, for example, is technically interesting but not very useful to me. Unlike a crop of Substack writers, I do not think pretending to have commissioned art lends me any credibility. But I now use A.I. features on a regular basis, in part because so many things are now “A.I. features” in name and by seemingly no other quality. Generative Remove in Adobe Lightroom Classic, for example, has become a terrific part of my creative workflow. There are edits I sometimes want to make which, if not for this feature, would require vastly more time which, depending on the job, I may not have. It is an image generator just like Dall-E or Stable Diffusion, but it is limited by design.
Adobe is not taking a principled stance; Photoshop contains a text-based image generator which, I think, does not benefit from being so open-ended. It would, for me, be improved if its functionality were integrated into more specific tools; for example, the crop tool could also allow generative reframing.
Sora, like ChatGPT and Dall-E, is an A.I. product. But I would find its capabilities more useful and compelling if they were a feature within a broader video editing environment. Its existence implies a set of tools which could benefit a video editor’s workflow. For example, the object removal and tracking features in Premiere Pro feel more useful to me than its ability to generate b-roll, which just seems like a crappy excuse to avoid buying stock footage or paying for a second unit.
Limiting generative A.I. in this manner would also make its products more grounded in reality and less likely to be abused. It would also mean withholding capabilities. Clearly, there are some people who see a demonstration of the power of generative A.I. as a worthwhile endeavour unto itself. As a science experiment, I get it, but I do not think these open-ended tools should be publicly available. Alas, that is not the future venture capitalists, and shareholders, and — I guess — the creators of these products have decided is best for us.
We are now living in a world of slop, and we have been for some time. It began as infinite reams of text-based slop intended to be surfaced in search results. It became image-based slop which paired perfectly with Facebook’s pivot to TikTok-like recommendations. Image slop and audio slop came together to produce image slideshow slop dumped into the pipelines of Instagram Reels, TikTok, and YouTube Shorts. Brace yourselves for a torrent of video slop about pyramids and the Bermuda Triangle. None of these were made using Sora, as far as I know; at least some were generated by Hailuo from Minimax. I had to dig a little bit for these examples, but not too much, and it is only going to get worse.
Much has been written about how all this generative stuff has the capability of manipulating reality — and rightfully so. It lends credence to lies, and its mere existence can cause unwarranted doubt. But there is another problem: all of this makes our world a little bit worse because it is cheap to produce in volume. We are on the receiving end of a bullshit industry, and the toolmakers see no reason to slow it down. Every big platform — including the web itself — is full of this stuff, and it is worse for all of us. Cynicism aside, I cannot imagine the leadership at Google or Meta actually enjoys using their own products as they wade through generated garbage.
This is hitting each of us in similar ways. If you use a computer that is connected to the internet, you are likely running into A.I.-generated stuff all the time, perhaps without being fully aware of it. The recipe you followed, the repair guide you found, the code you copy-and-pasted, and the images in the video you watched? Any of them could have been generated in a data farm somewhere. I do not think that is inherently bad, though it is an uncertain feeling.
I am part of the millennial generation. I grew up at a time in which we were told we were experiencing something brand new in world history. The internet allowed anyone to publish anything, and it was impossible to verify this new flood of information. We were taught to think critically and be cautious, since we never knew who created anything. Now we have a different problem: we are unsure what created anything.
If you do not think about why, it is interesting that generative A.I. has no problem creating realistic-seeming text as text, but struggles when the text is part of an image. With a little knowledge about how these things work, though, that makes sense. ↥︎
Spencer Ackerman has been a national security reporter for over twenty years, and was partially responsible for the Guardian’s coverage of NSA documents leaked by Edward Snowden. He has good reason to be skeptical of privacy claims in general, and his experience updating his iPhone made him worried:
Recently, I installed Apple’s iOS 18.1 update. Shame on me for not realizing sooner that I should be checking app permissions for Siri — which I had thought I disabled as soon as I bought my device — but after installing it, I noticed this update appeared to change Siri’s defaults.
Apple has a history of changing preferences and of dark patterns. This is particularly relevant in the case of the iOS 18.1 update because it was the one with Apple Intelligence, which creates new ambiguity between what is happening on-device and what goes to a server farm somewhere.
While easy tasks are handled by their on-device models, Apple’s cloud is used for what I’d call moderate-difficulty work: summarizing long emails, generating patches for Photos’ Clean Up feature, or refining prose in response to a prompt in Writing Tools. In my testing, Clean Up works quite well, while the other server-driven features are what you’d expect from a medium-sized model: nothing impressive.
Users shouldn’t need to care whether a task is completed locally or not, so each feature just quietly uses the backend that Apple feels is appropriate. The relative performance of these two systems over time will probably lead to some features being moved from cloud to device, or vice versa.
It would be nice if it truly did not matter — and, for many users, the blurry line between the two is probably fine. Private Cloud Compute seems to be trustworthy. But I fully appreciate Ackerman’s worries. Someone in his position necessarily must understand what is being stored and processed in which context.
However, Ackerman appears to have interpreted this setting change incorrectly:
I was alarmed to see that even my secure communications apps, like Proton and Signal, were toggled by default to “Learn from this App” and enable some subsidiary functions. I had to swipe them all off.
This setting was, to Ackerman, evidence of Apple “uploading your data to its new cloud-based AI project”, which is a reasonable assumption at a glance. Apple, like every technology company in the past two years, has decided to loudly market everything as being connected to its broader A.I. strategy. In launching these features in a piecemeal manner, though, it is not clear to a layperson which parts of iOS are related to Apple Intelligence, let alone where those interactions are taking place.
However, this particular setting is nearly three years old and unrelated to Apple Intelligence. It controls the Siri Suggestions which appear throughout the system. For example, the widget stack on my home screen suggests my alarm clock app when I charge my iPhone at night. It suggests I open the Microsoft Authenticator app on weekday mornings. When I do not answer the phone for what is clearly a scammer, it suggests I return the missed call. It is not all going to be gold.
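For context, these predictions are fed by apps declaring their capabilities to the system. A minimal sketch of what that looks like with the App Intents framework — the journaling intent here is a made-up example, not any real app’s code:

```swift
import AppIntents

// A made-up journaling intent. Declaring it lets the system learn when
// a user tends to take this action and offer it as a Siri Suggestion.
struct StartJournalEntryIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Journal Entry"

    // Discoverable intents may be predicted and suggested by the system.
    static var isDiscoverable: Bool = true

    func perform() async throws -> some IntentResult {
        // A real app would open its journal composer here.
        return .result()
    }
}
```

Declaring an intent is all it takes for the system to start correlating it with time, place, and habit, which is precisely why a user might reasonably wonder where that learning happens and what leaves the device.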
Even at the time of its launch, its wording had the potential for confusion — something Apple has not clarified within the Settings app in the intervening years — and it seems to have been enabled by default. While this data may play a role in establishing the “personal context” Apple talks about — both are part of the App Intents framework — I do not believe it is used to train off-device Apple Intelligence models. However, Apple says this data may leave the device:
Your personal information — which is encrypted and remains private — stays up to date across all your devices where you’re signed in to the same Apple Account. As Siri learns about you on one device, your experience with Siri is improved on your other devices. If you don’t want Siri personalization to update across your devices, you can disable Siri in iCloud settings. See Keep what Siri knows about you up to date on your Apple devices.
While I believe Ackerman is incorrect about the setting’s function and how Apple handles its data, I can see how he interpreted it that way. The company is aggressively marketing Apple Intelligence, even though it is entirely unclear which parts of it are available, how it is integrated throughout the company’s operating systems, and which parts are dependent on off-site processing. There are people who really care about these details, and they should be able to get answers to these questions.
All of this stuff may seem wonderful and novel to Apple and, likely, many millions of users. But there are others who have reasonable concerns. Like any new technology, there are questions which can only be answered by those who created it. Only Apple is able to clear up the uncertainty around Apple Intelligence, and I believe it should. A cynical explanation is that this ambiguity is all deliberate because Apple’s A.I. approach is so much slower than its competitors’ and, so, it is disincentivized from setting clear boundaries. That is possible, but there is plenty of trust to be gained by being upfront now. Americans polled by Pew Research and Gallup have concerns about these technologies. Apple has repeatedly emphasized its privacy bona fides. But these features remain mysterious and suspicious for many people regardless of how much a giant corporation swears it delivers “stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency”.
All of that is nice, I am sure. Perhaps someone at Apple can start the trust-building by clarifying what the Siri switch does in the Settings app, though.
Brendan Nystedt, reporting for Wired on a new generation of admirers of crappy digital cameras from the early 2000s:
For those seeking to experiment with their photography, there’s an appeal to using a cheap, old digital model they can shoot with until it stops working. The results are often imperfect, but since the camera is digital, a photographer can mess around and get instant gratification. And for everyone in the vintage digital movement, the fact that the images from these old digicams are worse than those from a smartphone is a feature, not a bug.
Retromania? Not really. It feels more like a backlash against the excessive perfection of modern cameras, algorithms, and homogenized modern image-making. I don’t disagree — you don’t have to do much to come up with a great-looking photo these days. It seems we all want to rebel against the artistic choices of algorithms and machines — whether it is photos or Spotify’s algorithmic playlists versus manually crafted mixtapes.
I agree, though I do not see why we need to find just one cause — an artistic decision, a retro quality, an aesthetic trend, a rejection of perfection — when it could be driven by any number of these factors. Nailing down exactly which of these is the most important factor is not of particular interest to me; certainly, not nearly as much as understanding that people, as a general rule, value feeling.
I have written about this before and it is something I wish to emphasize repeatedly: efficiency and clarity are necessary elements, but are not the goal. There needs to be space for how things feel. I wrote this as it relates to cooking and cars and onscreen buttons, and it is still something worth pursuing each and every time we create anything.
I thought about this while reading these two articles, but it first came to mind last week when Wil Shipley announced the end of Delicious Library:
Amazon has shut off the feed that allowed Delicious Library to look up items, unfortunately limiting the app to what users already have (or enter manually).
I wasn’t contacted about this.
I’ve pulled it from the Mac App Store and shut down the website so nobody accidentally buys a non-functional app.
Delicious Library was many things: physical and digital asset management software, a kind of personal library, and a wish list. But it was also — improbably — fun. Little about cataloguing your CDs and books sounds like it ought to be enjoyable, but Shipley and Mike Matas made it feel like something you wanted to do. You wanted to scan items with your Mac’s webcam just because it felt neat. You wanted to see all your media on a digital wooden shelf, if for no other reason than it made those items feel as real onscreen as they are in your hands.
Delicious Library became known as the progenitor of the “delicious generation” of applications, which prioritized visual appeal as much as utility. It was not enough for an app to be functional; it needed to look and feel special. The Human Interface Guidelines were just that: guidelines. One quality of this era was the apparently fastidious approach to every pixel. Another quality is that these applications often had limited features, but were so much fun to use that it was possible to overlook their restrictions.
I do not need to relitigate the subsequent years of visual interfaces going too far, then being reeled in, and then settling in an odd middle ground where I am now staring at an application window with monochrome line-based toolbar icons, deadpan typography, and glassy textures, throwing a heavy drop shadow. None of the specifics matter much. All I care about is how these things feel to look at and to use, something which can be achieved regardless of how attached you are to complex illustrations or simple line work. Like many people, I spend hours a day staring at pixels. Which parts of that are making my heart as happy as my brain? Which mundane tasks are made joyful?
This is not solely a question of software; it has relevance in our physical environment, too, especially as seemingly every little thing in our world is becoming a computer. But it can start with pixels on a screen. We can draw anything on them; why not draw something with feeling? I am not sure we achieve that through strict adherence to perfection in design systems and structures.
I am reluctant to place too much trust in my incomplete understanding of a foreign-to-me concept rooted in another country’s very particular culture, but perhaps the sabi is speaking loudest to me. Our digital interfaces never achieve a patina; in fact, the opposite is more often true: updates seem to erase the passage of time. It is all perpetually new. Is it any wonder so many of us ache for things which seem to freeze the passage of time in a slightly hazier form?
I am not sure how anyone would go about making software feel broken-in, like a well-worn pair of jeans or a lounge chair. Perhaps that is an unattainable goal for something on a screen; perhaps we never really get comfortable with even our most favourite applications. I hope not. It would be a shame if we lose that quality as software eats our world.
The proposed breakup floated in a 23-page document filed late Wednesday by the U.S. Department of Justice calls for sweeping punishments that would include a sale of Google’s industry-leading Chrome web browser and impose restrictions to prevent Android from favoring its own search engine.
[…]
Although regulators stopped short of demanding Google sell Android too, they asserted the judge should make it clear the company could still be required to divest its smartphone operating system if its oversight committee continues to see evidence of misconduct.
In addition to requiring that Chrome be divested, the proposal calls for several other major changes that would be enforced over a 10-year period. They include:
Blocking Google from making deals like the one it has with Apple to be its default search engine.
Requiring it to let device manufacturers show users a “choice screen” with multiple search engine options on it.
Licensing data about search queries, results, and what users click on to rivals.
Blocking Google from buying or investing in advertising or search companies, including makers of AI chatbots. (Google agreed to invest up to $2 billion into Anthropic last year.)
The full proposal (PDF) is a pretty easy read. One of the weirder ideas pitched by the Colorado side is to have Google “fund a nationwide advertising and education program” which may, among other things, “include reasonable, short-term incentive payments to users” who pick a non-Google search engine from the choice screen.
I am guessing that is not going to happen, and not just because “Plaintiff United States and its Co-Plaintiff States do not join in proposing these remedies”. In fact, much of this wish list seems unlikely to be part of the final judgement expected next summer — in part because it is extensive, in part because of politics, and also because it seems unrelated.
“DOJ will face substantial headwinds with this remedy,” because Chrome can run search engines other than Google, said Gus Hurwitz, senior fellow and academic director at University of Pennsylvania Carey Law School. “Courts expect any remedy to have a causal connection to the underlying antitrust concern. Divesting Chrome does absolutely nothing to address this concern.”
I — an effectively random Canadian with no expertise in this and, so, you should take my perspective with appropriate caveats — disagree.
The objective of disentangling Chrome from Google’s ownership, according to the executive summary (PDF) produced by the Department of Justice, is to remove “a significant challenge to effectuate a remedy that aims to ‘unfetter [these] market[s] from anticompetitive conduct’”:
A successful remedy requires that Google: stop third-party payments that exclude rivals by advantaging Google and discouraging procompetitive partnerships that would offer entrants access to efficient and effective distribution; disclose data sufficient to level the scale-based playing field it has illegally slanted, including, at the outset, licensing syndicated search results that provide potential competitors a chance to offer greater innovation and more effective competition; and reduce Google’s ability to control incentives across the broader ecosystem via ownership and control of products and data complementary to search.
The DOJ’s theory of growth reinforcing quality and market dominance is sound, from what I understand, and Google does advantage Chrome in some key ways. Most directly related to this case is whether Chrome activity is connected to Google Search. Despite company executives explicitly denying using Chrome browsing data for ranking, a leak earlier this year confirmed Google does, indeed, consider Chrome views in its rankings.
There is also a setting labelled “Make searches and browsing better”, which automatically “sends URLs of the pages you visit” to Google for users of Chromium-based browsers. Google says this allows the company to “predict what sites you might visit next and to show you additional info about the page you’re visiting” which allows users to “browse faster because content is proactively loaded”.
There is a good question as to how much Google Search would be impacted if Google could not own Chrome or operate its own browser for five years, as the remedy proposes. How much weight these features have in Google’s ranking system is something only Google knows. And the DOJ does not propose that Google Search cannot be preloaded in browsers whatsoever. Many users would probably still select Google as their browser’s search engine, too. But Google Search does benefit from Google’s ownership of Chrome itself, so perhaps it is worth putting barriers between the two.
I do not think Chrome can exist as a standalone company. I also do not think it makes sense for another company to own it, since any of those big enough to do so either have their own browsers — Apple’s Safari, Microsoft’s Edge — or would have the potential to create new anticompetitive problems, like if it were acquired by Meta.
What if the solution looks more like prohibiting Google from uniquely leveraging Chrome to benefit its other products? I do not know how that could be written in legal terms, but it appears to me this is one of the DOJ’s goals for separating Chrome and Google.