Jeremy Freed, of GQ, in an article with the provocative headline “You Won’t Be Able to Escape Smart Glasses in 2026”:
“If AI glasses are going to go mainstream, 2026 will be the year that we start to see that,” says Sinead Bovell, a futurist and the founder of tech education company, WAYE. Meta introduced its first line of Ray-Ban AI Glasses in 2021, and has sold more than 2 million pairs since launching the second generation in 2023. By the end of 2026, the company plans to sell 3 million more while ramping up production to 10 million pairs annually. As hard as it is to imagine 10 million people — the combined populations of NYC and Philly — buying Meta AI Glasses every year, it may well come to pass. “The iPhone came out in 2007 and by 2011 BlackBerry was still the number-one smartphone,” says Bovell. “The iPhone wasn’t seen as a phone, it was seen as a toy. The exact same things that were said about it in 2008 are being said now about [smart] glasses.” Likewise, no one knew they needed an Apple Watch when the product launched in 2015, but the company has reportedly sold hundreds of millions of them since then.
This article was published a day after people from, presumably, Meta or EssilorLuxottica told Bloomberg they were going to double production to 20 million units by the end of next year in response to overwhelming demand. So: from two million since 2023, three million this year alone, to twenty million next year — that is quite the forecasted sales curve. It is almost enough to make you think Meta has a hardware hit on its hands.
As for Bovell’s claim that the “exact same things that were said about it [the iPhone] in 2008 are being said now about [smart] glasses”, the problem is that there is a far better comparison given we have already had an example of smart glasses in Google Glass, and the same things were being said about those a decade ago. In 2012, CBC News quoted a researcher saying Glass is “the mainstreaming of this kind of device”. In 2013, Jessica Guynn, of the Los Angeles Times, wrote that they “may still be on the fringes of mainstream consciousness. But they are not going to stay there very long”. The following year, Paul Saffo lamented for CNN that while “[i]nfo-glasses today are like PCs in 1984 – they look cool but perform a few functions that aren’t all that useful, such as taking pictures or surfing the Web while sitting in a bar with friends” — yes, “they look cool” is presented as a factual statement — in the very near future from 2014 “we are certain to be astonished by the capabilities of the device sitting on the bridge of our nose”.
Well, it has been nearly twelve years since Saffo wrote that, and the killer capabilities of smart glasses remain based entirely around the camera. And you know what? It is a pretty good feature — but it alone is not as compelling today as a smartphone was in 2008. I do not think the question is whether smart glasses today are akin to smartphones in 2008; the question is more like: what is different about today’s smart glasses compared to Google Glass? To his credit, Freed attempts to answer this in noting that Meta’s familiar Wayfarer shape, instead of a sci-fi design, is an obvious upgrade, but I think he underplays the advancements in image, language, and speech detection since Google Glass by calling it “a Siri-like voice assistant”. GQ is not a technology publication, true, but that is among the biggest changes for a device so dependent on real-time interaction with the surrounding environment, as with translation features.
But there are problems with today’s smart glasses that remain unchanged from those that affected Google Glass. Most obviously, they are still a privacy nightmare for the wearer and for everyone around them. Meta says the externally-visible recording LED must not be obstructed for video to record, but people are modifying the glasses to remove that restriction. They must effectively be treated like spy glasses because they could be recording anywhere — from a public area running facial recognition software to the apparent privacy of a massage room.
Meta is far from the only company producing glasses like these. Snap has its Spectacles and Xiaomi’s A.I. Glasses are available in China. All of these companies are responsible for developing a selfish future that prioritizes selling buyers on the advantages of an unobtrusive camera while barely acknowledging the societal impact of the same. Google is taking another kick at the can, and rumours consistently indicate Apple and Samsung are each working on their own, too. They may all say the right things about privacy, but the fundamental fact is that a barely-visible camera is a tool for abuse as much as it is entertainment.
Freed:
Whether the possibilities presented by smart glasses sound fun and appealing or like the tipping point into a dystopian nightmare is a matter of perspective. There are the obvious doubts about what happens if someone hacks your glasses and what companies like Meta are planning to do with your data (spoiler alert: it’s being used to train AI), but these aren’t so different from existing concerns around other internet-enabled devices. “Every piece of technology ever created has been used for good and bad things,” says Edward R. McNicholas, a Partner at Ropes & Gray in Washington DC who leads the firm’s global data, privacy and cybersecurity practice. “Just think of the Internet itself — it helps bad actors, but it brings the globe together, creates enormous economic opportunity, and inspires millions.” What will ultimately decide the fate of smart glasses, he says, is regulatory friction — and cultural embrace. “That is, what’s the rizz? Do the 20-somethings deem it based or cringe?”
McNicholas was admitted to the Maryland bar in 1996. His career as a lawyer is at least Millennial-aged and, as a Millennial myself, I feel pretty confident in saying he cannot use “rizz”, “based”, or “cringe” like this. It is not, in fact, lit.
I find it difficult to believe it is a coincidence that two stories promoting Meta’s A.I. glasses appeared in the news the same week Meta laid off ten percent of its Reality Labs employees and reallocated funds to the team developing those glasses. I am sure these things have their defenders, and they may be more popular than Meta expected given the company’s long run of hardware flops. The relative success of the glasses means Meta can jettison its original messy concept of the metaverse and redefine it to suit its needs today.
But this does not feel like the nascent days of the iPhone, nor like we will not “be able to escape smart glasses” this year. I knew lots of people with smartphones in the mid-to-late-2000s, including some with original iPhones despite them not being available in Canada. Anecdotally, I do not personally know anyone who owns or is even thinking about buying smart glasses. Mind you, I know plenty of people with an Apple Watch today who did not consider it compelling even years after it launched. Maybe it is like the early days of smartwatch ownership, after all, and I simply do not notice because Meta’s glasses just look like Ray-Bans. That is, I guess, the whole point.
Despite the concept of smart glasses being the product of so much hype and excitement, it never seems to have materialized into something you can buy. Maybe that will change; maybe that has changed without me noticing it. But one of the other biggest shifts of the past ten years is how much people say they want more distance from technology. One of the predictions for 2026 in a list from the New York Times is the rise of the “dumb phone” as a status symbol. Some people who have tried smartwatches have found them more demanding over time than helpful; I stopped wearing one after four years because I need less technology in my life, not more. There is a vast gulf between what people say they want and their actual behaviour, of course, but I cannot shake the feeling this technology is still too much of an imposition. We will not need to “escape smart glasses” if people still choose not to buy them.
I made a mistake on Friday: instead of waiting to polish a more comprehensive article, I effectively live-blogged my shifting understanding of how StatCounter was collecting its iOS version number data by way of updates and edits to existing posts. In my own defence, I did not know the rate of users updating to iOS 26 would become as much of a story unto itself as it has. So allow me to straighten this out.
Here is the background: StatCounter publishes market share data by country and user technology based on statistics it collects from its web analytics package which, it says, is used by over a million websites totalling around five billion page views monthly. I have not heard of many of the sites using its analytics, but it seems to be a large enough and generic enough sample that it should be indicative — more so than, say, visitors to my audience-specific website. Ed Hardy, over at Cult of Mac, used StatCounter’s figures to report, on January 8, that “only about 15% of iPhone users have some version of the new operating system installed”. Hardy compared this to historical StatCounter figures showing a 63% adoption rate of iOS 18 by the same time last year, 54% on iOS 17 the year prior, and 62% on iOS 16 the year before that. If true, this would represent a catastrophic reluctance for iPhone users to update.
If true.
I do not think the iOS 26 uptake rate is about 15%. I think it is lower than the 54–63% range in previous years, but not by nearly that much. I think StatCounter has been misinterpreting iOS 26’s user base since last year because of a change Apple made to Safari.
If the phrase “user agent” does not make you respond by tipping your head to the side like my dog did when I asked him if he knew what I meant by that, you can skip this paragraph. A user agent string is a way for software to identify itself when it makes an HTTP request on behalf of a user. It might describe the type and version of a web browser and the operating system, and carry other information so that, in the old days, websites could check for compatibility. This leads to user agent strings that look a little silly:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 Edg/134.0.0.
This does not represent a Firefox user, despite starting with “Mozilla”, nor does it represent a Safari or Chrome user, despite the mentions of “Safari”, “Chrome”, and “AppleWebKit”. It is a user agent string for Microsoft Edge, which is begging to be treated like its competitors.
This is a simplified explanation, but it is important for how StatCounter works. When someone browses a website containing its analytics code, it reads the user agent string, and that is how StatCounter determines market share. The above user would be counted for Edge market share (“Edg/134.0.0”) and Windows. Which version of Windows? Well, while “NT 10.0” suggests it is Windows 10, it is also used by Edge running on Windows 11 — that part of the user agent string has been frozen. The Chromium team did the same thing and reduced the amount of specific information in the user agent string. This removes a method of fingerprinting and is generally fine.
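To make that concrete, here is a rough sketch of how an analytics package might bucket a user agent string. It is purely illustrative (StatCounter has not published its parsing rules, and the function is my own invention), but it captures the general idea: match the most specific browser token first, then read the OS token.

```python
import re

# A rough, illustrative sketch of user agent bucketing. StatCounter's
# actual rules are not public; this just shows the general approach.

def classify(ua: str) -> tuple[str, str]:
    """Guess a (browser, os) pair from a user agent string."""
    # Check browser tokens from most- to least-specific, since nearly
    # every modern UA string also claims "Mozilla", "Chrome", and "Safari".
    if "Edg/" in ua:
        browser = "Edge"
    elif "CriOS/" in ua or "Chrome/" in ua:
        browser = "Chrome"
    elif "Safari/" in ua:
        browser = "Safari"
    else:
        browser = "Other"

    if m := re.search(r"Windows NT (\d+\.\d+)", ua):
        # Frozen: "NT 10.0" covers both Windows 10 and Windows 11.
        os_name = f"Windows NT {m.group(1)}"
    elif m := re.search(r"iPhone OS (\d+)_", ua):
        os_name = f"iOS {m.group(1)}"
    else:
        os_name = "Other"
    return browser, os_name

edge_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
           "(KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 Edg/134.0.0")
print(classify(edge_ua))  # ('Edge', 'Windows NT 10.0')
```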
This movement was spearheaded by Apple in 2017, when Ricky Mondello announced that Safari Technology Preview 46 “freezes Safari’s user agent string. It will not change in the future”. But this remained a desktop-only change until September 2025, when Jen Simmons and others who work on WebKit announced that the version of Safari shipping in iOS 26 would have its user agent stuck on the previous version of iOS:
Also, now in Safari on iOS, iPadOS, and visionOS 26 the user agent string no longer lists the current version of the operating system. Safari 18.6 on iOS has a UA string of:
Mozilla/5.0 (iPhone; CPU iPhone OS 18_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.6 Mobile/15E148 Safari/604.1
And Safari 26.0 on iOS has a UA string of:
Mozilla/5.0 (iPhone; CPU iPhone OS 18_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/26.0 Mobile/15E148 Safari/604.1
Apple justified this change only by implication, writing “we highly recommend using feature detection instead of UA string detection when writing conditional code”. But, as Jeff Johnson points out, this change does not eliminate version detection entirely:
[…] because Safari is always inseparable from the OS, so it’s possible to derive the iOS version from the Safari version, which continues to be incremented in the User-Agent. On macOS, in contrast, the latest version of Safari typically supports the three latest major OS versions, so Safari 26 can be installed on macOS 15 Sequoia and macOS 14 Sonoma in addition to macOS 26 Tahoe, and therefore the User-Agent — which actually says “OS X 10_15_7”! — is a little more effective at obscuring the OS version.
I noticed this, too, and it led to a mistake I made in my first guess at understanding why StatCounter was reporting some iOS 26 traffic, but not a lot. I thought StatCounter could have made a change to its analytics package to interpret this part of the user agent string instead, but that it may not have rolled out to all of its users. I was wrong.
What actually appears to account for iOS 26’s seemingly pitiful adoption rate is that third-party browsers like Chrome and Brave produce a user agent string that looks like this, on my iPhone running iOS 26.3:
Mozilla/5.0 (iPhone; CPU iPhone OS 26_3_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/144.0.7559.53 Mobile/15E148 Safari/604.1
Safari, meanwhile, produces this user agent:
Mozilla/5.0 (iPhone; CPU iPhone OS 18_7 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/26.3 Mobile/15E148 Safari/604.1
“iPhone OS 26_3_0” on Chrome, but “iPhone OS 18_7” in Safari. And iOS 18.7 also exists; Safari on it produces a nearly identical user agent string, albeit with “Version/18.7” in place of “Version/26.3”. The operating system token is the same in both: “18_7”. StatCounter’s iOS 26 data is therefore not reflective of all iOS users — just those using third-party browsers that still report the current iOS version in their user agent string.
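For the curious, here is a small sketch of the miscount as I understand it, using the two user agent strings above. A parser that reads only the OS token, which is what StatCounter’s numbers suggest it does, would count my Safari as iOS 18; on iOS, where Safari’s version tracks the operating system’s, the “Version/” token Jeff Johnson described reveals the real major version. The function names are mine, not StatCounter’s.

```python
import re

# The two user agent strings from my iPhone running iOS 26.3.
chrome_ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 26_3_0 like Mac OS X) "
             "AppleWebKit/605.1.15 (KHTML, like Gecko) "
             "CriOS/144.0.7559.53 Mobile/15E148 Safari/604.1")
safari_ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 18_7 like Mac OS X) "
             "AppleWebKit/605.1.15 (KHTML, like Gecko) "
             "Version/26.3 Mobile/15E148 Safari/604.1")

def ios_from_os_token(ua: str) -> str:
    """Read the frozen OS token -- my guess at the source of the miscount."""
    return re.search(r"iPhone OS (\d+)_", ua).group(1)

def ios_from_safari_version(ua: str) -> str | None:
    """Derive the OS from the Safari version, which moves with it on iOS."""
    m = re.search(r"Version/(\d+)", ua)
    return m.group(1) if m else None  # None for third-party browsers

print(ios_from_os_token(chrome_ua))        # 26 -- counted correctly
print(ios_from_os_token(safari_ua))        # 18 -- the apparent miscount
print(ios_from_safari_version(safari_ua))  # 26 -- Johnson's derivation
```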
Even though third-party browsers are available on iOS, most users browse the web through Safari. And that means StatCounter is almost certainly counting the vast majority of people on iOS 26 as iOS 18.7 users. I retrieved those user agent strings using StatCounter’s detection utility, which is how it says you can validate the accuracy of its statistics. And it seems they are not. (I asked StatCounter to confirm this but have not heard back. Update: StatCounter’s CEO told me iOS 26 users had been miscounted.)
The actual rate of iOS 26 adoption is difficult to know right now. Web traffic to generalist websites, like the type collected by StatCounter, seems to me like it would be a good proxy had its measurement capabilities kept up with changes to iOS. Other sources, like TelemetryDeck, indicate a far higher market share — 55% as I am writing this — but its own stats reported nearly 78% adoption of iOS 18 at this time last year, also greater than StatCounter’s 63%, but not by as much. TelemetryDeck’s numbers are based on aggregate data from its in-app analytics product, so they should be more accurate, but that also depends on which apps integrate TelemetryDeck and who uses them. What we can see, though, is the difference between last year and this year at the same time, around 23 percentage points. For comparison, in January 2024, TelemetryDeck reported around 74% had updated to iOS 17 — iOS 26 is 19 points less.
If its reporting for this year is similarly representative, it likely indicates a 20-point slide in iOS 26 adoption. Not nearly as terrible as the misleading StatCounter dashboard suggests, but still a huge and embarrassing difference compared to prior years. Apple will likely update its own figures in the coming weeks for a further point of comparison. However, even though there are early indications iOS 26 is not as well-received as its predecessors, what we do not know is why that is. Fear not, however, for there are obvious conclusions to be drawn.
It’s not that millions of iPhone users around the world have somehow overlooked the launch of iOS 26 followed by iOS 26.1 and iOS 26.2. They are holding off installing the upgrades because this is Apple’s most controversial new version in many years. The reason: Liquid Glass — a translucent and fluid new interface. Many elements of the UI go semi-transparent, while clever effects make it seem like users are looking through glass at objects shown on the screen behind the Control Center and pop-up windows.
David Price, of Macworld, made the same assumption based on Hardy’s story — twice:
It’s debatable whether the egregious design of last year’s OS updates falls under the category of arrogance or incompetence; perhaps it’s both. But the takeaway for Apple should be that customer loyalty is finite, and there are consequences when you consistently lower your quality-control standards. When your entire business is built on people liking you, it’s best not to take them for granted.
I have no particular affinity for Liquid Glass. I am not sure its goals are well-conceived, and I do not think it achieves those objectives.
Even so, I think the aversion to Liquid Glass is so strong among some commentators that erroneous stats are fine so long as they confirm existing biases. Put it this way: if just 15% of users had, indeed, upgraded to iOS 26, and the reason so many people remained on previous versions was Liquid Glass, surely a corrected percentage — perhaps 55%, perhaps lower — should indicate that most people are not actually bothered by Liquid Glass, right?
Yes, there is a likely 20-point gap and, if that is due to Liquid Glass, it should be cause for worry at the highest levels of Apple. iOS is a mass-market operating system. The audience is not necessarily obsessed with information density or an adequate contrast ratio. If a redesign of iOS were exciting, people would have raced to update, just as they did when iOS 7 was launched. They instead appear hesitant. Maybe the reason is Liquid Glass, or maybe something else. Or maybe there are further measurement errors.
Whatever the case, I would avoid believing articles making sweeping conclusions based on a single data point. After all, if that number is shown to be incorrect, it destabilizes the whole argument.
I am not sure which was the first of Google’s A.I. ads I saw that made me wonder if I was suffering from a fever. Maybe it was the one where “pandas in high fashion outfits walk the runway” — shown over thirty million times since early November, as of writing — but it could just as easily have been the one with “some crazy fruit creatures”. Maybe it was the ad with the strutting dogs. All are equally bizarre in the way of a giant corporation convinced it is offering something quirky and weird.
In any case, each ends with the same encouragement: “make more videos with a Google A.I. Pro subscription” for $27 per month. And I have to ask: who is this for?
These are not the only ads Google is running for its A.I. features. It shows them off in context for general users in Pixel phone ads, for students in ads for NotebookLM, for corporate clients with Cloud A.I., and — of course — for advertisers.
It also is not the only ad for its A.I. Pro subscription bundle. In a different one, a string of A.I. tasks lets someone put together a ’90s trivia night while doing absolutely no work, leading to an invitation that twice tells recipients to wear platform shoes and “get ready”. The fine print might remind viewers to “check responses for accuracy” but, in the ad, the creator immediately clicks send on a terribly written invitation.
But it is those bizarre animal-themed ones which led to me looking up the advertising strategies of all the big A.I. players. I think this is a fascinating window. The ads they pay to show give us a glimpse of how they idealize the use of these products and services. While ads can sometimes be abstracted from the true function of a product or service, A.I. is already a new and confusing thing, so the narrative each company spins about its products seems telling of its own vision. I limited my search to ads in a typical video spot format — no more than 30 seconds long. I did not include case studies or company-produced tutorials.
OpenAI runs a couple of types of ads. Those for ChatGPT mostly show its use as a personal assistant. There are several 30-second spots shot as oners with a pleasingly retro warmth. One shows a training regimen for learning to do pull-ups; another is about siblings on a road trip. The same kind of messaging is used in a series of shorter spots. It also runs ads for its Codex agent, obviously targeted at developers, that are more clinical.
All of these seem practical to me. I could not find any current ads from OpenAI as disconnected from reality as Google’s. Just as notable is the focus of OpenAI’s spots — Google’s Ads Transparency Centre says the company is running about two hundred ads in Canada right now, most of which are variations in size, targeting, and placement of the shorter practical examples above, plus ads for Codex. For comparison, Google’s ads are all over the place. It is running around twenty thousand ads right now in Canada and, though not all of them are for A.I. features, many are, and you can tell from the examples above how much Google is just throwing stuff at the wall.
Anthropic’s ads are far more limited. All are for Claude, and all centre on a single video ad that gives no indication of how the product is being used. It simply says “Claude is A.I. for […] all of us, anywhere” over overhead shots of different scenes representing different professions. This is basically the same sentiment as OpenAI’s ads, but executed without any specificity or examples. The company’s YouTube channel has plenty of case studies and demos, but no similar video spots.
If Anthropic is trying to mimic OpenAI’s quiet confidence, Perplexity has chosen overt aggression. Quasi-influencer types follow a similar script saying ChatGPT makes things up, and that is why you should trust Perplexity as it “searches the entire internet in less than one second and gives you one verified answer”. This explanation avoids acknowledging how much Perplexity depends on external A.I. models, including OpenAI’s GPT. In two of the ads, the narrator asks health-related questions, which is the boldest and maybe most reckless use case I have seen in any A.I. ad. There is nothing wrong with the answers it has generated, so far as I can tell, but it seems like a higher-stakes use than, like, a cake recipe. In a variation of this ad type, the “influencer” looks generated, and the ad contains Google search result screenshots that are at least twelve years old. Whatever the case, Perplexity must be spending huge amounts on its advertising. Each of these has been shown tens of millions of times, and two have hundreds of millions of views.
Meta’s ads are bizarre. I could not find any video ads for A.I. stuff — or metaverse-related things — through Google’s Ads Transparency Centre, but the company has a handful of 30-second spots on its YouTube channel. Almost like Google, Meta does not seem to know why people would use its A.I. assistant, so it comes up with offbeat premises that fall flat. In one, described as a way to “help sharpen your skills and transform the way you learn and understand complex subjects”, a guy is meeting his partner’s dad, a thermodynamicist. What a real person who is good at conversations would do is ask the dad a bunch of questions. What Meta thinks you should do is ask its assistant those questions instead and develop a fake level of expertise.
Also, not for nothing, but when I asked the same question in the web version of Meta A.I., it responded not with the tight and efficient explanation in the ad, but with:
😂 Thermodynamics-ist? Sounds like a real party crusher, eh? 🤣 Okay, serious face… A thermodynamicist is like, someone who’s all about the study of heat, energy, and how they transform into each other. They’re like the ultimate energy nerds, but in a good way! 🤓 They work with stuff like engines, refrigeration, and even black holes (mind blown, right? 🤯). Basically, they help us understand how to harness and use energy efficiently. Pretty cool, eh?
Am I meant to be charmed by “okay, serious face…” and four emoji in this barely coherent response? Because I am not.
In another ad, a guy somehow turns a large pan of what appears to be milk and two other pans of mystery liquid into pancakes for his kid. The kid pushes the plate off the counter, the dad asks “how can I get my two-year-old to eat his breakfast?”, and Meta A.I. knows that the kid likes when he makes silly faces. It is unclear to me how it knows this — whether it is automatically personalized to this user, or something the dad needed to teach the system, or if it is just an assumption that all two-year-olds like silly faces. The impression I get, though, is that it is distinctly unbelievable, and I do not want to give Meta knowledge of my entire life to find out if I am right.
To round this out, I looked for similar ads from Microsoft and Apple. I scrolled through hundreds of Microsoft ads on Google’s Ads Transparency tool, virtually all of which were Xbox related. I also checked its YouTube channel. I did not see any ads of the same type as above. I also found only a couple of ads for Apple Intelligence on Apple’s YouTube channel, both of which are for Clean Up in Photos. Apple seems to have cleaned up its YouTube channel overall, removing a whole bunch of older ads including some for Apple Intelligence.
I do not want to overstate how much these ads tell us — they are ads, you know? — but I think I learned something from the way each of these businesses thinks of its own products. In OpenAI, I see confidence; in Anthropic and Perplexity, I see an attempt to catch up. And in Google and Meta, I see established companies that are desperate to prove themselves — particularly in Google’s case, as I still cannot understand why generating arbitrary video is supposed to be compelling to a broad audience.
In the most practical and grounded ads, what I do not see are significant leaps beyond what a search engine today could do. OpenAI’s ads show ChatGPT summarizing a workout plan, but there are loads of those on external websites. Guides to road tripping through the Blue Ridge Parkway are plentiful. The same is true of the responses in Perplexity’s ads. What I see most in these ads are the big “pure” A.I. players normalizing their raison d’être, and established mega corporations entirely out of touch with what someone might want to do. Both are embarrassing in their own way for what is often pitched as the most revolutionary technology since the internet.
2025 was a tough year for pretty much everyone I know, and I imagine the coming year will bring more of the same dreary news. I thought I would start off 2026 with something much dumber. I have plumbed the depths and found the bottom of the deepest of barrels, and that is how you are reading an “I asked ChatGPT” post.
There is a minor point here. I needed to use up a cup of milk and a quarter-cup of sour cream lurking in my fridge, and I wanted to make some kind of quick bread or loaf with it. The problem is that I am an inexperienced baker and I do not have a great sense of the proportion of other ingredients I would need.
I think this is the kind of thing a more advanced search engine could help with. I have previously used recipe finder tools like Supercook and RecipeRadar but, while they often ask what ingredients are available, they usually do not do so in specific quantities, necessitating looking through a bunch of recipes to find one that fits. Something that can process longer natural language queries seems tailor made for this kind of task.
I told it “I would like loaf/quick bread recipes that will use up 1 cup of milk and ¼ cup of sour cream”, and ChatGPT gave me four options. The first, a “classic vanilla quick bread”, looked adequate, but boring. (Also, a careful reader will quibble with its “optional add-ins (1–1½ cups total)” after which is a list of, among other things, cinnamon, and I do not think you should add a whole cup of cinnamon to anything.) I do like cinnamon, though, and I settled on the fourth suggestion, a “cinnamon swirl coffee bread”. Here is the entirety of its recipe:
Batter
2 cups flour
¾ cup sugar
1½ tsp baking powder
½ tsp baking soda
½ tsp salt
1 cup milk
¼ cup sour cream
⅓ cup oil
2 eggs
1 tsp vanilla
Swirl
⅓ cup brown sugar
1½ tsp cinnamon
Bake
Layer batter + swirl in pan, gently knife once
350°F for 50–60 min
The ingredient list, to my untrained eye, seems fine. The instructions are obviously incoherent. More comprehensive directions were offered for the “classic vanilla quick bread” above it, which I missed because I only followed this recipe. Since I made a different loaf recently, however, I had a rough idea of what I should do. Also, I made a couple of minor changes:
I substituted a few dashes of Fee Brothers’ cardamom bitters for the vanilla extract.
I had no ground cinnamon on hand and grating a cinnamon stick is tedious, so I stopped after about a teaspoon.
While these ingredient substitutions might affect the flavour, they would not materially affect the chemistry.
The resulting loaf is fine. I was hoping for either catastrophic failure or incredible success to more wholly justify this low-effort post, but it was just fine. Better than I expected, given where it came from, though the brown sugar swirl is achingly sweet and settled in the middle despite my best attempts. I still do not know what “gently knife once” is supposed to mean. I would not confuse this with a professional baker’s work, of course, but that is more like operator error. I wish the directions were, overall, clearer; if I had little to no previous experience baking a quick bread, I might have been lost.
I have experimented with ChatGPT and food before, particularly for weekly meal planning, and I have never been satisfied with its results. This, though, worked pretty well for me. I got to use up a couple of things in my fridge and made an okay dessert from it. Happy New Year.
The surprise departure of Alan Dye announced a week ago today provoked an outpouring of reactions both thoughtful and puerile. The general consensus seemed to be oh, hell yeah, with seemingly few lining up to defend Dye’s overseeing of Apple’s software design efforts. But something has been gnawing at me all week reading take after take, and I think it was captured perfectly by Jason Snell, of Six Colors, last week:
So. In the spirit of not making it personal, I think it’s hard to pile all of Apple’s software design missteps over the last few years at the feet of Alan Dye. He had support from other executives. He led a whole team of designers. Corporate initiatives and priorities can lead even the most well-meaning of people into places they end up regretting.
That said, Alan Dye has represented Apple’s design team in the same way that Jony Ive did ever since Jony took over software design. He was the public face of Liquid Glass. He has been a frequent target of criticism, some of it quite personal, all coming from the perspective that Apple’s design output, especially on the software side, has been seriously lacking for a while now.
This nuanced and careful reaction, published shortly after Dye’s departure was announced, holds up and is the thing I keep coming back to. Snell expanded on these comments on the latest episode of Upgrade with Myke Hurley. I think it is a good discussion and well worth your time. (Thanks to Jack Wellborn for suggesting I listen.)
Cast your mind back to two days earlier, when Apple said John Giannandrea was retiring. Giannandrea, coming from running search and A.I. at Google, signalled to many that Apple was taking the future of Siri seriously. For whatever reason — insufficient support from Apple, conflicting goals, reassignments to questionable projects, or any number of other things — that did not pan out. Siri today works similarly to Siri eight years ago, before he joined the company, the launch of Apple Intelligence was fumbled, and the features rolled out so far do not feel like Apple products. Maybe none of this was the fault of Giannandrea, yet all of it was his responsibility.
It is difficult to know from the outside what impact Giannandrea’s retirement will have for the future of Siri or Apple Intelligence. Similarly, two days after that was announced, Dye said he was leaving, too, and Apple promoted Stephen Lemay to replace him, at least temporarily. From everything I have seen, people within Apple seem to love this promotion. However, it would be wrong to think Lemay is swooping in to save the day, both because that is an immense amount of pressure to put on someone who is probably already feeling it, and because the conditions that resulted in my least favourite design choices surely had agreement from plenty of other people at Apple.
While I am excited for the potential of a change in direction, I do not think this singlehandedly validates the perception of declining competence in Apple’s software design. It was Dye’s responsibility, to be sure, but it was not necessarily his fault. I do not mean that as an excuse, though I wish I did. The taste of those in charge undoubtedly shapes what is produced across the company. And, despite a tumultuous week at the top of Apple’s org chart, many of those people remain in charge. To Snell’s point of not personalizing things, and in the absence of a single mention of “design” on its leadership page, the current direction of Apple’s software should be thought of as a team effort. Whether one person should be granted the authority to transform the taste of the company’s leadership into a coherent, delightful, and usable visual language is a good question. Regardless, it will be their responsibility even if it is not their fault.
On November 20th American statisticians released the results of a survey. Buried in the data is a trend with implications for trillions of dollars of spending. Researchers at the Census Bureau ask firms if they have used artificial intelligence “in producing goods and services” in the past two weeks. Recently, we estimate, the employment-weighted share of Americans using AI at work has fallen by a percentage point, and now sits at 11% (see chart 1). Adoption has fallen sharply at the largest businesses, those employing over 250 people. Three years into the generative-AI wave, demand for the technology looks surprisingly flimsy.
[…]
Even unofficial surveys point to stagnating corporate adoption. Jon Hartley of Stanford University and colleagues found that in September 37% of Americans used generative AI at work, down from 46% in June. A tracker by Alex Bick of the Federal Reserve Bank of St Louis and colleagues revealed that, in August 2024, 12.1% of working-age adults used generative AI every day at work. A year later 12.6% did. Ramp, a fintech firm, finds that in early 2025 AI use soared at American firms to 40%, before levelling off. The growth in adoption really does seem to be slowing.
I am skeptical of the metrics used by the Economist to produce this summary, in part because they are all over the place, and also because they are mostly surveys. I am not sure people always know they are using a generative A.I. product, especially when those features are increasingly just part of the modern office software stack.
While the Economist has an unfortunate allergy to linking to its sources, I wanted to track them down because a fuller context is sometimes more revealing. I believe the U.S. Census data is the Business Trends and Outlook Survey, though I am not certain because its charts are just plain, non-interactive images. In any case, it is the Economist’s own estimate of falling — not stalling — adoption by workers, not an estimate produced by the Census Bureau, which is curious given two of its other sources indicate more of a plateau instead of a decline.
The Hartley, et al. survey is available here and contains some fascinating results other than the specific figures highlighted by the Economist — in particular, that the construction industry has the fourth-highest adoption of generative A.I., that Gemini is shown in Figure 9 as more popular than ChatGPT even though the text on page 7 indicates the opposite, and that the word “Microsoft” does not appear once in the entire document. I have some admittedly uninformed and amateur questions about its validity. At any rate, this is the only source the Economist cites which indicates a decline.
The data point attributed to the tracker operated by the Federal Reserve Bank of St. Louis is curious. The Economist notes “in August 2024, 12.1% of working-age adults used generative A.I. every day at work. A year later 12.6% did”, but I am looking at the dashboard right now, and it says the share using generative A.I. daily at work is 13.8%, not 12.6%. In the same time period, the share of people using it “at least once last week” jumped from 36.1% to 46.9%. I have no idea where that 12.6% number came from.
Finally, Ramp’s data is easy enough to find. Again, I have to wonder about the Economist’s selective presentation. If you switch the chart from an overall view to a sector-based view, you can see adoption of paid subscriptions has more than doubled in many industries compared to October last year. This is true even in “accommodation and food services”, where I have to imagine use cases are few and far between.
Tracking down the actual sources of the Economist’s data has left me skeptical of the premise of this article. However, plateauing interest — at least for now — makes sense to me on a gut level. There is a ceiling to work one can entrust to interns or entry-level employees, and that is approximately similar for many of today’s A.I. tools. There are also sector-level limits. Consider Ramp’s data showing high adoption in the tech and finance industries, with considerably less in sectors like healthcare and food services. (Curiously, Ramp says only 29% of the U.S. construction industry has a subscription to generative A.I. products, while Hartley, et al. says over 40% of the construction industry is using it.)
I commend any attempt to figure out how useful generative A.I. is in the real world. One of the problems with this industry right now is that its biggest purveyors are not public companies and, therefore, have fewer disclosure requirements. Like any company, they are incentivized to inflate their importance, but we have little understanding of how much they are exaggerating. If you want to hear some corporate gibberish, OpenAI interviewed executives at companies like Philips and Scania about their use of ChatGPT, but I do not know what I gleaned from either interview — something about experimentation and vague stuff about people being excited to use it, I suppose. It is not very compelling to me. I am not in the C-suite, though.
The biggest public A.I. firm is arguably Microsoft. It has rolled out Copilot to Windows and Office users around the world. Again, however, its press releases leave much to be desired. Levi Strauss employees, Microsoft says, “report the devices and operating system have led to significant improvements in speed, reliability and data handling, with features like the Copilot key helping reduce the time employees spend searching and free up more time for creating”. Sure. In another case study, Microsoft and Pantone brag about the integration of a colour palette generator that you can use with words instead of your eyes.
Microsoft has every incentive to pretend Copilot is a revolutionary technology. For people actually doing the work, however, its ever-nagging presence might be one of many nuisances getting in the way of the job that person actually knows how to do. A few months ago, the company replaced the familiar Office portal with a Copilot prompt box. It is still little more than a thing I need to bypass to get to my work.
All the stats and apparent enthusiasm about A.I. in the workplace are, as far as I can tell, a giant mess. A problem with this technology is that the ways in which it is revolutionary are often not very useful, its practical application in a work context is a mixed bag that depends on industry and role, and its hype encourages otherwise respectable organizations to suggest their proximity to its promised future.
The Economist being what it is, much of this article revolves around the insufficiently realized efficiency and productivity gains, and that is certainly something for business-minded people to think about. But there are more fundamental issues with generative A.I. to struggle with. It is a technology built on a shaky foundation. It shrinks the already-scant field of entry-level jobs. Its results are unpredictable and can validate harm. The list goes on, yet it is being loudly inserted into our SaaS-dominated world as a top-down mandate.
It turns out A.I. is not magic dust you can sprinkle on a workforce to double their productivity. CEOs might be thrilled by having all their email summarized, but the rest of us do not need that. We need things like better balance of work and real life, good benefits, and adequate compensation. Those are things a team leader cannot buy with a $25-per-month-per-seat ChatGPT business license.
You have seen Moraine Lake. Maybe it was on a postcard or in a travel brochure, or it was on Reddit, or in Windows Vista, or as part of a “Best of California” demo on Apple’s website. Perhaps you were doing laundry in Lucerne. But I am sure you have seen it somewhere.
Moraine Lake is not in California — or Switzerland, for that matter. It is right here in Alberta, between Banff and Lake Louise, and I have been lucky enough to visit many times. One time I was particularly lucky, in a way I only knew in hindsight. I am not sure the confluence of events occurring in October 2019 is likely to be repeated for me.
In 2019, the road up to the lake would be open to the public from May until about mid-October, though the closing day would depend on when it was safe to travel. This is one reason why so many pictures of it have only the faintest hint of snow capping the mountains behind — it is only really accessible in summer.
I am not sure why we decided to head up to Lake Louise and Moraine Lake that Saturday. Perhaps it was just an excuse to get out of the house. It was just a few days before the road was shut for the season.
We visited Lake Louise first and it was, you know, just fine. Then we headed to Moraine.
Walking from the car to the lakeshore, we could see its surface was that familiar blue-turquoise, but it was entirely frozen. I took a few images from the shore. Then we realized we could just walk on it, as did the handful of other people who were there. This is one of several photos I took from the surface of the lake, the glassy ice reflecting that famous mountain range in the background.
I am not sure I would be able to capture a similar image today. Banff and Lake Louise have received more visitors than ever in recent years, to the extent private vehicles are no longer allowed to travel up to Moraine Lake. A shuttle bus is now required. The lake also does not reliably freeze at an accessible time and, when it does, it can be covered in snow or the water line may have receded. I am not arguing this is an impossible image to create going forward. I just do not think I am likely to see it this way again.
On May 3, my wife was hit by a semi truck while driving to work.
What happened is simple. She was driving in the third lane of a four-lane one-way road, approaching some roadworks in lanes one and two. The truck driver, travelling in the second lane, moved into the third lane and “did not see” — a statement as unbelievable as it is redundant — the Golf with my wife inside. She noticed what was happening in her mirror and accelerated to try and create more space behind, but a collision could not be avoided. The truck driver hit the Golf on the driver’s side above the rear wheel, spinning the car into the truck’s path, then pushed it from the side until he stopped.
This was the photo my wife sent me after the crash. The most important outcome is that she was able to walk away unscathed. It was a slow impact, so the truck driver was able to drive away. But our Volkswagen Golf was damaged enough to be written off.
After it was towed to a body shop and the assessment was completed, our insurance company declared it a total loss and paid out $19,600 based on the price of comparable cars in our area. This is the point at which I learned auto insurance companies do not write off a car because it costs more to fix than to simply pay someone out. They do so after factoring in salvage value and room for error on the repair estimate.
Around the time we were paid, I found our ruined Golf on a salvage auction website with a damage estimate of $12,986.17. Factoring in the amount the insurance company paid us, it needed to sell for at least $6,614 — the payout minus the repair estimate — for the write-off to come out ahead of repairing it. I am not sure it did. While the final auction sale price was not made public — I am not sure why — the highest bid I saw was $2,400.
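The arithmetic, as a quick sketch; it ignores auction fees and any other transaction costs I know nothing about:

```python
payout = 19_600.00           # what the insurer paid us for the total loss
repair_estimate = 12_986.17  # the body shop's damage estimate

# Writing off the car costs the insurer the payout minus whatever the
# salvage auction recovers; repairing it costs the estimate plus the risk
# of hidden damage. The breakeven salvage price is the difference.
print(f"${payout - repair_estimate:,.2f}")  # $6,613.83 -- about $6,614
```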
Some people get excited for a write-off because it means they can go shopping. My wife and I dreaded it. We liked our Golf, but Volkswagen no longer imports the regular model to Canada — we only get the GTI and the Golf R, both of which are much more expensive. (Also, the eighth-generation Golf is not quite as nice as the one we had.) We wanted to keep our relatively small, relatively inexpensive hatchback. But auto manufacturers have spent the seven years since we bought our Golf shifting their inventory to SUVs and making everything more expensive.
We would have much preferred if the car could be repaired. Unfortunately, that would have required the insurance company to pay for repairs before knowing whether the car was truly repairable, taking on the risk of finding significant structural damage or similar. Perhaps it would have been unwise to drive the car after a crash like that, even with a successful repair. It is hard to know, and the incentives are not aligned with letting us find out. Regardless, it seems likely to me that any insurance company — not just ours — would decide whether to attempt a repair by prioritizing cost.
This is not too dissimilar to the right-to-repair movement in tech circles. Jason Koebler, of 404 Media, reported on Apple’s steep iPad parts pricing earlier this year:
Jonathan Strange, the founder of XiRepair, put together a spreadsheet of all the new parts and found that more than a third of the iPad parts Apple is now selling are not being sold at a price that is economically viable for independent repair shops. The way he calculated this was by taking the price of the part, adding in $85 for labor and a 10 percent profit margin for a repair shop. If the total repair cost was more than half the price of buying a totally new device, he considers it to be not economically viable.
Even if you do it yourself, parts are very expensive. A brand new 11-inch iPad Pro costs $1,400 in Canada. If you shatter its display, the Self-Service Repair Store bill will come to $945 plus tax for the display, adhesive, and tool kit rental. That is less than a full replacement, especially if you do not have a base model, but it is close enough to the cost of a new iPad to make it an appealing choice.
Fixing things should be the first choice over replacing them, and repairs should be made as inviting as possible. I do not want to think about replacing the battery on my MacBook Pro but, when it is inevitably exhausted, I wish the process were as simple as removing some screws and as inexpensive as battery replacements once were. I do not want my things to be repairable in theory; I would like them to be repairable as a priority.
That is what this article was supposed to be about. But I kept an eye on the used car market in Calgary because, after all, the old Golf was sold at auction, not destroyed. And, a few months later, it showed up for sale at a used car lot here.
Or, well, almost. The one I found has matching specs, a near-identical odometer reading, the same nick in the leather at the bottom of the steering wheel, and the same smudge on the rear passenger-side door where we poorly covered a ding in the door with a paint pen. But its VIN is listed as 3VWG17AU7JM281867, while our Golf’s VIN was 3VWG17AU6JM281867.
It turns out that single differing digit is the check digit. Our Golf’s VIN validates when entered into the NHTSA decoder. The one from the dealership? Nope.
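For anyone who wants to check my work: the ninth character of a North American VIN is computed from the other sixteen by transliterating letters to numbers, multiplying each position by a fixed weight, and taking the sum modulo 11. Here is a small sketch of that standard algorithm, and it agrees with the NHTSA decoder on both VINs:

```python
# A sketch of the standard North American VIN check-digit algorithm.
TRANSLITERATION = {
    **{str(d): d for d in range(10)},
    "A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8,
    "J": 1, "K": 2, "L": 3, "M": 4, "N": 5, "P": 7, "R": 9,
    "S": 2, "T": 3, "U": 4, "V": 5, "W": 6, "X": 7, "Y": 8, "Z": 9,
}
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def check_digit(vin: str) -> str:
    total = sum(TRANSLITERATION[ch] * w for ch, w in zip(vin, WEIGHTS))
    remainder = total % 11
    return "X" if remainder == 10 else str(remainder)

def is_valid(vin: str) -> bool:
    # The ninth character must equal the computed check digit.
    return len(vin) == 17 and check_digit(vin) == vin[8]

print(is_valid("3VWG17AU6JM281867"))  # True  -- our Golf
print(is_valid("3VWG17AU7JM281867"))  # False -- the dealership's listing
```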
Ultimately, the dealership listed the car for just shy of $17,000. I hope whoever bought it was made aware of its real history, and that the repairs are satisfactory. The best case is that a not particularly old car gets to keep working for years to come. If incentives were aligned with repairability, however, that car would still be ours, and we would know its provenance and how it was fixed.
There was a time, not too long ago, when the lifespan of a computer seemed predictable and pretty short.
This was partly due to performance gains year-over-year. Checking the minimum system requirements was standard routine for new software, and you could safely assume meeting those requirements would barely guarantee an acceptable experience. But computers would just get faster. Editing high definition video required a high-end computer and then, not too long after, it was something you could do on a consumer laptop. The same was true for all kinds of tasks.
Those rapid advancements were somewhat balanced by a slower pace of operating system releases. New major versions of Mac OS came out every couple-to-few years; the early days of Mac OS X were a flurry of successive updates, but they mellowed out to a pace more like once every two years. It was similar on the Windows side.
I remember replacing my mid-2007 MacBook Pro after it was just five years old, having already wheezed for at least a year while attempting even the simplest of things. On the other hand, the MacBook Pro I am using today was released four years ago and, keycaps aside, feels basically new. All the spec comparisons say it is far behind the latest generation, but those numbers are simply irrelevant to me. It is difficult for me to believe this computer has already been succeeded by several generations and is probably closer to obsolescence than it is to launch.
Apple has generally issued about five years’ worth of operating system upgrades for its Macs, followed by another three-ish years of security updates. Thanks to U.K. regulations, it has recently documented (PDF) this previously implicit policy. It is possible MacOS 27 could be the last version supported by this Mac. After all, Apple recently noted in developer documentation that MacOS 26 Tahoe is the last version with any Intel Mac support. Furthermore, in its press release for the M5 MacBook Pro, there is an entire section specifically addressing “M1 and Intel-based upgraders”.
I have begun feeling the consequences of rampant progress when I use my 27-inch iMac, stuck on MacOS Ventura. It is not slow and it is still very capable, but there are new apps that do not support its maximum operating system version. The prospect of upgrading has never felt less necessary based solely on its performance, yet more urgent.
My MacBook Pro supports all the new stuff. It is running the latest version of MacOS, and Apple Intelligence works just fine on it — or, at least, as fine as Apple Intelligence can run anywhere. Perhaps the requirements of advanced A.I. models have created the motivation for users to upgrade their hardware. That might be a tough sell in the current state of Apple’s first-party option, however.
Apple created this problem for itself, in a way. This MacBook Pro is so good I simply cannot think of a reason I would want to replace it. But Apple will, one day, end support for it, and it probably still will not feel slow or incapable. The churn will happen — I know it will. But the solution to this problem is also, of course, to Apple’s benefit; I will probably buy another one of these things. I hope to avoid it for a long time. I first need to replace that iMac.
A brief, throat-clearing caveat: while I had written most of this pre-launch, I was unable to complete it by the time Apple shipped its annual round of operating system updates. Real life, and all that. I have avoided reading reviews; aside from the excerpt I quoted from Dan Moren’s, I have seen almost nothing. Even so, I am sure something I have written below will overlap with something written by somebody else. Instead of trying to weed out any similarities, I have written this note. Some people hire editors.
The Name
Here is a question: does anybody know what we were supposed to call the visual interface design direction Apple has pursued since 2013? For a company that likes to assign distinct branding to everything it makes, it is conspicuous this distinct visual language never had a name.
To be fair, iOS’s system theme was not branded from when it was first shown in 2007. It inherited some of the Aqua qualities of Mac OS X, but with a twist: the backgrounds of toolbars were a muted blue instead of the saturated “Aqua” blue or any of Mac OS X’s window themes. The system font was Helvetica, not Lucida Grande. It was Aqua, but not exactly. Even after 2013’s iOS 7, a massive redesign intended, in part, to signal higher level changes, no name was granted. The redesign was the rebrand. The system spoke for itself.
In MacOS, meanwhile, the “Aqua” branding felt increasingly tenuous to me following the system-wide redesigns of Yosemite — which Craig Federighi said “continue[d] this evolution” — and Big Sur.
Now, then: Liquid Glass.
The name is remarkably apt — definitely Apple-y, and perfectly descriptive of how the material looks and feels. In most contexts, it looks like slightly deep and modestly bevelled glass in either clear or slightly frosted finishes. Unlike familiar translucent materials that basically just blur whatever is behind them, Liquid Glass distorts as though it is a lens. It even, in some cases, displays chromatic aberration around the edges.
And then you start to move it around, and things get kind of strange. It can morph and flex when tapped, almost like pushing a drop of mineral oil, and it glows and enlarges, too. When a button is near enough to another, the edges of the tapped button may melt into those of the proximate one. When you switch between views, buttons might re-form into different buttons in the same area.
That alone fulfills the “liquid” descriptor, but it is not the first time Apple has used the term. Since 2018, it has described high-resolution LCD displays with small bezels and non-zero corner radii — like the one on my MacBook Pro — as “Liquid Retina displays”. One might reasonably wonder if there is a connection. I consulted my finest Apple-to-English decoder ring and it appears Apple is emphasizing another defining characteristic of the Liquid Glass design language, which is that each part of the visual interface is, nominally, concentric with the bezel and corner radius of a device’s display. Am I reaching too hard? Is Apple? Who can say?
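If I am reading the concentricity idea correctly, the rule of thumb is simple enough to sketch: a nested element looks concentric when its corner radius is its container’s radius minus the padding between them. A toy illustration, with made-up numbers:

```python
# A toy sketch of the concentricity rule: an element nested inside a
# rounded container looks concentric when its own corner radius is the
# container's radius minus the padding between them, clamped at zero.

def concentric_radius(outer_radius: float, inset: float) -> float:
    return max(outer_radius - inset, 0.0)

# Hypothetical numbers: a 55 pt display corner radius and a toolbar
# inset 9 pt from the screen edge.
print(concentric_radius(55.0, 9.0))  # 46.0 pt
```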
Apple’s operating systems have shared a familial look-and-feel, but Liquid Glass is the first time they also share a distinct name and specific form. That seems to me like a significant development.
My experiences with Liquid Glass have been informed by using iOS 26 since June on my iPhone 15 Pro, and MacOS 26 Tahoe since July on my 14-inch MacBook Pro. For a redesign justified on its apparent uniformity across Apple’s product lineup, this is an admittedly narrow slice of use cases, and I am sure there will be dozens of other reviews from people more immersed in Apple’s ecosystem than I presently am. However, I also think they are the two platforms telling the most substantial parts of the Liquid Glass story. The Mac is Apple’s longest-running platform and users have certain specific expectations; the iPhone is by far Apple’s most popular product. I do not think my experience is as comprehensive as those with access to more of Apple’s hardware, but I do think these are the two platforms where Apple needs to get things right.
I also used both systems almost exclusively in light mode. I briefly tested them in dark mode, but I can only describe how the visual design has changed in my day-to-day use, and that is light mode all the time.
The Material
The first thing you should know about the Liquid Glass material is that it is not exactly VisionOS for the Apple products you actually own. Sure, Apple may say it was “[i]nspired by the depth and dimensionality of VisionOS”, but it has evolved well beyond that system’s frosted glassy slabs and slightly recessed text entry fields. This is something else entirely, and careful observers will note it is a visual language coming to all of Apple’s platforms this year except VisionOS. (Well, and the HomePod, if you want to be pedantic.)
The second thing you need to know is that it is visually newsworthy, but this material barely alters the fundamentals of using any of Apple’s devices, which makes sense. If you were creating a software update for billions of devices you, too, would probably think twice about radically changing all those environments. Things looking different will assuredly be a big enough change for some people.
The Liquid Glass material is most often used in container components — toolbars, buttons, menus, the MacOS Dock — but it is used for its own sake for the clock numerals on the Lock Screen. I think this will be its least controversial application. The Lock Screen clock is kind of useful, but also kind of decorative. (Also, this is unrelated to Liquid Glass, but I cannot find another place to put it: the default typeface for the clock on the Lock Screen now stretches vertically. I like the very tall numerals, and it is very cool that they compress vertically as notifications and Live Activities are added to your Lock Screen.) If you do not like the glassy texture on the clock, you can select a solid colour. Everyone can be happy.
That is not the case elsewhere or for other components.
Translucency has been a standard component in a visual interface designer’s toolbox for as long as alpha channels have been supported. It was already a defining characteristic of Apple’s operating systems and has been since the introduction of Aqua. Translucency helps reduce the weightiness of onscreen elements, and can be used to imply layering and a sense of transience. But there is a key problem: when something in a user interface is not entirely opaque, it is not possible to predict what will be behind it.
Obviously.
Less obvious is how a designer should solve for the legibility of things a translucent element may contain, particularly text. While those elements may have specific backgrounds of their own, like text used in a button, there is often also plain text, as in window titles and copy. Icons, too, may not have backgrounds, or may be quite small or thin. The integrity of these elements may not be maintained if they are not displayed with sufficient contrast.
An illustration of decreasing contrast as background opacity and colours change. The rectangles are white at 60% opacity.
However, the impression of translucency is usually at odds with legibility. Say you have a panel-type area containing some dark text. The highest contrast can be achieved by making the panel’s background white. There is no way to make this panel entirely white and have it be interpreted as translucent. As the opacity of the panel’s background drops, so does the contrast whenever it appears over anything else that is not itself entirely white.
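To make that trade-off concrete, here is a minimal sketch (my own arithmetic, nothing from Apple) that composites a white panel over a mid-tone background at decreasing opacities and measures the WCAG contrast ratio of near-black text against the result:

```swift
import Foundation

// An sRGB colour with channels in 0...1.
struct RGB { var r, g, b: Double }

// WCAG 2.x relative luminance of an sRGB colour.
func luminance(_ c: RGB) -> Double {
    func lin(_ v: Double) -> Double {
        v <= 0.03928 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4)
    }
    return 0.2126 * lin(c.r) + 0.7152 * lin(c.g) + 0.0722 * lin(c.b)
}

// Composite a panel colour at `alpha` over whatever happens to be behind it.
func composite(_ panel: RGB, over background: RGB, alpha: Double) -> RGB {
    RGB(r: alpha * panel.r + (1 - alpha) * background.r,
        g: alpha * panel.g + (1 - alpha) * background.g,
        b: alpha * panel.b + (1 - alpha) * background.b)
}

// WCAG contrast ratio between two colours.
func contrast(_ a: RGB, _ b: RGB) -> Double {
    let (la, lb) = (luminance(a), luminance(b))
    return (max(la, lb) + 0.05) / (min(la, lb) + 0.05)
}

let text = RGB(r: 0.1, g: 0.1, b: 0.1)     // near-black label text
let panel = RGB(r: 1, g: 1, b: 1)          // white panel
let behind = RGB(r: 0.35, g: 0.3, b: 0.45) // mid-tone content underneath

for alpha in [1.0, 0.8, 0.6, 0.4] {
    let surface = composite(panel, over: behind, alpha: alpha)
    print(alpha, contrast(text, surface))  // the ratio falls as alpha drops
}
```

Over this particular background, the ratio drops from about 17:1 at full opacity to about 6:1 at 40% opacity, heading toward the 4.5:1 minimum WCAG recommends for body text. A darker background drags it down further.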
So designers have lots of little tricks they can play. They can decorate the text with outlines, shadows, and glows, as Microsoft did for various elements in Windows Vista. This works but is not particularly elegant, especially for text longer than a few words. Designers also blur the background, a common trick used in every major operating system today, and they adjust the way the foreground inherits the tones and shades of the background.
Windows Vista. Notice the white glow behind the text in application title bars, and the blurred background elements. (Image from the OpenGL Pipeline Newsletter.)
The Liquid Glass texture is more complex than Apple’s background materials or Microsoft’s Acrylic. It warps and distorts at the edges, and the way it blurs background layers is less diffuse. There are inset highlights and shadows, too, and all of these effects sell the illusion of depth better. It is as much a contemporary engineering project as it is a reflection of the intent of Apple’s human interface designers, far more so than the raster-drawn interfaces of years past. That it is able to achieve such complex material properties in real time without noticeably impacting performance or, in my extremely passive observations, battery life is striking.
I do not think all these effects necessarily help legibility, which is as poor as it has ever been in translucent areas. The degree to which this is noticeable depends on the platform. In iOS 26, I find it less distracting, I think largely because it exists in the context of a single window at a time (picture-in-picture video being the sole exception). There is no expectation of overlapping active and inactive windows and, therefore, no chance that something overlapping content within a window could be confused with another window on top of it.
Legibility problems are also mitigated by how much is moving on the display at any time. Yes, there are times when I cannot clearly read a text label in an iOS tab bar or a menu item in MacOS, but as soon as I scroll, legibility becomes much less of an issue. I do not wish to minimize this; I think text labels should be legible in every situation. But it is better in use than the sense you might get from the still screenshots you have seen in this article and elsewhere.
Apple also tries to preserve legibility by automatically flipping the colour of the glass depending on the material behind it. When the glass is overtop a lighter-coloured area, the glass is light with dark text and icons; when it is on top of a darker area, the glass is dark with light-coloured text and icons. If Apple really wanted to improve the contrast of the toolbar, it would have done the opposite. These compensations do not trigger immediately so, when scrolling through a document containing a mix of lighter and darker areas, there is not as much flashing between the two states as you might expect. It is Apple’s clever solution to a problem Apple created.
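If you were implementing this flipping behaviour yourself, the core logic might look something like the sketch below, which is my guess at the mechanism and not Apple’s code. The invented gap between the two thresholds, a bit of hysteresis, is what would keep the glass from strobing between states while scrolling over mixed content:

```swift
// Which way the glass is currently compensating.
enum GlassAppearance { case light, dark }

struct AdaptiveGlass {
    private(set) var appearance: GlassAppearance = .light

    // Thresholds are invented for illustration. The band between them
    // means small luminance wobbles near the boundary change nothing.
    let goDarkBelow = 0.35
    let goLightAbove = 0.45

    mutating func update(backgroundLuminance l: Double) {
        switch appearance {
        case .light where l < goDarkBelow: appearance = .dark
        case .dark where l > goLightAbove: appearance = .light
        default: break
        }
    }
}

var glass = AdaptiveGlass()
glass.update(backgroundLuminance: 0.8) // bright photo: stays .light
glass.update(backgroundLuminance: 0.2) // dark region: flips to .dark
glass.update(backgroundLuminance: 0.4) // inside the band: no change
```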
With all the places the Liquid Glass texture has been applied in MacOS, you might believe it would make an appearance in the Menu Bar, too, since that has sported a translucent background since Leopard. But you would be wrong. In fact, in MacOS 26, the Menu Bar often has no background at all. The system decides whether the menu titles and icons should be shown in white or black based on the desktop picture, and then drops them right on top of it. Occasionally, it will show a gradient or shadow, sometimes localized to either side of the Menu Bar. As with the other uses of translucency, legibility has clearly been considered, and I have not had difficulty reading menu items — but, also like those other translucent elements, this would never be a problem if the Menu Bar had a solid background.
Here is the thing, though: Liquid Glass mostly — mostly — feels at home on the iPhone. Yes, Apple could have avoided legibility problems entirely by not being so enamoured of translucency, but it does have alluring characteristics. It is a very cool feeling of true dimensionality. It is also a more direct interpretation of the hardware on which these systems run, the vast majority of which have gloss-finish glassy screens. Glass onscreen feels like a natural extension of this. I get it. I do not love it, but I feel like I understand it on the iPhone far more than I do on my Mac.
The animations — the truly liquid-feeling part of this whole thing — are something better seen as they are quite difficult to explain. I will try but do not worry: there are visual aids coming. The buttons for tools float in glassy bubble containers in a layer overtop the application. Now imagine those buttons morphing into new bubbly buttons and toolbar areas as you move from one screen to another. When there are two buttons, they may become a unified blob on a different screen. For example, in the top-right of the Library section of the Music app, there is an account button, and a button labelled “⋯” which shows a menu containing only a single “Edit Sections” item. Tapping on “Playlists” transforms the two button shapes into a single elongated capsule enclosing three buttons. Tapping “Artists” condenses the two into a single sorting button. Tapping “Genres” simply makes the two buttons fade away as there are no buttons in the top-right of this section.
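For developers, this morphing arrives by way of new SwiftUI API Apple announced alongside the redesign: GlassEffectContainer, glassEffect, and glassEffectID. The sketch below is only my reading of how those pieces fit together, assuming the signatures Apple has documented, and shows two buttons that can melt into one another as one appears and disappears:

```swift
import SwiftUI

struct ToolbarMorphDemo: View {
    @State private var expanded = false
    @Namespace private var glassNamespace

    var body: some View {
        // Views inside the same container and namespace are candidates
        // for morphing into each other instead of simply cross-fading.
        GlassEffectContainer(spacing: 12) {
            HStack(spacing: 12) {
                Image(systemName: "person.circle")
                    .frame(width: 44, height: 44)
                    .glassEffect()
                    .glassEffectID("account", in: glassNamespace)

                if expanded {
                    // When this appears or disappears, its glass shape
                    // melts out of, or back into, its neighbour.
                    Image(systemName: "ellipsis")
                        .frame(width: 44, height: 44)
                        .glassEffect()
                        .glassEffectID("more", in: glassNamespace)
                }
            }
        }
        .onTapGesture {
            withAnimation { expanded.toggle() }
        }
    }
}
```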
Though these animations are not nearly as fluid as when they were first shown, they help justify the “liquid” part of the name, and Apple has enough pride in them to call them out in the press release. Their almost complete absence on MacOS is therefore notable. There are a handful of places they appear, like in Spotlight, but MacOS feels less committed to Liquid Glass as a result. When menus are summoned, they simply appear without any dramatic animation. Buttons and menus do not have the stretchy behaviour of their iOS counterparts. To be sure, I am confident those animations in MacOS would become tiresome in a matter of minutes. But if MacOS is better for being less consistent with iOS in this regard, that seems to me like a good argument against forcing cross-platform user interface unification.
The System
Strictly speaking, Liquid Glass describes only this material, but the redesign does not begin and end there. Apple has refreshed core elements across the entire system, from toolbars and buttons to toggle controls and the Dock. And, yes, application icons.
I have already written about the increasing conformity of app icons within MacOS, which has brought them into complete alignment with their iOS counterparts, down to the number of dots across the top of the Notes icon. If there are any differences in icons on MacOS and iOS for the same system app, they are insignificant. Regardless of how one may feel about this change — personally, aghast — they are changes made in concert with the Liquid Glass personality. Icons are no longer mere bitmap images at set sizes. They are now multilayer documents, and each layer can have distinct effects applied. The whole icon also appears to have a polished edge which, by default, falls on the upper-left and bottom-right, as though it is being lit from a 45° angle.
My Home Screen with icons tinted to match the background. When you select tinted icons, iOS provides the option of a colour picker.
If these icons have an advantage, it is that Apple is now allowing more user customization than ever. In addition to light and dark modes, application icons can now be displayed in clear and tinted states. Designers can supply purpose-built icons; for apps without an update, iOS converts the existing icon automatically, with mixed results. And, as with the other icon display modes, this also affects widgets on the Home Screen and icons across the system. Clear and tinted both look like frosted glass and have dimensional effects similar to other Liquid Glass elements, though one is — obviously — tinted. I can see this being a boon to people who use a photo as their wallpaper, though it comes at the expense of icon clarity and designer intention.
The party trick on the iPhone’s Home Screen is that each of these layers and the glassy edge respond to the physical orientation and movement of your device. Widgets and folders on the Home Screen also get that shine and they, too, respond to device movement. Sometimes. The shine on the App Library overview page does not respond to motion, but the app icons within a category do. App icons in Spotlight are not responsive to motion, either, nor are the buttons in the bottom corners of the Lock Screen. The clock on the Lock Screen responds, but the notifications just below it on the very same screen do not. This inconsistency feels like a bug, but I do not think it is. I do not love this effect; I simply think similar things should look and behave similarly.
One of the things Apple is particularly proud of is how the shapes of the visual interface now reflect the shapes in its hardware, particularly in how they neatly nest inside each other. Concentricity is nothing new to its industrial design language. The company has, for decades, designed devices with shapes that nestle comfortably within each other. Witness, for example, the different display materials on the iMac G4 and the position of the camera on the back of the original iPhone. It is not even new in Apple’s software: the rounded corners of application icons mimic the original iPhone’s round corners; the accessory pairing sheet is another example. But accentuating the roundedness of the display corners is now a systemwide mandate. Application windows, toolbars, sidebars, and other elements have been redrawn as concentric roundrects, perfectly seated inside each other and within the rounded rectangle of a modern Apple device’s display. Or, at least, that is the theory.
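For what it is worth, the geometry underneath all of this is simple: for an inner rounded rectangle inset evenly within an outer one to read as concentric, its corner radius must shrink by exactly the inset, since the two corners must share a centre point. A tiny sketch, with made-up numbers:

```swift
// Concentric roundrects share corner centres, so the inner radius is
// the outer radius minus the inset, clamped at zero.
func concentricRadius(outerRadius: Double, inset: Double) -> Double {
    max(0, outerRadius - inset)
}

// Hypothetical values for illustration only: a 55 pt display corner
// with a window inset 10 pt from the edge wants a 45 pt window corner;
// a toolbar inset another 8 pt wants 37 pt corners on its controls.
let windowRadius = concentricRadius(outerRadius: 55, inset: 10)           // 45.0
let controlRadius = concentricRadius(outerRadius: windowRadius, inset: 8) // 37.0
```

This is presumably also why the illusion collapses on a scaled display: the radii are specified in points, but the physical corner of the display does not change size.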
In reality, only some of Apple’s devices have displays with four rounded corners: Apple Watches, iPhones, and iPads. The displays of recent MacBook Airs and MacBook Pros have rounded corners at the top, but are squared-off at the bottom. Neither the iMac nor either of Apple’s external displays has rounded corners at all. Yet all of these devices have inherited the same bubbly design language with dramatically rounded application windows.
Perhaps I am taking this too literally. Then again, Apple is the one saying application windows are no longer “configured for rectangular displays”, and that they now fit the “rounded corners of modern hardware”. Regardless of the justification, I quite like the roundness of these windows. Perhaps it is simply the newness, but they make applications seem friendlier and softer. I understand why they are controversial; the large radius severely restricts what can be present in the corners, thus lowering the information density of an application window. It seems Apple agrees it is more appropriate in some apps than in others — app windows in System Information and Terminal have a much smaller corner radius.
Still, the application windows which do gain the more generously rounded corners are appreciably concentric to my MacBook Pro’s display corners at default scaling (1512 × 982) and at one tick scaled down (1800 × 1169). But at one tick scaled up (1352 × 878) the corners are no longer concentric to the display corners, and now feel overlarge and intrusive in the application area.
Even on a device with four rounded display corners, this dedication to concentricity is not always executed correctly. My iPhone 15 Pro, for example, has corners with a slightly smaller radius than an iPhone 16 Pro. The bottom corners of the share sheet on my device are cramped, nearly touching the edge of the display at their apex.
Then there are the issues caused by this dedication to concentricity. Look again at that Finder window screenshot above and pay attention to the buttons in the toolbar. In particular, notice how the icon in the item grouping button — the solitary one between the view switcher and the group that includes the sharing button — looks like it is touching the rounded edge.
Maps on iOS has a different kind of concentricity issue. When the search area is in a retracted state, the container around the search bar does not align with the left and right edges of the buttons above it, in a way that does not feel deliberate. I assume this is because it follows the curves of the display corners with an equal distance on all sides. When it is in an expanded state, it becomes wider than the buttons above it. At least — unlike the share sheet — its bottom corners are rounded correctly on my iPhone.
I could keep going with my nitpicks, so I shall. The way toolbars and their buttons are displayed on MacOS is, at best, something to get used to, though I have tried and failed. What was once a solid area for tools has, in many apps, become a gradient with floating buttons. The gradient is both a fill and a progressive blur, which I think is unattractive.
This area is not very tall, which means a significant amount of the document encroaches into its lower half. In light mode, the background of a toolbar is white. The backgrounds of toolbar buttons are also white. Buttons are differentiated by nothing more than a diffuse shadow. The sidebar is now a floating roundrect. The glyphs in sidebar items and toolbar buttons are near-black. The shapeless action buttons in Finder are grey. Some of these things were present in previous versions of MacOS, but the sum of this design language is the continued reduction of contrast in user interface elements to, I think, its detriment.
Apple justifies these decisions by saying its redesigned interfaces are “bringing greater focus to content”. I do not accept that explanation. Instead of placing tools in a distinct and separated area, they bleed into your document, thus gaining a similar level of importance as the document itself. I have nothing beyond my own experience to back this up. Perhaps Apple has user studies suggesting something different; if it does, I think it should publicly document its research. But, in my experience, the more the interface blends with what I am looking at, the less capable I am of ignoring it. Clarity and structure are sacrificed for the illusion of simplicity offered by a monochromatic haze of an interface.
Even if I bought that argument, I do not understand why it makes sense to make an application’s tools visually recede. While I am sometimes merely viewing a document, I am very often trying to do something to it. I want the most common actions I can take to be immediately obvious. For longtime Mac users, the structure of most apps has not changed, and muscle memory still works in familiar ones. But that is more like an excuse for why this redesign is not as bad as it could be, not a justification for why it is an improvement.
Then there are the window controls. The sidebar in an application is now depicted in a floating state which, Apple says, is “informed by the ambient environment within the app”, which is to say it reflects the colours of elements around it. This includes colours from outside the app which, a lot of the time, makes the sidebar look translucent to windows underneath it, defying all logic. It reflects nearby colours even if you enable the “Reduce Transparency” setting in Accessibility settings. And because the window controls are set inside this floating sidebar, it looks like they do something to the sidebar, not the application window.
Since the sidebar is now apparently overtop a window, stuff can be displayed underneath it. If you have seen any screenshots of this in action, it has probably been of the Music app, because few other applications do this — why would you want stuff under the sidebar?
Here is the Music app screenshot I am obliged to include.
In the Photos app, it reminds me of a floating palette like Apple used to ship in iPhoto and Aperture. Those palettes allowed you to edit a photo in full-screen on a large display, and you could hide and show the tools with a keystroke. A floating sidebar and a hard gradient of a toolbar is a distracting combination. Whatever benefit it is supposed to impart is lost on me.
Photos running on MacOS 26.
I expected Apple to justify this on the basis that it maintains context or something, but it does not. Its Human Interface Guidelines only say this is done to “reinforce the separation and floating appearance of the sidebar”, though this is not applied consistently. In a column view in Finder, for example, there is a hard vertical edge below the rounded corner of the ostensibly floating sidebar. I am sure there are legibility reasons to do this but, again, it is a solution to a problem Apple created. It reimagined sidebars as a floating thing because it looks cool, then realized it does not work so well with the best Finder layout and built a fairly unrefined workaround.
The bottom right corner of the sidebar in Finder has a hard edge that breaks the impression it is floating.
I am spending an awful lot of words on the MacOS version because I think it is the less successful of the two Liquid Glass implementations I have used. MacOS still works a lot like MacOS. But it looks and feels like someone dictated, context-free, that it needed to reflect the redesign of iOS.
The iOS implementation is more successful since Liquid Glass feels — and I mean feels — like something designed first for touch-based systems. There is an increasingly tight relationship between the device and its physical environment. Longstanding features like True Tone meet new (well, new-ish) shifting highlights that respond to physical device orientation, situating the iPhone within its real-world context. Yet, even in its best implementation on iOS, Liquid Glass looks out of place when it is used in apps that rely on layouts driven by simple shapes and clean lines.
The Clock app is a great example of this clashing visual language. Each of its functions is mostly a black screen, with white numerals and lines, and maybe a pop of colour — the second hand in the stopwatch or the green start button for timers. And then you tap or slide on the toolbar at the bottom to move through the app and, suddenly, a hyper-realistic glassy lens appears.
The Calculator app is another place where the limited application of Liquid Glass feels wrong. The buttons are drawn in some kind of glass texture — they are translucent and stretch in the same way as menus do — but the ring of shimmering highlight is so thin it may as well not exist. Apple does say in its Human Interface Guidelines that Liquid Glass should be used “sparingly”, but it uses the texture everywhere. There are far more generous buttons in Control Centre and on the passcode entry screen that feel more satisfying to press. Also, even though the buttons in Calculator are nominally translucent, the orange ones remain vibrant despite being presented against a solid black background.
This confused approach to visual design is present throughout the system. It has been there to some extent for years — Books has a realistic page-flip animation, and Notes retained a paper texture long after the iOS 7 redesign. But Liquid Glass is such a vastly different presentation compared to the rest of iOS that it stands out. When some elements have such a dynamic and visually rich presentation while others are plain, the combination does not feel harmonious. It feels unfinished.
This MacOS update is not all bad on the design front, to be fair. Sidebar icons now have a near-black fill instead of an application-specific colour; they gain the highlight colour when a particular sidebar item is active. This has the downside of making applications less distinct from one another, but it is a contrast improvement in a user interface otherwise full of regressions. Also, inactive application windows are more obvious, with mid-grey toolbar items, window widgets, document icons, and window titles. On an iPhone, the biggest piece of user interface good news is a sharp reduction in the number of modal dialogs. They are not entirely banished — not even close — but fewer whole-screen takeovers is good news on today’s larger-screened devices. The second piece of good news is the new design of edit menus, which are no longer restricted to horizontal scrolling and can expand into vertically-scrolling context menus. Also, on the Lock Screen, you can now move the widgets row to the bottom of the screen, and I quite like that.
There are enhancements downstream from the floating controls paradigm reinforced in this Liquid Glass update. In many iOS applications with a large list view — Messages and Mail, for example — the search field is now positioned at the bottom, within easier reach. Floating controls do not require the Liquid Glass material; the Safari redesign in iOS 15 now seems like a preview of where Apple was headed, and it obviously does not use these glassy controls. But I think the reconsidered approach in iOS 26 is more successful in part because the controls have this glassy quality.
There is, in fact, quite a lot to like in Apple’s operating system updates this year that have nothing to do with user interface changes. This is not a full review, so I will give you some quick hits, starting with the new call screening feature. I have had this switched on since I upgraded in June and it is a sincere life improvement. I still get three to six scam calls daily, but now my phone hardly ever notifies me, and I can still receive legitimate calls from numbers not in my contacts. Bringing Preview to iOS is an upgrade for anyone who spends huge chunks of time marking up PDF documents.
Spotlight on MacOS is both way more powerful and way easier to use — if you want to just search for files and not applications, or vice-versa, you can filter it. Oh, and there is now a clipboard history feature which, smartly, is turned off by default.
You also need to try “Spatial Scene” photos. If you have not tried this feature already, go and do it, especially on your Lock Screen. I have tried it with photos taken on iPhones, pictures shot with my digital camera, and even film scans, and I have been astonished at how it looks and, especially, feels. I have had the best results when I start with portraits from my digital camera; perhaps unsurprisingly, the spatial conversion is only as good as the quality of the source photo. For a good Lock Screen image, especially if you want to overlap the clock, you will want a picture with reasonably clear background separation and a generous amount of space around the subject. There is a new filter in the Lock Screen image picker with good suggestions for Spatial Scene conversions. Again, you will want to try this.
There are downsides, though, to the two operating system updates I have used. Both are among the buggiest releases I can remember, likely in part because of the visual refresh. There are functional bugs, there are performance problems, and there are plenty of janky animations. There are so many little things that make the system feel fragile — the Wallpaper section of Settings, for example, has no idea widgets can now be aligned to the bottom of the Lock Screen, so they overlap with the clock. I hope this stuff gets fixed. Unfortunately, even though these operating systems are named for the coming calendar year, Apple will be shifting engineering efforts to the OS 27 releases in a matter of months.
The ‘Why’ Of It All
I kept asking myself “why?” as I used iOS 26 and MacOS 26 this summer. I wanted to understand the rationale for a complete makeover across Apple’s entire line of products. What was the imperative for unifying the systems’ visual interface design language? Why this, specifically?
Come to think of it, why is this the first time all of the operating systems are marketed with the same version number? And why did Apple decide this was the right time to make a dedicated “operating system” section on its website to show how it delivers a “more consistent experience” between devices? I have no evidence Apple would want to unify under some kind of “Apple OS” branding, but if Apple did want to make such a change, this feels like a very Apple-y way to soft-launch it. After all, your devices already run specific versions of Safari and Siri without them needing to be called “Mac Safari” and “Watch Siri”. Just throwing that thought into the wind.
If anything like that pans out, it could explain why Apple sees its products as needing a unified identity. In the present, however, it suggests a change in how Apple presents its approach to product design. Public statements for the past twenty-plus years have communicated the importance of letting each product be true to itself. It would be easy to dismiss this as marketing pablum if not for how reliably it has been backed by actual evidence. Yes, lines have become blurrier on the developer side with technologies like Catalyst, and on the user side by allowing iPhone and iPad apps to run within MacOS. But a nominally unified look and feel makes the erosion of these boundaries even more obvious.
Perhaps I am overthinking this. It could simply be an exercise in branding. Apple’s operating systems have shared a proprietary system typeface for a decade without it meaning much more than a unified brand. And it is Apple’s brand that wins out when applications look the same no matter where they are used. In my experience so far, developers who strictly adhere to Apple’s recommendations and fully embrace Liquid Glass end up with applications that have little individual character. This can sometimes work to a developer’s benefit, if their intention is for their apps to blend into the native experience, but some developers have such specific visual styles that an Apple-like use of Liquid Glass would actually be to their detriment. The updates to Cultured Code’s Things are extremely subtle, which is, I think, the right call: I want Things to look like Things, not a generic to-do app.
A uniform look-and-feel across not just Apple’s apps and systems, but also third-party apps, is a most cynical answer to the question of why? and, while I do not wish to entirely dismiss it, it would disappoint me if this was Apple’s goal. What I think is true about this explanation is how Liquid Glass across most operating systems makes it possible for any app to instantly feel like it is platform-native, even when it is not.
Or maybe the why? of it all is for some future products, like a long-rumoured touch-screen laptop. This rationale drove speculation last time Apple updated the design of MacOS, and we still do not have touch screen Macs, so I am skeptical.
The frustrating thing about the answers I have given above to the question of why? is that I am only speculating. So far, Apple justifies this redesign, basically, by saying it is self-evidently good for all of its platforms to look the same. This is an inadequate explanation, and it is not borne out in my actual day-to-day use. I think iOS is mostly fine; Liquid Glass feels suited to a whole-screen-app touch-based context. In MacOS, it feels alien, unsuited to a multi-window keyboard-and-pointer system.
I am sure this visual language will be refined. I hope it has good bones since Apple is very obviously committed to Liquid Glass and its sea of floating buttons. But so far, it does not feel ready. I spent the summer using MacOS in its default configuration, aching to turn on “Reduce Transparency” in Accessibility settings. It is not pretty, especially in application toolbars, but it is less distracting because different parts of an application have their own distinct space.
I have tried, in this overview and critique, to be cautious about how much I allow the newness of it all to colour my perception. Aqua was a polarizing look when it was introduced in Mac OS X. Leander Kahney, in a December 2000 article for Wired, wrote about longtime users who were downright offended by its appearance in the then-current Public Beta, relying on utilities to “Macify” the Mac. That was in 2000, sixteen years after the Mac was introduced; Aqua has now been around, in some form, for nine years longer than that. But it at least felt like a complete idea; in his review of Mac OS X Leopard, John Siracusa wrote of how it was “a single, internally consistent design from top to bottom”.
These new operating systems do not feel like they are achieving that level of consistency despite being nominally more consistent across a half-dozen platforms. MacOS has received perhaps the most substantial visual changes, yet it is full of workarounds and exceptions. The changes made to iOS feel surface-level and clash with the visual language established since iOS 7. I am hopeful for the evolution of these ideas into something more cohesive. Most software is a work-in-progress, and the user interface is no exception. But all I can reflect upon is what is before me today. Quite simply, not only is it not ready, I am concerned about what it implies about Apple’s standards. Best case scenario is that it is setting up something really great and it all makes sense in hindsight. But I still have to live with it, in this condition, on today’s hardware that is, to me, less of a showcase for Apple’s visual design cleverness and more of a means to get things done. It is not a tragedy, but I would like to fast-forward through two or three years’ worth of updates to get to a point where, I hope, it is much better than it is today.
Last year, Robb Knight figured out how Perplexity, an artificial intelligence search engine, was evading instructions not to crawl particular sites. Knight learned that Perplexity’s engine would use an unlisted user agent to scrape summaries of pages on websites where Perplexity was blocked. In my testing, I found the summaries were outdated by hours to days, indicating to me the pages were not being actively visited as though guided by a user. Aravind Srinivas, CEO of Perplexity, told Mark Sullivan, of Fast Company, it was the fault of a third-party crawler and denied wrongdoing.
This dispute was, I think, a clear marker in a debate concerning what control website owners have — or ought to have — over access to and interpretation of their websites, an issue that was recently re-raised in an article by Mike Masnick of Techdirt. Masnick explores scraper gating services offered by Cloudflare and Reddit’s blocking of the Internet Archive, and concludes the web is being cleaved in two:
There are plenty of reasons to be concerned about LLM/AI tools these days, in terms of how they can be overhyped, how they can be misused, and certainly over who has power and control over the systems. But it’s deeply concerning to me how many people who supported an open internet and the fundamental principles that underlie that have now given up on those principles because they see that some AI companies might benefit from an open internet.
The problem isn’t just ideological — it’s practical. We’re watching the construction of a fundamentally different internet, one where access is controlled by gatekeepers and paywalls rather than governed by open protocols and user choice. And we’re doing it in the name of stopping AI companies, even though the real result will be to concentrate even more power in the hands of those same large tech companies while making the internet less useful for everyone else.
This is a passionately argued article about a thorny issue. I, too, am saddened by an increasingly walled-off web, whether through payment gates or the softer barriers of logins and email subscriptions. Yet Masnick misses the mark in ways he is usually more careful to avoid.
In the second quoted paragraph above, for example, Masnick laments an internet “governed [less] by open protocols and user choice” than “controlled by gatekeepers”. These are presented as opposing qualities, but they are in fact complementary. Open protocols frequently contain specifications for authentication, allowing users and administrators to limit access. Robots.txt is an open standard that is specifically intended to communicate access rules. Thus, while an open web is averse to centralization and proprietary technologies, it does not necessarily mean a porous web. The open web does not necessarily come without financial cost to human users. I see no reason the same principle should not be applied to robots, too.
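In fact, a plain robots.txt file can already express a nuanced policy along these lines. The crawler tokens below are ones these companies publicly document; honouring them, of course, remains voluntary:

```
# Welcome ordinary crawlers; opt out of A.I. training crawlers.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
```

As the Perplexity episode above showed, though, this is a handshake, not a lock, which is precisely why publishers are reaching for harder barriers.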
Masnick:
This illustrates the core problem: we’re not just blocking bulk AI training anymore. We’re blocking legitimate individual use of AI tools to access and analyze web content. That’s not protecting creator rights — that’s breaking the fundamental promise of the web that if you publish something publicly, people should be able to access and use it.
Masnick is entirely correct: people should be able to access and use it. They should be able to use any web browser they like, with whatever browser extensions and user scripts they desire. That does not necessarily extend to machines. The specific use case Masnick is concerned with is that he uses Lex as a kind of editorial verification step. When he references some news sites, however, Lex is blocked from reading them and therefore cannot provide notes on whether Masnick’s interpretation of a particular article is accurate. “I’m not trying to train an A.I. on those articles”, Masnick writes. “I’m just asking it to read over the article, read over what I’ve written, and give me a sense” if they jibe.
That may well be the case, but the blame for mistrust lies squarely with artificial intelligence companies. The original sin of representatives of this industry was to believe they did not require permission to ingest a subset of the corpus of human knowledge and expression, nor did they need to offer compensation. They did not seem to draw hard ethical lines around what they would consume for training, either — if it was publicly available, it could become part of their model. Anthropic and Meta both relied on materials available at LibGen, many of which are hosted without permission. A training data set included fan-made subtitles, which can be treated as illicit derivative works. I cannot blame any publisher for treating these automated visitors as untrustworthy or even hostile because A.I. companies have sabotaged attempts at building trust. Some seem to treat the restrictions of a robots.txt file as mere suggestions to be worked around. How can a publisher be confident the user-initiated retrieval of their articles, as Masnick is doing, is not used for training in any way?
Masnick is right, however, to be worried about how this is bifurcating the web. Websites like 404 Media have explicitly cited A.I. scraping as the reason for imposing a login wall. A cynical person might view this as a convenient excuse to collect ever-important email addresses and, while I cannot disprove that, it is still a barrier to entry. Then there are the unintended consequences of trying to impose limits on scraping. After Reddit announced it would block the Internet Archive, probably to comply with some kind of exclusivity expectations in its agreements with Google and OpenAI, it implied the Archive does not pass along the robots.txt rules of the sites in its collection. If a website administrator truly does not want the material on their site to be used for A.I. training, they would need to prevent the Internet Archive from scraping as well — and that would be a horrible consequence.
Of course, Reddit does not block A.I. scraping on principle. It appears to be a contractual matter, where third parties pay the company some massive amount of money for access. Anthropic’s recent proposed settlement put the price of sufficiently compensating the authors of the books it pirated at a billion and a half dollars. M.G. Siegler called this “pulling up a drawbridge” by setting a high cost floor that will lock out insufficiently funded competitors. Masnick worries about the same thing, predicting the ultimate winners of this will be “the same large tech companies that can afford licensing deals and that have the resources to navigate an increasingly complex web of access restrictions”.
To be sure, intellectual property law is a mess, and encouraging copyright maximalism will have negative consequences. The U.S. already has some of the longest copyright protections in the world, and they have unfortunately spilled into Canada thanks to trade agreements. But A.I. organizations have not created a bottom-up rebellious exploration of the limits of intellectual property law. They are big businesses with deep pockets exploiting decades of news, blogging, photography, video, and art. Nobody, as near as makes no difference, expected something they published online would one day feed the machines that now produce personalized Facebook slop.
Masnick acknowledges faults like these in his conclusion, but I do not think his proposed solutions are very strong:
None of this means we should ignore legitimate concerns about AI training or creator compensation. But we should address those concerns through mechanisms that preserve internet openness rather than destroy it. That might mean new business models, better attribution systems, or novel approaches to creator compensation. What it shouldn’t mean is abandoning the fundamental architecture of the web.
The “new business models” and “better attribution systems” are not elucidated here, but the compensation pitch, which Masnick has made before, seems like a disaster in the making to me. Here is the nut of his earlier explanation:
But… that doesn’t mean there isn’t a better solution. If the tech companies need good, well-written content to fill their training systems, and the world needs good, high-quality journalism, why don’t the big AI companies agree to start funding journalists and solve both problems in one move?
What Masnick proposes is that A.I. companies could pay journalists to produce new articles for their training data. Respectfully, this would be so insubstantial as to be worthless. To train their models, A.I. companies are ingesting millions of websites, tens of millions of YouTube videos, hundreds of thousands of books, and probably far more — the training data is opaque. It is almost like a perverse version of fair use. Instead of a small amount of an existing work becoming the basis of a larger body of work — like the quotes I am using and attributing in this article — this is a massive library of fully captured information. Any single piece is of little consequence to the whole, but the whole does not work as well without all those tiny pieces.
The output of a single journalist is inconsequential, an argument Masnick also makes: “[a]ny individual piece of content (or even 80k pieces of content) is actually not worth that much” in the scope of training a large language model. This is near the beginning of the same piece he concludes by arguing we need “novel approaches to creator compensation”. Why would A.I. companies pay journalists to produce a microscopic portion of the words training their systems when they have historically used billions — perhaps trillions — of freebies? I can think of other reasons this would not work, but this is the most obvious.
One thing that might help, not suggested by Masnick, is improving the controls available to publishers. Today marked the launch of the Really Simple Licensing standard, which offers publishers a way to define machine-readable licenses. These can be applied site-wide, sure, but also at a per-page level. It is still up to A.I. companies to adhere to the terms, with one exception — the standard includes ways to gate access to encrypted material. This raises concerns about a growing proliferation of digital rights management, bringing me back to Masnick’s reasonable concern about a web increasingly walled off and accessible only to authorized visitors.
I am not saying I have better ideas; I appreciate that Masnick at least brought something to the table in that regard, as I have nothing to add. I, too, am concerned about dividing the web. However, I think publishers are coming at this from a reasonable place. This is not, as Masnick puts it, a “knee-jerk, anti-A.I. stance” to which publishers have responded with restrictions because “[i]f it hurts A.I. companies, it must be good”. A.I. companies largely did this to themselves by raising billions of dollars in funding to strip-mine the public web without permission and, ultimately, with scant acknowledgement. I believe information should be freer than it is, that intellectual property hoarding is wrong, and that we are better when we build on top of each other. That is a fine stance for information reuse by fellow human beings. However, the massive scale of artificial intelligence training comes with different standards.
In writing this article, I am acutely aware it will become part of a training data set. I could block those crawlers — I have blocked a few — but that is only partly the point. I simply do not know how much of the control I reclaim now will remain relevant in the future, and I am sure the same is true of any real media organization. I write here for you, not for the benefit of building the machines producing a firehose of spam, scams, and slop. The artificial intelligence companies have already violated the expectations of even a public web. Regardless of the benefits they have created — and I do believe there are benefits to these technologies — they have behaved unethically. Defensive action is the only control a publisher can assume right now.
The United States government has long had an interest in boosting its high technology sector, with manifold objectives: for soft power, espionage, and financial dominance, at least. It has accomplished this through tax incentives, funding some of the best universities in the world, lax antitrust and privacy enforcement, and — in some cases — direct involvement. The internet began as a Department of Defense project, and the government invests in businesses through firms like In-Q-Tel.
All of this has worked splendidly for them. The world’s technology stack is overwhelmingly U.S.-dependent across the board, from consumers through large businesses and up to governments, even those which are not allies. Apparently, though, it is not enough, and the country’s leaders are desperately worried about regulation in Europe and competition from East Asia.
Federal Trade Commission Chairman Andrew N. Ferguson sent letters today to more than a dozen prominent technology companies reminding them of their obligations to protect the privacy and data security of American consumers despite pressure from foreign governments to weaken such protections. He also warned them that censoring Americans at the behest of foreign powers might violate the law.
[…]
“I am concerned that these actions by foreign powers to impose censorship and weaken end-to-end encryption will erode Americans’ freedoms and subject them to myriad harms, such as surveillance by foreign governments and an increased risk of identity theft and fraud,” Chairman [Andrew] Ferguson wrote.
These letters (PDF) serve as a reminder to, in effect, enforce U.S. digital supremacy around the world. Many of the most popular social networks are U.S.-based and export the country’s interpretation of permissive expression laws around the world, even to countries with different expectations. Occasionally, there will be conflicting policies which may mean country-specific moderation. What Ferguson’s letter appears to be asking is for U.S. companies to be sovereign places for U.S. citizens regardless of where their speech may appear.
The U.S. government is certainly correct to protect the interests of its citizens. But let us not pretend this is not also a re-emphasis of how important it is to the U.S. government to export its speech policy internationally, especially when it fails to adhere to that policy on its home territory. It is not just the hypocrisy that rankles; it is also the audacity of requiring posts by U.S. users to be treated as a special class, to the extent that E.U. officials enforcing their own laws in their own territory could be subjected to sanctions.
As for encryption, I have yet to see sufficient evidence of a radical departure from previous statements made by this president. When he was running the first time around, he called for an Apple boycott over the company’s refusal to build a special version of iOS to decrypt an iPhone used by a mass shooter. During his first term, Trump demanded Apple decrypt another iPhone in a different mass shooting. After two attempted assassinations last year, Trump once again said Apple should forcibly decrypt the iPhones of those allegedly responsible. It was under his first administration that Apple was dissuaded from launching Advanced Data Protection in the first place. U.S. companies with European divisions recently confirmed they cannot comply with E.U. privacy and security guarantees because they are subject to the provisions of the CLOUD Act, enacted during the first Trump administration.
The closest Trump has gotten to changing his stance is in a February interview with the Spectator’s Ben Domenech:
BD: But the problem is he [the British Prime Minister] runs, your vice president obviously eloquently pointed this out in Munich, he runs a nation now that is removing the security helmets on Apple phones so that they can—
DJT: We told them you can’t do this.
BD: Yeah, Tulsi, I saw—
DJT: We actually told him… that’s incredible. That’s something, you know, that you hear about with China.
The red line, it seems, is not a principled opposition to “removing the security helmet” of encryption, but the U.K.’s specific legislation. It is a distinction with little difference. The president and U.S. law enforcement want on-demand decryption just as much as their U.K. counterparts, and have attempted to legislate similar requirements.
While the U.S. has been reinforcing the supremacy of its tech companies in Europe, it has also been propping them up at home:
Intel Corporation today announced an agreement with the Trump Administration to support the continued expansion of American technology and manufacturing leadership. Under terms of the agreement, the United States government will make an $8.9 billion investment in Intel common stock, reflecting the confidence the Administration has in Intel to advance key national priorities and the critically important role the company plays in expanding the domestic semiconductor industry.
The government’s equity stake will be funded by the remaining $5.7 billion in grants previously awarded, but not yet paid, to Intel under the U.S. CHIPS and Science Act and $3.2 billion awarded to the company as part of the Secure Enclave program. Intel will continue to deliver on its Secure Enclave obligations and reaffirmed its commitment to delivering trusted and secure semiconductors to the U.S. Department of Defense. The $8.9 billion investment is in addition to the $2.2 billion in CHIPS grants Intel has received to date, making for a total investment of $11.1 billion.
Despite its size — 10% of the company, making it the single largest shareholder — this press release says this investment is “a passive ownership, with no Board representation or other governance or information rights”. Even so, this is the U.S. attempting to reassert the once-vaunted position of Intel.
This deal is not as absurd as it seems. It is entirely antithetical to the claimed free market capitalist principles common to both major U.S. political parties but, in particular, espoused by Republicans. It is probably going to be wielded in terrible ways. But I can see at least one defensible reason for the U.S. to treat the integrity of Intel as an urgent issue: geology.
Near the end of Patrick McGee’s “Apple in China” sits a section that will haunt the corners of my brain for a long time. McGee writes that a huge amount of microprocessors — “at least 80 percent of the world’s most advanced chips” — are made by TSMC in Taiwan. There are political concerns with the way China has threatened Taiwan, which can be contained and controlled by humans, and frequent earthquakes, which cannot. Even setting aside questions about control, competition, and China, it makes a lot of sense for there to be more manufacturers of high-performance chips in places with less earthquake potential. (Silicon Valley is also sitting in a geologically risky place. Why do we do this to ourselves?)
At least Intel gets the shine of a Trump co-sign, and when has that ever gone wrong?
Then there are the deals struck with Nvidia and AMD, whereby the U.S. government gets a kickback in exchange for trade. Lauren Hirsch and Maureen Farrell, New York Times:
But some of Mr. Trump’s recent moves appear to be a strong break with historical precedent. In the cases of Nvidia and AMD, the Trump administration has proposed dictating the global market that these chipmakers can have access to. The two companies have promised to give 15 percent of their revenue from China to the U.S. government in order to have the right to sell chips in that country and bypass any future U.S. restrictions.
These moves add up and are, apparently, just the beginning. The U.S. has been a dominant force in high technology in part because of a flywheel effect created by early investments, some of which came from government sources and public institutions. This additional context does not undermine the entrepreneurship that came after, which has been a proud industry trait. In fact, it demonstrates a benefit of strong institutions.
The rest of the world should see these massive investments as an instruction to build up our own high technology industries. We should not be too proud in Canada to set up Crown corporations that can take this on, and we ought to work with governments elsewhere. We should also not lose sight of the increasing hostility of the U.S. government making these moves to reassert its dominance in the space. We can stop getting steamrolled if we want to, but we really need to want to. We can start small.
When I watched Tim Cook, in the White House, carefully assemble a glass-and-gold trophy fit for a king, it felt to me like a natural outcome of the events and actions exhaustively documented by Patrick McGee in “Apple in China”. It was a reflection of the arc of Cook’s career, and of Apple’s turnaround from dire straits to a kind of supranational superpower. It was a consequence of two of the world’s most powerful nations sliding toward the (even more) authoritarian, and a product of appeasement to strongmen on both sides of the Pacific.
At the heart of that media spectacle was an announcement by Apple of $100 billion in domestic manufacturing investment over four years, in addition to its existing $500 billion promise. This is an extraordinary amount of money to spend in the country from which Apple has extricated its manufacturing over the past twenty years. The message from Cook was “we’re going to keep building technologies at the heart of our products right here in America because we’re a proud American company and we believe deeply in the promise of this great nation”. But what becomes clear after digesting McGee’s book is that core Apple manufacturing is assuredly not returning to the United States.
Do not get me wrong: there is much to be admired in the complementary goals of reducing China-based manufacturing and an increasing U.S. role. Strip away for a minute the context of this president and his corrupt priorities. Rich nations have become dependent on people in poorer nations to make our stuff, and no nation is as critical to our global stuff supply as is China. One of the benefits of global trade is that it can smooth local rockiness; a bad harvest season no longer has to mean a shortage of food. Yet even if we ignore the unique political environment in China and its government’s detestable treatment of Uyghur peoples — among many domestic human rights abuses — it makes little sense for us to be so dependent on this one country. This is basically an antitrust problem.
At the same time, it sure would be nice if we made more of the stuff we buy closer to where we live. We have grown accustomed to externalizing the negative consequences of making all this stuff. Factories exist somewhere else, so the resources they consume and the pollution they create are of little concern to us. They are usually not staffed by a brand we know, and tasks may be subcontracted, so there is often sufficient plausible deniability vis-à-vis working conditions and labour standards. As McGee documents, activist campaigns had a brief period of limited success in pressuring Apple to reform its standards and crack down on misbehaviour before the pressure of product delivery caught up with the company and it stopped reporting its regressing numbers. Also, it is not as though Apple could truly avoid knowing the conditions at these factories when there are so many of its own employees working side-by-side with Foxconn.
All the work done by people in factories far away from where I live is, frankly, astonishing. Some people still erroneously believe the country of origin is an indicator of whether a product is made with any degree of finesse or care. This is simply untrue, and it has been for decades, as McGee emphasizes. This book is worth reading for this perspective alone. The goods made in China today are among the most precise and well-crafted anywhere, on a simply unbelievable scale. In fact, it is this very ability to produce so much great stuff so quickly that has tied Apple ever tighter to China, argues McGee:
Whereas smartphone rivals like Samsung could bolt a bunch of off-the-shelf components together and make a handset, Apple’s strategy required it to become ever more wedded to the industrial clusters forming around its production. As more of that work took place in China, with no other nation developing the same skills, Apple was growing dependent on the very capabilities it had created. (page 176)
Cook’s White House announcement, for all its patriotic fervour, only underscores this dependency. In the book’s introduction, McGee reports “Apple’s investments in China reached $55 billion per year by 2015, an astronomical figure that doesn’t include the costs of components in Apple hardware” (page 7). That sum built out a complete, nimble, and precise supply chain at vast scale. By contrast, Apple says it is contributing a total of $600 billion over four years, or $150 billion per year. In other words, it is investing nearly three times as much per year in the U.S. as it did in China and getting far less. Important stuff, to be sure, but less. And, yes, Apple is moving some iPhone production out of China, but not to the U.S. — something like 18% of iPhones are now made in India. McGee’s sources are skeptical of the company’s ability to do so at scale given the organization of the supply chain and the political positioning of its contract manufacturers, but nobody involved thinks Apple is going to have a U.S. iPhone factory.
So much of this story is about the iPhone, and it can be difficult to remember Apple makes a lot of other products. To McGee’s credit, he spends the first two-and-a-half sections of this six-part book exploring Apple’s history, the complex production of the G3 and G4 iMacs, and the making of the iPod, which laid the groundwork for the iPhone. But a majority of the rest of the book is about the iPhone. That is unsurprising.
For one, the iPhone is the product of a staggering amount of manufacturing knowledge. For another, it is, of course, a sales bonanza.
In fact, some of the most riveting stories in the book do not concern manufacturing at all. McGee writes of grey market iPhone sales — a side effect of which was the implementation of parts pairing and activation — and the early frenzy over the iPad. Most notably, McGee spends a couple of chapters — particularly “5 Alarm Fire” — dissecting the sub-par launch sales of the iPhone XR as revealed through executive emails and depositions after Apple was sued for allegedly misleading shareholders. The case was settled last year for $490 million without Apple admitting wrongdoing. Despite some of these documents becoming public in 2022, it seems nobody before McGee took the time to read through them. I am glad he did because it is revealing. Even pointing to the existence of these documents offers a fascinating glimpse of what Apple does when a product is selling poorly.
Frustratingly, McGee does not attribute specific claims or quotations to individual documents in this chapter. Virtually everything in “5 Alarm Fire” is cited simply to the case number, so you have to go poking around yourself if you wish to validate his claims or learn more about the story.1 It may be worthwhile to do so, however, since the case underscores the unique risk Apple takes by releasing just a few new iPhones each year. If a model is not particularly successful, Apple is not going to quietly drop it and replace it with a different SKU. With the 2018 iPhones, Apple was rocked by a bunch of different problems, most notably the decent but uninteresting iPhone XR — 79% fewer preorders (PDF) compared to the same sales channels for the iPhone 8 and 8 Plus — and the more exciting new phones from Huawei and Xiaomi released around the same time. Apple had hoped the 2018 iPhones would be more interesting to the Chinese market since they supported dual SIMs (PDF) and the iPhone XS came in gold. Apple responded to weak initial demand with targeted promotions, increased production of the year-old iPhone X, and more marketing, but this was not enough and the company had to lower its revenue expectations for the quarter.
Cook calling this “obviously a disaster” is, of course, relative, as is my framing of it as a “risk” of Apple’s smartphone release strategy. Apple still sold millions of iPhones — even the XR — and it still made a massive amount of money. It is a unique story, however, as it is one of the few times in the book where Apple has a problem of making too many products rather than too few. It is also illustrative of increasing competition from Chinese brands and, as emails reveal (PDF), trade tensions between the U.S. and China.
At its heart, this book is the story of the tension of a “proud American company” attempting to appease two increasingly nationalist and hostile governments. McGee examines Apple’s billion-dollar investment in Didi Chuxing, and mentions Cook’s appointment to the board of Tsinghua University School of Economics and Management. This is all part of the politicking the company realized it would need to do to appease President Xi. Similarly, its massive spending in China needed to be framed correctly. For example, in 2016, it said it was investing $275 billion in China over the following five years:
As mind-bogglingly large as its $275 billion investment was, it was not really a quid pro quo. The number didn’t represent any concession on Apple’s part. It was just the $55 billion the company estimated it’d invested for 2015, multiplied by five years. […] What was new, in other words, wasn’t Apple’s investment, but its marketing of the investment. China was accumulating reams of specialized knowledge from Apple, but Beijing didn’t know this because Apple had been so secretive. From this meeting forward, the days in which Apple failed to score any political points from its investments in the country were over. It was learning to speak the local language.
One can see a similar dynamic in the press releases for U.S. investments it began publishing one year later, after Donald Trump first took office. Like Xi, Trump was eager to bend Apple to his administration’s priorities. Some of the company’s actions and investments are probably the same as those it would have made anyhow, but it is important to these autocrat types that they believe they are calling the shots.
Among the reasons the U.S. has given for taking a more hostile trade position on China is its alleged and, in some cases, proven theft of intellectual property. McGee spends less time on this — in part, I imagine, because it is a hackneyed theme frequently used only to treat innovation by Chinese companies with suspicion and contempt. This book is a more levelheaded piece of analysis. Instead of having the de rigueur chapter or two dedicated to intellectual property leaving through the back door, McGee examines the less-reported front-door access points. Companies are pressured to participate in “joint ventures” with Chinese businesses to retain access to markets, for example; this is why iCloud in China is operated not by Apple, but by AIPO Cloud (Guizhou) Technology Co. Ltd.
Even though patent and design disputes are not an area of focus for McGee, they are part of the two countries’ disagreements over trade, and one area where Apple is again stuck in the middle. A concluding anecdote in the book references the launch of the Huawei Mate XT, a phone that folds in three which, to McGee, “appears to be a marvel of industrial engineering”:2
It was only in 2014 that Jony Ive complained of cheap Chinese phones and their brazen “theft” of his designs; it was 2018 when Cupertino expressed shock at Chinese brands’ ability to match the newest features; now, a Chinese brand is designing, manufacturing, and shipping more expensive phones with alluring features that, according to analysts, Apple isn’t expected to match until 2027. No wonder the most liked comment on a YouTube unboxing video of the Mate XT is, “Now you know why USA banned Huawei.” (pages 377–378)
The Mate XT was introduced the same day as the iPhone 16 line, and the differences could not have been more stark. The iPhone was a modest evolution of the company’s industrial design language, one that would still be familiar to someone who had been asleep for the preceding fifteen years. The Mate XT was anything but. The phones also had something in common: displays made by BOE. The company is one of several suppliers for the iPhone, and it enables the radical design of Huawei’s phone. But, according to Samsung, BOE’s ability to make OLED and flexible displays depends on stolen Samsung technology. The U.S. International Trade Commission agreed and will issue a final ruling in November, which is likely to prohibit U.S. imports of BOE-made displays. It seems like this will be yet another point of tension between the U.S. and China, and another thing Cook can mention during his next White House visit.
“Apple in China” is, as you can imagine, dense. I have barely made a dent in exploring it here. It is about four hundred pages and not a single one is wasted. This is not one of those typical books about Apple; there is little in here you have read before. It answers a bunch of questions I have had and serves as a way to decode Apple’s actions for the past ten years and, I think, during this second Trump presidency.
At the same time, it leaves me asking questions I did not fully consider before. I have long assumed Apple’s willingness to comply with the demands of the Chinese government is due to its supply chain and manufacturing role. That is certainly true, but I also imagine the country’s sizeable purchasing power is playing an increasing role. That is, even if Apple decentralizes its supply chain — unlikely, if McGee’s sources are to be believed — China is perhaps too large and too alluring a market for Apple to ignore. Then again, it arguably created this problem itself. Its investments in China have been so large and, McGee argues, so impactful they can be considered in the same context as the U.S.’ post-World War II European recovery efforts. The design of Apple’s ecosystem is also what lets it be so deferential. If the Chinese government does not want people in its country using an app, the centralized App Store means it can be yanked away.3
Cook has previously advocated for expressing social values as a corporate principle. In 2017, he said, perhaps paraphrasing his heroes Martin Luther King Jr. and John Lewis, “if you see something going on that’s not right, the most powerful form of consent is to say nothing”. But how does Cook stand firmly for those values while depending on an authoritarian country for Apple’s hardware, and trying to appease a wanna-be dictator for the good standing of his business? In short, he does not. In long, well, it is this book.
It is this tension — ably shown by McGee in specific actions and stories rather than merely written about — that elevates “Apple in China” above the typical books about Apple and its executives. It is part of the story of how Apple became massive, how an operations team became so influential, and how the seemingly dowdy business of supply chains in China came to involve such brilliant skills and become such a valuable asset in worldwide manufacturing. And it all leads directly to Tim Cook standing between Donald Trump and J.D. Vance in the White House, using the same autocrat-handling skills he has practiced for years. Few people or businesses come out of this story looking good. Some look worse than others.
The most relevant documents I found were under the “415” filings from December 2023. ↥︎
I think it is really weird to cite a YouTube comment in a serious book. ↥︎
I could not find a spot for this story in this review, but it forecasts Apple’s current position:
But Jobs resented third-party developers as freeloaders. In early 1980, he had a conversation with Mike Markkula, Apple’s chairman, where the two expressed their frustration at the rise of hardware and software groups building businesses around the Apple II. They asked each other: “Why should we allow people to make money off of us? Off of our innovations?” (page 23)
Sure seems like Jobs was able to revisit this when Apple created its rules for developing apps for the iPhone and subsequent devices. McGee sources this to Michael Malone’s 1999 book “Infinite Loop”, which I now feel I must read. ↥︎
The goals of art and commerce are basically opposite. Art fills our soul; it gives us emotional life. I have rarely heard anyone describe commerce similarly.
At its most ideal, the business of art enables more of it in greater variety, while allowing those who create it a reasonable living. This has rarely been the case. There are hundreds of years of unbelievably wealthy patrons building their cultural cachet by supporting artists of their particular taste and at their behest. More recently, recording contracts are routinely described as “brutal”, “a raw deal”, “predatory”, and “exploitative”. That has been generally true for all artists, but has been particularly pronounced for marginalized — and, to be even more specific, black — artists since the recording industry’s origins.
In “Mood Machine”, released earlier this year, Liz Pelly raises an additional complicating question: what is the relationship between art and commerce when massive data collection becomes routine?
The origins of streaming music may be found first in piracy and later in Rhapsody, but Spotify is where modern streaming platforms truly began. While Spotify’s founders tend to describe a noble birth, Pelly points to a 2015 interview with co-founder Martin Lorentzon in which he describes the idea to build a targeted advertising platform first. How it would acquire users was an open question — “[s]hould it be product search? Should it be movies, [or ‘Godfather’], or audiobooks? And then we ended up with music”. That is not necessarily a bad thing. What is bad, though, is that Spotify reportedly began with an unlicensed library and made money on the back of it. That combination does not sound to me like the result of a love of music.
Sadly, the interesting storytelling does not reliably continue. Admittedly, part of the reason for this is my personal distaste for Pelly’s style of writing, something I would not normally mention — surely not everyone is a fan of my writing style, either — but I feel compelled to because of how intrusive I found it. Far too many sections and chapters in this book end in a tedious turn of phrase: “Under the gaze of streaming surveillance, the exchange is never truly one-to-one;”; “It makes sense that as the digital world has grown to feel more like a shopping mall, it is also sometimes the very companies making music for shopping malls that are flooding its soundtrack”. Another part of the problem is the way this book is organized. Each chapter reads like an individual essay dedicated to a specific problem with Spotify — algorithmic suggestions, vibe-based playlists, and changing business terms, to name a few. What that looks like in practice is a great deal of repetition. I count at least seven chapters, of eighteen, dedicated to background and unfocused listening.
A further problem, for me, is that Pelly has been documenting these phenomena for years in articles published at the Baffler. I am familiar with the extraordinary amount of “chill” music, trendy sound palettes, the relationship between mood-based music and targeted advertising, and the comparisons to Uber because these articles were all published a minimum of six years ago. That I remember these articles is sometimes a testament to Pelly’s reporting; at other times, it reminds me of things I previously found questionable but could not quite articulate why.
One thing I remember from one article, for example, is its attempt to define a “Spotify sound”. This was revisited in the book in the “Streambait Pop” chapter (page 82):
By the time of Spotify’s IPO in 2018, it seemed that the peak playlist era had produced an aesthetic of its own. That year, one pop songwriter and producer told me that [Billie] Eilish had become a type of poster child for what was being called a “Spotify sound,” a deluge of platform-optimized pop that was muted, mid-tempo, and melancholy. He told me it had become normal for him to go into a session and hear someone say they wanted to write a “Spotify song” to pitch around to artists: “I’ve definitely been in circumstances where people are saying, ‘Let’s make one of those sad girl Spotify songs.’ You give yourself a target,” he said. It was a formula. “It has this soft, emo-y, cutesy thing to it. These days it’s often really minimal and based around just a few simple elements in verses. Often a snap in the verses. And then the choruses sometimes employ vocal samples. It’s usually kind of emo in lyrical nature.”
Pelly’s argument is built primarily around the works of Charlotte Lawrence, Sasha Sloan, and Nina Nesbitt, none of whom I am familiar with. But their music — “Normal” and “Psychopath” are both named in the article — sounds like a lot of pop music trends of the time: a blend of genres that emerges kind of beach-clubby, pretty breathy, and electronics-heavy but not particularly danceable. Pelly quotes an indie rock label owner calling it “emotional wallpaper”.
In re-reading Pelly’s “streambait” article for this piece, I found this paragraph a good distillation of many arguments made throughout the book:
Musical trends produced in the streaming era are inherently connected to attention, whether it’s hard-and-fast attention-grabbing hooks, pop drops and chorus-loops engineered for the pleasure centers of our brains, or music that strategically requires no attention at all—the background music, the emotional wallpaper, the chill-pop-sad-vibe playlist fodder. These sounds and strategies all have streambait tricks embedded within them, whether they aim to wedge bits of a song into our skulls or just angle toward the inoffensive and mood-specific-enough to prevent users from clicking away. All of this caters to an economy of clicks and completions, where the most precious commodity is polarized human attention — either amped up or zoned out—and where success is determined, almost in advance, by data.
As with similar essays in “Mood Machine”, very little of this feels like it is directly traceable to Spotify or streaming generally. There has long been pop music that is earwormy, and pop music that is kind of a silence-filler. When radio was more influential, the former could be found on the contemporary hit radio station and the latter on any number of adult contemporary variants.
Coalescing around a particular sound is also not a Spotify phenomenon. The early 1990s brought an onslaught of Nirvana imitators, and the late 1990s polished the sound so hard it removed any trace of edge and intrigue it once held. The early-2000s dominance of Coldplay made way for the mid-2000s Timbaland production craze, which led to a wave of late-2000s dance and club pop, which was followed in the early 2010s by an Americana revival. This is the power of radio. Or, it was the power of radio, at least. You could describe any of the chart-topping songs in the same terms Pelly uses for “streambait”: “attention-grabbing hooks”, “chorus-loops engineered for the pleasure centers of our brains”, and “chill-pop-sad-vibe playlist fodder”. Should this be blamed on the precise listener analytics dashboard available to artists on Spotify? I am not sufficiently convinced.
Pelly describes a discussion she had with two teenagers outside an all-ages venue as they struggled to describe the “aesthetic rap” show they were attending, a genre which seems to be a slower and spacier take on rage (page 118):
The kids I spoke to outside Market seemed genuinely enthused. But as I headed home, I was struck by how palpably it seemed that most of those conversations were more concerned with a niche vibe fandom — which no one could even really explain — than the artists themselves.
I cannot imagine this is a new phenomenon. Some people develop a deep fascination with music and seek releases from specific artists. But plenty are only looking for a sound and a vibe. It is why retailers and magazines gave away sampler CDs in the mid-2000s, scratching a generic indie rock itch.
What Pelly keeps describing in these chapters is a kind of commodification of cool, none of which is new or unique to Spotify. It is the story of popular culture writ large: things begin as cool for a small group, are absorbed into broader society, and are sold back to us by industry. This most often happens to marginalized communities who find community in vocabulary, music, visual art, and dance, and then it gets diluted as it becomes mainstreamed.
As noted, Pelly dedicates considerable space in the book to chill playlists — “‘Chilled Dance Hits,’ ‘Chilled R&B,’ and ‘Chilled Reggae’ were all among Spotify’s official playlist offerings, alongside collections like ‘Chillin’ on a Dirt Road,’ ‘lofi chill,’ and ‘Calm.’” (page 45). This is not entirely Spotify’s fault; YouTube expanded the availability of live streams in 2013, and it resulted in plenty of samey chill hop stations. In a 2018 New York Times article about these nonstop live-streams, Jonah E. Bromwich writes:
Channels like College Music, ChilledCow, Chillhop Music and others are unlikely to have a broad impact on the music industry. But they represent an underground alternative to the streaming hegemony of Spotify and Apple Music. The industry commentator Bob Lefsetz said that while the stations were not likely to become a lucrative endeavor, they were a way for members of the public to seize power back from cultural gatekeepers.
Instead of being predominantly inspired or encouraged by Spotify, it is possible the growth of the background music genre is something the company is merely taking advantage of — and take advantage it did.
Whether on YouTube or Spotify, the beat-makers behind these tracks are loosely inspired by artists like J Dilla, but they have coalesced around a distinctly hazy and slowed-down instrumental palette. That these tracks are deliberately indistinct has led to Spotify commissioning low-royalty generic tracks and, as Pelly writes, this passivity is an area “where the imminent A.I. infiltration of music was most feasible: the replacement of lean-back mood music” (page 132).
All told, it sure sounds like Spotify aligned its recommendations to compel users into filling silence with music featureless enough it could be replicated by what are, in effect, stock tracks. If this was a deliberate strategy to lower Spotify’s royalty expenses, it has had a mixed effect. Setting aside the ongoing popularity of big pop stars like Taylor Swift and Justin Bieber — I would love to know how much of Spotify’s revenue is sent to those two artists alone — the rise of streaming also coincided with the explosion of in-your-face K-pop groups, renewed interest in rock music, and revivals of funk and disco. These are not the kinds of passive listening genres Pelly seems so concerned with.
That is not to say Spotify has no influence on what is popular. Just as what was made popular on the radio brought a wave of imitators, the same is true in an era when streaming is how most people listen to music. None of this is new. A streaming listener’s context is often quite similar to a radio listener’s, too. Pelly’s exploration of the chilled-out Spotify playlist and lean-back listener reads, to me, with considerable disdain. But that kind of passive listening was common in the radio days. People put music on in the car and at work all the time. My parents used to put a CD on when they were cooking dinner. I do not think they were captivated in that moment by the sounds of Genesis or the Police. I was listening to music while reading this book and writing this article. Sometimes, an album will get played as a background to other tasks; any musician is surely aware of this.
Perhaps you, too, are now seeing what I began to understand. What Pelly keeps gesturing at throughout this book — but never quite reaches — is that the problems Spotify faces are similar to those of any massive two-sided platform. Pick a case study from any of the large tech companies and you can find parallels in Spotify. It has hundreds of millions of subscribers; everything it does is at vast scale.
It has privacy problems. Pelly dedicates a chapter to “Streaming as Surveillance”, pointing to Spotify’s participation in the data broker and ad tech economy. Spotify suggests its ads can be targeted based on users’ moods, as correlated with playlist and song data. Like so many other ad tech sales pitches, this is likely an inflated claim with only limited success in practice. Yet it is also a little creepy to consider this is what Spotify aspires for its ad product.
Spotify, like many others, faces moderation problems. Spotify does not want to put too many limits on what music is accepted. In the best of circumstances, this makes it possible for a nascent artist to rise from obscurity and start a career. But there are financial incentives to gaming the system. There are people who will follow trends, and even commit outright fraud manually or with A.I.-generated material. This is true for other broadly available revenue machines — Google Ads and YouTube are two that immediately spring to mind. In an attempt to disincentivize these behaviours and reduce Spotify’s costs, the company announced in November 2023 it would stop paying royalties for tracks with fewer than one thousand annual streams. Pelly writes this “was part of a campaign waged by Universal Music Group to revamp streaming in its favor” (page 155). When Spotify rolled out this new royalty structure, UMG CEO Lucian Grainge bade good riddance to “merchants of garbage […] no one actually wants to listen to” (page 157). How much it actually hurts low-effort spammers is a good question, but it impacts legitimate indie artists — what Grainge calls “garbage” — for whom Spotify now presents no advantage over piracy.
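To make the stakes of that threshold concrete, here is a toy model of a pro-rata royalty pool with a minimum-stream cutoff. The numbers are entirely my own invention, and this is not Spotify’s actual accounting; it only illustrates the mechanic. Tracks under the threshold earn nothing, and their would-be share is spread across everything above it:

# Toy model of a pro-rata royalty pool with a minimum-stream threshold.
# All figures are invented for illustration; not Spotify's accounting.

POOL = 1_000_000.00  # total royalties to distribute, in dollars
THRESHOLD = 1_000    # minimum annual streams to qualify for a payout

catalogue = {
    "major-label hit": 50_000_000,
    "mid-size indie": 400_000,
    "niche artist": 900,  # paid before the change, unpaid after
}

def payouts(tracks, threshold=0):
    # Pro-rata split: each qualifying track gets its share of total streams.
    eligible = {name: n for name, n in tracks.items() if n >= threshold}
    total = sum(eligible.values())
    return {name: POOL * n / total for name, n in eligible.items()}

before = payouts(catalogue)
after = payouts(catalogue, THRESHOLD)
for name in catalogue:
    print(f"{name}: ${before.get(name, 0):,.2f} -> ${after.get(name, 0):,.2f}")

The niche artist’s few dollars do not disappear from the pool; they flow upward to the tracks already earning the most. That is why a policy framed as spam control also reads as a small giveaway to the largest rights holders.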
I would not be so presumptuous as to say that is what this book ought to have been but, as a longtime reader of Pelly’s articles about the subject, I was frustrated by “Mood Machine”. It is the kind of book I wish would be taken apart and reassembled for better flow and a more coherent structure. Spotify and the streaming model have problems. “Mood Machine” identifies many of them. But the money quote — the one that cut through everything for me — is from an anonymous former Spotify employee (page 167):
“If Spotify is guilty of something, they had the opportunity at one point to change the way value was exchanged in the music industry, and they decided not to,” the former employee told me. “So it’s just upholding the way that things have always been.”
We treat art terribly. It is glamorous and alluring, but it is ultimately a plaything of the rich. The people who make money in art, no matter the discipline, are those responsible for its distribution, management, and sale. Those creating the actual work that enriches our lives too often get the scraps, unless they have enough cachet to dictate the terms in their favour.
Spotify is just another layer in the chain; another place that makes far more money than artists ever will. An artist understands they are signing up for a job with unpredictable pay, but Spotify’s particular structure makes it even more opaque.
The four biggest audio streaming platforms — Spotify, YouTube, Tencent, and Apple Music — are the top-down force pushing culture in a particular direction, a level of influence Clear Channel’s executives, in that company’s heyday, could only have hoped for. Streaming can help people learn about music they have never heard before. But it is not very effective as an artistic conduit. It is an ad-supported platform operating at global scale, much like a handful of other large tech companies, and it faces similar problems.
Yet none of them are as essentially tied to the distribution of art as is Spotify. It is a shame it did not create something to upend the status quo and make more artists more money. I guess part of the reason could be that its co-founders saw music as one of several interchangeable user acquisition strategies to sell advertisements — you know, for the love of art and music.
Mark Zuckerberg is not much of a visionary. He is ambitious, sure, and he has big ideas. He occasionally pops into the public consciousness to share some new direction in which he is taking his company — a new area of focus that promises to assert his company’s leadership in technology and society. But very little of it seems to bear fruit or be based on a coherent set of principles.
For example, due to Meta’s scale, it is running into limitations on its total addressable market based on global internet connectivity. It has therefore participated in several related projects, like an index of worldwide internet availability produced with the Economist, which has not been updated since 2022. In 2014, it acquired a company building a solar-powered drone to beam service to people in more remote locations; the project was cancelled in 2018. It made a robot to wrap fibre optic cable around existing power lines, which it licensed to Hibot in 2023; Hibot has nothing on its website about the robot.
It is not just Meta’s globe-spanning ambitions that have faltered. In 2019, Zuckerberg outlined a “privacy-focused vision for social networking” for what was then Facebook, the core tenets of which in no way conflict with the company’s targeted advertising business. Aside from the things I hope Facebook was already doing — data should be stored securely, private interactions should remain private, and so on — there were some lofty goals. Zuckerberg said the company should roll out end-to-end encrypted messaging across its product line; that it should add controls to automatically delete or hide posts after some amount of time; that its products should be extremely interoperable with those from third parties. As of writing, Meta has added end-to-end encryption to Facebook Messenger and Instagram, but it is on by default only in Messenger. (WhatsApp was end-to-end encrypted by default already.) It has not added an automatic post deletion feature to Facebook or Instagram. Its apps remain stubbornly walled off. You cannot even sign into a third-party Mastodon app with a Threads account, even though Threads is amongst the newest and most interoperable offerings from Meta.
Zuckerberg published that when it was advantageous for the company to be seen as doing its part for user privacy. Similarly, when it was smart to advocate for platform safety, Zuckerberg was contrite:
But it’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm as well. That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy. We didn’t take a broad enough view of what our responsibility is, and that was a huge mistake. It was my mistake.
Then, when it became a good move to be brash and arrogant, Zuckerberg put on a gold chain and a million-dollar watch to explain how platform moderation had gone too far.
To be clear, Meta has not entirely failed with these initiatives. As mentioned, Threads is relatively interoperable, and the company defaulted to end-to-end encryption in Facebook Messenger in 2023. It said earlier this year it is spending $10 billion on a massive sub-sea cable, a proven technology that can expand connectivity more than a solar-powered drone ever could.
But I have so far not mentioned the metaverse. According to Zuckerberg, this is “an embodied internet where you’re in the experience, not just looking at it”, and it was worth pivoting the entire company to be “metaverse-first”. The company renamed itself “Meta”. Zuckerberg forecasted an “Altria moment” a few years prior and the press noticed. In announcing this new direction in 2021, Zuckerberg acknowledged it would be a long-term goal, though predicted it would be “mainstream in the next five to ten years”:
Our hope is that within the next decade, the metaverse will reach a billion people, host hundreds of billions of dollars of digital commerce, and support jobs for millions of creators and developers.
Granted, it has not been even four years since Zuckerberg made these announcements, but are we any closer to his company’s vision becoming mainstream? If you broaden the definition of “metaverse” to include all augmented and virtual reality products then, yes, it appears to be a growing industry. But the vision shown at Connect 2021 is scarcely anywhere to be found. We are not attending virtual concerts or buying virtual merch at virtual after-parties. I am aching to know how the metaverse real estate market is doing; nobody I know is living in a virtual house.
As part of this effort, Meta announced in May 2022 it would support NFTs on Instagram. These would be important building blocks for the metaverse, the company said, “critical for how people will buy, use and share virtual objects and experiences” in the virtual environment it was building. Meta quickly expanded availability to Facebook and rolled it out worldwide. Then, in March 2023, it ended support for NFTs altogether, saying “[a]ny collectibles you’ve already shared will remain as posts, but no blockchain info will be displayed”.
Zuckerberg has repeatedly changed direction on what his company is supposed to stand for. He has plenty of ideas, sure, and they are often the kinds of things requiring resources in an amount only possible for a giant corporation like the one he runs. And he has done it again by dedicating Meta’s efforts to what he is calling — in a new manifesto, open letter, mission statement, or whatever this is — “personal superintelligence”.
I do have to take a moment to acknowledge the bizarre quality of this page. It is ostensibly a minimalist and unstyled document of near-black Times New Roman on a white background — very hacker, very serious. It contains about 3,800 characters, which should mean a document barely above four or five kilobytes, accounting for HTML tags and a touch of CSS. Yet it is over 400 kilobytes. Also, I love that keywords are defined:
<meta name="keywords" content="Personal
Superintelligence, AI systems improvement,
Superintelligence vision, Mark Zuckerberg
Meta, Human empowerment AI, Future of
technology, AI safety and risks, Personal
AI devices, Creativity and culture with
AI, Meta AI initiatives">
Very retro.
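As an aside, checking a claim like this is easy enough. Here is a minimal sketch in Python — the URL is a stand-in for wherever the memo lives, and the visible-text counter is only approximate — comparing a page’s transfer size against the amount of text actually on it:

# Fetch a page and compare its total byte size against its visible text.
# The URL is a placeholder; swap in the page you want to measure.
import urllib.request
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chars = 0
        self.skip = False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False
    def handle_data(self, data):
        if not self.skip:
            self.chars += len(data.strip())

url = "https://example.com/superintelligence"  # placeholder
with urllib.request.urlopen(url) as response:
    raw = response.read()

parser = VisibleText()
parser.feed(raw.decode("utf-8", errors="replace"))
print(f"transfer size: {len(raw) / 1024:.0f} KB")
print(f"visible text: {parser.chars} characters")

A four-hundred-kilobyte page carrying under four thousand characters of prose means more than 99% of the payload is something other than the words.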
Anyway, “what is superintelligence?” is a reasonable question to ask, and one Zuckerberg never answers; the term goes undefined. I guess it is supposed to be something more than or different from artificial intelligence, which is yesterday’s news:
As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.
He decries competitors’ ambitions:
This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output. At Meta, we believe that people pursuing their individual aspirations is how we have always made progress expanding prosperity, science, health, and culture. This will be increasingly important in the future as well.
I am unsure what to make of this. It is sorely tempting to dismiss the whole endeavour as little more than words on a page for a company deriving 98% of its revenue (PDF) from advertising.1 If we consider it more seriously, however, we are left with an ugly impression of what “valuable work” may consist of. Meta is very proud of its technology to “generate photorealistic images”, thereby taking the work of artists and photographers. Examples of its technology also include generating blog posts and building study plans, so it seems writing and tutoring are not entirely “valuable work” either.
I am being a bit cheeky but, with Zuckerberg’s statement entirely devoid of specifics, I am also giving it the gravitas it has earned.
While I was taking way too long to write this, Om Malik examined it from the perspective of someone who has followed Zuckerberg’s career trajectory since it began. It is a really good piece. Though Malik starts by saying “Zuck is one of the best ‘chief executives’ to come out of Silicon Valley”, he concludes by acknowledging he is “skeptical of his ability to invent a new future for his company”:
Zuck has competitive anxiety. By repeatedly talking about being “distinct from others in the industry” he is tipping his hand. He is worried that Meta is being seen as a follower rather than leader. Young people are flocking to ChatGPT. Programmers are flocking to Claude Code.
What does Meta AI do? Bupkiss. And Zuck knows that very well. You don’t do a company makeover if things are working well.
If you are solely looking at Meta’s earnings, things seem to be working just fine for the company. Meta beat revenue expectations in its most recent quarter while saying the current quarter will also be better than analysts thought. Meta might not be meeting already-low analyst expectations for revenue in its Reality Labs metaverse segment, but the stock jumped by 10% anyhow. Even Wall Street is not taking Zuckerberg seriously as an innovator. Meta is great at selling ads. It is not very exciting, but it works.
Back to the superintelligence memo, emphasis mine:
We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible.
Compare that to Zuckerberg’s open letter from a year earlier, published alongside the release of Llama 3.1, in which he extolled open source without such hedging:

Meta is committed to open source AI. I’ll outline why I believe open source is the best development stack for you, why open sourcing Llama is good for Meta, and why open source AI is good for the world and therefore a platform that will be around for the long term.
[…]
There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it’s in their interest to support open source because it will make the world more prosperous and safer.
No mention of being careful, no mention of choosing what to open source. Zuckerberg took an ostensibly strong, principled view supportive of open source A.I. when it benefitted the company, and is now taking an ostensibly strong, principled view that it requires more nuance.
Zuckerberg concludes:
Meta believes strongly in building personal superintelligence that empowers everyone. We have the resources and the expertise to build the massive infrastructure required, and the capability and will to deliver new technology to billions of people across our products. I’m excited to focus Meta’s efforts towards building this future.
On this, I kind of believe him. I believe the company has the resources and reach to make “personal superintelligence” — whatever it is — a central part of Meta’s raison d’être, just as Malik says in his article he has “learned not to underestimate Zuckerberg”. The language in Zuckerberg’s post is flexible, vague, and optimistic enough to provide cover for whatever the company does next. It could be a unique virtual assistant, or it could be animated stickers in chats. Whatever it is, this technology will also assuredly be directed toward the company’s advertising machine, as its current A.I. efforts are providing “greater efficiency and gains across our ad system”. Zuckerberg is telling investors: imagine what we could do with superintelligence.
The key issue here is the same as the OpenAI training issue: people don’t believe these companies when they say that they aren’t doing something.
One interesting difference here is that in the Facebook example people have personal evidence that makes them believe they understand what’s going on.
With AI we have almost the complete opposite: AI models are weird black boxes, built in secret and with no way of understanding what the training data was or how it influences the model.
Meta has pulled off a remarkable feat. It has ground down users’ view of their own privacy into irrelevance, yet its services remain ubiquitous to the point of being essential. Maybe Meta does not need trust for its A.I. or “superintelligence” ambitions, either. It is unfathomably rich, has a huge volume of proprietary user data, and a CEO who keeps pushing forward despite failing at basically every quasi-visionary project. Maybe that is enough.
Do note, two slides later, the company’s effective tax rate dropping from 17% in Q3 and Q4 2023 to just 9% in Q1 2025, and 11% in the most recent quarter. Nine percent on over $18 billion in income. ↥︎
Pew Research Center made headlines this week when it released a report on the effects of Google’s A.I. Overviews on user behaviour. It provided apparent evidence searchers do not explore much beyond the summary when presented with one. This caused understandable alarm among journalists who focused on two stats in particular: a drop in the share of searches resulting in a click on a result from 15% to just 8% when an A.I. Overview was shown, and a finding that just 1% of searches with an Overview resulted in a click on a citation in that summary.
Beatrice Nolan, of Fortune, said this was evidence A.I. was “eating search”. Thomas Claburn, of the Register, said they were “killing the web”, and Emanuel Maiberg, of 404 Media, said Google’s push to boost A.I. “will end the flow of all that traffic almost completely and destroy the business of countless blogs and news sites in the process”. In addition to the aforementioned stats, Ryan Whitwam, of Ars Technica, also noted Pew found “Google users are more likely to end their browsing session after seeing an A.I. Overview” than if they do not. It is, indeed, worrisome.
Pew’s is not the only research finding a negative impact on search traffic to publishers thanks to Google’s A.I. search efforts. Ryan Law and Xibeijia Guan of Ahrefs published, earlier this year, the results of anonymized and aggregated Google Search Console data finding a 34.5% drop in click-through rate when A.I. Overviews were present. This is lower than the roughly 47% relative drop implied by Pew’s figures (falling from 15% to 8% means losing nearly half of all clicks), but still a massive amount.
Ahrefs gives two main explanations for this decline in click-through traffic. First, and most obviously, these Overviews present as though they answer a query without needing to visit any other pages. Second, they push results further down the page. On a phone, an Overview may occupy the whole height of the display, as shown in Google’s many examples. Either one of these could be affecting whether users are clicking through to more stuff.
So we have two different reports showing, rather predictably, that Google’s A.I. Overviews kneecap click rates on search listings. But these findings are complicated by the various other boxes Google might show on a results page, none of which are what Google calls an “A.I.” feature. There are a slew of Rich Result types — event information, business listings, videos, and plenty more. There are Rich Answers for when you ask a general knowledge question. There are Featured Snippets that extract and highlight information from a specific page. These “zero-click” features all look and behave similarly to A.I. Overviews. They all try to answer a user’s question immediately. They all push organic results further down the page. So what is different about results with an A.I. twist?
Part of the problem is with methodology. That déjà vu you are experiencing is because I wrote about this earlier this week, but I wanted to reiterate and expand upon that. The way Pew and Ahrefs collected the data for measuring click-through rates differs considerably. Pew, via Ipsos KnowledgePanel, collected browsing data from 900 U.S. adults. Researchers then used a selection of keywords to identify search result pages with A.I. Overviews. Ahrefs, on the other hand, relied on data directly from Google Search Console, automatically provided by users who connected it to the company’s search optimization software. Ahrefs compared data collected in March 2024, pre-A.I. rollout, against that from March 2025, after Google made A.I. Overviews more present in search results.
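The calculation both firms are effectively making is simple; the hard part is the data collection. A sketch — with invented rows standing in for panel or Search Console data, and hypothetical field names, since neither publishes raw numbers — might look like this:

# Compare click-through rates for searches with and without an A.I. Overview.
# The rows are invented; field names are hypothetical, not either firm's schema.
from dataclasses import dataclass

@dataclass
class QueryStats:
    impressions: int
    clicks: int
    has_overview: bool

rows = [
    QueryStats(impressions=1000, clicks=150, has_overview=False),
    QueryStats(impressions=1000, clicks=80, has_overview=True),
    # ... thousands more rows in a real data set
]

def ctr(rows, with_overview):
    # Aggregate clicks over impressions for the requested subset.
    subset = [r for r in rows if r.has_overview == with_overview]
    impressions = sum(r.impressions for r in subset)
    clicks = sum(r.clicks for r in subset)
    return clicks / impressions if impressions else 0.0

without = ctr(rows, with_overview=False)
with_ai = ctr(rows, with_overview=True)
print(f"CTR without Overview: {without:.1%}")  # 15.0%
print(f"CTR with Overview: {with_ai:.1%}")     # 8.0%
print(f"relative drop: {(without - with_ai) / without:.0%}")  # 47%

The invented rows here deliberately mirror Pew’s topline numbers, 15% against 8%, which is a relative decline of about 47%. Whether a given results page counts as having an Overview at all is exactly where the two methodologies diverge.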
In both reports, no effort is made to distinguish between searches showing A.I. Overviews and those showing the older search features mentioned above, and that would impact average click-through rates. Since Featured Snippets rolled out, for example, they have been considered the new first position in results and, unlike A.I. Overviews in the findings of Pew and Ahrefs, they can drive a lot of traffic. Search optimization studies are pretty inconsistent, finding Featured Snippets on anywhere between 11% of results pages, according to Stat, and up to 80%, according to Ahrefs.
But the difference is even harder to research than it seems because A.I. Overviews do not necessarily replace Featured Snippets, nor are they independent of each other. There are queries for which Overviews are displayed that had no such additional features before, and there are queries where Featured Snippets are being replaced. Sometimes, the results page will show both an A.I. Overview and a Featured Snippet. There does not seem to be a lot of good data to disentangle what effect each of these features has in this era. A study from Amisive from earlier this year found the combined display of Overviews and Snippets reduced click-through rates by 37%, but Amisive did not publish a full data set to permit further exploration.
But publishers do seem to be feeling the effects of A.I. on traffic from Google’s search engine. The Wall Street Journal, relying on data from Similarweb, reported a precipitous drop in search traffic to mainstream news sources like Business Insider and the Washington Post from 2022 to 2025. Similarweb said the New York Times’ share of traffic coming from search fell from 44% to 36.5% in that time. Interestingly, Similarweb’s data did not show a similar effect for the Journal itself, reporting a five-point increase in the share of traffic derived from search over the same period.
The quality of Similarweb’s data is, I think, questionable. It would be better if we had access to a large-scale first-party source. Luckily, the United States government operates its own analytics platform with openly accessible data. Though it is not used on all U.S. federal government websites, its data set is both general-purpose — albeit U.S.-focused — and huge: 1.55 billion sessions in the last thirty days. As of writing, 44.1% of traffic in the current calendar year is from organic Google searches, down from 46.4% in the previous calendar year, a 2.3-point dip, or about a 5% relative decline. That is not the steep decline found by Similarweb, but it is a decline nevertheless — enough to drop organic Google search traffic behind direct traffic. I also imagine Google’s A.I. Overviews impact different types of websites differently; the research from Ahrefs and Amisive seems to back this up.
Google has, naturally, disputed the results of Pew’s research. In an extended comment to Search Engine Journal, the company said Pew “use[d] a flawed methodology and skewed queryset that is not representative of Search traffic”, adding “[we] have not observed significant drops in aggregate web traffic”. What Google sees as flaws in Pew’s methodology is not disclosed, nor does the company provide any numbers to support its side of the story. Sundar Pichai, Google’s CEO, has even claimed A.I. Overviews are better for referral traffic than links outside Overviews — but, again, has never provided evidence.
Intuitively, it makes sense to me that A.I. Overviews are going to have a negative impact on click-through rates, because that is kind of the whole point. The amount of information being provided to users on the results page increases while the source of that information is minimized. It also seems like the popular data sources for A.I. Overviews are of mixed quality; according to a Semrush study, Quora is the most popular citation, while Reddit is the second-most popular.
I find all of these studies frustrating, and it is not necessarily the fault of the firms conducting them. Try as the search optimization industry might, we still do not have terrifically reliable ways of measuring the impact each new Google feature has on organic search traffic. The party in the best possible position to demystify this — Google — tends to be extremely secretive on the grounds it does not want people gaming its systems. Also, given the vast disconnect between the limited amount Google is saying and the findings of researchers, I am not sure how much I trust its word.
It is possible we cannot know exactly how much of an effect A.I. Overviews will have on search traffic, let alone that of “answer engines” like Perplexity. The best thing any publisher can do at this point is to assume the mutual benefits are going away — and not just in search. Between Google’s legal problems and its fundamental reshaping of how people discover things in search, one has to wonder how it will evolve its advertising business. Publishers have already been prioritizing direct relationships with readers. What about advertisers, too? Even with the unknown future of A.I. technologies, it seems like it would be advantageous to stop relying so heavily on Google.
For years, I have used an ad blocker only infrequently on YouTube, partly because the ads used to be short and not too disruptive, but mostly because I feel bad for people who make videos for a living. There is basically no alternative to YouTube.
Text is small; I can take this website anywhere I want. I can make my writing as discoverable or as paywalled as I deem makes sense. The same is true, to varying degrees, for images and audio. If someone does not like how their podcast host is behaving, they can move their show. It is not easy but it is doable.
That is not the case for video — at least, not for independent makers in the safe-for-work realm. You might use any number of streaming apps for video from large studios, and perhaps smaller ones too, like Dropout and Nebula. But there is no second YouTube. Even though TikTok and Instagram provide sufficient competition in the mobile short-form format, there is simply nothing else for longer formats in landscape. Sites like Dailymotion are doing so poorly they are deleting videos if they have no activity for about a year. Video makers on alternative sites like Rumble still post clips or copies of their show to YouTube; despite their persistent whining about alleged censorship, their videos are not removed from YouTube, and they know it remains the best platform for discovery.
I began considering a YouTube Premium subscription a couple of years ago when the great Alec Watson mentioned that creators like him get a cut. Which, in hindsight, seems obvious: instead of ad revenue, they get a portion of subscription revenue. But this was and remains unmentioned in YouTube’s marketing. Call it parasocial, non-derogatory, but this is the most compelling argument for why I should pay for YouTube. I support several indies through Patreon, too, but this means I get to be even more supportive without making specific monthly commitments, and I get a better experience.
The experience, by the way, is what pushed me over the edge. Remember how I mentioned “ads used to be short and not too disruptive”? Over the past few years, YouTube has increased the default ad load and duration. Before I had Premium, I was seeing ads every one or two minutes in many videos where the uploader had not changed the ad settings. Being that intrusive is something only YouTube can do because, again, it has no competition.
The process for actually paying for YouTube was bizarrely difficult. Since I already have a music streaming subscription, the Premium Lite option was a good fit for me. However, Google simply would not geolocate me or my personal Google account to Canada, where the Lite option is available. There was no way to correct Google’s assumption about where I was located; it simply did not let me see the Lite registration page, even though the currency was displayed in Canadian dollars.
So I made a new account in my Google Workspace dashboard just for YouTube. It turns out this was a little complicated, too, as I now needed to enable various Google services to make this work: YouTube and Google Pay, at first, and more later.
Then I used Safari’s autocomplete to enter my credit card details, which is where things got real weird. My card’s expiration date and CSC were entered correctly. The number itself was, too, until the last digit, at which point it abruptly changed to something completely different in nearly the same format — instead of four groups of four numbers, it became three groups of four, followed by one group of three. A test card number also encountered the same issue. Today, about a week later, I cannot reproduce it, so it seems like it has been fixed — but, still, strange.
One unfortunate side effect of having a YouTube Premium membership is that I now need to sign into YouTube, which means I am also signed into all Google services. Because I am using a Workspace-type Google account, I have also needed to enable additional services on the account, like Google Maps. I can work around this by using YouTube in a separate browser and configuring Safari to open all YouTube links in that other browser — but that is not a great experience. I have as much tracking turned off as I am able through Google’s settings, plus Safari has generally better tracking protection. And I really do use frequent site sponsor Magic Lasso Adblock, which truly helps me avoid a bunch of tracking I see in the wild; the difference is obvious in Web Inspector if I refresh a page without Magic Lasso enabled.
I am still seeing the occasional ad on YouTube on videos where they should not be present. In general, however, this is a night-and-day experience. YouTube has successfully degraded its free experience to the point where it feels like the trial version of paid software. A single meaningful competitor would be a corrective force. Alas, only YouTube is YouTube, and that makes things worse for audiences and video makers alike.
I have many thoughts about the redesigned elements common across most of Apple’s platforms but they are still brewing, much as I hope the same is true for the visual interface itself. There is one thing, though, which is a downright shame: Apple’s guidance for the shape of Mac app icons:
An app icon’s shape varies based on a platform’s visual language. In iOS, iPadOS, and macOS, icons are square, and the system applies masking to produce rounded corners that precisely match the curvature of other rounded interface elements throughout the system and the bezel of the physical device itself. In tvOS, icons are rectangular, also with concentric edges. In visionOS and watchOS, icons are square and the system applies circular masking.
This is no longer optional, but mandated by the system. App icons across Apple’s three most popular operating systems share a similar rounded square mask, and it is a downgrade. Simon B. Støvring correctly calls out the “expressive, varied app icons, a case of character over conformity” as a highlight of past versions of MacOS. I miss detailed and artistic app icons plenty. Indulging in realistic textures and thoughtful rendering was not only a differentiator for the Mac; it also conveyed the sense an app was built with a high degree of care.
Perhaps that is largely a product of nostalgia. Change can be uncomfortable, but it could be for good reasons. Stripping icons of their detail might not be bad, just different. But wrapping everything in a uniform shape? That is, dare I say, an objective degradation.
Since MacOS Big Sur debuted the precursor to this format, I have found it harder to differentiate between applications, which, as I understand it, is the very function and purpose of an icon. I know this has been a long-running gripe for those of us of a certain age, but it remains true, and a walk through the history of Apple’s Human Interface Guidelines indicates the company once understood it to be true.
The uniform rounded rectangular icons in MacOS Tahoe are the product of a slow but steady series of changes Apple has made to its guidance beginning with OS X Yosemite. At its introduction at WWDC 2014, Craig Federighi said those icons were “beautifully crafted”, “so clean and yet so fundamentally still Mac”. While Apple has long provided recommendations for icon shapes and the angle at which objects should sit, its Yosemite guidelines tended to converge around specific shapes. However, Apple still advised “giving your app icon a realistic, unique shape”, since a “unique outline focuses attention on the depicted object and makes it easier for users to recognize the icon at a glance”. It also said developers should not use the same icon as a companion iOS app, since “you don’t want to imply that your app isn’t tailored for the OS X environment”.
By the next major redesign in MacOS Big Sur, Apple was extolling the “harmonious user experience” of “a common set of visual attributes, including the rounded-rectangle shape, front-facing perspective, level position, and uniform drop shadow”. Still, it emphasized the delight of including a “familiar tool” and “realistic objects” in an icon, in a manner that “float[s] just above the background and extend[s] slightly past the icon boundaries”. This is one of the reasons the MarsEdit icon remains so distinctive to me — not only does the rocket ship have plenty of contrast with the background, its silhouette is different from those of Mimestream above it and Fantastical below it. This is not a knock against either of those apps; they are understandably following the documentation Apple provides, and follows itself with the first-party app icons I also keep in my dock.
MacOS Tahoe overrides all this previous guidance in both written policy and technical implementation. Apple, as quoted above, now says icons should be square, and the system will take care of rounding the corners — just like on iOS. Since iOS apps can run on MacOS, not being “tailored for the [MacOS] environment” is no longer something Apple cautions against. But it goes further. Designers should, in its words, “embrace simplicity”:
An icon with fine visual features might look busy when rendered with system-provided shadows and highlights, and details may be hard to discern at smaller sizes. […]
Designers no longer get to decide highlights and shadows; the system does. It defines the shape, too, and non-updated icons that do not conform are doomed to live out their days in a little grey jail cell.
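To be concrete about what system-applied masking means in practice, here is a minimal SwiftUI sketch: every app’s full-bleed square artwork gets clipped to the same continuous-corner rounded rectangle. The size and corner radius here are illustrative guesses on my part, not Apple’s actual mask geometry.

```swift
import SwiftUI

// A sketch of uniform system masking: the designer supplies a square,
// full-bleed asset, and the system clips it to the same continuous-corner
// rounded rectangle as every other icon.
struct MaskedAppIcon: View {
    let artwork: Image  // the designer's square artwork

    var body: some View {
        artwork
            .resizable()
            .aspectRatio(contentMode: .fill)
            .frame(width: 512, height: 512)
            .clipShape(RoundedRectangle(cornerRadius: 116, style: .continuous))
    }
}
```

The designer supplies the square; the outline, the thing the old guidelines said “focuses attention on the depicted object”, is no longer theirs to draw.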
Apple used to guide designers on how to make smaller icons by removing details and simplifying. Something you will often hear from designers is the fun and challenge of very small icons: how do you convey the same impression of fidelity in a 16-by-16 grid, with exactly 256 pixels to work with? It is a delicate feat. Now, Apple simply says no icon — no matter how large — is deserving of detail. This, to me, betrays a lack of trust in the third-party designers it apparently celebrates.
Moreover, it fundamentally contradicts longstanding icon design principles. Reducing each application’s visual identity to a simple glyph — albeit with the potential for a few layers — on a coloured background necessarily leads to this perverse revision of Figure 5–15 from the 2004 Human Interface Guidelines:
Though this description and figure are specifically about toolbar icons, Apple’s rationale for using different shapes remains clear-eyed and simply expressed:
Each toolbar icon should be easily and quickly distinguishable from the other items in the toolbar. Toolbar icons emphasize their outline form, rather than subtler visual details.
Perhaps this reasoning is incorrect. If so, the current guidelines make no effort to explain how or why users are not guided by outline in addition to colour and enclosed shape. Apple simply says icons should be constructed “in a simple, unique way with a minimal number of shapes” on “a simple background, such as a solid colour or gradient”. Not only are there no longer any “subtler visual details”, there is also no distinct outline for each icon. I believe limitations spur creativity, but imposed uniformity sure makes that difficult. This is, however, apparently required because of new icon formats available to users, including a clear version that makes it look as though the glyph and base are an ice sculpture: cool, but entirely indistinguishable from the icons surrounding it. Again, this wrests control away from designers to give a little bit to users, but only within boundaries Apple mandates.
The technical and feature improvements in MacOS Tahoe are intriguing. I sure hope the Spotlight improvements are as excellent as they seem to be, since I expect I will become increasingly dependent on Spotlight as an application launcher. I am also excited to try Liquid Glass on a Mac. Though I remain skeptical, it is at least interesting. That is something I find difficult to see in the new direction of MacOS icons.
Apple has made no announcement that I can find about a significant expansion in the availability of cycling directions across Canada, but I thought it was worth noting here because it impacts me personally. I need no other excuse.
As of yesterday, Apple’s iOS feature page indicated Canadian cycling directions were only available in Montreal, Toronto, and Vancouver. The list has grown substantially today, adding:
Austria
Belgium
Denmark
Finland
Greece
Ireland
Italy
Monaco, because I guess even tax-dodging billionaires sometimes hop on their bikes, too
Netherlands
New Zealand
Norway
Portugal
San Marino
Singapore
Sweden
Switzerland
Thailand
Vatican City
In addition, three places were upgraded from availability in specific cities to, apparently, whole-country directions:
Canada
Spain
United Kingdom
Some of this expansion may truly be new, but it seems to me this page is merely catching up. For example, it only now includes the Netherlands, where cycling directions have been available for a year. The expansion to Calgary was noticed on 4 April by Reddit user drinkyourwaterbitch. I cannot recall receiving an in-app notification of this change, which arrives just in time for cycling season here.
I cannot speak to anywhere else, but in my testing, Canadian availability is broad. Coverage includes not only larger cities like Calgary, Edmonton, Halifax, Ottawa, Victoria, and Winnipeg, but also smaller towns and even Northern Canada. It also includes rural routes. I was able to get cycling directions within Iqaluit, and between Inuvik and Squamish. Eight-and-a-half days straight, if you are wondering, with a total climb of 70,200 metres.
More practically, the cycling directions within Calgary seem, overall, pretty good. Without changing any settings, Maps sometimes chooses different routes than I would take. On a ride I am familiar with, Maps auto-selected a 51-minute, 13-kilometre path instead of one it says takes 42 minutes over 11 kilometres, the main difference being a 100-metre climb compared to 150 metres. Yet, with a different destination, Maps selected a ride that takes three minutes longer and requires more climbing, but is shorter. Both appear to have a similar mix of roads and bike lanes, so I am not sure what to make of this.
I also tried putting in a route to a pharmacy with a particularly excellent selection of coffee beans and other dry goods. Instead of putting me on the protected bike lane on Edmonton Trail, visible in the Look Around imagery, it suggested a route on Fourth Street — which is a one-way road in the opposite direction. The step-by-step directions suggest there is a cycle lane along Fourth Street, but I can assure you there is not — I checked. Google Maps gets it right.
Canadian cyclists like myself can rejoice at having much greater coverage that takes into account our unique needs. At last, we can make an informed decision about how we can get to a store that, upon arrival, we find is closed because the listed hours are wrong. It is still Apple Maps, after all.
Dan Moren ended his “Stay Foolish” column at Macworld with a tremendous essay about what it means to be a “fan” of Apple or its products in 2025:
Over the years, those in the Apple community have long been called everything from the liturgical “Apple faithful” to the insipid “iSheep”, dating back to the days when the prevailing wisdom was that to hitch your fortunes to this weird company must imply a cult-like devotion. The simple truth was that most customers were simply fans of the products themselves: they liked the way they worked and looked.
That hasn’t changed for me: I continue to be a fan of Apple’s products. But as Apple started becoming more and more successful, I’ve become increasingly skeptical that one should ever really consider oneself a “fan” of a company.
On the eve of WWDC and, with it, the start of Apple’s coming year, I found myself thinking about this column, published in April.
There was a time when rooting for Apple and calling oneself a “fan” seemed to make sense. The things it designed and sold had a clear ethos that gave me an impression of the kind of effortless confidence that only comes from a massive amount of effort. The business model felt like a simple exchange — as Steve Jobs said at D8, “we want to make the best products in the world for our customers. If we succeed they’ll buy them, and if we don’t they won’t”.
That era — the Jobs era, the simplistic worldview era, and the clear ethos era — is behind us. I am not saying Apple now takes everything for granted; I am sure it has teams of people working hard all day long to make improvements. But it is a corporate behemoth that cannot move quickly. Regardless of how cool its rumoured “Liquid Glass” visual refresh may be, I have a difficult time believing it will radically alter the way we use our devices. There are 2.35 billion devices on users’ wrists, in their hands and bags, and on their desks. At least a billion of them are iPhones. So, even though it is a bit exciting to be on the verge of something new and different, I do not think it will be that new and different, lest Apple alienate a huge number of people. It will look different enough, and that looks like progress — and maybe it will actually represent progress, too.
Visual interface changes aside, the expectations for this year’s WWDC do not seem to be high. We are still waiting on last year’s most impressive Apple Intelligence features, and the Vision Pro is a nascent device in the hands of few. The ideas factory is sputtering a little. But the money factory remains strong.
I am enmeshed in the Apple ecosystem so, in some ways, I should be excited the company has to try a little harder. I am not. I do not think anyone expects Apple will sell dramatically fewer iPhones this year, nor that it will lose subscribers to services, its increasingly important recurring revenue printer. Apple was a more interesting company when it could not be certain its customers would buy more stuff. I hope, after the Vision Pro’s release, it also understands it cannot take its developer base for granted, either.
Every time I use the Windows 11 computer on my desk at work, I am reminded of why I use MacOS. I still prefer it. But the post-P.C. device era has encouraged some nasty habits at Apple that have, consequently, made it one of the world’s most valuable businesses.
Apple has all the money it could ever want. It sells some very good products. But its size has smothered the fire it seemed to have, and dulled its edge. There are still risky bets it could make, but all of them are necessarily softened by its critical position in the world’s economy and in retirement plans. I see no reason to be a fan of that kind of company, even if you — as I — still appreciate many of the products and services it delivers. I am, as ever, looking forward to seeing what is being announced tomorrow, albeit with the understanding I will be watching a slick infomercial possibly containing concept videos. It is hard to see how one could be a fan of a multi-trillion-dollar company. I am just a customer, like a billion-plus others.