Search Results for: ostensibly

Kirk McElhearn:

I use Apple News to keep up on topics that I don’t find in sources I pay for (The Guardian and The New York Times). But there’s no way I’m going to pay the exorbitant price Apple wants for Apple News+ – £13 – because, while you get more publications, you still get ads.

And those ads have gotten worse recently. Many, if not most, of them look like scams, and probably are. Here are a few examples from Apple News today.

Apple promotes News by saying it offers “trusted sources” in an app that is “rewriting the reading experience”. And, when Apple partnered with Taboola, Sara Fischer at Axios reported it would “establish certain levels of [quality] control around which advertisers it will sell through to Apple apps”. Unsurprisingly, the highest-quality Taboola ads are still bait for fraud, and they are all over Apple’s ostensibly selective apps. Probably good for services revenue, though.

Barry Schwartz, Search Engine Roundtable:

On Friday, Google announced it had filed a lawsuit (PDF) against SerpApi for scraping the Google search results. Google alleges that SerpApi is running an “unlawful” operation that bypasses Google’s security measures to scrape search results at an astonishing scale.

[…]

Google claims SerpApi uses hundreds of millions of fake search requests to mimic human behavior. This allows them to bypass CAPTCHAs and other automated defenses that Google uses to prevent bots from overwhelming its systems.

In October, as part of its lawsuit against Perplexity, Reddit sued SerpApi and a couple of other scraper companies. Figuring out the difference between the ostensibly bad kind of scraping practiced by SerpApi and the good kind practiced by Google seems like it will require a narrow definition, one Google is happy to provide.

Halimah DeLaine Prado, Google’s general counsel:

Google follows industry-standard crawling protocols, and honors websites’ directives over crawling of their content. Stealthy scrapers like SerpApi override those directives and give sites no choice at all. SerpApi uses shady back doors — like cloaking themselves, bombarding websites with massive networks of bots and giving their crawlers fake and constantly changing names — circumventing our security measures to take websites’ content wholesale. This unlawful activity has increased dramatically over the past year.

This explanation is not wrong, per se, though it is quite self-serving. The way many people begin their search for a product, service, or local business is with Google. A typical website owner is therefore desperate for a Google link, to the extent that they will reconstruct their site on a regular basis to suit its shifting ranking criteria. That means Google has broad power to do basically whatever it wants to the web. For years, publishers who wanted to rank highly in search results were effectively required to adopt the company’s proprietary fork of HTML. It can inject links, scrape third parties, and build a self-preferencing silo — and website owners have to be okay with it or lose valuable referral traffic from Google users.

All of that is nominally ethical in Google’s view. What is not, apparently, is a company using workarounds to get a window into Google’s practices. I sympathize with that argument. The only tool we have is robots.txt and, regardless of SerpApi’s intent, I do not think circumvention efforts should be tolerated, though that should be paired with aggressive antitrust action to prevent incumbent powers from abusing their position.

Recent actions taken by U.S. courts, for example, have found Google illegally maintained its search monopoly. In issuing proposed remedies earlier this year, the judge noted the rapidly shifting world of search thanks to the growth of generative artificial intelligence products. “OpenAI” is mentioned (PDF) thirty times as an example of a potential disruptor. However, the judge does not mention that OpenAI’s live search data is at least partially powered by SerpApi.
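As an aside, the “industry-standard crawling protocols” Google invokes amount largely to the Robots Exclusion Protocol: a plain-text robots.txt file whose directives are purely advisory. A minimal sketch of the kind of file at issue; the specific paths and agents here are my own invention:

```
# robots.txt is advisory: a compliant crawler honours these
# directives, while a scraper is free to ignore them entirely.
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /private/
```

A site can, at most, ask crawlers to behave; nothing in the protocol enforces it.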

Todd Vaziri:

As far as I can tell, Paul Haine was the first to notice something weird going on with HBO Max’s presentation. In one of season one’s most memorable moments, Roger Sterling barfs in front of clients after climbing many flights of stairs. As a surprise to Paul, you can clearly see the pretend puke hose (that is ultimately strapped to the back side of John Slattery’s face) in the background, along with two techs who are modulating the flow. Yeah, you’re not supposed to see that.

It appears as though this represents the original photography, unaltered before digital visual effects got involved. Somehow, this episode (along with many others) does not include all the digital visual effects that were in the original broadcasts and home video releases. It’s a bizarro mistake for Lionsgate and HBO Max to make and not discover until after the show was streaming to customers.

Eric Vilas-Boas, Vulture:

How did this happen? Apparently, this wasn’t actually HBO Max’s fault — the streamer received incorrect files from Lionsgate Television, a source familiar with the exchange tells Vulture. Lionsgate is now in the process of getting HBO Max the correct files, and the episodes will be updated as soon as possible.

It just feels clumsy and silly for Lionsgate to supply the wrong files in the first place, and for nobody at HBO Max to verify they were correct. An amateur mistake, frankly, for an ostensibly premium service costing U.S. $11–$23 per month.

Also, if I were king for a day, it would be illegal to sell or stream a remastered version of something — a show, an album, whatever — without the original being available alongside it.

In January, Mark Zuckerberg bade farewell to the ostensibly censorial administration of Joe Biden, welcoming in the nominally free speech offered by Donald Trump’s then-incoming presidency. The complaints about Biden aired by Zuckerberg on an episode of Joe Rogan’s podcast were weak, misleading, and silly, but they helped continue the narrative championed by many U.S. politicians who are now in a position to help Meta.

In a video announcing the changes to the company’s moderation policy, Zuckerberg lamented the “censorship” users have faced, and promised to collaborate with the government to fight those demands:

Finally, we’re going to work with President Trump to push back on governments around the world. They’re going after American companies and pushing to censor more. The US has the strongest constitutional protections for free expression in the world. Europe has an ever-increasing number of laws, institutionalizing censorship, and making it difficult to build anything innovative there. Latin American countries have secret courts that can order companies to quietly take things down. China has censored our apps from even working in the country. The only way that we can push back on this global trend is with the support of the US government, and that’s why it’s been so difficult over the past four years when even the US government has pushed for censorship.

This explanation is mostly nonsense — and dishonest.

Nader Issa, WBEZ Chicago:

At the request of the U.S. Department of Justice, a Facebook group used by nearly 80,000 people to report sightings of federal immigration agents in the Chicago area has been taken down by the social media giant Meta, Facebook’s parent company.

The group, called “ICE Sighting-Chicagoland,” has been increasingly used over the last five weeks of “Operation Midway Blitz,” President Donald Trump’s intense deportation campaign, to warn neighbors that federal agents are near schools, grocery stores and other community staples so they can take steps to protect themselves.

If this group were actually used for “coordinated harm”, as Meta claims, surely it or the Department of Justice could give some specific examples. I could only find one archived copy of the page and I see nothing of the sort in what is admittedly a handful of posts. I also do not see anything looking remotely like “coordinated harm” in the posts cached by Google.

The point is not Meta’s hypocrisy on what it will remove compared to what it will defend, but what this hypocrisy achieves. Meta spent years using a socially conscious image to help marginalized people feel safer, albeit only after a long history of controversy over privacy violations, harassment, and gender-based abuse (PDF).

Now it is using a combination of regressive policies and assisting the government’s domestic quasi-military invasions to ingratiate itself with this administration. If Meta were trying to appeal to the public or advertisers, it would not be so subservient to this administration — people in the U.S. are more suspicious of government power than in recent memory, and disapprove of ICE. Meta is completely on-board with this administration’s demands. If there is a line these companies will not cross, we will only find it when we reach it.

Dan Mangan, CNBC:

Google on Friday joined Apple in removing from its online store apps that can be used to anonymously report sightings of U.S. Immigration and Customs Enforcement agents and other law-enforcement authorities.

Apple on Thursday night said it was removing ICEBlock and other similar apps from its App Store that are used to track authorities.

Apple’s move came after direct pressure from Attorney General Pam Bondi, and amid controversy over the Trump administration’s aggressive enforcement of immigration law with ICE agents and other authorities.

Joseph Cox, 404 Media:

“I am incredibly disappointed by Apple’s actions today. Capitulating to an authoritarian regime is never the right move,” Joshua Aaron told 404 Media. “ICEBlock is no different from crowd sourcing speed traps, which every notable mapping application, including Apple’s own Maps app, implements as part of its core services. This is protected speech under the first amendment of the United States Constitution.”

If you believe ICE is simply a law enforcement agency operating within the normal protocols of the justice system — and, by extension, that law enforcement behaviour is generally reasonable and trustworthy — apps that allow people to anonymously report sightings could be seen as targeted harassment. I could understand why Apple might remove such an application in that high-trust environment. However, if you believe ICE is a hostile expression of state power — the flimsy justifications used to identify ostensibly undocumented immigrants, the suspension of due process, and the domestic surveillance machine paint a pretty bleak picture — then apps allowing people to protect themselves against this power seem justifiable.

The U.S. government does not have the authority to demand the removal of these lawful apps. But it does have the authority to make Apple and Google pay dearly if they do not comply. Perhaps Apple will find its products implicated in higher tariffs, or the antitrust cases against both companies will carry stiffer penalties. Apple knows how to work with authoritarian states, which I do not mean as a compliment, and it is applying the same playbook here.

At least with Google’s devices, sideloading is an option. Apple’s platform control is both a guarantee and a liability. I write this with the best of intentions: ICEBlock’s platform-exclusive design has now become a problem for it. The best way to make these kinds of apps resilient is to put them on the web, though I am unsure how reliable push notifications are in web apps on iOS.

A brief, throat-clearing caveat: while I had written most of this pre-launch, I was unable to complete it by the time Apple shipped its annual round of operating system updates. Real life, and all that. I have avoided reading reviews; aside from the excerpt I quoted from Dan Moren’s, I have seen almost nothing. Even so, I am sure something I have written below will overlap with something written by somebody else. Instead of trying to weed out any similarities, I have written this note. Some people hire editors.

The Name

Here is a question: does anybody know what we were supposed to call the visual interface design direction Apple has pursued since 2013? For a company that likes to assign distinct branding to everything it makes, it is conspicuous that this distinct visual language never had a name.

To be fair, iOS’s system theme went unbranded from the time it was first shown in 2007. It inherited some of the Aqua qualities of Mac OS X, but with a twist: the backgrounds of toolbars were a muted blue instead of the saturated “Aqua” blue or any of Mac OS X’s window themes. The system font was Helvetica, not Lucida Grande. It was Aqua, but not exactly. Even after 2013’s iOS 7, a massive redesign intended, in part, to signal higher-level changes, no name was granted. The redesign was the rebrand. The system spoke for itself.

In MacOS, meanwhile, the “Aqua” branding felt increasingly tenuous to me following the system-wide redesigns of Yosemite — which Craig Federighi said “continue[d] this evolution” — and Big Sur.

Now, then: Liquid Glass.

The name is remarkably apt — definitely Apple-y, and perfectly descriptive of how the material looks and feels. In most contexts, it looks like slightly deep and modestly bevelled glass in either clear or slightly frosted finishes. Unlike familiar translucent materials that basically just blur whatever is behind them, Liquid Glass distorts as though it is a lens. It even, in some cases, displays chromatic aberration around the edges.

And then you start to move it around, and things get kind of strange. It can morph and flex when tapped, almost like pushing a drop of mineral oil, and it glows and enlarges, too. When a button is near enough to another, the edges of the tapped button may melt into those of the proximate one. When you switch between views, buttons might re-form into different buttons in the same area.

That alone fulfills the “liquid” descriptor, but it is not the first time Apple has used the term. Since 2018, it has described high-resolution LCD displays with small bezels and non-zero corner radii — like the one on my MacBook Pro — as “Liquid Retina displays”. One might reasonably wonder if there is a connection. I consulted my finest Apple-to-English decoder ring and it appears Apple is emphasizing another defining characteristic of the Liquid Glass design language, which is that each part of the visual interface is, nominally, concentric with the bezel and corner radius of a device’s display. Am I reaching too hard? Is Apple? Who can say?

Apple’s operating systems have shared a familial look-and-feel, but Liquid Glass is the first time they also share a distinct name and specific form. That seems to me like a significant development.

My experiences with Liquid Glass have been informed by using iOS 26 since June on my iPhone 15 Pro, and MacOS 26 Tahoe since July on my 14-inch MacBook Pro. For a redesign justified by its apparent uniformity across Apple’s product lineup, this is an admittedly narrow slice of use cases, and I am sure there will be dozens of other reviews from people more immersed in Apple’s ecosystem than I presently am. However, I also think they are the two platforms telling the most substantial parts of the Liquid Glass story. The Mac is Apple’s longest-running platform and users have certain specific expectations; the iPhone is by far Apple’s most popular product. I do not think my experience is as comprehensive as those with access to more of Apple’s hardware, but I do think these are the two platforms where Apple needs to get things right.

I also used both systems almost exclusively in light mode. I briefly tested them in dark mode, but I can only describe how the visual design has changed in my day-to-day use, and that is light mode all the time.

The Material

The first thing you should know about the Liquid Glass material is that it is not exactly VisionOS for the Apple products you actually own. Sure, Apple may say it was “[i]nspired by the depth and dimensionality of VisionOS”, but it has evolved well beyond that system’s frosted glassy slabs with slightly recessed text entry fields. This is something else entirely, and careful observers will note it is a visual language coming to all of Apple’s platforms this year except VisionOS. (Well, and the HomePod, if you want to be pedantic.)

The second thing you need to know is that it is visually newsworthy, but this material barely alters the fundamentals of using any of Apple’s devices, which makes sense. If you were creating a software update for billions of devices you, too, would probably think twice about radically changing all those environments. Things looking different will assuredly be a big enough change for some people.

The Liquid Glass material is most often used in container components — toolbars, buttons, menus, the MacOS Dock — but it is used for its own sake for the clock numerals on the Lock Screen. I think this will be its least controversial application. The Lock Screen clock is kind of useful, but also kind of decorative. (Also, and this is unrelated to Liquid Glass, but I cannot find another place to put this: the default typeface for the clock on the Lock Screen now stretches vertically. I like the very tall numerals and the very cool thing is that it vertically compresses as notifications and Live Activities are added to your Lock Screen.) If you do not like the glassy texture on the clock, you can select a solid colour. Everyone can be happy.

That is not the case elsewhere or for other components.

Translucency has been a standard tool in a visual interface designer’s toolbox for as long as alpha channels have been supported, and it has been a defining characteristic of Apple’s operating systems since the introduction of Aqua. Translucency helps reduce the weightiness of onscreen elements, and can be used to imply layering and a sense of transience. But there is a key problem: when something in a user interface is not entirely opaque, it is not possible to predict what will be behind it.

Obviously.

Less obvious is how a designer should solve for the legibility of things a translucent element may contain, particularly text. While those elements may have specific backgrounds of their own, like text used in a button, there is often also plain text, as in window titles and copy. Icons, too, may not have backgrounds, or may be quite small or thin. The integrity of these elements may not be maintained if they are not displayed with sufficient contrast.

An illustration of decreasing contrast as background opacity and colours change. The rectangles are white at 60% opacity.
Three blocks of lorem ipsum placeholder text set in a dark green. The left column has a near-white background; the middle column an orange background; the right column a photo of a leaf as the background.

However, the impression of translucency is usually at odds with legibility. Say you have a panel-type area containing some dark text. The highest contrast can be achieved by making the panel’s background white. There is no way to make this panel entirely white and have it be interpreted as translucent. As the opacity of the panel’s background drops, so does the contrast whenever it appears over anything else that is not itself entirely white.
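The trade-off can be quantified with standard alpha compositing and the WCAG contrast-ratio formula. This sketch is my own illustration, not anything Apple ships: it composites a 60% opaque white panel over a light and a dark background, then measures the contrast of the same dark text against each result.

```python
def srgb_to_linear(channel: float) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple) -> float:
    """Relative luminance of an sRGB colour."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def composite(fg: tuple, bg: tuple, alpha: float) -> tuple:
    """Blend a foreground colour at the given opacity over a background."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

def contrast(a: tuple, b: tuple) -> float:
    """WCAG contrast ratio, from 1:1 (identical) to 21:1 (black on white)."""
    hi, lo = sorted((luminance(a), luminance(b)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

text = (20, 60, 20)       # dark green text
panel = (255, 255, 255)   # white panel at 60% opacity

over_light = contrast(text, composite(panel, (250, 250, 250), 0.6))
over_dark = contrast(text, composite(panel, (40, 90, 40), 0.6))
print(over_light > over_dark)  # the same panel loses contrast over dark content
```

The panel itself never changes; only what scrolls beneath it decides whether its text stays readable.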

So designers have lots of little tricks they can play. They can decorate the text with outlines, shadows, and glows, as Microsoft did for various elements in Windows Vista. This works but is not particularly elegant, especially for text longer than a few words. Designers also blur the background, a common trick used in every major operating system today, and they adjust the way the foreground inherits the tones and shades of the background.

Windows Vista. Notice the white glow behind the text in application title bars, and the blurred background elements. (Image from the OpenGL Pipeline Newsletter.)
A screenshot of Windows Vista’s Aero user interface.

The Liquid Glass texture is more complex than Apple’s background materials or Microsoft’s Acrylic. It warps and distorts at the edges, and the way it blurs background layers is less diffuse. There are inset highlights and shadows, too, and all of these effects sell the illusion of depth better. It is as much a reflection of the intent of Apple’s human interface designers as it is a contemporary engineering project, far more so than a raster-based interface could be. That it is able to achieve such complex material properties in real-time without noticeably impacting performance or, in my extremely passive observations, battery life, is striking.

I do not think all these effects necessarily help legibility, which is as poor as it has ever been in translucent areas. The degree to which this is noticeable is dependent on the platform. In iOS 26, I find it less distracting, I think largely because it exists in the context of a single window at a time (picture-in-picture video being the sole exception). That means there is no expectation of overlapping active and inactive windows and, so, no chance that something overlapping within a window’s area could be confused with a different window overlapping.

Legibility problems are also reduced by how much is moving around on a display at any time. Yes, there are times when I cannot clearly read a text label in an iOS tab bar or a menu item in MacOS, but as soon as I scroll, legibility is not nearly as much of an issue. I do not wish to minimize this; I think text labels should be legible in every situation. But the experience in use is better than the still screenshots you have seen in this article and elsewhere might suggest.

Apple also tries to solve legibility by automatically flipping the colour of the glass depending on the material behind it. When the glass is overtop a lighter-coloured area, the glass is light with dark text and icons; when it is on top of a darker area, the glass is dark with light-coloured text and icons. If Apple really wanted to improve the contrast of the toolbar, it would have done the opposite. These compensations do not trigger immediately so, when scrolling through a document containing a mix of lighter and darker areas, there is not as much flashing between the two states as you might expect. It is Apple’s clever solution to a problem Apple created.

With all the places the Liquid Glass texture has been applied in MacOS, you might believe it would make an appearance in the Menu Bar, too, since that has sported a translucent background since Leopard. But you would be wrong. In fact, in MacOS 26, the Menu Bar often has no background at all. The system decides whether the menu titles and icons should be shown in white or black based on the desktop picture, and then drops them right on top of it. Occasionally, it will show a gradient or shadow, sometimes localized to either side of the Menu Bar. As with the other uses of translucency, legibility has been considered and I have not had difficulty reading menu items — but, also like the other translucent elements, this would never be a problem if the Menu Bar had a solid background.

Menu Bar without a gradient background
Menu Bar with a full gradient background
Menu Bar with a gradient background on the left side
Menu Bar with a gradient background on the right side

Here is the thing, though: Liquid Glass mostly — mostly — feels at home on the iPhone. Yes, Apple could have avoided legibility problems entirely by not being so enamoured of translucency, but it does have alluring characteristics. It creates a very cool feeling of true dimensionality. It is also a more direct interpretation of the hardware on which these systems run, the vast majority of which have gloss-finish glassy screens. Glass onscreen feels like a natural extension of this. I get it. I do not love it, but I feel like I understand it on the iPhone far more than I do on my Mac.

The animations — the truly liquid-feeling part of this whole thing — are something better seen as they are quite difficult to explain. I will try but do not worry: there are visual aids coming. The buttons for tools float in glassy bubble containers in a layer overtop the application. Now imagine those buttons morphing into new bubbly buttons and toolbar areas as you move from one screen to another. When there are two buttons, they may become a unified blob on a different screen. For example, in the top-right of the Library section of the Music app, there is an account button, and a button labelled “⋯” which shows a menu containing only a single “Edit Sections” item. Tapping on “Playlists” transforms the two button shapes into a single elongated capsule enclosing three buttons. Tapping “Artists” condenses the two into a single sorting button. Tapping “Genres” simply makes the two buttons fade away as there are no buttons in the top-right of this section.

Though these animations are not nearly as fluid as they were first shown, they help justify the “liquid” part of the name, and Apple is proud enough of them to call them out in the press release. Their almost complete absence on MacOS is therefore notable. There are a handful of places they appear, like in Spotlight, but MacOS feels less committed to Liquid Glass as a result. When menus are summoned, they simply appear without any dramatic animation. Buttons and menus do not have the stretchy behaviour of their iOS counterparts. To be sure, I am confident those animations in MacOS would become tiresome in a matter of minutes. But if MacOS is better for being less consistent with iOS in this regard, that seems to me like a good argument against forcing cross-platform user interface unification.

The System

Strictly speaking, Liquid Glass describes only this material, but the redesign does not begin and end there. Apple has refreshed core elements across the entire system, from toolbars and buttons to toggle controls and the Dock. And, yes, application icons.

I have already written about the increasing conformity of app icons within MacOS, which has brought them into complete alignment with their iOS counterparts, down to the number of dots across the top of the Notes icon. If there are any differences in icons on MacOS and iOS for the same system app, they are insignificant. Regardless of how one may feel about this change — personally, aghast — they are changes made in concert with the Liquid Glass personality. Icons are no longer mere bitmap images at set sizes. They are now multilayer documents, and each layer can have distinct effects applied. The whole icon also appears to have a polished edge which, by default, falls on the upper-left and bottom-right, as though it is being lit at a 45° angle.

My Home Screen with icons tinted to match the background. When you select tinted icons, iOS provides the option of a colour picker.
iOS 26 Home Screen with slate-tinted glassy icons.

If these icons have an advantage, it is that Apple is now allowing more user customization than ever. In addition to light and dark modes, application icons can now be displayed in clear and tinted states. Designers can specify new icons, and iOS will automatically convert icons from apps that have not been updated, with mixed results. And, as with the other icon display modes, this also affects widgets on the Home Screen and icons across the system. Clear and tinted both look like frosted glass and have similar dimensional effects as other Liquid Glass elements, though one is — obviously — tinted. I can see this being a boon to people who use a photo as their wallpaper, though it comes at the expense of icon clarity and designer intention.

The party trick on the iPhone’s Home Screen is that each of these layers and the glassy edge respond to the physical orientation and movement of your device. Widgets and folders on the Home Screen also get that shine and they, too, respond to device movement. Sometimes. The shine on the App Library overview page does not respond to motion, but the app icons within a category do. App icons in Spotlight are not responsive to motion, either, and nor are the buttons in the bottom corners of the Lock Screen. The clock on the Lock Screen responds, but the notifications just below it on the very same screen do not. This inconsistency feels like a bug, but I do not think it is. I do not love this effect; I simply think similar things should look and behave similarly.

One of the things Apple is particularly proud of is how the shapes of the visual interface now reflect the shapes in its hardware, particularly in how they neatly nest inside each other. Concentricity is nothing new to its industrial design language. The company has, for decades, designed devices with shapes that nestle comfortably within each other. Witness, for example, the different display materials on the iMac G4 and the position of the camera on the back of the original iPhone. It is not even new in Apple’s software: the rounded corners of application icons mimic the original iPhone’s round corners; the accessory pairing sheet is another example. But accentuating the roundedness of the display corners is now a systemwide mandate. Application windows, toolbars, sidebars, and other elements have been redrawn as concentric roundrects, perfectly seated inside each other and within the rounded rectangle of a modern Apple device’s display. Or, at least, that is the theory.
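The geometry behind this is simple: for two rounded rectangles to appear concentric, the inner corner radius must equal the outer radius minus the inset between them, bottoming out at zero. A quick sketch of that rule; the function is my own illustration, not an Apple API:

```python
def concentric_radius(outer_radius: float, inset: float) -> float:
    """Corner radius for a shape inset uniformly within a rounded
    rectangle, so both corner arcs share the same centre point."""
    return max(outer_radius - inset, 0.0)

# A toolbar inset 8 pt inside a window with 24 pt corners needs
# 16 pt corners to read as concentric; at an inset of 24 pt or
# more, the inner corners square off entirely.
```

This is also why the radius must shrink with every level of nesting: keep the same radius at each level and the corners stop looking seated inside one another.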

In reality, only some of Apple’s devices have displays with four rounded corners: Apple Watches, iPhones, and iPads. The displays of recent MacBook Airs and MacBook Pros have rounded corners at the top, but are squared-off at the bottom. Neither the iMac nor either of Apple’s external displays has rounded corners at all. Yet all of these devices have inherited the same bubbly design language with dramatically rounded application windows.

Screenshot of a Finder window in MacOS Tahoe showing a grid of application icons.

Perhaps I am taking this too literally. Then again, Apple is the one saying application windows are no longer “configured for rectangular displays”, and that they now fit the “rounded corners of modern hardware”. Regardless of the justification, I quite like the roundness of these windows. Perhaps it is simply the newness, but they make applications seem friendlier and softer. I understand why they are controversial; the large radius severely restricts what can be present in the corners, thus lowering the information density of an application window. It seems Apple agrees it is more appropriate in some apps than in others — app windows in System Information and Terminal have a much smaller corner radius.

Still, the application windows which do gain the more generously rounded corners are appreciably concentric to my MacBook Pro’s display corners at default scaling (1512 × 982) and at one tick scaled down (1800 × 1169). But at one tick scaled up (1352 × 878) the corners are no longer concentric to the display corners, and now feel overlarge and intrusive in the application area.

Even on a device with four rounded display corners, this dedication to concentricity is not always executed correctly. My iPhone 15 Pro, for example, has corners with a slightly smaller radius than an iPhone 16 Pro. The bottom corners of the share sheet on my device are cramped, nearly touching the edge of the display at their apex.

Screenshot of the lower part of the iOS share sheet in which the bottom rounded corners are nearly touching the device bezel.

Then there are the issues caused by this dedication to concentricity. Look again at that Finder window screenshot above and pay attention to the buttons in the toolbar. In particular, notice how the icon in the item grouping button — the solitary one between the view switcher and the group that includes the sharing button — looks like it is touching the rounded edge.

Maps on iOS has a different kind of concentricity issue. When the search area is in a retracted state, the container around the search bar does not align with the left and right edges of the buttons above it, in a way that does not feel deliberate. I assume this is because it follows the curves of the display corners with an equal distance on all sides. When it is in an expanded state, it becomes wider than the buttons above it. At least — unlike the Share sheet — its bottom corners are rounded correctly on my iPhone.

Two screenshots of the Maps app within iPhone frames, with vertical lines showing the alignment of the buttons described above.

I could keep going with my nitpicks, so I shall. The way toolbars and their buttons are displayed on MacOS is, at best, something to get used to, though I have tried and failed. What was once a solid area for tools has, in many apps, become a gradient with floating buttons. The gradient is both a fill and a progressive blur, which I think is unattractive.

A screenshot of the Preview app's toolbar with a PDF document open.

This area is not very tall, which means a significant amount of the document encroaches into its lower half. In light mode, the background of a toolbar is white. The backgrounds of toolbar buttons are also white. Buttons are differentiated by nothing more than a diffuse shadow. The sidebar is now a floating roundrect. The glyphs in sidebar items and toolbar buttons are near-black. The shapeless action buttons in Finder are grey. Some of these things were present in previous versions of MacOS, but the sum of this design language is the continued reduction of contrast in user interface elements to, I think, its detriment.

Apple justifies these decisions by saying its redesigned interfaces are “bringing greater focus to content”. I do not accept that explanation. Instead of sitting in a distinct and separated area, tools bleed into your document, gaining a similar level of importance as the document itself. I have nothing beyond my own experience to back this up. Perhaps Apple has user studies suggesting something different; if it does, I think it should publicly document its research. But, in my experience, the more the interface blends with what I am looking at, the less capable I am of ignoring it. Clarity and structure are sacrificed for the illusion of simplicity offered by a monochromatic haze of an interface.

Even if I bought that argument, I do not understand why it makes sense to make an application’s tools visually recede. While I am sometimes merely viewing a document, I am very often trying to do something to it. I want the most common actions I can take to be immediately obvious. For longtime Mac users, the structure of most apps has not changed and one can rely on muscle memory in familiar apps. But that is more of an excuse for why this redesign is not as bad as it could be than a justification for why it is an improvement.

Then there are the window controls. The sidebar in an application is now depicted in a floating state which, Apple says, is “informed by the ambient environment within the app”, meaning it reflects the colours of elements around it. This includes colours from outside the app, which a lot of the time makes the sidebar look translucent to windows underneath it. This defies all logic, and it happens even if you enable “Reduce Transparency” in Accessibility settings. The window controls are then set inside this floating sidebar, which makes it look like they act on the sidebar, not the application window.

Since the sidebar now apparently sits on top of the window, stuff can be displayed underneath it. If you have seen any screenshots of this in action, it was probably of the Music app, because few other applications do this; after all, why would you want stuff under the sidebar?

Here is the Music app screenshot I am obliged to include.
The Music app, with a couple of rows of tiles scrolled horizontally so some of the tiles are underneath the sidebar.

In the Photos app, it reminds me of a floating palette like Apple used to ship in iPhoto and Aperture. Those palettes allowed you to edit a photo in full-screen on a large display, and you could hide and show the tools with a keystroke. A floating sidebar and a hard gradient of a toolbar is a distracting combination. Whatever benefit it is supposed to impart is lost on me.

Photos running on MacOS 26.
A screenshot of Photos with an image zoomed-in so that the sidebar is partially overlapping the photo.

I expected Apple to justify this on the basis that it maintains context or something, but it does not. Its Human Interface Guidelines only say this is done to “reinforce the separation and floating appearance of the sidebar”, though this is not applied consistently. In a column view in Finder, for example, there is a hard vertical edge below the rounded corner of the ostensibly floating sidebar. I am sure there are legibility reasons to do this but, again, it is a solution to a problem Apple created. It reimagined sidebars as a floating thing because it looks cool, then realized it does not work so well with the best Finder layout and built a fairly unrefined workaround.

The bottom right corner of the sidebar in Finder has a hard edge that breaks the impression it is floating.
A screenshot of a Finder window with an inset showing the bottom-right corner of the sidebar.

I am spending an awful lot of words on the MacOS version because I think it is the less successful of the two Liquid Glass implementations I have used. MacOS still works a lot like MacOS. But it looks and feels like someone dictated, context-free, that it needed to reflect the redesign of iOS.

The iOS implementation is more successful since Liquid Glass feels — and I mean feels — like something designed first for touch-based systems. There is an increasingly tight relationship between the device and its physical environment. Longstanding features like True Tone meet new (well, new-ish) shifting highlights that respond to physical device orientation, situating the iPhone within its real-world context. Yet, even in its best implementation on iOS, Liquid Glass looks out of place when it is used in apps that rely on layouts driven by simple shapes and clean lines.

The Clock app is a great example of this clashing visual language. Each of its functions consists mostly of a black screen, with white numerals and lines, and maybe a pop of colour — the second hand in the stopwatch or the green start button for timers. And then you tap or slide on the toolbar at the bottom to move through the app and, suddenly, a hyper-realistic glassy lens appears.

The Calculator app is another place where the limited application of Liquid Glass feels wrong. The buttons are drawn in some kind of glass texture — they are translucent and stretch in the same way as menus do — but the ring of shimmering highlight is so thin it may as well not exist. Apple does say in its Human Interface Guidelines that Liquid Glass should be used “sparingly”, but it uses the texture everywhere. There are far more generous buttons in Control Centre and on the passcode entry screen that feel more satisfying to press. Also, even though the buttons in Calculator are nominally translucent, the orange ones remain vibrant despite being presented against a solid black background.

This confused approach to visual design is present throughout the system. It has been there for years to some extent — Books has a realistic page flip animation, and Notes retained a paper texture for years after the iOS 7 redesign. But Liquid Glass is such a vastly different presentation compared to the rest of iOS that it stands out. When some elements have such a dynamic and visually rich presentation while others are plain, the combination does not feel harmonious. It feels unfinished.

This MacOS update is not all bad on a design front, to be fair. Sidebar icons now have a near-black fill instead of an application-specific colour; they gain the highlight colour when a particular sidebar item is active. This has the downside of making applications less distinct from one another, but it is a contrast improvement in a user interface that is mostly full of regressions. Also, inactive application windows are more obvious, with mid-grey toolbar items, window widgets, document icons, and window titles. On an iPhone, the biggest user interface good news is a sharp reduction in the number of modal dialogs. They are not entirely banished — not even close — but fewer whole-screen takeovers is good news on today’s larger-screened devices. The second piece of good news is the new design of edit menus that are no longer restricted to horizontal scrolling, and can expand into vertical-scrolling context menus. Also, on the Lock Screen, you can now move the widgets row to the bottom of the screen, and I quite like that.

There are enhancements downstream from the floating controls paradigm reinforced in this Liquid Glass update. In many iOS applications with a large list view — Messages and Mail, for example — the search field is now positioned at the bottom within easier reach. Floating controls do not require the Liquid Glass material; the Safari redesign in iOS 15 now seems like a preview of where Apple was headed, and it obviously does not use these glassy controls. But I think the reconsidered approach in iOS 26 is more successful in part because the controls have this glassy quality.

There is, in fact, quite a lot to like in Apple’s operating system updates this year that have nothing to do with user interface changes. This is not a full review, so I will give you some quick hits, starting with the new call screening feature. I have had this switched on since I upgraded in June and it is a sincere life improvement. I still get three to six scam calls daily, but now my phone hardly ever notifies me, and I can still receive legitimate calls from numbers not in my contacts. Bringing Preview to iOS is an upgrade for anyone who spends huge chunks of time marking up PDF documents.

Spotlight on MacOS is both way more powerful and way easier to use — if you want to just search for files and not applications, or vice-versa, you can filter it. Oh, and there is now a clipboard history feature which, smartly, is turned off by default.

Oh, and you need to try “Spatial Scene” photos. If you have not tried this feature already, go and do it, especially on your Lock Screen. I have tried it with photos taken on iPhones, pictures shot with my digital camera, and even film scans, and I have been astonished at how it looks and, especially, feels. I have had the best results when I start with portraits from my digital camera; perhaps unsurprisingly, the spatial conversion is only as good as the quality of the source photo. For a good Lock Screen image, especially if you want to overlap the clock, you will want a picture with reasonably clear background separation and with a generous amount of space around the subject. There is a new filter in the Lock Screen image picker with good suggestions for Spatial Scene conversions. Again, you will want to try this.

And there are downsides to the two operating system updates I have used. Both are among the buggiest releases I can remember, likely in part because of the visual refresh. There are functional bugs, there are performance problems, and there are plenty of janky animations. There are so many little things that make the system feel fragile — the Wallpaper section of Settings, for example, has no idea widgets can now be aligned to the bottom of the Lock Screen, so they overlap with the clock. I hope this stuff gets fixed. Unfortunately, even though these operating systems are named for the coming calendar year, Apple will be shifting engineering efforts to the OS 27 releases in a matter of months.

The ‘Why’ Of It All

I kept asking myself “why?” as I used iOS 26 and MacOS 26 this summer. I wanted to understand the rationale for a complete makeover across Apple’s entire line of products. What was the imperative for unifying the systems’ visual interface design language? Why this, specifically?

Come to think of it, why is this the first time all of the operating systems are marketed with the same version number? And why did Apple decide this was the right time to make a dedicated “operating system” section on its website to show how it delivers a “more consistent experience” between devices? I have no evidence Apple would want to unify under some kind of “Apple OS” branding, but if Apple did want to make such a change, this feels like a very Apple-y way to soft-launch it. After all, your devices already run specific versions of Safari and Siri without them needing to be called “Mac Safari” and “Watch Siri”. Just throwing that thought into the wind.

If anything like that pans out, it could explain why Apple sees its products as needing a unified identity. In the present, however, it suggests a change in how Apple presents its approach to designing its products. Public statements for the past twenty-plus years have communicated the importance of letting each product be true to itself. It would be easy to dismiss this as marketing pablum if not for how reliably it has been backed by actual evidence. Yes, lines have become blurrier on the developer side with technologies like Catalyst, and on the user side by allowing iPhone and iPad apps to be run within MacOS. But a nominally unified look and feel makes the erosion of these boundaries even more obvious.

Perhaps I am overthinking this. It could simply be an exercise in branding. Apple’s operating systems have shared a proprietary system typeface for a decade without it meaning anything much more than a unified brand. And it is Apple’s brand that dominates when applications look the same as each other no matter where they are used. In my experience so far, developers who strictly adhere to Apple’s recommendations and fully embrace Liquid Glass end up with applications having little individual character. This can sometimes work to a developer’s benefit, if their intention is for their apps to blend into the native experience, but some developers have such specific visual styles that an Apple-like use of Liquid Glass would actually be to their detriment. The updates to Cultured Code’s Things are extremely subtle which is, I think, the right call: I want Things to look like Things, not a generic to-do app.

A uniform look-and-feel across not just Apple’s apps and systems, but also third-party apps, is a most cynical answer to the question of why? and, while I do not wish to entirely dismiss it, it would disappoint me if this was Apple’s goal. What I think is true about this explanation is how Liquid Glass across most operating systems makes it possible for any app to instantly feel like it is platform-native, even when it is not.

Or maybe the why? of it all is for some future products, like a long-rumoured touch-screen laptop. This rationale drove speculation last time Apple updated the design of MacOS, and we still do not have touch screen Macs, so I am skeptical.

The frustrating thing about the answers I have given above to the question of why? is that I am only speculating. So far, Apple justifies this redesign, basically, by saying it is self-evidently good for all of its platforms to look the same. This is an inadequate explanation, and it is not borne out in my actual day-to-day use. I think iOS is mostly fine; Liquid Glass feels suited to a whole-screen-app touch-based context. In MacOS, it feels alien, unsuited to a multi-window keyboard-and-pointer system.

I am sure this visual language will be refined. I hope it has good bones since Apple is very obviously committed to Liquid Glass and its sea of floating buttons. But so far, it does not feel ready. I spent the summer using MacOS in its default configuration, aching to turn on “Reduce Transparency” in Accessibility settings. It is not pretty, especially in application toolbars, but it is less distracting because different parts of an application have their own distinct space.

I have tried, in this overview and critique, to be cautious about how much I allow the newness of it to colour my perception. Aqua was a polarizing look when it was introduced in Mac OS X. Leander Kahney, in a December 2000 article for Wired, wrote about longtime users who were downright offended by its appearance in the then-current Public Beta, relying on utilities to “Macify” the Mac. Again, this is from 2000, sixteen years after the Mac was introduced. As of today, Aqua has been around in some form for over nine years longer. But it at least felt like a complete idea; in his review of Mac OS X Leopard, John Siracusa wrote of how it was “a single, internally consistent design from top to bottom”.

These new operating systems do not feel like they are achieving that level of consistency despite being nominally more consistent across a half-dozen platforms. MacOS has received perhaps the most substantial visual changes, yet it is full of workarounds and exceptions. The changes made to iOS feel surface-level and clash with the visual language established since iOS 7. I am hopeful for the evolution of these ideas into something more cohesive. Most software is a work-in-progress, and the user interface is no exception. But all I can reflect upon is what is before me today. Quite simply, not only is it not ready, I am concerned about what it implies about Apple’s standards. Best case scenario is that it is setting up something really great and it all makes sense in hindsight. But I still have to live with it, in this condition, on today’s hardware that is, to me, less of a showcase for Apple’s visual design cleverness and more of a means to get things done. It is not a tragedy, but I would like to fast-forward through two or three years’ worth of updates to get to a point where, I hope, it is much better than it is today.

In 2018, the Toronto Star and CBC News jointly published an investigation into Ticketmaster’s sales practices:

Data journalists monitored Ticketmaster’s website for seven months leading up to this weekend’s show at Scotiabank Arena, closely tracking seats and prices to find out exactly how the box-office system works.

Here are the key findings:

  • Ticketmaster doesn’t list every seat when a sale begins.

  • It hikes prices mid-sale.

  • It collects fees twice on tickets scalped on its site.

Dave Seglins, Rachel Houlihan, Laura Clementson, CBC News:

Posing as scalpers and equipped with hidden cameras, the journalists were pitched on Ticketmaster’s professional reseller program.

[…]

TradeDesk allows scalpers to upload large quantities of tickets purchased from Ticketmaster’s site and quickly list them again for resale. With the click of a button, scalpers can hike or drop prices on reams of tickets on Ticketmaster’s site based on their assessment of fan demand.

Ticketmaster, of course, disputed these journalists’ findings. But the very existence of TradeDesk — owned by Ticketmaster — seems to be in direct opposition to Ticketmaster’s obligations to purchasers. One part of the company is ostensibly in the business of making sure legitimate buyers acquire no more than their fair share of tickets to a popular show, while another part facilitates easy reselling at massive scale. The TradeDesk platform is not something accessible by just anyone; you cannot create an account on demand. Someone from Ticketmaster has to set up your TradeDesk account for you.

These stories have now become a key piece of evidence in a lawsuit filed by the U.S. Federal Trade Commission against Live Nation, the owner of Ticketmaster:

The FTC alleges that in public, Ticketmaster maintains that its business model is at odds with brokers that routinely exceed ticket limits. But in private, Ticketmaster acknowledged that its business model and bottom line benefit from brokers preventing ordinary Americans from purchasing tickets to the shows they want to see at the prices artists set.

The complaint’s description (PDF) of the relationship between Ticketmaster and TradeDesk, beginning at paragraph 84 and continuing through paragraph 101, is damning. If true, Ticketmaster must be aware of the scalper economy it is effectively facilitating through TradeDesk.

Liz Reid, Google’s vice president of search:

Overall, total organic click volume from Google Search to websites has been relatively stable year-over-year. Additionally, average click quality has increased and we’re actually sending slightly more quality clicks to websites than a year ago (by quality clicks, we mean those where users don’t quickly click back — typically a signal that a user is interested in the website). This data is in contrast to third-party reports that inaccurately suggest dramatic declines in aggregate traffic — often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the roll out of AI features in Search.

What “relatively stable” means is not explained and, incredibly, not a single number is used in this press release about quantifiable data. However, even giving Google an unearned benefit of the doubt, the company also says people “are searching more than ever”. If more searches are being done but the number of clicks is “relatively stable”, it effectively confirms a dropping click-through rate. None of this counters or disproves publishers’ findings of declining Google referral traffic. Even if aggregate traffic from Google has not dropped significantly, it is not clear it is going to the same places in similar amounts.
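The arithmetic is simple enough to sketch with made-up numbers — and they are entirely made up, since Google discloses none:

```python
# Hypothetical figures only; Google publishes no actual numbers.
searches_last_year = 100_000_000
searches_this_year = 120_000_000   # "searching more than ever"
clicks_last_year = 30_000_000
clicks_this_year = 30_000_000      # click volume "relatively stable"

# Click-through rate: clicks divided by searches.
ctr_last_year = clicks_last_year / searches_last_year
ctr_this_year = clicks_this_year / searches_this_year

print(f"CTR last year: {ctr_last_year:.1%}")   # 30.0%
print(f"CTR this year: {ctr_this_year:.1%}")   # 25.0%
```

Stable clicks against growing searches can only mean the rate of clicking is falling.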

A big problem Google has is that it closely guards everything related to search, ostensibly to reduce gaming its ranking factors, and it is not a trustworthy narrator. It intuitively makes sense for A.I. Overviews to damage search traffic. We just do not know for which websites and by how much, and Reid’s post provides no clarity.

Reid:

The web has existed for over three decades, and we believe we’re entering its most exciting era yet. […]

As of the day this was published, exactly 34 years since the first website was launched.

Riccardo Mori:

Now, with these older iOS devices in particular, battery life is what it is, and I don’t always remember to keep them all charged at all times. It happens with my Mac laptops as well. Whenever I revive one of these devices, if it’s still able to access iCloud and other Apple ID-related services, I get a notification on all my other Apple devices that a certain device has now access to FaceTime and iMessage.

The wording in this notification has changed for the worse in more recent versions of Mac OS and iOS/iPadOS. […]

Michael Tsai:

The alert doesn’t actually mean that the device was added in the user sense. Most of the time the device was already in my account, but a software update or something meant that Apple needed to do some kind of key refresh. It feels like I’m being interrupted for an implementation detail.

I do not see this as frequently as, it seems, Mori or Tsai — I do not have a stable of old devices I rotate between, nor am I a software developer. When I do, it is almost never because I have purchased a new device. It is usually because of, as Tsai writes, a software update or perhaps adding a travel SIM, so the alert interrupts me, ambiguously, to confirm something I already know. Occasionally, the software update was installed automatically, so I am surprised by the alert on a different device but have no way of understanding what happened. Then I think about what I should actually do with this information, particularly with the revised wording of this alert:

Your Apple ID and phone number are now being used for iMessage on a new Mac.

If you recently signed in to “[Device Name]”, you can ignore this notification.

[OK]

What do I do now? That is rhetorical; I understand I would search for it. (I also asked Siri on iOS 26 — you know, the one with the product knowledge — and it, too, searched Google.) But what does a normal person do now? This is scary and unhelpful, yet the user interface says in the same breath it might be irrelevant.

It reminds me a little of the often-wrong map in the dialog box for two-factor authentication. These features are ostensibly there to promote greater security, but they only erode users’ awareness when they are not designed with precision and care.

Mark Zuckerberg is not much of a visionary. He is ambitious, sure, and he has big ideas. He occasionally pops into the public consciousness to share some new direction in which he is taking his company — a new area of focus that promises to assert his company’s leadership in technology and society. But very little of it seems to bear fruit or be based on a coherent set of principles.

For example, due to Meta’s scale, it is running into limitations on its total addressable market based on global internet connectivity. It has therefore participated in several related projects, like measuring the availability of internet connectivity worldwide with the Economist, which has not been updated since 2022. In 2014, it acquired a company building a solar-powered drone to beam service to people in more remote locations; the project was cancelled in 2018. It made a robot to wrap fibre optic cable around existing power lines, which it licensed to Hibot in 2023; Hibot has nothing on its website about the robot.

It is not just Meta’s globe-spanning ambitions that have faltered. In 2019, Zuckerberg outlined a “privacy-focused vision for social networking” for what was then Facebook, the core tenets of which in no way conflict with the company’s targeted advertising business. Aside from the things I hope Facebook was already doing — data should be stored securely, private interactions should remain private, and so on — there were some lofty goals. Zuckerberg said the company should roll out end-to-end encrypted messaging across its product line; that it should add controls to automatically delete or hide posts after some amount of time; that its products should be extremely interoperable with those from third parties. As of writing, Meta has added end-to-end encryption to Facebook Messenger and Instagram, but it is only on by default for Facebook. (WhatsApp was end-to-end encrypted by default already.) It has not added an automatic post deletion feature to Facebook or Instagram. Its apps remain stubbornly walled-off. You cannot even sign into a third-party Mastodon app with a Threads account, even though it is amongst the newest and most interoperable offerings from Meta.

Zuckerberg published that when it was advantageous for the company to be seen as doing its part for user privacy. Similarly, when it was smart to advocate for platform safety, Zuckerberg was contrite:

But it’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm as well. That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy. We didn’t take a broad enough view of what our responsibility is, and that was a huge mistake. It was my mistake.

Then, when it became a good move to be brash and arrogant, Zuckerberg put on a gold chain and a million-dollar watch to explain how platform moderation had gone too far.

To be clear, Meta has not entirely failed with these initiatives. As mentioned, Threads is relatively interoperable, and the company defaulted to end-to-end encryption in Facebook Messenger in 2023. It said earlier this year it is spending $10 billion on a massive sub-sea cable, a proven technology that can expand connectivity far more than a solar-powered drone ever could.

But I have so far not mentioned the metaverse. According to Zuckerberg, this is “an embodied internet where you’re in the experience, not just looking at it”, and it was worth pivoting the entire company to be “metaverse-first”. The company renamed itself “Meta”. Zuckerberg forecasted an “Altria moment” a few years prior and the press noticed. In announcing this new direction in 2021, Zuckerberg acknowledged it would be a long-term goal, though predicted it would be “mainstream in the next five to ten years”:

Our hope is that within the next decade, the metaverse will reach a billion people, host hundreds of billions of dollars of digital commerce, and support jobs for millions of creators and developers.

Granted, it has not been even four years since Zuckerberg made these announcements, but are we any closer to his company’s vision becoming mainstream? If you broaden the definition of “metaverse” to include all augmented and virtual reality products then, yes, it appears to be a growing industry. But the vision shown at Connect 2021 is scarcely anywhere to be found. We are not attending virtual concerts or buying virtual merch at virtual after-parties. I am aching to know how the metaverse real estate market is doing as I am unaware of anyone I know living in a virtual house.

As part of this effort, Meta announced in May 2022 it would support NFTs on Instagram. These would be important building blocks for the metaverse, the company said, “critical for how people will buy, use and share virtual objects and experiences” in the virtual environment it was building. Meta quickly expanded availability to Facebook and rolled it out worldwide. Then, in March 2023, it ended support for NFTs altogether, saying “[a]ny collectibles you’ve already shared will remain as posts, but no blockchain info will be displayed”.

Zuckerberg has repeatedly changed direction on what his company is supposed to stand for. He has plenty of ideas, sure, and they are often the kinds of things requiring resources in an amount only possible for a giant corporation like the one he runs. And he has done it again by dedicating Meta’s efforts to what he is calling — in a new manifesto, open letter, mission statement, or whatever this is — “personal superintelligence”.

I do have to take a moment to acknowledge the bizarre quality of this page. It is ostensibly a minimalist and unstyled document of near-black Times New Roman on a white background — very hacker, very serious. It contains about 3,800 characters, which should mean a document barely above four or five kilobytes, accounting for HTML tags and a touch of CSS. Yet it is over 400 kilobytes. Also, I love that keywords are defined:

<meta name="keywords" content="Personal 
Superintelligence, AI systems improvement, 
Superintelligence vision, Mark Zuckerberg 
Meta, Human empowerment AI, Future of 
technology, AI safety and risks, Personal
AI devices, Creativity and culture with 
AI, Meta AI initiatives">

Very retro.

Anyway, what is “superintelligence”? is a reasonable question you may ask, and one Zuckerberg does not answer; he never defines the term. I guess it is supposed to be something more than or different from artificial intelligence, which is yesterday’s news:

As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.

He decries competitors’ ambitions:

This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output. At Meta, we believe that people pursuing their individual aspirations is how we have always made progress expanding prosperity, science, health, and culture. This will be increasingly important in the future as well.

I am unsure what to make of this. It is sorely tempting to dismiss the whole endeavour as little more than words on a page for a company deriving 98% of its revenue (PDF) from advertising.1 If we consider it more seriously, however, we are left with an ugly impression of what “valuable work” may consist of. Meta is very proud of its technology to “generate photorealistic images”, thereby taking the work of artists and photographers. Examples of its technology also include generating blog posts and building study plans, so it seems writing and tutoring are not entirely “valuable work” either.

I am being a bit cheeky but, with Zuckerberg’s statement entirely devoid of specifics, I am also giving it the gravitas it has earned.

While I was taking way too long to write this, Om Malik examined it from the perspective of someone who has followed Zuckerberg’s career trajectory since it began. It is a really good piece. Though Malik starts by saying “Zuck is one of the best ‘chief executives’ to come out of Silicon Valley”, he concludes by acknowledging he is “skeptical of his ability to invent a new future for his company”:

Zuck has competitive anxiety. By repeatedly talking about being “distinct from others in the industry” he is tipping his hand. He is worried that Meta is being seen as a follower rather than leader. Young people are flocking to ChatGPT. Programmers are flocking to Claude Code.

What does Meta AI do? Bupkiss. And Zuck knows that very well. You don’t do a company makeover if things are working well.

If you are solely looking at Meta’s earnings, things seem to be working just fine for the company. Meta beat revenue expectations in its most recent quarter while saying the current quarter will also be better than analysts thought. Meta might not be meeting already-low analyst expectations for revenue in its Reality Labs metaverse segment, but the stock jumped by 10% anyhow. Even Wall Street is not taking Zuckerberg seriously as an innovator. Meta is great at selling ads. It is not very exciting, but it works.

Back to the superintelligence memo, emphasis mine:

We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible.

And here is what Zuckerberg wrote just one year ago:

Meta is committed to open source AI. I’ll outline why I believe open source is the best development stack for you, why open sourcing Llama is good for Meta, and why open source AI is good for the world and therefore a platform that will be around for the long term.

[…]

There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it’s in their interest to support open source because it will make the world more prosperous and safer.

No mention of being careful, no mention of choosing what to open source. Zuckerberg took an ostensibly strong, principled view supportive of open source A.I. when it benefitted the company, and is now taking an ostensibly strong, principled view that it requires more nuance.

Zuckerberg concludes:

Meta believes strongly in building personal superintelligence that empowers everyone. We have the resources and the expertise to build the massive infrastructure required, and the capability and will to deliver new technology to billions of people across our products. I’m excited to focus Meta’s efforts towards building this future.

On this, I kind of believe him. I believe the company has the resources and reach to make “personal superintelligence” — whatever it is — a central part of Meta’s raison d’être, just as Malik says in his article he has “learned not to underestimate Zuckerberg”. The language in Zuckerberg’s post is flexible, vague, and optimistic enough to provide cover for whatever the company does next. It could be a unique virtual assistant, or it could be animated stickers in chats. Whatever it is, this technology will also assuredly be directed toward the company’s advertising machine, as its current A.I. efforts are providing “greater efficiency and gains across our ad system”. Zuckerberg is telling investors imagine what we could do with superintelligence.

In December 2023, Simon Willison wrote about the trust crisis in artificial intelligence, comparing it to the conspiracy theory that advertisers use audio from real-world conversations for targeting:

The key issue here is the same as the OpenAI training issue: people don’t believe these companies when they say that they aren’t doing something.

One interesting difference here is that in the Facebook example people have personal evidence that makes them believe they understand what’s going on.

With AI we have almost the complete opposite: AI models are weird black boxes, built in secret and with no way of understanding what the training data was or how it influences the model.

Meta has pulled off a remarkable feat. It has ground down users’ view of their own privacy into irrelevance, yet its services remain ubiquitous to the point of being essential. Maybe Meta does not need trust for its A.I. or “superintelligence” ambitions, either. It is unfathomably rich, has a huge volume of proprietary user data, and a CEO who keeps pushing forward despite failing at basically every quasi-visionary project. Maybe that is enough.


  1. Do note two slides later the company’s effective tax rate dropping from 17% in Q3 and Q4 2023 to just 9% in Q1 2025, and 11% in the most recent quarter. Nine percent on over $18 billion in income. ↥︎

Athena Chapekis and Anna Lieb, Pew Research Center:

Google users who encounter an AI summary are less likely to click on links to other websites than users who do not see one. Users who encountered an AI summary clicked on a traditional search result link in 8% of all visits. Those who did not encounter an AI summary clicked on a search result nearly twice as often (15% of visits).

Google users who encountered an AI summary also rarely clicked on a link in the summary itself. This occurred in just 1% of all visits to pages with such a summary.

I looked through this article and the methodology to see how this survey came together, since it seems to me the real question is whether A.I. summaries are more or less damaging to search traffic than older features like snippets.

As far as I can figure out, Pew conducted this survey by looking for mentions of A.I. in the data of users who consented to having their web browsing tracked, then categorizing that traffic depending on whether it was a news article about A.I. or an A.I. feature being used. Any Google data without an A.I. summary was, as far as I can see, categorized as not containing an A.I. summary. But this latter category amounted to 82% of all Google searches, and there does not appear to be any differentiation in which features were shown for those. Some may have snippets; others may have some other “zero-click” feature. Some may have no such features at all. Lumping all of those together makes it impossible to tell what impact A.I. summaries are having on search compared to Google’s previous attempts to keep users in its bubble.

This survey does a good job of showing how irrelevant the source links are in Google A.I. summaries to search traffic. Much like the citations at the end of a book, they serve as an indicator of something being referenced, but there is no expectation anyone will actually read it to confirm whether the information is accurate. There was such a citation to a Microsoft article ostensibly containing an Excel feature Google made up. Unlike citations in a book, Google’s A.I. summaries are entirely the product of a machine built by people who have only some idea of the output.

John Paul Tasker, CBC News:

U.S. President Donald Trump says he’s ending all trade discussions with Canada to hit back at Ottawa for slapping a tax on web giants — and he wants it removed before negotiations can begin again.

His objection is, ostensibly, about its apparent targeting of companies based in the United States. This is a very silly complaint. The U.S. seized the heart of the tech economy and, instead of cooperating with others, used it as leverage around the world. That is one reason for its unrivalled dominance in the industry. Any tax on tech companies will disproportionately affect U.S. businesses, but they have been exerting disproportionate influence around the world for decades.

Brendan Ruberry, Semafor:

Canada’s 3% digital services tax went into effect last year, but its first payments are due Monday, with US companies expected to shell out nearly $2.7 billion. Trump said US tariffs on Canadian goods would be applied within the next week. Last month, the US and UK agreed [to] a trade deal despite Westminster enacting a 2020 digital services tax.

This is just the latest thing our hostile neighbour can use to try to make us crack. If there were no tax, there would be something else to complain about, because we are not dealing with a reasonable administration that wants mutually beneficial trade arrangements.

As far as I can see, this tax makes sense. Unlike the Online News Act, which requires large platforms to pay for some traffic they send elsewhere, this act is specifically about revenue extracted from Canadians by businesses that are only beginning to see antitrust regulation.

Thinking about the energy “footprint” of artificial intelligence products makes it a good time to re-link to Mark Kaufman’s excellent 2020 Mashable article in which he explores the idea of a carbon footprint:

The genius of the “carbon footprint” is that it gives us something to ostensibly do about the climate problem. No ordinary person can slash 1 billion tons of carbon dioxide emissions. But we can toss a plastic bottle into a recycling bin, carpool to work, or eat fewer cheeseburgers. “Psychologically we’re not built for big global transformations,” said John Cook, a cognitive scientist at the Center for Climate Change Communication at George Mason University. “It’s hard to wrap our head around it.”

Ogilvy & Mather, the marketers hired by British Petroleum, wove the overwhelming challenges inherent in transforming the dominant global energy system with manipulative tactics that made something intangible (carbon dioxide and methane — both potent greenhouse gases — are invisible), tangible. A footprint. Your footprint.

The framing of most of the A.I. articles I have seen thankfully shies away from ascribing individual blame; instead, they point to systemic flaws. This is preferable, but it still does little at the scale of electricity generation worldwide.

Siddharth Venkataramakrishnan, Financial Times:

One hint that we might just be stuck in a hype cycle is the proliferation of what you might call “second-order slop” or “slopaganda”: a tidal wave of newsletters and X threads expressing awe at every press release and product announcement to hoover up some of that sweet, sweet advertising cash.

That AI companies are actively patronising and fanning a cottage economy of self-described educators and influencers to bring in new customers suggests the emperor has no clothes (and six fingers).

The (verified) X accounts producing threads of links to bad A.I. products, boosted by dozens of replies from other verified X accounts, are annoying spam, but seem relatively harmless. But the newsletters profiled here are curious.

One is Rowan Cheung’s Rundown AI newsletter, which Cheung bills as the “world’s most read daily AI newsletter”, not to be confused with the other newsletter in the article, Zain Kahn’s Superhuman AI, the “world’s biggest AI newsletter”. As alluded to by Venkataramakrishnan but not expanded upon, both attract some heavy-hitting sponsors. The Rundown was recently sponsored by Salesforce, HubSpot, Sana, and Writer, while Superhuman’s recent sponsors include companies like HubSpot, Sana, and Writer — no Salesforce.

While Cheung’s newsletter is mostly A.I. boosterism with sponsorships, there is a block near the end of each issue for a sibling product: the Rundown University. The first thing you need to know about it is that it is not a university, obviously. It is online training through individual “courses” offered at $50 apiece with individual lessons on using A.I. tools, some of which — like Gamma and Zapier — happen to have sponsored the newsletter. Or, if you want access to all the “courses” plus workshops, tutorials, and some kind of group chat, you can get all that for just a penny shy of $1,000 per year. Just above the footer sits a carousel of inspirational quotes about A.I. from people like Sam Altman, Jeff Bezos, Mark Cuban, and Elon Musk. None are actually specific to Rundown “University”, just vibes about the importance of A.I. and its impact. It all feels a bit much.

I am fascinated by the knock-on effects of a hype cycle, and A.I. has produced some magnificent examples — including those above. It is illuminating to search Google for a phrase like intitle:"how" intitle:"is using ai to" and witness what is ostensibly breathtaking innovation across industries. A 2023 Guardian story says the “oldest surviving newspaper in the world” — untrue and in spirit only — is using ChatGPT to generate articles from council meeting minutes. According to the Harvard Business Review, construction companies are using large language models to, among other things, summarize documents. Stitch Fix is, according to a disguised ad in Vogue Business, using A.I. to detect trends.

There are countless examples of articles like these illustrating how companies are benefitting from the glow of hype while contributing to it. None of this is to say it is necessarily unearned — maybe detecting money laundering really is more reliable when you run it through A.I. processes. But I remember stories like these in the days when the hot new thing was called “machine learning” or, before that, “big data”. I would not be surprised if these technologies are truly beneficial, yet it is hard not to feel the weight of hype and the marketing people behind it all.

Remember when new technology felt stagnant? All the stuff we use — laptops, smartphones, watches, headphones — coalesced around a similar design language. Everything became iterative or, in more euphemistic terms, mature. Attempts to find a new thing to excite people mostly failed. Remember how everything would change with 5G? How about NFTs? How is your metaverse house? The world’s most powerful hype machine could not make any of these things stick.

This is not necessarily a problem in the scope of the world. There should be a point at which any technology settles into a recognizable form and function. These products are, ideally, utilitarian — they enable us to do other stuff.

But here we are in 2025 with breakthroughs in artificial intelligence and, apparently, quantum computing and physics itself. The former is something I have written about at length already because it has become adopted so quickly and so comprehensively — whether we like it or not — that it is impossible to ignore. But the news in quantum computers is different because it is much, much harder for me to grasp. I feel like I should be fascinated, and I suppose I am, but mainly because I find it all so confusing.

This is not an explainer-type article. This is me working things out for myself. Join me. I will not get far.

Hartmut Neven, of Google, in December:

Today I’m delighted to announce Willow, our latest quantum chip. Willow has state-of-the-art performance across a number of metrics, enabling two major achievements.

  • The first is that Willow can reduce errors exponentially as we scale up using more qubits. This cracks a key challenge in quantum error correction that the field has pursued for almost 30 years.

  • Second, Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10^25) years — a number that vastly exceeds the age of the Universe.

Catherine Bolgar, Microsoft:

Microsoft today introduced Majorana 1, the world’s first quantum chip powered by a new Topological Core architecture that it expects will realize quantum computers capable of solving meaningful, industrial-scale problems in years, not decades.

It leverages the world’s first topoconductor, a breakthrough type of material which can observe and control Majorana particles to produce more reliable and scalable qubits, which are the building blocks for quantum computers.

Microsoft says it created a new state of matter and observed a particular kind of particle, both for the first time. In a twelve-minute video, the company defines this new era — called the “quantum age” — as a literal successor to the Stone Age and the Bronze Age. Jeez.

There is hype, and then there is hype. This is the latter. Even if it is backed by facts — I have no reason to suspect Microsoft is lying in large part because, to reiterate, I do not know anything about this — and even if Microsoft deserves this much attention, it is a lot. Maybe I have become jaded by one too many ostensibly world-changing product launches.

There is good reason to believe the excitement shown by Google and Microsoft is not pure hyperbole. The problem is neither company is effective at explaining why. As of writing the first sentence of this piece, my knowledge of quantum computers was only that they can be much, much, much faster than any computer today, thanks to the unique properties of quantum mechanics and, specifically, quantum bits. That is basically all. But what does a wildly fast computer enable in the real world? My brain can only grasp the consumer-level stuff I use, so I am reminded of something I wrote when the first Mac Studio was announced a few years ago: what utility does speed have?

I am clearly thinking in terms far too small. Domenico Vicinanza wrote a good piece for the Conversation earlier this year:

Imagine being able to explore every possible solution to a problem all at once, instead of one at a time. It would allow you to navigate your way through a maze by simultaneously trying all possible paths at the same time to find the right one. Quantum computers are therefore incredibly fast at finding optimal solutions, such as identifying the shortest path, the quickest way.

This explanation helped me — not a lot, but a little bit. What I remain confused by are the examples in the announcements from Google and Microsoft. Why quantum computing could help “discover new medicines” or “lead to self-healing materials” seems like it should be obvious to anyone reading, but I do not get it.

I am suspicious in part because technology companies routinely draw links between some new buzzy thing they are selling and globally significant effects: alleviating hunger, reducing waste, fixing our climate crisis, developing alternative energy sources, and — most of all — revolutionizing medical care. Search the web for (hyped technology) cancer and you can find this kind of breathless revolutionary language drawing a clear line between cancer care and 5G, 6G, blockchain, DAOs, the metaverse, NFTs, and Web3 as a whole. This likely says as much about insidious industries that take advantage of legitimate qualms with the medical system and fears of cancer as it does about the technologies themselves, but it is nevertheless a recurring pattern.

I am not even saying these promises are always wrong. Technological advancement has surely improved cancer care, among other kinds of medical treatment.

I have no big goal for this post — no grand theme or message. I am curious about the promises of quantum computers for the same reason I am curious about all kinds of inventions. I hope they work in the way Google, Microsoft, and other inventors in this space seem to believe. It would be great if some of the world’s neglected diseases can be cured and we could find ways to fix our climate.

But — and this is a true story — I read through Microsoft’s various announcement pieces and watched that video while I was waiting on OneDrive to work properly. I struggle to understand how the same company that makes a bad file syncing utility is also creating new states of matter. My brain is fully cooked.

In November 2023, two researchers at the University of California, Irvine, and their supervisor published “Dazed and Confused”, a working paper about Google’s reCAPTCHAv2 system. They wrote mostly about how irritating and difficult it is to use, and also explored its privacy and labour costs — and it is that last section about which I had some doubts when I first noticed the paper being passed around in July.

I was content to leave it there, assuming this paper would be chalked up as one more curiosity on a heap of others on arXiv. It has not been subjected to peer review at any journal, as far as I can figure out, nor can I find another academic article referencing it. (I am not counting the dissertation by one of the paper’s authors summarizing its findings.) Yet parts of it are on their way to becoming zombie statistics.

Mike Elgan, writing in his October Computerworld column, repeated the paper’s claim that “Google might have profited as much as $888 billion from cookies created by reCAPTCHA sessions”.

Ted Litchfield of PC Gamer included another calculation alleging solving CAPTCHAs “consum[ed] 7.5 million kWhs of energy[,] which produced 7.5 million pounds of CO2 pollution”; the article is headlined reCAPTCHAs “[…] made us spend 819 million hours clicking on traffic lights to generate nearly $1 trillion for Google”. In a Boing Boing article earlier this month, Mark Frauenfelder wrote:

[…] Through analyzing over 3,600 users, the researchers found that solving image-based challenges takes 557% longer than checkbox challenges and concluded that reCAPTCHA has cost society an estimated 819 million hours of human time valued at $6.1 billion in wages while generating massive profits for Google through its tracking capabilities and data collection, with the value of tracking cookies alone estimated at $888 billion.

I get why these figures are alluring. CAPTCHAs are heavily studied; a search of Google Scholar for “CAPTCHA” returns over 171,000 results. As you might expect, most are adversarial experiments, but there are several examining usability and others examining privacy. However, I could find just one previous paper correlating, say, emissions and CAPTCHA solving, and it was a joke paper (PDF) from the 2009 SIGBOVIK conference, “the Association for Computational Heresy Special Interest Group”. Choice excerpt: “CAPTCHAs were the very starting point for human computation, a recently proposed new field of Computer Science that lets computer scientists appear less dumb to the world”. Excellent.

So you can see why the claims of the U.C. Irvine researchers have resonated in the press. For example, here is what they — Andrew Searles, Renascence Tarafder Prapty, and Gene Tsudik — wrote in their paper (PDF) about emissions:

Assuming un-cached scenarios from our technical analysis (see Appendix B), network bandwidth overhead is 408 KB per session. This translates into 134 trillion KB or 134 Petabytes (194 x 1024 Terrabytes [sic]) of bandwidth. A recent (2017) survey estimated that the cost of energy for network data transmission was 0.06 kWh/GB (Kilowatt hours per Gigabyte). Based on this rate, we estimate that 7.5 million kWh of energy was used on just the network transmission of reCAPTCHA data. This does not include client or server related energy costs. Based on the rates provided by the US Environmental Protection Agency (EPA) and US Energy Information Administration (EIA), 1 kWh roughly equals 1-2.4 pounds of CO2 pollution. This implies that reCAPTCHA bandwidth consumption alone produced in the range of 7.5-18 million pounds of CO2 pollution over 9 years.

Obviously, any emissions are bad — but how much is 7.5–18 million pounds of CO2 over nine years in context? A 2024 working paper from the U.S. Federal Housing Finance Agency estimated residential properties each produce 6.8 metric tons of CO2 emissions per year from electricity and heating, or about 15,000 pounds. That means CAPTCHAs produced as much CO2 as providing utilities to 55–133 U.S. houses per year. Not good, sure, but not terrible — at least, not when you consider the 408 kilobyte session transfer against, say, Google’s homepage, which weighs nearly 2 MB uncached. CAPTCHAs are not a meaningful burden on the web or our environment.
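The chain of estimates above can be checked with a few lines of arithmetic. This is a sketch using only the figures quoted from the paper and the FHFA, so the outputs are rough orders of magnitude, not measurements:

```python
# Sanity check of the reCAPTCHA emissions arithmetic, using the
# published estimates quoted above (paper's figures, FHFA figure).
LBS_PER_METRIC_TON = 2204.6

# Network energy: 134 PB at the 2017 estimate of 0.06 kWh per GB
# lands near the paper's 7.5 million kWh.
bandwidth_gb = 134e6                 # 134 petabytes, decimal gigabytes
kwh = bandwidth_gb * 0.06

# Paper's stated CO2 range over nine years, converted into
# house-years using the FHFA's 6.8 metric tons per residence annually.
co2_low_lbs, co2_high_lbs = 7.5e6, 18e6
years = 9
house_lbs_per_year = 6.8 * LBS_PER_METRIC_TON    # about 15,000 lbs

houses_low = co2_low_lbs / years / house_lbs_per_year
houses_high = co2_high_lbs / years / house_lbs_per_year
print(f"{houses_low:.0f} to {houses_high:.0f} houses per year")
```

The result comes out in the mid-fifties to low hundreds of house-equivalents per year, which is where the 55–133 range above comes from.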

The numbers in this discussion are suspect. From these CO2 figures to the value of reCAPTCHA cookies — apparently responsible for nearly half of Google’s revenue since it acquired the company — I find the evidence for them lacking. Yet they continue to circulate in print and, now, in a Vox-esque mini documentary.

The video, on the CHUPPL “investigative journalism” YouTube channel, was created by Jack Joyce. I found it via Frauenfelder, of Boing Boing, and it was also posted by Daniel Sims at TechSpot and Emma Roth at the Verge. The roughly 17-minute mini-doc has been watched nearly 200,000 times, and the CHUPPL channel has over 350,000 subscribers. Neither number is massive for YouTube, but it is not a small number of viewers, either. Four of the ten videos from CHUPPL have achieved over a million views apiece. This channel has a footprint. But watching the first half of its reCAPTCHA video is what got me to open BBEdit and start writing this thing. It is a masterclass in how the YouTube video essay format and glossy production can mask bad journalism. I asked CHUPPL several questions about this video and did not receive a response by the time I published this.

Let me begin at the beginning:

How does this checkbox know that I’m not a robot? I didn’t click any motorcycles or traffic lights. I didn’t even type in distorted words — and yet it knew. This infamous tech is called reCAPTCHA and, when it comes to reach, few tools rival its presence across the web. It’s on twelve and a half million websites, quietly sitting on pages that you visit every day, and it’s actually not very good at stopping bots.

While Joyce provides sources for most claims in this video, there is not one for this specific number. According to BuiltWith, which tracks technologies used on websites, the claim is pretty accurate — BuiltWith sees reCAPTCHA used on about twelve million websites, and it is the most popular CAPTCHA script.

But Google has far more popular products than reCAPTCHA if it wants to track you across the web. Google Maps, for example, is on over 15 million live websites, Analytics is on around 31 million, and AdSense is on nearly 49 million. I am not saying we should not be concerned about reCAPTCHA because it is on only twelve million sites, but that number needs context. If Google wants to track user activity across the web, AdSense is explicitly designed for that purpose. Yes, it is probably true that “few tools rival its presence across the web”, but you can say that of just about any technology from Google, Meta, Amazon, Cloudflare, and a handful of other giants — but, especially, Google.

Back to the intro:

It turns out reCAPTCHA isn’t what we think it is, and the public narrative around reCAPTCHA is an impossibly small sliver of the truth. And by accepting that sliver as the full truth, we’ve all been misled. For months, we followed the data, we examined glossed over research, and uncovered evidence that most people don’t know exists. This isn’t the story of an inconsequential box. It’s the story of a seemingly innocent tool and how it became a gateway for corporate greed and mass surveillance. We found buried lawsuits, whispers of the NSA, and echoes of Edward Snowden. This is the story of the future of the Internet and who’s trying to control it.

The claims in this introduction vastly oversell what will be shown in this video. The lawsuits are not “buried”; they were linked from the reCAPTCHA Wikipedia article as it appeared before the video was published. The “whispers” and “echoes” of mass surveillance disclosures will prove to be based on almost nothing. There are real concerns with reCAPTCHA, and this video does justice to almost none of them.

The main privacy problems with reCAPTCHA are found in its ubiquity and its ownership. Google swears up and down it collects device and user behaviour data through reCAPTCHA only for better bot detection. It issued a statement saying as much to Techradar in response to the “Dazed and Confused” paper circulating again. In a 2021 blog post announcing reCAPTCHA Enterprise — the latest version combining V2, V3, and the mobile SDKs under a single brand — Google says:

Today, reCAPTCHA Enterprise is a pure security product. Information collected is used to provide and improve reCAPTCHA Enterprise and for general security purposes. We don’t use this data for any other purpose.

[…] Additionally, none of the data collected can be used for personalized advertising by Google.

Google goes on to explain that it collects data as a user navigates through a website to help determine if they are a bot without having to present a challenge. Again, it is adamant none of this data is used to feed its targeted advertising machine.

There are a couple of problems with this. First, because Google does not disclose exactly how reCAPTCHA works, its promise requires that you trust the company. It is not a great idea to believe the word of corporations in general. Specifically, in Google’s case, a leak of its search ranking signals last year directly contradicted its public statements. But, even though Google was dishonest then, there is currently no evidence reCAPTCHA data is being misused in the way Joyce’s video suggests. Coyly asking questions with sinister-sounding music underneath is not a substitute for evidence.

The second problem is the way Google’s privacy policy can be interpreted, as reported by Thomas Claburn in 2020 in the Register:

Zach Edwards, co-founder of web analytics biz Victory Medium, found that Google’s reCAPTCHA’s JavaScript code makes it possible for the mega-corp to conduct “triangle syncing,” a way for two distinct web domains to associate the cookies they set for a given individual. In such an event, if a person visits a website implementing tracking scripts tied to either of those two advertising domains, both companies would receive network requests linked to the visitor and either could display an ad targeting that particular individual.

You will hear from Edwards later in Joyce’s video making a similar argument. Just because Google can do this, it does not mean it is actually doing so. It has the far more popular AdSense for that.

ReCAPTCHA interacts with three Google cookies when it is present: AEC, NID, and OGPC. According to Google, AEC is “used to detect spam, fraud, and abuse” including for advertising click fraud. I could not find official documentation about OGPC, but it and NID appear to be used for advertising for signed-out users. Of these, NID is most interesting to me because it is also used to store Google Search preferences, so someone who uses Google’s most popular service is going to have it set regardless, and its value is fixed for six months. Therefore, it is possible to treat it as a unique identifier for that time.

I could not find a legal demand of Google specifically for reCAPTCHA history. But I did find a high-profile request to re-identify NID cookies. In 2017, the first Trump administration began seizing records from reporters, including those from the New York Times. The Times uses Google Apps for its email system. That administration and then the Biden one tried obtaining email metadata, too, while preventing Times executives from disclosing anything about it. In the warrant (PDF), the Department of Justice demands of Google:

PROVIDER is required to disclose to the United States the following records and other information, if available, for the Account(s) for the time period from January 14, 2017, through April 30, 2017, constituting all records and other information relating to the Account(s) (except the contents of communications), including:

[…]

Identification of any PROVIDER account(s) that are linked to the Account(s) by cookies, including all PROVIDER user IDs that logged into PROVIDER’s services by the same machine as the Account(s).

And by “cookies”, the government says that includes “[…] cookies related to user preferences (such as NID), […] cookies used for advertising (such as NID, SID, IDE, DSID, FLC, AID, TAID, and exchange_uid) […]” plus Google Analytics cookies. This is not the first time Google’s cookies have been used in intelligence or law enforcement matters — the NSA has, of course, been using them that way for years — but it is notable for being an explicit instance of tying the NID cookie, which is among those used with reCAPTCHA, to a user’s identity. (Google says site owners can use a different reCAPTCHA domain to disassociate its cookies.) Also, given the effort of the Times’ lawyers to release this warrant, it is not surprising I was unable to find another public document containing similar language. I could not find any other reporting on this cookie-based identification effort, so I think this is news. In this case, Google successfully fought the government’s request for email metadata.

Assuming Google retains these records, what the Department of Justice was demanding would be enough to connect a reCAPTCHA user to other Google product activity and a Google account holder using the shared NID cookie. Furthermore, it is a problem that so much of the web relies on a relative handful of companies. Google has long treated the open web as its de facto operating system, coercing site owners to use features like AMP or making updates to comply with new search ranking guidelines. It is not just Google that is overly controlling, to be fair — I regularly cannot access websites on my iMac because Cloudflare believes I am a robot and it will not let me prove otherwise — but it is the most significant example. Its fingers in every pie — from site analytics, to fonts, to advertising, to maps, to browsers, to reCAPTCHA — mean it has a unique vantage point from which to see how billions of people use the web.

These are actual privacy concerns, but you will learn none of them from Joyce’s video. You will instead be on the receiving end of a scatterbrained series of suggestions of reCAPTCHA’s singularly nefarious quality, driven by just-asking-questions conspiratorial thinking, without reaching a satisfying destination.

From here on, I am going to use timecodes as reference points. 1:56:

Journalists told you such a small sliver of the truth that I would consider it to be deceptive.

Bad news: Joyce is about to be fairly deceptive while relying on the hard work of journalists.

At 3:24:

Okay, you’re probably thinking “why does any of this matter?”, and I agree with you.

I did agree with you. I actually halted this investigation for a few weeks because I thought it was quite boring — until I went to renew my passport. (Passport status dot state dot gov.)

I got a CAPTCHA — not a checkbox, not fire hydrants, but the old one. And I clicked it. And it took me here.

The “here” Joyce mentions is a page at captcha.org, which is redirected from its original destination at captcha.com. The material is similar on both. The ownership of the .org domain is unclear, but the .com is run by Captcha, Inc., and it sells the CAPTCHA package used by the U.S. Department of State among other government departments. I have a sneaking suspicion the .org retains some ties to Captcha, Inc. given the DNS records of each. Also, the list of CAPTCHA software on the .org site begins with all the packages offered by Captcha, Inc., and its listing for reCAPTCHA is outdated — it does not display Google as its owner, for example — but the directory’s operators found time to add the recaptcha.sucks website.

About that. 4:07:

An entire page dedicated to documenting the horrors of reCAPTCHA: alleging national security implications for the U.S. and foreign governments, its ability to doxx users, mentioning secret FISA orders — the same type of orders that Edward Snowden risked his life to warn us about. […]

Who put this together? “Anonymous”.

if you are a web-native journalist, wishing to get in touch, we doubt you are going to have a hard-time figuring out who we are anyway.

This felt like a key left in plain sight, whispering there’s a door nearby and it’s meant to be opened. This is what we’re good at. This is what we do.

The U.S. “national security implications”, as you can see on screen as these words are being said, are not present: “stay tuned — it will be continued”, the message from ten years ago reads. The FISA reference, meanwhile, is a quote from Google’s national security requests page acknowledging the types of data it can disclose under these demands. It is a note that FISA exists and, under that law, Google can be compelled to disclose user data — a policy that applies to every company.

This all comes from the ReCAPTCHA Sucks website. On the About page, the site author acknowledges they are a competitor and maintains their anonymity is due to trademark concerns:

a free-speech / gripe-site on trademarked domains must not be used in a ‘bad faith’ — what includes promotion of competing products and services.

and under certain legal interpretations disclosing of our identity here might be construed as a promotion of our own competing captcha product or service.

it frustrates us indeed, but those are the rules of the game.

The page concludes, as Joyce quoted:

if you are a web-native journalist, wishing to get in touch, we doubt you are going to have a hard-time figuring out who we are anyway.

Joyce reads this as a kind of spooky challenge yet, so far as I can figure out, did not attempt to contact the site’s operators. I asked CHUPPL about this and I have not heard back. It is not very difficult to figure out who they are. The site has a shared technical infrastructure, including a historic Google Analytics account, with captcha.com. It feels less like the work of a super careful anonymous tipster, and more like an open secret from an understandably cheesed competitor.

5:05:

Okay, let’s get this out of the way: reCAPTCHA is not and really has never been very good at stopping bots.

Joyce points to the success rate of a couple of reCAPTCHA breakers here as evidence of its ineffectiveness, though does not mention they were both against the audio version. What Joyce does not establish is whether these programs were used much in the real world.

In 2023, Trend Micro published research into the way popular CAPTCHA solving services operate. Despite the seemingly high success rate of automated techniques, “they break CAPTCHAs by farming out CAPTCHA-breaking tasks to actual human solvers” because there are many more CAPTCHA systems out there than just reCAPTCHA. That is exactly how many CAPTCHA solvers market their services, though some are now saying they use A.I. instead. Also, it is not as though other types of CAPTCHAs are not subject to similar threats. In 2021, researchers solved hCaptcha (PDF) with a nearly 96% success rate. Being only okay at stopping bot traffic is not unique to reCAPTCHA, and these tools are just one of several technologies used to minimize automated traffic. And, true enough, none of these techniques is perfect, or even particularly successful. But that does not mean their purpose is nefarious, as Joyce suggests later in the video, at 11:45:

Google has said that they don’t use the data collected from reCAPTCHA for targeted advertising, which actually scares me a bit more. If not for targeted ads, which is their whole business model, why is Google acting like an intelligence agency?

Joyce does not answer this directly, instead choosing to speculate about a way reCAPTCHA data could be used to identify people who submit anonymous tips to the FBI — yes, really. More on that later.

5:49:

2018 was the launch of V3. According to researchers at U.C. Irvine, there’s practically no difference between V2 and V3.

Onscreen, Joyce shows an excerpt from the “Dazed and Confused” paper, and the sentence fragment “there is no discernable difference between reCAPTCHAv2 and reCAPTCHAv3” is highlighted. But just after that, you can see the sentence continues: “in terms of appearance or perception of image challenges and audio challenges”.

Screenshot from CHUPPL video showing excerpt from an academic paper.

Remember: these researchers were mainly studying the usability of these CAPTCHAs. This section is describing how users perceive the similar challenges presented by both versions. They are not saying V2 and V3 have “practically no difference” in general terms.

At 6:56:

ReCAPTCHA “takes a pixel-by-pixel fingerprint” of your browser. A real-time map of everything you do on the internet.

This part contains a quote from a 2015 Business Insider article by Lara O’Reilly. O’Reilly, in turn, cites research by AdTruth, then — as now — owned by Experian. I can find plenty of references to O’Reilly’s article but, try as I might, I have not been able to find a copy of the original report. But, as a 2017 report from Cracked Labs (PDF) points out, Experian’s AdTruth “provides ‘universal device recognition’”, “creat[ing] a ‘unique user ID’ for each device, by collecting information such as IP addresses, device models and device settings”. To the extent “pixel-by-pixel fingerprint” means anything in this context — it does not, but it misleadingly sounds to me like it is taking screenshots — Experian’s offering also fits that description. It is a problem that so many things quietly monitor user activity across people’s entire digital footprints.

Unfortunately, at 7:41, Joyce whiffs hard while trying to make this point:

If there’s any part of this video you should listen to, it’s this. Stop making dinner, stop scrolling on your phone, and please listen.

When I tell you that reCAPTCHA is watching you, I’m not saying that in some abstract, metaphorical way. Right now, reCAPTCHA is watching you. It knows that you’re watching me. And it doesn’t want you to know.

This stumbles in two discrete ways. First, reCAPTCHA is owned by Google, but so is YouTube. Google, by definition, knows what you are doing on YouTube. It does not need reCAPTCHA to secretly gather that information, too.

Second, the evidence Joyce presents for why “it doesn’t want you to know” is that Google has added some CSS to hide a floating badge, a capability it documents. This is for one presentation of reCAPTCHAv2, which is as invisible background validation and where a checkbox is shown only to suspicious users.

Screenshot from CHUPPL video.

I do not think Google “does not want you to know” about reCAPTCHA on YouTube. I think it thinks it is distracting. Google products using other Google technologies has not been a unique concern since the company merged user data and privacy policies in 2012.

The second half of the video, following the sponsor read, is a jumbled mess of arguments. Joyce spends time on a 2015 class action lawsuit filed against Google in Massachusetts alleging that completing the old-style word-based reCAPTCHA amounted to unpaid labour transcribing books. It was tossed in 2016 because the plaintiff (PDF) “failed to identify any statute assigning value to the few seconds it takes to transcribe one word”, and “Google’s profit is not Plaintiff’s damage”.

Joyce then takes us on a meandering journey through the way Google’s terms of use document is written — this is where we hear from Edwards reciting the same arguments as appeared in that 2020 Register article — and he touches briefly on the U.S. v. Google antitrust trial, none of which concerned reCAPTCHA. There is a mention of a U.K. audit in 2015 specifically related to its 2012 privacy policy merger. This is dropped with no explanation into the middle of Edwards’ questioning of what Google deems “security related” in the context of its current privacy policy.

Then we get to the FBI stuff. Remember earlier when I told you Joyce has a theory about how Google uses reCAPTCHA to unmask FBI tipsters? Here is when that comes up again:

Check this out: if you want to submit a tip to the FBI, you’re met with this notice acknowledging your right to anonymity. But even though the State Department doesn’t use reCAPTCHA, the FBI and the NSA do. […] If they want to know who submitted the anonymous report, Google has to tell them.

This is quite the theory. There is video of Edward Snowden and clips from news reports about the mysteries of the FISA court. Dramatic music. A chart of U.S. government requests for user data from Google.

But why focus on reCAPTCHA when the FBI and NSA — and a whole bunch of other government sites — also use Google Analytics? Though Google says Analytics cookies are distinct from those used by its advertising services, site owners can link them together, which would not be obvious to users. There is no evidence the FBI or any other government agency is doing so. The actual problem here is that sensitive and ostensibly anonymous government sites are using any Google services whatsoever, though that is probably because Google is a massive U.S. corporation with lots of widely-used products and services.

Even so, many federal sites use the product offered by Captcha, Inc. and it seems to respect privacy by being self-hosted. All of them should just use that. The U.S. government has its own analytics service; the stats are public. The reason for inconsistencies is probably the same reason any massive organization’s websites are fragmented: it is a lot of work to keep them unified.

Non-U.S. government sites are not much better. RCMP Alberta also uses Google Analytics, though not reCAPTCHA, as does London’s Metropolitan Police.

Joyce juxtaposes this with the U.S. Secret Service’s use of Babel Street’s Locate X data. He does not explain any direct connection to reCAPTCHA or Google, and there is a very good reason for this: there is none. Babel Street obtained some of its location data from Venntel, which is owned by Gravy Analytics, which obtained it from personalized ads.

Joyce ultimately settles on a good point near the end of the video, saying Google uses various browsing signals “before, during, and after” clicking the CAPTCHA to determine whether you are likely human. If it does not have enough information about you — “you clear your cookies, you are browsing Incognito, maybe you are using a privacy-focused browser” — it is more likely to challenge you.

None of this is actually news. It has all been disclosed by Google itself on its website and in a 2014 Wired article by Andy Greenberg, linked from O’Reilly’s Business Insider story. This is what Joyce refers to at 7:24 in the video in saying “reCAPTCHA doesn’t need to be good at stopping bots because it knows who you are. The new reCAPTCHA runs in the background, is invisible, and only shows challenges to bots or suspicious users”. But that is exactly how reCAPTCHA stops bots, albeit not perfectly: it either knows who you are and lets you through without a challenge, or it asks you for confirmation.
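The risk-based gate works roughly like this. The sketch below follows the score model Google documents for reCAPTCHAv3, which returns a value from 0.0 (likely automated) to 1.0 (likely human) and leaves the response to the site owner; the 0.5 default threshold is from Google’s documentation, while the 0.3 challenge cutoff is my invention for illustration:

```python
# A minimal sketch of the risk-based gate described above, following
# the pattern Google documents for reCAPTCHAv3: the API returns a
# score from 0.0 (likely automated) to 1.0 (likely human) and leaves
# the response to the site owner. The 0.3 challenge cutoff is
# invented; only the 0.5 default comes from Google's documentation.
def handle_visitor(score: float) -> str:
    if score >= 0.5:
        return "allow"      # enough signals: wave the visitor through
    if score >= 0.3:
        return "challenge"  # ambiguous: fall back to a visible check
    return "block"          # very low score: treat as automated

print(handle_visitor(0.9))  # signal-rich, recognized visitor
print(handle_visitor(0.1))  # cookie-less, private-browsing visitor
```

A visitor with a rich history of signals sails through; one who clears cookies or browses privately lands in the challenge or block branches, which matches the experience described above.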

It is this very frustration I have as I try to protect my privacy while still using the web. I hit reCAPTCHA challenges frequently, especially when working on something like this article, in which I often relied on Google’s superior historical index and advanced search operators to look up stories from ten years ago. As I wrote earlier, I run into Cloudflare’s bot wall constantly on one of my Macs but not the other, and I often cannot bypass it without restarting my Mac or, ironically enough, using a private browsing window. Because I use Safari, website data is deleted more frequently, which means I am constantly logging into services I use all the time. The web becomes more cumbersome to use when you want to be tracked less.

There are three things I want to leave you with. First, there is an interesting video to be made about the privacy concerns of reCAPTCHA, but this is not it. It is missing evidence, does not put findings in adequate context, and drifts conspiratorially from one argument to another while only gesturing at conclusions. Joyce is incorrect in saying “journalists told you such a small sliver of the truth that I would consider it to be deceptive”. In fact, they have done the hard work over many years to document Google’s many privacy failures — including in reCAPTCHA. That work should bolster understandable suspicions about massive corporations ruining our right to privacy. This video is technically well produced, but it is of shoddy substance. It does not do justice to the better journalists whose work it relies upon.

Second, CAPTCHAs offer questionable utility. As iffy as I find the data in the discussion section of the “Dazed and Confused” paper, its other findings seem solid: people find it irritating to label images or select boxes containing an object. A different paper (PDF) with two of the same co-authors and four other researchers found people like reCAPTCHA’s checkbox-only presentation best — the one that necessarily compromises user privacy — but also found some people will abandon tasks rather than solve a CAPTCHA. Researchers in 2020 (PDF) found CAPTCHAs were an impediment to people with visual disabilities. This is bad. Unfortunately, we are in a new era of mass web scraping — one reason I was able to so easily find many CAPTCHA solving services. Site owners wishing to control that kind of traffic have options like identifying user agents or I.P. address strings, but all of these can be defeated. CAPTCHAs can, too. Sometimes, all you can do is pile together a bunch of bad options and hope the result is passable.

Third, this is yet another illustration of how important it is for there to be strong privacy legislation. Nobody should have to question whether checking a box to prove they are not a robot is, even in a small way, feeding a massive data mining operation. We are never going to make progress on tracking as long as it remains legal and lucrative.

Do you remember the “Twitter Files”?

I completely understand if you do not. Announced with great fanfare by Elon Musk after his eager-then-reluctant takeover of the company, the project gave writers like Lee Fang, Michael Shellenberger, Rupa Subramanya, Matt Taibbi, and Bari Weiss access to internal records of historic moderation decisions. Each published long Twitter threads dripping in gravitas about their discoveries.

But after stripping away the breathless commentary and just looking at the documents as presented, Twitter’s actions did not look very evil after all. Clumsy at times, certainly, but not censorial — just normal discussions about moderation. Contrary to Taibbi’s assertions, the “institutional meddling” was research, not suppression.

Now, Musk works for the government’s DOGE temporary organization and has spent the past two weeks — just two weeks — creating chaos with vast powers and questionable legality. But that is just one of his many very real jobs. Another one is his ownership of X where he also has an executive role. Today, he decided to accuse another user of committing a crime, and used his power to suspend their account.

What was their “crime”? They quoted a Wired story naming six very young people who apparently have key roles at DOGE despite their lack of experience. The full tweet read:1

Here’s a list of techies on the ground helping Musk gaining and using access to the US Treasury payment system.

Akash Bobba

Edward Coristine

Luke Farritor

Gautier Cole Killian

Gavin Kliger

Ethan Shaotran

I wonder if the fired FBI agents may want dox them and maybe pay them a visit.

In the many screenshots I have seen of this tweet, few seem to include the last line as it is cut off by the way X displays it. Clicking “Show more” would have displayed it. It is possible to interpret this as violative of X’s Abuse and Harassment rules, which “prohibit[s] behavior that encourages others to harass or target specific individuals or groups of people with abuse”, including “behavior that urges offline action”.

X, as Twitter before it, enforces these policies haphazardly. The same policy also “prohibit[s] content that denies that mass murder or other mass casualty events took place”, but searching “Sandy Hook” or “Building 7” turns up loads of tweets which would presumably also run afoul. Turns out moderation of a large platform is hard and the people responsible sometimes make mistakes.

But the ugly suggestion made in that user’s post might not rise to the level of a material threat — a “crime”, as it were — and, so, might still be legal speech. Musk’s X also suspended a user who just posted the names of public servants. And Musk is currently a government employee in some capacity. The “Twitter Files” crew, ostensibly concerned about government overreach at social media platforms, should be furious about this dual role and heavy-handed censorship.

It was at this point in drafting this article that Mike Masnick of Techdirt published his impressions much faster than I could turn it around. I have been bamboozled by my day job. Anyway:

Let’s be crystal clear about what just happened: A powerful government official who happens to own a major social media platform (among many other businesses) just declared that naming government employees is criminal (it’s not) and then used his private platform to suppress that information. These aren’t classified operatives — they’re public servants who, theoretically, work for the American people and the Constitution, not Musk’s personal agenda.

This doesn’t just “seem like” a First Amendment issue — it’s a textbook example of what the First Amendment was designed to prevent.

So far, however, we have seen from the vast majority of them no exhausting threads, no demands for public hearings — in fact, barely anything. To his extremely limited credit, Taibbi did acknowledge it is “messed up”, going on to write:

That new-car free speech smell is just about gone now.

“Now”?

Taibbi is the only one of those authors who has written so much as a tweet about Musk’s actions. Everyone else — Fang, Shellenberger, Subramanya, and Weiss — has moved on to unsubstantive commentary about newer and shinier topics.

This is not mere hypocrisy. What Musk is doing is a far more explicit blurring of the lines between government power and platform speech permissions. This could be an interesting topic that a writer on the free speech beat might want to explore. But for a lot of them, it would align them too similarly to mainstream reporting, and their models do not permit that.

It is one of the problems with being a shallow contrarian. These writers must position themselves as alternatives to mainstream news coverage — “focus[ing] on stories that are ignored or misconstrued in the service of an ideological narrative”, “for people who dare to think for themselves”. How original. Because of that positioning, they suggest they cannot cover the same news — or, at least, not from a similar perspective — as in the mainstream. This is not actually true, of course: each of them frequently publishes hot takes about high-profile stories along their particular ideological bent, which often coincide with standard centre-right to right-wing thought. They are not unbiased. Yet this widely covered story has either escaped their attention, or they have mostly decided it is not worth mentioning.

I am not saying this is a conspiracy among these writers, or that they are lackeys for Musk or Trump. What I am saying is that their supposed principles are apparently only worth expressing when they are able to paint them as speaking truth to power, and their concept of power is warped beyond recognition. It goes like this: some misinformation researchers partially funded by government are “power”, but using the richest man in the world as a source is not. It also goes like this: when that same man works for the government in a quasi-official capacity and also owns a major social media platform, it is not worth considering those implications because Rolling Stone already has an article.

They can prove me wrong by dedicating just as much effort to exposing the blurrier-than-ever lines between a social media platform and the U.S. government. Instead, they are busy reposting glowing profiles of now-DOGE staff. They are not interested in standing for specific principles when knee-jerk contrarianism is so much more thrilling.


  1. There are going to be a lot of x.com links in this post, as it is rather unavoidable. ↥︎

This week, the United States Supreme Court heard arguments about whether it is legal to require that TikTok divest from its parent company by January 19 or be banned. You may know this as the “TikTok ban” because that is how it has been reported basically everywhere. Seriously — I was going to list some examples, but if you visit your favourite news publication, you will almost certainly see it called the “TikTok ban”.

Pedants would be right to point out this is not technically a ban. All TikTok needs to do is become incorporated with entirely different ownership, with the word “all” doing most of the work in that phrase. Consider a hypothetical demand by a populous country that Meta divest Instagram to continue its operations locally. Not only is that not easy, I strongly suspect the U.S. government would intervene in that circumstance. No country wants another to take away their soft power.

Coverage of Supreme Court hearings is always a little funny to read because the justices are, ostensibly, impartial adjudicators of the law who are just asking questions of both sides, and are not supposed to tip their hand. That means reporters end up speculating about the vibes. Amy Howe, syndicated at SCOTUSblog,1 reports the justices were “skeptical” and “divided over the constitutionality” of the law. CNN’s reporters, meanwhile, wrote that they “appeared likely to uphold a controversial ban on TikTok”. While some justices were not persuaded by the potential for manipulation, they did seem to agree on the question of user data. I also think privacy is important, and perhaps for some intersecting reasons, but targeting a single app is the dumbest way to resolve that particular complaint.

Mathew Ingram wrote a great piece calling this week’s proceedings a slide into “even stupider” territory, which could refer to just about anything. How about NBC News’ reporting that the Biden Administration is looking into “ways to keep TikTok available in the United States if a ban that’s scheduled to go into effect Sunday proceeds”? Yes, apparently the government which signed this into law with bipartisan urgency is now undermining its own position.

Alas, Ingram’s article has nothing to do with that, but it is worth your time. I want to highlight one paragraph, though, which I believe is not as clear as it could be:

We’ve had decades of fear-mongering about both American and foreign companies manipulating people’s minds, including the Cambridge Analytica scandal, but there’s no evidence that any of it has actually changed people’s minds. All of the Russian manipulation of Facebook and other platforms that allegedly influenced the 2016 election amounted to not much of anything, according to social scientists. I would argue that Fox News is a far bigger problem than Russia ever was. And even if the Chinese government forces TikTok to block mentions of Tiananmen Square (as it has forced Google to), it’s a massive leap to assume that this would somehow affect the minds of gullible young TikTok users in any significant way. In my opinion, people should be a lot more concerned about how Apple — despite all of its bragging about protecting the privacy of its users — gave the Chinese government effective control over all of its data.

I get the feeling the discussions about manipulating users’ opinions will be never-ending, as have those about, say, the influence of violence in video games. Two recent articles I found persuasive are one by Henry Farrell, and another by Charlie Warzel and Mike Caulfield, in the Atlantic, calling the internet a “justification machine”.

But to Ingram’s argument about Apple, it should be noted that it gave over control of data about users in China, not “all of its data”. This is probably still a bad outcome for most of those users, yes, but the way Ingram wrote this makes it sound as though the Chinese government has control over my Apple-stored data. As far as I am aware, that is not true.


  1. The publisher of SCOTUSblog is facing charges today of tax evasion through fraudulent employment schemes. ↥︎

Wired has been publishing a series of predictions about the coming state of the world. Unsurprisingly, most concern artificial intelligence — how it might impact health, music, our choices, the climate, and more. It is an issue of the magazine Wired describes as its “annual trends briefing”, but it is also kind of like a hundred-page op-ed section. It is a mixed bag.

A.I. critic Gary Marcus contributed a short piece about what he sees as the pointlessness of generative A.I. — and it is weak. That is not necessarily because of any specific argument, but because of how unfocused it is despite its brevity. It opens with a short history of OpenAI’s models, with Marcus writing “Generative A.I. doesn’t actually work that well, and maybe it never will”. Thesis established, he begins building the case:

Fundamentally, the engine of generative AI is fill-in-the-blanks, or what I like to call “autocomplete on steroids.” Such systems are great at predicting what might sound good or plausible in a given context, but not at understanding at a deeper level what they are saying; an AI is constitutionally incapable of fact-checking its own work. This has led to massive problems with “hallucination,” in which the system asserts, without qualification, things that aren’t true, while inserting boneheaded errors on everything from arithmetic to science. As they say in the military: “frequently wrong, never in doubt.”

Systems that are frequently wrong and never in doubt make for fabulous demos, but are often lousy products in themselves. If 2023 was the year of AI hype, 2024 has been the year of AI disillusionment. Something that I argued in August 2023, to initial skepticism, has been felt more frequently: generative AI might turn out to be a dud. The profits aren’t there — estimates suggest that OpenAI’s 2024 operating loss may be $5 billion — and the valuation of more than $80 billion doesn’t line up with the lack of profits. Meanwhile, many customers seem disappointed with what they can actually do with ChatGPT, relative to the extraordinarily high initial expectations that had become commonplace.

Marcus’ financial figures here are bizarre and incorrect. He quotes a Yahoo News-syndicated copy of a PC Gamer article, which references a Windows Central repackaging of a paywalled report by the Information — a wholly unnecessary game of telephone when the New York Times obtained financial documents with the same conclusion, and which were confirmed by CNBC. The summary of that Information article — that OpenAI “may run out of cash in 12 months, unless they raise more [money]”, as Marcus wrote on X — is somewhat irrelevant now after OpenAI proceeded to raise $6.6 billion at a staggering valuation of $157 billion.

I will leave analysis of these financials to MBA types. Maybe OpenAI is like Amazon, which took eight years to turn its first profit, or Uber, which took fourteen years. Maybe it is unlike either and there is no way to make this enterprise profitable.

None of that actually matters, though, when considering Marcus’ actual argument. He posits that OpenAI is financially unsound as-is, and that Meta’s language models are free. Unless OpenAI “come outs [sic] with some major advance worthy of the name of GPT-5 before the end of 2025”, the company will be in a perilous state and, “since it is the poster child for the whole field, the entire thing may well soon go bust”. But hold on: we have gone from ChatGPT disappointing “many customers” — no citation provided — to the entire concept of generative A.I. being a dead end. None of this adds up.

The most obvious problem is that generative A.I. is not just ChatGPT or other similar chat bots; it is an entire genre of features. I wrote earlier this month about some of the features I use regularly, like Generative Remove in Adobe Lightroom Classic. As far as I know, this is no different from something like OpenAI’s Dall-E in concept: it has been trained on a large library of images to generate something new. Instead of responding to a text-based prompt, it predicts how it should replicate textures and objects in an arbitrary image. It is far from perfect, but it is dramatically better than the healing brush tool before it, and clone stamping before that.

There are other examples of generative A.I. as features of creative tools. It can extend images and replace backgrounds pretty well. The technology may be mediocre at making video on its own terms, but it is capable of improving the quality of interpolated slow motion. In the technology industry, it is good at helping developers debug their work and generate new code.

Yet, if you take Marcus at his word, all of these things, and everything else generative A.I., “might turn out to be a dud”. Why? Marcus does not say. He does, however, keep underscoring how shaky he finds OpenAI’s business situation. But this Wired article is ostensibly about generative A.I.’s usefulness — or, in Marcus’ framing, its lack thereof — which is completely irrelevant to this one company’s financials. Unless, that is, you believe the reason OpenAI will lose five billion dollars this year is that people are unhappy with it, which is not the case. It simply costs a fortune to train and run.

The one thing Marcus keeps coming back to is the lack of a “moat” around generative A.I., which is not an original position. Even if this is true, I do not see this as evidence of a generative A.I. bubble bursting — at least, not in the sense of how many products it is included in or what capabilities it will be trained on.

What this looks like, to me, is commoditization. If there is a financial bubble, this might mean it bursts, but it does not mean the field is wiped out. Adobe is not disappearing; neither are Google or Meta or Microsoft. While I have doubts about whether chat-like interfaces will continue to be a way we interact with generative A.I., it continues to find relevance in things many of us do every day.

Catharine Tunney, CBC News:

Citing national security concerns, the federal government has ordered TikTok to close its Canadian operations — but users will still be able to access the popular app.

My position is that TikTok should not be banned; instead, governments should focus on comprehensive privacy legislation to protect users from all avenues of data exploitation. So it is kind of a good thing the Canadian government is not prohibiting the app or users’ access — except the government’s position appears to be entirely contradictory. It is very worried about user privacy:

Former CSIS director David Vigneault told CBC News it’s “very clear” from the app’s design that data gleaned from its users “is available to the government of China” and its large-scale data harvesting goals.

But laws drawn up in 2022 that would restrict these practices have been stuck in committee since May. So there is an ostensibly dangerous app posing a risk to Canadians, and the government’s response is to let people keep using it while shutting down the company’s offices? The Standing Committee on Industry and Technology had better get moving.