Adam Engst of TidBITS had some very kind things to say about my Liquid Glass piece. I also liked Engst’s attempt to answer the question of “why?”:

Why now? The answer may partly lie in available processing power. The balance between usability and aesthetics has always been informed by technical capabilities. Consider a few dates from Apple’s history: […]

Twenty-five years after alpha channels began appearing in our user interfaces, I think many of us have taken for granted the soft shadows and smooth corners enabled by translucent pixels. Back then, there were plenty of people who were worried about the performance impact of all these effects, just as there are now about Liquid Glass. I get it; I am not arguing that opinion is wrong or misguided.

Personally, however, I know my computer has capabilities wildly disproportionate to my actual use. Sometimes I will draw on that performance, but not most of the time. If a little bit of the M1 Pro in this Mac can go toward the stuff I see every day, I think that is a fair trade-off. If my iPhone can draw real-time lens distortion and chromatic aberration, that rocks. I think that is worth exploring.

Jason Koebler and Jules Roscoe, 404 Media:

To do this, we used a crowdsourced database of AI hallucination cases maintained by the researcher Damien Charlotin, which so far contains more than 410 cases worldwide, including 269 in the United States. Charlotin’s database is an incredible resource, but it largely focuses on what happened in any individual case and the sanctions against lawyers, rather than the often elaborate excuses that lawyers told the court when they were caught. Using Charlotin’s database as a starting point, we then pulled court records from around the country for dozens of cases where a lawyer offered a formal explanation or apology. Pulling this information required navigating clunky federal and state court record systems and finding and purchasing the specific record where the lawyer in question tried to explain themselves (these were often called “responses to order to show cause.”) We also reached out to lawyers who were sanctioned for using AI to ask them why they did it. Very few of them responded, but we have included explanations from the few who did.

A May 2024 Stanford study found A.I. legal research tools would invent case law in one-sixth to one-third of searches.

What is striking about 404’s reporting is how many of these lawyers simply disclaim responsibility. I know few people want to admit to being lazy and incautious, but the number of these expensive professionals who blame their assistants instead of taking responsibility for their own filings is shameful.

Anna Gross and Tim Bradshaw, Financial Times:

The UK government has issued a new order to Apple to create a backdoor into its cloud storage service, this time targeting only British users’ data, despite US claims that Britain had abandoned all attempts to break the tech giant’s encryption.

[…]

Apple made a complaint to the Investigatory Powers Tribunal over the original demand, backed by a parallel legal challenge from Privacy International and Liberty, another campaign group. That case was due to be heard early next year but the new order may restart the legal process.

When U.S. Director of National Intelligence Tulsi Gabbard announced in August that “the UK has agreed to drop its mandate for Apple to provide a ‘back door’”, I stressed the ambiguity in her statement. I had no additional information, but the wording of her tweet was vague.

Reporters like Tripp Mickle, at the New York Times, and Annabelle Timsit and Joseph Menn, of the Washington Post, were too eager to claim the U.K. would wholly abandon its pursuit of customer data. Neither allowed for different interpretations of Gabbard’s tweet. Journalists like these have sources who could have offered clarity. It is unclear in either article whether they did reach out to their contacts; if they did, their stories were misleading even with — or perhaps because of — that information.

Anyway, this sucks. I do not think Advanced Data Protection is coming back to the U.K. any time soon.

A brief, throat-clearing caveat: while I had written most of this pre-launch, I was unable to complete it by the time Apple shipped its annual round of operating system updates. Real life, and all that. I have avoided reading reviews; aside from the excerpt I quoted from Dan Moren’s, I have seen almost nothing. Even so, I am sure something I have written below will overlap with something written by somebody else. Instead of trying to weed out any similarities, I have written this note. Some people hire editors.

The Name

Here is a question: does anybody know what we were supposed to call the visual interface design direction Apple has pursued since 2013? For a company that likes to assign distinct branding to everything it makes, it is conspicuous that this visual language never had a name.

To be fair, iOS’s system theme was not branded when it was first shown in 2007. It inherited some of the Aqua qualities of Mac OS X, but with a twist: the backgrounds of toolbars were a muted blue instead of the saturated “Aqua” blue or any of Mac OS X’s window themes. The system font was Helvetica, not Lucida Grande. It was Aqua, but not exactly. Even after iOS 7 in 2013, a massive redesign intended, in part, to signal higher-level changes, no name was granted. The redesign was the rebrand. The system spoke for itself.

In MacOS, meanwhile, the “Aqua” branding felt increasingly tenuous to me following the system-wide redesigns of Yosemite — which Craig Federighi said “continue[d] this evolution” — and Big Sur.

Now, then: Liquid Glass.

The name is remarkably apt — definitely Apple-y, and perfectly descriptive of how the material looks and feels. In most contexts, it looks like slightly deep and modestly bevelled glass in either clear or slightly frosted finishes. Unlike familiar translucent materials that basically just blur whatever is behind them, Liquid Glass distorts as though it is a lens. It even, in some cases, displays chromatic aberration around the edges.

And then you start to move it around, and things get kind of strange. It can morph and flex when tapped, almost like pushing a drop of mineral oil, and it glows and enlarges, too. When a button is near enough to another, the edges of the tapped button may melt into those of the proximate one. When you switch between views, buttons might re-form into different buttons in the same area.

That alone fulfills the “liquid” descriptor, but it is not the first time Apple has used the term. Since 2018, it has described high-resolution LCD displays with small bezels and non-zero corner radii — like the one on my MacBook Pro — as “Liquid Retina displays”. One might reasonably wonder if there is a connection. I consulted my finest Apple-to-English decoder ring and it appears Apple is emphasizing another defining characteristic of the Liquid Glass design language, which is that each part of the visual interface is, nominally, concentric with the bezel and corner radius of a device’s display. Am I reaching too hard? Is Apple? Who can say?

Apple’s operating systems have shared a familial look-and-feel, but Liquid Glass is the first time they also share a distinct name and specific form. That seems to me like a significant development.

My experiences with Liquid Glass have been informed by using iOS 26 since June on my iPhone 15 Pro, and MacOS 26 Tahoe since July on my 14-inch MacBook Pro. For a redesign justified by its apparent uniformity across Apple’s product lineup, this is an admittedly narrow slice of use cases, and I am sure there will be dozens of other reviews from people more immersed in Apple’s ecosystem than I presently am. However, I also think they are the two platforms telling the most substantial parts of the Liquid Glass story. The Mac is Apple’s longest-running platform and users have certain specific expectations; the iPhone is by far Apple’s most popular product. I do not think my experience is as comprehensive as those with access to more of Apple’s hardware, but I do think these are the two platforms where Apple needs to get things right.

I also used both systems almost exclusively in light mode. I briefly tested them in dark mode, but I can only describe how the visual design has changed in my day-to-day use, and that is light mode all the time.

The Material

The first thing you should know about the Liquid Glass material is that it is not exactly VisionOS for the Apple products you actually own. Sure, Apple may say it was “[i]nspired by the depth and dimensionality of VisionOS”, but it has evolved well beyond that system’s frosted glassy slabs with slightly recessed text entry fields. This is something else entirely, and careful observers will note it is a visual language coming to all of Apple’s platforms this year except VisionOS. (Well, and the HomePod, if you want to be pedantic.)

The second thing you need to know is that, while visually newsworthy, this material barely alters the fundamentals of using any of Apple’s devices, which makes sense. If you were creating a software update for billions of devices, you, too, would probably think twice about radically changing all those environments. Things looking different will assuredly be a big enough change for some people.

The Liquid Glass material is most often used in container components — toolbars, buttons, menus, the MacOS Dock — but it is used for its own sake for the clock numerals on the Lock Screen. I think this will be its least controversial application. The Lock Screen clock is kind of useful, but also kind of decorative. (Also, and this is unrelated to Liquid Glass, but I cannot find another place to put this: the default typeface for the clock on the Lock Screen now stretches vertically. I like the very tall numerals and the very cool thing is that it vertically compresses as notifications and Live Activities are added to your Lock Screen.) If you do not like the glassy texture on the clock, you can select a solid colour. Everyone can be happy.

That is not the case elsewhere or for other components.

Translucency has been a standard tool in a visual interface designer’s toolbox for as long as alpha channels have been supported. It has been a defining characteristic of Apple’s operating systems since the introduction of Aqua. Translucency helps reduce the weightiness of onscreen elements, and can be used to imply layering and a sense of transience. But there is a key problem: when something in a user interface is not entirely opaque, it is not possible to predict what will be behind it.

Obviously.

Less obvious is how a designer should solve for the legibility of things a translucent element may contain, particularly text. While those elements may have specific backgrounds of their own, like text used in a button, there is often also plain text, as in window titles and copy. Icons, too, may not have backgrounds, or may be quite small or thin. The integrity of these elements may not be maintained if they are not displayed with sufficient contrast.

An illustration of decreasing contrast as background opacity and colours change. The rectangles are white at 60% opacity.
Three blocks of lorem ipsum placeholder text set in a dark green. The left column has a near-white background; the middle column an orange background; the right column a photo of a leaf as the background.

However, the impression of translucency is usually at odds with legibility. Say you have a panel-type area containing some dark text. The highest contrast can be achieved by making the panel’s background white. There is no way to make this panel entirely white and have it be interpreted as translucent. As the opacity of the panel’s background drops, so does the contrast whenever it appears over anything else that is not itself entirely white.
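The trade-off can be made concrete with a little arithmetic. Here is a sketch assuming sRGB colours and the WCAG 2.x contrast formula; the specific colour values are illustrative, not Apple’s. It composites a white panel at decreasing opacity over a dark background and measures the contrast of black text against the result:

```python
# Sketch of the opacity-versus-contrast trade-off: as a white panel's
# opacity drops over a dark backdrop, contrast with dark text drops too.

def srgb_to_linear(c: float) -> float:
    """Convert one sRGB channel (0-1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[float, float, float]) -> float:
    """WCAG relative luminance of an sRGB colour."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(rgb1, rgb2) -> float:
    """WCAG contrast ratio, from 1:1 up to 21:1."""
    l1, l2 = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def composite(fg, alpha, bg):
    """Source-over compositing of a translucent panel on a background."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

black_text = (0.0, 0.0, 0.0)
white_panel = (1.0, 1.0, 1.0)
dark_photo = (0.2, 0.3, 0.2)   # an illustrative dark background

for alpha in (1.0, 0.8, 0.6, 0.4):
    panel = composite(white_panel, alpha, dark_photo)
    print(f"panel opacity {alpha:.0%}: contrast {contrast(black_text, panel):.1f}:1")
```

A fully opaque white panel yields the maximum 21:1 ratio; every step down in opacity over this backdrop drops the ratio, which is the whole dilemma in one loop.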

So designers have lots of little tricks they can play. They can decorate the text with outlines, shadows, and glows, as Microsoft did for various elements in Windows Vista. This works but is not particularly elegant, especially for text longer than a few words. Designers also blur the background, a common trick used in every major operating system today, and they adjust the way the foreground inherits the tones and shades of the background.

Windows Vista. Notice the white glow behind the text in application title bars, and the blurred background elements. (Image from the OpenGL Pipeline Newsletter.)
A screenshot of Windows Vista’s Aero user interface.

The Liquid Glass texture is more complex than Apple’s background materials or Microsoft’s Acrylic. It warps and distorts at the edges, and the way it blurs background layers is less diffuse. There are inset highlights and shadows, too, and all of these effects sell the illusion of depth better. It is as much a contemporary engineering project as a reflection of the intent of Apple’s human interface designers, far more so than a raster-based interface could be. That it is able to achieve such complex material properties in real-time without noticeably impacting performance or, in my extremely passive observations, battery life, is striking.

I do not think all these effects necessarily help legibility, which is as poor as it has ever been in translucent areas. The degree to which this is noticeable is dependent on the platform. In iOS 26, I find it less distracting, I think largely because it exists in the context of a single window at a time (picture-in-picture video being the sole exception). That means there is no expectation of overlapping active and inactive windows and, so, no chance that overlapping elements within a window could be mistaken for a different window on top.

Legibility problems are also reduced by how much is moving around on a display at any time. Yes, there are times when I cannot clearly read a text label in an iOS tab bar or a menu item in MacOS, but as soon as I scroll, legibility is not nearly as much of an issue. I do not wish to minimize this; I think text labels should be legible in every situation. But it is better in use than the sense you might get from the still screenshots you have seen in this article and elsewhere.

Apple also tries to solve legibility by automatically flipping the colour of the glass depending on the material behind it. When the glass is overtop a lighter-coloured area, the glass is light with dark text and icons; when it is on top of a darker area, the glass is dark with light-coloured text and icons. If Apple really wanted to improve the contrast of the toolbar, it would have done the opposite. These compensations do not trigger immediately so, when scrolling through a document containing a mix of lighter and darker areas, there is not as much flashing between the two states as you might expect. It is Apple’s clever solution to a problem Apple created.
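Apple has not documented how this switching works, but it behaves like a luminance threshold with some damping. A hypothetical sketch of that idea follows; the Rec. 709 luminance weights are standard, but the 0.5 threshold, the hysteresis band, and the sampling are my assumptions, not Apple’s published values:

```python
# Hypothetical model of light/dark glass flipping based on backdrop
# brightness. The hysteresis band prevents rapid flicker while scrolling
# across mixed light and dark content.

def backdrop_luminance(pixels: list[tuple[float, float, float]]) -> float:
    """Average perceptual luminance (Rec. 709 weights) of sampled pixels."""
    return sum(0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in pixels) / len(pixels)

class GlassAppearance:
    """Flips between light glass (dark text) and dark glass (light text)."""

    def __init__(self, threshold: float = 0.5, band: float = 0.1):
        self.threshold = threshold
        self.band = band
        self.mode = "light"  # light glass, dark text and icons

    def update(self, pixels) -> str:
        lum = backdrop_luminance(pixels)
        if self.mode == "light" and lum < self.threshold - self.band:
            self.mode = "dark"   # dark glass, light text and icons
        elif self.mode == "dark" and lum > self.threshold + self.band:
            self.mode = "light"
        return self.mode
```

The hysteresis band is the interesting part: a backdrop hovering near the cut-off does not trigger a flip, which would explain why the compensation does not fire immediately while scrolling through mixed content.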

With all the places the Liquid Glass texture has been applied in MacOS, you might believe it would make an appearance in the Menu Bar, too, since that has sported a translucent background since Leopard. But you would be wrong. In fact, in MacOS 26, the Menu Bar often has no background at all. The system decides whether the menu titles and icons should be shown in white or black based on the desktop picture, and then drops them right on top of it. Occasionally, it will show a gradient or shadow, sometimes localized to either side of the Menu Bar. As with the other uses of translucency, legibility has clearly been considered, and I have not had difficulty reading menu items — but, also like the other translucent elements, this would never be a problem if the Menu Bar had a solid background.

Menu Bar without a gradient background
Menu Bar with a full gradient background
Menu Bar with a gradient background on the left side
Menu Bar with a gradient background on the right side

Here is the thing, though: Liquid Glass mostly — mostly — feels at home on the iPhone. Yes, Apple could have avoided legibility problems entirely by not being so enamoured of translucency, but it does have alluring characteristics. It creates a very cool feeling of true dimensionality. It is also a more direct interpretation of the hardware on which these systems run, the vast majority of which have gloss-finish glassy screens. Glass onscreen feels like a natural extension of this. I get it. I do not love it, but I feel like I understand it on the iPhone far more than I do on my Mac.

The animations — the truly liquid-feeling part of this whole thing — are better seen than described. I will try, but do not worry: there are visual aids coming. The buttons for tools float in glassy bubble containers in a layer overtop the application. Now imagine those buttons morphing into new bubbly buttons and toolbar areas as you move from one screen to another. When there are two buttons, they may become a unified blob on a different screen. For example, in the top-right of the Library section of the Music app, there is an account button, and a button labelled “⋯” which shows a menu containing only a single “Edit Sections” item. Tapping on “Playlists” transforms the two button shapes into a single elongated capsule enclosing three buttons. Tapping “Artists” condenses the two into a single sorting button. Tapping “Genres” simply makes the two buttons fade away, as there are no buttons in the top-right of this section.

Though these animations are not nearly as fluid as they were first shown, they help justify the “liquid” part of the name, and Apple is proud enough of them to call them out in its press release. Their almost complete absence on MacOS is therefore notable. There are a handful of places they appear, like in Spotlight, but MacOS feels less committed to Liquid Glass as a result. When menus are summoned, they simply appear without any dramatic animation. Buttons and menus do not have the stretchy behaviour of their iOS counterparts. To be sure, I am confident those animations in MacOS would become tiresome in a matter of minutes. But if MacOS is better for being less consistent with iOS in this regard, that seems to me like a good argument against forcing cross-platform user interface unification.

The System

Strictly speaking, Liquid Glass describes only this material, but the redesign does not begin and end there. Apple has refreshed core elements across the entire system, from toolbars and buttons to toggle controls and the Dock. And, yes, application icons.

I have already written about the increasing conformity of app icons within MacOS, which has brought them into complete alignment with their iOS counterparts, down to the number of dots across the top of the Notes icon. If there are any differences in icons on MacOS and iOS for the same system app, they are insignificant. Regardless of how one may feel about this change — personally, I am aghast — it was made in concert with the Liquid Glass personality. Icons are now more than bitmap images at set sizes. They are multilayer documents, and each layer can have distinct effects applied. The whole icon also appears to have a polished edge which, by default, falls on the upper-left and bottom-right, as though it is lit at a 45° angle.

My Home Screen with icons tinted to match the background. When you select tinted icons, iOS provides the option of a colour picker.
iOS 26 Home Screen with slate-tinted glassy icons.

If these icons have an advantage, it is that Apple is now allowing more user customization than ever. In addition to light and dark modes, application icons can now be displayed in clear and tinted states. Designers can specify new icons; for apps that have not been updated, iOS converts the existing icon automatically, with mixed results. And, as with the other icon display modes, this also affects widgets on the Home Screen and icons across the system. Clear and tinted both look like frosted glass and have similar dimensional effects as other Liquid Glass elements, though one is — obviously — tinted. I can see this being a boon to people who use a photo as their wallpaper, though it comes at the expense of icon clarity and designer intention.

The party trick on the iPhone’s Home Screen is that each of these layers and the glassy edge respond to the physical orientation and movement of your device. Widgets and folders on the Home Screen also get that shine and they, too, respond to device movement. Sometimes. The shine on the App Library overview page does not respond to motion, but the app icons within a category do. App icons in Spotlight are not responsive to motion, either, and nor are the buttons in the bottom corners of the Lock Screen. The clock on the Lock Screen responds, but the notifications just below it on the very same screen do not. This inconsistency feels like a bug, but I do not think it is. I do not love this effect; I simply think similar things should look and behave similarly.

One of the things Apple is particularly proud of is how the shapes of the visual interface now reflect the shapes in its hardware, particularly in how they neatly nest inside each other. Concentricity is nothing new to its industrial design language. The company has, for decades, designed devices with shapes that nestle comfortably within each other. Witness, for example, the different display materials on the iMac G4 and the position of the camera on the back of the original iPhone. It is not even new in Apple’s software: the rounded corners of application icons mimic the original iPhone’s round corners; the accessory pairing sheet is another example. But accentuating the roundedness of the display corners is now a systemwide mandate. Application windows, toolbars, sidebars, and other elements have been redrawn as concentric roundrects, perfectly seated inside each other and within the rounded rectangle of a modern Apple device’s display. Or, at least, that is the theory.
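Apple has not published the rule it uses for Liquid Glass, but the standard way to keep nested rounded rectangles optically concentric is to subtract the inset from the outer corner radius. A minimal sketch, with illustrative numbers rather than Apple’s actual metrics:

```python
def concentric_radius(outer_radius: float, inset: float) -> float:
    """Corner radius for a shape inset within a rounded container, keeping
    both corners concentric (sharing a centre point). Clamped at zero so a
    deep inset degrades to a square corner rather than a negative radius."""
    return max(outer_radius - inset, 0.0)

# e.g. a toolbar inset 8 pt inside a display with 55 pt corner radii
assert concentric_radius(55.0, 8.0) == 47.0

# a deeply inset element falls back to square corners
assert concentric_radius(10.0, 16.0) == 0.0
```

This also hints at why resolution scaling breaks the effect: the display’s physical corner radius is fixed, so when the logical resolution changes, a hard-coded window radius no longer matches the rescaled display radius.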

In reality, only some of Apple’s devices have displays with four rounded corners: Apple Watches, iPhones, and iPads. The displays of recent MacBook Airs and MacBook Pros have rounded corners at the top, but are squared-off at the bottom. Neither the iMac nor either of Apple’s external displays has rounded corners at all. Yet all of these devices have inherited the same bubbly design language with dramatically rounded application windows.

Screenshot of a Finder window in MacOS Tahoe showing a grid of application icons.

Perhaps I am taking this too literally. Then again, Apple is the one saying application windows are no longer “configured for rectangular displays”, and that they now fit the “rounded corners of modern hardware”. Regardless of the justification, I quite like the roundness of these windows. Perhaps it is simply the newness, but they make applications seem friendlier and softer. I understand why they are controversial; the large radius severely restricts what can be present in the corners, thus lowering the information density of an application window. It seems Apple agrees it is more appropriate in some apps than in others — app windows in System Information and Terminal have a much smaller corner radius.

Still, the application windows which do gain the more generously rounded corners are appreciably concentric to my MacBook Pro’s display corners at default scaling (1512 × 982) and at one tick scaled down (1800 × 1169). But at one tick scaled up (1352 × 878) the corners are no longer concentric to the display corners, and now feel overlarge and intrusive in the application area.

Even on a device with four rounded display corners, this dedication to concentricity is not always executed correctly. My iPhone 15 Pro, for example, has corners with a slightly smaller radius than an iPhone 16 Pro. The bottom corners of the share sheet on my device are cramped, nearly touching the edge of the display at their apex.

Screenshot of the lower part of the iOS share sheet in which the bottom rounded corners are nearly touching the device bezel.

Then there are the issues caused by this dedication to concentricity. Look again at that Finder window screenshot above and pay attention to the buttons in the toolbar. In particular, notice how the icon in the item grouping button — the solitary one between the view switcher and the group that includes the sharing button — looks like it is touching the rounded edge.

Maps on iOS has a different kind of concentricity issue. When the search area is in a retracted state, the container around the search bar does not align with the left and right edges of the buttons above it, in a way that does not feel deliberate. I assume this is because it follows the curves of the display corners with an equal distance on all sides. When it is in an expanded state, it becomes wider than the buttons above it. At least — unlike the Share sheet — its bottom corners are rounded correctly on my iPhone.

Two screenshots of the Maps app within iPhone frames, with vertical lines showing the alignment of the buttons described above.

I could keep going with my nitpicks, so I shall. The way toolbars and their buttons are displayed on MacOS is, at best, something to get used to, though I have tried and failed. Where there was once a solid area for tools, there is now, in many apps, a gradient with floating buttons. The gradient is both a fill and a progressive blur, which I think is unattractive.

A screenshot of the Preview app's toolbar with a PDF document open.

This area is not very tall, which means a significant amount of the document encroaches into its lower half. In light mode, the background of a toolbar is white. The backgrounds of toolbar buttons are also white. Buttons are differentiated by nothing more than a diffuse shadow. The sidebar is now a floating roundrect. The glyphs in sidebar items and toolbar buttons are near-black. The shapeless action buttons in Finder are grey. Some of these things were present in previous versions of MacOS, but the sum of this design language is the continued reduction of contrast in user interface elements to, I think, its detriment.

Apple justifies these decisions by saying its redesigned interfaces are “bringing greater focus to content”. I do not accept that explanation. Instead of placing tools in a distinct and separated area, they bleed into your document, thus gaining a similar level of importance as the document itself. I have nothing beyond my own experience to back this up. Perhaps Apple has user studies suggesting something different; if it does, I think it should publicly document its research. But, in my experience, the more the interface blends with what I am looking at, the less capable I am of ignoring it. Clarity and structure are sacrificed for the illusion of simplicity offered by a monochromatic haze of an interface.

Even if I bought that argument, I do not understand why it makes sense to make an application’s tools visually recede. While I am sometimes merely viewing a document, I am very often trying to do something to it. I want the most common actions I can take to be immediately obvious. For longtime Mac users, the structure of most apps has not changed, and one can rely on muscle memory in familiar apps. But that is more like an excuse for why this redesign is not as bad as it could be, not a justification for why it is an improvement.

Then there are the window controls. The sidebar in an application is now depicted in a floating state which, Apple says, is “informed by the ambient environment within the app”, meaning it reflects the colours of elements around it. This includes colours from outside the app, which often makes the sidebar look translucent to the windows underneath it, defying all logic. The sidebar reflects nearby colours even if you enable “Reduce Transparency” in Accessibility settings, despite the translucent impression that reflection creates. And the window controls are set inside this sidebar which, because it is floating, makes it look like those controls act on the sidebar, not the application window.

Since the sidebar is now apparently overtop a window, stuff can be displayed underneath it. If you have seen any screenshots of this in action, it has probably been of the Music app, because few other applications do this, because why would you want stuff under the sidebar?

Here is the Music app screenshot I am obliged to include.
The Music app, with a couple of rows of tiles scrolled horizontally so some of the tiles are underneath the sidebar.

In the Photos app, the floating sidebar reminds me of the palettes Apple used to ship in iPhoto and Aperture. Those palettes allowed you to edit a photo full-screen on a large display, and you could hide and show the tools with a keystroke. A floating sidebar combined with a hard-gradient toolbar is a distracting combination. Whatever benefit it is supposed to impart is lost on me.

Photos running on MacOS 26.
A screenshot of Photos with an image zoomed-in so that the sidebar is partially overlapping the photo.

I expected Apple to justify this on the basis that it maintains context or something, but it does not. Its Human Interface Guidelines only say this is done to “reinforce the separation and floating appearance of the sidebar”, though this is not applied consistently. In a column view in Finder, for example, there is a hard vertical edge below the rounded corner of the ostensibly floating sidebar. I am sure there are legibility reasons to do this but, again, it is a solution to a problem Apple created. It reimagined sidebars as a floating thing because it looks cool, then realized it does not work so well with the best Finder layout and built a fairly unrefined workaround.

The bottom right corner of the sidebar in Finder has a hard edge that breaks the impression it is floating.
A screenshot of a Finder window with an inset showing the bottom-right corner of the sidebar.

I am spending an awful lot of words on the MacOS version because I think it is the less successful of the two Liquid Glass implementations I have used. MacOS still works a lot like MacOS. But it looks and feels like someone dictated, context-free, that it needed to reflect the redesign of iOS.

The iOS implementation is more successful since Liquid Glass feels — and I mean feels — like something designed first for touch-based systems. There is an increasingly tight relationship between the device and its physical environment. Longstanding features like True Tone meet new (well, new-ish) shifting highlights that respond to physical device orientation, situating the iPhone within its real-world context. Yet, even in its best implementation on iOS, Liquid Glass looks out of place when it is used in apps that rely on layouts driven by simple shapes and clean lines.

The Clock app is a great example of this clashing visual language. Each of its functions consists mostly of a black screen, with white numerals and lines, and maybe a pop of colour — the second hand in the stopwatch, or the green start button for timers. Then you tap or slide on the toolbar at the bottom to move through the app and, suddenly, a hyper-realistic glassy lens appears.

The Calculator app is another place where the limited application of Liquid Glass feels wrong. The buttons are drawn in some kind of glass texture — they are translucent and stretch in the same way as menus do — but the ring of shimmering highlight is so thin it may as well not exist. Apple does say in its Human Interface Guidelines that Liquid Glass should be used “sparingly”, but it uses the texture everywhere. There are far more generous buttons in Control Centre and on the passcode entry screen that feel more satisfying to press. Also, even though the buttons in Calculator are nominally translucent, the orange ones remain vibrant despite being presented against a solid black background.

This confused approach to visual design is present throughout the system. It has been there for years to some extent — Books has a realistic page flip animation, and Notes retained a paper texture for years after the iOS 7 redesign. But Liquid Glass is such a vastly different presentation compared to the rest of iOS that it stands out. When some elements have such a dynamic and visually rich presentation while others are plain, the combination does not feel harmonious. It feels unfinished.

This MacOS update is not all bad on a design front, to be fair. Sidebar icons now have a near-black fill instead of an application-specific colour; they gain the highlight colour when a particular sidebar item is active. This has the downside of making applications less distinct from one another, but it is a contrast improvement in a user interface that is mostly full of regressions. Also, inactive application windows are more obvious, with mid-grey toolbar items, window widgets, document icons, and window titles. On an iPhone, the biggest piece of user interface good news is a sharp reduction in the number of modal dialogs. They are not entirely banished — not even close — but fewer whole-screen takeovers is good news on today’s larger-screened devices. The second piece of good news is the new design of edit menus, which are no longer restricted to horizontal scrolling and can expand into vertical-scrolling context menus. Also, on the Lock Screen, you can now move the widgets row to the bottom of the screen, and I quite like that.

There are enhancements downstream from the floating controls paradigm reinforced in this Liquid Glass update. In many iOS applications with a large list view — Messages and Mail, for example — the search field is now positioned at the bottom, within easier reach. Floating controls do not require the Liquid Glass material; the Safari redesign in iOS 15 now seems like a preview of where Apple was headed, and it obviously does not use these glassy controls. But I think the reconsidered approach in iOS 26 is more successful in part because the controls have this glassy quality.

There is, in fact, quite a lot to like in Apple’s operating system updates this year that has nothing to do with user interface changes. This is not a full review, so I will give you some quick hits, starting with the new call screening feature. I have had it switched on since I upgraded in June and it is a sincere life improvement. I still get three to six scam calls daily, but now my phone hardly ever notifies me, and I can still receive legitimate calls from numbers not in my contacts. Bringing Preview to iOS is an upgrade for anyone who spends huge chunks of time marking up PDF documents.

Spotlight on MacOS is both way more powerful and way easier to use — if you want to just search for files and not applications, or vice-versa, you can filter it. Oh, and there is now a clipboard history feature which, smartly, is turned off by default.

Oh, and you need to try “Spatial Scene” photos. If you have not tried this feature already, go and do it, especially on your Lock Screen. I have tried it with photos taken on iPhones, pictures shot with my digital camera, and even film scans, and I have been astonished at how it looks and, especially, feels. I have had the best results when I start with portraits from my digital camera; perhaps unsurprisingly, the spatial conversion is only as good as the quality of the source photo. For a good Lock Screen image, especially if you want to overlap the clock, you will want a picture with reasonably clear background separation and with a generous amount of space around the subject. There is a new filter in the Lock Screen image picker with good suggestions for Spatial Scene conversions. Again, you will want to try this.

And there are downsides to the two operating system updates I have used. Both are among the buggiest releases I can remember, likely in part because of the visual refresh. There are functional bugs, there are performance problems, and there are plenty of janky animations. There are so many little things that make the system feel fragile — the Wallpaper section of Settings, for example, has no idea widgets can now be aligned to the bottom of the Lock Screen, so they overlap with the clock. I hope this stuff gets fixed. Unfortunately, even though these operating systems are named for the coming calendar year, Apple will be shifting engineering efforts to the OS 27 releases in a matter of months.

The ‘Why’ Of It All

I kept asking myself “why?” as I used iOS 26 and MacOS 26 this summer. I wanted to understand the rationale for a complete makeover across Apple’s entire line of products. What was the imperative for unifying the systems’ visual interface design language? Why this, specifically?

Come to think of it, why is this the first time all of the operating systems are marketed with the same version number? And why did Apple decide this was the right time to make a dedicated “operating system” section on its website to show how it delivers a “more consistent experience” between devices? I have no evidence Apple would want to unify under some kind of “Apple OS” branding, but if Apple did want to make such a change, this feels like a very Apple-y way to soft-launch it. After all, your devices already run specific versions of Safari and Siri without them needing to be called “Mac Safari” and “Watch Siri”. Just throwing that thought into the wind.

If anything like that pans out, it could explain why Apple sees its products as needing a unified identity. In the present, however, it suggests a change in how Apple presents its approach to the design of its products. Public statements for the past twenty-plus years have communicated the importance of letting each product be true to itself. It would be easy to dismiss this as marketing pablum if not for how reliably it has been backed by actual evidence. Yes, lines have become blurrier on the developer side with technologies like Catalyst, and on the user side by allowing iPhone and iPad apps to be run within MacOS. But a nominally unified look and feel makes the erosion of these boundaries even more obvious.

Perhaps I am overthinking this. It could simply be an exercise in branding. Apple’s operating systems have shared a proprietary system typeface for a decade without it meaning anything much more than a unified brand. And it is Apple’s brand that comes out on top when applications look the same no matter where they are used. In my experience so far, developers who strictly adhere to Apple’s recommendations and fully embrace Liquid Glass end up with applications having little individual character. This can sometimes work to a developer’s benefit, if their intention is for their apps to blend into the native experience, but some developers have such specific visual styles that an Apple-like use of Liquid Glass would actually be to their detriment. The updates to Cultured Code’s Things are extremely subtle, which is, I think, the right call: I want Things to look like Things, not a generic to-do app.

A uniform look-and-feel across not just Apple’s apps and systems, but also third-party apps, is a most cynical answer to the question of why? and, while I do not wish to entirely dismiss it, it would disappoint me if this was Apple’s goal. What I think is true about this explanation is how Liquid Glass across most operating systems makes it possible for any app to instantly feel like it is platform-native, even when it is not.

Or maybe the why? of it all is for some future products, like a long-rumoured touch-screen laptop. This rationale drove speculation the last time Apple updated the design of MacOS, and we still do not have touch-screen Macs, so I am skeptical.

The frustrating thing about the answers I have given above to the question of why? is that I am only speculating. So far, Apple justifies this redesign, basically, by saying it is self-evidently good for all of its platforms to look the same. This is an inadequate explanation, and it is not borne out in my actual day-to-day use. I think iOS is mostly fine; Liquid Glass feels suited to a whole-screen-app touch-based context. In MacOS, it feels alien, unsuited to a multi-window keyboard-and-pointer system.

I am sure this visual language will be refined. I hope it has good bones since Apple is very obviously committed to Liquid Glass and its sea of floating buttons. But so far, it does not feel ready. I spent the summer using MacOS in its default configuration, aching to turn on “Reduce Transparency” in Accessibility settings. It is not pretty, especially in application toolbars, but it is less distracting because different parts of an application have their own distinct space.

I have tried, in this overview and critique, to be cautious about how much I allow the newness of it to colour my perception. Aqua was a polarizing look when it was introduced in Mac OS X. Leander Kahney, in a December 2000 article for Wired, wrote about longtime users who were downright offended by its appearance in the then-current Public Beta, relying on utilities to “Macify” the Mac. Again, this was in 2000, sixteen years after the Mac was introduced. As of today, Aqua has been around in some form for twenty-five years, over nine years longer than the Mac had existed when Aqua debuted. But it at least felt like a complete idea; in his review of Mac OS X Leopard, John Siracusa wrote of how it was “a single, internally consistent design from top to bottom”.

These new operating systems do not feel like they are achieving that level of consistency despite being nominally more consistent across a half-dozen platforms. MacOS has received perhaps the most substantial visual changes, yet it is full of workarounds and exceptions. The changes made to iOS feel surface-level and clash with the visual language established since iOS 7. I am hopeful for the evolution of these ideas into something more cohesive. Most software is a work-in-progress, and the user interface is no exception. But all I can reflect upon is what is before me today. Quite simply, not only is it not ready, but I am also concerned about what it implies about Apple’s standards. The best-case scenario is that it is setting up something really great and it all makes sense in hindsight. But I still have to live with it, in this condition, on today’s hardware that is, to me, less of a showcase for Apple’s visual design cleverness and more of a means to get things done. It is not a tragedy, but I would like to fast-forward through two or three years’ worth of updates to get to a point where, I hope, it is much better than it is today.

Robert Graham, clarifying the bad reporting of the big SIM farm bust in New York:

The Secret Service is lying to the press. They know it’s just a normal criminal SIM farm and are hyping it into some sort of national security or espionage threat. We know this because they are using the correct technical terms that demonstrate their understanding of typical SIM farm crimes. The claim that they will likely find other such SIM farms in other cities likewise shows they understand this is a normal criminal activity and not any special national security threat.

One of the things we must always keep in mind is that press releases are written to persuade. That is as true for businesses as it is for various government agencies. In this case, the Secret Service wanted attention, so they exaggerated the threat. And one wonders why public trust in institutions is falling.

Something I missed in posting about Apple’s critical appraisal of the Digital Markets Act is its timing. Why now? Well, it turns out the European Commission sought feedback beginning in July, and with a deadline of just before midnight on 24 September. That is why it published that statement, and why Google did the same.

Oliver Bethell, Google’s “senior director, competition”, a job title which implies a day spent chuckling to oneself:

Consider the DMA’s impact on Europe’s tourism industry. The DMA requires Google Search to stop showing useful travel results that link directly to airline and hotel sites, and instead show links to intermediary websites that charge for inclusion. This raises prices for consumers, reduces traffic to businesses, and makes it harder for people to quickly find reliable, direct booking information.

Key parts of the European tourism industry have already seen free, direct booking traffic from Google Search plummet by up to 30%. A recent study on the economic impact of the DMA estimates that European businesses across sectors could face revenue losses of up to €114 billion.

The study in question, though published by Copenhagen Business School, was funded by the Computer &amp; Communications Industry Association, a tech industry lobbying firm funded in part by Google. I do not have the background to assess whether the paper’s conclusions are well-founded, but it should be noted the low end of the paper’s estimates was a loss of €8.5 billion, or just 0.05% of total industry revenue (page 45). The same lobbyists also funded a survey (PDF) conducted online by Nextrade Group.

Like Apple, Google clearly wants this law to go away. It might say it “remain[s] committed to complying with the DMA” and that it “appreciate[s] the Commission’s consistent openness to regulatory dialogue”, but nobody is fooled. To its credit, Google posted the full response (PDF) it sent the Commission which, though clearly defensive, has less of a public relations sheen than either of the company’s press releases.

In 2023 Lina Khan, then-chair of the U.S. Federal Trade Commission, sued Amazon over using (PDF) “manipulative, coercive, or deceptive user-interface designs known as ‘dark patterns’ to trick consumers into enrolling in automatically-renewing Prime subscriptions” and “knowingly complicat[ing] the cancellation process”. Some people thought this case was a long-shot, or attempted to use Khan’s scholarship against her.

Earlier this week, the trial began to adjudicate the government’s claims, which, in addition to accusing Amazon itself, also involved charges against company executives. It was looking promising for the FTC.

Annie Palmer, CNBC:

The FTC notched an early win in the case last week when U.S. District Court Judge John Chun ruled Amazon and two senior executives violated the Restore Online Shoppers’ Confidence Act by gathering Prime members’ billing information before disclosing the terms of the service.

Chun also said that the two senior Amazon executives would be individually liable if a jury sides with the FTC due to the level of oversight they maintained over the Prime enrollment and cancellation process.

Then, just two days into the trial, the FTC announced it had reached a settlement:

The Federal Trade Commission has secured a historic order with Amazon.com, Inc., as well as Senior Vice President Neil Lindsay and Vice President Jamil Ghani, settling allegations that Amazon enrolled millions of consumers in Prime subscriptions without their consent, and knowingly made it difficult for consumers to cancel. Amazon will be required to pay a $1 billion civil penalty, provide $1.5 billion in refunds back to consumers harmed by their deceptive Prime enrollment practices, and cease unlawful enrollment and cancellation practices for Prime.

As usual for settlements like these, Amazon will admit no wrongdoing. The executives will not face liability, something Adam Kovacevich, head of the Chamber of Progress, a tech industry lobbying group, said today was a “wild … theory” driven by “Khan’s ego”. Nonsense. The judge in the case, after saying Amazon broke the law, gave credence to the idea that these executives could be held personally liable for the harm they were alleged to have caused.

Former FTC commissioner Alvaro Bedoya on X:

Based on my initial read, do the executives need to do anything separate from that? Do they pay any fines? Are they being demoted? Are they subject to extra monitoring? Do they need to admit any guilt whatsoever? The answers, as far as I can tell are no, no, no, no, and no. What’s worse, the order applies to the executives for only three years — seven years less than the company.

Two-and-a-half billion is a lot of dollars in the abstract. CIRP estimates there are 197 million U.S. subscribers to Amazon Prime, which costs anywhere from $7 to $15 per month. For the sake of argument, assume everyone is — on average — on the annual plan of $11.58 per month. It will take barely more than one billing cycle for Amazon to recoup that FTC settlement. The executives previously charged will bear little responsibility for this outcome.
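For what it is worth, the back-of-envelope arithmetic holds up. Here is a quick sketch using the figures above (the 197 million subscriber estimate, the $11.58-per-month annual plan, and the combined $2.5 billion in penalty and refunds), with the caveat that these are cited estimates, not verified numbers:

```python
# Rough estimate of how quickly Prime revenue covers the FTC settlement.
# All inputs are the estimates cited above, not independently verified.
subscribers = 197_000_000       # CIRP estimate of U.S. Prime subscribers
monthly_fee = 11.58             # annual plan, expressed per month
settlement = 1e9 + 1.5e9        # $1B civil penalty plus $1.5B in refunds

monthly_revenue = subscribers * monthly_fee   # about $2.28 billion
billing_cycles = settlement / monthly_revenue
print(round(billing_cycles, 2))               # prints 1.1
```

Just over one month of U.S. Prime revenue, on these assumptions, pays for the whole settlement.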

Those million-dollar inauguration “investments”, as CBS News put it, sure are paying off.

Catharine Tunney, CBC News:

The immensely popular social media app TikTok has been collecting sensitive information from hundreds of thousands of Canadians under 13 years old, a joint investigation by privacy authorities found.

[…]

The privacy commissioners said TikTok agreed to enhance its age verification and provide up-front notices about its wide-ranging collection of data.

Off the top, the Privacy Commissioner’s report was limited in scope and did not examine “perceived risks to national security” since they were not related to “privacy in the context of commercial activity” and have been adjudicated elsewhere. The results of national security reviews by other agencies have not been published. However, the Commissioner’s review of the company’s privacy practices is still comprehensive for what was in scope.

TikTok detects and removes about 500,000 accounts of Canadian children under 13 annually. Yet even though the company has dedicated significant engineering effort to estimating users’ ages for advertising and recommendation purposes, it has not developed similar capabilities for restricting minors’ access.

Despite my skepticism of the Commissioner’s efficacy in cases like these, this investigation produced a number of results. TikTok made several changes as the investigation progressed, including restricting ad targeting to minors:

As an additional measure, in its response to the Offices’ Preliminary Report of Investigation, TikTok committed to limit ad targeting for users under 18 in Canada. TikTok informed the Offices that it implemented this change on April 1st, 2025. As a result, advertisers can no longer deliver targeted ads to users under 18, other than according to generic data (such as language and approximate location).

This is a restriction TikTok has in place for some regions, but not everywhere. It is not unique to TikTok, either; Meta and Google targeted minors, and Meta reportedly guessed teens’ emotional state for ad targeting purposes. This industry cannot police itself. All of these companies say they have rules against ad targeting to children and have done so for years, yet all of them have been found to ignore those rules when they are inconvenient.

Apple issued a press release criticizing the E.U.’s Digital Markets Act in a curious mix of countries. It published it on its European sites — of course — and in Australia, Canada, New Zealand, and the United States, all English-speaking. It also issued the same press release in Brazil, China, Guinea-Bissau, Indonesia, and Thailand — and a handful of other places — but not in Argentina, India, Japan, Mexico, or Singapore. Why this mix? Why did Apple bother to translate it into Thai but not Japanese? It is a fine mystery. Read into it what you will.

Anyway, you will be amazed to know how Apple now views the DMA:

It’s been more than a year since the Digital Markets Act was implemented. Over that time, it’s become clear that the DMA is leading to a worse experience for Apple users in the EU. It’s exposing them to new risks, and disrupting the simple, seamless way their Apple products work together. And as new technologies come out, our European users’ Apple products will only fall further behind.

[…]

That’s why we’re urging regulators to take a closer look at how the law is affecting the EU citizens who use Apple products every day. We believe our users in Europe deserve the best experience on our technology, at the same standard we provide in the rest of the world — and that’s what we’ll keep fighting to deliver.

It thinks the DMA should disappear.

Its reasoning is not great; Michael Tsai read the company’s feature delays more closely and is not convinced. One of the delayed features is Live Translation, about which I wrote:

This is kind of a funny limitation because fully half the languages Live Translation works with — French, German, and Spanish — are the versions spoken in their respective E.U. countries and not, for example, Canadian French or Chilean Spanish. […]

Because of its launch languages, I think Apple expects this holdup will not last for long.

I did not account for a cynical option: Apple is launching with these languages as leverage.

The way I read Apple’s press release is as a fundamental disagreement between the role each party believes it should play, particularly when it comes to user privacy. Apple seems to believe it is its responsibility to implement technical controls to fulfill its definition of privacy and, if that impacts competition and compatibility, too bad. E.U. regulators seem to believe the law provides policy protections for user privacy, and that users should get to decide how their private data is shared.

Adam Engst, TidBits:

Apple’s claim of “the same standard we provide in the rest of the world” rings somewhat hollow, given that it often adjusts its technology and services to comply with local laws. The company has made significant concessions to operate in China, doesn’t offer FaceTime in the United Arab Emirates, and removes apps from the still-functional Russian App Store at the Russian government’s request. Apple likely pushed back in less public ways in those countries, but in the EU, this public statement appears aimed at rallying its users and influencing the regulatory conversation.

I know what Engst is saying here, and I agree with the sentiment, but this is a bad group of countries to be lumped in with. That does not mean the DMA is equal to the kinds of policies that restrict services in these other countries. It remains noteworthy how strict Apple is in restricting DMA-mandated features only to countries where they are required, but you can just change your region to work around the UAE FaceTime block.

Oscar Godsell, Sky News:

The opposition’s shadow finance minister James Paterson has since urged the Australian Labor government to follow suit.

Mr Paterson told Sky News if the US was able to create a “safer version” of TikTok, then Australia should liaise with the Trump administration to become part of that solution.

“It would be an unfortunate thing if there was a safe version of TikTok in the United States, but a version of TikTok in Australia which was still controlled by a foreign authoritarian government,” he said.

I am not sure people in Australia are asking for yet more of the country’s media to be under the thumb of Rupert Murdoch. Then again, I also do not think the world needs more social media platforms controlled by the United States, though that is very clearly the wedge the U.S. government is creating: countries can accept the existing version of TikTok, adopt the new U.S.-approved one, or ban them both. The U.S. spinoff does not resolve user privacy problems and it raises new concerns about the goals of its government-friendly ownership and management.

Do you manage a Patreon page as a “creator”? I do; it is where you can give me five dollars per month to add to my guilt over not finishing my thoughts about Liquid Glass.1 You do not have to give me five dollars. I feel guilty enough as it is.

Anyway, you might have missed an email Patreon sent today advising you that Autopilot will be switched on beginning October 1 unless you manually turn it off. According to Patreon’s email:

Autopilot is a growth engine that automatically sends your members and potential members strategic, timely offers which encourage them to join, upgrade, or retain your membership — without you having to lift a finger.

As an extremely casual user, I do not love this; I think it is basically spam. I am sympathetic toward those who make their living with Patreon. I turned this off. If you have a Patreon creator page and missed this email, now you know.

And if you are a subscriber to anyone on Patreon and begin receiving begging emails next week, please be gracious. They might not be aware this feature was switched on.


  1. I am most looking forward to reading others’ reviews when I am done, which I have so far avoided so my own piece is not tainted. ↥︎

Tonight, I set up a new Apple TV — well, as “new” as a refurbished 2022-though-still-current-generation model can be — and it was not a good time. I know Apple might be releasing a new model later this year, but any upgrades are probably irrelevant for how I have used my existing ten-year-old model. I do not even have a 4K television.

My older model has some drawbacks. It is pretty slow, and the storage space is pitiful — I think it is the 32 GB model — so it keeps offloading apps. What I wanted to do was get a new one and bump the old Apple TV to my kitchen, where I have a receiver and a set of speakers I have used with Bluetooth, and then I would be able to AirPlay music in all my entertaining spaces. Real simple stuff.

Jason Snell, in a sadly still-relevant Six Colors article:

The setup starts promisingly: You can bring your iPhone near the Apple TV, and it will automatically log your Apple ID in. If you’ve got the One Home Screen feature turned on, all your apps will load and appear in all the right places. It will feel like you’ve done a data transfer.

But it’s all a mirage.

One Home Screen is a nice feature, but it’s not an iCloud backup of your Apple TV, nor is it the Apple TV equivalent of Migration Assistant. It is exactly what its name suggests — a home-screen-syncing feature and nothing more.

I went into this upgrade realizing my wife and I would need to set up all our streaming apps again. (She was cool with it.) That is not great, but at least I had that expectation.

But even the “promising” parts of the setup experience did not work for me. When I brought my iPhone near the new Apple TV, it spun before throwing a mysterious error. After setting it up manually, it thought it was not connected to Wi-Fi — even though it was — and then it tried syncing the home screen. Some of the apps were right, but it had not synced all of them, and none of them were in the correct position.

Then I opened Music on my phone to try and AirPlay to both Apple TVs, only to find it was not listed. It turns out that is a separate step. I had to add it to my Home, which again involved me bringing my iPhone into close proximity and tapping a button. This failed the three times I tried it. So I restarted my Apple TV and my phone, and then Settings told me I needed to complete my Home setup. I guess it worked but somehow did not move to the next step. At last, AirPlay worked — and, frankly, it is pretty great.

I know bugs happen about as often as blog posts complaining about bugs. This thing is basically an appliance, though. I am glad Apple ultimately did not make a car.

The U.S. Secret Service:

The U.S. Secret Service dismantled a network of electronic devices located throughout the New York tristate area that were used to conduct multiple telecommunications-related threats directed towards senior U.S. government officials, which represented an imminent threat to the agency’s protective operations.

This protective intelligence investigation led to the discovery of more than 300 co-located SIM servers and 100,000 SIM cards across multiple sites.

That sure is a lot of SIM cards, and a scary-sounding mix of words in the press release:

  • “[…] telecommunications-related threats directed towards senior U.S. government officials […]”

  • “[…] these devices could be used to conduct a wide range of telecommunications attacks […]”

  • “These devices were concentrated within 35 miles of the global meeting of the United Nations General Assembly […]”

Reporters pounced. The New York Times, NBC News, CBS News, and even security publications like the Record seized on dramatic statements like those, as well as one said by a special agent in a video the Service released: “this network had the potential to […] essentially shut down the cellular network in New York City”. Scary stuff.

When I read the early reports, it sure looked to me like some reporters were getting a little out over their skis.

For a start, emphasizing the apparent proximity to the U.N. in New York seems to me like a stretch. A thirty-five mile area around the U.N. looks like this — and that is diameter, not radius. If you cannot see that or this third-party website goes away at some point, that is a circle encompassing just about the entire island of Manhattan, going deep into Brooklyn and Queens, stretching all the way up to Chappaqua, and out into Connecticut and New Jersey. That is a massive area. One could just as easily say it was within thirty-five miles of any number of New York-based landmarks and be just as accurate.
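The scale argument is easy to check. A quick sketch, assuming the linked map treats thirty-five miles as the circle’s diameter, and comparing both readings of the Secret Service’s figure (the Manhattan land-area comparison is my addition, not from the press release):

```python
import math

# Area covered by "within 35 miles of" the U.N., under both readings.
as_diameter = math.pi * (35 / 2) ** 2   # about 962 square miles
as_radius = math.pi * 35 ** 2           # about 3,848 square miles
manhattan = 23                          # Manhattan's land area, square miles

print(round(as_diameter))               # prints 962
print(round(as_radius))                 # prints 3848
print(round(as_diameter / manhattan))   # roughly 42 Manhattans, at minimum
```

Even the smaller reading covers dozens of Manhattans’ worth of land, which is why proximity to the U.N. carries so little meaning.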

Second, the ability to “facilitat[e] anonymous, encrypted communication between potential threat actors and criminal enterprises” is common to basically any internet-connected device. The scale of this one is notable, but you do not need a hundred thousand SIM cards to make criminal plans. And the apparent possibility of “shut[ting] down the cellular network in New York” is similarly common to any large-scale installation. This is undeniably peculiar and huge, and it seems nefarious, but a lot of it reads like a red herring.

Andy Greenberg, Lily Hay Newman, and Matt Burgess, Wired:

Despite speculation in some reporting about SIM farm operation that suggests it was created by a foreign state such as Russia or China and used for espionage, it’s far more likely that the operation’s central focus was scams and other profit-motivated forms of cybercrime, says Ben Coon, who leads intelligence at the cybersecurity firm Unit 221b and has carried out multiple investigations into SIM farms. “The disruption of cell services is possible, flooding the network to the degree that it couldn’t take any more traffic,” Coon says. “My gut is telling me there was some type of fraud involved here.”

These reporters point to a CNN article by John Miller and Celina Tebor elaborating on the threat to “senior U.S. government officials”: they were swatting calls targeting various lawmakers. Not nothing and certainly dangerous, but this is not looking anything like how many reporters have described it, nor what the U.S. Secret Service is suggesting through its word choices.

This story of how Full Fact geolocated a viral video claiming to be shot in London is intriguing because it disproves its own headline’s claim that “A.I. helped”.

Charlotte Green, Full Fact:

But in this case, directly reverse image searching through Google took me to a TikTok video with a location marker for ‘Pondok Pesantren Al Fatah Temboro’, in Indonesia.

This is enough information to give the Full Fact team a great start: translated, it is a school in Temboro.

Green:

We found a slightly different compilation of similar videos on Facebook, seemingly from the same area, also with women in Islamic dress, but with more geographical features visible, such as a sign and clearer views of buildings.

Using stills from this video as references, we asked the AI chatbot ChatGPT if it could provide coordinates to the location, using the possible location of the Al Fatah school in Indonesia.

Up to the point where ChatGPT was invoked, there is no indication any A.I. tools were used. After that — and I do not intend to be mean — it is unclear to me why anyone would ask ChatGPT for coordinates to a known, named location when you can just search Google Maps. It is the third one down in my searches; the first two would quickly be eliminated when comparing to either video.

Green:

But this did not match the location of the original video we were trying to fact check—or anywhere in the near vicinity. While we were very confident the video had been filmed in Temboro, we needed to investigate further to prove this.

After this, no A.I. tools were used. ChatGPT was only able to do as much as a basic Google Maps search. After that, Full Fact had to do some old-fashioned comparative geolocation, and were ultimately successful.

I found this via Charles Arthur, who writes:

And thus we see the positive uses of geolocation by chatbots.

On the contrary, this proved little about the advantages of A.I. geolocation. These tools can certainly be beneficial; Green links to a Bellingcat experiment comparing them to Google’s reverse image search tools.

I think Full Fact did great work in geolocating this video and deflating its hateful context in that tweet. But a closer reading of the actual steps taken shows any credit to ChatGPT or A.I. is overblown.

Allison Smith, Modern Retail (via Michael Tsai):

Amazon revealed at its annual Accelerate seller conference in Seattle that it is shutting down its long-running “commingling” program — a move that drew louder applause from sellers than any other update of the morning.

The decision marks the end of a controversial practice in which Amazon pooled identical items from different sellers under one barcode. The system, intended to speed deliveries and save warehouse space, had also allowed counterfeit or expired goods to be mixed in with authentic ones, according to The Wall Street Journal. For years, brands complained that commingling made it difficult to trace problems back to specific sellers and left their reputations vulnerable when customers received knockoffs. In 2013, Johnson & Johnson temporarily pulled many of its consumer products from Amazon, arguing the retailer wasn’t doing enough to curb third-party sales of damaged or expired goods.

I had no idea Amazon did this, nor that it has been doing so for at least twelve years, until I complained on Mastodon about how terrible its shopping experience is and Ben replied referencing this practice. I am certain I have received counterfeit products more than once from Amazon, and I think this is how it happened.

John Walker, Kotaku:

Rather than because of wifi, the reason this happened is because these so-called AIs are just regurgitating information that has been parsed from scanning the internet. It will have been trained on recipes written by professional chefs, home cooks and cookery sites, then combined this information to create something that sounds a lot like a recipe for a Korean sauce. But it, not being an intelligence, doesn’t know what Korean sauce is, nor what recipes are, because it doesn’t know anything. So it can only make noises that sound like the way real humans have described things. Hence it having no way of knowing that ingredients haven’t already been mixed — just the ability to mimic recipe-like noises. The recipes it will have been trained on will say “after you’ve combined the ingredients…” so it does too.

I would love to know how this demo was supposed to go. In an ideal world, is it supposed to walk you through the preparation ingredient-by-ingredient? If Jack Mancuso had picked up the soy sauce, would it have guided him through pouring the recipe-suggested amount? That would be impressive, if it had worked. The New York Times’ tech reporters got to try the glasses for about thirty minutes and, while they shared no details, said it was “as spotty as Mr. Zuckerberg’s demonstration”.

I think Walker is too hard on the faux off-the-cuff remarks, though they are mock-worthy in the context of the failed demo. But I think the diagnosis of this is entirely correct: what we think of as “A.I.” is kind of overkill for this situation. I can see some utility. For example, I could not find a written recipe that exactly matched the ingredients on Mancuso’s bench, but perhaps Meta’s A.I. software can identify the ingredients, and assume the lemons are substituting for rice vinegar. Sure. After that, what would actually be useful is a straightforward recitation of a specific recipe: measure out a quarter-cup of soy sauce and pour it into a bowl; next, stir in one tablespoon of honey — that kind of thing. This is pretty basic text-to-speech stuff, though it would be cool if it could respond to questions like how much ginger?, and did I already add the honey?, too.

Also, I would want to know which recipe it was following. A.I. has a terrible problem with not crediting its sources of information in general, and it is no different here.

Also — and this probably goes without saying — even if these glasses worked as well as Meta suggests they should, there is no way I would buy a pair. You mean to tell me that I should strap a legacy of twenty years of privacy violations and user hostility to my face? Oh, please.

In 2018, the Toronto Star and CBC News jointly published an investigation into Ticketmaster’s sales practices:

Data journalists monitored Ticketmaster’s website for seven months leading up to this weekend’s show at Scotiabank Arena, closely tracking seats and prices to find out exactly how the box-office system works.

Here are the key findings:

  • Ticketmaster doesn’t list every seat when a sale begins.

  • It hikes prices mid-sale.

  • It collects fees twice on tickets scalped on its site.

Dave Seglins, Rachel Houlihan, Laura Clementson, CBC News:

Posing as scalpers and equipped with hidden cameras, the journalists were pitched on Ticketmaster’s professional reseller program.

[…]

TradeDesk allows scalpers to upload large quantities of tickets purchased from Ticketmaster’s site and quickly list them again for resale. With the click of a button, scalpers can hike or drop prices on reams of tickets on Ticketmaster’s site based on their assessment of fan demand.

Ticketmaster, of course, disputed these journalists’ findings. But the very existence of TradeDesk — owned by Ticketmaster — seems to be in direct opposition to Ticketmaster’s obligations to purchasers. One part of the company is ostensibly in the business of making sure legitimate buyers acquire no more than their fair share of tickets to a popular show, while another part facilitates easy reselling at massive scale. The TradeDesk platform is not something accessible by just anyone; you cannot create an account on demand. Someone from Ticketmaster has to set up your TradeDesk account for you.

These stories have now become a key piece of evidence in a lawsuit filed by the U.S. Federal Trade Commission against Live Nation, the owner of Ticketmaster:

The FTC alleges that in public, Ticketmaster maintains that its business model is at odds with brokers that routinely exceed ticket limits. But in private, Ticketmaster acknowledged that its business model and bottom line benefit from brokers preventing ordinary Americans from purchasing tickets to the shows they want to see at the prices artists set.

The complaint’s description (PDF) of the relationship between Ticketmaster and TradeDesk, beginning at paragraph 84 and continuing through paragraph 101, is damning. If true, Ticketmaster must be aware of the scalper economy it is effectively facilitating through TradeDesk.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

With over 5,000 five-star reviews, Magic Lasso Adblock is simply the best ad blocker for your iPhone, iPad, and Mac.

Magic Lasso Adblock: No ads, no trackers, no annoyances, no worries

Designed from the ground up to protect your privacy, Magic Lasso blocks all intrusive ads, trackers, and annoyances. It stops you from being followed by ads around the web and, with App Ad Blocking, it stops your app usage from being harvested by ad networks.

So, join over 350,000 users and download Magic Lasso Adblock today.

Rani Molla, Sherwood News:

While the prerecorded videos of the products in use were slick and highly produced, some of the live demos simply failed.

“Glasses are the ideal form factor for personal superintelligence because they let you stay present in the moment while getting access to all of these AI capabilities to make you smarter, help you communicate better, improve your memory, improve your senses,” CEO Mark Zuckerberg reiterated at the start of the event, but the ensuing bloopers certainly didn’t make it feel that way.

I like that Meta took a chance with live demos but, in addition to the bloopers, Connect felt like another showcase of an inspiration-bereft business. The opening was a more grounded — figuratively and literally — version of the Google Glass skydive from 2012. Then, beginning at about 52 minutes, Zuckerberg introduced the wrist-based control system, saying “every new computing platform has a new way to interact with it”, paraphrasing a piece of the Macworld 2007 iPhone introduction. It is not that I am offended by Meta cribbing others’ marketing. What I find amusing, more than anything, is Zuckerberg’s clear desire to be thought of as an inventor and futurist, despite having seemingly few original ideas.

If you want reviews of the iPhone 17 — mostly the Pro — from the perspective of photography, two of the best come from Chris Niccolls and Jordan Drake of PetaPixel and Tyler Stalman. Coincidentally, both from right here in Calgary. I am not in the market for an upgrade, but I think these are two of the most comprehensive and interesting reviews I have seen specifically about the photo and video features. Alas, both are video-based reviews, so if that is not your bag, sorry.

Niccolls and Drake walk you through the typical PetaPixel review, just as you want it. The Portrait Mode upgrades they show are obvious to me. Stalman’s test of Action Mode plus the 8× zoom feature is wild. He also took a bunch of spectacular photos at the Olds Rodeo last week. Each of these reviews focuses on something different, with notably divergent opinions on some video features.