Search Results for: "color"

Jay Tan and Alexis Copeland, of Microsoft:

Our studies showed that while our illustrations could be described as colorful, inclusive, and genial on a surface level, they were received within consumer culture as uninteresting and emotionless. The flat vectorized style that was once hugely popular across the industry was now communicating sub-optimally and potentially evoking ideas and themes that were misaligned with our company values.

To continue being relevant and create delightful customer experiences, we had to apply the lessons we were learning from our current illustrations and progress in tandem with Microsoft’s evolving brand and culture.

Without intending to be cruel to the two credited authors of Microsoft’s blog post, the language used is indicative of a pretty bleak design practice. Over a thousand words are used to say there is a new suite of illustrations which is more dimensional in a soft and trendy way, and based on a new palette of colours. There is some reasonable justification, too. But most of this feels like it was written by an MBA who once heard about the value of design thinking from a TED Talk.

Also, while there are plenty of visuals shown, there are only two examples of these illustrations in use — and only one of those feels beneficial to its context. This is true despite Microsoft insisting that, where illustrations “were approached more as an afterthought”, it is now “turning them into visuals that are not only aesthetically pleasing but also rich in meaning and emotion”.

Sebastiaan de With reacted to the embedded sizzle reel:

My theory is that the designers making all these slick marketing graphics are kept away from the people making the software at all costs. Armed guards are involved. Whatever happens, this joy and whimsy cannot touch the actual software.

This is a consistent problem with Microsoft’s concept videos and the reality of its products. I use a Windows 11 PC at work and it is fine — it is probably the nicest Windows has ever looked. But these concept videos give the system an unearned abundance of richness, texture, and visual interest. The actual system has vast swathes of off-greys, brittle buttons, and misaligned window elements. There are plenty of hard-to-read panels made of a material that looks simultaneously very thick, owing to the amount of background blur, yet entirely lacks any feeling of depth.

Apple is not innocent of this crime, either. Its MacOS Big Sur design video presents window elements and the Control Centre with a structure and crispness not actually present in the shipping version. Instead, we are treated to a sea of blur.

I was proved wrong after I speculated last month that the new monthly permissions prompt for legacy screen recording might not be in the released build of MacOS Sequoia:

I think it is possible MacOS 15.0 ships without this dialog. In part, that is because its text — “requesting to bypass the system window picker” — is technical and abstruse, written with seemingly little care for average user comprehension. I also think that could be true because it is what happened last year with MacOS 14.0. […]

It turns out this prompt, awkward language and all, made it into the public release.

Andrew Cunningham, in his review for Ars Technica, thinks this is a good idea in isolation:

The recurring screen recording permissions request is especially justifiable — it’s good for macOS to check in periodically about this kind of potentially data-scraping app, so attackers or domestic abusers can’t just install one once, click through the initial permissions requests, and have access for as long as you have the computer.

However, he dislikes the cumulative “constant barrage of requests and notifications [which] is an element of confusion and fatigue and of users clicking through boxes just to make them go away”.

Jason Snell, of Six Colors, is also frustrated:

In the name of making the Mac a safer place to be, right now Apple’s also making it a worse place to be. This is not an acceptable trade-off. It’s incumbent on Apple to make the Mac safer without compromising usability.

Put bluntly, macOS Sequoia fails this test.

In the latest beta release of MacOS 15.1, Apple added a new device management key, forceBypassScreenCaptureAlert, to override the monthly permissions request. (Thanks to Josh Calvetti.) However, my understanding is this cannot be used by general users; it is only available for managed devices.
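For managed Macs, the override would presumably be delivered through a configuration profile. Here is a hypothetical payload fragment; only the key name comes from the MacOS 15.1 beta, and its placement in a Restrictions (`com.apple.applicationaccess`) payload, along with the identifiers, is my assumption:

```xml
<key>PayloadContent</key>
<array>
    <dict>
        <!-- Assumed payload type; forceBypassScreenCaptureAlert is the
             only detail taken from the MacOS 15.1 beta itself. -->
        <key>PayloadType</key>
        <string>com.apple.applicationaccess</string>
        <key>PayloadIdentifier</key>
        <string>com.example.screencapture-override</string>
        <key>PayloadUUID</key>
        <string>00000000-0000-0000-0000-000000000000</string>
        <key>PayloadVersion</key>
        <integer>1</integer>
        <key>forceBypassScreenCaptureAlert</key>
        <true/>
    </dict>
</array>
```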

Update: Added more context to my summary of Cunningham’s position.

Congratulations to Jason Snell and Dan Moren for ten years of Six Colors. It began as the result of a painful media layoff, and it is wonderful to see its continued independent success. I have read it just about daily since its launch, and have quoted Snell, Moren, and other contributors more times than I can count. I must still actively remind myself not to put a “u” in the site’s name, though.

Maryclaire Dale, Associated Press:

A U.S. appeals court revived on Tuesday a lawsuit filed by the mother of a 10-year-old Pennsylvania girl who died attempting a viral challenge she allegedly saw on TikTok that dared people to choke themselves until they lost consciousness.

While federal law generally protects online publishers from liability for content posted by others, the court said TikTok could potentially be found liable for promoting the content or using an algorithm to steer it to children.

Notably, the “Blackout Challenge” or the “Choking Game” is one of the few internet challenges for teenagers which is neither a media-boosted fiction nor relatively harmless. It has been circulating for decades, and was connected with 82 deaths in the United States alone between 1995 and 2007. Which, yes, is before TikTok or even social media as we know it today. Melissa Chan reported in a 2018 Time article that its origins go back to at least the 1930s.

Mike Masnick, of Techdirt, not only points out the extensive Section 230 precedent ignored by the Third Circuit in its decision, he also highlights the legal limits of publisher responsibility:

We have some caselaw on this kind of thing even outside of the internet context. In Winter v. GP Putnam’s Sons, it was found that the publisher of an encyclopedia of mushrooms was not liable for “mushroom enthusiasts who became severely ill from picking and eating mushrooms after relying on information” in the book. The information turned out to be wrong, but the court held that the publisher could not be held liable for those harms because it had no duty to carefully investigate each entry.

Matt Stoller, on the other hand, celebrates the Third Circuit’s ruling as an end to “big tech’s free ride on Section 230”:

Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech. And now TikTok has to answer for it in court. Basically, the court ruled that when a company is choosing what to show kids and elderly parents, and seeks to keep them addicted to sell more ads, they can’t pretend it’s everyone else’s fault when the inevitable horrible thing happens.

And that’s a huge rollback of Section 230.

On a legal level, both Masnick and Stoller agree the Third Circuit’s ruling creates a massive change in U.S. internet policy and, because of the internet’s current structures, the world’s. But they vehemently disagree about whether this is a good thing. Masnick says it is not, and I am inclined to agree. Not only is there legal precedent on his side, there are plenty of very good reasons why Section 230 is important to preserve more-or-less the way it has existed for decades.

However, it seems unethical for TikTok to have no culpability for how users’ dangerous posts are recommended, especially to children. Perhaps legal recourse is wrong in this case and others like it, yet it just feels wrong for this case to eventually — after appeals and escalation to, probably, the Supreme Court — be summarily dismissed on the grounds that corporations have little responsibility or care for automated recommendations. There is a real difference between teenagers spreading this challenge one-on-one for decades and teenagers broadcasting it — or, at least, there ought to be a difference.

Chance Miller, 9to5Mac:

Apple has changed its screen recording privacy prompt in the latest beta of macOS Sequoia. As we reported last week, Apple’s initial plan was to prompt users to grant screen recording permissions weekly.

In macOS Sequoia beta 6, however, Apple has adjusted this policy and will now prompt users on a monthly basis instead. macOS Sequoia will also no longer prompt you to approve screen recording permissions every time you reboot your Mac.

After I wrote about the earlier permissions prompt, I got an email from Adam Selby, who manages tens of thousands of Macs in an enterprise context. Selby wanted to help me understand the conditions which trigger this alert, and to give me some more context. The short version is that Apple’s new APIs allow clearer and more informed user control over screen recording to the detriment of certain types of application, and — speculation alert — it is possible this warning will not appear in the first versions of MacOS Sequoia shipped to users.

Here is an excerpt from the release notes for the MacOS 15.0 developer beta:

Applications utilizing deprecated APIs for content capture such as CGDisplayStream & CGWindowListCreateImage can trigger system alerts indicating they might be able to collect detailed information about the user. Developers need to migrate to ScreenCaptureKit and SCContentSharingPicker. (120910350)

It turns out the “and” in that last sentence is absolutely critical. In last year’s beta releases of MacOS 14, Apple began advising developers it would be deprecating CoreGraphics screenshot APIs, and that applications should migrate to ScreenCaptureKit. However, this warning was removed by the time MacOS 14.0 shipped to users, only for it to reappear in the beta versions of 14.4 released to developers earlier this year. Apple’s message was to get on board — and fast — with ScreenCaptureKit.

ScreenCaptureKit was only the first part of this migration for developers. The second part — returning to the all-important “and” from the 15.0 release notes — is SCContentSharingPicker. That is the selection window you may have seen if you have recently tried screen sharing with, say, FaceTime. It has two agreeable benefits: first, it is not yet another permissions dialog; second, it allows the user to know every time the screen is being recorded because they are actively granting access through a trusted system process.
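Based on Apple’s documented ScreenCaptureKit API, an app’s adoption of the picker might look something like this minimal sketch. The observer class name is mine, and the actual stream-handling details are omitted:

```swift
import ScreenCaptureKit

// A sketch of adopting the system content sharing picker (macOS 14+).
// The user chooses what to share through a trusted system window;
// the app never receives blanket screen recording permission.
final class CapturePickerObserver: NSObject, SCContentSharingPickerObserver {
    func contentSharingPicker(_ picker: SCContentSharingPicker,
                              didUpdateWith filter: SCContentFilter,
                              for stream: SCStream?) {
        // The user granted access to the selected display or window.
        // Start, or update, an SCStream using this filter here.
    }

    func contentSharingPicker(_ picker: SCContentSharingPicker,
                              didCancelFor stream: SCStream?) {
        // The user dismissed the picker without sharing anything.
    }

    func contentSharingPickerStartDidFailWithError(_ error: Error) {
        // The picker itself failed to start.
    }
}

let observer = CapturePickerObserver()
let picker = SCContentSharingPicker.shared
picker.add(observer)
picker.isActive = true
picker.present()   // Shows the trusted system window picker.
```

Because consent flows through `present()` each time, the grant is scoped to what the user just picked, which is exactly why always-on utilities fare poorly under this model.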

This actually addresses some of the major complaints I have with the way Apple has built out its permissions infrastructure to date:

[…] Even if you believe dialog boxes are a helpful intervention, Apple’s own sea of prompts do not fulfil the Jobs criteria: they most often do not tell users specifically how their data will be used, and they either do not ask users every time or they cannot be turned off. They are just an occasional interruption to which you must either agree or find some part of an application is unusable.

Instead of the binary choices of either granting apps blanket access to record your screen or having no permissions dialog at all for what could be an abused feature, this picker gives users the control and knowledge over how an app may record their screen. This lacks a scary catch-all dialog in favour of ongoing consent. A user will know exactly when an app is recording their screen, and exactly what it is recording, because that permission is no longer something an app gets, but something given to it by this picker.

This makes sense for a lot of screen recording use cases — for example, if someone is making a demo video, or if they are showing their screen in an online meeting. But if someone is trying to remotely access a computer, there is a sort of Möbius strip of permissions where you need to be able to see the remote screen in order to grant access to be able to see the screen. The Persistent Content Capture entitlement is designed to fix that specific use case.
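As I understand it, a remote access app declares this case in its entitlements file; the key below matches Apple’s published entitlement name, but treat the exact shape as my best understanding, since the entitlement must be granted by Apple on request and simply declaring it is not enough:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Allows an approved remote-access app to keep capturing the
         screen without repeated picker interaction. -->
    <key>com.apple.developer.persistent-content-capture</key>
    <true/>
</dict>
</plist>
```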

Even though I think this structure will work for most apps, most of the time, it will add considerable overhead for apps like xScope, which allows you to measure and sample anything you can see, or ScreenFloat — a past sponsor — which allows you to collect, edit, and annotate screenshots and screen recordings. To use these utilities and others like them, a user will need to select the entire screen from the window picking control every time they wish to use a particular tool. Something as simple as copying an onscreen colour is now a clunky task without, as far as I can tell, any workaround. That is basically by design: what good is it to have an always-granted permission when the permissions structure is predicated on ongoing consent? But it does mean these apps are about to become very cumbersome. Either you need to grant whole-screen access every time you invoke a tool (or launch the app), or you do so a month at a time — and there is no guarantee the latter grace period will stick around in future versions of MacOS.

I think it is possible MacOS 15.0 ships without this dialog. In part, that is because its text — “requesting to bypass the system window picker” — is technical and abstruse, written with seemingly little care for average user comprehension. I also think that could be true because it is what happened last year with MacOS 14.0. That is not to say it will be gone for good; Apple’s intention is very clear to me. But hopefully there will be some new APIs or entitlement granted to legitimately useful utility apps built around latent access to seeing the whole screen when a user commands. At the very least, users should be able to grant access indefinitely.

I do not think it is coincidental this Windows-like trajectory for MacOS has occurred as Apple tries to focus more on business customers. In an investor call last year, Tim Cook said Apple’s “enterprise business is growing”. In one earlier this month, he seemed to acknowledge it was a factor, saying the company “also know[s] the importance of security for our users and enterprises, so we continue to advance protections across our products” in the same breath as providing an update on the company’s Mac business. This is a vague comment and I am wary of reading too much into it, but it is notable to see the specific nod to Mac enterprise security this month. I hope this does not birth separate “Home” and “Professional” versions of MacOS.

Still, there should be a way for users to always accept the risks of their actions. I am confident in my own ability to choose which apps I run and how to use my own computer. For many people — maybe most — it makes sense to provide a layer of protection for possibly harmful actions. But there must also be a way to suppress these warnings. Apple ought to be doing better on both counts. As Michael Tsai writes, the existing privacy system “feels like it was designed, not to help the user understand what’s going on and communicate their preferences to the system, but to deflect responsibility”. The new screen recording picker feels like an honest attempt at restricting what third-party apps are able to do without the user’s knowledge, and without burdening users with an uninformative clickwrap agreement.

But, please, let me be riskier if I so choose. Allow me to let apps record the entire screen all the time, and open unsigned apps without going through System Settings. Give me the power to screw myself over, and then let me get out of it. One does not get better at cooking by avoiding tools that are sharp or hot. We all need protections from our own stupidity at times, but there should always be a way to bypass them.

In response to Apple’s increasingly distrustful permissions prompts, it is worth thinking about what benefits they could provide. For example, apps can start out trustworthy and later become malicious through updates or ownership changes, and users should be reminded of the permissions they have afforded them. There is a recent example of this in Bartender. But I am not sure any of this is helped by yet another alert.

The approach seems to be informed by the Steve Jobs definition of privacy, as he described it at D8 in 2010:

Privacy means people know what they’re signing up for — in plain English, and repeatedly. That’s what it means.

I’m an optimist. I believe people are smart, and some people want to share more data than other people do. Ask ’em. Ask ’em every time. Make them tell you to stop asking them, if they get tired of your asking them. Let them know precisely what you’re gonna do with their data.

Some of the permissions dialogs thrown by Apple’s operating systems exist to preempt abuse, while others were added in response to specific scandals. The prompt for accessing your contacts, for example, was added after Path absorbed users’ contact lists.

The new weekly nag box for screen recording in the latest MacOS Sequoia beta is also conceivably a response to a specific incident. Early this year, the developer of Bartender sold the app to another developer without telling users. The app has long required screen recording permissions to function, and the quiet transfer of that power to a new, seemingly shady owner made some users understandably nervous.

I do not think this new prompt succeeds in helping users make an informed decision. There is no information in the dialog’s text telling you who the developer is, or whether it has changed. Nor does it appear the developer can customize the dialog’s text to provide a reason. If it is thrown by an always-running app like Bartender, a user will either become panicked or begin passively accepting the annoyance.

The latter is now the default response state to a wide variety of alerts and cautions. Car alarms are ineffective. Hospitals and other medical facilities are filled with so many beeps staff become “desensitized”. People agree to cookie banners without a second of thought. Alert fatigue is a well-known phenomenon, such that it informed the Canadian response in the earliest days of the pandemic. Without more thoughtful consideration of how often and in what context to inform people of something, it is just pollution.

There is apparently an entitlement which Apple can grant, but it is undocumented. It is still the summer and this could all be described in more robust terms over the coming weeks. Yet it is alarming this prompt was introduced with so little disclosure.

I believe people are smart, too. But I do not believe they are fully aware of how their data is being collected and used, and none of these dialog boxes do a good job of explaining that. An app can ask to record your screen on a weekly basis, but the user is not told any more than that. It could ask for access to your contacts — perhaps that is only for local, one-time use, or the app could be sending a copy to the developer, and a user has no way of knowing which. A weather app could be asking for your location because you requested a local forecast, but it could also be reselling it. A Mac app can tell you to turn on full disk access for plausible reasons, but it could abuse that access later.

Perhaps the most informative dialog boxes are the cookie consent forms you see across the web. In their most comprehensive state, you can see which specific third-parties may receive your behavioural data, and they allow you to opt into or out of categories of data use. Yet nobody actually reads those cookie consents because they have too much information.

Of course, nobody expects dialog boxes to be a complete solution to our privacy and security woes. A user places some trust in each layer of the process: in App Review, if they downloaded software from the App Store; in built-in protections; in the design of the operating system itself; and in the developer. Even if you believe dialog boxes are a helpful intervention, Apple’s own sea of prompts do not fulfil the Jobs criteria: they most often do not tell users specifically how their data will be used, and they either do not ask users every time or they cannot be turned off. They are just an occasional interruption to which you must either agree or find some part of an application is unusable.

Users are not typically in a position to knowledgeably authorise these requests. They are not adequately informed, and it is poor policy to treat these as individualized problems.

Jason Snell, Six Colors:

Apple’s recent feature changes suggest a value system that’s wildly out of balance, preferring to warn (and control) users no matter how damaging it is to the overall user experience. Maybe the people in charge should be forced to sit down and watch that Apple ad that mocks Windows Vista. Vista’s security prompts existed for good reasons — but they were a user disaster. The Apple of that era knew it. I’d guess a lot of people inside today’s Apple know it, too — but they clearly are unable to win the arguments when it matters.

The first evidence of this slog of permissions prompts occurred on iOS. Want to allow this app to use the camera? Tap allow. See your location? Tap allow. Access your contacts? Tap allow. Send you notifications? Tap allow. On and on it goes, and now the Mac has been swept up in this relentless offloading of responsibility onto users.

On some level, I get it. Our devices are all synced with one another, passing our identities and secret information between them constantly. We install new applications without thinking too much about what they could be doing in the background. We switch on automatic updates with similar indifference. (If you are somebody who does not do these things, please do not write. I know you are there; I respect you; you are one of few.)

But relentless user confirmation is not a good answer for privacy, security, or competition. It merely kicks the can down the road, and suggests users cannot be trusted, yet must bear all the responsibility for their choices.

Jason Snell, Six Colors:

Last quarter, Apple made about $22 billion in profit from products and $18 billion from Services. It’s the closest those two lines have ever come to each other.

This is what was buzzing in the back of my head as I was going over all the numbers on Thursday. We’re not quite there yet, but it’s hard to imagine that there won’t be a quarter in the next year or so in which Apple reports more total profit on Services than on products.

When that happens, is Apple still a products company? Or has it crossed some invisible line?

The most important thing Snell gets at in this article, I think, is that the “services” which likely generate the most revenue for Apple — the App Store, Apple Pay transactions, AppleCare, and the Google search deal — are all things which are tied specifically to its hardware. It sells subscriptions to its entertainment services elsewhere, for example, but they are probably not as valuable to the company as these four categories. It would be disappointing if Apple sees its hardware products increasingly as vehicles for recurring revenue.

The European Commission:

Today, the European Commission has informed Apple of its preliminary view that its App Store rules are in breach of the Digital Markets Act (DMA), as they prevent app developers from freely steering consumers to alternative channels for offers and content.

The problems cited by the Commission are so far entirely related to in-app referrals for external purchases. The Commission additionally says it is looking into Apple’s terms for third-party app stores — including the Core Technology Fee — but that is not what these specific findings are about.

Jesper:

In the DMA, the ground rule is for sideloading apps to be allowed, and to only very minimally be reigned in under very specific conditions. Apple chose to take these conditions and lawyer them into “always, unless you pay us sums of money that are plainly prohibitive for most actors”. Apple knew the rules and understood the intent and chose to evade them, in order to retain additional income.

Separately, earlier this month — the weekend before WWDC, in fact — Apple rejected an emulator after holding it in review for two months.

Benjamin Mayo, 9to5Mac:

App Review has rejected a submission from the developers of UTM, a generic PC system emulator for iPhone and iPad.

The open source app was submitted to the store, given the recent rule change that allows retro game console emulators, like Delta or Folium. App Review rejected UTM, deciding that a “PC is not a console”. What is more surprising, is the fact that UTM says that Apple is also blocking the app from being listed in third-party app stores in the EU.

Michael Tsai compiled the many disapproving reactions to Apple’s decision, adding:

The bottom line for me is that Apple doesn’t want general-purpose emulators, it’s questionable whether the DMA lets it block them, and even siding with Apple on this it isn’t consistently applying its own rules.

Jason Snell, Six Colors:

The whole point of the DMA is that Apple does not get to act as an arbitrary approver or disapprover of apps. If Apple can still reject or approve apps as it sees fit, what’s the point of the DMA in the first place?

The Commission continues:

In parallel, the Commission will continue undertaking preliminary investigative steps outside of the scope of the present investigation, in particular with respect to the checks and reviews put in place by Apple to validate apps and alternative app stores to be sideloaded.

Riley Testut:

When we first met with the EC a few months ago, we were asked repeatedly if we trusted Apple to be in charge of Notarization. We emphatically said yes.

However, it’s clear to us now that Apple is indeed using Notarization to not only delay our apps, but also to determine on a case-by-case basis how to undermine each release — such as by changing the App Store rules to allow them

If you are somebody who believes it is only fair to take someone at their word and assume good faith, I am right there with you. Even though Apple has a long history of capricious App Review processes, it was fair to consider its approach to the E.U. a begrudging but earnest attempt at compliance. Even E.U. Commissioner Margrethe Vestager did, telling CNBC she was “very surprised that we would have such suspicions of Apple being non-compliant”.

That is, however, a rather difficult position to maintain, given the growing evidence Apple seems determined to evade both the letter and spirit of this legislation. Perhaps there are legitimate security concerns in the UTM emulator. The burden of proof for that claim rests on Apple, however, and its ability to be a reliable narrator is sometimes questionable. Consider the possible conflicts of interest in App Tracking Transparency rules raised by German competition authorities.

Manton Reece:

When a company withholds a feature from the EU because of the DMA — Apple for AI, Meta today for the fediverse — they should document which sections of the DMA would potentially be violated. Let users fact-check whether there’s a real problem.

Agreed. This would allow people to understand what businesses see are the limitations of the DMA on the merits. Users may not be the best judge of whether a legal problem exists — especially since laws get interpreted and reinterpreted by different experts all the time — but any details would be better than a void filled with speculation.

Apple’s Human Interface Guidelines:

[Beginning in iOS 18 and iPadOS 18] People can customize the appearance of their app icons to be light, dark, or tinted. You can create your own variations to ensure that each one looks exactly the way you want. See Apple Design Resources for icon templates.

Design your dark and tinted icons to feel at home next to system app icons and widgets. You can preserve the color palette of your default icon, but be mindful that dark icons are more subdued, and tinted icons are even more so. A great app icon is visible, legible, and recognizable, even with a different tint and background.

Louie Mantia:

Apple’s announcement of “dark mode” icons has me thinking about how I would approach adapting “light mode” icons for dark mode. I grabbed 12 icons we made at Parakeet for our clients to illustrate some ways of going about it.

I appreciated this deep exploration of different techniques for adapting alternate icon appearances. Obviously, two days into the first preview build of a new operating system is not the best time to adjudicate its updates. But I think it is safe to say a quality app from a developer that cares about design will want to supply a specific dark mode icon instead of relying upon the system-generated one. Any icon with more detail than a glyph on a background will benefit.

Also, now that there are two distinct appearances, I think it would be great if icons which are very dark also had lighter alternates, where appropriate.

Samuel Axon, Ars Technica:

The new iPad Pro is a technical marvel, with one of the best screens I’ve ever seen, performance that few other machines can touch, and a new, thinner design that no one expected.

It’s a prime example of Apple flexing its engineering and design muscles for all to see. Since it marks the company’s first foray into OLED beyond the iPhone and the first time a new M-series chip has debuted on something other than a Mac, it comes across as a tech demo for where the company is headed beyond just tablets.

These are the opening paragraphs of the review, and they read as damning as the rest of the article. Apple does not build a “tech demo”; it makes products. This iteration is, according to Axon, way faster and way nicer than the iPad Pro models it replaces. Yet all of this impressive hardware ought to be in service of a greater purpose. Other reviewers reached basically the same conclusion.

Federico Viticci, MacStories:

I’m tired of hearing apologies that smell of Stockholm syndrome from iPad users who want to invalidate these opinions and claim that everything is perfect. I’m tired of seeing this cycle start over every two years, with fantastic iPad hardware and the usual (justified) “But it’s the software…” line at the end. I’m tired of feeling like my computer is a second-class citizen in Apple’s ecosystem. I’m tired of being told that iPads are perfectly fine if you use Final Cut and Logic, but if you don’t use those apps and ask for more desktop-class features, you’re a weirdo, and you should just get a Mac and shut up. And I’m tired of seeing the best computer Apple ever made not live up to its potential.

Viticci was not granted access to a review unit in time, but it hardly matters for reviewing the state of the operating system. Jason Snell did review the new iPad Pro and spoke with Viticci about it on “Upgrade”.

The way I see it is simple: Apple does not appear to treat the iPad seriously. It has not been a priority for the company. Five years ago, it forked the operating system to create iPadOS, which seemed like it would be a meaningful change. And you can certainly point to plenty of things the iPad has gained which are distinct from its iPhone sibling. But we are fourteen years into this platform, and there are still so many obvious gaping holes. Viticci mentions a bunch of really good ones, but I will add another: I cannot believe Photos cannot even display Smart Albums.

Every time I pick up my iPad, I need to charge it from a fully dead battery. Once I do, though, I remember how much I like using the thing. And then I run into some bizarre limitation — or, more often, a series of them — that makes me put it down and pick up my Mac. Like Viticci, I find that frustrating. I want to use my iPad.

The correct move here is for Apple to continue building out iPadOS like it cares about its software as much as it does its hardware. I have no incentive to buy a new one until Apple decides it wants to take iPad users seriously.

Kudos to Mark Gurman — Apple really did introduce the M4 SoC in the new iPad Pro models. The M4 comes just six months after Apple launched the M3, which is currently used in half the Mac lineup. The other half — the Mac Mini, the Mac Studio, and the Mac Pro — all use processors in the M2 family. Two of those models were only just launched at WWDC last year.

None of this makes sense to anybody outside of Apple. Perhaps it is not supposed to: any given processor is perhaps good enough that you do not need to worry. But Apple itself set up this comparison when it decided to use the same processors in iPads and Macs, and name them to clearly show which ones are newer. I am sure there are legitimate and plentiful performance improvements in each generation of new processors but it is a dizzying set of choices from a buyer’s perspective. Maybe there will be updates to the three Mac desktops at WWDC this year.

Update: Jason Snell, Six Colors:

Why the M4 now? It mostly has to do with Apple shifting chip production at TSMC (the company that fabs Apple’s chips) from the first-generation 3nm process to a new, more efficient second-generation 3nm process. There’s a whole backstory about TSMC’s change in 3nm processes that’s not worth getting into here, but suffice it to say that the first-generation process is largely a dead end, and the company is moving to a new set of 3nm processes.

That is the kind of backstory I would be interested in. However, as I wrote above, this is the kind of explanation which is logical for Apple but produces a confusing result for the rest of us.

Apple:

[…] Today, we’re introducing two additional conditions in which the CTF is not required:

  • First, no CTF is required if a developer has no revenue whatsoever. […]

  • Second, small developers (less than €10 million in global annual business revenue*) that adopt the alternative business terms receive a 3-year free on-ramp to the CTF to help them create innovative apps and rapidly grow their business. […]

Two fundamental issues remain with the Core Technology Fee — namely, that developers still need to pay Apple even if their app is distributed exclusively outside the App Store and in-app payments are handled by a third-party processor, and the fee is an unknown and surprising future charge. One marvels at how the Mac could remain such a successful developer platform for so long without the support of a per-install fee.

But I was wrong. This is a meaningful relaxation of terms for entirely free apps, like the young developer example raised by Riley Testut during the March DMA compliance workshop.

The new A.I. Pin from Humane is, according to those who have used one, bad. Even if you accept the premise of wearing a smart speaker and use it to do a bunch of the stuff for which you used to rely on your phone, it is not good at those things — again, according to those who have used one, and I have not. Why is it apparently controversial to say that with intention?

Cherlynn Low, of Engadget, “cannot recommend anyone spend this much money for the one or two things it does adequately”. David Pierce, of the Verge, says it is “so thoroughly unfinished and so totally broken in so many unacceptable ways”. Arun Maini said the “total amount of effort required to perform any given action is just higher with the Pin”. Raymond Wong, of Inverse, wrote the most optimistic review of all those I saw but, after needing a factory reset of his review unit and then a wind gust blowing it off his shirt, it sounds like he is only convinced by the prospect of future versions, not the “textbook […] first-generation product” he is actually using.

It was Marques Brownlee’s blunt review title — “The Worst Product I’ve Ever Reviewed… For Now” — which caught the attention of a moderately popular Twitter user. The review itself was more like Wong’s, seeing some promise in the concept while dismissing this implementation, but the tweet itself courted controversy. Is the role of a reviewer to be kind to businesses even if their products suck, or is it to be honest?

I do not think it makes sense to dwell on an individual tweet. What is more interesting to me is how generous all of the reviewers have been so far, even while reaching such bleak conclusions. Despite having a list of cons including “unreliable” and “slow”, and Low saying she burned herself “several times” because it was so hot, Engadget still gave it a score of 50 out of 100. The Verge gave it a 4 out of 10, and compared the product’s reception to that of the “dumpster fire” Nexus Q of 2012, which it gave a score of 5 out of 10.

That last review is a relevant historic artifact. The Nexus Q was a $300 audio and video receiver which users would, in theory, connect to a television or a Hi-Fi speaker system. It was controlled through software on an Android phone, and its standout feature was collaborative playlists. But the Verge found it had “connectivity problems” with different phones and different Nexus Q review units, videos looked “noticeably poor”, it was under-featured, and different friends adding music to the playback queue worked badly. Aside from the pretty hardware, there simply was no there there, and it was canned before a wide release.

But that was from Google, an established global corporation. Humane may have plenty of ex-Apple staff and lots of venture capital money, but it is still a new company. I have no problem grading on a reasonable curve. But how in the world is the A.I. Pin getting 40% or 50% of a perfect grade when every reviewer seems to think this product is bad and advises people not to buy one?

Even so, all of them seem compelled to give it the kind of tepid score you would expect for something that is flawed, but not a disaster. Some of the problems do not seem to be a direct fault of Humane; they are a consequence of the technological order. But that does not justify spending $700 plus a $24 per month subscription which you will need to keep paying in perpetuity to prevent your A.I. Pin from becoming a fridge magnet.

Maybe this is just a problem with trying to assign numerical scores. I have repeatedly complained about this because I think it gives mixed messages. What people need to know is whether something is worth buying, which consists of two factors: whether it addresses an actual problem, and whether it is effective at solving that problem. It appears the answer to the first is “maybe”, and the answer to the second is “hell no”. It does not matter how nice the hardware may be, or how interesting the laser projecting screen is. It apparently burns you while you barely use it.

In that light, giving this product even a tepid score is misleading. It is respectful of neither potential buyers nor the team which helped make it. It seems there are many smart people at Humane who thought they had a very good idea, and many people were intrigued. If a reviewer’s experience was poor, it is not cruel for them to be honest and say that it is, in a word, bad.

In the 1970s and 1980s, in-house researchers at Exxon began to understand how crude oil and its derivatives were leading to environmental devastation. They were among the first to comprehensively connect the use of their company’s core products to the warming of the Earth, and they predicted some of the harms which would result. But their research was treated as mere suggestion by Exxon because the effects of obvious legislation would “alter profoundly the strategic direction of the energy industry”. It would be a business nightmare.

Forty years later, the world has concluded its warmest year in recorded history by starting another. Perhaps we would have been more able to act if businesses like Exxon had equivocated less all these years. Instead, they publicly created confusion and minimized lawmakers’ knowledge. The continued success of their industry lay in keeping these secrets.


“The success lies in the secrecy” is a shibboleth of the private surveillance industry, as described in Byron Tau’s new book, “Means of Control”. It is easy to find parallels to my opening anecdote throughout, though, to be clear, a direct comparison to human-led ecological destruction is a knowingly exaggerated metaphor. The erosion of privacy and civil liberties is horrifying in its own right, and shares key attributes: those in the industry knew what they were doing and allowed it to persist because it was lucrative and, in a post-9/11 landscape, ostensibly justified.

Tau’s byline is likely familiar to anyone interested in online privacy. For several years at the Wall Street Journal, he produced dozens of deeply reported articles about the intertwined businesses of online advertising, smartphone software, data brokers, and intelligence agencies. Tau no longer writes for the Journal, but “Means of Control” is an expansion of that earlier work, carefully arranged into a coherent set of stories.

Tau’s book, like so many others describing the current state of surveillance, begins with the terrorist attacks of September 11, 2001. Those were the early days, when Acxiom realized it could connect its consumer data set to flight and passport records. The U.S. government ate it up and its appetite proved insatiable. Tau documents the growth of an industry that did not exist — could not exist — before the invention of electronic transactions, targeted advertising, virtually limitless digital storage, and near-universal smartphone use. This rapid transformation occurred not only with little regulatory oversight, but with government encouragement, including through investments in startups like Dataminr, GeoIQ, PlaceIQ, and PlanetRisk.

In near-chronological order, Tau tells the stories which have defined this era. Remember when documentation released by Edward Snowden showed how data created by mobile ad networks was being used by intelligence services? Or how a group of Colorado Catholics bought up location data for outing priests who used gay-targeted dating apps? Or how a defence contractor quietly operates nContext, an adtech firm, which permits the U.S. intelligence apparatus to effectively wiretap the global digital ad market? Regarding the latter, Tau writes of a meeting he had with a source who showed him a “list of all of the advertising exchanges that America’s intelligence agencies had access to”, and who told him American adversaries were doing the exact same thing.

What impresses most about this book is not the volume of specific incidents — though it certainly delivers on that front — but the way they are all woven together into a broader narrative perhaps best summarized by Tau himself: “classified does not mean better”. That can be true for volume and variety, and it is also true for the relative ease with which it is available. Tracking someone halfway around the world no longer requires flying people in or even paying off people on the ground. Someone in a Virginia office park can just make that happen and likely so, too, can other someones in Moscow and Sydney and Pyongyang and Ottawa, all powered by data from companies based in friendly and hostile nations alike.

The tension running through Tau’s book is in the compromise I feel he attempts to strike between acknowledging the national security utility of a surveillance state while describing how the U.S. has abdicated the standards of privacy and freedom it has long claimed are foundational rights. His reporting often reads as an understandable combination of awe and disgust. The U.S. has, it seems, slid in the direction of the kinds of authoritarian states its administration routinely criticizes. But Tau is right to clarify in the book’s epilogue that the U.S. is not, for example, China, separated from the standards of the latter by “a thin membrane of laws, norms, social capital, and — perhaps most of all — a lingering culture of discomfort” with concentrated state power. However, the preceding chapters of the book show questions about power do not fully extend into the private sector, where there has long been pride in the scale and global reach of U.S. businesses but concern about their influence. Tau’s reporting shows how U.S. privacy standards have been exported worldwide. For a more pedestrian example, consider the frequent praise–complaint sandwiches of Amazon, Meta, Starbucks, and Walmart, to throw a few names out there.

Corporate self-governance is an entirely inadequate response. Just about every data broker and intermediary from Tau’s writing which I looked up promised it was “privacy-first” or used similar language. Every business insists in its marketing literature that it is concerned about privacy and careful about how it collects and uses information, and they have been saying so for decades — yet here we are. Entire industries have been built on the backs of tissue-thin user consent and a flexible definition of “privacy”.

When polled, people say they are concerned about how corporations and the government collect and use data. Still, when lawmakers mandate choices for users about their data collection preferences, the results do not appear to show a society that cares about personal privacy.

In response to the E.U.’s General Data Protection Regulation, websites decided they wanted to continue collecting and sharing loads of data with advertisers, so they created the now-ubiquitous cookie consent sheet. The GDPR does not explicitly mandate this mechanism, and many such sheets remain non-compliant with both the rules and the intention of the law, but they are a particularly common form of user consent. However, if you arrive at a website and it asks you whether you are okay with it sharing your personal data with hundreds of ad tech firms, are you providing meaningful consent with a single button click? Hardly.

Similarly, something like 10–40% of iOS users agree to allow apps to track them. In the E.U., the cost of opting out of Meta’s tracking will be €6–10 per month which, I assume, few people will pay.

All of these examples illustrate how inadequately we assess cost, utility, and risk. It is tempting to think of this as a personal responsibility issue akin to cigarette smoking but, as we are so often reminded, none of this data is particularly valuable in isolation — it must be aggregated in vast amounts. It is therefore much more like an environmental problem.

As with global warming, exposé after exposé after exposé is written about how our failure to act has produced extraordinary consequences. All of the technologies powering targeted advertising have enabled grotesque and pervasive surveillance as Tau documents so thoroughly. Yet these are abstract concerns compared to a fee to use Instagram, or the prospect of reading hundreds of privacy policies with a lawyer and negotiating each of them so that one may have a smidge of control over their private information.

There are technical answers to many of these concerns, and there are also policy answers. There is no reason both should not be used.

I have become increasingly convinced the best legal solution is one which creates a framework limiting the scope of data collection, restricting it to only that which is necessary to perform user-selected tasks, and preventing mass retention of bulk data. Above all, users should not be able to choose a model that puts them in obvious future peril. Many of you probably live in a society where so much is subject to consumer choice. What I wrote sounds pretty drastic, but it is not. If anything, it is substantially less radical than the status quo that permits such expansive surveillance on the basis that we “agreed” to it.

Any such policy should also be paired with something like the Fourth Amendment is Not For Sale Act in the U.S. — similar legislation is desperately needed in Canada as well — to prevent sneaky exclusions from longstanding legal principles.

Last month, Wired reported that Near Intelligence — a data broker you can read more about in Tau’s book — was able to trace dozens of individual trips to Jeffrey Epstein’s island. That could be a powerful investigative tool. It is also very strange and pretty creepy all that information was held by some random company you probably have not heard of or thought about outside stories like these. I am obviously not defending the horrendous shit Epstein and his friends did. But it is really, really weird that Near is capable of producing this data set. When interviewed by Wired, Eva Galperin, of the Electronic Frontier Foundation, said “I just don’t know how many more of these stories we need to have in order to get strong privacy regulations.”

Exactly. Yet I have long been convinced an effective privacy bill could not be implemented in either the United States or the European Union, and certainly not with any degree of urgency. And, no, Matt Stoller: de facto rules on the backs of specific FTC decisions do not count. Real laws are needed. But the products and services which would be affected are too popular and too powerful. The E.U. is home to dozens of ad tech firms that promise full identity resolution. The U.S. would not want to destroy such an important economic sector, either.

Imagine my surprise when, while I was in the middle of writing this review, U.S. lawmakers announced the American Privacy Rights Act (PDF). If passed, it would give individuals more control over how their information — including biological identifiers — may be collected, used, and retained. Importantly, it requires data minimization by default. It would be the most comprehensive federal privacy legislation in the U.S., and it also promises various security protections and remedies, though I think lawmakers’ promise to “prevent data from being hacked or stolen” might be a smidge unrealistic.

Such rules would more-or-less match the GDPR in setting a global privacy regime that other countries would be expected to meet, since so much of the world’s data is processed in the U.S. or otherwise under U.S. legal jurisdiction. The proposed law borrows heavily from the state-level California Consumer Privacy Act, too. My worry is that it will be treated by corporations similarly to the GDPR and CCPA by continuing to offload decision-making to users while taking advantage of a deliberate imbalance of power. Still, any progress on this front is necessary.

So, too, is it useful for anyone to help us understand how corporations and governments have jointly benefitted from privacy-hostile technologies. Tau’s “Means of Control” is one such example. You should read it. It is a deep exploration of one specific angle of how data flows from consumer software to surprising recipients. You may think you know this story, but I bet you will learn something. Even if you are not a government target — I cannot imagine I am — it is a reminder that the global private surveillance industry only functions because we all participate, however unwillingly. People get tracked based on their own devices, but also those around them. That is perhaps among the most offensive conclusions of Tau’s reporting. We have all been conscripted for any government buying this data. It only works because it is everywhere and used by everybody.

For all their faults, democracies are not authoritarian societies. Without reporting like Tau’s, we would be unable to see what our own governments are doing and — just as important — how that differs from actual police states. As Tau writes, “in China, the state wants you to know you’re being watched. In America, the success lies in the secrecy”. Well, the secret is out. We now know what is happening despite the best efforts of an industry to keep it quiet, just like we know the Earth is heating up. Both problems massively affect our lived environment. Nobody — least of all me — would seriously compare the two. But we can say the same about each of them: now we know. We have the information. Now comes the hard part: regaining control.

Foo Yun Chee, Reuters:

Apple on Monday fended off criticism that it has not done enough to open up its closed eco-system as required under the European Union’s Digital Markets Act, saying it has complied with the landmark legislation.

[…]

The company told apps developers, business users and rivals at a day-long hearing organised by the European Commission that it has redesigned its systems to comply with the DMA.

Dan Moren, Six Colors:

During the workshop, [Riley] Testut used his time to ask about the Core Technology Fee. Under Apple’s new business terms in Europe (required for apps looking to be distributed via non-Apple app marketplaces or the web), there’s a €0.50 fee per app install over the first million. Testut rightly points out that a free app, such as the one he made in high school, that becomes popular could easily accrue enough costs to ruin a young developer’s life.

Apple VP of Legal Kyle Andeer responded sympathetically, saying that the company is continuing to try and find a good solution, and to “stay tuned.”

Even with this softened tone, I am certain the Core Technology Fee is just about the last thing Apple will meaningfully relax due to either regulatory pressure or developer outcry. Still, a flash of hope, and something to check in on later.

Other, similar compliance workshops are coming up all week long. Meta’s begins just a few hours from the time I am writing this.

Update: Steve Troughton-Smith ran the hearing through MacWhisper to create an unofficial transcript. It may not be wholly accurate but it is on my reading list anyhow.

During a White House press briefing on March 12, CBS News’ Ed O’Keefe asked press secretary Karine Jean-Pierre if photos of the president or other members of the White House are ever digitally altered. Jean-Pierre laughed and asked, in response, “why would we digitally alter photos? Are you comparing us to what’s going on in the U.K.?” O’Keefe said he was just doing due diligence. Jean-Pierre said, regarding digital photo manipulation, “that is not something that we do here”.

It is unclear to me whether Jean-Pierre was specifically denying the kind of multi-frame stacking apparent in the photo of the Princess of Wales and her children, or digital alterations more broadly. But it got me thinking — there is a good-faith question to be asked here: are public bodies meeting the standards of editorial photography?

Well, first, it depends on which standards one refers to. There are many — the BBC has its own, as do NPR, the New York Times, and the National Press Photographers Association. Oddly, I could not find comparable documentation for the expectations of the official White House photographer. But it is the standards of the Associated Press which are the subject of the Princess of Wales photo debacle, and they are both representative and comprehensive:

Minor adjustments to photos are acceptable. These include cropping, dodging and burning, conversion into grayscale, elimination of dust on camera sensors and scratches on scanned negatives or scanned prints and normal toning and color adjustments. These should be limited to those minimally necessary for clear and accurate reproduction and that restore the authentic nature of the photograph. Changes in density, contrast, color and saturation levels that substantially alter the original scene are not acceptable. Backgrounds should not be digitally blurred or eliminated by burning down or by aggressive toning. The removal of “red eye” from photographs is not permissible.

If I can summarize these rules: changes should minimize the influence of the camera on how the scene was captured, and represent the scene as true to how it would be seen in real life. Oh, and photographers cannot remove red eye. Those are the standards I expect the White House photographer to meet in order to claim they do not digitally “alter” photos.

Happily, we can find out if those expectations are met even from some JPEG exports. Images edited using Adobe Lightroom carry metadata describing the edits made in surprising detail, and you can view that data using Photoshop or ExifTool. I opened a heavily manipulated photo of my own — the JPEG, not the original RAW file — and found in its metadata a record of colour and light correction, adjustment masks, perspective changes, and data about how much I healed and cloned. It was a lot; to be clear, that photo would not be acceptable by editorial standards.
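If you do not have ExifTool handy, a few lines of Python can do a rough version of the same inspection. This is a sketch under an assumption: Lightroom exports typically embed the edit history as an uncompressed XMP packet inside the JPEG, so simply scanning the file bytes for the `<x:xmpmeta>` wrapper and listing the Camera Raw (`crs:`) tag names is usually enough to see whether records like `crs:RetouchInfo` are present.

```python
import re

def extract_xmp(jpeg_bytes: bytes) -> str:
    """Return the raw XMP packet embedded in a JPEG, or an empty string.

    A naive byte scan rather than a full JPEG segment parser — it assumes
    the XMP is stored uncompressed, which is typical of Lightroom exports.
    """
    start = jpeg_bytes.find(b"<x:xmpmeta")
    end = jpeg_bytes.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return ""
    return jpeg_bytes[start : end + len(b"</x:xmpmeta>")].decode("utf-8", "replace")

def edit_tags(xmp: str) -> list[str]:
    """List the distinct Camera Raw (crs:) element names in the packet."""
    return sorted(set(re.findall(r"<crs:(\w+)", xmp)))
```

Pointing it at a downloaded Flickr original — `edit_tags(extract_xmp(open("photo.jpg", "rb").read()))` — returns names like `RetouchInfo` when healing or cloning was recorded.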

To find out what was done by the White House, I downloaded the original-sized JPEG copies of many images from the Flickr accounts of the last three U.S. presidents. Then I examined the metadata. Even though O’Keefe’s question pertained specifically to the president, vice president, and other people in the White House, I broadened my search to include any photo. Surely all photos should meet editorial standards. I narrowed my attention to the current administration and the previous one because the Obama administration covered two terms, and that is a lot of pictures to go through.

We will start with an easy one. Remember that picture from the Osama Bin Laden raid? It is obviously manipulated and it says so right there in the description: “a classified document seen in this photograph has been obscured”. I think most people would believe that is a fair alteration.

But the image’s metadata reveals several additional spot exposure adjustments throughout the image. I am guessing some people in the back were probably under-exposed in the original.

This kind of exposure adjustment is acceptable by editorial standards — it is the digital version of dodging and burning. It is also pretty standard across administrations. A more stylized version was used during the Trump administration on pictures like this one to make some areas more indigo, and the Biden administration edited parts of this picture to make the lights bluer.

All administrations have turned some colour pictures greyscale, and have occasionally overdone it. The Trump administration increased the contrast and crushed the black levels in parts of this photo, and I wonder if that would be up to press standards.

There are lots more images across all three accounts which have gradient adjustments, vignettes, and other stylistic changes. These are all digital alterations to photos which are, at most, aesthetic choices that do not meaningfully change the scene or the way the image is interpreted.

But I also found images which had more than those simple adjustments. The Biden administration published a photo of a lone officer in the smoke of a nineteen-gun salute. Its metadata indicates the healing brush tool was used in a few places (line breaks added to fit better inline):

<crs:RetouchInfo>
    <rdf:Seq>
        <rdf:li>
                centerX = 0.059098, 
                centerY = 0.406924, 
                radius = 0.011088, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.037496, 
                sourceY = 0.387074, 
                spotType = heal
        </rdf:li>
        <rdf:li>
                centerX = 0.432986, 
                centerY = 0.119173, 
                radius = 0.010850, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.460986, 
                sourceY = 0.106420, 
                spotType = heal
        </rdf:li>
        <rdf:li>
                centerX = 0.622956, 
                centerY = 0.430625, 
                radius = 0.010763, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.652456, 
                sourceY = 0.430625, 
                spotType = heal
        </rdf:li>
        <rdf:li>
                centerX = 0.066687, 
                centerY = 0.104860, 
                radius = 0.011204, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.041687, 
                sourceY = 0.104860, 
                spotType = heal
        </rdf:li>
    </rdf:Seq>
</crs:RetouchInfo>

I am not sure exactly what was removed from the image, but there appears to be enough information here to indicate where the healing brush was used. Unfortunately, I cannot find any documentation about how to read these tags. (My guess is that these are percent coordinates and that 0,0 is the upper-left corner.) If all that was removed is lens or sensor crud, it would probably be acceptable. But if objects were removed, it would not meet editorial standards.
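Following that guess — treating the values as fractions of the image width and height, with 0,0 at the upper left — mapping one record onto a frame is simple arithmetic. The spot values below are copied from the first `<rdf:li>` entry above, but the 5472 × 3648 frame size is a hypothetical example, not a dimension taken from the actual White House file; treating the radius as a fraction of the width is part of the same guess.

```python
def heal_spot_pixels(center_x, center_y, radius, width, height):
    """Map normalized Lightroom spot values onto a width x height frame.

    Assumes percent-style coordinates with the origin at the upper-left
    corner, and a radius expressed as a fraction of the image width.
    """
    return (round(center_x * width), round(center_y * height), round(radius * width))

# First heal record from the metadata above, on a hypothetical 5472 x 3648 frame.
x, y, r = heal_spot_pixels(0.059098, 0.406924, 0.011088, 5472, 3648)
```

Under those assumptions, the first heal spot sits near the left edge, a little above the vertical midpoint — somewhere one could then look for removed sensor dust or a removed object.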

The Trump administration also has photos that have been retouched (line breaks added to fit better inline):

<crs:RetouchInfo>
    <rdf:Seq>
        <rdf:li>
                centerX = 0.451994, 
                centerY = 0.230277, 
                radius = 0.009444, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.431994, 
                sourceY = 0.230277, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.471218, 
                centerY = 0.201147, 
                radius = 0.009444, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.417885, 
                sourceY = 0.264397, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.912961, 
                centerY = 0.220015, 
                radius = 0.009444, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.904794, 
                sourceY = 0.254265, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.097888, 
                centerY = 0.603009, 
                radius = 0.009444, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.069790, 
                sourceY = 0.606021, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.044445, 
                centerY = 0.443587, 
                radius = 0.009444, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.076612, 
                sourceY = 0.451837, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.388536, 
                centerY = 0.202074, 
                radius = 0.009444, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.274036, 
                sourceY = 0.201324, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.744251, 
                centerY = 0.062064, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.794084, 
                sourceY = 0.158064, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.715719, 
                centerY = 0.155432, 
                radius = 0.012959, 
                sourceState = sourceSetExplicitly, 
                sourceX = 0.782736, 
                sourceY = 0.190757, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.667622, 
                centerY = 0.118204, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.659455, 
                sourceY = 0.078204, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.631788, 
                centerY = 0.082258, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.643121, 
                sourceY = 0.120008, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.768446, 
                centerY = 0.089400, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.786446, 
                sourceY = 0.124150, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.805172, 
                centerY = 0.059118, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.810672, 
                sourceY = 0.100618, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.525624, 
                centerY = 0.138548, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.482791, 
                sourceY = 0.162548, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.509623, 
                centerY = 0.182811, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.482790, 
                sourceY = 0.175061, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.417535, 
                centerY = 0.076733, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.373202, 
                sourceY = 0.076483, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.223111, 
                centerY = 0.275574, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.256444, 
                sourceY = 0.275574, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.201020, 
                centerY = 0.239967, 
                radius = 0.012959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.216353, 
                sourceY = 0.204467, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.097134, 
                centerY = 0.132270, 
                radius = 0.010959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.121134, 
                sourceY = 0.138270, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.045526, 
                centerY = 0.096486, 
                radius = 0.010959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.020859, 
                sourceY = 0.137486, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.062159, 
                centerY = 0.113695, 
                radius = 0.010959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.039326, 
                sourceY = 0.140945, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.058762, 
                centerY = 0.134971, 
                radius = 0.010959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.042762, 
                sourceY = 0.161471, 
                spotType = clone
        </rdf:li>
        <rdf:li>
                centerX = 0.413132, 
                centerY = 0.425824, 
                radius = 0.010959, 
                sourceState = sourceAutoComputed, 
                sourceX = 0.439799, 
                sourceY = 0.425824, 
                spotType = clone
        </rdf:li>
    </rdf:Seq>
</crs:RetouchInfo>

Even though there are lots more edits to this photo, it seems plausible they were made to remove lens or sensor dust made more obvious by the heavy use of the dehaze (+14), contrast (+50), and clarity (+2) adjustments.
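The quoted RetouchInfo entries are just key–value pairs, with spot coordinates normalized to the frame. As a rough sketch of how they can be pulled apart (the parsing is mine, not Lightroom's, and the 6000×4000 frame size is a hypothetical stand-in for the actual photo's dimensions):

```python
# Sketch: parse one crs:RetouchInfo entry (as quoted above) into a dict,
# then convert the normalized spot coordinates to pixel positions.
import re

xmp_entry = """
        centerX = 0.044445,
        centerY = 0.443587,
        radius = 0.009444,
        sourceState = sourceAutoComputed,
        sourceX = 0.076612,
        sourceY = 0.451837,
        spotType = clone
"""

def parse_retouch_entry(text):
    """Turn one rdf:li body of 'key = value' pairs into a dict."""
    spot = {}
    for key, value in re.findall(r"(\w+) = ([\w.]+)", text):
        try:
            spot[key] = float(value)   # numeric fields (centerX, radius, ...)
        except ValueError:
            spot[key] = value          # enums like 'clone' stay strings
    return spot

spot = parse_retouch_entry(xmp_entry)

# Coordinates are fractions of the frame; scale by hypothetical image
# dimensions to get pixel positions.
width, height = 6000, 4000
center_px = (round(spot["centerX"] * width), round(spot["centerY"] * height))
print(spot["spotType"], center_px)
```

With the assumed frame size, that first spot lands a few hundred pixels from the left edge, roughly mid-frame vertically, which is consistent with scattered dust removal rather than compositing.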

For what it is worth, this does not seem like a scandal to me — at least, not unless it can be shown edits to White House photos were made to alter what was actually in the frame. But, to review: does the White House digitally alter images? Yes, at least a little. Does the White House conform to accepted editorial standards? I am not sure. Should it? In my view, yes, always — and so should the products of any government photographer. Has the White House done anything remotely close to that Princess of Wales image? Not that I have seen. Should I stop writing this as a series of rhetorical questions? Oh, hell, yes.

Dan Moren, Six Colors:

That’s where optics comes into play. Apple’s not publishing a 1500-word piece about why it disagrees with the EC’s ruling in order to convince the EC to change its mind. Presumably it made all of these arguments in its discussions with the regulator, and if it did not, then its army of lawyers is not doing its job.

No, this piece is for the public and the press (who will relay said arguments to the broad swath of the public that hasn’t consumed them firsthand). It’s there to point out all the great things that Apple does and cast it as the one being targeted unfairly by Europe. Apple’s just here making the world a better place! Fundamentally, Apple wants you to be party to its point of view here: that it’s the one being taken advantage of.

But that argument falls a bit flat when you boil the argument down to its essence.

It is still bizarre to read that press release even keeping in mind a presumed audience of journalists who might neutrally relay a few quotes and link to it. Surely someone at Apple knew what they were doing when they approved this thing; I do not run communications at a multitrillion-dollar company so this strategy is clearly lost on me.

Big news out of Brussels:

The European Commission has fined Apple over €1.8 billion for abusing its dominant position on the market for the distribution of music streaming apps to iPhone and iPad users (‘iOS users’) through its App Store. In particular, the Commission found that Apple applied restrictions on app developers preventing them from informing iOS users about alternative and cheaper music subscription services available outside of the app (‘anti-steering provisions’). This is illegal under EU antitrust rules.

Margrethe Vestager, executive vice president of the European Commission, in the transcript of a speech announcing the Commission’s findings and penalty:

Let me give you three examples of Apple’s anti-steering obligations:

  • First, music streaming developers were not allowed to inform their users, inside their own apps, of cheaper prices for the same subscription on the internet.

  • Second, they were also not allowed to include links in their apps to lead consumers to their websites and pay lower prices there.

  • And third, they were also not allowed to contact their own newly acquired users, for instance by email, to inform them about pricing options after they set up an account.

These anti-steering rules have been among the most aggressively policed of all the App Store policies. They have snared apps for violations like having a link buried in some documentation, requiring even large developers to create special pages — perhaps because Apple saw even small transgressions as opening the door to loopholes. Better to be as tedious and cautious as possible.

Nevertheless, a few years ago, the Commission started looking into complaints that streaming music services — specifically — were disadvantaged by these policies. One could argue its interest in this specific category is because it is one area where European developers have some clout: in addition to Spotify, Deezer and SoundCloud are also European products. That is not a criticism: it should be unsurprising for European regulators to investigate an area where they have the grounds to do so. Alas, this is a relatively narrow investigation ahead of the more comprehensive enforcement of the Digital Markets Act, so treat this as a preview of what is to come for non-compliant companies.

The Commission has illustrated this in its press release with an image that features the icons of — among other apps — Beats Music, which Apple acquired in 2014 and turned into Apple Music, and Rdio, which was shut down in 2015.

Aside from the curious infographic, the Commission released this decision without much supporting documentation, as usual. It promises more information is to come after it removes confidential details. It is kind of an awkward situation if you are used to reading legal opinions made by regulatory bodies elsewhere, many of which post the opinion alongside the decision so it is possible to work through the reasoning. Here, you get a press release and a speech — that is all.

Apple’s response to this decision is barely restrained and looks, frankly, terrible for one of the world’s largest and most visible corporations. There is no friendly soft-touch language here, nor is it a zesty spare statement. This is a press release seasoned with piss and vinegar:

The primary advocate for this decision — and the biggest beneficiary — is Spotify, a company based in Stockholm, Sweden. Spotify has the largest music streaming app in the world, and has met with the European Commission more than 65 times during this investigation.

[…]

Despite that success, and the App Store’s role in making it possible, Spotify pays Apple nothing. That’s because Spotify — like many developers on the App Store — made a choice. Instead of selling subscriptions in their app, they sell them on their website. And Apple doesn’t collect a commission on those purchases.

[…]

When it comes to doing business, not everyone’s going to agree on the best deal. But it sure is hard to beat free.

Strictly speaking — and we all know how much Apple likes that — Spotify pays more than “nothing” to distribute its app on iOS because a developer membership is not free.

But — point taken. Apple is making its familiar claim that iOS software which avoids its in-app purchase model is basically freeloading, but it is very happy for any developer’s success. Happy, happy, happy. Real fuckin’ happy. Left unsaid is how much of this infrastructure — hosting, updates, developer tooling, and so on — is required by Apple’s policies to be used by third-party developers. It has the same condescending vibe as the letter sent to Basecamp in 2020 amidst the Hey app fiasco. At the time, the App Review Board wrote “[t]hese apps do not offer in-app purchase — and, consequently, have not contributed any revenue to the App Store over the last eight years”, as though it is some kind of graceful obligation for Apple to support applications that do not inflate its own services income.

Nevertheless, Apple is standing firm. One might think it would reconsider its pugilism after facing this €1.8 billion penalty, investigations on five continents specifically regarding its payment policies, new laws written to address them, and flagging developer relations — but no. It wants to fight and it does not seem to care how that looks.

Today, Spotify has a 56 percent share of Europe’s music streaming market — more than double their closest competitor’s — […]

Apple does not state Spotify’s closest European competitor but, according to an earlier media statement, it is Amazon Music, followed by Apple Music. This is a complicated comparison: Spotify has a free tier, and Amazon bundles a version of its service with a Prime membership. Apple Music’s free tier is a radio-only service.

On that basis, it does seem odd from this side of the Atlantic if the Commission concluded Apple’s in-app payment policies were responsible for increased prices if the leading service is available free. But that is not what the Commission actually found. It specifically says the longtime policies “preventing [apps] from informing iOS users about alternative and cheaper music subscription services available outside of the app” are illegal, especially when combined with Apple’s first-party advantages. One effect among many could be higher prices paid by consumers. In the cases of Deezer and SoundCloud, for example, that is true: both apps charge more for in-app purchased subscriptions, compared to those purchased from the web, to cover Apple’s commission. But that is only one factor.

Carrying on:

[…] and pays Apple nothing for the services that have helped make them one of the most recognizable brands in the world. A large part of their success is due to the App Store, along with all the tools and technology that Spotify uses to build, update, and share their app with Apple users around the world.

This model has certainly played a role in Apple’s own success, according to an Apple-funded study (PDF): “Apple benefits as well, when the ecosystem it established expands and grows, either directly through App Store commissions or indirectly as the value users get from their iPhones increases”. Apple seems fixated on the idea that many apps of this type have their own infrastructure and, therefore, have little reason to get on board with Apple’s policies other than to the extent required. Having a universal software marketplace is probably very nice, but having each Spotify bug fix vetted by App Review probably provides less value than Apple wants to believe.

Like many companies, Spotify uses emails, social media, text messages, web ads, and many other ways to reach potential customers. Under the App Store’s reader rule, Spotify can also include a link in their app to a webpage where users can create or manage an account.

We introduced the reader rule years ago in response to feedback from developers like Spotify. And a lot of reader apps use that option to link users to a webpage — from e-readers to video streaming services. Spotify could too — but they’ve chosen not to.

About that second paragraph:

  • This change was not made because of developer requests. It was agreed to as part of a settlement with authorities in Japan in September 2021.

    Meanwhile, the European Commission says it began investigating Apple in June 2020, and informed the company of its concerns in April 2021, then narrowing them last year. I mention this in case there was any doubt this policy change was due to regulatory pressure.

  • This rule change may have been “introduced” in September 2021, but it was not implemented until the end of March 2022. It has been in effect for less than two years — hardly the “years ago” timeframe Apple says.

  • For clarification, external account management links are subject to strict rules and Apple approval. Remember how Deezer and SoundCloud offer in-app purchases? Apple’s policies say that means they cannot offer an account management link in their apps.

    This worldwide policy is specific to “reader” apps and is different from region-specific external purchase capabilities for non-“reader” apps. It only permits a single external link — one specific URL — which is only capable of creating and managing accounts, not individually purchased items. Still, it is weird how Spotify does not take advantage of this permission.

  • Spotify, a “reader” app, nevertheless attempted to ship app updates which included a way to get an email with information about buying audiobooks. These updates were rejected because Spotify is only able to email customers in ways that do not circumvent in-app purchases for specific items.

You can quibble with Spotify’s attempts to work around in-app purchase rules — it is obviously trying to challenge them in a very public way — but it is Apple which has such restrictive policies around external links, down to how they may be described. It is a by-the-letter reading to be as strict as possible, lest any loopholes be exploited. This inflexibility would surely be explained by Apple as its “level playing field”, but we all know that is not entirely true.

Instead, Spotify wants to bend the rules in their favor by embedding subscription prices in their app without using the App Store’s In-App Purchase system. They want to use Apple’s tools and technologies, distribute on the App Store, and benefit from the trust we’ve built with users — and to pay Apple nothing for it.

It is not entirely clear Spotify actually wants to do any of these things; it is more honest to say it has to do them if it wants to have an iPhone app. Spotify has routinely disputed various iOS policies only to turn around and reject Apple’s solutions. Spotify complained that users could not play music natively through the HomePod, but has not taken advantage of third-party music app support on the device added in 2020. Instead, it was Apple’s Siri improvements last year that brought Spotify to the HomePod, albeit in an opaque way.

If we accept Apple’s premise, however, it remains a mystery why Apple applies its platform monetization policy to iOS and the related operating systems it has spawned, but not to MacOS. By what criteria, other than Apple’s policy choices, are Mac developers able to sell digital goods however they want — unless they use the Mac App Store — but iOS developers must ask Apple’s permission to include a link to an external payment flow? And that is the conceded, enhanced freedom version of this policy.

There is little logic to the iOS in-app purchase rules, which do not apply equally to physical goods, reader apps, or even some à la carte digital goods. Nobody has reason to believe in this façade any longer.

Apple obviously believes the Mac is a different product altogether with different policies, and that is great. The relatively minor restrictions it has imposed still permit a level of user control unimaginable on iOS, and Apple does not seem to have an appetite to further lock it down to iOS levels. But the differences are a matter of policy, not technology.

Apple justifies its commission saying it “reflects the value Apple provides developers through ongoing investments in the tools, technologies, and services”. That is a new standard which apparently applies only to its iOS-derived platforms, compared to the way it invested in tools for Mac development. Indeed, Apple used to charge more for developer memberships when third-party software was only for the Mac, but even the top-of-the-line $3,500 Premier membership was probably not 30% of most developers’ revenue. Apple also charged for new versions of Mac OS X at this time. Now, it distributes all that for free; developers pay a small annual fee, and a more substantial rate to use the only purchasing mechanism they can use for most app types in most parts of the world.
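To put that comparison in back-of-the-envelope numbers (my own arithmetic, not anything from Apple’s filings): the annual revenue at which a 30 percent commission overtakes the old $3,500 Premier membership fee is modest.

```python
# Back-of-the-envelope: at what annual revenue does a 30% App Store
# commission exceed the old $3,500 Premier developer membership fee?
premier_fee = 3_500        # former top-tier membership, USD per year
commission_rate = 0.30     # standard App Store commission rate

break_even = premier_fee / commission_rate
print(f"${break_even:,.0f} per year")  # roughly $11,667
```

Any developer earning more than about $11,667 a year through in-app purchases pays Apple more under the commission model than under the old flat fee — which is to say, nearly every developer with a viable business.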

For whatever reason — philosophical or financial — Apple’s non-Mac platforms are restricted and it will defend that stance until it is unable to do so. And, no matter how bad that looks, I kind of get it. I believe there could be a place for a selective and monitored software distribution system, where some authority figure has attested to the safety and authenticity of an app. That is not so different conceptually from how Apple’s notarization policies will be working in Europe.

I oscillate between appreciating and detesting an app store model, even if the App Store is a mess. But even when I am in a better mood, it seems crystal clear that such a system would be far better if it were not controlled by the platform owner. The conflict of interest is simply too great. It would be better if some arm’s-length party, perhaps spiritually similar to Meta’s Oversight Board, controlled software and developer policies. I doubt that would fix every complaint with the App Store and App Review process, but I bet it would be a good start.

The consequences of being so pugnacious for over fifteen years of the App Store model have, I think, robbed Apple of the chance to set things right. Regulators around the world are setting new inconsistent standards based on fights between large corporations and gigantic ones, with developers of different sizes lobbying for their own wish lists. Individual people have no such influence, but all of these corporations likely believe they are doing what is right and best for their users.

As the saying goes, pressure makes diamonds, and Apple’s policies are being tested. I hope it can get this right, yet press releases like this one give me little reason to believe in positive results from Apple’s forcibly loosened grip on its most popular platform. And with the Digital Markets Act now in effect, the stakes are high. I never imagined Apple would be thrilled for the rules of its platform to be upended by courts and lawmakers, nor excited by a penalty in the billions, but it sure seems like it would be better for everybody if Apple embraced reality.

It is that time of year again. A panel of smart people, and also me, have completed Jason Snell’s annual survey of how we think Apple is doing when it comes to products, services, and social obligations.

The grades I gave were generally aligned with the rest of the panel — just look at that steep drop in the iPad’s grade, for good reasons. Where I seem to differ from many other people, based on the average grade, is in software quality.

I remain disappointed by how poorly Apple’s software often works for me. A MacOS Ventura update last year introduced a strange problem where my MacBook Pro would seize up any time HDR media was displayed, similar to problems early in the product’s release. No amount of troubleshooting fixed it until I upgraded to MacOS Sonoma which, alas, introduced new issues of its own, like notifications that sometimes fade onscreen instead of animating from the right, and text drawing problems. Smaller details, to be sure, but it all adds up to a fragile experience. I routinely see graphical inconsistencies, hanging first-party applications, Siri problems, and insufficient contrast across all Apple devices I use.

My expectations are not that high. I only wish MacOS, in particular, would not feel as though it was rusting beneath the surface.