Month: August 2024

I do not wish to make a whole big thing out of this, but I have noticed a bunch of little things which make my iPhone a little bit harder to use. For this, I am setting aside things like rearranging the Home Screen, which still feels like playing Tetris with an adversarial board. These are all things which are relatively new, beginning with the always-on display and the Island in particular, neither of which I had on my last iPhone.

The always-on display is a little bit useful and a little bit of a gimmick. I have mine set to hide the wallpaper and notifications. In this setup, however, the position of media controls becomes unpredictable. Imagine you are listening to music when someone wishes to talk to you. You reach down to the visible media controls and tap where the pause button is, knowing that this only wakes the display. You go in for another tap to pause but — surprise — you got a notification at some point and, so, now that you have woken up the display, the notification slides in from the bottom and moves the media controls up, so you have now tapped on a notification instead.

I can resolve this by enabling notifications on the dimmed lock screen view, but that seems more like a workaround than a solution to this unexpected behaviour. A simple way to fix this would be to not show media controls at all while the phone is locked and the display is asleep. They are not functional in that state, but they create an expectation of where those controls will be, and that expectation is not always met.

The Dynamic Island is fussy, too. I frequently interact with it for media playback, but it has a very short time-out. That is, if I pause media from the Dynamic Island, the ability to resume playback disappears after just a few seconds; I find this a little disorientating.

I do not understand how to swap the priority or visibility of Dynamic Island Live Activities. That is to say the Dynamic Island will show up to two persistent items, one of which will be minimized into a little circular icon, while the other will wrap around the display cutout. Apple says I should be able to swap the position of these by swiping horizontally, but I can only seem to make one of the Activities disappear no matter how I swipe. And, when I do make an Activity disappear, I do not know how I can restore it.

I find a lot of the horizontal swiping gestures too easy to activate in the Dynamic Island — I have unintentionally made an Activity disappear more than once — and across the system generally. It seems only a slightly off-centre angle is needed to transform a vertical scrolling action into a horizontal swiping one. Many apps make use of “sloppy” swiping — being able to swipe horizontally anywhere on the display to move through sequential items or different pages — and vertical scrolling in the same view, but the former is too easy for me to trigger when I intend the latter.

I also find the area above the Dynamic Island too easy to touch when I am intending to expand the current Live Activity. This will be interpreted as touching the Status Bar, which will jump the scroll position of the current view to the top.

Lastly, the number of unintended taps I make has, anecdotally, skyrocketed. One reason for this is a change made several iOS versions ago to recognize touches more immediately. If I am scrolling a long list and I tap the display to stop the scroll in-place, resting my thumb onscreen is sometimes read as a tap action on whatever control is below it. Another reason for accidental touches is that pressing the sleep/wake button does not immediately stop interpreting taps on the display. You can try this now: open Mail, press the sleep/wake button, then — without waiting for the display to fall asleep — tap some message in the list. It is easy to do this accidentally when I return my phone to my pocket, for example.

These are all little things but they are a cumulative irritation. I do not think my motor skills have substantially changed in the past seventeen years of iOS device use, though I concede they have perhaps deteriorated a little. I do notice more things behaving unexpectedly. I think part of the reason is this two-dimensional slab of glass is being asked to interpret a bunch of gestures in some pretty small areas.

Lauren Theisen, Defector:

Columbus Blue Jackets winger Johnny Gaudreau and his brother Matthew were killed by a car while biking in Oldmans Township, New Jersey on Thursday night, according to New Jersey State Police. Johnny was 31, and Matthew was 29.

The brothers, originally from New Jersey, were in the area for their sister Katie’s wedding, which was scheduled for Friday. Around 8:00 p.m., police say, the driver of a Jeep Grand Cherokee hit them from behind while trying to pass an SUV that had made room for the bikers. The driver has been charged with two counts of death by auto, and police suspect that the driver had been drinking.

I am not much of a sports person; I do not really follow hockey. But I knew of Gaudreau as a longtime Calgary Flames player. His death and that of his brother would have been avoided had this driver not been drinking, not attempted to pass so recklessly, or not been driving an SUV.

As Theisen writes, over a thousand cyclists were killed by drivers in 2022 in the United States alone. This is a high-profile tragedy, but not an outlier.

Juli Clover, MacRumors:

With the third beta of iOS 18.1, Apple has introduced new Apple Intelligence features for notifications. The notification summarization option that was previously available for the Mail and Messages apps now works with all of your apps.

Matt Birchler posted a video of the screen advertising this feature, showing how the “crazy ones” script could be summarized:

Woof, come up with a better example for this during iOS 18.1 startup, Apple. Sucking all the life out of the “here’s to the crazy ones” piece is a bad look.

Not the worst crime of all time or anything, but not great for those who are upset about AI features sucking the humanity out of art.

Aside from the gall of simplifying an iconic ad campaign to a single-sentence description, this screen barely makes sense. I am guessing few people receive poems or creative writing in an application’s notifications. Those who do would probably prefer it not be summarized. Surely the whole point of a feature like this is to remove the corporate mumbo jumbo from an executive’s email, or to condense a set of alerts from the same app into a single notification.

Sometimes, it is worth taking a second to think about how things look. Part of what makes new technologies special is how they enable human creativity and expression. Not every new invention will be to that end, but surely technology should not be treated as a goal unto itself. If the showcase use of A.I. summarization is to strip a poem — albeit one written for an ad — down to its literal message, what are we even trying to do here?

Cyrille Louis, Le Figaro, originally in French and translated here with DeepL:

After four days in police custody, Pavel Dourov, founder and boss of the encrypted messaging service Telegram, was indicted in Paris on Wednesday evening by two examining magistrates for a litany of offences relating to organised crime, Paris prosecutor Laure Beccuau announced in a statement. The 39-year-old entrepreneur was released under a strict judicial supervision order, which includes the obligation to post a €5 million bond, to report to the police twice a week and to refrain from leaving French territory.

The charges are related to criminal uses of Telegram’s platform and its refusal to cooperate with authorities. I know there are some people who are worried about the potential implications of this for other services. I am not yet sure whether these concerns are merited.

TJ McIntyre:

Anyway, what legal issues arise from the investigation? The content moderation ones are easiest; if Telegram has been notified of CSAM, etc. and has failed to act then it loses the hosting immunity under Art 6 DSA and may be liable under French law on complicity.

The issue of failure to respond to official requests for data may be more difficult. The Telegram entities seem to be based in multiple non-EU jurisdictions, including the British Virgin Islands and Dubai, and Telegram may attempt to argue that French orders do not have extraterritorial effect.

Adam Satariano and Cecilia Kang, of the New York Times, compared Durov’s arrest to those of Megaupload’s Kim Dotcom and the Silk Road’s Ross Ulbricht, neither of which I find particularly controversial. Perhaps I should; let me know if you think either arrest was unjustified. If Durov knew about criminal activity on Telegram and took little action to curtail it — which seems to be the case — it seems reasonable to hold him accountable for his company’s facilitation of that activity.

And from an un-bylined story in Le Monde:

His [Durov’s] lawyer David-Olivier Kaminski said it was “absurd” to suggest Durov could be implicated in any crime committed on the app, adding: “Telegram complies in all respects with European rules concerning digital technology.”

Separately, Durov is also being investigated on suspicion of “serious acts of violence” towards one of his children while he and an ex-partner, the boy’s mother, were in Paris, a source said. She also filed another complaint against Durov in Switzerland last year.

Maybe Durov is a piece of shit and Telegram sucks and this is also worrisome for civil liberties. But we do not yet have evidence for any of these things.

Paul Frazee, on Bluesky’s blog, announced a set of new “anti-toxicity” features. This one seems particularly good:

As of the latest app version, released today (version 1.90), users can view all the quote posts on a given post. Paired with that, you can detach your original post from someone’s quote post.

Quote posts are a good feature, says someone who writes a website largely built around quotes from others, and I appreciate the benefits they provide. But there are also times when someone could be inundated with hostile mentions because they were quoted by someone with a large audience. This is a good way of allowing them to back out while retaining the feature.

Bluesky continues to do some really interesting stuff — from new things like Starter Packs, to rethinking established norms of social media platforms. I hope it succeeds.

French prosecutor Laure Beccuau (PDF) on Monday disclosed the reasons for Pavel Durov’s arrest and detainment. The first two pages are in French; the last two are in English.

Mike Masnick, Techdirt:

In the end, though, a lot of this does seem potentially very problematic. So far, there’s been no revelation of anything that makes me say “oh, well, that seems obviously illegal.” A lot of the things listed in the charge sheet are things that lots of websites and communications providers could be said to have done themselves, though perhaps to a different degree.

Among the things being investigated by French authorities “against person unnamed” — not necessarily Durov — are “complicity” with various illegal communications, money laundering, and providing cryptography tools without authorization or registration. The latter category has raised the eyebrows of many but, I believe, must be read in the context of the whole list of charges. That is, this is not a pure objection to encrypted communications — to the extent Telegram chats may be encrypted — but unauthorized encryption used in complicity with other crimes.

In a way, that might be worse — all forms of communication, no matter whether they are encrypted, are used to facilitate crime. But providers of end-to-end encryption are facing seemingly endless proposals to weaken its protections. I do not think this is France trying to create a backdoor.

I think France is trying to pressure one of its own — Durov is a French citizen — to moderate the massive social network he runs within sensible boundaries. It is proudly carefree, which means it ignores CSAM reports and, according to an April report (PDF) from the Stanford Internet Observatory, does not appear to scan for known CSAM at all.

Telegram appears to believe it is a dumb pipe for users no matter whether they are communicating one-on-one or to a crowd of hundreds of thousands. It seems to think it has no obligation to cooperate with law enforcement in almost any circumstance.

Casey Newton, Platformer:

Anticipating these requests, Telegram created a kind of jurisdictional obstacle course for law enforcement that (it says) none of them have successfully navigated so far. From the FAQ again:

To protect the data that is not covered by end-to-end encryption, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions. The relevant decryption keys are split into parts and are never kept in the same place as the data they protect. As a result, several court orders from different jurisdictions are required to force us to give up any data. […] To this day, we have disclosed 0 bytes of user data to third parties, including governments.

It is important to more fully contextualize Telegram’s claim since it does not seem to be truthful. In 2022, Der Spiegel reported Telegram had turned over data to German authorities about users who had abused its platform. However, following an in-app user vote, it seems Telegram’s token willingness to cooperate with law enforcement on even the most serious of issues dried up.

I question whether Telegram’s multi-jurisdiction infrastructure promise is even real, much less protective against legal demands, given it appears in the same FAQ section as the probably false “0 bytes of user data” claim. Even so, Telegram says it “can be forced to give up data only if an issue is grave and universal enough” for several unrelated and possibly adversarial governments to agree on the threat. CSAM is globally reviled. Surely even hostile governments could agree on tracking those predators. Yet it seems Telegram, by its own suspicious “0 bytes” statistic, has not complied with even those requests.

Durov’s arrest presents an internal conflict for me. A world in which hosts of user-created content are responsible for every action of their users is not conducive to effective internet policy. On the other hand, I think corporate executives should be more accountable for how they run their businesses. If Durov knew about severe abuse and impeded investigations by refusing to share information the company possessed, that should be penalized.

As of right now, though, all we have are a lot of questions about what this arrest means. There is simply little good information, and what crumbs are available lead only to more confusion.

The extremely normal U.S. House Committee on the Judiciary posted a letter sent from Mark Zuckerberg to Chairman Jim Jordan.1 In it, Zuckerberg says Meta felt “pressured” by the Biden administration to more aggressively moderate users’ posts during the COVID-19 pandemic, that the administration was “wrong” for doing so, and says he “regret[s] that we were not more outspoken about it”.

This is substantially not news. Ryan Tracy of the Wall Street Journal reported last June the existence of these grievances within Meta. To be clear, this is contrition over Meta’s reluctance to more forcefully respond to government complaints about platform moderation. Nevertheless, it set off a wave of coverage about the Biden administration’s social media complaints during the pandemic.

Look a little closer, though, and it is a fairly embarrassing message which comes across less as a “big win for free speech”, as the Committee called it, and more like sophistry. Zuckerberg admits Meta decided its own moderation policy. It chose which actions to take, including issuing a direct response to the administration at the time. The government’s actions were also not as chilling as they sound. Indeed, many of the same issues were raised in Murthy v. Missouri, where what were actually tense conversations during a global pandemic were grossly misrepresented to portray U.S. officials as censorial and threatening.

But I wanted to draw your attention to something specific in Zuckerberg’s letter, as summarized by Hannah Murphy, of the Financial Times:

Zuckerberg also said he would no longer make a contribution to support electoral infrastructure via the Chan Zuckerberg Initiative, his philanthropic group, as he had previously done. The donations totalled more than $400mn and were made to non-profit groups including the Chicago-based Center for Tech and Civic Life. They were intended to make sure local election jurisdictions would have appropriate voting resources during the pandemic, he said. But he added that they had been interpreted as “benefiting one party over the other”.

Zuckerberg does not say who, specifically, interpreted his foundation’s contributions toward promoting information about voting as somehow partisan, nor does he question the validity of these ridiculous complaints. But his concerns about the appearance of personal partisanship do not seem to carry over to his company. To name just one example, Meta is listed as a sponsor of the 2024 Canada Strong and Free Regional Networking Conference, a conservative activist event which this year is hosting Chris Rufo. That sponsorship is what kicked me into writing this whole thing instead of being satisfied with a couple of snarky posts. How is it that Meta will happily contribute to an explicitly partisan group, but Zuckerberg’s foundation promoting the general concept of voting is beyond the pale?

This letter is Zuckerberg ingratiating himself with lawmakers investigating a supposed conspiracy between tech companies, watchdog organizations, and an opposition political party. It is politically beneficial to a specific party and viewpoint. For Zuckerberg, whose objective is nominally to “not play a role one way or another — or to even appear to be playing a role”, this seems like a dishonest choice.


  1. The letter’s paragraphs are fully justified but hyphenation has not been enabled, so it looks like crap and is harder to read. ↥︎

Joseph Cox, 404 Media:

Media giant Cox Media Group (CMG) says it can target adverts based on what potential customers said out loud near device microphones, and explicitly points to Facebook, Google, Amazon, and Bing as CMG partners, according to a CMG presentation obtained by 404 Media.

The deck says things like “smart devices capture real-time intent data by listening to our conversations” which seems like an obviously privacy-hostile invention on its face. But I continue to doubt any of this voice collection is actually happening, no matter how many buzzwords Cox Media Group throws in a PowerPoint presentation, when there is a far simpler explanation: they are lying. It already feels like behavioural advertising is targeting every word we say, so why not lean into that? Unscrupulous marketers love that kind of stuff. Feed them what they want.

If anyone from Cox Media Group would like to prove to me this is happening as described, give me a demo. I would love to see your creepy technology.

Jess Weatherbed, the Verge:

Image manipulation techniques and other methods of fakery have existed for close to 200 years — almost as long as photography itself. (Cases in point: 19th-century spirit photography and the Cottingley Fairies.) But the skill requirements and time investment needed to make those changes are why we don’t think to inspect every photo we see. Manipulations were rare and unexpected for most of photography’s history. But the simplicity and scale of AI on smartphones will mean any bozo can churn out manipulative images at a frequency and scale we’ve never experienced before. It should be obvious why that’s alarming.

This excellent piece is a necessary correction for too-simple comparisons between Google’s Reimagine feature and Adobe Photoshop. It also encouraged me to re-read my own article about the history of photo manipulation to see if it holds up and, thankfully, I think it mostly does, even as Google’s A.I. editing tools have advanced from useful to irresponsible.

Last year’s features mostly allowed users to reposition and remove objects from their shots. This still seems fine, but one aspect of my description has not aged well. I wrote, in the context of removing a trampoline from a photo of a slam dunk, that Google’s tools make it “a little bit easier […] to lie”. For object removal, that remains true; for object addition — which is what Google’s Reimagine feature allows — it is much easier.

Me:

The questions that are being asked of the Pixel 8’s image manipulation capabilities are good and necessary because there are real ethical implications. But I think they need to be more fully contextualized. There is a long trail of exactly the same concerns and, to avoid repeating ourselves yet again, we should be asking these questions with that history in mind. This era feels different. I think we should be asking more precisely why that is.

Between Weatherbed’s piece and Sarah Jeong’s article on similar themes, I think some better context is rapidly taking shape, driven largely by Google’s decision to include additive features with few restrictions. A more responsible implementation of A.I. additions would limit the kinds of objects which could be added — balloons, fireworks, a big red dog. But, no, it is more important to Google — and X — to demonstrate their technological bona fides.

These technologies are different because they allow basically anyone to make basically any image realistically and on command with virtually no skill. Oh, and they can share them instantly. Two hundred years of faked photos cannot prepare us for the wild ride ahead.

Kate Conger and Ryan Mac, in an excerpt from their forthcoming book “Character Limit” published in the New York Times:

Mr. Musk’s fixation on Blue extended beyond the design, and he engaged in lengthy deliberations about how much it should cost. Mr. [David] Sacks insisted that they should raise the price to $20 a month, from its current $4.99. Anything less felt cheap to him, and he wanted to present Blue as a luxury good.

[…]

Mr. Musk also turned to the author Walter Isaacson for advice. Mr. Isaacson, who had written books on Steve Jobs and Benjamin Franklin, was shadowing him for an authorized biography. “Walter, what do you think?” Mr. Musk asked.

“This should be accessible to everyone,” Mr. Isaacson said, no longer just the fly on the wall. “You need a really low price point, because this is something that everyone is going to sign up for.”

I learned a new specific German word today as a direct result of this article: fremdschämen. It is more or less the opposite of schadenfreude; instead of taking pleasure in someone else’s embarrassment, you feel their pain.

This is humiliating for everyone involved: Musk, Sacks — who compared Twitter’s blue checkmarks to a Chanel handbag — and Jason Calacanis of course. But most of all, this is another blow to Isaacson’s credibility as an ostensibly careful observer of unfolding events.

Max Tani, of Semafor, was tipped off to Isaacson’s involvement earlier this year by a single source:

“I wanted to get in touch because we’re including an item in this week’s Semafor media newsletter reporting that you actually set the price for Twitter Premium,” I wrote to Isaacson in March. “We’ve heard that while you were shadowing Elon Musk for your book, he told Twitter staff that you had advised him on what the price should be, and he thought it was a good idea and implemented it.”

“Hah! That’s the first I’d heard of this. It’s not true. I’m not even sure what the price is. Sorry,” he replied.

This denial is saved from being a lie only on the grounds that Isaacson did not literally “set the price”, as Tani put it, on the subscription service. In all meaningful ways, though, it is deceptive.

An un-bylined report in Le Monde:

French judicial authorities on Sunday extended the detention of the Russian-born founder and chief of Telegram Pavel Durov after his arrest at a Paris airport over alleged offenses related to the popular but controversial messaging app.

I believe it is best to wait until there is a full description of the crimes French authorities are accusing Durov of committing before making judgements about the validity of this arrest. Regardless of what is revealed, I strongly suspect many of the more loudmouthed knee-jerk reactionary crowd will look pretty stupid and will, in all likelihood, dig in their heels, looking even stupider in the process.

This Le Monde article goes on to describe Telegram as an “encrypted messaging app”.

Matthew Green:

But this arrest is not what I want to talk about today.

What I do want to talk about is one specific detail of the reporting. Specifically: the fact that nearly every news report about the arrest refers to Telegram as an “encrypted messaging app.” […]

This phrasing drives me nuts because in a very limited technical sense it’s not wrong. Yet in every sense that matters, it fundamentally misrepresents what Telegram is and how it works in practice. And this misrepresentation is bad for both journalists and particularly for Telegram’s users, many of whom could be badly hurt as a result.

Despite the company’s press page saying “[e]verything sent on Telegram is securely encrypted” and building much of its marketing around how “safe” and “secure” it is, there is a big difference between what Telegram does and the end-to-end encryption used by services like Signal and WhatsApp. There is, in fact, no way to enable what Telegram calls “secret chats” by default.

One can quibble with Telegram’s choices. Whether to use an app which does not support end-to-end encryption by default is very much a user’s decision. But one can only make that decision if Telegram provides accurate and clear information. I have long found Apple’s marketing of iMessage deceptive; Telegram’s explanation of its own privacy and security is far more exploitative of users’ trust.

Paris Marx:

[…] If he [Sam Altman] was serious about wanting to extend people’s lifespans by 10 years, he wouldn’t be looking at sci-fi fantasies, but at the policies that can deliver those benefits and how to get the US political system to move them forward.

[…]

Silicon Valley claims we can solve these serious social problems through technological innovation. On its face, that might seem to make sense. We can see many examples through history where the rollout of new technologies has improved our quality of life and increased our lifespans. But when tech billionaires use that term, they actually mean letting VC-funded tech companies deploy whatever they want on an unsuspecting public with little regulation and no threat of accountability when things go wrong.

One weird thing that happens to me more than it should: I reserve a bunch of books at the library, each with a long queue of other borrowers ahead of it, and I assume they will slowly trickle down to me. What actually happens is that all of them become available at the same time, and then I feel compelled to churn through them as quickly as I can so I am able to return them in a timely manner. Anyway, I chased Kyle Chayka’s “Filterworld” with Evgeny Morozov’s “To Save Everything, Click Here”, and I found the latter particularly thoughtful. I sometimes disagreed with Morozov’s conclusions, but his interrogation of the Silicon Valley ethos is necessary and considered.

The kinds of ideas Marx is writing about here are what Morozov would call “technological solutionism”. These are the procedural changes and supposedly revolutionary products and services intended to produce a desired social outcome when, instead, there are proven effective public policies which ought to be preferred. There might be a role for new technologies, of course, but “biohacking” is not going to be as effective as, say, universal healthcare for extending the lifespan of most people.

Katie Notopoulos, Business Insider:

A company that makes parental monitoring software called Qustodio recently released a report about app use for kids and teens based on its analysis of anonymous data from about 180,000 of its US users. Some of the information about what young people are doing online is what you’d expect: teenagers love watching TikTok and using Snapchat; younger kids under 13 are most interested in Roblox (53%) and YouTube (52%).

But there was one statistic that made my head spin: 31% of 7- to 9-year-olds use the X (Twitter) app.

I’m sorry, but … I can’t believe there’s any way in any possible universe that’s true.

The same report found 29% of kids aged 7–9 use Disney Plus and, I am sorry, if you have any faith in this data, please ensure your bullshit detector is better calibrated.

Chance Miller, 9to5Mac:

Apple has changed its screen recording privacy prompt in the latest beta of macOS Sequoia. As we reported last week, Apple’s initial plan was to prompt users to grant screen recording permissions weekly.

In macOS Sequoia beta 6, however, Apple has adjusted this policy and will now prompt users on a monthly basis instead. macOS Sequoia will also no longer prompt you to approve screen recording permissions every time you reboot your Mac.

After I wrote about the earlier permissions prompt, I got an email from Adam Selby, who manages tens of thousands of Macs in an enterprise context. Selby wanted to help me understand the conditions which trigger this alert, and to give me some more context. The short version is that Apple’s new APIs allow clearer and more informed user control over screen recording, to the detriment of certain types of application, and — speculation alert — it is possible this warning will not appear in the first versions of MacOS Sequoia shipped to users.

Here is an excerpt from the release notes for the MacOS 15.0 developer beta:

Applications utilizing deprecated APIs for content capture such as CGDisplayStream & CGWindowListCreateImage can trigger system alerts indicating they might be able to collect detailed information about the user. Developers need to migrate to ScreenCaptureKit and SCContentSharingPicker. (120910350)

It turns out the “and” in that last sentence is absolutely critical. In last year’s beta releases of MacOS 14, Apple began advising developers it would be deprecating CoreGraphics screenshot APIs, and that applications should migrate to ScreenCaptureKit. However, this warning was removed by the time MacOS 14.0 shipped to users, only for it to reappear in the beta versions of 14.4 released to developers earlier this year. Apple’s message was to get on board — and fast — with ScreenCaptureKit.
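To make the shape of that first migration concrete, here is a rough sketch of what replacing a CGWindowListCreateImage-style screenshot with ScreenCaptureKit can look like. This is my own illustration, not Apple sample code; it assumes the async SCShareableContent and one-shot SCScreenshotManager APIs Apple documents for MacOS 14, and elides the configuration a real app would need:

```swift
import ScreenCaptureKit

// A rough sketch of the migration Apple is asking for: instead of the
// deprecated CGWindowListCreateImage, enumerate shareable content and
// capture through ScreenCaptureKit.
func captureMainDisplay() async throws -> CGImage? {
    // Ask the system which displays and windows this app may capture.
    let content = try await SCShareableContent.excludingDesktopWindows(
        false,
        onScreenWindowsOnly: true
    )
    guard let display = content.displays.first else { return nil }

    // A filter describing what to capture: here, an entire display.
    let filter = SCContentFilter(display: display, excludingWindows: [])
    let configuration = SCStreamConfiguration()
    configuration.width = display.width
    configuration.height = display.height

    // The one-shot screenshot API introduced alongside this migration.
    return try await SCScreenshotManager.captureImage(
        contentFilter: filter,
        configuration: configuration
    )
}
```

The extra ceremony is the point: everything flows through system-mediated objects, which is what allows the system to know — and tell the user — what an app is capturing.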

ScreenCaptureKit was only the first part of this migration for developers. The second part — returning to the all-important “and” from the 15.0 release notes — is SCContentSharingPicker. That is the selection window you may have seen if you have recently tried screen sharing with, say, FaceTime. It has two agreeable benefits: first, it is not yet another permissions dialog; second, it allows the user to know every time the screen is being recorded because they are actively granting access through a trusted system process.
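Adopting the picker is, as far as I understand Apple’s documentation, not much code. The following is an illustrative, untested sketch; the observer methods are those documented for SCContentSharingPickerObserver:

```swift
import ScreenCaptureKit

// Sketch of opting in to the system-owned picker. The app never names a
// window or display itself; the user chooses one in a trusted system
// process, and the app is told what was granted via an observer.
final class PickerCoordinator: NSObject, SCContentSharingPickerObserver {
    func activate() {
        let picker = SCContentSharingPicker.shared
        picker.isActive = true
        picker.add(self)
        picker.present()   // shows the system selection window
    }

    // Called with a filter describing exactly what the user chose.
    func contentSharingPicker(_ picker: SCContentSharingPicker,
                              didUpdateWith filter: SCContentFilter,
                              for stream: SCStream?) {
        // Start or reconfigure an SCStream with `filter` here.
    }

    func contentSharingPicker(_ picker: SCContentSharingPicker,
                              didCancelFor stream: SCStream?) {
        // The user dismissed the picker without granting anything.
    }

    func contentSharingPickerStartDidFailWithError(_ error: Error) {
        // The picker itself could not be shown.
    }
}
```

Notice there is no permission request anywhere in the sketch: the grant is implicit in the user’s selection.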

This actually addresses some of the major complaints I have with the way Apple has built out its permissions infrastructure to date:

[…] Even if you believe dialog boxes are a helpful intervention, Apple’s own sea of prompts do not fulfil the Jobs criteria: they most often do not tell users specifically how their data will be used, and they either do not ask users every time or they cannot be turned off. They are just an occasional interruption to which you must either agree or find some part of an application is unusable.

Instead of the binary choices of either granting apps blanket access to record your screen or having no permissions dialog at all for what could be an abused feature, this picker gives users both control over, and knowledge of, how an app may record their screen. It forgoes a scary catch-all dialog in favour of ongoing consent. A user will know exactly when an app is recording their screen, and exactly what it is recording, because that permission is no longer something an app gets, but something given to it by this picker.

This makes sense for a lot of screen recording use cases — for example, if someone is making a demo video, or if they are showing their screen in an online meeting. But if someone is trying to remotely access a computer, there is a sort of Möbius strip of permissions where you need to be able to see the remote screen in order to grant access to be able to see the screen. The Persistent Content Capture entitlement is designed to fix that specific use case.

Even though I think this structure will work for most apps, most of the time, it will add considerable overhead for apps like xScope, which allows you to measure and sample anything you can see, or ScreenFloat — a past sponsor — which allows you to collect, edit, and annotate screenshots and screen recordings. To use these utilities and others like them, a user will need to select the entire screen from the window picking control every time they wish to use a particular tool. Something as simple as copying an onscreen colour is now a clunky task without, as far as I can tell, any workaround. That is basically by design: what good is it to have an always-granted permission when the permissions structure is predicated on ongoing consent? But it does mean these apps are about to become very cumbersome. Either you need to grant whole-screen access every time you invoke a tool (or launch the app), or you do so a month at a time — and there is no guarantee the latter grace period will stick around in future versions of MacOS.

I think it is possible MacOS 15.0 ships without this dialog. In part, that is because its text — “requesting to bypass the system window picker” — is technical and abstruse, written with seemingly little care for average user comprehension. I also think that could be true because it is what happened last year with MacOS 14.0. That is not to say it will be gone for good; Apple’s intention is very clear to me. But hopefully there will be some new APIs or entitlement granted to legitimately useful utility apps built around latent access to seeing the whole screen when a user commands. At the very least, users should be able to grant access indefinitely.

I do not think it is coincidental this Windows-like trajectory for MacOS has occurred as Apple tries to focus more on business customers. In an investor call last year, Tim Cook said Apple’s “enterprise business is growing”. In one earlier this month, he seemed to acknowledge it was a factor, saying the company “also know[s] the importance of security for our users and enterprises, so we continue to advance protections across our products” in the same breath as providing an update on the company’s Mac business. This is a vague comment and I am wary of reading too much into it, but it is notable to see the specific nod to Mac enterprise security this month. I hope this does not birth separate “Home” and “Professional” versions of MacOS.

Still, there should be a way for users to always accept the risks of their actions. I am confident in my own ability to choose which apps I run and how to use my own computer. For many people — maybe most — it makes sense to provide a layer of protection for possibly harmful actions. But there must also be a way to suppress these warnings. Apple ought to be doing better on both counts. As Michael Tsai writes, the existing privacy system “feels like it was designed, not to help the user understand what’s going on and communicate their preferences to the system, but to deflect responsibility”. The new screen recording picker feels like an honest attempt at restricting what third-party apps are able to do without the user’s knowledge, and without burdening users with an uninformative clickwrap agreement.

But, please, let me be riskier if I so choose. Allow me to let apps record the entire screen all the time, and open unsigned apps without going through System Settings. Give me the power to screw myself over, and then let me get out of it. One does not get better at cooking by avoiding tools that are sharp or hot. We all need protections from our own stupidity at times, but there should always be a way to bypass them.

Apple today announced forthcoming iOS changes for E.U. users, including a more informative first-run browser choice screen — one that will require users to scroll to the bottom before confirming — and the ability to delete every default app except Phone and Settings. Also, this:

For users in the EU, iOS 18 and iPadOS 18 will also include a new Default Apps section in Settings that lists defaults available to each user. In future software updates, users will get new default settings for dialing phone numbers, sending messages, translating text, navigation, managing passwords, keyboards, and call spam filters. To learn more, view Update on apps distributed in the European Union.

The way this works currently is the user taps on any app capable of being set as a default for a particular category, then taps the submenu for setting the default app, then picks whichever app they prefer. If you want to set DuckDuckGo as your default browser, for example, you can do so from the Default Browser App submenu in DuckDuckGo, Safari, or any other web browser app you have installed.

I do not think this is particularly confusing, but I do think the version Apple is creating specifically for the E.U. is a far clearer piece of design. Not only is it what I would be looking for if I were trying to change a default app, it also tacitly advertises the ability to customize an iPhone or iPad. It is a solution designed to appease regulators and, in doing so, makes things better for users. It reminds me of the regulator-influenced European version of the Amazon Prime cancellation flow which, for users, is far superior to the one available elsewhere.

If someone were designing visual interfaces for clarity, they would end up with the European version of these screens. Which makes me half-wonder — and half-assume — the motives for designing them the other way.

Niléane, MacStories:

These changes to the browser choice screen and the ability to select new default apps on iOS and iPadOS come a few months after the European Commission announced their intention to open a non-compliance investigation against Apple in regard to the DMA.

It is unclear to me if Apple needs to publicly announce these changes in order to allow regulators to review them. I imagine there is not a confidential process by design, perhaps to put public pressure on gatekeepers to follow through with proposed updates.

Still, I am hopeful changes like the Default Apps screen will both appease regulators and become available globally. Perhaps Apple will never enable third-party app stores elsewhere until forced by law, but there are many features created to satisfy E.U. regulators which I believe would benefit iPhone and iPad users everywhere.

Sarah Jeong, the Verge:

If I say Tiananmen Square, you will, most likely, envision the same photograph I do. This also goes for Abu Ghraib or napalm girl. These images have defined wars and revolutions; they have encapsulated truth to a degree that is impossible to fully express. There was no reason to express why these photos matter, why they are so pivotal, why we put so much value in them. Our trust in photography was so deep that when we spent time discussing veracity in images, it was more important to belabor the point that it was possible for photographs to be fake, sometimes.

This is all about to flip — the default assumption about a photo is about to become that it’s faked, because creating realistic and believable fake photos is now trivial to do. We are not prepared for what happens after.

I have written about the long history of manipulated photographs, but I think Jeong’s framing accurately captures how these new technologies will shift expectations of how they reflect reality. There is a key difference between something which has always been possible, and something which is increasingly simple. I am not sure if there will be a critical mass moment, but the slide — first gradual and then sudden — is worth reckoning with. The mere possibility that just anyone can make convincing fakes is enough to erode our trust in what is real.

Here is a little postscript to that earlier piece I wrote about A.I.-faked images and to another I wrote about altered images in news coverage. One of the most-downloaded generated images on Adobe Stock in searches for war-related material depicts a rose growing in rubble. When I reverse-searched the image, I stumbled across a different version of the same concept:

Image of a rose growing from a crack in the pavement

This picture has been reproduced widely across the web; I got this one from a tweet. It is often accompanied by the text of the third verse of Tupac Shakur’s “The Rose That Grew From Concrete”. Look at the annotation in that link to the lyrics website Genius, and you will see the same image. Look a little closer and you will see the watermark in the lower-left corner: this image is by a user of Worth1000.com.

For those not already familiar, Worth1000 was a long-running contest site with separate categories for photo manipulation and photography. The site was acquired by DesignCrowd which, until recently, preserved an archive of the contests.

Here is where things get strange. As I was looking into this image, I was sure I would find it was entered in one of Worth1000’s Photoshop contests. Then I could write an article about the parallels between an A.I.-generated image and one faked by a person, and that would be very neat. But after coming up empty-handed in my searches for it in the Photoshop contests section, I looked in the photography contests — and found it in “Song Title Literalisms 2010” entered by a user named “Supagray”. I would link to that contest to prove my point but, sadly, DesignCrowd erased its archives sometime in the past year. I tried tracking down this “Supagray” user, but was unsuccessful.

I really thought my expectations would be proven correct — that I would find this image was created in software. All the indicators were there. But I was wrong. I do not find it an especially interesting photo. But I appreciate the user who made it found a way to capture it for real, probably by jamming a grocery store rose into some pavement. Maybe we will collectively experience a similar feeling when we know an improbable image was not generated by A.I. tools, but was actually made for real.

If we do, it will likely pale in comparison to the number of times the opposite will be true — or, perhaps more often, when it is even possible the opposite could be true. Since anyone can now radically and realistically alter an entire scene within minutes of taking a photo, our expectations need to change. But we still need to be able to believe real newsworthy photos and videos are, indeed, real.

Tiffany Ng wrote a fantastic article for MIT Technology Review about the gradient of recommendations that runs between the automated and the more personal. I think the whole thing is worth reading — call it a personal recommendation — but I wanted to highlight a few specific things in no particular order. First:

Music enthusiasts are creating new ways to reinvigorate this sense of curiosity, building everything from competitive recommendation leagues to interactive music maps. Before streaming, discovering music was work that brought a distinctly emotional reward. […] Sharing music was a much more personal, peer-to-peer exercise, and making a mixtape for a crush was a substantial labor of love. […]

This is followed by an immediate comparison to today’s automated systems which allow anyone to generate a playlist with little effort or emotional investment. This is an agreeable argument, but I also think much of the emotional connection comes from the personal connection the giver — and, ideally, the recipient — are hoping to achieve. Put another way, if you found someone else’s mixtape on the ground, you might treat its recommendations as barely more consequential than those from Apple Music or Spotify.

Next:

Similar to Music League is a private Facebook community called Oddly Specific Playlists, a group that connects users from all corners of the internet with playlists inspired by (as the name suggests) very specific things. […]

“If a social network is any good, then it has to have some actual people putting new content into the ecosystem and organizing it in a coherent way — like someone making a hand-curated playlist,” says Kyle Chayka, a New Yorker staff writer and author of Filterworld: How Algorithms Flatten Culture. That’s just what the members of Oddly Specific Playlists do, even if the results can be hard to manage.

Oddly Specific Playlists reminds me of a long-defunct service called the Yams. The Yams allowed members to text one of its operators with playlist requests in language as specific or as vague as they wanted. When I asked Shannon Connolly, CEO of the Yams, about scalability she mentioned having a larger staff, but I still had concerns about its longevity — concerns that were, it turns out, sadly justified. A Facebook group of hundreds of thousands of people sure is one way to achieve a similar result at scale.

Also, I just finished Chayka’s book, and I did not love it. The premise is very good: how our world is shaped by automated recommendation features created by companies with their own motives. But few of the examples felt complete and I did not feel like I was learning much. Chayka spent too many pages on the interior design trends of coffee shops and Airbnbs. You may like it more than I did, and if you are looking for something along similar lines, I preferred Tom Vanderbilt’s “You May Also Like” and especially Cathy O’Neil’s “Weapons of Math Destruction”.

One last thing:

Alex Antenna, who has created a website called Unchartify to offer a more manual way of navigating Spotify’s database, attributes these pigeonholes to Spotify’s push for personalization. He built his site to bypass the plethora of “made for you” playlists and highlight lesser-known corners of Spotify’s database.

Unchartify is extremely cool, and you do not need to be a Spotify user to take advantage of it — just click “continue as guest” on the homepage. You can browse by genre or, more helpfully, begin with an artist, album, or label you already like and fall down a narrowing genre rabbit hole.

Erin Brooks:

I use many brands of cameras for my professional work: Leica, FujiX, Canon, as well as Zeiss attachments for my phone. But the fact remains that more than 50% of my work continues to be shot on iPhone, using the native camera app, and editing in Lightroom Mobile, because it’s the camera I have with me. The photo I took that won this year’s award was taken in a very brief quiet moment, in an otherwise busy aquarium, of my young nephew who never stands still for longer than a second. The 2016 photo was similar: my then toddler gave me a very small window in which she was willing to sit there with the leaf that matched her eyes.

Do check out Brooks’ stunning work in this post, and the images created by her and other photographers in this year’s iPhone Photography Awards gallery. But notice how, as Brooks points out, very little of it depends solely on the technology in hand. Yes, some of the finalists in this year’s awards are using very recent iPhone models; others, though, are not, and I do not think it detracts from the work they have created. Even the Portrait Mode glitch I think I see in one photo is completely fine.

Photographers have captured memorable images on everything from the best medium- and large-format cameras, to instant cameras with expired film. But catching those specific moments? That is all on the photographer. Sometimes it is the result of exceptional planning; at other times, it is a lucky catch. A good photographer can prepare for the former and anticipate the latter. I think Brooks is right: I think people are getting better at this.

M.G. Siegler:

Just close your eyes and imagine a single interface where all the world’s content is served up to you and you’re just one click away from watching any of it. Not a few clicks and navigating some other UI. Not a click and a dialog box saying you can’t access the content. Just a click and you’re watching.

This sounds like… well, iTunes. Or if you want to use the heir apparent in our streaming age: Spotify. Again, that’s sort of the dream. That interface and ease of use, but for all video content. No more need to use Google to see which show is playing on what service. Or which movie is coming when to a streaming service you already subscribe to. It all just works.

We almost had this in the first years of Netflix, when it was chock full of licensed movies and shows you could stream on demand. Then the handful of large corporations responsible for all layers of media production and distribution realized they could stream their own library. Now, over half of Netflix’s library is original movies and shows, and it competes with Disney Plus, Hulu (also owned by Disney), Max (owned by Warner), and Peacock (owned by NBCUniversal). As Siegler points out, all of these are being offered in various bundle deals. Canadian ISP Rogers, for example, includes Disney Plus access in cable TV subscriptions.

I think people would love a model more similar to streaming music. I think media conglomerates would hate it. Their relationship with the iTunes Store was less stable than music labels’, and they continue to be more interested in fighting illegal copies of their media than in trying to meet viewers where they already are.

Leyland Cecco, the Guardian:

Canada’s Conservative party has deleted a social media campaign video with a heavily nationalist message after much of the video featured scenes from other countries, including Ukrainian farmers, Slovenian homes, London’s Richmond Park and a pair of Russian fighter jets.

I am deliberately linking to the Guardian because this is an international embarrassment. There are many nonsensical things about this ad — the nauseating jingoism, the hyper-specific view of what a Canadian family looks like, and the hack messaging. But it is wild how much of this footage actively avoids being Canadian when you consider the microstock photography model effectively began in Calgary, hometown of Conservative Party leader Pierre Poilievre. There are so many stock photo and video providers based in this city alone, including Dissolve and Hero Images to name just two; there are more elsewhere in the province, such as Indigenous Images.

Yet, it is not surprising this party’s nationalist rhetoric is not matched by patriotic behaviour. Most of Canada’s major political parties operate some kind of merch store. The t-shirts sold by the Liberal Party are made in Canada. The Green Party’s merchandise is printed in B.C. by a local business. The New Democratic Party does not operate a store, but donors can get a t-shirt which is, based on what I can find, made and printed in Canada. NDP leader Jagmeet Singh previously offered merch made in Vancouver.

The Conservative Party’s merch, on the other hand, does not appear to be made in Canada. All of their t-shirts are “made with ancient dyes, no Liberal nonsense” — I do not know what they mean by that try-hard politicisation of clothing dyes — but I did not get a reply to my email asking where they were made. As with the stock video used in the ad, it is not as though there are no Canadian t-shirt manufacturers — far from it. But I guess the Conservative Party is less interested in actually supporting local businesses than it is in saying it does.