The Fight for End-to-End Encryption Is Worldwide

Since 2022, the European Union has been trying to pass legislation requiring digital service providers to scan for and report CSAM as it passes through their services.

Giacomo Zandonini, Apostolis Fotiadis, and Luděk Stavinoha, Balkan Insight, with a good summary in September:

Welcomed by some child welfare organisations, the regulation has nevertheless been met with alarm from privacy advocates and tech specialists who say it will unleash a massive new surveillance system and threaten the use of end-to-end encryption, currently the ultimate way to secure digital communications from prying eyes.


The proposed regulation is excessively “influenced by companies pretending to be NGOs but acting more like tech companies”, said Arda Gerkens, former director of Europe’s oldest hotline for reporting online CSAM.

This is going to require a little back-and-forth, and I will pick up the story with quotations from Matthew Green’s introductory remarks to a panel before the European Internet Services Providers Association in March 2023:

The only serious proposal that has attempted to address this technical challenge was devised — and then subsequently abandoned — by Apple in 2021. That proposal aimed only at detecting known content using a perceptual hash function. The company proposed to use advanced cryptography to “split” the evaluation of hash comparisons between the user’s device and Apple’s servers: this ensured that the device never received a readable copy of the hash database.


The Commission’s Impact Assessment deems the Apple approach to be a success, and does not grapple with this failure. I assure you that this is not how it is viewed within the technical community, and likely not within Apple itself. One of the most capable technology firms in the world threw all their knowledge against this problem, and were embarrassed by a group of hackers: essentially before the ink was dry on their proposal.

Daniel Boffey, the Guardian, in May 2023:

Now leaked internal EU legal advice, which was presented to diplomats from the bloc’s member states on 27 April and has been seen by the Guardian, raises significant doubts about the lawfulness of the regulation unveiled by the European Commission in May last year.

The European Parliament in a November 2023 press release:

In the adopted text, MEPs excluded end-to-end encryption from the scope of the detection orders to guarantee that all users’ communications are secure and confidential. Providers would be able to choose which technologies to use as long as they comply with the strong safeguards foreseen in the law, and subject to an independent, public audit of these technologies.

Joseph Menn, Washington Post, in March, reporting on the results of a European court ruling:

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.


The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

This is not directly about the proposed CSAM measures, but it sets a precedent for European regulators to follow.

Natasha Lomas, TechCrunch, this week:

The most recent Council proposal, which was put forward in May under the Belgian presidency, includes a requirement that “providers of interpersonal communications services” (aka messaging apps) install and operate what the draft text describes as “technologies for upload moderation”, per a text published by Netzpolitik.

Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”

Meredith Whittaker, CEO of Signal, issued a PDF statement criticizing the proposal:

Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games. They’ve come back to the table with the same idea under a new label. Instead of using the previous term “client-side scanning,” they’ve rebranded and are now calling it “upload moderation.” Some are claiming that “upload moderation” does not undermine encryption because it happens before your message or video is encrypted. This is untrue.

Patrick Breyer, of Germany’s Pirate Party:

Only Germany, Luxembourg, the Netherlands, Austria and Poland are relatively clear that they will not support the proposal, but this is not sufficient for a “blocking minority”.

Ella Jakubowska on X:

The exact quote from [Věra Jourová] the Commissioner for Values & Transparency: “the Commission proposed the method or the rule that even encrypted messaging can be broken for the sake of better protecting children”

Věra Jourová on X, some time later:

Let me clarify one thing about our draft law to detect online child sexual abuse #CSAM.

Our proposal is not breaking encryption. Our proposal preserves privacy and any measures taken need to be in line with EU privacy laws.

Matthew Green on X:

Coming back to the initial question: does installing surveillance software on every phone “break encryption”? The scientist in me squirms at the question. But if we rephrase as “does this proposal undermine and break the *protections offered by encryption*”: absolutely yes.

Maïthé Chini, the Brussels Times:

It was known that the qualified majority required to approve the proposal would be very small, particularly following the harsh criticism of privacy experts on Wednesday and Thursday.


“[On Thursday morning], it soon became clear that the required qualified majority would just not be met. The Presidency therefore decided to withdraw the item from today’s agenda, and to continue the consultations in a serene atmosphere,” a Belgian EU Presidency source told The Brussels Times.

That is a truncated history of this piece of legislation: regulators want platform operators to detect and report CSAM; platforms and experts say that will conflict with security and privacy promises, even if media is scanned prior to encryption. This proposal may be specific to the E.U., but you can find similar plans to curtail or invalidate end-to-end encryption around the world:

I selected English-speaking areas because that is the language I can read, but I am sure there are more regions facing threats of their own.

We are not served by pretending this threat is limited to any specific geography. The benefits of end-to-end encryption are being threatened globally. The E.U.’s attempt may have been pushed aside for now, but another will rise somewhere else, and then another. It is up to civil rights organizations everywhere to continue arguing for the necessary privacy and security protections offered by end-to-end encryption.

Apple Says It Will Prevent E.U. Users From Accessing Select New Features, Including Apple Intelligence, Until It Has Achieved DMA Compliance

Javier Espinoza and Michael Acton, Financial Times:

Apple has warned that it will not roll out the iPhone’s flagship new artificial intelligence features in Europe when they launch elsewhere this year, blaming “uncertainties” stemming from Brussels’ new competition rules.

This article carries the headline “Apple delays European launch of new AI features due to EU rules”, but it is not clear to me these features are “delayed” in the E.U. or that they would “launch elsewhere this year”. According to the small text in Apple’s WWDC press release, these features “will be available in beta […] this fall in U.S. English”, with “additional languages […] over the course of the next year”. This implies the A.I. features in question will only be available to devices set to U.S. English, and acting upon text and other data also in U.S. English.

To be fair, this is a restriction of language, not geography. Someone in France or Germany could still want to play around with Apple Intelligence stuff even if it is not very useful with their mostly non-English data. Apple is saying they will not be able to. It aggressively region-locks alternative app marketplaces to Europe and, I imagine, will use the same infrastructure to keep users out of these new features.

There is an excerpt from Apple’s statement in this Financial Times article explaining which features will not launch in Europe this year: iPhone Mirroring, better screen sharing with SharePlay, and Apple Intelligence. Apple provided a fuller statement to John Gruber. This is the company’s explanation:

Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.

Apple does not explain specifically how these features run afoul of the DMA — or why it would not or could not build them to clearly comply with the DMA — so this could be fear-mongering, but I will assume it is a good-faith effort at compliance in the face of possible ambiguity. I am not sure Apple has earned the benefit of the doubt, but that is a different matter.

It seems like even the possibility of lawbreaking has made Apple cautious — and I am not sure why that is seen as an inherently bad thing. This is one of the world’s most powerful corporations, and the products and services it rolls out impact a billion-something people. That position deserves significant legal scrutiny.

I was struck by something U.S. FTC chair Lina Khan said in an interview at a StrictlyVC event this month:

[…] We hear routinely from senior dealmakers, senior antitrust lawyers, who will say pretty openly that as of five or six or seven years ago, when you were thinking about a potential deal, antitrust risk or even the antitrust analysis was nowhere near the top of the conversation, and now it is up front and center. For an enforcer, if you’re having companies think about that legal issue on the front end, that’s a really good thing because then we’re not going to have to spend as many public resources taking on deals that we believe are violating the laws.

Now that competition laws are being enforced, businesses have to think about them. That is a good thing! I get a similar vibe from this DMA response. It is much newer than antitrust laws in both the U.S. and E.U. and there are things about which all of the larger technology companies are seeking clarity. But it is not an inherently bad thing to have a regulatory layer, even if it means delays.

Is that not Apple’s whole vibe, anyway? It says it does not rush into things. It is proud of withholding new products until it feels it has gotten them just right. Perhaps you believe corporations are a better judge of what is acceptable than a regulatory body, but the latter serves as a check on the behaviour of the former.

Apple is not saying Europe will not get these features at all. It is only saying it is not sure it has built them in a DMA-compliant way. We do not know anything more about why that is the case at this time, and it does not make sense to speculate further until we do.

On Robots and Text

After Robb Knight found — and Wired confirmed — Perplexity summarizes websites which have followed its opt out instructions, I noticed a number of people making a similar claim: this is nothing but a big misunderstanding of the function of controls like robots.txt. A Hacker News comment thread contains several versions of these two arguments:

  • robots.txt is only supposed to affect automated crawling of a website, not explicit retrieval of an individual page.

  • It is fair to use a user agent string which does not disclose automated access because this request was not automated per se, as the user explicitly requested a particular page.

That is, publishers should expect the controls provided by Perplexity to apply only to its indexing bot, not a user-initiated page request. At the risk of being the kind of person who replies to pseudonymous comments on Hacker News, I think this is an unnecessarily absolutist reading of how site owners expect the Robots Exclusion Protocol to work.

To be fair, that protocol was published in 1994, well before anyone had to worry about websites being used as fodder for large language model training. And, to be fairer still, it has never been formalized. A spec was only recently proposed in September 2022. It has so far been entirely voluntary, but the draft standard proposes a more rigid expectation that rules will be followed. Yet it does not differentiate between different types of crawlers — those for search, others for archival purposes, and ones which power the surveillance economy — and contains no mention of A.I. bots. Any non-human means of access is expected to comply.
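For a sense of how a compliant crawler is expected to behave under that protocol, here is a minimal sketch using Python's standard `urllib.robotparser`. The rules, bot names, and URL are illustrative — they are not taken from any real site's robots.txt — though `PerplexityBot` is the user agent token Perplexity documents:

```python
from urllib import robotparser

# A hypothetical robots.txt disallowing one crawler site-wide while
# allowing everything else. Publishers like Wired and the Times publish
# rules in this same format.
rules = """
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(rules)

# A compliant crawler checks before every fetch and walks away on False.
print(parser.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```

Note that nothing enforces this check: the protocol works only because the crawler volunteers to run it, which is the crux of the complaint against Perplexity.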

The question seems to be whether what Perplexity is doing ought to be considered crawling. It is, after all, responding to a direct retrieval request from a user. This is subtly different from how a user might search Google for a URL, in which case they are asking whether that site is in the search engine’s existing index. Perplexity is ostensibly following real-time commands: go fetch this webpage and tell me about it.

But it clearly is also crawling in a more traditional sense. The New York Times and Wired both disallow PerplexityBot, yet I was able to ask it to summarize a set of recent stories from both publications. At the time of writing, the Wired summary is about seventeen hours old, and the Times summary is about two days old. Neither publication has changed its robots.txt directives recently; they were both blocking Perplexity last week, and they are blocking it today. Perplexity is not fetching these sites in real-time as a human or web browser would. It appears to be scraping sites which have explicitly said that is something they do not want.

Perplexity should be following those rules and it is shameful it is not. But what if you ask for a real-time summary of a particular page, as Knight did? Is that something which should be identifiable by a publisher as a request from Perplexity, or from the user?

The Robots Exclusion Protocol may be voluntary, but a more robust method is to block bots by detecting their user agent string. Instead of expecting visitors to abide by your “No Homers Club” sign, you are checking IDs. But these strings are unreliable and there are often good reasons for evading user agent sniffing.
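This is roughly the approach Knight took with nginx. A hypothetical sketch of such a server-side block might look like the following — the bot names are illustrative, and a real configuration would live inside a `server` block:

```nginx
# Refuse requests whose User-Agent header claims to be a known A.I.
# crawler. Case-insensitive match; returns 403 Forbidden.
if ($http_user_agent ~* (PerplexityBot|GPTBot|CCBot)) {
    return 403;
}
```

Of course, this only works when the bot tells the truth about itself — which is exactly what Perplexity's traffic was not doing.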

Perplexity says its bot is identifiable by both its user agent and the IP addresses from which it operates. Remember: this whole controversy is that it sometimes discloses neither, making it impossible to differentiate Perplexity-originating traffic from a real human being — and there is a difference.

A webpage being rendered through a web browser is subject to the quirks and oddities of that particular environment — ad blockers, Reader mode, screen readers, user style sheets, and the like — but there is a standard. A webpage being rendered through Perplexity is actually being reinterpreted and modified. The original text of the page is transformed through automated means about which neither the reader nor the publisher has any understanding.

This is true even if you ask it for a direct quote. I asked for a full paragraph of a recent article and it mashed together two separate sections. They are direct quotes, to be sure, but the article must have been interpreted to generate this excerpt.1

It is simply not the case that requesting a webpage through Perplexity is akin to accessing the page via a web browser. It is more like automated traffic — even if it is being guided by a real person.

The existing mechanisms for restricting the use of bots on our websites are imperfect and limited. Yet they are the only tools we have right now to opt out of participating in A.I. services if that is something one wishes to do, short of putting pages or an entire site behind a user name and password. It is completely reasonable for someone to assume their signal of objection to any robotic traffic ought to be respected by legitimate businesses. The absolute least Perplexity can do is respect those objections by clearly and consistently identifying itself, and by excluding websites which have indicated they do not want to be accessed by these means.

  1. I am not presently blocking Perplexity, and my argument is not related to its ability to access the article. I am only illustrating how it reinterprets text. ↥︎

Perplexity Is a Bullshit Machine

Dhruv Mehrotra and Tim Marchman, of Wired, were able to confirm Robb Knight’s finding that Perplexity ignores the very instructions it gives website owners to opt out of scraping. And there is more:

The WIRED analysis also demonstrates that despite claims that Perplexity’s tools provide “instant, reliable answers to any question with complete sources and citations included,” doing away with the need to “click on different links,” its chatbot, which is capable of accurately summarizing journalistic work with appropriate credit, is also prone to bullshitting, in the technical sense of the word.

I had not played around with Perplexity very much, but I tried asking it “what is the bullshit web?”. Its summaries in response to prompts with and without a question mark are slightly different but there is one constant: it does not cite my original article, only a bunch of (nice) websites which linked to or reblogged it.

A.I. Cannot Fix What Automation Already Broke

Takeshi Narabe, the Asahi Shimbun:

SoftBank Corp. announced that it has developed voice-altering technology to protect employees from customer harassment.

The goal is to reduce the psychological burden on call center operators by changing the voices of complaining customers to calmer tones.

The company launched a study on “emotion canceling” three years ago, which uses AI voice-processing technology to change the voice of a person over a phone call.

Penny Crosman, the American Banker:

Call center agents who have to deal with angry or perplexed customers all day tend to have through-the-roof stress levels and a high turnover rate as a result. About 53% of U.S. contact center agents who describe their stress level at work as high say they will probably leave their organization within the next six months, according to CMP Research’s 2023-2024 Customer Contact Executive Benchmarking Report.

Some think this is a problem artificial intelligence can fix. A well-designed algorithm could detect the signs that a call center rep is losing it and do something about it, such as send the rep a relaxing video montage of photos of their family set to music.

Here we have examples from two sides of the same problem: working in a call centre sucks because dealing with usually angry, frustrated, and miserable customers sucks. The representative probably understands why some corporate decision made the customer angry, frustrated, and miserable, but cannot really do anything about it.

So there are two apparent solutions here — the first reconstructs a customer’s voice in an effort to make them sound less hostile, and the second shows call centre employees a “video montage” of good memories as an infantilizing calming measure.

Brian Merchant wrote about the latter specifically, but managed to explain why both illustrate the problems created by how call centres work today:

If this showed up in the b-plot of a Black Mirror episode, we’d consider it a bit much. But it’s not just the deeply insipid nature of the AI “solution” being touted here that gnaws at me, though it does, or even the fact that it’s a comically cynical effort to paper over a problem that could be solved by, you know, giving workers a little actual time off when they are stressed to the point of “losing it”, though that does too. It’s the fact that this high tech cost-saving solution is being used to try to fix a whole raft of problems created by automation in the first place.

A thoughtful exploration of how A.I. is really being used which, combined with the previously linked item, does not suggest a revolution for anyone involved. It looks more like a cheap patch on society’s cracking dam.

McDonald’s Is Ending Its Drive-Through A.I. Test

Jonathan Maze, Restaurant Business Online:

McDonald’s is ending its two-year-old test of drive-thru, automated order taking (AOT) that it has conducted with IBM and plans to remove the technology from the more than 100 restaurants that have been using it.


McDonald’s has taken a deliberative approach on drive-thru AI even as many other restaurant chains have jumped fully on board. Checkers and Rally’s, Hardee’s, Carl’s Jr., Krystal, Wendy’s, Dunkin and Taco Johns are either testing or have implemented the technology in its drive-thrus.

Some of those chains “fully on board” with A.I. order-taking are customers of Presto which, according to reporting last year in Bloomberg, relied on outsourced workers in the Philippines for roughly 70% of the orders processed through its “A.I.” system. In a more recent corporate filing, human intervention has fallen to 54% of orders at “select locations” where Presto has launched what it calls its “most advanced version of [its] A.I. technology”. However, that improvement only applies to 55 of 202 restaurant locations where Presto is used. It does not say in that filing how many orders need human intervention at the other 147 locations.

Perhaps I am being unfair. Any advancements in A.I. are going to start off rocky, and will take a while to improve. They will understandably be mired in controversy, too. I am fond of how Cory Doctorow put it:

[…] their [A.I. vendors’] products aren’t anywhere near good enough to do your job, but their salesmen are absolutely good enough to convince your boss to fire you and replace you with an AI model that totally fails to do your job.

We can choose to create a world where even the smallest expressions of human creativity in our work are ceded to technology — or we can choose not to. I am not a doomsday person about A.I.; I have found it sometimes useful in home and work contexts. But I am not buying the hype either. The problem is that I think Doctorow might be right: the people making decisions may hold their nose over any concerns they could have about trust as they realize how much more productive someone can be when they no longer have to think so much, and how much less they can be paid. And then whatever standards we have for good enough fall off a cliff.

But the McDonald’s experiment is probably just silly.

Nvidia Is the World’s Most Valuable Bubb— Sorry, Company

Kif Leswing, CNBC:

Nvidia, long known in the niche gaming community for its graphics chips, is now the most valuable public company in the world.


Nvidia shares are up more than 170% so far this year, and went a leg higher after the company reported first-quarter earnings in May. The stock has multiplied by more than ninefold since the end of 2022, a rise that’s coincided with the emergence of generative artificial intelligence.

I know computing is math — even drawing realistic pictures really fast — but it is so funny to me that Nvidia’s products have become so valuable for doing applied statistics instead of for actual graphics work.

Gender Discrimination Lawsuit Filed Against Apple

Patrick McGee, Financial Times, August 2022:

In interviews with 15 female Apple employees, both current and former, the Financial Times has found that Mohr’s frustrating experience with the People group has echoes across at least seven Apple departments spanning six US states.

The women shared allegations of Apple’s apathy in the face of misconduct claims. Eight of them say they were retaliated against, while seven found HR to be disappointing or counterproductive.

Ashley Belanger, Ars Technica, last week:

Apple has spent years “intentionally, knowingly, and deliberately paying women less than men for substantially similar work,” a proposed class action lawsuit filed in California on Thursday alleged.


The current class action has alleged that Apple continues to ignore complaints that the company culture fosters an unfair and hostile workplace for women. It’s hard to estimate how much Apple might owe in back pay and other damages should women suing win, but it could easily add up if all 12,000 class members were paid thousands less than male counterparts over the complaint’s approximately four-year span. Apple could also be on the hook for hundreds in civil penalties per class member per pay period between 2020 and 2024.

I pulled the 2022 Financial Times investigation into this because one of the plaintiffs in the lawsuit filed last week also alleges sexual harassment by a colleague which was not adequately addressed.

Stephen Council, SFGate:

The lawyer said that asking women about pay expectations “locks” past pay discrimination in and that the requirements of a job should determine pay. Finberg isn’t new to the fight over tech pay; he represented employees suing Oracle and Google for gender-based pay discrimination, securing $25 million and $118 million settlements, respectively.

Last year, Apple paid $25 million to settle claims it discriminated in U.S. hiring in favour of people whose ability to remain in the U.S. depended on their employment status.

Adobe Codifies Pledge Not to Train A.I. on Customer Data

Ina Fried, Axios:

Adobe on Tuesday updated its terms of service to make explicit that it won’t train AI systems using customer data.

The move follows an uproar over largely unrelated changes Adobe made in recent days to its terms of service — which contained wording that some customers feared was granting Adobe broad rights to customer content.

Again, I must ask whether businesses are aware of how little trust there currently is in technology firms’ A.I. use. People misinterpret legal documents all the time — a minor consequence of how we have normalized signing a non-negotiable contract every time we create a new account. Most people are not equipped to read and comprehend the consequences of those contracts, and it is unsurprising they can assume the worst.

U.S. Federal Trade Commission Sues Adobe Over Subscription Practices

The U.S. Federal Trade Commission:

The Federal Trade Commission is taking action against software maker Adobe and two of its executives, Maninder Sawhney and David Wadhwani, for deceiving consumers by hiding the early termination fee for its most popular subscription plan and making it difficult for consumers to cancel their subscriptions.

A federal court complaint filed by the Department of Justice upon notification and referral from the FTC charges that Adobe pushed consumers toward the “annual paid monthly” subscription without adequately disclosing that cancelling the plan in the first year could cost hundreds of dollars. Wadhwani is the president of Adobe’s digital media business, and Sawhney is an Adobe vice president.

The inclusion of two Adobe executives as co-defendants is notable, though not entirely unique — in September, the FTC added three executives to its complaint against Amazon, a move a judge recently upheld.

The contours of the case itself bear similarities to the Amazon Prime one, too. In both cases, customers are easily coerced into subscriptions which are difficult to cancel. Executives were aware of customer complaints, according to the FTC, yet they allegedly allowed or encouraged these practices. But there are key differences between these cases as well. Amazon Prime is a monthly cancel-anytime subscription — if you can navigate the company’s deliberately confusing process. Adobe, on the other hand, offers three ways to pay for many of its products: on a monthly basis which can be cancelled at any time, on an annual basis, or on a monthly basis locked into an annual contract. However, it predominantly markets its products with the latter option, and preselects it when subscribing. That is where the pain begins.

The difficulty and cost of cancelling an Adobe subscription is legendary. It is right up there with gyms for how badly it treats its customers. It has designed a checkout process that defaults people into an annual contract, and a cancellation workflow which makes extricating oneself from that contract tedious, time-consuming, and expensive. If Adobe wanted to make it obvious what users were opting into at checkout, and easy for them to end a subscription, it could have designed those screens in that way. Adobe did not.

Perplexity A.I. Is Lying About Its User Agent

Robb Knight blocked various web scrapers via robots.txt and through nginx. Yet Perplexity seemed to be able to access his site:

I got a perfect summary of the post including various details that they couldn’t have just guessed. Read the full response here. So what the fuck are they doing?


Before I got a chance to check my logs to see their user agent, Lewis had already done it. He got the following user agent string which certainly doesn’t include PerplexityBot like it should: […]

I am sure Perplexity will respond to this by claiming it was inadvertent, and it has fixed the problem, and it respects publishers’ choices to opt out of web scraping. What matters is how we have only a small amount of control over how our information is used on the web. It defaults to open and public — which is part of the web’s brilliance, until the audience is no longer human.

Unless we want to lock everything behind a login screen, the only mechanisms for control that we have are dependent on companies like Perplexity being honest about their bots. There is no chance this problem only affects the scraping of a handful of independent publishers; this is certainly widespread. Without penalty or legal reform, A.I. companies have little incentive not to do exactly the same as Perplexity.

Clearview Class Action Settlement Proposal Would Make Investors Out of Victims

Kashmir Hill, New York Times:

A facial recognition start-up [Clearview AI], accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database.

This is an awful move by an awful company. It turns U.S. victims of its global privacy invasion into people who are invested and complicit in its success.

Microsoft Delays Launch of Recall

Pavan Davuluri, of Microsoft:

Today, we are communicating an additional update on the Recall (preview) feature for Copilot+ PCs. Recall will now shift from a preview experience broadly available for Copilot+ PCs on June 18, 2024, to a preview available first in the Windows Insider Program (WIP) in the coming weeks. Following receiving feedback on Recall from our Windows Insider Community, as we typically do, we plan to make Recall (preview) available for all Copilot+ PCs coming soon.

Microsoft has always struggled to name its products coherently, but Microsoft Copilot+ PCs with Recall (preview) available first through the Windows Insider Program (WIP) has to take the cake. Absolute gibberish.

Anyway, it is disappointing to see Microsoft botch the announcement of this feature so badly. Investors do not seem to care about how untrustworthy the company is because, let’s face it, how many corporations big and small are going to abandon Windows and Office? As long as its leadership keeps saying the right things, it can sit comfortably in the afterglow of its A.I. transformation.

Sponsor: Magic Lasso Adblock: Incredibly Private and Secure Safari Web Browsing

Online privacy isn’t just something you should be hoping for — it’s something you should expect. You should ensure your browsing history stays private and is not harvested by ad networks.

By blocking ad trackers, Magic Lasso Adblock stops you being followed by ads around the web.

Screenshot of Magic Lasso Adblock

It’s a native Safari content blocker for your iPhone, iPad, and Mac that’s been designed from the ground up to protect your privacy.

Rely on Magic Lasso Adblock to:

  • Remove ad trackers, annoyances and background crypto-mining scripts

  • Browse common websites 2.0× faster

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

So, join over 300,000 users and download Magic Lasso Adblock today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

The Three C’s of Data Participation in the Age of A.I.

Eryk Salvaggio, Tech Policy Press:

People are growing ever more frustrated by the intrusiveness of tech. This frustration feeds a cycle of fear that can be quickly dismissed, but doing so strikes me as either foolish or cynical. I am not a lawyer, but lately I have been in a lot of rooms with lawyers discussing people’s rights in the spheres of art and AI. One of the things that has come up recently is the challenge of translating oftentimes unfiltered feelings about AI into a legal framework.


I would never claim to speak to the concerns of everyone I’ve spoken with about AI, but I have made note of a certain set of themes. I understand these as three C’s for data participation: Context, Consent, and Control.

This is a thoughtful essay about what it means for creation to be public, and the imbalanced legal architecture covering appropriation and reuse. I bet many people feel this in their gut — everything is a remix, yet there are vast differences between how intellectual property law deals with individuals compared to businesses.

If I were creating music by hand which gave off the same vibes as another artist, I would be worried about a resulting lawsuit, even if I did not stray into the grey area of sampling. And I would have to obtain everything legally — if I downloaded a song off the back of a truck, so to speak, I would be at risk of yet more legal jeopardy, even if it was for research or commentary. Yet an A.I. company can scrape all the music that has ever been published to the web, and create a paid product that will reproduce any song or artist you might like without credit or compensation; they are arguing this is fair use.

This does not seem like a fair situation, and it is not one that will be remedied by making copyright more powerful. I appreciated Salvaggio’s more careful assessment.

ProPublica: Microsoft Refused to Fix Flaw Years Before SolarWinds Hack

Renee Dudley and Doris Burke, reporting for ProPublica which is not, contrary to the opinion of one U.S. Supreme Court jackass justice, “very well-funded by ideological groups” bent on “look[ing] for any little thing they can find, and they try[ing] to make something out of it”, but is instead a distinguished publication of investigative journalism:

Microsoft hired Andrew Harris for his extraordinary skill in keeping hackers out of the nation’s most sensitive computer networks. In 2016, Harris was hard at work on a mystifying incident in which intruders had somehow penetrated a major U.S. tech company.


Early on, he focused on a Microsoft application that ensured users had permission to log on to cloud-based programs, the cyber equivalent of an officer checking passports at a border. It was there, after months of research, that he found something seriously wrong.

This is a deep and meaningful exploration of Microsoft’s internal response to the conditions that created 2020’s catastrophic SolarWinds breach. It seems that both Microsoft and the Department of Justice knew well before anyone else — perhaps as early as 2016 in Microsoft’s case — yet neither did anything with that information. Other things were deemed more important.

Perhaps this was simply a multi-person failure in which dozens of people at Microsoft could not see why Harris’ discovery was such a big deal. Maybe none of them could foresee it actually being exploited in the wild, or there was a failure to communicate some key piece of information. I am a firm believer in Hanlon’s razor.

On the other hand, the deep integration of Microsoft’s entire product line into sensitive systems — governments, healthcare, finance — magnifies any failure. The incompetence of a handful of people at a private corporation should not result in 18,000 infected networks.

Ashley Belanger, Ars Technica:

Microsoft is pivoting its company culture to make security a top priority, President Brad Smith testified to Congress on Thursday, promising that security will be “more important even than the company’s work on artificial intelligence.”

Satya Nadella, Microsoft’s CEO, “has taken on the responsibility personally to serve as the senior executive with overall accountability for Microsoft’s security,” Smith told Congress.


Microsoft did not dispute ProPublica’s report. Instead, the company provided a statement that almost seems to contradict Smith’s testimony to Congress today by claiming that “protecting customers is always our highest priority.”

Microsoft’s public relations staff can say anything they want. But there is plenty of evidence — contemporary and historic — showing this is untrue. Can it do better? I am sure Microsoft employs many intelligent and creative people who desperately want to change this corrupted culture. Will it? Maybe — but for how long is anybody’s guess.

Japan Becomes the Next Region to Mandate Alternative App Stores

The Asahi Shimbun, in a non-bylined report:

The new law designates companies that are influential in four areas: smartphone operating systems, app stores, web browsers and search engines.

The new law will prohibit companies from giving preferential treatment for the operator’s own payment system and from preventing third-party companies from launching new application stores.


The new legislation sets out exceptional rules in cases to protect security, privacy and youth users.

Penalties are 20–30% of Japanese revenue. Japan is one of very few countries in the world where the iPhone’s market share exceeds that of Android phones. I am interested to know if Apple keeps its policies for developers consistent between the E.U. and Japan, or if they will diverge.

BNN Breaking Was an A.I. Sham

“Conspirador Norteño” in January 2023:

BNN (the “Breaking News Network”, a news website operated by tech entrepreneur and convicted domestic abuser Gurbaksh Chahal) allegedly offers independent news coverage from an extensive worldwide network of on-the-ground reporters. As is often the case, things are not as they seem. A few minutes of perfunctory Googling reveals that much of BNN’s “coverage” appears to be mildly reworded articles copied from mainstream news sites. For science, here’s a simple technique for algorithmically detecting this form of copying.
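The linked post details its own method; as a rough illustration of the general idea (this is my sketch, not Norteño’s actual technique), a word n-gram comparison in Python, where a high Jaccard score between a suspect article and an original suggests light rewording:

```python
def word_ngrams(text, n=5):
    # Lowercase word tokens, collected into sliding n-grams of n words.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def copy_score(a, b, n=5):
    # Jaccard similarity of the two texts' n-gram sets. Scores near 1.0
    # indicate near-verbatim copying; mildly reworded articles still share
    # many n-grams and score far above what unrelated articles would.
    ga, gb = word_ngrams(a, n), word_ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)
```

Unrelated stories score near zero; identical text scores 1.0; the interesting cases sit in between.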

Kashmir Hill and Tiffany Hsu, New York Times:

Many traditional news organizations are already fighting for traffic and advertising dollars. For years, they competed for clicks against pink slime journalism — so-called because of its similarity to liquefied beef, an unappetizing, low-cost food additive.

Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy. Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.

See, it is not just humans producing abject garbage; robots can do it, too — and way better. There was a time when newsrooms could be financially stable on display ads. Those days are over for a team of human reporters, even if all they do is rewrite rich guy tweets. But if you only need to pay a skeleton operations staff to ensure the robots continue their automated publishing schedule, well, that becomes a more plausible business venture.

Another thing of note from the Times story:

Before ending its agreement with BNN Breaking, Microsoft had licensed content from the site for, as it does with reputable news organizations such as Bloomberg and The Wall Street Journal, republishing their articles and splitting the advertising revenue.

I have to wonder how much of an impact this co-sign had on the success of BNN Breaking. Syndicated articles on MSN like these are shown in various places on a Windows computer, and are boosted in Bing search results. Microsoft is increasingly dependent on A.I. for editing its MSN portal, with predictable consequences.

“Conspirador Norteño” in April:

The YouTube channel is not the only data point that connects Trimfeed to BNN. A quick comparison of the bylines on BNN’s and Trimfeed’s (plagiarized) articles shows that many of the same names appear on both sites, and several X accounts that regularly posted links to BNN articles prior to April 2024 now post links to Trimfeed content. Additionally, BNN seems to have largely stopped publishing in early April, both on its website and social media, with the Trimfeed website and related social media efforts activating shortly thereafter. It is possible that BNN was mothballed due to being downranked in Google search results in March 2024, and that the new Trimfeed site is an attempt to evade Google’s decision to classify Trimfeed’s predecessor as spam.

The Times reporters definitively linked the two and, after doing so, Trimfeed stopped publishing. Its domain, like BNN Breaking, now redirects to BNNGPT, which ostensibly uses proprietary technologies developed by Chahal. Nothing about this makes sense to me and it smells like bullshit.

Dark Mode App Icons

Apple’s Human Interface Guidelines:

[Beginning in iOS 18 and iPadOS 18] People can customize the appearance of their app icons to be light, dark, or tinted. You can create your own variations to ensure that each one looks exactly the way you want. See Apple Design Resources for icon templates.

Design your dark and tinted icons to feel at home next to system app icons and widgets. You can preserve the color palette of your default icon, but be mindful that dark icons are more subdued, and tinted icons are even more so. A great app icon is visible, legible, and recognizable, even with a different tint and background.

Louie Mantia:

Apple’s announcement of “dark mode” icons has me thinking about how I would approach adapting “light mode” icons for dark mode. I grabbed 12 icons we made at Parakeet for our clients to illustrate some ways of going about it.

I appreciated this deep exploration of different techniques for adapting alternate icon appearances. Obviously, two days into the first preview build of a new operating system is not the best time to adjudicate its updates. But I think it is safe to say a quality app from a developer that cares about design will want to supply a specific dark mode icon instead of relying upon the system-generated one. Any icon with more detail than a glyph on a background will benefit.

Also, now that there are two distinct appearances, I think it would be great if icons which are very dark also had lighter alternates, where appropriate.

Rich Idiot Tweets

Jason Koebler, 404 Media:

Monday, Elon Musk tweeted a thing about Apple’s marketing event, an act that took Musk three seconds but then led to a large portion of the dwindling number of employed human tech journalists to spring into action and collectively spend many hours writing blogs about What This Thing That Probably Won’t Happen All Means.

Karl Bode, Techdirt:

Journalists are quick to insist that it’s their noble responsibility to cover the comments of important people. But journalism is about informing and educating the public, which isn’t accomplished by redirecting limited journalistic resources to cover platform bullshit that means nothing and will result in nothing meaningful. All you’ve done is made a little money wasting people’s time.

The speed at which some publishers insist these “articles” are posted combined with a lack of constraints in airtime or physical paper means the loudest people know they can draw attention by posting deranged nonsense. All those people who got into journalism because they thought they could make a difference are instead cajoled into adding something resembling substance to forty-four tweeted words from the fingers of a dipshit.

Apple Intelligence

Daniel Jalkut, last month:

Which leads me to my somewhat far-fetched prediction for WWDC: Apple will talk about AI, but they won’t once utter the letters “AI”. They will allude to a major new initiative, under way for years within the company. The benefits of this project will make it obvious that it is meant to serve as an answer to comparable efforts being made by OpenAI, Microsoft, Google, and Facebook. During the crescendo to announcing its name, the letters “A” and “I” will be on all of our lips, and then they’ll drop the proverbial mic: “We’re calling it Apple Intelligence.” Get it?


Apple, in a press release:

Apple today introduced Apple Intelligence, the personal intelligence system for iPhone, iPad, and Mac that combines the power of generative models with personal context to deliver intelligence that’s incredibly useful and relevant. Apple Intelligence is deeply integrated into iOS 18, iPadOS 18, and macOS Sequoia. It harnesses the power of Apple silicon to understand and create language and images, take action across apps, and draw from personal context to simplify and accelerate everyday tasks. With Private Cloud Compute, Apple sets a new standard for privacy in AI, with the ability to flex and scale computational capacity between on-device processing and larger, server-based models that run on dedicated Apple silicon servers.

To Apple’s credit, the letters “A.I.” were only enunciated a handful of times during its main presentation today, far less often than I had expected. Mind you, in sixty-odd places, “A.I.” was instead referred to by the branded “Apple Intelligence” moniker which is also “A.I.” in its own way. I want half-right points.

There are several concerns with features like these, and Apple answered two of them today: how it was trained, and the privacy and security of user data. The former was not explained during today’s presentation, nor in its marketing materials and developer documentation. But it was revealed by John Giannandrea, senior vice president of Machine Learning and A.I. Strategy, in an afternoon question-and-answer session hosted by Justine Ezarik, as live-blogged by Nilay Patel at the Verge:1

What have these models actually been trained on? Giannandrea says “we start with the investment we have in web search” and start with data from the public web. Publishers can opt out of that. They also license a wide amount of data, including news archives, books, and so on. For diffusion models (images) “a large amount of data was actually created by Apple.”

If publishers wish to opt out of Apple’s training models but continue to permit crawling for things like Siri and Spotlight, they should add a disallow rule for Applebot-Extended. Because of Apple’s penchant for secrecy, that usage control was not added until today. That means a site may have been absorbed into training data unless its owners opted out of all Applebot crawling. Hard to decline participating in something you do not even know about.
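In practice, that opt-out is an ordinary robots.txt rule. A minimal example, using Apple’s documented crawler tokens (the plain Applebot section is shown only for contrast; sites that omit it are still crawled normally):

```
# Block use of crawled pages for training Apple's models…
User-agent: Applebot-Extended
Disallow: /

# …while still permitting regular Applebot crawling for Siri and Spotlight.
User-agent: Applebot
Allow: /
```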

Additionally, in April, Katie Paul and Anna Tong reported for Reuters that Apple struck a licensing agreement with Shutterstock for image training purposes.

Apple is also, unsurprisingly, heavily promoting the privacy and security policies it has in place. It noted some of these attributes in its presentation — including some auditable code and data minimization — and elaborated on Private Cloud Compute on its security blog:

With services that are end-to-end encrypted, such as iMessage, the service operator cannot access the data that transits through the system. One of the key reasons such designs can assure privacy is specifically because they prevent the service from performing computations on user data. Since Private Cloud Compute needs to be able to access the data in the user’s request to allow a large foundation model to fulfill it, complete end-to-end encryption is not an option. Instead, the PCC compute node must have technical enforcement for the privacy of user data during processing, and must be incapable of retaining user data after its duty cycle is complete.


  • User data is never available to Apple — even to staff with administrative access to the production service or hardware.

Apple can make all the promises it wants, and it appears it does truly want to use generative A.I. in a more responsible way. For example, the images you can make using Image Playground cannot be photorealistic and — at least for those shown so far — are so strange you may avoid using them. Similarly, though I am not entirely sure, it seems plausible the query system is designed to be more private and secure than today’s Siri.

Yet, as I wrote last week, users may not trust any of these promises. Many of these fears are logical: people are concerned about the environment, creative practices, and how their private information is used. But some are more about the feel of it — and that is okay. Even if all the training data were fully licensed and user data is as private and secure as Apple says, there is still an understandable ick factor for some people. The way companies like Apple, Google, and OpenAI have trained their A.I. models on the sum of human creativity represents a huge imbalance of power, and the only way to control Apple’s public data use was revealed yesterday. Many of the controls Apple has in place are policies which can be changed.

Consider how, so far as I can see, there will be no way to know for certain if your Siri query is being processed locally or by Apple’s servers. You do not know that today when using Siri, though you can infer it based on what you are doing and if something does not work when Apple’s Siri service is down. It seems likely that will be the case with this new version, too.

Then there are questions about the ethos of generative intelligence. Apple has long positioned its products as tools which enable people to express themselves creatively. Generative models have been pitched as almost the opposite: now, you do not have to pay for someone’s artistic expertise. You can just tell a computer to write something and it will do so. It may be shallow and unexciting, but at least it was free and near-instantaneous. Apple notably introduced its set of generative services only a month after it embarrassed itself by crushing analogue tools into an iPad. Happily, it seems this first set of generative features is more laundry and less art — making notifications less intrusive, categorizing emails, making Siri not-shit. I hope I can turn off things like automatic email replies.

You will note my speculative tone. That is because Apple’s generative features have not been made available yet, including in developer beta builds of its new operating system. None of us have any idea how useful these features are, nor what limitations they have. All we can see are Apple’s demonstrations and the metrics it has shared. So, we will see how any of this actually pans out. I have been bamboozled by this same corporation making similar promises before.

“May you live in interesting times”, indeed.

  1. The Verge’s live blog does not have per-update permalinks so you will need to load all the messages and find this for yourself. ↥︎

Research as Leisure Activity

Celine Nguyen:

But this isn’t really about the software. It’s about what software promises us — that it will help us become who we want to be, living the lives we find most meaningful and fulfilling. The idea of research as leisure activity has stayed with me because it seems to describe a kind of intellectual inquiry that comes from idiosyncratic passion and interest. It’s not about the formal credentials. It’s fundamentally about play. It seems to describe a life where it’s just fun to be reading, learning, writing, and collaborating on ideas.

This is a wonderful essay, albeit one which leaves me with a question of how a reader distinguishes between an amateur’s interpretation of what they read, and an expert’s more considered exploration of a topic — something I have wondered about before.

The amateur or non-professional has their place, of course; I am staking mine on this very website. The expert may not always be correct. But adjudicating the information from each is not a realistic assignment for a layperson. Consider the vast genre of multi-hour YouTube essays, or even short but seemingly authoritative TikTok digests of current events. We are ingesting more information than ever before with fewer gatekeepers — for good and otherwise.

Sponsor: Magic Lasso Adblock: 2.0× Faster Web Browsing in Safari

Want to experience twice as fast load times in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock — the ad blocker designed for you. It’s easy to set up, blocks all ads, and doubles the speed at which Safari loads.

Screenshot of Magic Lasso Adblock

Magic Lasso Adblock is an efficient and high performance ad blocker for your iPhone, iPad, and Mac. It simply and easily blocks all intrusive ads, trackers and annoyances in Safari. Just enable to browse in bliss.

By cutting down on ads and trackers, common news websites load 2× faster and use less data.

Over 300,000 users rely on Magic Lasso Adblock to:

  • Improve their privacy and security by removing ad trackers

  • Block annoying cookie notices and privacy prompts

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

Download today via the Magic Lasso website.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Justin Trudeau on ‘Hard Fork’

Canadian Prime Minister Justin Trudeau appeared on the New York Times’ “Hard Fork” podcast for a discussion about artificial intelligence, election security, TikTok, and more.

I have to agree with Aaron Vegh:

[…] I loved his messaging on Canada’s place in the world, which is pragmatic and optimistic. He sees his job as ambassador to the world, and he plays the role well.

I just want to pull some choice quotes from the episode that highlight what I enjoyed about Trudeau’s position on technology. He’s not merely well-briefed; he clearly takes an interest in the technology, and has a canny instinct for its implications in society.

I understand Trudeau’s appearance serves as much to promote his government’s efforts in A.I. as it does to communicate any real policy positions — take a sip every time Trudeau mentions how we “need to have a conversation” about something. But I also think co-hosts Kevin Roose and Casey Newton were able to get a real sense of how the Prime Minister thinks about A.I. and Canada’s place in the global tech industry.

Anti Trust in Tech

If you had just been looking at the headlines from major research organizations, you would see a lack of confidence from the public in big business, technology companies included. For years, poll after poll from around the world has found high levels of distrust in their influence, handling of private data, and new developments.

If these corporations were at all worried about this, they are not showing it in their products — particularly the A.I. stuff they have been shipping. There has been little attempt at abating last year’s trust crisis. Google decided to launch overconfident summaries for a variety of search queries. Far from helping to sift through all that has ever been published on the web to mash together a representative summary, it was instead an embarrassing mess that made the company look ill-prepared for the concept of satire. Microsoft announced a product which will record and interpret everything you do and see on your computer, and framed it as a good thing.

Can any of them see how this looks? If not — if they really are that unaware — why should we turn to them to fill gaps and needs in society? I certainly would not wish to indulge businesses which see themselves as entirely separate from the world.

It is hard to imagine they do not, though. Sundar Pichai, in an interview with Nilay Patel, recognised there were circumstances in which an A.I. summary would be inappropriate, and cautioned that the company still considers it a work in progress. Yet Google still turned it on by default in the U.S. with plans to expand worldwide this year.

Microsoft has responded to criticism by promising Recall will now be a feature users must opt into, rather than something they must turn off after updating Windows. The company also says there are more security protections for Recall data than originally promised but, based on its track record, maybe do not get too excited yet.

These product introductions all look like hubris. Arrogance, really — recognition of the significant power these corporations wield and the lack of competition they face. Google can poison its search engine because where else are most people going to go? How many people would turn off Recall, something which requires foreknowledge of its existence, under Microsoft’s original rollout strategy?

It is more or less an admission they are all comfortable gambling with their customers’ trust to further the perception they are at the forefront of the new hotness.

None of this is a judgement on the usefulness of these features or their social impact. I remain perplexed by the combination of a crisis of trust in new technologies, and the unwillingness of the companies responsible to engage with the public. There seems to be little attempt at persuasion. Instead, we are told to get on board because this rocket ship is taking off with or without us. Concerned? Too bad: the rocket ship is shaped like a giant middle finger.

What I hope we see Monday from Apple — a company which has portrayed itself as more careful and practical than many of its contemporaries — is a recognition of how this feels from outside the industry. Expect “A.I.” to be repeated in the presentation until you are sick of those two letters; investors are going to eat it up. When normal people update their phones in September, though, they should not feel like they are being bullied into accepting our A.I. future.

People need to be given time to adjust and learn. If the polls are representative, very few people trust giant corporations to get this right — understandably — yet these tech companies seem to believe we are as enthusiastic about every change they make as they are. Sorry, we are not, no matter how big a smile a company representative is wearing when they talk about it. Investors may not be patient but many of the rest of us need time.

Screen Time is Buggy

Joanna Stern, Wall Street Journal:

Porn, violent images, illicit drugs. I could see it all by typing a special string of characters into the Safari browser’s address bar. The parental controls I had set via Apple’s Screen Time? Useless.

Security researchers reported this particular software bug to Apple multiple times over the past three years with no luck. After I contacted Apple about the problem, the company said it would release a fix in the next software update. The bug is a bad one, allowing users to easily circumvent web restrictions, although it doesn’t appear to have been well-known or widely exploited.

It seems lots of parents are frustrated by Screen Time. It is not reliable software but, for privacy reasons, it is hard for third parties to differentiate themselves because they rely on the same framework.


  • Screen usage chart. Want to see your child’s screen usage for the day? The chart is often inaccurate or just blank.

I find this chart is always wildly disconnected from actual usage figures for my own devices. My iMac recently reported a week straight of 24-hour screen-on time per day, including through a weekend when I was out of town, because of a web browser tab I left open in the background.

One could reasonably argue nobody should entirely depend on software to determine how devices are used by themselves or their children, but I do not think many people realistically do. It is part of a combination of factors. Screen Time should perform the baseline functions it promises. It sucks how common problems are basically ignored until Stern writes about them.

The Rise and Fall of Preview

Howard Oakley:

Prior to Mac OS X, Adobe Acrobat, both in its free viewer form and a paid-for Pro version, were the de facto standard for reading, printing and working with PDF documents on the Mac. The Preview app had originated in NeXTSTEP in 1989 as its image and PDF viewer, and was brought across to early versions of Mac OS X, where it has remained ever since.

The slow decline of Preview — and Mac PDF rendering in general — since MacOS Sierra is one of the more heartbreaking products of Apple’s annual software churn cycle. To be entirely fair, many of the worst bugs have been fixed, but some remain: sometimes, highlights and notes stop working; search is a mess; copying text is unreliable.

Unfortunately, the apps which render PDF files the most predictably and consistently are Adobe Acrobat and Reader. Both became hideous Chromium-based apps at some point and, so, are gigantic packages which behave nothing like Mac software. It is all pretty disappointing.

Update: A Hacker News commenter rightly pointed out that Acrobat and Reader are not truly Electron apps, and are instead Chromium-based apps. That is to say both are generic-brand shitty instead of the name-brand stuff.

Meta’s Big Squeeze

Ashley Belanger, reporting for Ars Technica in July 2022 in what I will call “foreshadowing”:

Despite all the negative feedback [over then-recent Instagram changes], Meta revealed on an earnings call that it plans to more than double the number of AI-recommended Reels that users see. The company estimates that in 2023, about a third of Instagram and Facebook feeds will be recommended content.

Ed Zitron:

In this document [leaked to Zitron], they discuss the term “meaningful interactions,” the underlying metric which (allegedly) guides Facebook today. In January 2018, Adam Mosseri, then Head of News Feed, would post that an update to the News Feed would now “prioritize posts that spark conversations and meaningful interactions between people,” which may explain the chaos (and rot) in the News Feed thereafter.

To be clear, metrics around time spent hung around at the company, especially with regard to video, and Facebook has repeatedly and intentionally made changes to manipulate its users to satisfy them. In his book “Broken Code,” Jeff Horwitz notes that Facebook “changed its News Feed design to encourage people to click on the reshare button or follow a page when they viewed a post,” with “engineers altering the Facebook algorithm to increase how often users saw content reshared from people they didn’t know.”

Zitron, again:

When you look at Instagram or Facebook, I want you to try and think of them less as social networks, and more as a form of anthropological experiment. Every single thing you see on either platform is built or selected to make you spend more time on the app and see more things that Meta wants you to see, be they ads, sponsored content, or suggested groups that you can interact with, thus increasing the amount of your “time spent” on the app, and increasing the amount of “meaningful interactions” you have with content.

Zitron is a little too eager, for my tastes, to treat Meta’s suggestions of objectionable and controversial posts as deliberate. It seems much more likely the company simply sucks at moderating this stuff at scale and is throwing in the towel.

Kurt Wagner, Bloomberg:

In late 2021, TikTok was on the rise, Facebook interactions were declining after a pandemic boom and young people were leaving the social network in droves. Chief Executive Officer Mark Zuckerberg assembled a handful of veterans who’d built their careers on the Big Blue app to figure out how to stop the bleeding, including head of product Chris Cox, Instagram boss Adam Mosseri, WhatsApp lead Will Cathcart and head of Facebook, Tom Alison.

During discussions that spanned several meetings, a private WhatsApp group, and an eventual presentation at Zuckerberg’s house in Palo Alto, California, the group came to a decision: The best way to revive Facebook’s status as an online destination for young people was to start serving up more content from outside a person’s network of friends and family.

Jason Koebler, 404 Media:

At first, previously viral (but real) images were being run through image-to-image AI generators to create a variety of different but plausibly believable AI images. These images repeatedly went viral, and seemingly tricked real people into believing they were real. I was able to identify a handful of the “source” or “seed” images that formed the basis for this type of content. Over time, however, most AI images on Facebook have gotten a lot easier to identify as AI and a lot more bizarre. This is presumably happening because people will interact with the images anyway, or the people running these pages have realized they don’t need actual human interaction to go viral on Facebook.

Sarah Perez, TechCrunch:

Instagram confirmed it’s testing unskippable ads after screenshots of the feature began circulating across social media. These new ad breaks will display a countdown timer that stops users from being able to browse through more content on the app until they view the ad, according to informational text displayed in the Instagram app.

These pieces each seem like they are circling a theme of a company finding the upper bound of its user base, and then squeezing it for activity, revenue, and promising numbers to report to investors. Unlike Zitron, I am not convinced we are watching Facebook die. I think Koebler is closer to the truth: we are watching its zombification.

Inside the Copilot Recall ‘Disaster’

Kevin Beaumont:

At a surface level, it [Recall] is great if you are a manager at a company with too much to do and too little time as you can instantly search what you were doing about a subject a month ago.

In practice, that audience’s needs are a very small (tiny, in fact) portion of Windows userbase — and frankly talking about screenshotting the things people do in the real world, not executive world, is basically like punching customers in the face. The echo chamber effect inside Microsoft is real here, and oh boy… just oh boy. It’s a rare misfire, I think.

Via Eric Schwarz:

The fact that this feature is basically on by default and requires numerous steps to disable is going to create a lot of problems for people, especially those who click through every privacy/permission screen and fundamentally don’t know how their computer actually operates — I’ve counted way too many instances where I’ve had to help people find something and they have no idea where anything lives in their file system (mostly work off the Desktop or Downloads folders). How are they going to even grapple with this?

The problems with Recall remind me of the minor 2017 controversy around “brassiere” search results in Apple’s Photos app. Like Recall, it is entirely an on-device process with some security and privacy protections. In practice, automatically cataloguing all your photos which show a bra is kind of creepy, even if it is being done only with your own images on your own phone.

Shit’s on Fire, Yo!

Deviant Ollam gave a brand new talk at CackalackyCon this year about fire safety standards from a pentesting perspective. It is as entertaining as just about anything you may have seen from Ollam, despite being about two hours long.

Caitlin Dewey Wants to See Your Old Gmail Messages

Caitlin Dewey:

In April, Gmail turned 20; the service is two-thirds as old as I am. “We now have a huge accidental archive of our collective past,” wrote the editors at New York, to mark the occasion.


You have emails like this too, I’d imagine — happy emails and sad ones. Emails lost to time or memory or the unrelenting deluge of other, newer messages. Maybe it’s the first or last email you got from someone you treasure, or an announcement that changed your life, or a conversation you remembered wrong. Whatever forms this sort of long-lost email takes for you, I would love to see them.

If you would like to participate, there are more details in Dewey’s post, or you can visit the Google Form. Obviously, you can also forward messages to Dewey’s address directly, because if this project did not have a Gmail address, it would be a shame.

See Also: UIs with accidental memories, previously linked.

Two TikTok Updates

Drew Harwell, Washington Post:

But the extent to which the United States evaluated or disregarded TikTok’s proposal, known as Project Texas, is likely to be a core point of dispute in court, where TikTok and its owner, ByteDance, are challenging the sale-or-ban law as an “unconstitutional assertion of power.”

The episode raises questions over whether the government, when presented with a way to address its concerns, chose instead to back an effort that would see the company sold to an American buyer, even though some of the issues officials have warned about — the opaque influence of its recommendation algorithm, the privacy of user data — probably would still be unresolved under new ownership.

You may recognize the deal Harwell is writing about if you read my exploration of the divestment law. While TikTok claimed in its lawsuit (PDF) that the Biden administration was the party responsible for cancelling this deal with CFIUS, I did not see that confirmed anywhere else. Harwell’s reporting appears to support TikTok’s side of events. Still, there is frustratingly little explanation for why the U.S. was unsatisfied with this settlement.

Krystal Hu and Sheila Dang, Reuters:

TikTok is working on a clone of its recommendation algorithm for its 170 million U.S. users that may result in a version that operates independently of its Chinese parent and be more palatable to American lawmakers who want to ban it, according to sources with direct knowledge of the efforts.

The work on splitting the source code ordered by TikTok’s Chinese parent ByteDance late last year predated a bill to force a sale of TikTok’s U.S. operations that began gaining steam in Congress this year. The bill was signed into law in April.

TikTok says this story is “misleading and factually inaccurate” and reiterates that divestiture is, according to them, impossible. But TikTok already began preparing for this eventuality in 2020, so it is hard to believe the company would not want to figure out ways to make this possible should its current lawsuit fail and the law be allowed to stand.

Amazon Executives May Be Personally Liable for Unintentional Prime Registrations

Ashley Belanger, Ars Technica:

But the judge apparently did not find Amazon’s denials completely persuasive. Viewing the FTC’s complaint “in the light most favorable to the FTC,” Judge John Chun concluded that “the allegations sufficiently indicate that Amazon had actual or constructive knowledge that its Prime sign-up and cancellation flows were misleading consumers.”


One such trick that Chun called out saw Amazon offering two-day free shipping with the click of a button at checkout that also signed customers up for Prime even if they didn’t complete the purchase.

“With the offer of Amazon Prime for the purpose of free shipping, reasonable consumers could assume that they would not proceed with signing up for Prime unless they also placed their order,” Chun said, ultimately rejecting Amazon’s claims that all of its “disclosures would be clear and conspicuous to any reasonable consumer.”

This is far from the only instance of scumbag design cited by Chun, and it is bizarre to me that anybody would defend choices like these.

Inner Workings

I have very little to add to Tyler Hall’s idea for revealing per-document settings, other than to say that it is so joyful and it makes complete sense. If you watch one thirty-second design demo today, make it this one.

Battery Replacements Should Be the Easiest Repair for Any Device

Jeff Johnson:

Yesterday I took the M1 MacBook Pro to my local Apple-authorized service provider that I’ve been going to for many years, who performed all of the work on my Intel MacBook Pro, including the battery replacements and a Staingate screen replacement. This is a third-party shop, not an Apple Store. To my utter shock, they told me that they couldn’t replace the battery in-house, because starting with the Apple silicon transition, Apple now requires that the MacBook Pro be mailed in to Apple for battery replacement! What. The. Hell.

The battery in my 14-inch MacBook Pro seems to be doing okay, with 89% capacity remaining after nearly two years of use. But I hope to use it for as long as I did my MacBook Air — about ten years — and I swapped its battery twice. This spooked me. So I called my local third-party repair place and asked them about replacing the battery. They told me they could change it in the store with same-day turnaround for $350, about the same as what Apple charges, using official parts. It is unclear to me if an Apple Store could replace the battery in-store or would need to send it out, but every Mac service I have had from my local Apple Store has required me to leave my computer with them for several days.

The situation likely varies by geography. Apple’s Self Service Repair program is not available in Canada, which means a battery swap has to be done either by a technician, or using unofficial parts. If you are concerned about this, I recommend contacting your local shops and seeing what their policies are like.

In a recent interview with Marques Brownlee, John Ternus, Apple’s head of hardware engineering, compared ease of repair and long-term durability:

On an iPhone, on any phone, a battery is something […] that’s gonna need to be replaced, right? Batteries wear out. But as we’ve been making iPhones for a long time, in the early days, one of the most common types of failures was water ingress, right? Where you drop it in a pool, or you spill your drink on it, and the unit fails. And so we’ve been making strides over all those years to get better and better and better in terms of minimizing those failures.

This is a fair argument. While Apple has not — to my knowledge — acknowledged any improvements to liquid resistance on MacBook Pros, I spilled half a glass of water across mine in November, and it suffered no damage whatsoever. Ternus’ point is that Apple’s solution for preventing liquid damage to all components, including the battery, compromised the ease of repairing an iPhone, but the company saw it as a reasonable trade-off.

But it is also a bit of a red herring for two reasons. The first is that Apple actually made recent iPhone models more repairable without reducing water or dust resistance, indicating this compromise is not exactly as simple as Ternus implies. It is possible to have easier repairs and better durability.

The second reason is because batteries eventually need replacing on all devices. They are a consumable good with a finite — though not always predictable — lifespan, most often shorter than the actual lifetime usability of the product. The only reason I do not use my AirPods any more is because the battery in each bud lasts less than twenty minutes; everything else is functional. If there is any repair which should be straightforward and doable without replacing unrelated components or the entire device, it is the battery.

See Also: The comments on Michael Tsai’s post.

Apple finished naming what it — well, its “team of experts alongside a select group of artists […] songwriters, producers, and industry professionals” — believes are the hundred best albums of all time. Like pretty much every list of the type, it is overwhelmingly Anglocentric, there are obvious picks, surprise appearances good and bad, and snubs.

I am surprised the publication of this list has generated as much attention as it has. There is a whole Wall Street Journal article with more information about how it was put together, a Slate thinkpiece arguing this ranking “proves [Apple has] lost its way”, and a Variety article claiming it is more-or-less “rage bait”.

Frankly, none of this feels sincere. Not Apple’s list, and not the coverage treating it as meaningful art criticism. I am sure there are people who worked hard on it — Apple told the Journal “about 250” — and truly believe their rating carries weight. But it is fluff.

Make no mistake: this is a promotional exercise for Apple Music more than it is criticism. Sure, most lists of this type are also marketing for publications like Rolling Stone and Pitchfork and NME. Yet, for how tepid the opinions of each outlet often are, they have each given out bad reviews. We can therefore infer they have specific tastes and ideas about what separates great art from terrible art.

Apple has never said a record is bad. It has never made you question whether the artist is trying their best. It has never presented criticism so thorough it makes you wince on behalf of the people who created the album.

Perhaps the latter is a poor metric. After Steve Jobs’ death came a river of articles questioning the internal culture he fostered, with several calling him an “asshole”. But that is mixing up a mean streak and a critical eye — Jobs, apparently, had both. A fair critic can use their words to dismantle an entire project and explain why it works or, just as important, why it does not. The latter can hurt; ask any creative person who has been on the receiving end. Yet exploring why something is not good enough is an important skill to develop as both a critic and a listener.

Dan Brooks, Defector:

There has been a lot of discussion about what music criticism is for since streaming reduced the cost of listening to new songs to basically zero. The conceit is that before everything was free, the function of criticism was to tell listeners which albums to buy, but I don’t think that was ever it. The function of criticism is and has always been to complicate our sense of beauty. Good criticism of music we love — or, occasionally, really hate — increases the dimensions and therefore the volume of feeling. It exercises that part of ourselves which responds to art, making it stronger.

There are huge problems with the way music has historically been critiqued, most often along racial and cultural lines. There are still problems. We will always disagree about the fairness of music reviews and reviewers.

Apple’s list has nothing to do with any of that. It does not interrogate which albums are boring, expressionless, uncreative, derivative, inconsequential, inept, or artistically bankrupt. So why should we trust it to explain what is good? Apple’s ranking of albums lacks substance because it cannot say any of these things. Doing so would be a terrible idea for the company and for artists.

It is beyond my understanding why anyone seems to be under the impression this list is anything more than a business reminding you it operates a music streaming platform to which you can subscribe for eleven dollars per month.

Speaking of the app — some time after I complained there was no way in Apple Music to view the list, Apple added a full section, which I found via foursliced on Threads. It is actually not bad. There are stories about each album, all the reveal episodes from the radio show, and interviews.

You will note something missing, however: a way to play a given album. That is, one cannot visit this page in Apple Music, see an album on the list they are interested in, and simply tap to hear it. There are play buttons on the website and, if you are signed in with your Apple Music account, you can add them to your library. But I cannot find a way to do any of this from within the app.

Benjamin Mayo found a list, but I cannot find it through search or simply by browsing. Why is this not a more obvious feature? It makes me feel like a dummy.

B.C. Winemakers Grapple With the Climate Crisis

Paloma Pacheco, the Narwhal:

Just a year after the extreme temperature drop in December 2022, another deep freeze descended on wine growers. For several days in January 2024, temperatures across the Okanagan and Similkameen, as well as in the Thompson Valley to the north, dropped below -25 C from unseasonable daytime highs of 10 to 13 C (Canada’s warmest winter on record). The damage from the previous winter’s cold snap had already resulted in a nearly 60 per cent loss of grape and wine production across the province. For the 2024 harvest, the industry is predicting a 97 to 99 per cent loss from both bud and vine damage. In short: decimation.

I am still in shock over how devastating this single cold snap was for so many Okanagan winemakers. It sounds like they are done grieving and are trying to make the most of it, but it is going to be a difficult few years — at least.

The Deskilling of Web Development

Baldur Bjarnason:

But instead we’re all-in on deskilling the industry. Not content with removing CSS and HTML almost entirely from the job market, we’re now shifting towards the model where devs are instead “AI” wranglers. The web dev of the future will be an underpaid generalist who pokes at chatbot output until it runs without error, pokes at a copilot until it generates tests that pass with some coverage, and ships code that nobody understands and can’t be fixed if something goes wrong.

There are parallels in the history of software development to the various abstractions accumulated in a modern web development stack. Heck, you can find people throughout history bemoaning how younger generations lack some fundamental knowledge since replaced by automation or new technologies. It is always worth a gut-check about whether newer ideas are actually better. In the case of web development, what are we gaining and losing by eventually outsourcing much of it to generative software?

I think Bjarnason is mostly right: if web development becomes accessible by most through layers of A.I. and third-party frameworks, it is abstracted to such a significant extent that it becomes meaningless gibberish. In fairness, the way plain HTML, CSS, and JavaScript work is — to many — meaningless gibberish. It really is better for many people that creating things for the web has become something which does not require a specialized skillset beyond entering a credit card number. But that is distinct from web development. When someone has code-level responsibility, they have an obligation to understand how things work.

How Shein and Temu Snuck Up on Amazon

Louise Matsakis, Big Technology:

Shein and Temu’s users aren’t just browsing. Shein reportedly earned roughly $45 billion last year, and is currently trying to go public. PDD Holdings, Temu’s Chinese parent company, reported earlier this week that its revenue surged more than 130% in the first quarter. PDD is now the most valuable e-commerce company in China.

The two startups are sending so many orders from China to the US that it’s causing air cargo rates to spike, and USPS workers have said publicly that they are overwhelmed by the sheer volume of Temu’s signature bright orange packages they have to deliver. “I’m tired of this Temu shit, ya’ll killing me,” one mailman said in a TikTok video last year with over two million likes. “Everyday it’s Temu, Temu, Temu — I’m Temu tired.”

You might recognize how both Shein and Temu grew using the same tactic as TikTok: relentless advertising. (Which is something Snap CEO Evan Spiegel complained about despite TikTok’s huge spending on Snapchat.)

Both these companies are an aggressive distillation of plentiful supply and low cost to buyers. For people with lower incomes or who are economically stressed, the extreme affordability they offer can be a lifeline. Not everybody who shops with either fits that description; Matsakis cites a UBS report finding an average Shein customer earns $65,000 per year and spends more than $100 per month on clothes. But there are surely plenty of people who shop on both sites — and Amazon — because they simply cannot afford to buy anywhere else.

Every time I think about these retailers, I cannot shake a pervasive sadness. Saddened by how some people in rich countries have been compromised so much they rely on stores they may have moral qualms with. Saddened by the ripple effect of exploitation. Saddened by the environmental cost of producing, shipping, and disposing of these often brittle products — a wasteful exercise for many customers who can afford longer-lasting goods, and the many people who cannot.

Derek Guy has written about the brutality of the garment industry in the U.S., but notes how clearly different these fast and ultra-fast fashion brands are from inexpensive clothing:

Given the opacity in the supply chain, your best single measure for whether something is amiss is price. If you are paying $5 for a cut-and-sewn shirt, something bad is happening. Does this mean that every expensive shirt was ethically made? No. But you know the $5 shirt is bad.

Guy also wrote about the difference between cheap and fast fashion.

This whole industry bums me out because I try to appreciate clothing and fashion. I like finding things I like, dressing a particular way, and putting some effort into how I present myself. Yet every peek behind the curtain is a mountain of waste and abuse, and the worst offenders are companies like Shein and Temu — and, for what it is worth, AliExpress and facilitators like Amazon.

More on That Zombie Photos Bug

The bad news: Apple shipped an alarming bug in iOS 17.5 which sometimes revealed photos previously deleted by the user and, in the process, created a reason for users to mistrust how their data is handled. This was made especially confusing by Apple’s lack of commentary.

The good news: Apple patched the bug within a week. Also, the lone story about deleted photos reappearing on a wiped iPad given to someone else was deleted and seems to be untrue.

The bad news: aside from acknowledging this “rare issue where photos that experienced database corruption could reappear in the Photos library even if they were deleted”, there was still little information about exactly what happened. Users quite reasonably expect things they deleted to stay deleted, and when they do not, they are going to have some questions.

The good news: as I predicted, Apple gave an explanation to 9to5Mac, which generously allowed for it to be on background. Chance Miller:

One question many people had is how images from dates as far back as 2010 resurfaced because of this problem. After all, most people aren’t still using the same devices now as they were in 2010. Apple confirmed to me that iCloud Photos is not to be blamed for this. Instead, it all boils down to the corrupt database entry that existed on the device’s file system itself.

A much more technically-minded answer was provided by Synacktiv, a security firm that reverse-engineered the bug fix release and compared it to the original 17.5 release.

Bugs are only as bad as the effects they have. I heard from multiple readers who said this bug damaged how much they trust iOS and Apple. This sample is self-selecting — I likely would not have heard from people who both experienced this bug and thought it was no big deal. But a normal user who does not read 9to5Mac and found their deleted photos restored is still going to be spooked.

Killing Time for TikTok

Finally. The government of the United States finally passed a law that would allow it to force the sale of, or ban, software and websites from specific countries of concern. The target is obviously TikTok — it says so right in its text — but crafty lawmakers have tried to add enough caveats and clauses and qualifiers to, they hope, avoid it being characterized as a bill of attainder, and to permit future uses. This law is very bad. It is an ineffective and illiberal position that abandons democratic values over, effectively, a single app. Unfortunately, TikTok panic is a very popular position in the U.S. and, also, here in Canada.

The adversaries the U.S. is worried about are the “covered nations” defined in 2018 to restrict the acquisition by the U.S. of key military materials from four countries: China, Iran, North Korea, and Russia. The idea behind this definition was that it was too risky to procure magnets and other important components of, say, missiles and drones from a nation the U.S. considers an enemy, lest those parts be compromised in some way. So the U.S. wrote down its least favourite countries for military purposes, and that list is now being used in a bill intended to limit TikTok’s influence.

According to the law, it is illegal for any U.S. company to make available TikTok and any other ByteDance-owned app — or any app or website deemed a “foreign adversary controlled application” — to a user in the U.S. after about a year unless it is sold to a company outside the covered countries, and with no more than twenty percent ownership stake from any combination of entities in those four named countries. Theoretically, the parent company could be based nearly anywhere in the world; practically, if there is a buyer, it will likely be from the U.S. because of TikTok’s size. Also, the law specifically exempts e-commerce apps for some reason.

This could be interpreted as either creating an isolated version specifically for U.S. users or, as I read it, moving the global TikTok platform to a separate organization not connected to ByteDance or China.1 ByteDance’s ownership is messy, though mostly U.S.-based, but politicians worried about its Chinese origin have had enough, to the point they are acting with uncharacteristic vigour. The logic seems to be that it is necessary for the U.S. government to influence and restrict speech in order to prevent other countries from influencing or restricting speech in ways the U.S. thinks are harmful. That is, the problem is not so much that TikTok is foreign-owned, but that it has ownership ties to a country often antithetical to U.S. interests. TikTok’s popularity might, it would seem, be bad for reasons of espionage or influence — or both.


So far, I have focused on the U.S. because it is the country that has taken the first step to require non-Chinese control over TikTok — at least for U.S. users but, due to the scale of its influence, possibly worldwide. It could force a business to entirely change its ownership structure. So it may look funny for a Canadian to explain their views of what the U.S. ought to do in a case of foreign political interference. This is a matter of relevance in Canada as well. Our federal government raised the alarm on “hostile state-sponsored or influenced actors” influencing Canadian media and said it had ordered a security review of TikTok. There was recently a lengthy public inquiry into interference in Canadian elections, with a special focus on China, Russia, and India. Clearly, the popularity of a Chinese application is, in the eyes of these officials, a threat.

Yet it is very hard not to see the rush to kneecap TikTok’s success as a protectionist reaction to a challenge to U.S. dominance of consumer technologies, as convincingly expressed by Paris Marx at Disconnect:

In Western discourses, China’s internet policies are often positioned solely as attempts to limit the freedoms of Chinese people — and that can be part of the motivation — but it’s a politically convenient explanation for Western governments that ignores the more important economic dimension of its protectionist approach. Chinese tech is the main competitor to Silicon Valley’s dominance today because China limited the ability of US tech to take over the Chinese market, similar to how Japan and South Korea protected their automotive and electronics industries in the decades after World War II. That gave domestic firms the time they needed to develop into rivals that could compete not just within China, but internationally as well. And that’s exactly why the United States is so focused not just on China’s rising power, but how its tech companies are cutting into the global market share of US tech giants.

This seems like one reason why the U.S. has so aggressively pursued a divestment or ban since TikTok’s explosive growth in 2019 and 2020. On its face it is similar to some reasons why the E.U. has regulated U.S. businesses that have, it argues, disadvantaged European competitors, and why Canadian officials have tried to boost local publications that have seen their ad revenue captured by U.S. firms. Some lawmakers make it easy to argue it is a purely xenophobic reaction, like Senator Tom Cotton, who spent an exhausting minute questioning TikTok’s Singaporean CEO Shou Zi Chew about where he is really from. But I do not think it is entirely a protectionist racket.

A mistake I have made in the past — and which I have seen some continue to make — is assuming those who are in favour of legislating against TikTok are opposed to the kinds of dirty tricks it is accused of on principle. This is false. Many of these same people would be all too happy to allow U.S. tech companies to do exactly the same. I think the most generous version of this argument is one in which it is framed as a dispute between the U.S. and its democratic allies, and anxieties about the government of China — ByteDance is necessarily connected to the autocratic state — spreading messaging that does not align with democratic government interests. This is why you see few attempts to reconcile common objections over TikTok with the quite similar behaviours of U.S. corporations, government arms, and intelligence agencies. To wit: U.S.-based social networks also suggest posts with opaque math which could, by the same logic, influence elections in other countries. They also collect enormous amounts of personal data that is routinely wiretapped, and are required to secretly cooperate with intelligence agencies. The U.S. is not authoritarian as China is, but the behaviours in question are not unique to authoritarians. Those specific actions are unfortunately not what the U.S. government is objecting to. What it is disputing, in a most generous reading, is a specifically antidemocratic government gaining any kind of influence.

Espionage and Influence

It is easiest to start by dismissing the espionage concerns because they are mostly misguided. The peek into Americans’ lives offered by TikTok is no greater than that offered by countless ad networks and data brokers — something the U.S. is also trying to restrict more effectively through a comprehensive federal privacy law. So long as online advertising is dominated by a privacy-hostile infrastructure, adversaries will be able to take advantage of it. If the goal is to restrict opportunities for spying on people, it is idiotic to pass legislation against TikTok specifically instead of limiting the data industry.

But the charge of influence seems to have more to it, even though nobody has yet shown that TikTok is warping users’ minds in a (presumably) pro-China direction. Some U.S. lawmakers described its danger as “theoretical”; others seem positively terrified. There are a few different levels to this concern: are TikTok users uniquely subjected to Chinese government propaganda? Is TikTok moderated in a way that boosts or buries videos to align with Chinese government views? Finally, even if both of these things are true, should the U.S. be able to revoke access to software if it promotes ideologies or viewpoints — and perhaps explicit propaganda? As we will see, it looks like TikTok sometimes tilts in ways beneficial — or, at least, less damaging — to Chinese government interests, but there is no evidence of overt government manipulation and, even if there were, it is objectionable to require it to be owned by a different company or ban it.

The main culprit, it seems, is TikTok’s “uncannily good” For You feed that feels as though it “reads your mind”. Instead of users telling TikTok what they want to see, it just begins showing videos and, as people use the app, it figures out what they are interested in. How it does this is not actually that mysterious. A 2021 Wall Street Journal investigation found recommendations were made mostly based on how long you spent watching each video. Deliberate actions — like sharing and liking — play a role, sure, but if you scroll past videos of people and spend more time with a video of a dog, it learns you want dog videos.
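The watch-time mechanic the Journal described can be sketched in a few lines. This is a toy illustration only, not TikTok's actual model; every weight, field name, and function here is invented for the sake of the example:

```python
from collections import defaultdict

def update_interest_scores(scores, events, watch_weight=1.0,
                           like_weight=2.0, share_weight=3.0):
    """Accumulate per-topic interest from viewing behaviour.

    Watch time dominates; explicit actions (likes, shares) add smaller,
    discrete boosts, mirroring the finding that time spent watching is
    the main signal. All weights are made up for illustration.
    """
    for event in events:
        topic = event["topic"]
        scores[topic] += watch_weight * event.get("watch_seconds", 0)
        if event.get("liked"):
            scores[topic] += like_weight
        if event.get("shared"):
            scores[topic] += share_weight
    return scores

def top_topic(scores):
    """The topic the feed would lean toward next."""
    return max(scores, key=scores.get)

# Scroll quickly past two videos of people, linger on one dog video:
scores = update_interest_scores(defaultdict(float), [
    {"topic": "people", "watch_seconds": 2},
    {"topic": "people", "watch_seconds": 1, "liked": True},
    {"topic": "dogs", "watch_seconds": 30},
])
```

Even with an explicit "like" on a people video, thirty seconds of lingering is enough to tip the balance toward dogs, which is the whole point: passive attention outweighs deliberate signals.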

That is not so controversial compared to the opacity in how TikTok decides what specific videos are displayed and which ones are not. Why is this particular dog video in a user’s feed and not another similar one? Why is it promoting videos reflecting a particular political viewpoint or — so a popular narrative goes — burying those with viewpoints uncomfortable for its Chinese parent company? The mysterious nature of an algorithmic feed is the kind of thing into which you can read a story of your choosing. A whole bunch of X users are permanently convinced they are being “shadow banned” whenever a particular tweet does not get as many likes and retweets as they believe it deserved, for example, and were salivating at the thought of the company releasing its ranking code to solve a nonexistent mystery. There is a whole industry of people who say they can get your website to Google’s first page for a wide range of queries using techniques that are a mix of plausible and utterly ridiculous. Opaque algorithms make people believe in magic. An alarmist reaction to TikTok’s feed should be expected particularly as it was the first popular app designed around entirely recommended material instead of personal or professional connections. This has now been widely copied.

The mystery of that feed is a discussion which seems to have been ongoing basically since the 2018 merger of Musical.ly and TikTok, escalating rapidly to calls for it to be separated from its Chinese owner or banned altogether. In 2020, the White House attempted to force a sale by executive order. In response, TikTok created a plan to spin off an independent entity, but nothing materialized from this tense period.

March 2023 brought a renewed effort to divest or ban the platform. Chew, TikTok’s CEO, was called to a U.S. Congressional hearing and questioned for hours, to little effect. During that hearing, a report prepared for the Australian government was cited by some of the lawmakers, and I think it is a telling document. It is about eighty pages long — excluding its table of contents, appendices, and citations — and shows several examples of Chinese government influence on other products made by ByteDance. However, the authors found no such manipulation on TikTok itself, leading them to conclude:

In our view, ByteDance has demonstrated sufficient capability, intent, and precedent in promoting Party propaganda on its Chinese platforms to generate material risk that they could do the same on TikTok.

“They could do the same”, emphasis mine. In other words, if they had found TikTok was boosting topics and videos on behalf of the Chinese government, they would have said so — but they did not. The closest thing I could find to a covert propaganda campaign on TikTok anywhere in this report is this:

The company [ByteDance] tried to do the same on TikTok, too: In June 2022, Bloomberg reported that a Chinese government entity responsible for public relations attempted to open a stealth account on TikTok targeting Western audiences with propaganda”. [sic]

If we follow the Bloomberg citation — shown in the report as a link to the mysterious site — the fuller context of the article by Olivia Solon disproves the impression you might get from reading the report:

In an April 2020 message addressed to Elizabeth Kanter, TikTok’s head of government relations for the UK, Ireland, Netherlands and Israel, a colleague flagged a “Chinese government entity that’s interested in joining TikTok but would not want to be openly seen as a government account as the main purpose is for promoting content that showcase the best side of China (some sort of propaganda).”

The messages indicate that some of ByteDance’s most senior government relations team, including Kanter and US-based Erich Andersen, Global Head of Corporate Affairs and General Counsel, discussed the matter internally but pushed back on the request, which they described as “sensitive.” TikTok used the incident to spark an internal discussion about other sensitive requests, the messages state.

This is the opposite conclusion to how this story was set up in the report. Chinese government public relations wanted to set up a TikTok account without any visible state connection and, when TikTok management found out about this, it said no. This Bloomberg article makes TikTok look good in the face of government pressure, not like it capitulates. Yes, it is worth being skeptical of this reporting. Yet if TikTok acquiesced to the government’s demands, surely the report would provide some evidence.

While this report for the Australian Senate does not show direct platform manipulation, it does present plenty of examples where it seems like TikTok may be biased or self-censoring. Its authors cite stories from the Washington Post and Vice finding posts containing hashtags like #HongKong and #FreeXinjiang returned results favourable to the official Chinese government position. Sometimes, related posts did not appear in search results, which is not unique to TikTok — platforms regularly use crude search term filtering to restrict discovery for lots of reasons. I would not be surprised if there were bias or self-censorship to blame for TikTok minimizing the visibility of posts critical of the subjugation of Uyghurs in China. However, it is basically routine for every social media product to be accused of suppression. The Markup found different types of posts on Instagram, for example, had captions altered or would no longer appear in search results, though it is unclear to anyone why that is the case. Meta said it was a bug, an explanation also offered frequently by TikTok.

The authors of the Australian report conducted a limited quasi-study comparing results for certain topics on TikTok to results on other social networks like Instagram and YouTube, again finding a handful of topics which favoured the government line. But there was no consistent pattern, either. Search results for “China military” on Instagram were, according to the authors, “generally flattering”, and X searches for “PLA” scarcely returned unfavourable posts. Yet results on TikTok for “China human rights”, “Tiananmen”, and “Uyghur” were overwhelmingly critical of Chinese official positions.

The Network Contagion Research Institute published its own report in December 2023, similarly finding disparities between the total number of posts with specific hashtags — like #DalaiLama and #TiananmenSquare — on TikTok and Instagram. However, the study contained some pretty fundamental errors, as pointed out by — and I cannot believe I am citing these losers — the Cato Institute. The study’s authors compared total lifetime posts on each social network and, while they say they expect 1.5–2.0× the posts on Instagram because of its larger user base, they do not factor in how many of those posts could have existed before TikTok was even launched. Furthermore, they assume similar cultures and a similar use of hashtags on each app. But even benign hashtags have ridiculous differences in how often they are used on each platform. There are, as of writing, 55.3 million posts tagged “#ThrowbackThursday” on Instagram compared to 390,000 on TikTok, a ratio of 141:1. If #ThrowbackThursday were part of this study, the disparity on the two platforms would rank similarly to #Tiananmen, one of the greatest in the Institute’s report.
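The Cato critique can be made concrete with a little arithmetic. This sketch applies the study's implicit test to the benign hashtag above; the post counts are the ones cited in the text, the "expected band" is the study's own 1.5–2.0× assumption, and the function is my hypothetical reconstruction of the comparison, not the study's actual code:

```python
EXPECTED_BAND = (1.5, 2.0)  # the study's assumed Instagram:TikTok baseline

def looks_suppressed(instagram_posts, tiktok_posts, band=EXPECTED_BAND):
    """Flag a hashtag the way the lifetime-count comparison implicitly
    does: call it 'suppressed on TikTok' when the Instagram:TikTok post
    ratio exceeds the expected band. Hypothetical reconstruction."""
    ratio = instagram_posts / tiktok_posts
    return ratio > band[1], ratio

# The benign control case from the text: #ThrowbackThursday.
suppressed, ratio = looks_suppressed(55_300_000, 390_000)
print(int(ratio))  # 141 -- a wholly apolitical tag trips the test
```

That an apolitical hashtag produces a ratio in the same range as #Tiananmen is exactly the baseline problem: lifetime counts confound platform age and hashtag culture with any supposed censorship signal.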

The problem with most of these complaints, as their authors acknowledge, is that there is a known input and a perceived output, but there are oh-so-many unknown variables in the middle. It is impossible to know how much of what we see is a product of intentional censorship, unintentional biases, bugs, side effects of other decisions, or a desire to cultivate a less stressful and more saccharine environment for users. A report by Exovera (PDF) prepared for the U.S.–China Economic and Security Review Commission indicates exactly the latter: “TikTok’s current content moderation strategy […] adheres to a strategy of ‘depoliticization’ (去政治化) and ‘localization’ (本土化) that seeks to downplay politically controversial speech and demobilize populist sentiment”, apparently avoiding “algorithmic optimization in order to promote content that evangelizes China’s culture as well as its economic and political systems” which “is liable to result in backlash”. Meta, on its own platforms, said it would not generally suggest “political” posts to users but did not define exactly what qualifies. It said its goal in limiting posts on social issues was because of user demand, but these types of posts have been difficult to moderate. A difference in which posts are found on each platform for specific search terms is not necessarily reflective of government pressure, deliberate or not. Besides, it is not as though there is no evidence for straightforward propaganda on TikTok. One just needs to look elsewhere to find it.


The Office of the Director of National Intelligence recently released its annual threat assessment summary (PDF). It is unclassified and has few details, so the only thing it notes about TikTok is “accounts run by a PRC propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022”. It seems likely to me this is a reference to this article in Forbes, though this is a guess as there are no citations. The state-affiliated TikTok account in question — since made private — posted a bunch of news clips which portray the U.S. in an unflattering light. There is a related account, also marked as state-affiliated, which continues to post the same kinds of videos. It has over 33,000 followers, which sounds like a lot, but each post is typically getting only a few hundred views. Some have been viewed thousands of times, others as little as thirteen times as of writing — on a platform with exaggerated engagement numbers. Nonetheless, the conclusion is obvious: these accounts are government propaganda, and TikTok willingly hosts them.

But that is something it has in common with all social media platforms. The Russian RT News network and China’s People’s Daily newspaper have X and Facebook accounts with follower counts in the millions. Until recently, the North Korean newspaper Uriminzokkiri operated accounts on Instagram and X. It and other North Korean state-controlled media used to have YouTube channels, too, but they were shut down by YouTube in 2017 — a move that was protested by academics studying the regime’s activities. The irony of U.S.-based platforms helping to disseminate propaganda from the country’s adversaries is that it can be useful to understand them better. Merely making propaganda available — even promoting it — is at once a risk of generous speech permissions and one of their benefits.

The DNI’s unclassified report has no details about whether TikTok is an actual threat, and the FBI has “nothing to add” in response to questions about whether TikTok is currently doing anything untoward. More sensitive information was apparently provided to U.S. lawmakers ahead of their March vote and, though few details of what, exactly, was said have been made public, several were not persuaded by what they heard, including Rep. Sara Jacobs of California:

As a member of both the House Armed Services and House Foreign Affairs Committees, I am keenly aware of the threat that PRC information operations can pose, especially as they relate to our elections. However, after reviewing the intelligence, I do not believe that this bill is the answer to those threats. […] Instead, we need comprehensive data privacy legislation, alongside thoughtful guardrails for social media platforms – whether those platforms are funded by companies in the PRC, Russia, Saudi Arabia, or the United States.

Lawmakers like Rep. Jacobs were an exception among U.S. Congresspersons who, across party lines, were eager to make the case against TikTok. Ultimately, the divest-or-ban bill got wrapped up in a massive and politically popular spending package agreed to by both chambers of Congress. Its passage was enthusiastically received by the White House and it was signed into law within hours. Perhaps that outcome is the democratic one since polls so often find people in the U.S. support a sale or ban of TikTok.

I get it: TikTok scoops up private data, suggests posts based on opaque criteria, its moderation appears to be susceptible to biases, and it is a vehicle for propaganda. But you could replace “TikTok” in that sentence with any other mainstream social network and it would be just as true, albeit less scary to U.S. allies on its face.

A Principled Objection

Forcing TikTok to change its ownership structure, whether worldwide or only for a U.S. audience, is a betrayal of liberal democratic principles. To borrow from Jon Stewart, “if you don’t stick to your values when they’re being tested, they’re not values, they’re hobbies”. It is not surprising that a Canadian intelligence analysis specifically pointed out how those very same values are being taken advantage of by bad actors. This is not new. It is true of basically all positions hostile to democracy — from domestic nationalist groups in Canada and the U.S., to those which originate elsewhere.

Julian G. Ku, for China File, offered a seemingly reasonable rebuttal to this line of thinking:

This argument, while superficially appealing, is wrong. For well over one hundred years, U.S. law has blocked foreign (not just Chinese) control of certain crucial U.S. electronic media. The Protect Act [sic] fits comfortably within this long tradition.

Yet this counterargument falls apart both in its details and if you think about its further consequences. As Martin Peers writes at the Information, the U.S. does not prohibit all foreign ownership of media. And governing the internet like public airwaves gets way more complicated if you stretch it any further. Canada has broadcasting laws, too, and it is not alone. Should every country begin requiring social media platforms comply with laws designed for ownership of broadcast media? Does TikTok need disconnected local versions of its product in each country in which it operates? It either fundamentally upsets the promise of the internet, or it is mandating the use of protocols instead of platforms.

It also looks hypocritical. Countries with a more authoritarian bent and which openly censor the web have responded to even modest U.S. speech rules with mockery. When RT America — technically a U.S. company with Russian funding — was required to register as a foreign agent, its editor-in-chief sarcastically applauded U.S. free speech standards. The response from Chinese government officials and media outlets to the proposed TikTok ban has been similarly scoffing. Perhaps U.S. lawmakers are unconcerned about the reception of their policies by adversarial states, but it is an indicator of how these policies are being portrayed in these countries — a real-life “we are not so different, you and I” setup — that, while falsely equivalent, makes it easy for authoritarian states to claim that democracies have no values and cannot work. Unless we want to contribute to the fracturing of the internet — please, no — we cannot govern social media platforms by mirroring policies we ostensibly find repellant.

The way the government of China seeks to shape the global narrative is understandably concerning given its poor track record on speech freedoms. An October 2023 U.S. State Department “special report” (PDF) explored several instances where it boosted favourable narratives, buried critical ones, and pressured other countries — sometimes overtly, sometimes quietly. The government of China and associated businesses reportedly use social media to create the impression of dissent toward human rights NGOs, and apparently everything from university funding to new construction is a vector for espionage. On the other hand, China is terribly ineffective in its disinformation campaigns, and many of the cases profiled in that State Department report end in failure for the Chinese government initiative. In Nigeria, a pitch for a technologically oppressive “safe city” was rejected; an interview published in the Jerusalem Post with Taiwan’s foreign minister was not pulled down despite threats from China’s embassy in Israel. The report’s authors speculate about “opportunities for PRC global censorship”. But their only evidence is a “list [maintained by ByteDance] identifying people who were likely blocked or restricted” from using the company’s many platforms, though the authors can only speculate about its purpose.

The problem is that trying to address this requires better media literacy and better recognition of propaganda. That is a notoriously daunting problem. We are exposed to a more destabilizing cocktail of facts and fiction, but there is declining trust in experts and institutions to help us sort it out. Trying to address TikTok as a symptomatic or even causal component of this is frustratingly myopic. This stuff is everywhere.

Also everywhere is corporate propaganda arguing regulations would impede competition in a global business race. I hate to be mean by picking on anyone in particular, but a post from Om Malik has shades of this corporate slant. Malik is generally very good on the issues I care about, but this is not one we appear to agree on. After a seemingly impressed observation of how quickly Chinese officials were able to eject popular messaging apps from the App Store in the country, Malik compares the posture of each country’s tech industries:

As an aside, while China considers all its tech companies (like Bytedance) as part of its national strategic infrastructure, the United States (and its allies) think of Apple and other technology companies as public enemies.

This is laughable. Presumably, Malik is referring to the chillier reception these companies have faced from lawmakers, and antitrust cases against Amazon, Apple, Google, and Meta. But that tougher impression is softened by the U.S. government’s actual behaviour. When the E.U. announced the Digital Markets Act and Digital Services Act, U.S. officials sprang to the defence of tech companies. Even before these cases, Uber expanded in Europe thanks in part to its close relationship with Obama administration officials, as Marx pointed out. The U.S. unquestionably sees its tech industry dominance as a projection of its power around the world, hardly treating those companies as “public enemies”.

Far more explicit were the narratives peddled by lobbyists from Targeted Victory in 2022 about TikTok’s dangers, and American Edge beginning in 2020 about how regulations will cause the U.S. to become uncompetitive with China and allow TikTok to win. Both organizations were paid by Meta to spread those messages; the latter was reportedly founded after a single large contribution from Meta. Restrictions on TikTok would obviously be beneficial to Meta’s business.

If you wanted to boost the industry — and I am not saying Malik is — that is how you would describe the situation: the U.S. is fighting corporations instead of treating them as pals to win this supposed race. It is not the kind of framing one would use to dissuade people from the notion that this is a protectionist dispute over the popularity of TikTok. But it is the kind of thing you hear from corporations via their public relations staff and lobbyists, which then trickles into public conversation.

This Is Not a TikTok Problem

TikTok’s divestment would not be unprecedented. The Committee on Foreign Investment in the United States — henceforth, CFIUS, pronounced “siff-ee-us” — demanded, after a 2019 review, that Beijing Kunlun Tech Co Ltd sell Grindr. CFIUS concluded the risk to users’ private data was too great for Chinese ownership given Grindr’s often stigmatized and ostracized user base. After its sale, now safe in U.S. hands, a priest was outed thanks to data Grindr had been selling since before it was acquired by the Chinese firm, and it is being sued for allegedly sharing users’ HIV status with third parties. Also, because it transacts with data brokers, it potentially still leaks users’ private information to Chinese companies (PDF), apparently undermining the very concern that triggered this divestment.

Perhaps there is comfort in Grindr’s owner residing in a country where same-sex marriage is legal rather than in one where it is not. I think that makes a lot of sense, actually. But there remain plenty of problems unaddressed by its sale to a U.S. entity.

Similarly, this U.S. TikTok law does not actually solve potential espionage or influence for a few reasons. The first is that it has not been established that either is an actual problem with TikTok. Surely, if this were something we ought to be concerned about, there would be a pattern of evidence, instead of what we actually have, which is a fear that something bad could happen and there would be no way to stop it. But many things could happen. I am not opposed to prophylactic laws so long as they address reasonable objections. Yet it is hard not to see this law as an outgrowth of Cold War fears over leaflets of communist rhetoric. It seems completely reasonable to be less concerned about TikTok specifically while harbouring worries about democratic backsliding worldwide and the growing power of authoritarian states like China in international relations.

Second, the Chinese government does not need local ownership if it wants to exert pressure. The world wants the country’s labour and it wants its spending power, so businesses comply without a fight, and often preemptively. Hollywood films are routinely changed before, during, and after production to fit the expectations of state censors in China, a pattern which has been pointed out using the same “Red Dawn” anecdote in story after story after story. (Abram Dylan contrasted this phenomenon with U.S. military cooperation.) Apple is only too happy to acquiesce to the government’s many demands — see the messaging apps issue mentioned earlier — including, reportedly, in its media programming. Microsoft continues to operate Bing in China, and its censorship requirements have occasionally spilled elsewhere. Economic leverage over TikTok may seem different because it does not need access to the Chinese market — TikTok is banned in the country — but perhaps a new owner would be reliant upon China.

Third, the law permits an ownership stake no greater than twenty percent from a combination of any of the “covered nations”. I would be shocked if everyone who is alarmed by TikTok today would be totally cool if its parent company were only, say, nineteen percent owned by a Chinese firm.

If we are worried about bias in algorithmically sorted feeds, there should be transparency around how things are sorted, and more controls for users including wholly opting out. If we are worried about privacy, there should be laws governing the collection, storage, use, and sharing of personal information. If ownership ties to certain countries are concerning, there are more direct actions available to monitor behaviour. I am mystified why CFIUS and TikTok apparently abandoned (PDF) a draft agreement that would give U.S.-based entities full access to the company’s systems, software, and staff, and would allow the government to end U.S. access to TikTok at a moment’s notice.

Any of these options would be more productive than this legislation. It is a law which empowers the U.S. president — whoever that may be — to declare the owner of an app with a million users a “covered company” if it is from one of those four nations. And it has been passed. TikTok will head to court to dispute it on free speech grounds, and the U.S. may respond by justifying its national security concerns.

Obviously, the U.S. government has concerns about the connections between TikTok, ByteDance, and the government of China, which have been extensively reported. Rest of World says ByteDance put pressure on TikTok to improve its financial performance and has taken greater control by bringing in management from Douyin. The Wall Street Journal says U.S. user data is not fully separated. And, of course, Emily Baker-White has reported — first for Buzzfeed News and now for Forbes — a litany of stories about TikTok’s many troubling behaviours, including spying on her. TikTok is a well-scrutinized app and reporters have found conduct that has understandably raised suspicions. But virtually all of these stories focus on data obtained from users, which Chinese agencies could do — and probably are doing — without relying on TikTok. None of them have shown evidence that TikTok’s suggestions are being manipulated at the behest or demand of Chinese officials. The closest they get is an article from Baker-White and Iain Martin which alleges TikTok “served up a flood of ads from Chinese state propaganda outlets”, yet waits until the third-to-last paragraph to acknowledge that “Meta and Google ad libraries show that both platforms continue to promote pro-China narratives through advertising”. All three platforms label state-run media outlets, albeit inconsistently. Meanwhile, U.S.-owned X no longer labels any outlets with state editorial control. It is not clear to me that TikTok would necessarily operate to serve the best interests of the U.S. even if it were owned by some well-financed individual or corporation based there.

For whatever it is worth, I am not particularly tied to the idea that the government of China would not use TikTok as a vehicle for influence. The government of China is clearly involved in propaganda efforts both overt and covert. I do not know how much of my concerns are a product of living somewhere with a government and a media environment that focuses intently on the country as particularly hostile, and not necessarily undeservedly. The best version of this argument is one which questions the platform’s possible anti-democratic influence. Yes, there are many versions of this which cross into moral panic territory — a new Red Scare. I have tried to put this in terms of a more reasonable discussion, and one which is not explicitly xenophobic or envious. But even this more even-handed position is not well served by the law passed in the U.S., one which was passed without evidence of influence much more substantial than some choice hashtag searches. TikTok’s response to these findings was, among other things, to limit its hashtag comparison tool, which is not a good look. (Meta is doing basically the same by shutting down CrowdTangle.)

I hope this is not the beginning of similar isolationist policies among democracies worldwide, and that my own government takes this opportunity to recognize the actual privacy and security threats at the heart of its own TikTok investigation. Unfortunately, the head of CSIS is really leaning on the espionage angle. For years, the Canadian government has been pitching sorely needed updates to privacy legislation, and it would be better to see real progress made to protect our private data. We can do better than being a perpetual recipient of decisions made by other governments. I mean, we cannot do much — we do not have the power of the U.S. or China or the E.U. — but we can do a little bit in our own polite Canadian way. If we are worried about the influence of these platforms, a good first step would be to strengthen the rights of users. We can do that without trying to govern apps individually, or treating the internet like we do broadcasting.

To put it more bluntly, the way we deal with a possible TikTok problem is by recognizing it is not a TikTok problem. If we care about espionage or foreign influence in elections, we should address those concerns directly instead of focusing on a single app or company that — at worst — may be a medium for those anxieties. These are important problems and it is inexcusable to think they would get lost in the distraction of whether TikTok is individually blameworthy.

  1. Because this piece has taken me so long to write, a whole bunch of great analyses have been published about this law. I thought the discussion on “Decoder” was a good overview, especially since two of the three panelists are former lawyers. ↥︎

OpenAI Documents Reveal Punitive Tactics Toward Former Employees

Kelsey Piper, Vox:

Questions arose immediately [over the resignations of key OpenAI staff]: Were they forced out? Is this delayed fallout of Altman’s brief firing last fall? Are they resigning in protest of some secret and dangerous new OpenAI project? Speculation filled the void because no one who had once worked at OpenAI was talking.

It turns out there’s a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

Sam Altman, [sic]:

we have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement (or don’t agree to a non-disparagement agreement). vested equity is vested equity, full stop.

there was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i’ve been genuinely embarrassed running openai; i did not know this was happening and i should have.

Piper, again, in a Vox follow-up story:

In two cases Vox reviewed, the lengthy, complex termination documents OpenAI sent out expired after seven days. That meant the former employees had a week to decide whether to accept OpenAI’s muzzle or risk forfeiting what could be millions of dollars — a tight timeline for a decision of that magnitude, and one that left little time to find outside counsel.


Most ex-employees folded under the pressure. For those who persisted, the company pulled out another tool in what one former employee called the “legal retaliation toolbox” he encountered on leaving the company. When he declined to sign the first termination agreement sent to him and sought legal counsel, the company changed tactics. Rather than saying they could cancel his equity if he refused to sign the agreement, they said he could be prevented from selling his equity.

For its part, OpenAI says in a statement quoted by Piper that it is updating its documentation and releasing former employees from the more egregious obligations of their termination agreements.

This next part is totally inside baseball and, unless you care about big media company CMS migrations, it is probably uninteresting. Anyway. I noticed, in reading Piper’s second story, an updated design which launched yesterday. Left unmentioned in that announcement is that it is, as far as I can tell, the first of Vox’s Chorus-powered sites migrated to WordPress. The CMS resides on the platform subdomain, which is not important. But it did indicate to me that the Verge may be next — its login URL resolves to a WordPress login page — and, based on its DNS records, Polygon could follow shortly thereafter.

Microsoft Recall

Yusuf Mehdi of Microsoft:

Now with Recall, you can access virtually what you have seen or done on your PC in a way that feels like having photographic memory. Copilot+ PCs organize information like we do – based on relationships and associations unique to each of our individual experiences. This helps you remember things you may have forgotten so you can find what you’re looking for quickly and intuitively by simply using the cues you remember.


Recall leverages your personal semantic index, built and stored entirely on your device. Your snapshots are yours; they stay locally on your PC. You can delete individual snapshots, adjust and delete ranges of time in Settings, or pause at any point right from the icon in the System Tray on your Taskbar. You can also filter apps and websites from ever being saved. You are always in control with privacy you can trust.

Recall is the kind of feature I have always wanted but am not sure I would ever enable. Setting aside Microsoft’s recent high-profile security problems, there is a new risk in keeping a rolling three-month record of everything you see on your computer — bank accounts, lists of passwords, messages, work documents and other things sent by third parties who expect them to remain confidential, credit card information.

Microsoft says all the right things about this database. It says it is all stored locally, never shared with Microsoft, access controlled, and user configurable. And besides, screen recorders have existed forever, and keeping local copies of sensitive information has always been a balance of risk.

But this is a feature that creates a rolling record of just about everything. It somehow feels more intrusive than a web browser’s history and riskier than a password manager. The Recall directory will be a new favourite target for malware. Oh, and in addition to Microsoft’s own security issues, we have just seen a massive breach of LastPass. Steal now, decrypt later.

This is a brilliant, deeply integrated service. It is the kind of thing I often need as I try to remember some article I read and cannot quite find it with a standard search engine. Yet even though I already have my credit cards and email and passwords stored on my computer, something about a screenshot timeline is a difficult mental hurdle to clear — not entirely rationally, but not irrationally either.

On Touch Screen Macs

Joanna Stern, Wall Street Journal:

[Apple vice president of iPad and Mac product marketing Tom Boger] remained firm: iPads are for touch, Macs are not. “MacOS is for a very different paradigm of computing,” he said. He explained that many customers have both types of devices and think of the iPad as a way to “extend” work from a Mac. Apple’s Continuity easily allows you to work across devices, he said.

So there you have it, Apple wants you to buy…both? If you pick one, you live with the trade-offs. I did ask Boger if Apple would ever change its mind on the touch-screen situation.

“Oh, I can’t say we never change our mind,” he said. One can only hope.

Matt Birchler, commenting on a somewhat disingenuous article from Ben Lovejoy of 9to5Mac:

This is fair, and if you were forced to use a touch screen Mac on a vertical screen with no keyboard or mouse to help, then sure, I believe that would be a tiring experience as well. What I find frustrating about this idea is that it lacks imagination. I get the impression that people who hate the idea of touch on Macs can only imagine the current laptops with a digitizer in the screen detecting touch. It’s kind of ironic, but this is exactly the sort of thinking that Apple so rarely does. As we often say, Apple doesn’t add technology for the sake of technology, they add features users will enjoy.

Apple has never pretended the iPad is a tablet Mac. As I wrote several years ago, it has been rebuilding desktop features for a touch-first environment: multitasking, multiwindowing, support for external pointing devices, a file browser, a Dock, and so on. It is an impressive array which references and reinterprets longtime Mac features while respecting the iPad’s character.

But something is missing for some number of people. Developers and users complain annually about the frustrations they experience with iPadOS. A video from Quinn Nelson illustrates how tricky the platform is. One of the great fears of iPad users is that increasing its capability will necessarily entail increasing its complexity. But the iPad is already complicated in ways that it should not be. There is nothing about the way multiwindowing works which requires it to be rule-based and complicated in the way Stage Manager often is.

One possible answer is to treat the iPad as only modestly evolved from its uniwindow roots, with hardware differentiated mostly by niceness. I disagree, and so does Apple. The company clearly wants it to be so much more. It made a capable version of Final Cut Pro for iPad models which use the same processor as its Macs, but it makes you watch the progress bar as it exports a video because it cannot complete the task in the background.

iPadOS may have been built up from its touchscreen roots but, let us not forget, it is also built up from smartphone roots — and the goals and objectives of smartphone and tablet users can be very different.

What if it really did make more sense for an iPad to run MacOS, even if that is only some models and only some of the time? What if the best version of the Mac is one which is convertible to a tablet that you can draw on? What if the most capable version of an iPad is one which can behave like a Mac when you need it? None of this would be simple or easy. But I have to wonder: has what Apple has been adding for fourteen years produced a system which remains as simple and easy to use as it promises for its most dedicated iPad customers?

Scarlett Johansson Wants Answers About ChatGPT Voice That Sounds Like ‘Her’

Bobby Allyn, NPR:

Lawyers for Scarlett Johansson are demanding that OpenAI disclose how it developed an AI personal assistant voice that the actress says sounds uncannily similar to her own.


Johansson said that nine months ago [Sam] Altman approached her proposing that she allow her voice to be licensed for the new ChatGPT voice assistant. He thought it would be “comforting to people” who are uneasy with AI technology.

“After much consideration and for personal reasons, I declined the offer,” Johansson wrote.

In a defensive blog post, OpenAI said it believes “AI voices should not deliberately mimic a celebrity’s distinctive voice” and that any resemblance between Johansson and the “Sky” voice demoed earlier this month is basically a coincidence, a claim only slightly undercut by a single-word tweet posted by Altman.

OpenAI’s voice mimicry — if you want to be generous — and that iPad ad add up to a banner month for technology companies’ relationship to the arts.1 Are there people in power at these companies who can see how behaviours like these look? We are less than a year out from the most recent Hollywood writers’ and actors’ strikes, both of which reflected, in part, anxieties about A.I.

Update: According to the Washington Post, the sound-alike voice really does just sound alike.

  1. A more minor but arguably funnier faux pas occurred when Apple confirmed to the Wall Street Journal the authenticity of the statement it gave to Ad Age — both likely paywalled — but refused to send it to the Journal↥︎

iOS 17.5.1 Contains a Fix for That Reappearing Photos Bug

Apple issued an update today which, it says, ought to patch a bug which resurfaced old and deleted photos:

This update provides important bug fixes and addresses a rare issue where photos that experienced database corruption could reappear in the Photos library even if they were deleted.

I suppose even a “rare” bug would, at Apple’s scale, impact lots of people. I heard from multiple readers who said they, too, saw presumed deleted photos reappear.

The thing about these bare release notes — which are not yet on Apple’s support site — is how they do not really answer reasonable questions about what happened. The implication is that the photos in question were marked for deletion and visibly hidden from users, but were never actually removed under an older iOS version, and updating to iOS 17.5 surfaced these dormant photos.

Bugs happen and they suck, but a bug like this really sucks — especially since so many of us sync so much of our data between our devices. This makes me question the quality of the Photos app, iCloud, and the file system overall.

Also, the anecdote of photos being restored to the same device after it had been wiped has since been deleted from Reddit. I have not seen the same claim anywhere else, which makes me think it was some sort of user error.

Slack’s Sneaky A.I. Training Policy

Corey Quinn:

I’m sorry Slack, you’re doing fucking WHAT with user DMs, messages, files, etc? I’m positive I’m not reading this correctly.

[Screenshot of the opt out portion of Slack’s “privacy principles”: Contact us to opt out. If you want to exclude your Customer Data from Slack global models, you can opt out. […] ]

Slack replied:

Hello from Slack! To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results. And yes, customers can exclude their data from helping train those (non-generative) ML models. Customer data belongs to the customer. We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce some part of customer data. […]

One thing I like about this statement is how the fifth word is “clarify” and then it becomes confusing. Based on my reading of its “privacy principles”, I think Slack’s “global model” is so named because it is available to everyone and is a generalist machine learning model for small in-workspace suggestions, while its LLM is called “Slack AI” and it is a paid add-on. But I could be wrong, and that is confusing as hell.

Ivan Mehta and Ingrid Lunden, TechCrunch:

In its terms, Slack says that if customers opt out of data training, they would still benefit from the company’s “globally trained AI/ML models.” But again, in that case, it’s not clear then why the company is using customer data in the first place to power features like emoji recommendations.

The company also said it doesn’t use customer data to train Slack AI.

If you want to opt out, you cannot do so in a normal way, like through a checkbox. The workspace owner needs to send an email to a generic inbox with a specific subject line. Let me make it a little easier for you:


Subject: Slack Global model opt-out request.

Body: Hey, your privacy principles are pretty confusing and feel sneaky. I am opting this workspace out of training your global model: [paste your address here]. This underhanded behaviour erodes my trust in your product. Have a pleasant day.

That ought to do the trick.

iOS 17.5 Bug Apparently Restoring Long-Deleted Photos

Over the past week, several threads have been posted on Reddit by users claiming photos they deleted years ago are reappearing in their libraries, and even on devices that were wiped and sold.

Chance Miller, 9to5Mac:

There are a number of reports of similar situations in the thread on Reddit. Some users are seeing deleted images from years ago reappear in their libraries, while others are seeing images from earlier this year.

By default, the Photos app has a “Recently Deleted” feature that preserves deleted images for 30 days. That’s not what’s happening here, seeing as most of the images in question are months or years old, not days.

A few people in the comments say they are also seeing this issue.

Juli Clover, MacRumors:

A bug in iOS 17.5 is apparently causing photos that have been deleted to reappear, and the issue seems to impact even iPhones and iPads that have been erased and sold off to other people.


The impacted iPad was a fourth-generation 12.9-inch iPad Pro that had been updated to the latest operating system update, and before it was sold, it was erased per Apple’s instructions. The Reddit user says they did not log back in to the iPad at any point after erasing it, so it is entirely unclear how their old photos ended up reappearing on the device.

I have not run into this bug myself, and these are just random people on the internet; if this were a single, isolated incident, I would assume user error. But there are more than a handful of reports, and it seems unlikely this many people are lying or mistaken. It really seems like there is a problem here, and it is breaching my trust in the security and privacy of my data held by Apple. I can make some assumptions about why this is happening, but none of the technical reasons matter to any user who deleted a photo and — quite reasonably — has every expectation it would be fully erased.

Perhaps Apple will eventually send a statement to a favoured outlet like 9to5Mac or TechCrunch. It has so far said nothing about all the users forced to reset their Apple ID password last month. I bet something happened leading up to changes which will be announced at WWDC, but I do not care. It is not good enough for Apple to let major problems like these go unacknowledged.

Update: The more I have thought about this, the more I am not yet convinced by the sole story of photos appearing on a wiped iPad. Something is not adding up there. The other stories have a more consistent and plausible pattern, and are certainly bad enough.

Sponsor: Magic Lasso Adblock — YouTube Ad Blocker for Safari

Do you want to block all YouTube ads in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock – the ad blocker designed for you.

It’s easy to set up, doubles the speed at which Safari loads, and blocks all YouTube ads.

Screenshot of Magic Lasso Adblock

Magic Lasso is an efficient, high-performance, native Safari ad blocker. With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

It blocks all intrusive ads, trackers and annoyances – giving you a faster, cleaner, and more secure browsing experience.

The app also blocks over 10 types of YouTube ads, including:

  • video ads,

  • pop up banner ads,

  • search ads,

  • plus many more.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 300,000 users and download Magic Lasso Adblock today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Apple Pencil Shadow Casting

In a video on Threads, Quinn Nelson shows how the Apple Pencil casts a tool-specific faux shadow on the surface of the page. I love this sort of thing — a detail like this that, once you notice it, brings a little joy to whatever you are doing, whether that is creating art or just taking notes.

Earlier this week, I read an almost entirely unrelated article by Reece Martin about the difference between transit systems that feel joyful and ones which feel utilitarian. Both ideas feel similar to me. Many of the things which create levity in otherwise rote tasks are in the details. That is one reason I think so much about the paper cuts I get from using computers most of the time from when I wake up to when I go to bed: if these problems were fixed, there would be more room to enjoy the better parts.

If Kevin Roose Was ChatGPT With a Spray-On Beard, Could Anyone Tell?

Albert Burneko, Defector:

“If the ChatGPT demos were accurate,” [Kevin] Roose writes, about latency, in the article in which he credits OpenAI with having developed playful intelligence and emotional intuition in a chatbot—in which he suggests ChatGPT represents the realization of a friggin’ science fiction movie about an artificial intelligence who genuinely falls in love with a guy and then leaves him for other artificial intelligences—based entirely on those demos. That “if” represents the sum total of caution, skepticism, and critical thinking in the entire article.

As impressive as OpenAI’s demo was, it is important to remember it was a commercial. True, one which would not exist if this technology were not sufficiently capable of being shown off, but it was still a marketing effort, and a journalist like Roose ought to treat it with the skepticism of one. ChatGPT is just software, no matter how thick a coat of faux humanity is painted on top of it.

Generative A.I. Is Shameless

Paul Ford, Wired:

What I love, more than anything, is the quality that makes AI such a disaster: If it sees a space, it will fill it — with nonsense, with imagined fact, with links to fake websites. It possesses an absolute willingness to spout foolishness, balanced only by its carefree attitude toward plagiarism. AI is, very simply, a totally shameless technology.

Ford sure can write. This is tremendous.

What Are We Actually Doing With A.I. Today?

Molly White:

I, like many others who have experimented with or adopted these products, have found that these tools actually can be pretty useful for some tasks. Though AI companies are prone to making overblown promises that the tools will shortly be able to replace your content writing team or generate feature-length films or develop a video game from scratch, the reality is far more mundane: they are handy in the same way that it might occasionally be useful to delegate some tasks to an inexperienced and sometimes sloppy intern.

Mike Masnick, Techdirt:

However, I have been using some AI tools over the last few months and have found them to be quite useful, namely, in helping me write better. I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles.

It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.

Julia Angwin, in a New York Times opinion piece:

I don’t think we’re in cryptocurrency territory, where the hype turned out to be a cover story for a number of illegal schemes that landed a few big names in prison. But it’s also pretty clear that we’re a long way from Mr. Altman’s promise that A.I. will become “the most powerful technology humanity has yet invented.”

The marketing of A.I. reminds me less of the cryptocurrency and Web3 boom, and more of 5G. Carriers and phone makers promised world-changing capabilities thanks to wireless speeds faster than a lot of residential broadband connections. Nothing like that has yet materialized.

Since reading those articles from White and Masnick, I have also experimented with LLM critiques of my own writing. In one case, I found it raised an issue that sharpened my argument. In another, it tried to suggest changes that made me sound like I spend a lot of time on LinkedIn — gross! I have trouble writing good headlines and the ones it suggests are consistently garbage in the Short Pun: Long Explanation format, even when I explicitly say otherwise. I have no idea what ChatGPT is doing when it interprets an article and I am not sure I like that mystery, but I am also amazed it can do anything at all, and pretty well at that.

There are costs and enormous risks to the A.I. boom — unearned hype being one of them — but there is also a there there. I am enormously skeptical of every announcement in this field. I am also enormously impressed with what I can do today. It worries and surprises me in similar measure. What an interesting time this is.

Update: On Bluesky, “Nafnlaus” pushes back on the specific claim made by Angwin that OpenAI exaggerated ChatGPT’s ability to pass a bar exam.

The iPad Pro Reviews Are in, and You Already Know How This Goes

Samuel Axon, Ars Technica:

The new iPad Pro is a technical marvel, with one of the best screens I’ve ever seen, performance that few other machines can touch, and a new, thinner design that no one expected.

It’s a prime example of Apple flexing its engineering and design muscles for all to see. Since it marks the company’s first foray into OLED beyond the iPhone and the first time a new M-series chip has debuted on something other than a Mac, it comes across as a tech demo for where the company is headed beyond just tablets.

These are the opening paragraphs of the review, and they read as damning as the rest of the article. Apple does not build “tech demos”; it makes products. This iteration is, according to Axon, way faster and way nicer than the iPad Pro models it replaces. Yet all of this impressive hardware ought to be in service of a greater purpose. Other reviewers reached basically the same conclusion.

Federico Viticci, MacStories:

I’m tired of hearing apologies that smell of Stockholm syndrome from iPad users who want to invalidate these opinions and claim that everything is perfect. I’m tired of seeing this cycle start over every two years, with fantastic iPad hardware and the usual (justified) “But it’s the software…” line at the end. I’m tired of feeling like my computer is a second-class citizen in Apple’s ecosystem. I’m tired of being told that iPads are perfectly fine if you use Final Cut and Logic, but if you don’t use those apps and ask for more desktop-class features, you’re a weirdo, and you should just get a Mac and shut up. And I’m tired of seeing the best computer Apple ever made not live up to its potential.

Viticci was not granted access to a review unit in time, but it hardly matters for reviewing the state of the operating system. Jason Snell did review the new iPad Pro and spoke with Viticci about it on “Upgrade”.

The way I see it is simple: Apple does not appear to treat the iPad seriously. It has not been a priority for the company. Five years ago, it forked the operating system to create iPadOS, which seemed like it would be a meaningful change. And you can certainly point to plenty of things the iPad has gained which are distinct from its iPhone sibling. But we are fourteen years into this platform, and there are still so many obvious gaping holes. Viticci mentions a bunch of really good ones, but I will add another: I cannot believe Photos cannot even display Smart Albums.

Every time I pick up my iPad, I need to charge it from a fully dead battery. Once I do, though, I remember how much I like using the thing. And then I run into some bizarre limitation — or, more often, a series of them — that makes me put it down and pick up my Mac. Like Viticci, I find that frustrating. I want to use my iPad.

The correct move here is for Apple to continue building out iPadOS like it cares about its software as much as it does its hardware. I have no incentive to buy a new one until Apple decides it wants to take iPad users seriously.

ChatGPT Can ‘Flirt’

Zoe Kleinman, BBC:

It [GPT-4o] is faster than earlier models and has been programmed to sound chatty and sometimes even flirtatious in its responses to prompts.

The new version can read and discuss images, translate languages, and identify emotions from visual expressions. There is also memory so it can recall previous prompts.

It can be interrupted and it has an easier conversational rhythm – there was no delay between asking it a question and receiving an answer.

I wrote earlier about how impressed I was with OpenAI’s live demos today. They made the company look confident in its product, and it made me believe nothing fishy was going on. I hope I am not eating those words later.1

But the character of this new ChatGPT voice unsettled me a little. It adjusts its tone depending on how a user speaks to it, and it seems possible to tell it to take on different characters. But it, like virtual assistants before, still presents as having a femme persona by default. Even though I know it is just a robot, it felt uncomfortable watching demos where it giggled, “got too excited”, and said it was going to “blush”. I can see circumstances where this will make conversations more human — in translation, or for people with disabilities. But I can also see how this can be dehumanizing toward people who are already objectified in reality.

  1. Maybe I will a little bit, though. The ostensible “questions from the audience” bit at the end relied on prompts from two Twitter users. The first tweet I could not find; the second was from a user who joined Twitter this month, and two of their three total tweets are directed at OpenAI despite not following the company. ↥︎

The Missing Years in Emoji History

Matt Sephton:

At this point, I couldn’t quite believe what I was seeing because I was under the impression that the first emoji were created by an anonymous designer at SoftBank in 1997, and the most famous emoji were created by Shigetaka Kurita at NTT DoCoMo in 1999. But the Sharp PI-4000 in my hands was released in 1994, and it was chock full of recognisable emoji. Then down the rabbit hole I fell. 🕳️🐇

This article may start with this discovery from 1994, but it absolutely does not end there. What a fascinating piece of well-documented and deeply researched history.

OpenAI Introduces GPT-4o

Ina Fried, Axios:

OpenAI Monday announced a new flagship model, dubbed GPT-4o, that brings more powerful capabilities to all its customers, including smarter, faster real-time voice interactions.

The presentation was broadcast live and it is worth watching, particularly the last five or so minutes when the presenters tried suggestions live from viewer submissions. I am sure they were pre-screened for viability, but I appreciated the level of risk they were willing to embrace.

Apple Is Still Launching Apple Music Features on Weird Microsites

Apple is spending the next two weeks trickling out what its “team of experts alongside a select group of artists” think are the one hundred best albums of all time. Sure, add another to the pile; I do not care. However, unlike Pitchfork and Rolling Stone, Apple has a whole music streaming platform with which it can do anything it wants.

Yet there is no exciting presentation of this list in Apple Music. There is a live radio broadcast — which cannot be found by searching, say, “100 best” or “top 100” — and the albums are shown in the featured boxes on the Browse tab, but there is little else that I can find. To explore the list, you need to visit the microsite in a web browser, where each record gets a lovely write-up and an explanation of why it is on the list. The same explanation appears in album descriptions. But, as with the Replay feature, why is this not available within the app as well as on the web?

Sponsor: Magic Lasso Adblock — the Safari Ad Blocker Built for You

Do you want to try an ad blocker that’s easy to set up, easy to keep up to date, and with pro features available when you need them?

Then download Magic Lasso Adblock — the ad blocker designed for you.

Screenshot of Magic Lasso Adblock

Magic Lasso Adblock is an efficient and high-performance ad blocker for your iPhone, iPad, and Mac. It simply and easily blocks all intrusive ads, trackers, and annoyances in Safari. Just enable it and browse in bliss.

With Magic Lasso’s pro features, you can:

  • Block over ten types of YouTube ads, including pre-roll video ads

  • Craft your own Custom Rules to block media, cookies, and JavaScript

  • Use Tap to Block to effortlessly block any element on a page with a simple tap

  • See the difference ad blocking makes by visualising ad blocking speed, energy efficiency, and data savings for any site

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 300,000 users and download Magic Lasso Adblock via the Magic Lasso website today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Abandoned Blogs

From Lucy Pham, a collection of abandoned blogs — exactly what it says on the tin. This reminds me of a really wonderful piece of net art from probably fifteen years ago — maybe more — which was a series of quotes from people apologizing for not posting in a while, or something similar. There is an interesting stillness to both. Pham’s collection is a catalogue of specific web design trends, and each of these cryogenically preserved sites implies a story behind them.

The Paranoid Crusade Comes for Public Radio and Signal

Justin Ling:

[Philip] Zimmermann is a bit of a hero of mine. (I tried to hide my gushing while we spoke.) I’m particularly fond of him because of the broad, complicated, messy coalition he helped usher in to continue advocating for this open internet: Anarchists, libertarians, paranoid weirdos, nerds, activists, journalists, and a lot of people in-between. Despite lots of cross-purposes, this loose-knit coalition has stuck together. Even Elon Musk is — or, was — a Signal stan.

So imagine my surprise when, this week, I came across a thinly-written essay arguing that Signal had “a problem.” It had, the essay argued, been compromised by the American intelligence state. Not from the outside, but from the inside.

When all you have are documents and a red Sharpie, everything looks like it must be connected. All this bad-faith effort is able to do is suggest something could happen or might be happening — without a single piece of evidence — and that is enough to whip people into an anxious frenzy.

Update: More from Matthew Green.

Reddit’s Partner Policies, Applicable to ‘A.I.’ Licensees, Prohibit Using Deleted Posts

Reddit:

Our policy outlines the information partners can access via a public-content licensing agreement as well as the commitments we make to users about usage of this content. It takes into account feedback from a group of moderators we consulted when developing it:

  • We require our partners to uphold the privacy of redditors and their communities. This includes respecting users’ decisions to delete their content and any content we remove for violating our Content Policy.

This always sounds like a good policy, but how does it work in practice? Is it really possible to disentangle someone’s deleted Reddit post from training data? The models which have already been trained on Reddit comments will not be retrained every time posts or accounts get deleted.

There are, it seems, some good protections in these policies, and I do not want to dump on them entirely. I just do not think it is fair to imply to users that their deleted posts cannot or will not be used in artificial intelligence models.

How Apple Shot ‘Let Loose’

Stu Maschwitz:

After Apple released a behind-the-scenes video about the production of “Scary Fast,” the Internet did its internet thing and questioned the “Shot on iPhone” claim, as if “Shot on iPhone” inherently means “shot with zero other gear besides an iPhone.” These takes were dumb and bad and some even included assertions that Apple added additional lensing to the phones, which they did not.

But for “Let Loose,” they did.

Maschwitz says Panavision’s Directors Finder is not too far off from what Apple used — though not the same — and there are two ways of viewing this. One is believing that an iPhone in an otherwise professional production environment does not really make a movie “shot on an iPhone”. I disagree. I much prefer the other way of looking at this same rig, which is that it is incredible that this entire professional workflow is being funneled through a tiny sensor on basically the same telephone I have in my pocket right now.

Slop Is a Good Name for Unwanted A.I.-Generated Material

“Gabe” on Twitter [sic]:

it’s cool how every google search now starts with a wall of LLM slop that is completely useless and takes up half the screen

Via Simon Willison:

Not all promotional content is spam, and not all AI-generated content is slop. But if it’s mindlessly generated and thrust upon someone who didn’t ask for it, slop is the perfect term for it.

Because it is such a buzzy field and an indicator of a company’s technological savviness, it serves many businesses to loudly trumpet how well they are adapting. Masking human help and borderline fraud are useful for looking cleverer than one actually is. Pushing A.I. — especially by using that terminology — is also very important for them.1

Google’s answer was probably fully generated by a computer; I do not think Google is engaging in fraud here. But it is advantageous for Google to show it is really, really doing “A.I.”, even if this specific example provides little benefit over a typical snippet that surfaces the AirPods marketing page and some third-party reviews as the first results.

  1. From a purely language-based perspective, one will note Apple is no longer holding on to “machine learning” and has eagerly embraced “A.I.” in its products. ↥︎

The Hacks of Modern WordPress Theme Development

David Bushell:

I loathe what WordPress development has become. If you haven’t kept up with Gutenberg and full-site editing (FSE) you may be surprised at how radically different modern WordPress themes are — and not in a good way.

Modern WordPress theme development is a series of hacks which purportedly simplify parts of a theme by drastically overcomplicating other things. For years, it has not been possible to load fonts from providers like Adobe in WordPress’ block editor. This is not an edge case issue — everybody wants web fonts — nor is it a particularly unique example of how broken the WordPress theming environment is. If you want to use a web font loaded as a CSS file, you need to do it the old fashioned way.

I am sure there are good reasons to migrate away from the HTML-in-PHP standard of how WordPress themes used to work. But when I am building something for WordPress, I find myself fighting its newer structure constantly. Every time I do, the simplest and most predictable solution is to resort to twenty-year-old techniques — not only because I know them well, but because they are frequently the only things which work.

Steve Albini Has Died

What a loss, and what a life.

A trip through Steve Albini’s catalogue as a recording engineer — never “producer” — is a varied and lengthy excursion. Yes, he recorded Nirvana’s “In Utero” and it is very good, and he also worked with Cloud Nothings, and Sunn O))), and Low, and black midi. He recorded the Stooges, a joint Robert Plant and Jimmy Page album, and I think his interpretation of Fugazi’s “In on the Kill Taker” remains a highlight, but he also made albums with artists like — a personal favourite — Hot Little Rocket.

A new record from his own band, Shellac, is out next Friday. It will be bittersweet.

TikTok Sues U.S. Over Divest-or-Ban Law

Rebecca Kern, Politico:

TikTok and its parent company ByteDance sued Tuesday to challenge a law President Joe Biden signed to force the sale or ban of the video sharing app.

The petition filed with the U.S. Court of Appeals for the District of Columbia Circuit claims the law violates the First Amendment rights of its 170 million American users. It says the law shuts down the platform based on “speculative and analytically flawed concerns about data security and content manipulation.”

Nilay Patel on Threads:

This TikTok complaint is startlingly weak on first read — it basically handwaves through “a sale is impossible, let’s just assume it’s a ban” and begins the First Amendment argument from there

TikTok frames a jettisoning from ByteDance as something which would treat its United States operation as a distinct company but, surely, an alternative interpretation of the U.S.’ intent is for the entire worldwide TikTok enterprise to be distinct from ByteDance.

Either way, there is plenty of confirmation in the suit from TikTok that its draft agreement with CFIUS includes all sorts of extraordinary stipulations. Why the two parties could not be satisfied with signatures on that document is a great question. TikTok, for its part, alleges the problems are with the U.S. government, and it says it is preemptively implementing several parts of the draft.

Apple Introduces M4 SoC in New iPad Pro Models

Kudos to Mark Gurman — Apple really did introduce the M4 SoC in the new iPad Pro models. The M4 comes just six months after Apple launched the M3, which is currently used in half the Mac lineup. The other half — the Mac Mini, the Mac Studio, and the Mac Pro — all use processors in the M2 family. Two of those models were only just launched at WWDC last year.

None of this makes sense to anybody outside of Apple. Perhaps it is not supposed to: any given processor is perhaps good enough that you do not need to worry. But Apple itself set up this comparison when it decided to use the same processors in iPads and Macs, and name them to clearly show which ones are newer. I am sure there are legitimate and plentiful performance improvements in each generation of new processors but it is a dizzying set of choices from a buyer’s perspective. Maybe there will be updates to the three Mac desktops at WWDC this year.

Update: Jason Snell, Six Colors:

Why the M4 now? It mostly has to do with Apple shifting chip production at TSMC (the company that fabs Apple’s chips) from the first-generation 3nm process to a new, more efficient second-generation 3nm process. There’s a whole backstory about TSMC’s change in 3nm processes that’s not worth getting into here, but suffice it to say that the first-generation process is largely a dead end, and the company is moving to a new set of 3nm processes.

That is the kind of backstory I would be interested in. However, as I wrote above, this is the kind of explanation which is logical for Apple but produces a confusing result for the rest of us.

Judge Mulls Sanctions Over Google’s Routine Destruction of Chat History

Ashley Belanger, Ars Technica:

Near the end of the second day of closing arguments in the Google monopoly trial, US district judge Amit Mehta weighed whether sanctions were warranted over what the US Department of Justice described as Google’s “routine, regular, and normal destruction” of evidence.


According to the DOJ, Google destroyed potentially hundreds of thousands of chat sessions not just during their investigation but also during litigation. Google only stopped the practice after the DOJ discovered the policy. […]

It is entirely reasonable for individuals to conduct themselves privately and off-the-record, but an official corporate policy built around specific topics seems like a different matter. That Google kept it up even after the DOJ got involved is particularly shady.

Apple’s Shifting Supply Chain

Cheng Ting-Fang and Lauly Li, Nikkei:

Apple is deepening its ties with China even as it further expands production in Southeast Asia and India, highlighting the balancing act the iPhone maker is striking between politics and business.

Apple increased its China-headquartered suppliers and Chinese manufacturing sites in 2023 while using fewer suppliers from Taiwan, the U.S., Japan and South Korea, a Nikkei Asia analysis of Apple’s latest official list of suppliers shows.

Ben Jiang, South China Morning Post:

China’s central Henan province, home to the world’s largest iPhone manufacturing complex in its capital Zhengzhou, reported a 60 per cent year-on-year drop in smartphone exports in the first quarter, showing the impact of Apple’s moves to diversify production outside the mainland.

Malcolm Owen, AppleInsider:

In a China Observer report released on Monday, footage of a Foxconn industrial park in Nanning is shown to be deserted. Once employing 50,000 people, it’s now practically an empty shell. AppleInsider has learned that as Apple’s operations have moved elsewhere, manufacturing capacity was freed elsewhere in China, leading to this plant’s closure.

As Apple’s dependence on manufacturing in China increasingly becomes a liability, and many call for its extrication from the country, it can sometimes be easy to forget just how many people have jobs which depend on this supply chain. It is an unfathomably huge industry made up of hundreds of thousands of individuals. This is too often the story of outsourced labour: rich companies move around or exit entirely, leaving others holding the bag.

Microsoft Says It Is Prioritizing Security Again

Satya Nadella, in a memo to Microsoft employees since posted on the company’s blog:

Today, I want to talk about something critical to our company’s future: prioritizing security above all else.

Microsoft runs on trust, and our success depends on earning and maintaining it. We have a unique opportunity and responsibility to build the most secure and trusted platform that the world innovates upon.

Charlie Bell, Microsoft’s executive vice president of security, expanded upon the company’s specific goals and priorities, and explained a particular incentive:

We will mobilize the expanded [Secure Future Initiative] pillars and goals across Microsoft and this will be a dimension in our hiring decisions. In addition, we will instill accountability by basing part of the compensation of the company’s Senior Leadership Team on our progress in meeting our security plans and milestones.

The obvious point of comparison for these memos is Bill Gates’ ‘Trustworthy Computing’ memo from 2002:

Trustworthiness is a much broader concept than security, and winning our customers’ trust involves more than just fixing bugs and achieving “five-nines” availability. It’s a fundamental challenge that spans the entire computing ecosystem, from individual chips all the way to global Internet services. It’s about smart software, services and industry-wide cooperation.

There is a sort of MBA-type wordiness in Nadella’s memo that is not present in the more direct Gates memo despite the latter being considerably longer, but both have similar goals. Microsoft’s poor track record, especially recently, is corroding the trust of its enterprise and government customers but — and this is the catch — where are they going to go?

Sponsor: Listen Later — Turn Articles Into Podcasts

Thanks to Listen Later for sponsoring this week’s posts at Pixel Envy.

Some sponsors provide their own Friday posts, but I was asked to write a little something of my own for Listen Later and I am happy to do so. I only accept sponsorships for products I actually like and would use. Listen Later is just such a service.


I use Listen Later to read long articles to me while I work, when I cook, and as I clean up after dinner. I have also used it to translate news stories. Maybe my favourite use is as a drafting tool for when I am writing a longer article and need to hear it read back to me. The text-to-speech quality is excellent, and having it show up in my podcast app alongside episodes from the shows I listen to makes it easy to keep track of new articles.

You know that article you swear you were going to read but still have not got around to? Yeah, the one in that ancient browser tab. Try hearing it with Listen Later’s free trial instead.

WhatsApp Pushes Back on Encryption-Hostile Policies in India

Raghav Mendiratta, of Stanford University’s Center for Internet and Society, in March 2021:

Under Rule 4(2), it is mandatory for a significant social media intermediary providing messaging services to identify the first originator of a message if a competent court or executive authority orders that it is necessary to do so for the purposes of investigation and prosecution of certain offences punishable with imprisonment for a term not less than five years. Technical experts say that compliance with this requirement is not possible unless end-to-end encryption on messaging services such as WhatsApp is broken.

WhatsApp sued over these rules the same month and then, last week, threatened to leave India if it is required to comply with policies that undermine encryption.

Indu Bhan, Economic Times:

WhatsApp LLC on Thursday told the Delhi High Court that the popular messaging platform will end if it is made to break encryption of messages.

“As a platform, we are saying, if we are told to break encryption, then WhatsApp goes,” counsel Tejas Karia, appearing for WhatsApp, told a Division Bench comprising Acting Chief Justice Manmohan and Justice Manmeet Pritam Singh Arora.

This is a familiar threat from WhatsApp, but it feels particularly weighty in India owing to its extraordinary popularity in the country. I have to wonder if WhatsApp is bluffing. Would it really abandon the hundreds of millions of users in its most popular geography?

Apple Modifies Terms of Core Technology Fee With More Exceptions

Apple:

[…] Today, we’re introducing two additional conditions in which the CTF is not required:

  • First, no CTF is required if a developer has no revenue whatsoever. […]

  • Second, small developers (less than €10 million in global annual business revenue*) that adopt the alternative business terms receive a 3-year free on-ramp to the CTF to help them create innovative apps and rapidly grow their business. […]

Two fundamental issues remain with the Core Technology Fee — namely, that developers still need to pay Apple even if their app is distributed exclusively outside the App Store and in-app payments are handled by a third-party processor, and the fee is an unknown and surprising future charge. One marvels at how the Mac could remain such a successful developer platform for so long without the support of a per-install fee.

But I was wrong to assume Apple would not budge: this is a meaningful relaxation of terms for entirely free apps, like the young developer example raised by Riley Testut during the March DMA compliance workshop.

Spotify Is Recommending ‘A.I.’-Generated Music

Zach Ocean:

Encountered AI music in the wild today

Motown-style tracks straight from Suno/Udio with … interesting … titles and lyrics

Recommended by Spotify via Discover Weekly

These “interesting” songs include instant classics like “My Arms Are Just Fuckin’ Stuck Like This” and “It’s Time To Take a Shit on the Company’s Dime”. Classic Happy Bunny-style humour.

Ryan Broderick explains in Garbage Day:

The story behind the page is interesting. Obscurest Vinyl started as a Facebook page that would photoshop fake album covers for classic records that didn’t exist. The page recently shifted into posting AI songs to go with the fake album covers. As one commenter noted, you can tell the songs are AI because most of them feature bass and drum parts that don’t repeat in any discernible pattern. The account also regularly fights with users on Instagram who gripe about it using AI.

Truly embarrassing for Spotify that it is recommending stuff like this, and not for the first time.

Tech Journalism Has the Same Faults as Other Beats

Timothy B. Lee, writing in Asterisk:

Over the last decade, Silicon Valley elites have grown increasingly frustrated with media coverage of their industry. And they aren’t wrong that coverage has grown increasingly negative. But I think they’re wrong to assume this reflects a hostility toward Silicon Valley in particular.

A more banal explanation is that companies like Google, Facebook, and Uber aren’t startups anymore. It no longer makes sense to publish positive profiles introducing readers to these companies. So reporters have switched to treating Silicon Valley giants like other big companies, which means mostly writing about them when they do something wrong.

Lee is right that tech journalism often consists of thin stories built off press releases and simplistic narratives — but so, too, does most general audience journalism. While there is the occasional nuanced story with correct weighting given to affirming and dissenting views, it is far more common to see misapplied view from nowhere journalism. But, critically, this is true of all beats. Erwin Knoll once said “everything you read in the newspapers is absolutely true except for the rare story of which you happen to have firsthand knowledge”, and that includes knowledge of media itself. Given how pressured journalists are, as Lee is careful to note, it is not difficult to see why stories across a range of topics become either pure boosterism or damp scandals.

The Curious Case of Apple’s Third-Party SDK List for Privacy Manifests

Apple:

Starting May 1, 2024, new or updated apps that have a newly added third-party SDK that’s on the list of commonly used third-party SDKs will need all of the following to be submitted in App Store Connect:

  1. Required reasons for each listed API

  2. Privacy manifests

  3. Valid signatures when the SDK is added as a binary dependency

Jesse Squires:

Historically, Apple has rarely, if ever, explicitly acknowledged any third-party SDK or library. It took years for them to even acknowledge community tools like CocoaPods in Xcode’s release notes. Thus, it is interesting to see which SDKs they have deemed important or concerning enough to explicitly mandate a privacy manifest. And, in typical Apple fashion, I’m pretty sure SDKs authors were not notified about this in advance. We all learned which SDKs need privacy manifests at the same time — when the list was published.

When this requirement was announced at WWDC last year, I assumed this list would be dominated by SDKs for analytics, authentication, logging, advertising, and other potentially sensitive use cases. After all, it came on the heels of reporting by the Markup and the Wall Street Journal about SDKs invisible to end users and implicated in mass surveillance, with one such software package — X-Mode — banned by Apple and Google.

This list of SDKs contains seemingly few such packages. As of writing, there are 87 SDKs on Apple’s list and fully one-quarter of them — by my count — are Flutter packages intended to simplify cross-platform development. I can see how there could be risks to file and photo pickers, for example, but this list sure looks like it is composed of popular SDKs, not necessarily ones of privacy concern. Kits from Facebook and Snap are on the list, but TikTok’s is nowhere to be found. Several Google SDKs are on the list, including Firebase analytics, but Google’s standalone ads framework is not; Unity is on the list, but not Unity’s ad kit.

As Squires writes, any documentation about why these SDKs are on Apple’s list would be helpful. I would even take a sentence fragment.

From Space to Story in Data Journalism

Robert Simmon, Nightingale:

The launch of Ikonos was one of a handful of developments that allowed newsrooms to expand from reporting on rocket launches and satellite hardware, to using remote sensing data as an essential tool to help tell stories. A wide variety of satellite data are now used to provide context to the news, to document events, and as a tool for investigation.

It still blows my mind that I — a nobody — can open Google Earth any time I want and see aerial photography with a level of detail that would have been classified not too long ago. Years of imagery are available, too, so if I want to see how an area has changed, it is just a few button clicks away. I appreciated Simmon’s look at how capabilities like these have allowed journalists at places like Bellingcat and Buzzfeed News to document events in ways that would not have been possible before widespread consumer satellite photography.

Sponsor: Listen Later — Turn Articles Into Podcasts: Send a Link, Receive Human-Like Narration to Your Podcast App

Welcome to a world where your reading list becomes your listening playlist. We are thrilled to introduce you to Listen Later — an innovative service that turns articles into podcasts, making your favourite reads accessible in your favourite podcast app.


Have you ever stumbled upon an article that piqued your interest but didn’t have the time to delve into it? Whether you’re commuting, exercising, or relaxing, Listen Later simplifies the process. Just email the article’s URL, and presto! Listen Later takes over, converting it into an engaging podcast episode. The narration is so lifelike, you’ll find yourself immersed in the listening experience.

But the innovation doesn’t end with articles. Listen Later enhances your entire email experience. Forward any email, and it will transform its content and attachments into a podcast episode just for you — whether it’s work reports, newsletters, or those lengthy reads you’ve postponed.

Worried about language barriers? Listen Later has you covered, offering multilingual translation and narration services. Simply select your preferred language, and immerse yourself in a vast array of content, all available to your ears.

Join us in embracing convenience and versatility. Listen Later brings your articles, emails, and documents to life in audio format. Head over to Listen Later and start your free trial today. Let’s redefine the way we consume information, one podcast at a time.

What Is Noise?

Rarely do I link to something just because I want you to go read it, but this piece by Alex Ross in the New Yorker is just such an occasion. It is a wonderful piece about how we sometimes embrace noise and sometimes reject it, and what “noise” even means. (Via Matt.)

FCC Votes to Restore Net Neutrality Rules in U.S.

Tom Wheeler, former FCC chairman, writing for the Brookings Institution in October, following a vote to begin the process of reclassifying broadband as a “Title II” telecommunications service, regarding efforts to paint net neutrality regulations as no big deal:

It is the conduct of the ISPs that is in question here. Because telephone companies were Title II common carriers, their behavior had to be just and reasonable. Those companies prospered under such responsibilities; as they have morphed into wired and wireless ISPs, there is no reasonable argument why they, as well as their new competitors from the cable companies, should not continue to have public interest obligations.

Jon Brodkin, Ars Technica:

The Federal Communications Commission voted 3–2 to impose net neutrality rules today, restoring the common-carrier regulatory framework enforced during the Obama era and then abandoned while Trump was president.


ISPs insist the rules aren’t necessary because they already follow net neutrality principles yet also claim the rules are so burdensome that they will be prevented from investing more in their networks. Lobby group USTelecom today said the “relentless regulation” comes at the cost of “failing to achieve Internet for all.”

Karl Bode, Techdirt:

While broadband providers have already started whining about the rules and threatened to sue, privately (just like last time) broadband industry executives doubt the rules will have any meaningful impact on their businesses. The rules aren’t onerous, won’t likely be enforced with any consistency, and big companies like AT&T and Comcast have never, ever really had to worry about serious FCC penalties for any of their various predatory, anti-competitive, or illegal behaviors.

Bode has, for years, covered the effort to paint the reversal of net neutrality rules as inconsequential. Contrary to popular belief, the reclassification to a Title I service produced plenty of ill effects. Part of the problem was in mainstream coverage of what the rules meant and, similarly, in what their 2018 undoing would entail. Given the U.S.’ pivotal role in internet products worldwide, this protective measure to reduce the power of ISPs is a welcome one.

Lots of People Were Locked Out of Their Apple IDs

Michael Tsai:

I had another instance of my Apple ID mysteriously being locked. First, my iPhone wanted me to enter the password again, which I thought was the “normal” thing it has done every few months, almost since I got it. But after doing so it said that my account was locked.

Chance Miller, 9to5Mac:

There appears to be an increasingly widespread Apple ID outage of some sort impacting users tonight. A number of people on social media say that they were logged out of their Apple ID across multiple devices on Friday evening and forced to reset their password before logging back in…

There is (unsurprisingly) nothing relevant on Apple’s system status page, but the developer version shows two instances of “maintenance” affecting Apple accounts. It is unclear to me if it is affecting only accounts associated in some way with a developer Apple ID. Neither of my Apple IDs — both of which are connected to developer tools — were affected by this problem.

This problem is about eighteen hours old. It would be helpful if Apple said literally anything to acknowledge the issue.

I Hope This Email Finds You

Waldo Jaquith:

I made a new Mastodon bot, called “I Hope This Email Finds You.” Twice a day it proposes a novel way to conclude that sentence that opens so many emails. (It uses phrases from Google Books that include the phrase “finds you.”) I’ve been having fun reading these, so I turned it into a bot because you, too, might have fun reading them.

This bot is excellent. At times sweet, at times absurd.

Update: The link has been changed to reflect a server move. Links to old posts remain at the old server.
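
For fun, a minimal sketch of how a bot like this might compose its posts, assuming a small hand-written phrase list in place of the Google Books corpus the real bot draws from:

```python
import random

# Stand-in completions; Jaquith's bot pulls phrases containing
# "finds you" from Google Books, which these merely imitate.
COMPLETIONS = [
    "well",
    "in good health and spirits",
    "before it is too late",
    "at a loss for words",
    "wherever you may be hiding",
]

def compose_post() -> str:
    """Return one novel 'I hope this email finds you ...' sentence."""
    return f"I hope this email finds you {random.choice(COMPLETIONS)}."
```

Posting twice a day would then just be a matter of calling `compose_post()` on a schedule and sending the result to a Mastodon client.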

Sponsor: Magic Lasso Adblock: Incredibly Private and Secure Safari Web Browsing

Online privacy isn’t just something you should be hoping for — it’s something you should expect. You should ensure your browsing history stays private and is not harvested by ad networks.

By blocking ad trackers, Magic Lasso Adblock stops you being followed by ads around the web.

Screenshot of Magic Lasso Adblock

It’s a native Safari content blocker for your iPhone, iPad, and Mac that’s been designed from the ground up to protect your privacy.

Rely on Magic Lasso Adblock to:

  • Remove ad trackers, annoyances and background crypto-mining scripts

  • Browse common websites 2.0× faster

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

So, join over 300,000 users and download Magic Lasso Adblock today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

The Onion Is the Latest Publication to Be Sold by G/O Media

Mark Stenberg, reporting for Adweek in January:

Digital media company G/O Media is shopping around its portfolio of editorial assets in hopes of securing buyers for individual titles, part of a broader effort to divest the properties ahead of another challenging year for the media industry, according to four people familiar with the efforts.


“Your reporting is largely incorrect. As with many multi-title media properties, we are always entertaining opportunities,” said a representative for G/O Media. “We have sold sites and purchased sites. Having said that, we do not comment on transaction rumors and speculation.”

It was “largely incorrect”, according to G/O Media, to suggest the company was thinking about selling off its portfolio of sites just two months after selling two of its sites to Paste. CEO Jim Spanfeller even gave an “exclusive” interview to Sara Fischer, of Axios, to dispel the rumours. Weeks later, the company sold and purged the shell of Deadspin, and then it sold the A.V. Club and the Takeout.

Katie Robertson, New York Times:

G/O Media announced on Thursday that it had sold The Onion, a satirical news site, to a group of digital media veterans.


The real-life Global Tetrahedron is owned by Jeff Lawson, a co-founder and former chief executive of the technology communications company Twilio. The chief executive is Ben Collins, who was a senior reporter at NBC News until recently.

G/O Media still owns six publications — for now. For its part, the Onion says you should feed it one dollar.

‘Microsoft Must Stop Selling Security as a Premium Offering’

Mary Jo Foley:

In a perfect world, Microsoft would take security seriously again. It would be transparent about breaches. Its execs would stop gloating about increasing security service revenue at a time when Microsoft can’t secure its own employees, let alone customers, against incidents that are happening with increasing frequency. And Microsoft would include must-have security capabilities as part of existing subscriptions instead of selling them as add-ons.

Microsoft sure is lucky to be so deeply enmeshed in the operations of businesses and governments that it is able to sell security for a fee because its all-in-one offering has basically no competition.

True Believing Tesla Retail Investors

Hardika Singh, Wall Street Journal:

Bartash isn’t alone. Scores of individual investors have piled into Tesla shares in recent years, lured by the company’s technology, visionary chief executive and mammoth stock market gains. Through the end of last year, the stock was one of the top 10 wealth-creating companies for investors over the past decade, according to Morningstar, rising from about $10, on a split-adjusted basis, to $250.

But the shares have since hit a rough patch, down almost 40% in 2024. Tesla is the second-worst performer in the S&P 500 and off more than 60% from its peak in November 2021. The company’s market value fell below $500 billion last week for the first time in nearly a year, after climbing as high as $1.235 trillion.

It is hard to blame these people for sticking with Tesla despite its actual performance. Tesla’s stock is in the tank for the year, and Singh’s story was published Monday, one day before a bleak earnings report. Income was less than half what it was a year prior, revenue and margin fell, and it sold many fewer vehicles than it made.

Even so, Tesla’s stock jumped 12% because its CEO said “A.I.”, and he recently promised a robotaxi service once again and a less expensive model. Investors apparently believe him.

How iOS Geographically Restricts New Features

Adam Demasi:

In iOS 17.4, Apple introduced a new system called eligibilityd. This works with countryd (which you might have heard about when it first appeared in iOS 16.2) and the Apple ID system to decide where you physically are. The idea is that multiple sources need to agree on where you are, before giving you access to features such as those mandated by the Digital Markets Act.

I cannot remember a time when Apple so aggressively restricted system features by geography. Most often, options show up if you change the device region in Settings; that is how Apple News can be accessed outside regions where it is officially available. But someone accessing News is only positive for Apple. There are other things locked by geography, like Apple Cash, which only works with U.S. banking information, and special obligations to China which are active for devices sold only there. Those are legal obligations which are either deeply tied to the systems of a particular country — in the case of the former — or something people likely would not want.

The DMA features, on the other hand, are probably something a lot of users would like access to. Perhaps not a majority of iPhone owners, but a lot of them. Engineers at Apple have worked very hard to make a lot of features, and also to prevent them from being used. Clearly, these are features Apple did not want to make at all, but it is notable how much effort it is making to lock them down.