Month: May 2025

Manuel Moreale:

Now, while I’m here let me tackle a somewhat related issue I’ve been thinking about lately and that is the idea that AI is going to become the default layer between users and the web. We all yelled and screamed because the web has too many gatekeepers, we all lamented Google search results going to shit, and we all celebrated when new search engines were coming up. Why would I be happy trading a search result page filled with links — even if ranked in a flawed way — for a block of text that gives me an opinionated answer and maybe some links?

A fantastic question. One could easily ask the same about why we would trade the vast surveillance of online advertising for the universal surveillance gestured toward by Perplexity, too. It is all familiar, but more — in ways we might appreciate, and ways we loathe.

Bruce Schneier and Arun Vishwanath, Dark Reading:

There’s a new cybersecurity awareness campaign: Take9. The idea is that people — you, me, everyone — should just pause for nine seconds and think more about the link they are planning to click on, the file they are planning to download, or whatever it is they are planning to share.

There’s a website — of course — and a video, well-produced and scary. But the campaign won’t do much to improve cybersecurity. The advice isn’t reasonable, it won’t make either individuals or nations appreciably safer, and it deflects blame from the real causes of our cyberspace insecurities.

We will always struggle to accurately identify malicious emails because evading detection is their whole point. When criminals are able to scam professionals in this field — people like a LastPass system engineer, Jim Browning, and Troy Hunt — it is clear we cannot expect an average user to be a so-called human firewall.

Drew DeVault:

Now it’s LLMs. If you think these crawlers respect robots.txt then you are several assumptions of good faith removed from reality. These bots crawl everything they can find, robots.txt be damned, including expensive endpoints like git blame, every page of every git log, and every commit in every repo, and they do so using random User-Agents that overlap with end-users and come from tens of thousands of IP addresses – mostly residential, in unrelated subnets, each one making no more than one HTTP request over any time period we tried to measure – actively and maliciously adapting and blending in with end-user traffic and avoiding attempts to characterize their behavior or block their traffic.

As curious and fascinating as I find many applications of generative artificial intelligence, I find it difficult to square with the flagrantly unethical way it has been trained. Server admins have to endure and pay for massive amounts of traffic from well-funded corporations, without compensation, all of which treat robots.txt as something to be worked around. Add to that the kind of copyright infringement that would cost users thousands of dollars per file, and it is clear the whole system is morally bankrupt.
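For readers unfamiliar with the mechanism being flouted here: robots.txt is a voluntary, plain-text convention, not an access control. A hypothetical file like the following politely asks crawlers to stay away from expensive endpoints, and the scrapers DeVault describes simply ignore it (the bot name and paths below are illustrative, not taken from his site):

```text
# robots.txt is a request, not an enforcement mechanism.
# Well-behaved crawlers honor these rules; the bots described
# above crawl everything regardless.
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /blame/
Disallow: /log/
Crawl-delay: 10
```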

Do not get me wrong — existing intellectual property law is in desperate need of reform. Big, powerful corporations have screwed us all over by extending copyright terms. In Canada, the number of works in the public domain will be stagnant for the next eighteen years after we signed onto the Canada–United States–Mexico Agreement. But what artificial intelligence training is proposing is a worst-of-both-worlds situation, in which some big businesses get to retain a tight grip on artists’ works, and others get to assume anything remotely public is theirs to seize.

Federico Viticci, MacStories:

For the past two weeks, I’ve been able to use Sky, the new app from the people behind Shortcuts who left Apple two years ago. As soon as I saw a demo, I felt the same way I did about Editorial, Workflow, and Shortcuts: I knew Sky was going to fundamentally change how I think about my macOS workflow and the role of automation in my everyday tasks.

Only this time, because of AI and LLMs, Sky is more intuitive than all those apps and requires a different approach, as I will explain in this exclusive preview story ahead of a full review of the app later this year.

Matthew Cassinelli has also been using an early version of Sky:

Sky bridges the gap between old-school scripting, modern automation, and new-age LLM technology, built with a deep love for working on the Mac as a platform.

This feels like the so-far-unfulfilled promise of Apple Intelligence — but more. The ways I want to automate iOS are limited, but the kinds of things I want help with on my Mac are boundless. Viticci shares the example of automatically sorting a disorganized folder in Finder, and that is absolutely something I want to be able to do more easily than I currently can. Yes, I could cobble together something with AppleScript or an Automator workflow, but it would be so much nicer if I could just tell my computer to do something in the most natural language I understand. This is fascinating.
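For what it is worth, the folder-sorting chore is already scriptable without Sky, just far less pleasantly. Here is a minimal Python sketch (my own illustration, not Sky's output or Viticci's workflow) that files everything in a folder into subfolders named for each file's extension:

```python
from pathlib import Path
import shutil

def sort_folder(folder: str) -> dict[str, list[str]]:
    """Move each file in `folder` into a subfolder named for its extension.

    Returns a mapping of category name to the file names moved there.
    """
    root = Path(folder)
    moved: dict[str, list[str]] = {}
    # Snapshot the directory listing first, since we create
    # subfolders inside it as we go.
    for item in list(root.iterdir()):
        if not item.is_file():
            continue
        # "Report.PDF" and "report.pdf" both land in "pdf";
        # files with no extension get their own bucket.
        category = item.suffix.lstrip(".").lower() or "no-extension"
        dest = root / category
        dest.mkdir(exist_ok=True)
        shutil.move(str(item), str(dest / item.name))
        moved.setdefault(category, []).append(item.name)
    return moved
```

Even this simple version requires decisions about edge cases, like extensionless files and case-insensitive extensions. Being able to describe the intent in one natural-language sentence is the appeal.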

Mike Scarcella, Reuters:

Alphabet’s Google has persuaded a federal judge in California to reject a lawsuit from video platform Rumble accusing the technology giant of illegally monopolizing the online video-sharing market.

In a ruling on Wednesday, U.S. District Judge Haywood Gilliam Jr said Rumble’s 2021 lawsuit seeking more than $2 billion in damages was untimely filed outside the four-year statute of limitations for antitrust claims.

Rumble is dishonest and irritating, but I thought its case, in which it argued Google engages in self-preferencing, could be interesting. Google does seem to rank YouTube videos more highly than those from other sources. This can be explained by YouTube’s overwhelming popularity — it consistently ranks in the top ten web services according to Cloudflare — yet I can understand anyone’s discomfort in taking Google’s word for it, since the company has misrepresented its ranking criteria before.

This is an unsatisfying outcome, but it seems Rumble has another suit it is still litigating.

John Herrman, New York magazine:

But I also don’t want to assume Google knows exactly how this stuff will play out for Google, much less what it will actually mean for millions of websites, and their visitors, if Google stops sending as many people beyond its results pages. Google’s push into productizing generative AI is substantially fear-driven, faith-based, and informed by the actions of competitors that are far less invested in and dependent on the vast collection of behaviors — websites full of content authentic and inauthentic, volunteer and commercial, social and antisocial, archival and up-to-date — that make up what’s left of the web and have far less to lose. […]

Very nearly since it launched, Google has attempted to answer users’ questions as immediately as possible. It has had the “I’m Feeling Lucky” button since it was still hosted on a stanford.edu subdomain, and it has steadily changed the results page to respond to queries more directly. But this seems entirely different — a way to benefit from Google’s decades-long ingestion of the web while giving almost nothing back. Or, perhaps, giving back something ultimately worse: invented answers users cannot trust, and will struggle to check because sources are intermingled and buried.

Ciro Santilli:

This article is about covert agent communication channel websites used by the CIA in many countries from the late 2000s until the early 2010s, when they were uncovered by counter intelligence of the targeted countries circa 2010-2013.

This is a pretty clever scheme in theory, but it seems to have been pretty sloppy in practice. Many of the sites share enough elements that an enterprising person could link the seemingly unrelated sites — even, as it turns out, years later and after they have been pulled offline. That sloppiness apparently resulted in the deaths of dozens of people, according to Foreign Policy.

Apple issued a news release today touting the safety of the App Store, dutifully covered without context by outlets like 9to5Mac, AppleInsider, and MacRumors. This has become an annual tradition in trying to convince people — specifically, developers and regulators — of the wisdom of allowing native software to be distributed for iOS only through the App Store. Apple published similar stats in 2021, 2022, 2023, and 2024, reflecting the company’s efforts in each preceding year. Each contains similar figures; for example:

  • In its new report, Apple says it “terminated more than 146,000 developer accounts over fraud concerns” in 2024, an increase from 118,000 in 2023, which itself was a decrease from 428,000 in 2022. Apple said the decrease between 2022 and 2023 was “thanks to continued improvements to prevent the creation of potentially fraudulent accounts in the first place”. Does the increase in 2024 reflect poorer initial anti-fraud controls, or an increase in fraud attempts? Is it possible to know either way?

  • Apple says it deactivated “nearly 129 million customer accounts” in 2024, a significant decrease from deactivating 374 million the year prior. However, it blocked 711 million account creations in 2024, which is several times greater than the 153 million blocked in the year before. Compare to 2022, when it disabled 282 million accounts and prevented the creation of 198 million potentially fraudulent accounts. In 2021, the same numbers were 170 million and 118 million; in 2020, 244 million and 424 million. These numbers are all over the place.

  • A new statistic Apple is publishing this year is “illicit app distribution”. It says that, in the past month, it “stopped nearly 4.6 million attempts to install or launch apps distributed illicitly outside the App Store or approved third-party marketplaces”. These are not necessarily fraudulent, pirated, or otherwise untoward apps. This statistic is basically a reflection of the control maintained by Apple over iOS regardless of user intentions.

There are plenty of numbers just like these in Apple’s press release. They all look impressive in large part because just about any statistic would be at Apple’s scale. Apple is also undeniably using the App Store to act as a fraud reduction filter, with mixed results. I do not expect a 100% success rate, but I still do not know how much can be gleaned from context-free numbers.

Nicholas Chrastil, the Guardian:

State officials have praised Butler Snow for its experience in defending prison cases – and specifically William Lunsford, head of the constitutional and civil rights litigation practice group at the firm. But now the firm is facing sanctions by the federal judge overseeing Johnson’s case after an attorney at the firm, working with Lunsford, cited cases generated by artificial intelligence – which turned out not to exist.

It is one of a growing number of instances in which attorneys around the country have faced consequences for including false, AI-generated information in official legal filings. A database attempting to track the prevalence of the cases has identified 106 instances around the globe in which courts have found “AI hallucinations” in court documents.

The database is now up to 120 cases, including some fairly high-profile ones like that against Timothy Burke.

Here is a little behind-the-scenes from this weekend’s piece about “nimble fingers” and Apple’s supply chain. The claim, as framed by Tripp Mickle, in the New York Times, is that “[y]oung Chinese women have small fingers, and that has made them a valuable contributor to iPhone production because they are more nimble at installing screws and other miniature parts”. This sounded suspicious to me because I thought about it for five seconds. There are other countries where small objects are carefully assembled by hand, for example, and attributing a characteristic like “small fingers” to hundreds of millions of “young Chinese women” seems reductive, to put it mildly. But this assumption had to come from somewhere, especially since Patrick McGee also mentioned it.

So I used both DuckDuckGo and Google to search for relevant keywords within a date range of the last fifteen years and excluding the past month or so. I could not quickly find anything of relevance; both thought I was looking for smartphones for use with small hands. So I thought this might be a good time to try ChatGPT. It immediately returned a quote from a 2014 report from an international labour organization, but did not tell me the title of the report or give me a link. I asked it for the title. ChatGPT responded it was actually a 2012 report that mentioned “nimble fingers” of young women being valuable, and gave me the title. But when I found copies of the report, there was no such quote or anything remotely relevant. I did, however, get the phrase “nimble fingers”, which sent me down the correct search path to finding articles documenting this longstanding prejudice.

Whether because of time crunch or laziness, it baffles me how law firms charging as much as they do have repeatedly failed to verify the claims generated by artificial intelligence tools.

Tripp Mickle, of the New York Times, wrote another one of those articles exploring the feasibility of iPhone manufacturing in the United States. There is basically nothing new here; the only reason it seems to have been published is because the U.S. president farted out yet another tariff idea, this time one targeted specifically at the iPhone at a rate of 25%.1

Anyway, there is one thing in this article — bizarrely arranged in a question-and-answer format — that is notable:

What does China offer that the United States doesn’t?

Small hands, a massive, seasonal work force and millions of engineers.

Young Chinese women have small fingers, and that has made them a valuable contributor to iPhone production because they are more nimble at installing screws and other miniature parts in the small device, supply chain experts said. In a recent analysis the company did to explore the feasibility of moving production to the United States, the company determined that it couldn’t find people with those skills in the United States, said two people familiar with the analysis who spoke on the condition of anonymity.

I will get to the racial component of this in a moment, but this answer has no internal logic. There are two sentences in that larger paragraph. The second posits that people in the U.S. do not have the “skills” needed to carefully assemble iPhones, but the “skill” defined in the first sentence is having small fingers — which is not a skill. I need someone from the Times to please explain to me how someone can be trained to shrink their fingers.

Anyway, this is racist trash. In response to a question from Julia Carrie Wong of the Guardian, Times communications director Charlie Stadtlander disputed the story was furthering “racial or genetic generalizations”, and linked to a podcast segment clipped by Mickle. In it, Patrick McGee, author of “Apple in China”, says:

The tasks that are often being done to make iPhones require little fingers. So the fact that it’s young Chinese women with little fingers — that actually matters. Like, Apple engineers will talk about this.

The podcast in question is, unsurprisingly, Bari Weiss’; McGee did not mention any of this when he appeared on, for example, the Daily Show.

Maybe some Apple engineers actually believe this, and maybe some supply chain experts do, too. But it is a longstanding sexist stereotype. (Thanks to Kat for the Feminist Review link.) It is ridiculous to see this published in a paper of record as though it is just one fact among many, instead of something which ought to be debunked.

The Times has previously reported why iPhones cannot really be made in the U.S. in any significant quantity. It has nothing to do with finger size, and everything to do with a supply chain the company has helped build for decades, as McGee talks about extensively in that Daily Show interview and, presumably, writes about in his book. (I do not yet have a copy.) Wages play a role, but it is the sheer concentration of manufacturing capability that explains why iPhones are made in China, and why it has been so difficult for Apple to extricate itself from the country.


  1. About which the funniest comment comes from Anuj Ahooja on Threads. ↥︎

Uber CEO Dara Khosrowshahi was on the Verge’s “Decoder” podcast with Nilay Patel, and was asked about Route Share:

I read this press release announcing Route Share, and I had this very mid-2010s reaction, which was what if Uber just invented a bus. Did you just invent a bus?

I think to some extent it’s inspired by the bus. If you step back a little bit, a part of us looking to expand and grow is about making Uber more affordable to more people. I think one of the things that makes tech companies different from most companies out there is that our goal is to lower prices. If we lower the price, then we can extend the audience.

There is more to Khosrowshahi’s answer, but I am going to interject with three objections. First, the idea that Route Share is “inspired” “to some extent” by a bus is patently ridiculous — it is a vehicle with multiple passengers who embark and disembark at fixed points along a fixed route. It is a bus. A bad one, but a bus.

Second, tech companies are not the only kinds of companies that want to lower prices. Basically every consumer business routinely markets itself on lowering prices and saving customers money. This is the entire concept of big box stores like Costco and Walmart. Whether they actually save people money is a different question.

Which brings me to my third objection: Uber has been raising prices, not reducing them. In the past year, according to a Gridwise report, Uber’s fares in the United States increased by 7.2%, even though driver pay fell 3.4%. Uber has been steadily increasing its average fare since 2018, probably to lay the groundwork for its 2019 initial public offering.

Patel does not raise any similar objections.

Anyway, back to Khosrowshahi:

There are two ways of lowering price as it relates to Route Share. One is you get more than one person to share a car because cars cost money, drivers’ time costs money, etc., or you reduce the size or price of the vehicle. And we’re doing that actively. For example, with two-wheelers and three-wheelers in a lot of countries. We’ve been going after this shared concept, which is a bus, for many, many years. We started with UberX Share, for example, which is on-demand sharing.

But this concept takes it to the next level. If you schedule and create consistency among routes, then I think we can up the matching quotient, so to speak, and then essentially pass the savings on to the consumer. So, call it a next-gen bus, but the goal is just to reduce prices to the consumer and then help with congestion and the environment. That’s all good as well.

Given the premise that “you get more than one person to share a car because cars cost money”, you might think Khosrowshahi would discuss the advantageous economics of increasing vehicle capacity. Instead, he cleverly pivots to smaller vehicles, even though Khosrowshahi and Patel discussed earlier how often an Uber ride arrives as a Toyota Highlander — a “mid-size” but still large SUV. That is an obviously inefficient way of moving one driver and one passenger around a city.

We just need better public transit. We should have an adequate supply of taxis, yes, but it is vastly better for everyone if we improve our existing infrastructure of trains and buses. Part of the magic of living in a city is the viability of shared public services like these.

Greg Storey begins this piece with a well-known quote from Plato’s “Phaedrus”, in which the invention of writing is decried as “an elixir not of memory, but of reminding”. Storey compares this to a criticism of large language models, and writes:

Even though Plato thought writing might kill memory, he still wrote it down.

But this was not Plato’s thought — it was the opinion of Socrates, expressed through the mythical king Thamus. Socrates dismissed the written word for a reason he believed worthwhile: that memory alone is a sufficient marker of intelligence and wisdom.

If anything, I think Storey’s error in attribution actually reinforces the lesson we can draw from it. If we relied on the pessimism of Socrates, we might not know what he said today; after all, human memory is faulty. Because Plato bothered to write it down, we can learn from it. But the ability to interpret it remains ours.

What struck me most about this article, though, is this part:

The real threat to creativity isn’t a language model. It’s a workplace that rewards speed over depth, scale over care, automation over meaning. If we’re going to talk about what robs people of agency, let’s start there. […]

Thanks to new technologies — from writing to large language models, from bicycles to jets — we are able to dramatically increase the volume of work done in our waking hours, and that, in turn, increases the pressure to produce even more. The economic term for this is “productivity”, which I have always disliked. It distills everything down to a ratio of output value to input effort. In its rawest terms, it rewards the simplistic view of what a workplace ought to be, as Storey expresses well.

Ryan Francis Bradley, New York Times Magazine:

Only — what if we did know exactly how he did the thing, and why? Before the previous installment of the franchise, “Dead Reckoning,” Paramount released a nine-minute featurette titled “The Biggest Stunt in Cinema History.” It was a behind-the-scenes look at that midair-motorbike moment, tracking how Cruise and his crew pulled it off. We saw a huge ramp running off the edge of a Norwegian fjord. We heard about Cruise doing endless motocross jumps as preparation (13,000 of them, the featurette claims) and skydiving repeatedly (more than 500 dives). We saw him touching down from a jump, his parachute still airborne above him, and giving the director Christopher McQuarrie a dap and a casual “Hey, McQ.” We heard a chorus of stunt trainers telling us how fantastic Cruise is (“an amazing individual,” his base-jumping coach says). And we hear from Cruise himself, asking his driving question: “How can we involve the audience?”

The featurette was an excellent bit of Tom Cruise propaganda and a compelling look at his dedication to (or obsession with) his own mythology (or pathology). But for the movie itself, the advance release of this featurette was completely undermining. When the jump scene finally arrived, it was impossible to ignore what you already knew about it. […]

Not only was the stunt compromised by the featurette; the way it was shot and edited did not help matters, either. Something about it does not look quite right — maybe it is the perpetual late afternoon light — and the whole sequence feels unbelievable. That is, I know Cruise is the one performing the stunt, but if I found out each shot contained a computer-generated replacement for him, it would not surprise me.

I am as excited for this instalment as anyone. I hope it looks as good as a $300 million blockbuster should. But the way this franchise has been shot since “Fallout” has been a sore spot for me and, with the same director, cinematographer, and editor as “Dead Reckoning”, I cannot imagine why it would be much different.

Update: Of course, the practical stunts are only part of the story.

Rolfe Winkler, Amrith Ramkumar, and Meghan Bobrowsky, Wall Street Journal:

Apple stepped up efforts in recent weeks to fight Texas legislation that would require the iPhone-maker to verify ages of device users, even drafting Chief Executive Tim Cook into the fight.

The CEO called Texas Gov. Greg Abbott last week to ask for changes to the legislation or, failing that, for a veto, according to people familiar with the call. These people said that the conversation was cordial and that it made clear the extent of Apple’s interest in stopping the bill.

Abbott has yet to say whether he will sign it, though it passed the Texas legislature with veto-proof majorities.

This comes just a few months after Apple announced it would be introducing age range APIs in iOS later this year. Earlier this month, U.S. lawmakers announced federal bills with the same intent. This is clearly the direction things are going. Is there something specific in Texas’ bill that makes it particularly objectionable? Or is it simply the case Apple and Google would prefer a single federal law instead of individual state laws?

Want to experience twice as fast load times in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock — the ad blocker designed for you.

Magic Lasso Adblock: 2.0× Faster Web Browsing in Safari

As an efficient, high performance and native Safari ad blocker, Magic Lasso blocks all intrusive ads, trackers, and annoyances – delivering a faster, cleaner, and more secure web browsing experience.

By cutting down on ads and trackers, common news websites load 2× faster and browsing uses less data while saving energy and battery life.

Rely on Magic Lasso Adblock to:

  • Improve your privacy and security by removing ad trackers

  • Block all YouTube ads, including pre-roll video ads

  • Block annoying cookie notices and privacy prompts

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 350,000 users and download Magic Lasso Adblock today.

Remember how, in 2023, the U.S. Office of the Director of National Intelligence published a report acknowledging mass stockpiling of third-party data it had purchased? It turns out there is so much private information about people that it is creating a big headache for the intelligence agencies — not because of any laws or ethical qualms, but simply because of the sheer volume.

Sam Biddle, the Intercept:

The Office of the Director of National Intelligence is working on a system to centralize and “streamline” the use of commercially available information, or CAI, like location data derived from mobile ads, by American spy agencies, according to contract documents reviewed by The Intercept. The data portal will include information deemed by the ODNI as highly sensitive, that which can be “misused to cause substantial harm, embarrassment, and inconvenience to U.S. persons.” The documents state spy agencies will use the web portal not just to search through reams of private data, but also run them through artificial intelligence tools for further analysis.

Apparently, the plan is to feed all this data purchased from brokers and digital advertising companies into artificial intelligence systems. The DNI says it has rules about purchasing and using this data, so there is nothing to worry about.

By the way, the DNI’s Freedom of Information Act page was recently updated to remove links to released records and FOIA logs. They were live on May 5 but, as of May 16, those pages have been removed, and direct links no longer resolve either. Strange.

Update: The ODNI told me its “website is currently under construction”.

Berber Jin, Wall Street Journal:

Altman and Ive offered a few hints at the secret project they have been working on [at a staff meeting]. The product will be capable of being fully aware of a user’s surroundings and life, will be unobtrusive, able to rest in one’s pocket or on one’s desk, and will be a third core device a person would put on a desk after a MacBook Pro and an iPhone.

Ambitious, albeit marginally less hubristic than considering it a replacement for either of those two device categories.

Stephen Hackett:

If OpenAI’s future product is meant to work with the iPhone and Android phones, then the company is opening a whole other set of worms, from the integration itself to the fact that most people will still prefer to simply pull their phone out of their pockets for basically any task.

I am reminded of an April 2024 article by Jason Snell at Six Colors:

The problem is that I’m dismissing the Ai Pin and looking forward to the Apple Watch specifically because of the control Apple has over its platforms. Yes, the company’s entire business model is based on tightly integrating its hardware and software, and it allows devices like the Apple Watch to exist. But that focus on tight integration comes at a cost (to everyone but Apple, anyway): Nobody else can have the access Apple has.

A problem OpenAI could face with this device is the same one Humane faced: Apple treats third-party hardware and software as second-class citizens in its post-P.C. ecosystem. OpenAI is laying the groundwork for better individual context, but this is a significant limitation, and I am curious to see how it can be overcome.

Whatever this thing is, it is undeniably interesting to me. OpenAI has become a household name on a foundation of an academic-sounding product that has changed the world. Jony Ive has been the name attached to entire eras of design. There is plenty to criticize about both. Yet the combination of these things is surely intriguing, inviting the kind of speculation that used to be commonplace in tech before it all became rote. I have little faith our world will become meaningfully better with another gadget in it. Yet I hope the result is captivating, at least, because we could use some of that.

Jessica Conditt, Engadget:

A group of GeoGuessr map creators have pulled their contributions from the game to protest its participation in the Esports World Cup 2025, calling the tournament “a sportswashing tool used by the government of Saudi Arabia to distract from and conceal its horrific human rights record.” The protestors say the blackout will hold until the game’s publisher, GeoGuessr AB, cancels its planned Last Chance Wildcard tournament at the EWC in Riyadh, Saudi Arabia, from July 21 to 27.

Those participating in this blackout created some of the most popular and notable maps in the game. Good for them.

Update: GeoGuessr says it is withdrawing from the EWC.

Thinking about the energy “footprint” of artificial intelligence products makes this a good time to re-link to Mark Kaufman’s excellent 2020 Mashable article in which he explores the idea of a carbon footprint:

The genius of the “carbon footprint” is that it gives us something to ostensibly do about the climate problem. No ordinary person can slash 1 billion tons of carbon dioxide emissions. But we can toss a plastic bottle into a recycling bin, carpool to work, or eat fewer cheeseburgers. “Psychologically we’re not built for big global transformations,” said John Cook, a cognitive scientist at the Center for Climate Change Communication at George Mason University. “It’s hard to wrap our head around it.”

Ogilvy & Mather, the marketers hired by British Petroleum, wove the overwhelming challenges inherent in transforming the dominant global energy system with manipulative tactics that made something intangible (carbon dioxide and methane — both potent greenhouse gases — are invisible), tangible. A footprint. Your footprint.

The framing of most of the A.I. articles I have seen thankfully shies away from ascribing individual blame; instead, it points to systemic flaws. This is preferable, but it still does little at the scale of worldwide electricity generation.