Month: June 2024

M.G. Siegler:

With all the talk about how the EU believes Apple is anticompetitive, it never occurred to me to read it more literally. By announcing the [sic] would not be shipping their ‘Apple Intelligence’ tools in the EU, Apple is choosing to not compete in AI in the region. That is anticompetitive. I guess?

Siegler is not the only person who seems to be confused by Margrethe Vestager’s recent comments, as transcribed by Ben Lovejoy of 9to5Mac:

I find that very interesting that they say we will now deploy AI where we’re not obliged to enable competition. I think that is that is the most sort of stunning open declaration that they know 100% that this is another way of disabling competition where they have a stronghold already.

Vestager is claiming Apple Intelligence must be anticompetitive because Apple is not launching it in the E.U., where it would fall under the governance of the DMA. It is, at best, a stretch to conclude that from Apple’s cautious behaviour. But I cannot see how one could interpret Vestager’s comments to mean she believes the delay of Apple Intelligence in the E.U. is itself anticompetitive.

Yesterday, in responding to a Google profile of DRAGONBRIDGE, a Chinese state-affiliated disinformation campaign, I wrote that I hoped Google would do the same if it were a U.S.-allied effort it had found instead — forgetting that Google had already done so, and in a far more complicated circumstance.

Michael Coppola:

In January 2021, Google’s Project Zero published a series of blog posts coined the In the Wild Series. Written in conjunction with Threat Analysis Group (TAG), this report detailed a set of zero-day vulnerabilities being actively exploited in the wild by a government actor.

[…]

What the Google teams omitted was that they had in fact exposed a nine-month-long counterterrorism operation being conducted by a U.S.-allied Western government, and through their actions, Project Zero and TAG had unilaterally destroyed the capabilities and shut down the operation.

This is not the only example cited by Coppola; there are many in this post.

When an exploit chain is discovered, the answer is very easy — technically speaking: Google did the right thing by finding and exposing these vulnerabilities, no matter how they were being used. But doing so is politically and ethically fraught if those vulnerabilities are being used by state actors.

Patrick Howell O’Neill, reporting for MIT Technology Review in March 2021:

It’s true that Project Zero does not formally attribute hacking to specific groups. But the Threat Analysis Group, which also worked on the project, does perform attribution. Google omitted many more details than just the name of the government behind the hacks, and through that information, the teams knew internally who the hacker and targets were. It is not clear whether Google gave advance notice to government officials that they would be publicizing and shutting down the method of attack.

As far as I know, the U.S. ally was never revealed nor were the specific targets. Google’s revelation could have had catastrophic consequences, as Coppola speculates. But it is also true that not revealing known exploits to software vendors can have severe outcomes, as we learned with WannaCry. The risk of exposing the use of vulnerabilities is variable; the risk of not reporting them is fixed and known: they will be found by or released to people who should never have access to them.

Dell Cameron, Wired:

United States lawmakers who’ve flirted for years with the idea of offering Americans a semblance of control over their own data yanked at the last moment the latest iteration of a “comprehensive” privacy package that’s been subject to continual editing and debate for the better part of a decade. The bill, known as the American Privacy Rights Act (APRA), was scheduled for markup Thursday morning by the House Energy & Commerce Committee (E&C), which holds jurisdiction over matters of commercial surveillance.

Americans, if you do not like how Democrats diluted this bill to appease obstinate Republicans who still killed its chances, you should let your representative know.

The demise of this bill sucks because strong privacy rules in the U.S. would have a knock-on effect worldwide. It would mean the expectations of data collection, retention, and use would fundamentally shift. This bill was imperfect even in its original guise, but it was a meaningful positive step forward. My own government should learn from it.

Alex Young, Consequence:

MTV News has pulled its digital archive, making thousands of news stories, profiles, interviews, and other editorial features dating back to 1996 no longer accessible on the web.

Jed Rosenzweig, LateNighter:

ComedyCentral.com had been home to clips from every episode of The Daily Show since 1999, and the entire run of The Colbert Report, but as of Wednesday morning, those clips (and most everything else on the site) are gone.

Michael Alex, who used to run MTV News’ web team, in a guest piece for Variety:

History needs stewards, not owners. Whoever legally owns the archive does not legally own the history, even if they own the creative work of thousands of writers, editors, producers and more. This archive — of MTV News, where you heard it first — needs to be available to the public.

I will not pretend to understand how big of a financial hole Paramount is in, but I fully understand the loss of this archive. Most of the video clips are not available anywhere else — at least, not publicly and not legally. Much of the text on MTV News has been saved by the Internet Archive going back to 1996, but it also has huge gaps.

I do not want to make this sound like the Library of Alexandria is burning down, but it is an important collection of work. Now, without any notice, it is all gone.

Apple:

Diagnostics is part of Apple’s ongoing effort to extend the lifespan of Apple products. While Apple is committed to providing safe and affordable repair options, designing and building long-lasting products remains the top priority. The best type of repair for customers and the planet is one that is never needed. Today, Apple published a whitepaper explaining the company’s principles for designing for longevity — a careful balance between product durability and repairability.

The paper is worth a read to understand what role Apple sees repair playing in the lifecycle of a device, and why it is so keen on parts pairing. For example, it says the charging port is part of a more complex module, and separating it would actually create greater carbon emissions if you account for both the total emissions from manufacturing and the likelihood of repair. This is fair, though it should be said that it is based entirely on an internal case study, the results of which are not readily available, and which appears to consider only carbon emissions. What about other environmental costs? Still, it does sound believable.

Apple also repeats the argument made by John Ternus that building for durability can prevent the need for repair, though sometimes at the cost of its ease. Ternus and this paper explain how the addition of seals and adhesives has made iPhones far more water resistant, thereby eliminating a whole host of repair needs. But, as I pointed out at the time, these goals are not necessarily in conflict, as Apple’s recent iPhones have been easier to repair than their predecessors without sacrificing their water and dust ingress rating.

Ternus’ interview with Marques Brownlee last month and this paper both seem to be Apple’s attempt to explain how it sees repair, and a way to reframe it in more favourable terms. Repairability is important, Apple says, but “when it benefits our customers and the environment”, not in isolation. It should be considered in the context of overall device longevity. That is a reasonable argument and one I do not disagree with in general. It also makes me wonder about Apple’s attitude toward batteries. There should be no need to replace the trackpad, keyboard, and a square foot of aluminum in order to install a new battery in a laptop.

You do have to chuckle at Apple’s diagram on page eight highlighting the sole repairable component on the original iPhone: the SIM card tray.

[…] Next year, Canada will become the 34th country in which Apple offers Self Service Repair.

Excellent news. Legislative pressure works.

Zak Butler of Google:

Today we are sharing updated insights about DRAGONBRIDGE, the most prolific IO actor Google’s Threat Analysis Group (TAG) tracks. DRAGONBRIDGE, also known as “Spamouflage Dragon,” is a spammy influence network linked to the People’s Republic of China (PRC) that has a presence across multiple platforms. Despite producing a high amount of content, DRAGONBRIDGE still does not get high engagement from users on YouTube or Blogger.

[…]

Despite their continued profuse content production and the scale of their operations, DRAGONBRIDGE achieves practically no organic engagement from real viewers. In 2023, of the over 57,000 YouTube channels disabled, 80% had zero subscribers. Of the over 900,000 videos suspended, over 65% of their videos had fewer than 100 views, and 30% of their videos had zero views. Despite experimenting with content and producing large amounts of content, DRAGONBRIDGE still does not receive high engagement.

Reporting earlier this year by David Gilbert at Wired indicates this is not an isolated case: these propaganda efforts have been largely unsuccessful. Nevertheless, I appreciate that platform owners like Google are looking out for coordinated campaigns like these and intervening. I hope it would do the same if it found a U.S.-led disinformation campaign — and I would really like to believe it would. It is hilarious to call out governments trying these tactics and embarrass them with their weak engagement numbers.

At least, I hope these campaigns keep seeing a milquetoast reception. The alternative is likely terrible, especially for something like that U.S. anti-vaccine initiative.

Ben Lennett, Justin Hendrix, and Gabby Miller, Tech Policy Press:

Today, the US Supreme Court ruled in favor of the Biden Administration in Murthy v. Missouri. In a 6-3 ruling, the Court reversed a decision by the Court of Appeals for the Fifth Circuit that had found that the administration had violated the plaintiffs’ First Amendment rights, finding instead that the plaintiffs did not have standing to bring the case. “Neither the individual nor the state plaintiffs have established Article III standing to seek an injunction against any defendant,” the decision says.

This was an escalation of a September ruling favourable to the Biden administration — one which, by the way, the Supreme Court justices seemed really annoyed about having to listen to.

There are two more key cases concerning U.S. government influence over social media platforms’ moderation policies from this term, the decisions for which will be released soon.

Update: More good news after the justices ruled social media companies can moderate their platforms as they see fit.

Tuesdays used to be my favourite day of the week because it was the day when a bunch of new music would typically be released. That is no longer the case — but only because music releases were moved to Fridays. The music itself is as good as ever.

It remains a frustratingly recurring theme that today’s music is garbage, and yesterday’s music is gold. Music today is “just noise”, according to generations of people who insist the music in their day — probably when they were in their twenties — was better. Today’s peddler of this dead horse trope is YouTuber and record producer Rick Beato, who published a video about the two “real reason[s] music is getting worse”: it is “too easy to make”, and “too easy to consume”.

Beato is right that new technologies have been steadily making it easier to make music and listen to it, but he is wrong that they are destructive. “Good” and “bad” are subjective, to be sure, but it seems self-evident that more good music is being made now than ever before, simply because people are so easily able to translate an idea to published work. Any artist can experiment with any creative vision. It is an amazing time.

This also suggests more bad music is being made, but who cares? Bad music has existed forever. The lack of a gatekeeper now means it gains wider distribution, but that has more benefits than problems. Maybe some people will stumble across it and recognize the potential in a burgeoning artist.

Aside from the lack of distribution avenues historically, the main reason we do not remember bad records is because they are no longer played. This does not mean unpopular music is inherently bad, of course, only that time sifts things we generally like from things we do not.

Perhaps one’s definition of “good” includes how influential a work of art turns out to be. Again, it seems difficult to argue modern music is not as influential as that which has preceded it. It may be too early to tell what will prove its influence, to be sure, but we have relatively recent examples which indicate otherwise. The Weeknd spawned an entire genre of moody R&B imitators from a series of freely distributed mixtapes. The entire genre of trap spread to the world from its origins in Atlanta, to the extent that its unique characteristics have underpinned much of pop music for a decade. Many of its biggest artists made their name on DatPiff. Just two of countless examples.

If you actually love music for all that it can be, you are spoiled for choice today. If anything, that is the biggest problem with music today: there is so much of it and it can be overwhelming. The ease with which music can be made does not necessarily make it worse, but it does make it more difficult if you want to try as much of it as you can. I have only a small amount of sympathy when Beato laments how the ease of streaming services devalues artistry because of how difficult it can be to spend time with any one album when there is another to listen to, and then another. But anyone can make the decision to hold the queue and embrace a single release. (And if artistry is something we are concerned with, why call it “consuming”? A good record is not something I want to chug down.)

We can try any music we like these days. We can explore old releases just as easily as we can see what has just been published. We can and should take a chance on genres we had never considered before. We can explore new recordings of jazz and classical compositions. Every Friday is a feast for the ears — if you want it to be. If you really like music, you are living in the luckiest time. I know that is how I feel. I just wish artists could get paid an appropriate amount for how much they contribute to the best parts of life.

The European Commission:

Today, the European Commission has informed Apple of its preliminary view that its App Store rules are in breach of the Digital Markets Act (DMA), as they prevent app developers from freely steering consumers to alternative channels for offers and content.

The problems cited by the Commission are so far entirely related to in-app referrals for external purchases. The Commission additionally says it is looking into Apple’s terms for third-party app stores — including the Core Technology Fee — but that is not what these specific findings are about.

Jesper:

In the DMA, the ground rule is for sideloading apps to be allowed, and to only very minimally be reined in under very specific conditions. Apple chose to take these conditions and lawyer them into “always, unless you pay us sums of money that are plainly prohibitive for most actors”. Apple knew the rules and understood the intent and chose to evade them, in order to retain additional income.

Separately, earlier this month — the weekend before WWDC, in fact — Apple rejected an emulator after holding it in review for two months.

Benjamin Mayo, 9to5Mac:

App Review has rejected a submission from the developers of UTM, a generic PC system emulator for iPhone and iPad.

The open source app was submitted to the store, given the recent rule change that allows retro game console emulators, like Delta or Folium. App Review rejected UTM, deciding that a “PC is not a console”. What is more surprising is the fact that UTM says that Apple is also blocking the app from being listed in third-party app stores in the EU.

Michael Tsai compiled the many disapproving reactions to Apple’s decision, adding:

The bottom line for me is that Apple doesn’t want general-purpose emulators, it’s questionable whether the DMA lets it block them, and even siding with Apple on this it isn’t consistently applying its own rules.

Jason Snell, Six Colors:

The whole point of the DMA is that Apple does not get to act as an arbitrary approver or disapprover of apps. If Apple can still reject or approve apps as it sees fit, what’s the point of the DMA in the first place?

The Commission continues:

In parallel, the Commission will continue undertaking preliminary investigative steps outside of the scope of the present investigation, in particular with respect to the checks and reviews put in place by Apple to validate apps and alternative app stores to be sideloaded.

Riley Testut:

When we first met with the EC a few months ago, we were asked repeatedly if we trusted Apple to be in charge of Notarization. We emphatically said yes.

However, it’s clear to us now that Apple is indeed using Notarization to not only delay our apps, but also to determine on a case-by-case basis how to undermine each release — such as by changing the App Store rules to allow them

If you are somebody who believes it is only fair to take someone at their word and assume good faith, I am right there with you. Even though Apple has a long history of capricious App Review processes, it was fair to consider its approach to the E.U. a begrudging but earnest attempt at compliance. Even E.U. Commissioner Margrethe Vestager did, telling CNBC she was “very surprised that we would have such suspicions of Apple being non-compliant”.

That is, however, a rather difficult position to maintain, given the growing evidence Apple seems determined to evade both the letter and spirit of this legislation. Perhaps there are legitimate security concerns in the UTM emulator. The burden of proof for that claim rests on Apple, however, and its ability to be a reliable narrator is sometimes questionable. Consider the possible conflicts of interest in App Tracking Transparency rules raised by German competition authorities.

Manton Reece:

When a company withholds a feature from the EU because of the DMA — Apple for AI, Meta today for the fediverse — they should document which sections of the DMA would potentially be violated. Let users fact-check whether there’s a real problem.

Agreed. This would allow people to understand, on the merits, what businesses see as the limitations of the DMA. Users may not be the best judge of whether a legal problem exists — especially since laws get interpreted and reinterpreted by different experts all the time — but any details would be better than a void filled with speculation.

Joel Dryden, CBC News:

Alberta’s government says it is “actively exploring” the use of every legal option, including a constitutional challenge or the use of the Alberta Sovereignty Act, to push back against federal legislation that will soon become law.

That legislation is Bill C-59, which would require companies to provide evidence to back up their environmental claims. It is currently awaiting royal assent.

As of Thursday, it was also what led the Pathways Alliance, a consortium of Canada’s largest oilsands companies, to remove all its content from its website, social media and other public communications.

The Alberta government and these petrochemical companies are cowards, the lot of them. If a business wants to claim that a drug treats or cures a disease, it needs to have proof of that. If it wants to claim the health benefits of some packaged food product, it needs evidence. If a mechanical process is supposed to meet energy or environmental standards, it must pass relevant tests.

Why should oil companies making absurd claims laundered through a quasi-governmental public relations office and supported by the province get to greenwash their way out of responsibility? All else being equal, most people probably do not care how their energy needs are met. They care about having electricity, transportation, and warmth. There is no shame in being honest about where we are environmentally speaking, where we need to go, and how difficult it will be to get there.

John Gruber:

An oft-told story is that back in 2009 — two years after Dropbox debuted, two years before Apple unveiled iCloud — Steve Jobs invited Dropbox cofounders Drew Houston and Arash Ferdowsi to Cupertino to pitch them on selling the company to Apple. Dropbox, Jobs told them, was “a feature, not a product”.

[…]

Leading up to WWDC last week, I’d been thinking that this same description applies, in spades, to LLM generative AI. Fantastically useful, downright amazing at times, but features. Not products. Or at least not broadly universal products. Chatbots are products, of course. People pay for access to the best of them, or for extended use of them. But people pay for Dropbox too.

Marques Brownlee published a video about the same topic last week, and referenced a Wired podcast episode from the week before.

This seems to be the way things are shaping up and, anecdotally, describes the kinds of A.I. things I find most useful. Previous site sponsor ListenLater’s pitch is it lets you “listen to articles as podcasts”; that it uses an A.I.-trained voice makes it sound better, but is only one component of a more comprehensive story. Generative features in Adobe’s products enable faster and easier object removal from photos, and extending images beyond the known edges.

These are just features, though. Text-to-speech has been around for ages, and training it on real speech patterns makes it sound more realistic than most digital voices have so far been. Likewise, removal tools have been a core feature in image editing software for decades, and Adobe’s has changed a lot in the time I have used it: from basic clone stamping, which allows you to paint an area with sampled pixels, to the healing brush — sort of similar, but it tries to match the tone of the destination — to Content-Aware Fill. And, now, Generative Fill. These tools have made image editing easier and more realistic. It could take hours to remove an errant street sign from a photo with older tools; now, it really does take mere seconds, and the results are usually at least as good as a manual effort. The same is true for extending a photo — something routinely done to make it fit better in an ad or some other fixed composition.

The irony of the feature-not-product framing is that iCloud Drive and OneDrive, for example, have struggled to become as efficient and reliable as Dropbox was when it launched. But, then again, so has Dropbox today. As synced folders became just a feature within a broader platform, Dropbox expanded its offering to become a collaborative work environment, a cloud backup utility, and more. As a result, its formerly quiet and dutiful desktop app has become less efficient.1

A similar story could be told about 1Password, too, though perhaps not to the same extent. For many users, the password manager built into their system or browser might be fine. 1Password makes a more robust product marketed heavily toward business and enterprise users. Unfortunately, it has supported that effort with a suite of apps which are less efficient for users, in order to create a better workflow for its developers.

If you are looking for the path the standalone A.I. companies are likely to take — aside from a merger or acquisition — these examples may be lurking along the way. I wonder what has been happening with that OpenAI hardware project.


  1. At the time, I wrote the enterprise positioning was “misguided” and likely would not be successful. This humble pie tastes fine, I guess. ↥︎

Mark Sullivan, Fast Company:

[Perplexity CEO Aravind] Srinivas said the mysterious web crawler that Wired identified was not owned by Perplexity, but by a third-party provider of web crawling and indexing services.

Srinivas wants the warm glow of innovation without the cold truth of responsibility.

Srinivas would not say the name of the third-party provider, citing a Nondisclosure Agreement.

The way Perplexity works is dependent on favourable relationships with these providers, so Srinivas cannot throw them under the bus by name. He can, however, scatter blame all around.

Asked if Perplexity immediately called the third-party crawler to tell them to stop crawling Wired content, Srinivas was non-committal. “It’s complicated,” he said.

Srinivas has not.

Srinivas also noted that the Robot Exclusion Protocol, which was first proposed in 1994, is “not a legal framework.”

Srinivas is drawing a clear distinction between laws and principles because the legal implications are so far undecided, but it sure looks unethical for his service to ignore the requests of publishers — no matter whether it does so through first- or third-party means.

He suggested that the emergence of AI requires a new kind of working relationship between content creators, or publishers, and sites like his.

On this, Srinivas and I agree. But it seems that, until new policies are in place, Perplexity will keep pillaging the web.

Since 2022, the European Union has been trying to pass legislation requiring digital service providers to scan for and report CSAM as it passes through their services.

Giacomo Zandonini, Apostolis Fotiadis, and Luděk Stavinoha, Balkan Insight, with a good summary in September:

Welcomed by some child welfare organisations, the regulation has nevertheless been met with alarm from privacy advocates and tech specialists who say it will unleash a massive new surveillance system and threaten the use of end-to-end encryption, currently the ultimate way to secure digital communications from prying eyes.

[…]

The proposed regulation is excessively “influenced by companies pretending to be NGOs but acting more like tech companies”, said Arda Gerkens, former director of Europe’s oldest hotline for reporting online CSAM.

This is going to require a little back-and-forth, and I will pick up the story with quotations from Matthew Green’s introductory remarks to a panel before the European Internet Services Providers Association in March 2023:

The only serious proposal that has attempted to address this technical challenge was devised — and then subsequently abandoned — by Apple in 2021. That proposal aimed only at detecting known content using a perceptual hash function. The company proposed to use advanced cryptography to “split” the evaluation of hash comparisons between the user’s device and Apple’s servers: this ensured that the device never received a readable copy of the hash database.

[…]

The Commission’s Impact Assessment deems the Apple approach to be a success, and does not grapple with this failure. I assure you that this is not how it is viewed within the technical community, and likely not within Apple itself. One of the most capable technology firms in the world threw all their knowledge against this problem, and were embarrassed by a group of hackers: essentially before the ink was dry on their proposal.
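To make Green’s description a little more concrete, here is a simplified sketch of the matching step only, in Python. Every name and number in it is hypothetical, and it is emphatically not Apple’s actual design: the real system derived hashes with a neural network and wrapped the comparison in private set intersection so the device never saw the database.

    # Simplified, hypothetical illustration of perceptual hash matching.
    # Unlike a cryptographic hash, a perceptual hash changes only a little
    # when an image changes a little, so matching tolerates some bit flips.
    def hamming_distance(a: int, b: int) -> int:
        return (a ^ b).bit_count()

    def matches_known_media(image_hash: int, database: set[int], threshold: int = 8) -> bool:
        # The same tolerance that survives resizing and recompression is
        # the property researchers exploited to craft colliding images —
        # the embarrassment Green refers to above.
        return any(hamming_distance(image_hash, known) <= threshold for known in database)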

Daniel Boffey, the Guardian, in May 2023:

Now leaked internal EU legal advice, which was presented to diplomats from the bloc’s member states on 27 April and has been seen by the Guardian, raises significant doubts about the lawfulness of the regulation unveiled by the European Commission in May last year.

The European Parliament in a November 2023 press release:

In the adopted text, MEPs excluded end-to-end encryption from the scope of the detection orders to guarantee that all users’ communications are secure and confidential. Providers would be able to choose which technologies to use as long as they comply with the strong safeguards foreseen in the law, and subject to an independent, public audit of these technologies.

Joseph Menn, Washington Post, in March, reporting on the results of a European court ruling:

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]

The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

This is not directly about the proposed CSAM measures, but it is precedent for European regulators to follow.

Natasha Lomas, TechCrunch, this week:

The most recent Council proposal, which was put forward in May under the Belgian presidency, includes a requirement that “providers of interpersonal communications services” (aka messaging apps) install and operate what the draft text describes as “technologies for upload moderation”, per a text published by Netzpolitik.

Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”

Meredith Whittaker, CEO of Signal, issued a PDF statement criticizing the proposal:

Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games. They’ve come back to the table with the same idea under a new label. Instead of using the previous term “client-side scanning,” they’ve rebranded and are now calling it “upload moderation.” Some are claiming that “upload moderation” does not undermine encryption because it happens before your message or video is encrypted. This is untrue.

Patrick Breyer, of Germany’s Pirate Party:

Only Germany, Luxembourg, the Netherlands, Austria and Poland are relatively clear that they will not support the proposal, but this is not sufficient for a “blocking minority”.

Ella Jakubowska on X:

The exact quote from [Věra Jourová] the Commissioner for Values & Transparency: “the Commission proposed the method or the rule that even encrypted messaging can be broken for the sake of better protecting children”

Věra Jourová on X, some time later:

Let me clarify one thing about our draft law to detect online child sexual abuse #CSAM.

Our proposal is not breaking encryption. Our proposal preserves privacy and any measures taken need to be in line with EU privacy laws.

Matthew Green on X:

Coming back to the initial question: does installing surveillance software on every phone “break encryption”? The scientist in me squirms at the question. But if we rephrase as “does this proposal undermine and break the *protections offered by encryption*”: absolutely yes.

Maïthé Chini, the Brussels Times:

It was known that the qualified majority required to approve the proposal would be very small, particularly following the harsh criticism of privacy experts on Wednesday and Thursday.

[…]

“[On Thursday morning], it soon became clear that the required qualified majority would just not be met. The Presidency therefore decided to withdraw the item from today’s agenda, and to continue the consultations in a serene atmosphere,” a Belgian EU Presidency source told The Brussels Times.

That is a truncated history of this piece of legislation: regulators want platform operators to detect and report CSAM; platforms and experts say that will conflict with security and privacy promises, even if media is scanned prior to encryption. This proposal may be specific to the E.U., but you can find similar plans to curtail or invalidate end-to-end encryption around the world, including in Australia, the United Kingdom, and the United States.

I selected English-speaking areas because that is the language I can read, but I am sure there are more regions facing threats of their own.

We are not served by pretending this threat is limited to any specific geography. The benefits of end-to-end encryption are being threatened globally. The E.U.’s attempt may have been pushed aside for now, but another will rise somewhere else, and then another. It is up to civil rights organizations everywhere to continue arguing for the necessary privacy and security protections offered by end-to-end encryption.

Javier Espinoza and Michael Acton, Financial Times:

Apple has warned that it will not roll out the iPhone’s flagship new artificial intelligence features in Europe when they launch elsewhere this year, blaming “uncertainties” stemming from Brussels’ new competition rules.

This article carries the headline “Apple delays European launch of new AI features due to EU rules”, but it is not clear to me these features are “delayed” in the E.U. or that they would “launch elsewhere this year”. According to the small text in Apple’s WWDC press release, these features “will be available in beta […] this fall in U.S. English”, with “additional languages […] over the course of the next year”. This implies the A.I. features in question will only be available to devices set to U.S. English, and acting upon text and other data also in U.S. English.

To be fair, this is a restriction of language, not geography. Someone in France or Germany could still want to play around with Apple Intelligence stuff even if it is not very useful with their mostly non-English data. Apple is saying they will not be able to. It aggressively region-locks alternative app marketplaces to Europe and, I imagine, will use the same infrastructure to keep users out of these new features.

There is an excerpt from Apple’s statement in this Financial Times article explaining which features will not launch in Europe this year: iPhone Mirroring, better screen sharing with SharePlay, and Apple Intelligence. Apple provided a fuller statement to John Gruber. This is the company’s explanation:

Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.

Apple does not explain specifically how these features run afoul of the DMA — or why it would not or could not build them to clearly comply with the DMA — so this could be fear-mongering, but I will assume it is a good-faith effort at compliance in the face of possible ambiguity. I am not sure Apple has earned the benefit of the doubt, but that is a different matter.

It seems like even the possibility of lawbreaking has made Apple cautious — and I am not sure why that is seen as an inherently bad thing. This is one of the world’s most powerful corporations, and the products and services it rolls out impact a billion-something people. That position deserves significant legal scrutiny.

I was struck by something U.S. FTC chair Lina Khan said in an interview at a StrictlyVC event this month:

[…] We hear routinely from senior dealmakers, senior antitrust lawyers, who will say pretty openly that as of five or six or seven years ago, when you were thinking about a potential deal, antitrust risk or even the antitrust analysis was nowhere near the top of the conversation, and now it is up front and center. For an enforcer, if you’re having companies think about that legal issue on the front end, that’s a really good thing because then we’re not going to have to spend as many public resources taking on deals that we believe are violating the laws.

Now that competition laws are being enforced, businesses have to think about them. That is a good thing! I get a similar vibe from this DMA response. It is much newer than antitrust laws in both the U.S. and E.U. and there are things about which all of the larger technology companies are seeking clarity. But it is not an inherently bad thing to have a regulatory layer, even if it means delays.

Is that not Apple’s whole vibe, anyway? It says it does not rush into things. It is proud of withholding new products until it feels it has gotten them just right. Perhaps you believe corporations are a better judge of what is acceptable than a regulatory body, but the latter serves as a check on the behaviour of the former.

Apple is not saying Europe will not get these features at all. It is only saying it is not sure it has built them in a DMA-compliant way. We do not know anything more about why that is the case at this time, and it does not make sense to speculate further until we do.

After Robb Knight found — and Wired confirmed — that Perplexity summarizes websites which have followed its opt-out instructions, I noticed a number of people making a similar claim: this is nothing but a big misunderstanding of the function of controls like robots.txt. A Hacker News comment thread contains several versions of these two arguments:

  • robots.txt is only supposed to affect automated crawling of a website, not explicit retrieval of an individual page.

  • It is fair to use a user agent string which does not disclose automated access because this request was not automated per se, as the user explicitly requested a particular page.

That is, publishers should expect the controls provided by Perplexity to apply only to its indexing bot, not a user-initiated page request. Wary as I am of being the kind of person who replies to pseudonymous comments on Hacker News, I think this is an unnecessarily absolutist reading of how site owners expect the Robots Exclusion Protocol to work.

To be fair, that protocol was published in 1994, well before anyone had to worry about websites being used as fodder for large language model training. And, to be fairer still, it has never been formalized. A spec was only recently proposed in September 2022. It has so far been entirely voluntary, but the draft standard proposes a more rigid expectation that rules will be followed. Yet it does not differentiate between different types of crawlers — those for search, others for archival purposes, and ones which power the surveillance economy — and contains no mention of A.I. bots. Any non-human means of access is expected to comply.
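To make the mechanics concrete, here is a hypothetical robots.txt. PerplexityBot and GPTBot are the agent tokens Perplexity and OpenAI document for their crawlers; the rest is a minimal sketch, not any real site’s file. Notice the format offers only per-agent allow and disallow rules, with no vocabulary for distinguishing a search indexer from a model-training scraper except by naming each one:

    # Hypothetical example. Block two documented A.I. crawlers entirely:
    User-agent: PerplexityBot
    Disallow: /

    User-agent: GPTBot
    Disallow: /

    # Everyone else may crawl the whole site; an empty Disallow permits all.
    User-agent: *
    Disallow: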

The question seems to be whether what Perplexity is doing ought to be considered crawling. It is, after all, responding to a direct retrieval request from a user. This is subtly different from how a user might search Google for a URL, in which case they are asking whether that site is in the search engine’s existing index. Perplexity is ostensibly following real-time commands: go fetch this webpage and tell me about it.

But it clearly is also crawling in a more traditional sense. The New York Times and Wired both disallow PerplexityBot, yet I was able to ask it to summarize a set of recent stories from both publications. At the time of writing, the Wired summary is about seventeen hours old, and the Times summary is about two days old. Neither publication has changed its robots.txt directives recently; they were both blocking Perplexity last week, and they are blocking it today. Perplexity is not fetching these sites in real-time as a human or web browser would. It appears to be scraping sites which have explicitly said that is something they do not want.

Perplexity should be following those rules and it is shameful it is not. But what if you ask for a real-time summary of a particular page, as Knight did? Is that something which should be identifiable by a publisher as a request from Perplexity, or from the user?

The Robots Exclusion Protocol may be voluntary, but a more robust method is to block bots by detecting their user agent string. Instead of expecting visitors to abide by your “No Homers Club” sign, you are checking IDs. But these strings are unreliable and there are often good reasons for evading user agent sniffing.
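As a sketch of why that check is so easily evaded, here is a minimal Python version, assuming the documented bot tokens named above. Everything in it is illustrative; the point is that it amounts to substring matching on a header the client chooses to send:

    # Minimal sketch: flag requests whose User-Agent header declares a
    # known bot token. A client that omits or fakes the header, as the
    # undisclosed crawler reportedly does, sails right through.
    BLOCKED_TOKENS = ("PerplexityBot", "GPTBot")

    def is_declared_bot(user_agent: str | None) -> bool:
        ua = (user_agent or "").lower()
        return any(token.lower() in ua for token in BLOCKED_TOKENS)

    # A declared bot is caught; an ordinary browser string is not.
    assert is_declared_bot("Mozilla/5.0 (compatible; PerplexityBot/1.0)")
    assert not is_declared_bot("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)")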

Perplexity says its bot is identifiable by both its user agent and the IP addresses from which it operates. Remember: this whole controversy is that it sometimes discloses neither, making it impossible to differentiate Perplexity-originating traffic from a real human being — and there is a difference.

A webpage being rendered through a web browser is subject to the quirks and oddities of that particular environment — ad blockers, Reader mode, screen readers, user style sheets, and the like — but there is a standard. A webpage being rendered through Perplexity is actually being reinterpreted and modified. The original text of the page is transformed through automated means about which neither the reader nor the publisher has any understanding.

This is true even if you ask it for a direct quote. I asked for a full paragraph of a recent article and it mashed together two separate sections. They are direct quotes, to be sure, but the article must have been interpreted to generate this excerpt.1

It is simply not the case that requesting a webpage through Perplexity is akin to accessing the page via a web browser. It is more like automated traffic — even if it is being guided by a real person.

The existing mechanisms for restricting the use of bots on our websites are imperfect and limited. Yet they are the only tools we have right now to opt out of participating in A.I. services if that is something one wishes to do, short of putting pages or an entire site behind a user name and password. It is completely reasonable for someone to assume their signal of objection to any robotic traffic ought to be respected by legitimate businesses. The absolute least Perplexity can do is respect those objections by clearly and consistently identifying itself, and exclude websites which have indicated they do not want to be accessed by these means.


  1. I am not presently blocking Perplexity, and my argument is not related to its ability to access the article. I am only illustrating how it reinterprets text. ↥︎

Dhruv Mehrotra and Tim Marchman, of Wired, were able to confirm Robb Knight’s finding that Perplexity ignores the very instructions it gives website owners to opt out of scraping. And there is more:

The WIRED analysis also demonstrates that despite claims that Perplexity’s tools provide “instant, reliable answers to any question with complete sources and citations included,” doing away with the need to “click on different links,” its chatbot, which is capable of accurately summarizing journalistic work with appropriate credit, is also prone to bullshitting, in the technical sense of the word.

I had not played around with Perplexity very much, but I tried asking it “what is the bullshit web?”. Its summaries in response to prompts with and without a question mark are slightly different but there is one constant: it does not cite my original article, only a bunch of (nice) websites which linked to or reblogged it.

Takeshi Narabe, the Asahi Shimbun:

SoftBank Corp. announced that it has developed voice-altering technology to protect employees from customer harassment.

The goal is to reduce the psychological burden on call center operators by changing the voices of complaining customers to calmer tones.

The company launched a study on “emotion canceling” three years ago, which uses AI voice-processing technology to change the voice of a person over a phone call.

Penny Crosman, the American Banker:

Call center agents who have to deal with angry or perplexed customers all day tend to have through-the-roof stress levels and a high turnover rate as a result. About 53% of U.S. contact center agents who describe their stress level at work as high say they will probably leave their organization within the next six months, according to CMP Research’s 2023-2024 Customer Contact Executive Benchmarking Report.

Some think this is a problem artificial intelligence can fix. A well-designed algorithm could detect the signs that a call center rep is losing it and do something about it, such as send the rep a relaxing video montage of photos of their family set to music.

Here we have examples from two sides of the same problem: working in a call centre sucks because dealing with usually angry, frustrated, and miserable customers sucks. The representative probably understands why some corporate decision made the customer angry, frustrated, and miserable, but cannot really do anything about it.

So there are two apparent solutions here — the first reconstructs a customer’s voice in an effort to make them sound less hostile, and the second shows call centre employees a “video montage” of good memories as an infantilizing calming measure.

Brian Merchant wrote about the latter specifically, but managed to explain why both illustrate the problems created by how call centres work today:

If this showed up in the b-plot of a Black Mirror episode, we’d consider it a bit much. But it’s not just the deeply insipid nature of the AI “solution” being touted here that gnaws at me, though it does, or even the fact that it’s a comically cynical effort to paper over a problem that could be solved by, you know, giving workers a little actual time off when they are stressed to the point of “losing it”, though that does too. It’s the fact that this high tech cost-saving solution is being used to try to fix a whole raft of problems created by automation in the first place.

A thoughtful exploration of how A.I. is really being used which, combined with the previously linked item, does not suggest a revolution for anyone involved. It looks more like a cheap patch on society’s cracking dam.

Jonathan Maze, Restaurant Business Online:

McDonald’s is ending its two-year-old test of drive-thru, automated order taking (AOT) that it has conducted with IBM and plans to remove the technology from the more than 100 restaurants that have been using it.

[…]

McDonald’s has taken a deliberative approach on drive-thru AI even as many other restaurant chains have jumped fully on board. Checkers and Rally’s, Hardee’s, Carl’s Jr., Krystal, Wendy’s, Dunkin and Taco John’s are either testing or have implemented the technology in its drive-thrus.

Some of those chains “fully on board” with A.I. order-taking are customers of Presto which, according to reporting last year in Bloomberg, relied on outsourced workers in the Philippines for roughly 70% of the orders processed through its “A.I.” system. In a more recent corporate filing, human intervention has fallen to 54% of orders at “select locations” where Presto has launched what it calls its “most advanced version of [its] A.I. technology”. However, that improvement only applies to 55 of 202 restaurant locations where Presto is used. It does not say in that filing how many orders need human intervention at the other 147 locations.

Perhaps I am being unfair. Any advancements in A.I. are going to start off rocky, and will take a while to improve. They will understandably be mired in controversy, too. I am fond of how Cory Doctorow put it:

[…] their [A.I. vendors’] products aren’t anywhere near good enough to do your job, but their salesmen are absolutely good enough to convince your boss to fire you and replace you with an AI model that totally fails to do your job.

We can choose to create a world where even the smallest expressions of human creativity in our work are ceded to technology — or we can choose not to. I am not a doomsday person about A.I.; I have found it sometimes useful in home and work contexts. But I am not buying the hype either. The problem is that I think Doctorow might be right: the people making decisions may hold their nose over any concerns they could have about trust as they realize how much more productive someone can be when they no longer have to think so much, and how much less they can be paid. And then whatever standards we have for good enough fall off a cliff.

But the McDonald’s experiment is probably just silly.

Kif Leswing, CNBC:

Nvidia, long known in the niche gaming community for its graphics chips, is now the most valuable public company in the world.

[…]

Nvidia shares are up more than 170% so far this year, and went a leg higher after the company reported first-quarter earnings in May. The stock has multiplied by more than ninefold since the end of 2022, a rise that’s coincided with the emergence of generative artificial intelligence.

I know computing is math — even drawing realistic pictures really fast — but it is so funny to me that Nvidia’s products have become so valuable for doing applied statistics instead of for actual graphics work.

Patrick McGee, Financial Times, August 2022:

In interviews with 15 female Apple employees, both current and former, the Financial Times has found that Mohr’s frustrating experience with the People group has echoes across at least seven Apple departments spanning six US states.

The women shared allegations of Apple’s apathy in the face of misconduct claims. Eight of them say they were retaliated against, while seven found HR to be disappointing or counterproductive.

Ashley Belanger, Ars Technica, last week:

Apple has spent years “intentionally, knowingly, and deliberately paying women less than men for substantially similar work,” a proposed class action lawsuit filed in California on Thursday alleged.

[…]

The current class action has alleged that Apple continues to ignore complaints that the company culture fosters an unfair and hostile workplace for women. It’s hard to estimate how much Apple might owe in back pay and other damages should women suing win, but it could easily add up if all 12,000 class members were paid thousands less than male counterparts over the complaint’s approximately four-year span. Apple could also be on the hook for hundreds in civil penalties per class member per pay period between 2020 and 2024.

I pulled the 2022 Financial Times investigation into this because one of the plaintiffs in the lawsuit filed last week also alleges sexual harassment by a colleague which was not adequately addressed.

Stephen Council, SFGate:

The lawyer said that asking women about pay expectations “locks” past pay discrimination in and that the requirements of a job should determine pay. Finberg isn’t new to the fight over tech pay; he represented employees suing Oracle and Google for gender-based pay discrimination, securing $25 million and $118 million settlements, respectively.

Last year, Apple paid $25 million to settle claims it discriminated in U.S. hiring in favour of people whose ability to remain in the U.S. depended on their employment status.