Ted Chiang, the New Yorker:

It’s harder to imagine a program that, over many sessions, helps you write a good novel. This hypothetical writing program might require you to enter a hundred thousand words of prompts in order for it to generate an entirely different hundred thousand words that make up the novel you’re envisioning. It’s not clear to me what such a program would look like. Theoretically, if such a program existed, the user could perhaps deserve to be called the author. But, again, I don’t think companies like OpenAI want to create versions of ChatGPT that require just as much effort from users as writing a novel from scratch. The selling point of generative A.I. is that these programs generate vastly more than you put into them, and that is precisely what prevents them from being effective tools for artists.

Matt Muir, writer of the excellent Web Curios newsletter:

[…] Broadly speaking I agree with some of the points he makes, specifically about the requirement for art to have an element of intentionality which is necessarily absent from anything made by (current generative) AI being as all it is is maths, and maths cannot have intent. Equally, though, Chiang concedes that artists have made, are making, and will continue to make, work *in conjunction with* non-intentional systems, and that these works are perfectly capable of being considered as ‘art’. […]

Adi Robertson on Bluesky:

I can hazard lots of guesses why, but it’s striking that virtually none of the “can AI do art” conversation focuses on the most interesting examples I’ve seen, in which the interactive conversation between user and machine — rather than the end output — *is* the art.

Robertson points to the Are You the Asshole bot and the Hey Robot game as two examples, both of which are creative explorations of human–A.I. interaction. Whether those conversations are considered “art” is something I will leave for others to decide; I spent a bachelor’s degree hearing hundreds of people ask that question and I lost my patience for it.

Robertson’s observation is a spiritual successor to my issue with Instagram bait art installations: neither is necessarily cheapening art, but I wish artists treated social media and, now, A.I. with less formalism and more conceptualism. Artists can eke compelling works out of any medium. In fact, the very suspicion of A.I.’s involvement in art seems likely to lend itself to surprising and moving works in the hands of suitably talented artists.

I got a batch of film scans from the developer today and realized I needed a better process for converting them — better, that is, than the way I had been doing it, which was to flip the curves in Lightroom and then do all my corrections in reverse.

I played around with the Filmomat SmartConvert demo but I did not like the workflow enough to consider paying for it. I really like the results I got from Negative Lab Pro and I think the USD $99 price tag is reasonable. However, its main selling point — that DNG scans remain in DNG format — is also its drawback: your workflow is still going to be the reverse of what you expect because, under the hood, the image is still a negative.

That pushed me to try Grain2Pixel which, from a getting-started perspective, is more cumbersome than the other two options, particularly as macOS is alarmed you are trying to use unsigned software.

But once you get that sorted out and install the script, it makes quick work of batch processing a folder of DNGs into TIFF images. Then you can import them into Lightroom and make corrections in positive colour. You do not need to worry too much about a loss of range — TIFF is plenty flexible in post, at least for my amateur purposes. I am very happy with the resulting images.
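Converting a negative is, at its core, the same operation as that Lightroom curve flip: normalize out the orange film base, then invert. Here is a minimal sketch in Python with NumPy; the percentile-based film-base estimate is my own assumption, a crude stand-in for the mask correction tools like Negative Lab Pro and Grain2Pixel perform properly:

```python
import numpy as np

def invert_negative(scan, film_base=None):
    """Convert a linear film-negative scan to a rough positive.

    scan: float RGB array in [0, 1], shape (H, W, 3).
    film_base: per-channel colour of the unexposed film border;
    if omitted, estimated from the brightest pixels per channel.
    """
    if film_base is None:
        # The brightest parts of a negative are the clear film base;
        # a high percentile per channel approximates its colour.
        film_base = np.percentile(scan.reshape(-1, 3), 99, axis=0)
    # Divide out the orange mask, then invert the tones.
    normalized = np.clip(scan / film_base, 0.0, 1.0)
    return 1.0 - normalized
```

Real converters do far more, such as per-channel tone curves and colour-space handling, which is why the paid options earn their keep.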

Danielle Deschamps, in the conclusion to a rather interesting chapter from “Contemporary Issues in Collection Management”, hosted by Open Education Alberta:

Ebook licensing agreements have become the widespread norm for library ebook access. Yet, between libraries and publishers, these agreements, the terms of which are set by publishers, have devolved to an extent that libraries are struggling to maintain their access to ecollections. Publishers perceive libraries as harming their bottom lines and libraries are in a particularly vulnerable place, without much negotiating power. However, there are several optional ways for public libraries to move forward, in effort of balancing their financial capacity while maintaining their ethical principle of respecting intellectual property rights. […]

The subsequent chapter specifically about ebook pricing is also a terrific primer.

Daniel A. Gross, writing in September 2021, in the New Yorker:

To illustrate the economics of e-book lending, the N.Y.P.L. sent me its January, 2021, figures for “A Promised Land,” the memoir by Barack Obama that had been published a few months earlier by Penguin Random House. At that point, the library system had purchased three hundred and ten perpetual audiobook licenses at ninety-five dollars each, for a total of $29,450, and had bought six hundred and thirty-nine one- and two-year licenses for the e-book, for a total of $22,512. Taken together, these digital rights cost about as much as three thousand copies of the consumer e-book, which sells for about eighteen dollars per copy. As of August, 2021, the library has spent less than ten thousand dollars on two hundred and twenty-six copies of the hardcover edition, which has a list price of forty-five dollars but sells for $23.23 on Amazon. A few thousand people had checked out digital copies in the book’s first three months, and thousands more were on the waiting list. (Several librarians told me that they monitor hold requests, including for books that have not yet been released, to decide how many licenses to acquire.)

If you want to know why publishers so aggressively fought the Internet Archive on its model of lending out scanned copies of physical books, this is the reason. Publishers have created a model which fundamentally upsets a library’s ability to function. There is no scarcity in bytes, so publishers have created a way to charge more for something limitless, weightless, with nearly no storage costs.
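Gross’s comparison is easy to verify using only the figures quoted above; a quick back-of-envelope check:

```python
# Figures from Gross's New Yorker report on the NYPL's digital
# licenses for "A Promised Land" (January 2021).
audiobook_total = 310 * 95      # perpetual audiobook licenses at $95
ebook_total = 22_512            # 639 one- and two-year ebook licenses
digital_total = audiobook_total + ebook_total

# The consumer ebook sells for about $18 per copy.
equivalent_copies = digital_total / 18

print(audiobook_total)           # 29450, as reported
print(digital_total)             # 51962
print(round(equivalent_copies))  # 2887, i.e. "about three thousand copies"
```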

What the Internet Archive did was perhaps a legal long-shot, and I worry about the effects of this suit and the one over shellac 78s. But it is hard not to see publishers as the real villains in this mess. They are consolidating power and charging even legitimate libraries unreasonable amounts of money for electronic copies of books which the publishers and their intermediaries ultimately still control.

At the beginning of August, Nassim Haramein sued RationalWiki on charges of defamation, conspiracy, and invasion of privacy. Regardless of the merits of the suit — I write, trying not to fall afoul of an obviously litigious individual — RationalWiki is a small, volunteer-run operation and will need legal representation to avoid losing next week by default. The site is currently soliciting donations.

I think the world is better for having RationalWiki in it. If you have the means and would like to chip in, I am sure the administrators there would appreciate it.

Update: RationalWiki has been SLAPP-ed into settling. Donations will go toward a proper legal fund.

Jason Koebler, 404 Media:

The chats show 22 instances in which one Google employee told another Google employee to turn chat history off. In total, the court has dozens of specific employees who have told others to turn history off in DMs or broader group chats and channels. The document includes exchanges like this (each exchange includes different employees) […]

These examples are equal parts amusing, blatant, and telling. I doubt this is isolated; similar policies are probably standard at other companies. But this was apparently part of Google’s training for new employees.

The Economist:

So how big is too big? At what point do the costs of the heaviest vehicles — measured in lives lost — vastly exceed their benefits? To answer this question, The Economist compiled ten years’ worth of crash data from more than a dozen states. Like the data compiled by Messrs Anderson and Auffhammer, our figures come from reports filed by police officers, who are tasked with recording information about car crashes when called to the scene. Although all states collect such data, we focus on those that collect the most detailed figures and share them with researchers. The resulting dataset, which covers more than a third of America’s population, provides us with a sample that is both big and representative.

The results? According to the Economist, “if the heaviest tenth of vehicles in America’s fleet were downsized […] road fatalities in multi-car crashes — which totaled 19,081 in 2023 — could be reduced by 12%, or 2,300, without sacrificing the safety of any cars involved”.
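As a sanity check on the Economist’s numbers, twelve percent of the 2023 multi-car fatality total lands close to the 2,300 figure:

```python
multi_car_fatalities_2023 = 19_081
saved = multi_car_fatalities_2023 * 0.12  # the quoted 12% reduction
print(round(saved))  # 2290, which the Economist rounds to 2,300
```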

Andre Mayer and Emily Chung, reporting for CBC News in June:

But the ubiquity of SUVs and trucks isn’t an accurate reflection of what people want to drive, say industry analysts.

The trend has been greatly influenced by a combination of savvy marketing, government regulations that incentivize bigger vehicles and limited supply of more modest ones.

Indeed, much of it is driven by one simple economic fact.

“Smaller cars are less profitable,” said Stephanie Brinley, associate director at U.S.-based transportation consultancy S&P Global Mobility.

People are guided to purchase an SUV or truck in the United States and Canada because most cities oblige us to own a vehicle of some kind, inexpensive cars are not generally available, and other people drive oversized SUVs and trucks, which makes us scared of driving anything smaller. Repeat until around 80% of new vehicle sales are various kinds of SUVs and trucks.

This forced market is dangerous for everybody except for those who are inside a large SUV or truck. It means headlights from oncoming traffic at eye level. It means small roads are less navigable and parking spaces need to be made larger. It means roads feel more dangerous so fewer people feel comfortable walking or cycling. It means more people are seriously injured and die. All because these vehicles are more profitable, many cities are inaccessible by other means, automakers have artificially constrained their wares, and people feel roads are competitive instead of cooperative.

Sérgio Spagnuolo, Sofia Schurig, and Pedro Nakamura, Núcleo:

A Supreme Court Justice ordered, on Friday (August 30, 2024), the complete suspension of all access to X (formerly Twitter) across the entire Brazilian territory, in an unprecedented ruling against the social platform.

[…]

In a ruling issued on the afternoon of Aug. 31, Justice Alexandre de Moraes ordered the president of Brazil’s telecom regulator, Anatel, Carlos Manuel Baigorri, to ensure that necessary measures are taken and that internet companies are notified to block the application within 24 hours.

An un-bylined report from Al Jazeera:

At the core of the dispute, de Moraes argues that Musk refused earlier this year to block accounts responsible for the spread of fake news, hate speech and attacks on the rule of law.

At the time, Musk denounced the order as censorship and responded by closing the company’s offices in Brazil while ensuring the platform was still available in the country.

Mike Masnick, Techdirt:

And, of course, as a reminder, before Elon took over Twitter (but while he was in a legal fight about it), he accused the company of violating the agreement because of its legal fight against the Modi government over their censorship demands. I know it’s long forgotten now, but one of the excuses Elon used in trying to kill the Twitter deal was that the company was fighting too hard to protect free speech in India.

And then, once he took over, he not only caved immediately to Modi’s demands, he agreed to block the content that the Modi government ordered blocked globally, not just in India.

So Elon isn’t even consistent on this point. He folds to governments when he likes the leadership and fights them when he doesn’t. It’s not a principled stance. It’s a cynical, opportunistic one.

This is being compared by some to the arrest of Pavel Durov but, again, I am not sure I see direct parallels. This Brazilian law seems, from my Canadian perspective, more onerous and restrictive than those from most other liberal democracies. But I do not know much of anything about Brazilian policy, and perhaps this is in line with local expectations.

This is probably not the reason Bluesky wanted for growing by two million new users in one week.

Robert Reich, former U.S. Secretary of Labor for the Clinton administration and Sam Reich’s dad, wrote about Elon Musk’s political influence in an editorial for the Guardian. It begins as a decent piece, comparing the power of owning a social media platform with Musk’s childlike gullibility — my words, not Reich’s. But, in a section of ideas about what to do, one suggestion seems particularly harmful:

3. Regulators around the world should threaten Musk with arrest if he doesn’t stop disseminating lies and hate on X.

Global regulators may be on the way to doing this, as evidenced by the 24 August arrest in France of Pavel Durov, who founded the online communications tool Telegram, which French authorities have found complicit in hate crimes and disinformation. Like Musk, Durov has styled himself as a free speech absolutist.

There are places where the U.S.-style interpretation of free expression is contradicted by local laws, and X’s operations must comply there. Maybe Musk could be legally responsible in some jurisdiction for things he has said, or for things hosted on a platform he owns. But we should almost never encourage the idea of arresting people for things they say. Yes, there are limits: threats of violence and fraud are both generally illegal types of speech. Yet charging Musk for being a loud public idiot is a very bad idea.

Also, while details about Pavel Durov’s arrest are still solidifying, it does not yet appear he is being held responsible for “hate crimes and disinformation”. According to a statement from French prosecutors (PDF), which I translated with DeepL, the charges are mostly about failing to comply with subpoenas and other legitimate legal demands. If X follows legal avenues for either complying with or disputing government demands, then I do not see how Durov’s arrest is even relevant. And, for what it is worth, neither Durov nor Telegram has been “found complicit” in anything. The United States is not the only country which has legal procedures.

In response to Reich’s article, a troll X account posted a screenshot of a 4chan post about “low T men”, itself containing an arguably antisemitic meme, which was quoted by Musk calling it an “interesting observation”. Just more evidence Musk is a big, dumb, rich, influential moron.

Maryclaire Dale, Associated Press:

A U.S. appeals court revived on Tuesday a lawsuit filed by the mother of a 10-year-old Pennsylvania girl who died attempting a viral challenge she allegedly saw on TikTok that dared people to choke themselves until they lost consciousness.

While federal law generally protects online publishers from liability for content posted by others, the court said TikTok could potentially be found liable for promoting the content or using an algorithm to steer it to children.

Notably, the “Blackout Challenge” or the “Choking Game” is one of the few internet challenges for teenagers which is neither a media-boosted fiction nor relatively harmless. It has been circulating for decades, and was connected with 82 deaths in the United States alone between 1995 and 2007. Which, yes, is before TikTok or even social media as we know it today. Melissa Chan reported in a 2018 Time article that its origins go back to at least the 1930s.

Mike Masnick, of Techdirt, not only points out the extensive Section 230 precedent ignored by the Third Circuit in its decision, he also highlights the legal limits of publisher responsibility:

We have some caselaw on this kind of thing even outside of the internet context. In Winter v. GP Putnam’s Sons, it was found that the publisher of an encyclopedia of mushrooms was not liable for “mushroom enthusiasts who became severely ill from picking and eating mushrooms after relying on information” in the book. The information turned out to be wrong, but the court held that the publisher could not be held liable for those harms because it had no duty to carefully investigate each entry.

Matt Stoller, on the other hand, celebrates the Third Circuit’s ruling as an end to “big tech’s free ride on Section 230”:

Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech. And now TikTok has to answer for it in court. Basically, the court ruled that when a company is choosing what to show kids and elderly parents, and seeks to keep them addicted to sell more ads, they can’t pretend it’s everyone else’s fault when the inevitable horrible thing happens.

And that’s a huge rollback of Section 230.

On a legal level, both Masnick and Stoller agree the Third Circuit’s ruling creates a massive change in U.S. internet policy and, because of current structures, the world’s. But they vehemently disagree on whether this is a good thing. Masnick says it is not, and I am inclined to agree. Not only is there legal precedent on his side, there are plenty of very good reasons why Section 230 is important to preserve more-or-less the way it has existed for decades.

However, it seems unethical for TikTok to have no culpability for how users’ dangerous posts are recommended, especially to children. Perhaps legal recourse is wrong in this case and others like it, yet it just feels wrong for this case to eventually — after appeals and escalation to, probably, the Supreme Court — be summarily dismissed on the grounds that corporations have little responsibility or care for automated recommendations. There is a real difference between teenagers spreading this challenge one-on-one for decades and teenagers broadcasting it — or, at least, there ought to be a difference.

I do not wish to make a whole big thing out of this, but I have noticed a bunch of little things which make my iPhone a little bit harder to use. For this, I am setting aside things like rearranging the Home Screen, which still feels like playing Tetris with an adversarial board. These are all things which are relatively new, beginning with the always-on display and the Island in particular, neither of which I had on my last iPhone.

The always-on display is a little bit useful and a little bit of a gimmick. I have mine set to hide the wallpaper and notifications. In this setup, however, the position of media controls becomes unpredictable. Imagine you are listening to music when someone wishes to talk to you. You reach down to the visible media controls and tap where the pause button is, knowing that this only wakes the display. You go in for another tap to pause but — surprise — you got a notification at some point and, so, now that you have woken up the display, the notification slides in from the bottom and moves the media controls up, so you have now tapped on a notification instead.

I can resolve this by enabling notifications on the dimmed lock screen view, but that seems more like a workaround than a solution to this unexpected behaviour. A simple fix would be to not show media controls when the phone is locked and the display is asleep. They are not functional in that state, but they create an expectation of where those controls will be, and that expectation breaks as soon as a notification slides in.

The Dynamic Island is fussy, too. I frequently interact with it for media playback, but it has a very short time-out. That is, if I pause media from the Dynamic Island, the ability to resume playback disappears after just a few seconds; I find this a little disorientating.

I do not understand how to swap the priority or visibility of Dynamic Island Live Activities. That is to say the Dynamic Island will show up to two persistent items, one of which will be minimized into a little circular icon, while the other will wrap around the display cutout. Apple says I should be able to swap the position of these by swiping horizontally, but I can only seem to make one of the Activities disappear no matter how I swipe. And, when I do make an Activity disappear, I do not know how I can restore it.

I find a lot of the horizontal swiping gestures too easy to activate in the Dynamic Island — I have unintentionally made an Activity disappear more than once — and across the system generally. It seems only a slightly off-centre angle is needed to transform a vertical scrolling action into a horizontal swiping one. Many apps make use of “sloppy” swiping — being able to swipe horizontally anywhere on the display to move through sequential items or different pages — and vertical scrolling in the same view, but the former is too easy for me to trigger when I intend the latter.

I also find the area above the Dynamic Island too easy to touch when I am intending to expand the current Live Activity. This will be interpreted as touching the Status Bar, which will jump the scroll position of the current view to the top.

Lastly, the number of unintended taps I make has, anecdotally, skyrocketed. One reason for this is a change made several iOS versions ago to recognize touches more immediately. If I am scrolling a long list and I tap the display to stop the scroll in-place, resting my thumb onscreen is sometimes read as a tap action on whatever control is below it. Another reason for accidental touches is that pressing the sleep/wake button does not immediately stop interpreting taps on the display. You can try this now: open Mail, press the sleep/wake button, then — without waiting for the display to fall asleep — tap some message in the list. It is easy to do this accidentally when I return my phone to my pocket, for example.

These are all little things but they are a cumulative irritation. I do not think my motor skills have substantially changed in the past seventeen years of iOS device use, though I concede they have perhaps deteriorated a little. I do notice more things behaving unexpectedly. I think part of the reason is this two-dimensional slab of glass is being asked to interpret a bunch of gestures in some pretty small areas.

Lauren Theisen, Defector:

Columbus Blue Jackets winger Johnny Gaudreau and his brother Matthew were killed by a car while biking in Oldmans Township, New Jersey on Thursday night, according to New Jersey State Police. Johnny was 31, and Matthew was 29.

The brothers, originally from New Jersey, were in the area for their sister Katie’s wedding, which was scheduled for Friday. Around 8:00 p.m., police say, the driver of a Jeep Grand Cherokee hit them from behind while trying to pass an SUV that had made room for the bikers. The driver has been charged with two counts of death by auto, and police suspect that the driver had been drinking.

I am not much of a sports person; I do not really follow hockey. But I knew of Gaudreau as a longtime Calgary Flames player. His death and his brother’s were completely avoidable: they would not have been killed if this driver had not been drinking, had not attempted to pass so recklessly, or had not been driving an SUV.

As Theisen writes, over a thousand cyclists were killed by drivers in 2022 in the United States alone. This is a high-profile tragedy, but not an outlier.

Juli Clover, MacRumors:

With the third beta of iOS 18.1, Apple has introduced new Apple Intelligence features for notifications. The notification summarization option that was previously available for the Mail and Messages apps now works with all of your apps.

Matt Birchler posted a video of the screen advertising this feature, showing how the “crazy ones” script could be summarized:

Woof, come up with a better example for this during iOS 18.1 startup, Apple. Sucking all the life out of the “here’s to the crazy ones” piece is a bad look.

Not the worst crime of all time or anything, but not great for those who are upset about AI feature sucking the humanity out of art.

Aside from the gall of simplifying an iconic ad campaign to a single-sentence description, this screen barely makes sense. I am guessing few people receive poems or creative writing in an application’s notifications. Those who do would probably prefer it not be summarized. Surely the whole point of a feature like this is to remove the corporate mumbo jumbo from an executive’s email, or to condense a set of alerts from the same app into a single notification.

Sometimes, it is worth taking a second to think about how things look. Part of what makes new technologies special is how they enable human creativity and expression. Not every new invention will be to that end, but surely technology should not be treated as a goal unto itself. If the showcase use of A.I. summarization is to strip a poem — albeit one written for an ad — down to its literal message, what are we even trying to do here?

Cyrille Louis, Le Figaro, originally in French and translated here with DeepL:

After four days in police custody, Pavel Dourov, founder and boss of the encrypted messaging service Telegram, was indicted in Paris on Wednesday evening by two examining magistrates for a litany of offences relating to organised crime, Paris prosecutor Laure Beccuau announced in a statement. The 39-year-old entrepreneur was released under a strict judicial supervision order, which includes the obligation to post a €5 million bond, to report to the police twice a week and to refrain from leaving French territory.

The charges are related to criminal uses of Telegram’s platform and its refusal to cooperate with authorities. I know there are some people who are worried about the potential implications of this for other services. I am not yet sure whether these concerns are merited.

TJ McIntyre:

Anyway, what legal issues arise from the investigation? The content moderation ones are easiest; if Telegram has been notified of CSAM, etc. and has failed to act then it loses the hosting immunity under Art 6 DSA and may be liable under French law on complicity.

The issue of failure to respond to official requests for data may be more difficult. The Telegram entities seem to be based in multiple non-EU jurisdictions, including the British Virgin Islands and Dubai, and Telegram may attempt to argue that French orders do not have extraterritorial effect.

Adam Satariano and Cecilia Kang, of the New York Times, compared Durov’s arrest to those of Megaupload’s Kim Dotcom and the Silk Road’s Ross Ulbricht, neither of which I find particularly controversial. Perhaps I should; let me know if you think either arrest was unjustified. If Durov knew about criminal activity on Telegram and took little action to curtail it — which seems to be the case — it seems reasonable to hold him accountable for his company’s facilitation of that activity.

And from an un-bylined story in Le Monde:

His [Durov’s] lawyer David-Olivier Kaminski said it was “absurd” to suggest Durov could be implicated in any crime committed on the app, adding: “Telegram complies in all respects with European rules concerning digital technology.”

Separately, Durov is also being investigated on suspicion of “serious acts of violence” towards one of his children while he and an ex-partner, the boy’s mother, were in Paris, a source said. She also filed another complaint against Durov in Switzerland last year.

Maybe Durov is a piece of shit and Telegram sucks and this is also worrisome for civil liberties. But we do not yet have evidence for any of these things.

Paul Frazee, on Bluesky’s blog, announced a set of new “anti-toxicity” features. This one seems particularly good:

As of the latest app version, released today (version 1.90), users can view all the quote posts on a given post. Paired with that, you can detach your original post from someone’s quote post.

Quote posts are a good feature, says someone who writes a website largely built around quotes from others, and I appreciate the benefits they provide. But there are also times when someone could be inundated with hostile mentions because they were quoted by someone with a large audience. This is a good way of allowing them to back out while retaining the feature.

Bluesky continues to do some really interesting stuff — from new things like Starter Packs, to rethinking established norms of social media platforms. I hope it succeeds.

French prosecutor Laure Beccuau (PDF) on Monday disclosed the reasons for Pavel Durov’s arrest and detainment. The first two pages are in French; the last two are in English.

Mike Masnick, Techdirt:

In the end, though, a lot of this does seem potentially very problematic. So far, there’s been no revelation of anything that makes me say “oh, well, that seems obviously illegal.” A lot of the things listed in the charge sheet are things that lots of websites and communications providers could be said to have done themselves, though perhaps to a different degree.

Among the things being investigated by French authorities “against person unnamed” — not necessarily Durov — are “complicity” with various illegal communications, money laundering, and providing cryptography tools without authorization or registration. The latter category has raised the eyebrows of many but, I believe, must be read in the context of the whole list of charges. That is, this is not a pure objection to encrypted communications — to the extent Telegram chats may be encrypted — but unauthorized encryption used in complicity with other crimes.

In a way, that might be worse — all forms of communication, no matter whether they are encrypted, are used to facilitate crime. But providers of end-to-end encryption are facing seemingly endless proposals to weaken its protections. I do not think this is France trying to create a backdoor.

I think France is trying to pressure one of its own — Durov is a French citizen — to moderate the massive social network he runs within sensible boundaries. Telegram is proudly carefree, which means it ignores CSAM reports and, according to an April report (PDF) from the Stanford Internet Observatory, does not appear to scan for known CSAM at all.

Telegram appears to believe it is a dumb pipe for users no matter whether they are communicating one-on-one or to a crowd of hundreds of thousands. It seems to think it has no obligation to cooperate with law enforcement in almost any circumstance.

Casey Newton, Platformer:

Anticipating these requests, Telegram created a kind of jurisdictional obstacle course for law enforcement that (it says) none of them have successfully navigated so far. From the FAQ again:

To protect the data that is not covered by end-to-end encryption, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions. The relevant decryption keys are split into parts and are never kept in the same place as the data they protect. As a result, several court orders from different jurisdictions are required to force us to give up any data. […] To this day, we have disclosed 0 bytes of user data to third parties, including governments.

It is important to more fully contextualize Telegram’s claim since it does not seem to be truthful. In 2022, Der Spiegel reported Telegram had turned over data to German authorities about users who had abused its platform. However, following an in-app user vote, it seems Telegram’s token willingness to cooperate with law enforcement on even the most serious of issues dried up.

I question whether Telegram’s multi-jurisdiction infrastructure promise is even real, much less protective against legal demands, given it says so in the same FAQ section as its probably wrong “0 bytes of user data” claim. Even so, Telegram says it “can be forced to give up data only if an issue is grave and universal enough” for several unrelated and possibly adversarial governments to agree on the threat. CSAM is globally reviled. Surely even hostile governments could agree on tracking those predators. Yet it seems Telegram, by its own suspicious “0 bytes” statistic, has not complied with even those requests.
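Telegram has not published how this key split actually works, so the following is only a sketch of the general idea its FAQ describes: an all-or-nothing split in which every share is required to reconstruct the key, and any incomplete set reveals nothing. A minimal XOR-based version, purely for illustration and not Telegram’s actual implementation:

```python
import os

def split_key(key: bytes, parts: int) -> list[bytes]:
    """Split a key into `parts` shares; ALL shares are needed to recover it."""
    # Generate parts - 1 fully random shares...
    shares = [os.urandom(len(key)) for _ in range(parts - 1)]
    # ...then compute a final share so that XOR-ing everything yields the key.
    last = key
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def recover_key(shares: list[bytes]) -> bytes:
    """XOR every share together to reconstruct the original key."""
    key = bytes(len(shares[0]))
    for share in shares:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key
```

Because each share short of the full set is indistinguishable from random noise, a data centre holding one share — or a court compelling one jurisdiction — learns nothing on its own. That is the property Telegram’s FAQ is gesturing at; whether its production systems implement anything like it is, again, unverified.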

Durov’s arrest presents an internal conflict for me. A world in which facilitators of user-created data are responsible for their users’ every action is not conducive to effective internet policy. On the other hand, I think corporate executives should be more accountable for how they run their businesses. If Durov knew about severe abuse and impeded investigations by withholding information the company possessed, that should be penalized.

As of right now, though, all we have are a lot of questions about what this arrest means. There is simply little good information, and the crumbs that are available lead to yet more confusion.

The extremely normal U.S. House Committee on the Judiciary posted a letter from Mark Zuckerberg to Chairman Jim Jordan.1 In it, Zuckerberg says Meta felt “pressured” by the Biden administration to more aggressively moderate users’ posts during the COVID-19 pandemic, that the administration was “wrong” for doing so, and that he “regret[s] that we were not more outspoken about it”.

This is substantially not news. Ryan Tracy of the Wall Street Journal reported the existence of these grievances within Meta last June. To be clear, this is contrition over Meta’s reluctance to more forcefully respond to government complaints about platform moderation. Nevertheless, it set off a wave of coverage about the Biden administration’s social media complaints during the pandemic.

Look a little closer, though, and it is a fairly embarrassing message which comes across less as a “big win for free speech”, as the Committee called it, and more like sophistry. Zuckerberg admits Meta decided its own moderation policy. It chose which actions to take, including issuing a direct response to the administration at the time. The government’s actions were also not as chilling as they sound. Indeed, many of the same issues were raised in Murthy v. Missouri, and were grossly misrepresented to portray U.S. officials as censorial and threatening, when the record shows tense conversations during a global pandemic.

But I wanted to draw your attention to something specific in Zuckerberg’s letter, as summarized by Hannah Murphy, of the Financial Times:

Zuckerberg also said he would no longer make a contribution to support electoral infrastructure via the Chan Zuckerberg Initiative, his philanthropic group, as he had previously done. The donations totalled more than $400mn and were made to non-profit groups including the Chicago-based Center for Tech and Civic Life. They were intended to make sure local election jurisdictions would have appropriate voting resources during the pandemic, he said. But he added that they had been interpreted as “benefiting one party over the other”.

Zuckerberg does not say who, specifically, interpreted his foundation’s contributions toward promoting information about voting as somehow partisan, nor does he question the validity of these ridiculous complaints. But his concerns about the appearance of personal partisanship do not seem to carry over to his company. To name just one example, Meta is listed as a sponsor of the 2024 Canada Strong and Free Regional Networking Conference, a conservative activist event which this year is hosting Chris Rufo. That sponsorship is what kicked me into writing this whole thing instead of being satisfied with a couple of snarky posts. How is it that Meta will happily contribute to an explicitly partisan group, but Zuckerberg’s foundation promoting the general concept of voting is beyond the pale?

This letter is Zuckerberg ingratiating himself with lawmakers investigating a supposed conspiracy between tech companies, watchdog organizations, and an opposition political party. It is politically beneficial to a specific party and viewpoint. For Zuckerberg, whose objective is nominally to “not play a role one way or another — or to even appear to be playing a role”, this seems like a dishonest choice.


  1. The letter’s paragraphs are fully justified but hyphenation has not been enabled, so it looks like crap and readability suffers. ↥︎

Joseph Cox, 404 Media:

Media giant Cox Media Group (CMG) says it can target adverts based on what potential customers said out loud near device microphones, and explicitly points to Facebook, Google, Amazon, and Bing as CMG partners, according to a CMG presentation obtained by 404 Media.

The deck says things like “smart devices capture real-time intent data by listening to our conversations” which seems like an obviously privacy-hostile invention on its face. But I continue to doubt any of this voice collection is actually happening, no matter how many buzzwords Cox Media Group throws in a PowerPoint presentation, when there is a far simpler explanation: they are lying. It already feels like behavioural advertising is targeting every word we say, so why not lean into that? Unscrupulous marketers love that kind of stuff. Feed them what they want.

If anyone from Cox Media Group would like to prove to me this is happening as described, give me a demo. I would love to see your creepy technology.

Jess Weatherbed, the Verge:

Image manipulation techniques and other methods of fakery have existed for close to 200 years — almost as long as photography itself. (Cases in point: 19th-century spirit photography and the Cottingley Fairies.) But the skill requirements and time investment needed to make those changes are why we don’t think to inspect every photo we see. Manipulations were rare and unexpected for most of photography’s history. But the simplicity and scale of AI on smartphones will mean any bozo can churn out manipulative images at a frequency and scale we’ve never experienced before. It should be obvious why that’s alarming.

This excellent piece is a necessary correction for too-simple comparisons between Google’s Reimagine feature and Adobe Photoshop. It also encouraged me to re-read my own article about the history of photo manipulation to see if it holds up and, thankfully, I think it mostly does, even as Google’s A.I. editing tools have advanced from useful to irresponsible.

Last year’s features mostly allowed users to reposition and remove objects from their shots. This still seems fine, but one aspect of my description has not aged well. I wrote, in the context of removing a trampoline from a photo of a slam dunk, that Google’s tools make it “a little bit easier […] to lie”. For object removal, that remains true; for object addition — which is what Google’s Reimagine feature allows — it is much easier.

Me:

The questions that are being asked of the Pixel 8’s image manipulation capabilities are good and necessary because there are real ethical implications. But I think they need to be more fully contextualized. There is a long trail of exactly the same concerns and, to avoid repeating ourselves yet again, we should be asking these questions with that history in mind. This era feels different. I think we should be asking more precisely why that is.

Between Weatherbed’s piece and Sarah Jeong’s article on similar themes, I think some better context is rapidly taking shape, driven largely by Google’s decision to include additive features with few restrictions. A more responsible implementation of A.I. additions would limit the kinds of objects which could be added — balloons, fireworks, a big red dog. But, no, it is more important to Google — and X — to demonstrate their technological bona fides.

These technologies are different because they allow basically anyone to make basically any image realistically and on command with virtually no skill. Oh, and they can share them instantly. Two hundred years of faked photos cannot prepare us for the wild ride ahead.

Kate Conger and Ryan Mac, in an excerpt from their forthcoming book “Character Limit” published in the New York Times:

Mr. Musk’s fixation on Blue extended beyond the design, and he engaged in lengthy deliberations about how much it should cost. Mr. [David] Sacks insisted that they should raise the price to $20 a month, from its current $4.99. Anything less felt cheap to him, and he wanted to present Blue as a luxury good.

[…]

Mr. Musk also turned to the author Walter Isaacson for advice. Mr. Isaacson, who had written books on Steve Jobs and Benjamin Franklin, was shadowing him for an authorized biography. “Walter, what do you think?” Mr. Musk asked.

“This should be accessible to everyone,” Mr. Isaacson said, no longer just the fly on the wall. “You need a really low price point, because this is something that everyone is going to sign up for.”

I learned a wonderfully specific German word today as a direct result of this article: fremdschämen. It is more or less the opposite of schadenfreude: instead of being pleased by someone else’s embarrassment, you feel their pain.

This is humiliating for everyone involved: Musk, Sacks — who compared Twitter’s blue checkmarks to a Chanel handbag — and Jason Calacanis of course. But most of all, this is another blow to Isaacson’s credibility as an ostensibly careful observer of unfolding events.

Max Tani, of Semafor, was tipped off to Isaacson’s involvement earlier this year by a single source:

“I wanted to get in touch because we’re including an item in this week’s Semafor media newsletter reporting that you actually set the price for Twitter Premium,” I wrote to Isaacson in March. “We’ve heard that while you were shadowing Elon Musk for your book, he told Twitter staff that you had advised him on what the price should be, and he thought it was a good idea and implemented it.”

“Hah! That’s the first I’d heard of this. It’s not true. I’m not even sure what the price is. Sorry,” he replied.

This denial is saved from being a lie only on the grounds that Isaacson did not literally “set the price”, as Tani put it, on the subscription service. In all meaningful ways, though, it is deceptive.

An un-bylined report in Le Monde:

French judicial authorities on Sunday extended the detention of the Russian-born founder and chief of Telegram Pavel Durov after his arrest at a Paris airport over alleged offenses related to the popular but controversial messaging app.

I believe it is best to wait until there is a full description of the crimes French authorities are accusing Durov of committing before making judgements about the validity of this arrest. Regardless of what is revealed, I strongly suspect many of the loudest knee-jerk reactions will look pretty stupid, and those who made them will, in all likelihood, dig in their heels and look even stupider in the process.

This Le Monde article goes on to describe Telegram as an “encrypted messaging app”.

Matthew Green:

But this arrest is not what I want to talk about today.

What I do want to talk about is one specific detail of the reporting. Specifically: the fact that nearly every news report about the arrest refers to Telegram as an “encrypted messaging app.” […]

This phrasing drives me nuts because in a very limited technical sense it’s not wrong. Yet in every sense that matters, it fundamentally misrepresents what Telegram is and how it works in practice. And this misrepresentation is bad for both journalists and particularly for Telegram’s users, many of whom could be badly hurt as a result.

Despite the company’s press page saying “[e]verything sent on Telegram is securely encrypted” and building much of its marketing around how “safe” and “secure” it is, there is a big difference between what Telegram does and the end-to-end encryption used by services like Signal and WhatsApp. There is, in fact, no way to enable what Telegram calls “secret chats” by default.

One can quibble with Telegram’s choices. Whether to use an app which does not support end-to-end encryption by default is very much a user’s choice. But one can only make that choice if Telegram provides accurate and clear information. I have long found Apple’s marketing of iMessage deceptive; Telegram’s explanation of its own privacy and security is far more exploitative of users’ trust.