Month: September 2024

Sérgio Spagnuolo, Sofia Schurig, and Pedro Nakamura, Núcleo:

A Supreme Court Justice ordered, on Friday (August 30, 2024), the complete suspension of all access to X (formerly Twitter) across the entire Brazilian territory, in an unprecedented ruling against the social platform.

[…]

In a ruling issued on the afternoon of Aug. 30, Justice Alexandre de Moraes ordered the president of Brazil’s telecom regulator, Anatel, Carlos Manuel Baigorri, to ensure that necessary measures are taken and that internet companies are notified to block the application within 24 hours.

An un-bylined report from Al Jazeera:

At the core of the dispute, de Moraes argues that Musk refused earlier this year to block accounts responsible for the spread of fake news, hate speech and attacks on the rule of law.

At the time, Musk denounced the order as censorship and responded by closing the company’s offices in Brazil while ensuring the platform was still available in the country.

Mike Masnick, Techdirt:

And, of course, as a reminder, before Elon took over Twitter (but while he was in a legal fight about it), he accused the company of violating the agreement because of its legal fight against the Modi government over their censorship demands. I know it’s long forgotten now, but one of the excuses Elon used in trying to kill the Twitter deal was that the company was fighting too hard to protect free speech in India.

And then, once he took over, he not only caved immediately to Modi’s demands, he agreed to block the content that the Modi government ordered blocked globally, not just in India.

So Elon isn’t even consistent on this point. He folds to governments when he likes the leadership and fights them when he doesn’t. It’s not a principled stance. It’s a cynical, opportunistic one.

Some are comparing this to the arrest of Pavel Durov but, again, I am not sure I see direct parallels. This Brazilian law seems, from my Canadian perspective, more onerous and restrictive than the laws of most other liberal democracies. But I do not know much of anything about Brazilian policy, and perhaps this is in line with local expectations.

This is probably not how Bluesky wanted to gain two million new users in a single week.

Robert Reich, U.S. Secretary of Labor in the Clinton administration and Sam Reich’s dad, wrote about Elon Musk’s political influence in an editorial for the Guardian. It begins as a decent piece, contrasting the power of owning a social media platform with Musk’s childlike gullibility — my words, not Reich’s. But, in a section of ideas about what to do, one suggestion seems particularly harmful:

3. Regulators around the world should threaten Musk with arrest if he doesn’t stop disseminating lies and hate on X.

Global regulators may be on the way to doing this, as evidenced by the 24 August arrest in France of Pavel Durov, who founded the online communications tool Telegram, which French authorities have found complicit in hate crimes and disinformation. Like Musk, Durov has styled himself as a free speech absolutist.

There are places where a U.S.-style interpretation of free expression is contradicted by local laws and, so, X’s operations must comply. Maybe Musk could be held legally responsible in some jurisdiction for things he has said, or for things hosted on a platform he owns. But we should almost never encourage the idea of arresting people for things they say. Yes, there are limits: threats of violence and fraud are both generally illegal types of speech. Yet charging Musk for being a loud public idiot is a very bad idea.

Also, while details about Pavel Durov’s arrest are still solidifying, it does not yet appear he is being held responsible for “hate crimes and disinformation”. According to a statement from French prosecutors (PDF), which I translated with DeepL, his charges mostly concern failing to comply with subpoenas and other legitimate legal demands. If X follows legal avenues for either complying with or disputing government demands, then I do not see how Durov’s arrest is even relevant. And, for what it is worth, neither Durov nor Telegram has been “found complicit” in anything. The United States is not the only country with legal procedures.

In response to Reich’s article, a troll X account posted a screenshot of a 4chan post about “low T men”, itself containing an arguably antisemitic meme, which Musk quoted and called an “interesting observation”. Just more evidence Musk is a big, dumb, rich, influential moron.

Maryclaire Dale, Associated Press:

A U.S. appeals court revived on Tuesday a lawsuit filed by the mother of a 10-year-old Pennsylvania girl who died attempting a viral challenge she allegedly saw on TikTok that dared people to choke themselves until they lost consciousness.

While federal law generally protects online publishers from liability for content posted by others, the court said TikTok could potentially be found liable for promoting the content or using an algorithm to steer it to children.

Notably, the “Blackout Challenge”, or the “Choking Game”, is one of the few internet challenges for teenagers which is neither a media-boosted fiction nor relatively harmless. It has been circulating for decades, and was connected with 82 deaths in the United States alone between 1995 and 2007. Which, yes, is before TikTok or even social media as we know it today. Melissa Chan reported in a 2018 Time article that its origins go back to at least the 1930s.

Mike Masnick, of Techdirt, not only points out the extensive Section 230 precedent ignored by the Third Circuit in its decision, he also highlights the legal limits of publisher responsibility:

We have some caselaw on this kind of thing even outside of the internet context. In Winter v. G.P. Putnam’s Sons, it was found that the publisher of an encyclopedia of mushrooms was not liable for “mushroom enthusiasts who became severely ill from picking and eating mushrooms after relying on information” in the book. The information turned out to be wrong, but the court held that the publisher could not be held liable for those harms because it had no duty to carefully investigate each entry.

Matt Stoller, on the other hand, celebrates the Third Circuit’s ruling as an end to “big tech’s free ride on Section 230”:

Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech. And now TikTok has to answer for it in court. Basically, the court ruled that when a company is choosing what to show kids and elderly parents, and seeks to keep them addicted to sell more ads, they can’t pretend it’s everyone else’s fault when the inevitable horrible thing happens.

And that’s a huge rollback of Section 230.

On a legal level, Masnick and Stoller agree the Third Circuit’s ruling creates a massive change in U.S. internet policy and, because of current structures, the world’s. But they vehemently disagree on whether this is a good thing. Masnick says it is not, and I am inclined to agree. Not only is there legal precedent on his side, there are plenty of very good reasons why Section 230 is important to preserve more-or-less as it has existed for decades.

However, it seems unethical for TikTok to bear no culpability for how users’ dangerous posts are recommended, especially to children. Perhaps legal recourse is the wrong remedy in this case and others like it, yet it feels wrong for this case to eventually — after appeals and, probably, escalation to the Supreme Court — be summarily dismissed on the grounds that corporations bear little responsibility for their automated recommendations. There is a real difference between teenagers spreading this challenge one-on-one for decades and teenagers broadcasting it — or, at least, there ought to be a difference.