Twitter and Hate Speech

This is all going to sound very familiar. It is something society will re-litigate a few times a year until the internet goes away instead of, you know, learning.

On Tuesday evening, the BBC’s James Clayton scored an impromptu interview with Elon Musk, which was streamed live on Twitter Spaces. It ran for about ninety minutes and the most popular clip has been a brief segment in which Clayton pressed Musk on a rise in hate speech:

Clayton: You’ve asked me whether my feed, whether it’s got less or more. I’d say it’s got slightly more.

Musk: That’s why I’m asking for examples. Can you name one example?

Clayton: I honestly don’t — honestly…

Musk: You can’t name a single example?

Musk concludes by calling Clayton a liar. It is an awkward segment to watch because it is clear how unprepared Clayton was for this exchange — but his claim is not wrong.

Mike Wendling, BBC:

Several fringe characters that were banned under the previous management have been reinstated.

They include Andrew Anglin, founder of the neo-Nazi Daily Stormer website, and Liz Crokin, one of the biggest propagators of the QAnon conspiracy theory.

[…]

Anti-Semitic tweets doubled from June 2022 to February 2023, according to research from the Institute for Strategic Dialogue (ISD). The same study found that takedowns of such content also increased, but not enough to keep pace with the surge.

Of course, this follow-up story appears to be a case where the broadcaster would like not only to correct the record, but also to stand behind a reporter who struggled with a line of questioning he should, in hindsight, have anticipated. That may undermine readers’ confidence in it. A reader may also doubt the independence of the ISD, which counts government agencies, billionaires’ foundations, and large technology companies among its funders. But research like this demands access to Twitter’s API, so if billionaire funders are bothersome, prepare for that to get much worse.

I believe those aspects are worth considering, but they are secondary to the findings of the ISD report (PDF) itself. In other words, if there are methodological problems or the study’s conclusions seem contrived, that is a more immediate concern. The ISD concluded by saying two things, and implying one other: Twitter is getting better at removing antisemitic tweets on a proportional basis; Twitter is not keeping up with a doubling of the number of antisemitic tweets being posted; and the total number of antisemitic tweets is still a tiny fraction of all tweets published daily. That any antisemitic tweets remain on the site is obviously not good, but a doubling of a very small number is still a very small number.

The ISD is not the only source for this kind of research. Wendling cites other sources, including the BBC’s own, to make the case that hate speech on Twitter has climbed. Just a few days ago, a preprint study was released seeking to understand the presence of both hate speech and bots on Twitter before and after Musk’s acquisition. Its authors found an increase in both, though, again, still a relatively low percentage of all tweets.

But even if only a handful of posts violate a baseline understanding of what constitutes hate speech, they are harmful to the people targeted and, if you want the detached business case for it, may have a chilling effect on their use of the platform. From Twitter’s own policies:

We recognize that if people experience abuse on Twitter, it can jeopardize their ability to express themselves. Research has shown that some groups of people are disproportionately targeted with abuse online. For those who identify with multiple underrepresented groups, abuse may be more common, more severe in nature, and more harmful.

We are committed to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized. For this reason, we prohibit behavior that targets individuals or groups with abuse based on their perceived membership in a protected category.

That is one reason why it is so important for platforms to set guidelines for permissible speech and to enforce those rules clearly and vigorously. There will always be a grey area, but when a platform advertises that grey area as a space for “difficult” or “controversial” arguments, it will begin to slide. From that preprint study (PDF):

Both analyses we performed show large increases in hate speech following Musk’s purchase, with no sign of hate speech returning to previously typical levels. Prior research highlights the consequences of online hate speech, including increased anxiety in users (Saha, Chandrasekharan, and De Choudhury 2019) and offline victimization of targeted groups (Lewis, Rowe, and Wiper 2019). The effects of Twitter’s moderation policies are thus likely far-reaching and will lead to negative consequences if left unchecked.

The researchers do not claim a causal relationship between any specific Twitter rule and the underlying rise in hate speech on the platform. But Musk’s documented enthusiasm for a platform environment with fewer guidelines has effectively done the same work as an official policy change. He is an influential figure who could use his unique platform to encourage greater understanding; instead, he spends his very busy day farting out puerile memes (you are welcome) and mocking anti-racist initiatives.

Efforts by some to minimize hateful conduct as merely being words, or to insist it is not the responsibility of platforms, are grossly out of step with research. These are not new ideas, and we do not need to pretend that light-touch moderation of only the most serious offences is an effective strategy. Twitter may not be overrun with hate speech but, for its most frequent targets, it has an increasing presence.

This happens over and over again; you would think we would learn something. As I was writing this piece, a clip from the Verge’s “Decoder” podcast was published to TikTok, in which Nilay Patel asks Substack CEO Chris Best a pretty basic question about whether explicitly racist speech would be permitted in the platform’s new Notes section. That clip is not the result of crafty editing; the full transcript is an exercise by Best in equivocation and dodging. At one point, Best tries to claim “we are making a new thing […] we launched this thing one day ago”, but anyone can look at Notes and realize it is not really a new idea. If anything, its recent launch is even less of an excuse than Best believes: because Notes is new to Substack, the company has an opportunity to set reasonable standards from the first day. That it is not doing so, and is not learning from the work of other platforms and researchers, is ridiculous.