There’s some good discussion in this video, but this part from Nilay Patel is wonderful:
I think one of the major things we need to shift our thinking about is [that] regulating individual pieces of speech is very difficult. Regulating behaviour is probably a better approach, where you can say “well, these people are consistently behaving in a way that goes against our values, and we don’t have to, like, write A.I. that finds words. We can actually look holistically at behaviour.” None of the platforms seem to be ready to do that. They are not willing to articulate strong values that they stand for — Twitter, in particular, seems to be very hands-off. Mark Zuckerberg is talking about a Facebook court.
Those are all very legalistic interpretations. I think they’re not going to work unless these companies have strong values that they believe in, and the government decides it wants to pursue a non-discriminatory approach. […]
The most awful corners of Twitter have gotten very good at evading automatic detection of targeted harassment and discriminatory language, even though their behaviour, as a whole, is clearly harassing and discriminatory. When you report a user to Twitter for this kind of behaviour, you are asked to attach up to five relevant tweets, even when their entire account is the problem. Twitter’s rules prohibit targeted abuse, but you can still find plenty of users who reference the “fourteen words” and “blood and soil” in their bios, or any of the other coded language used in the context of white supremacy and white nationalism.
Banning Nazism is, for me, the baseline of good platform moderation — if a company can’t or won’t prioritize removing Nazis from its platform, who will it remove?