Elizabeth Dwoskin, Craig Timberg, and Tony Romm, the Washington Post:
The document, which is previously unreported and obtained by The Post, weighed four options. They included removing the post [by then-candidate Donald Trump advocating for banning Muslims from entering the United States] for hate speech violations, making a one-time exception for it, creating a broad exemption for political discourse and even weakening the company’s community guidelines for everyone, allowing comments such as “No blacks allowed” and “Get the gays out of San Francisco.”
Facebook spokesman Tucker Bounds said the latter option was never seriously considered.
The document also listed possible “PR Risks” for each. For example, lowering the standards overall would raise questions such as, “Would Facebook have provided a platform for Hitler?” Bickert wrote. A carveout for political speech across the board, on the other hand, risked opening the floodgates for even more hateful “copycat” comments.
Ultimately, Zuckerberg was talked out of his desire to remove the post in part by Kaplan, according to the people. Instead, the executives created an allowance that newsworthy political discourse would be taken into account when making decisions about whether posts violated community guidelines.
It isn’t often that invoking Hitler is useful in policy questions but, in this instance, it is hard to see how Facebook’s broad policy exceptions for “newsworthiness” would not apply to hypothetical posts from him. Is Facebook theoretically comfortable with a politician using its platform to advocate for genocide? Staggering as it may seem, it is not such an outlandish question. I would argue that, if such a hypothetical fits into the company’s “newsworthiness” rules, then that policy is wildly irresponsible; if it does not, then there is a line, and advocating genocide should not be the benchmark for when such posts are not allowed.
One way to simplify this question is to ask whether Facebook’s executives believe its role is to be a comfortable space for many people to see advertising, or whether they believe it should be a pass-through entity and international exporter of the First Amendment.
Kevin Roose, the New York Times:
Just like the California gold rush, the Wild Wild Web started an enormous accumulation of personal and corporate power, transforming our social order overnight. Power shifted from the czars of government and the creaky moguls of the Fortune 500 to the engineers who built the machines and the executives who gave them their marching orders. These people were not prepared to run empires, and most of them deflected their newfound responsibility, or pretended to be less powerful than they were. Few were willing to question the 2010s Silicon Valley orthodoxy that connection was a de facto good, even as counter-evidence piled up.
There are still some stubborn holdouts. (Facebook, in particular, still appears attached to the narrative that social media simply reflects offline society, rather than driving it.) But among the public, there is no more mistaking Goliaths for Davids. The secret of the tech industry’s influence is out, and the critics who have been begging tech leaders to take more responsibility for their creations are finally being heard.
Anna Wiener, the New Yorker:
Under Section 230, content moderation is free to be idiosyncratic. Companies have their own ideas about right and wrong; some have flagship issues that have shaped their outlooks. In part because its users have pushed it to take a clear stance on anti-vaccination content, Pinterest has developed particularly strong policies on misinformation: the company now rejects pins from certain Web sites, blocks certain search terms, and digitally fingerprints anti-vaccination memes so that they can be identified and excluded from its service. Twitter’s challenge is bigger, however, because it is both all-encompassing and geopolitical. Twitter is a venue for self-promotion, social change, violence, bigotry, exploration, and education; it is a billboard, a rally, a bully pulpit, a networking event, a course catalogue, a front page, and a mirror. The Twitter Rules now include provisions on terrorism and violent extremism, suicide and self-harm. Distinct regulations address threats of violence, glorifications of violence, and hateful conduct toward people on the basis of gender identity, religious affiliation, age, disability, and caste, among other traits and classifications. The company’s rules have a global reach: in Germany, for instance, Twitter must implement more aggressive filters and moderation, in order to comply with government laws banning neo-Nazi content.
Debates over content moderation tend to focus on companies like Facebook and Twitter, and some might be glad to see the biggest platforms lose their immunity shield. But what are Twitch, Reddit, FlyerTalk, Bogleheads, Hacker News, wikiFeet, or iNaturalist without their content? (Yelp without yelps: Why bother?) Twitter could revert to “bird chirps,” and be a place for benign, pithy commentary on nothing; Instagram could subsist on photographs of gloppy eggs Benedict and memes about disgruntled golden retrievers. Alternatively, if companies were disincentivized from moderating content, the Internet could become a cesspool. We might face another kind of information crisis — a dying off of user-generated discourse, opinion, and news. “Without Section 230, the traditional media would have even more power over speech and expression,” [Jeff Kosseff] writes. “And those power structures could be even more stacked against the disenfranchised.”
Ultimately, the problems that need solving may not be ones of content moderation. In the book “Platform Capitalism,” published in 2017, the economist Nick Srnicek explores the reliance of digital platforms on “network effects,” in which value increases for both users and advertisers as a service expands its pool of participants and suite of offerings. Network effects, Srnicek writes, orient platforms toward monopolization; monopolization, in turn, makes it easier for a single tweet to be an extension of state power, or for a single thirty-six-year-old entrepreneur, such as Zuckerberg, to influence the speech norms of the global digital commons. Both outcomes might be less likely if there were other places to go. The business model common to many social-media platforms, meanwhile, is itself an influence over online speech. Advertisers are attracted by data about users; that data is created through the constant production and circulation of user-generated content; and so controversial content, which keeps users posting and sharing, is valuable. From this perspective, Donald Trump is an ideal user of Twitter. A different kind of business might encourage a different kind of user.
Wiener’s article is the one I have hopelessly been trying to write, and my conclusion is similar. Moderation is not inherently a problem; having monolithic siloed social networks is much more concerning.
The other conclusion is deeply unsatisfying: voters need to elect leaders who view power with responsibility, not arousal. The last several years have proved that to be accurate in virtually all contexts. But, true as it may be, it is not helpful to us now, and it does not give us an adequate framework for dealing with the problems caused by amplifying propagandists’ glorification of their interests without context.