‘60 Minutes’ Airs Misleading Story About Section 230 techdirt.com

CBS’ 60 Minutes aired a story, reported by Scott Pelley, arguing that cases of harassment and abuse from online sources are enabled by Section 230 of the Communications Decency Act:

A priority of the new president and Congress will be reining in the giants of social media. On this, Democrats and Republicans agree. Their target is a federal law known as Section 230. In a single sentence it set off the ‘big bang’ helping to create the universe of Google, Facebook, Twitter and the rest. Some critics of the law say that it leaves social media free to ignore lies, hoaxes and slander that can wreck the lives of innocent people. One of those critics is Lenny Pozner. After a tragedy in his own life, Pozner has become a champion for victims of online lies, people including Maatje and Matt Benassi, who, overnight, became the target of death threats like these.

[…]

Right about now you might be thinking, they should sue. But that’s the problem. They can’t file hundreds of lawsuits against internet trolls hiding behind aliases. And they can’t sue the internet platforms because of that law known as Section 230 of the Communications Decency Act of 1996. Written before Facebook or Google were invented, Section 230 says, in just 26 words, that internet platforms are not liable for what their users post.

These cases are truly terrible — but they are enabled less by Section 230 than by the generous speech protections of the First Amendment combined with the scale of these platforms. And, as Mike Masnick of Techdirt points out, major platforms have eventually been responsive to user complaints:

Over and over again, the report blames Section 230 for all of this. Incredibly, at the end of the report, they admit that the video from that nutjob conspiracy theorist was taken down from YouTube after people complained about it. In other words Section 230 did exactly what it was supposed to do in enabling YouTube to pull down videos like that. But, of course, unless you watch the entire 60 Minutes segment, you’ll miss that, and still think that 230 is somehow to blame.

Facebook, Twitter, and YouTube have thankfully stepped up their moderation efforts in the last couple of years. But because of their scale — partially due to network effects, and partially because of a reluctance to use antitrust precedent to slow their roll — this increased moderation has been mistakenly referred to as “censorship”. None of this has anything to do with Section 230, however.

60 Minutes filmed a very good interview with Jeff Kosseff, an expert on Section 230, of which only a part made it into the final report. I am disappointed that they axed Kosseff’s historical context:

To understand why [Section 230] is necessary, you really have to go back to what the law was before Section 230, and that is: what is the liability for distributors of content that others create? Before the internet, that was bookstores and newsstands. And the general rule was that, if you are a distributor of someone else’s content, you’re only liable if you know or have reason to know if it’s illegal.

That compares favourably with Section 230, which requires platforms to remove illegal materials when they are notified and encourages them to moderate proactively.1 Because of the explosive growth of these platforms, moderation is extremely difficult.

Kosseff also fields a question from Pelley about news publishers:

Scott Pelley: But help me understand, the same is not true for other forms of media. If somebody says something defamatory on 60 Minutes or on Fox or CNN or in The New York Times, those organizations can be sued. So why not Google, YouTube, Facebook?

Jeff Kosseff: So the difference between a social media site and let’s say the Letters to the Editor page of The New York Times is the vast amount of content that they deliver. So I mean you might have five or ten letters to the editor on a page. You could have I think it’s 6,000 tweets per second. […]

One other difference is that the press relies upon human beings making a decision about what should be published and what should not. An interview subject can make a dubious and potentially defamatory claim, but it is up to the system of reporters, editors, and fact-checkers to determine whether that claim ought to be shown to the public. Online platforms are more infrastructural. Making them legally liable for what their users publish would be like making it fair game to sue newsstands and grocery stores for selling copies of the Times containing a defamatory story.

Given their unique scale and easily manipulated reach, I hope that platforms will continue to take a more active role in curbing high-profile bad-faith use. I do not think making Twitter liable for my dumb tweets, or websites liable for their users’ comments, is a sensible way of getting there.


  1. Platforms’ own rules mean that what they disallow is not necessarily the same as what the law disallows.