Neima Jahromi, the New Yorker:
Schaffer told me that hate speech had been a problem on YouTube since its earliest days. Dealing with it used to be fairly straightforward. YouTube was founded, in 2005, by Chad Hurley, Steve Chen, and Jawed Karim, who met while working at PayPal. At first, the site was moderated largely by its co-founders; in 2006, they hired a single, part-time moderator. The company removed videos often, rarely encountering pushback. In the intervening thirteen years, a lot has changed. “YouTube has the scale of the entire Internet,” Sundar Pichai, the C.E.O. of Google, which owns YouTube, told Axios last month. The site now attracts a monthly audience of two billion people and employs thousands of moderators. Every minute, its users upload five hundred hours of new video. The technical, social, and political challenges of moderating such a system are profound. They raise fundamental questions not just about YouTube’s business but about what social-media platforms have become and what they should be.
YouTube’s monopoly position means that its moderation decisions can be a massive, if controversial, force for good, but they will also carry a high likelihood of flagging non-offending videos. As I’ve been saying about Facebook and its inept moderation, this is a direct result of the platform’s scale.
As it is, YouTube is taking little meaningful action while still recommending videos that keep users watching as they crawl further into a narrowing tunnel of viewpoints, radicalizing them even as the company claims to be neutral.
The broad failure of U.S. authorities to take seriously the antitrust threat of tech companies remains among the biggest policy mistakes of the last twenty years.