Following YouTube’s Recommendations Down the Rabbit Hole buzzfeednews.com

Caroline O’Donovan and Charlie Warzel, BuzzFeed News:

To better understand how Up Next discovery works, BuzzFeed News ran a series of searches on YouTube for news and politics terms popular during the first week of January 2019 (per Google Trends). We played the first result and then clicked the top video recommended by the platform’s Up Next algorithm. We made each query in a fresh search session with no personal account or watch history data informing the algorithm, except for geographical location and time of day, effectively demonstrating how YouTube’s recommendation engine operates in the absence of personalization.

[…]

One of the defining US political news stories of the first weeks of 2019 has been the partial government shutdown, now the longest in the country’s history. In searching YouTube for information about the shutdown between January 7 and January 11, BuzzFeed News found that the path of YouTube’s Up Next recommendations had a common pattern for the first few recommendations, but then tended to pivot from mainstream cable news outlets to popular, provocative content on a wide variety of topics.

After first recommending a few videos from mainstream cable news channels, the algorithm would often make a decisive but unpredictable swerve in a certain content direction. In some cases, that meant recommending a series of Penn & Teller magic tricks. In other cases, it meant a series of anti–social justice warrior or misogynist clips featuring conservative media figures like Ben Shapiro or the contrarian professor and author Jordan Peterson. In still other cases, it meant watching back-to-back videos of professional poker players, or late-night TV shows, or episodes of Lauren Lake’s Paternity Court.

I have all of YouTube’s personalization features switched off and I’m not signed in, so I’ve been seeing this same bizarre mix of Up Next recommendations for months now. In addition to the Peterson and Shapiro clips referenced above, I also get loads of Joe Rogan episodes and compilation-type videos of narrators reading Wikipedia over a photo slideshow. I don’t understand how YouTube’s recommendations work, but only a few clicks seem to separate a benign video from something conspiratorial, discriminatory and hateful, or wholly unrelated.
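The crawl BuzzFeed describes is simple to express in code: search for a term, take the first result, then repeatedly follow the top Up Next recommendation and record where you end up. Here is a rough Python sketch of that loop. The two fetch helpers are hypothetical placeholders; as far as I know, YouTube’s public Data API does not expose the Up Next list, so actually reproducing this would mean driving a fresh, logged-out browser session and scraping the recommendations off each watch page.

```python
# Sketch of the chain-following procedure BuzzFeed describes: search,
# play the first result, then always click the top Up Next recommendation.
# The two fetch helpers are hypothetical placeholders, not real API calls.

from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    title: str
    channel: str


def search_first_result(query: str) -> Video:
    """Hypothetical: the first video YouTube search returns for `query`."""
    raise NotImplementedError


def top_up_next(video: Video) -> Video:
    """Hypothetical: the top Up Next recommendation on `video`'s watch page."""
    raise NotImplementedError


def follow_up_next_chain(query: str, depth: int = 10) -> list[Video]:
    """Record the chain reached by always taking the top recommendation."""
    chain = [search_first_result(query)]
    for _ in range(depth):
        chain.append(top_up_next(chain[-1]))
    return chain


if __name__ == "__main__":
    # e.g. a news term popular during the first week of January 2019
    for step, video in enumerate(follow_up_next_chain("government shutdown")):
        print(f"{step:2d}. {video.channel} -- {video.title}")
```

The hard part to get right is the fresh-session requirement: each chain has to start with no account or watch history carrying over, otherwise one chain’s viewing would start informing the next one’s recommendations.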

Daisuke Wakabayashi, New York Times:

After years of criticism that YouTube leads viewers to videos that spread misinformation, the company said it was changing what videos it recommends to users. In a blog post, YouTube said it would no longer suggest videos with “borderline content” or those that “misinform users in a harmful way” even if the footage does not violate its community guidelines.

YouTube said the number of videos affected by the policy change amounted to less than 1 percent of all videos on the platform. But given the billions of videos in YouTube’s library, it is still a large number.

Note that YouTube says this change applies to less than one percent of the videos on its platform, but it does not say how often those videos are watched. If the view counts on many of the recommendations surfaced in BuzzFeed’s experiment are anything to go by, these videos are very popular. Would they have been nearly as popular if YouTube’s recommendation algorithm did not so frequently suggest them? I doubt it.