Zeynep Tufekci, writing for Wired:
What should you watch? What should you read? What’s news? What’s trending? Wherever you go online, companies have come up with very particular, imperfect ways of answering these questions. Everywhere you look, recommendation engines offer striking examples of how values and judgments become embedded in algorithms and how algorithms can be gamed by strategic actors.
Consider a common, seemingly straightforward method of making suggestions: a recommendation based on what people “like you” have read, watched, or shopped for. What exactly is a person like me? Which dimension of me? Is it someone of the same age, gender, race, or location? Do they share my interests? My eye color? My height? Or is their resemblance to me determined by a whole mess of “big data” (aka surveillance) crunched by a machine-learning algorithm?
Last year, Chris Hayes showed how a YouTube search for information about the Federal Reserve turns into antisemitic conspiracy theories in just a few clicks. The exact path he cited no longer exists, but I just tried it: the first search result for “Federal Reserve” on YouTube is a video from CNN, but many of its recommendations are dubious and conspiracy-minded.
YouTube’s recommendations have long been problematic and, like all recommendation engines, they seem designed to encourage users to consume more, with profoundly different results depending on the context. Music recommendations, for example, seem relatively benign: software can probably make some good guesses about what someone who listens to Fugazi and Bad Brains would also want to hear, even if it knows nothing else about them. But Amazon knows a lot more about its users than simply their music choices; and YouTube’s growth metrics, meanwhile, have encouraged complicity in the engine’s flaws rather than correction of them.