Facebook’s Mood Manipulation Experiment (laboratorium.net)

This story was broken by Sophie Weiner of Animal:

Using an algorithm that can recognize negative or positive words, the researchers were able to comb through News Feeds without actually viewing any text that may have been protected under users’ privacy settings. “As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research,” the study’s authors wrote. That’s right: You consented to be randomly selected for this kind of research when you signed up for Facebook. Might want to check out that Data Use Policy again.
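
For the curious, this kind of “analysis” is about as crude as text processing gets: count how many words in a post appear on a fixed positive list and a fixed negative list, and compare the tallies. Here is a minimal sketch in Python; the word lists are tiny hypothetical stand-ins (the study itself used the much larger LIWC dictionaries), but the point stands that no human ever reads the post.

    import string

    # Hypothetical stand-ins for LIWC-scale dictionaries of emotion words.
    POSITIVE = {"happy", "love", "great", "wonderful", "fun"}
    NEGATIVE = {"sad", "angry", "terrible", "awful", "hate"}

    def classify(post: str) -> str:
        """Label a post positive, negative, or neutral by counting
        word-list matches; no human ever reads the text."""
        words = [w.strip(string.punctuation) for w in post.lower().split()]
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        if pos > neg:
            return "positive"
        if neg > pos:
            return "negative"
        return "neutral"

    print(classify("What a wonderful, fun day!"))      # positive
    print(classify("I hate this terrible weather."))   # negative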

James Grimmelmann, Professor of Law at the University of Maryland, on the ethical and legal implications of this study:

A stronger reason is that even when Facebook manipulates our News Feeds to sell us things, it is supposed—legally and ethically—to meet certain minimal standards. Anything on Facebook that is actually an ad is labelled as such (even if not always clearly). This study failed even that test, and for a particularly unappealing research goal: We wanted to see if we could make you feel bad without you noticing. We succeeded.

So, this study is legally dubious, ethically bankrupt, and made a bunch of people miserable without telling them. But what else would you expect from Facebook, you “dumb fuck”? (And yet here I am with Facebook open in another tab.)