Shortly after the Wall Street Journal began publishing “The Facebook Files” last month, a series of articles based on leaked internal research documents, the paper confirmed that two U.S. lawmakers were in touch with the whistleblower who leaked the files. The research was not only in the possession of the Journal’s reporters and the SEC; the lawmakers said they hoped the whistleblower would also speak publicly.
Yesterday, they got their wish. For three hours, Frances Haugen, who worked on misinformation policies at Facebook, testified before a Senate subcommittee. Not long after she finished speaking, Facebook’s communications department sought to discredit her — as, bizarrely, did Glenn Greenwald, for whom Haugen is apparently the wrong type of whistleblower and therefore suspect for not saying what he thinks she should — and then Mark Zuckerberg responded.
Zuckerberg’s letter is behind Facebook’s login wall; since I do not have an account, I cannot access it. Thankfully, the Verge has reproduced it in full for those of us who think that the public statements of the CEO of a major company should be, you know, public.
I obviously do not know more about Facebook than its founder and CEO. But I think it would be worthwhile to compare Zuckerberg’s comments against the reporting so far, so we can see what may be omitted, taken out of context, or misrepresented. Skipping over a perfunctory introduction and a brief reflection on Monday’s companywide outage, here is Zuckerberg’s comment on Haugen’s congressional appearance:
Second, now that today’s testimony is over, I wanted to reflect on the public debate we’re in. I’m sure many of you have found the recent coverage hard to read because it just doesn’t reflect the company we know. We care deeply about issues like safety, well-being and mental health. It’s difficult to see coverage that misrepresents our work and our motives. At the most basic level, I think most of us just don’t recognize the false picture of the company that is being painted.
Many of the claims don’t make any sense. If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place? If we didn’t care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space — even ones larger than us? If we wanted to hide our results, why would we have established an industry-leading standard for transparency and reporting on what we’re doing? And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
These are quite the paragraphs, and the latter is particularly misleading. Let’s look at each rhetorical question:
If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?
This premise is obviously false. The existence of a well-funded corporate research team does not mean its findings cannot be ignored — or worse. Researchers at oil companies were aware of the environmental harm caused by their products for decades before the general public was; instead of acting on those findings, the companies lied and lobbied.
If we didn’t care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space — even ones larger than us?
This rhetorical question is framed around a wishy-washy straw man argument, so any response is going to be similarly vague. Framed as a binary choice of caring versus not caring, I suppose the presence of any platform moderation could be seen as caring. But employing more moderators than any competitor — most of them contractors, not employees — does not necessarily demonstrate an adequate level of care.
That premise is not the claim made by Haugen or by the reporting on the documents she released, however. On September 16, the Journal published an analysis of moderation-related documents indicating that the company prioritizes growth and user retention, and is reluctant to remove users. The reporting portrays this as a systemic moderation problem attributable as much to greed as to incompetence. As user growth has for years been driven almost exclusively (PDF) by the “Asia-Pacific” and “Rest of World” categories, platform moderation has not kept pace with language and regional requirements.
I bet Facebook’s staff and contractors, at all levels, are horrified to see the company’s platforms used to promote murder, drug cartels, human exploitation, and ethnicity-targeted violence. What the Journal’s reporting indicates is that they struggle to balance those problems against profits; anyone with a conscience might wonder why the company’s touch needs to be so cautious that it is reluctant to ban cartel members.
If we wanted to hide our results, why would we have established an industry-leading standard for transparency and reporting on what we’re doing?
Just a few weeks ago, Facebook acknowledged it omitted roughly half of all U.S. users from the data it provided to social scientists and other researchers. In August, Facebook suspended the accounts of researchers studying its ad targeting. In April, the team running the Facebook-owned CrowdTangle analytics tool was broken up. Is this an “industry-leading standard for transparency”?
And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?
This is, no kidding, an honest-to-goodness question worth asking, though the answer seems fairly straightforward: platforms like Facebook may not be wholly to blame for polarization, but they seem to exacerbate existing societal fractures, according to the Brookings Institution. Regions that are already polarized or have more fragile democracies are pulled apart further, and reducing time spent on these platforms decreases animosity and hardened views.
Zuckerberg:
At the heart of these accusations is this idea that we prioritize profit over safety and well-being. That’s just not true. For example, one move that has been called into question is when we introduced the Meaningful Social Interactions change to News Feed. This change showed fewer viral videos and more content from friends and family — which we did knowing it would mean people spent less time on Facebook, but that research suggested it was the right thing for people’s well-being. Is that something a company focused on profits over people would do?
I am confused why Zuckerberg would choose to illustrate this by referencing Meaningful Social Interactions, the topic of one of the first pieces of reporting from the Journal based on Haugen’s document disclosures. The picture Zuckerberg paints is almost the opposite of what has been reported from Facebook’s internal research. It is so easy to fact-check that it seems as though Zuckerberg is counting on readers not to. From the Journal:
Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.
“Our approach has had unhealthy side effects on important slices of public content, such as politics and news,” wrote a team of data scientists, flagging Mr. Peretti’s complaints, in a memo reviewed by the Journal. “This is an increasing liability,” one of them wrote in a later memo.
They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.
This change may have reduced time spent on the site, but internal researchers found it made Facebook a worse place to be, not a better one. The Journal also says that, after the algorithm change launched, Zuckerberg worried about its effect on engagement and asked for changes to mitigate that effect.
In response to Zuckerberg’s question this week, “is that something a company focused on profits over people would do?”, I say “duh and/or hello”.
Zuckerberg:
The argument that we deliberately push content that makes people angry for profit is deeply illogical. We make money from ads, and advertisers consistently tell us they don’t want their ads next to harmful or angry content. And I don’t know any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction.
Once again, I think there is a subtle distinction Zuckerberg is avoiding here to make his argument easier. The internal documents collected by Haugen indicate that the company profits more when people are more engaged, that engagement rises with incendiary material, and that the News Feed prioritizes engagement. These documents, and the reporting based on them, do not indicate the company is intentionally trying to make people angry, only that it is following a path to greater profit which, incidentally, stokes stronger emotions.
BuzzFeed data scientist Max Woolf, in a thread on Twitter, illustrated another problem with Zuckerberg’s claims. In posts where discriminatory perspectives are framed as defiant or patriotic, the most common responses are “likes” and “loves”, not angry reactions. Would it be fair to call these positive posts? If you only looked at the reactions without the context, that is the impression you might get.
Zuckerberg also reflects on some of the reporting on the effects of Facebook’s products on children and youth, but ends up passing the buck:
Similar to balancing other social issues, I don’t believe private companies should make all of the decisions on their own. That’s why we have advocated for updated internet regulations for several years now. I have testified in Congress multiple times and asked them to update these regulations. I’ve written op-eds outlining the areas of regulation we think are most important related to elections, harmful content, privacy, and competition.
In testimony earlier this year, Zuckerberg said that he would support changing Section 230 of the Communications Decency Act, but in a specific way that benefits Facebook and other large companies. In a vacuum, without today’s social media giants, I think his proposal makes sense. But, as things stand, it would be toxic for the open web. Increasing liability for websites that allow public posting of any kind would make it hard for smaller businesses with lower budgets to compete. Even reforms as limited in scope as Haugen’s seem likely to suffer the same fate: changes to Section 230 without antitrust action will, like many other laws, be easily absorbed by massive companies like Facebook while disadvantaging upstarts.
Zuckerberg:
That said, I’m worried about the incentives that are being set here. We have an industry-leading research program so that we can identify important issues and work on them. It’s disheartening to see that work taken out of context and used to construct a false narrative that we don’t care. If we attack organizations making an effort to study their impact on the world, we’re effectively sending the message that it’s safer not to look at all, in case you find something that could be held against you. That’s the conclusion other companies seem to have reached, and I think that leads to a place that would be far worse for society. Even though it might be easier for us to follow that path, we’re going to keep doing research because it’s the right thing to do.
It is not every day you get an honest-to-goodness mafioso threat out of a CEO. I almost admire how straightforward it is.
Zuckerberg concludes:
When I reflect on our work, I think about the real impact we have on the world — the people who can now stay in touch with their loved ones, create opportunities to support themselves, and find community. This is why billions of people love our products. I’m proud of everything we do to keep building the best social products in the world and grateful to all of you for the work you do here every day.
Today, the Verge released the results of its latest Tech Trust Survey. Of the 1,200 people in its U.S.-representative poll conducted in August, 31% think Facebook has a negative impact on society, 56% do not trust it with their personal information, and 72% think the company has too much power; 48% said they would not miss Facebook if it went away. Respondents were more positive about Instagram, yet even more of them — 60% — said they would be okay if it disappeared. That is not a promising sign that people “love” the company’s offerings. All of this follows several years of critical coverage of Facebook, but predates Haugen’s disclosures.
I do not think the Journal’s stories this month revealed much new information about Facebook, so I doubt they will swing those numbers far in either direction. What these leaks show is the degree to which Facebook is aware of the harmful effects of its products, yet often prioritizes its earnings over positive societal influence. If you read that sentence and thought “like every company, Nick”, I think we have found common ground on broader questions of balancing business desires with the public good.
Mike Masnick, Techdirt:
So if you’ve been brought up to believe with every ounce of your mind and soul that growth is everything, and that the second you take your eye off the ball it will stop, decisions that are “good for Facebook, but bad for the world” become the norm. Going back to my post on the hubris of Facebook, it also feels like Mark thinks that once Facebook passes some imaginary boundary, then they can go back and fix the parts of the world they screwed up. It doesn’t work like that, though.
And that’s a problem.
The incentives are all screwed up here. Zuckerberg may claim that advertisers will refuse to spend on platforms that regularly spew hate, spread misinformation, and sow division, but the last three quarters have been the most financially successful in Facebook’s history. Repeated negative press stories have not dented advertising spending or Facebook’s value to investors.
It is not surprising that this is the case: Facebook runs two of the world’s most successful personalized advertising platforms. Regardless of what advertisers say, few of them will actually go anywhere else, because where else is there to go? If they are being honest, none of the senators before whom Haugen testified will stop spending millions of dollars on Facebook ads either.
The policies that will require Facebook to reform in big, meaningful ways are those that improve privacy and restrict the use of behavioural information. Facebook’s incentives are aligned with exploiting that data, and the company’s paranoia pushes it to stretch acceptable boundaries. It is long past time to change those incentives.