Hanlon’s Razor, Kansas Edition

Earlier this week, Dave Kendall, of the documentary production company Prairie Hollow and formerly of a Topeka, Kansas PBS station, wrote an article in the Kansas Reflector criticizing Meta. Kendall says he tried to promote posts on Facebook for a screening of “Hot Times in the Heartland” but was prevented from doing so; a presumably automated message said the posts did not comply with Meta’s political ads policy.

I will note Meta’s ambiguous and apparently fluid definition of which posts count as political. But Kendall comes to the ridiculous conclusion that “Meta deems climate change too controversial for discussion” based solely on his inability to “boost” an existing post. To be pedantic but correct: Meta did not prohibit discussion generally, just the ad.

I cannot fault Kendall’s frustration, however, as he correctly describes the non-specific support page and nonexistent support:

But in the Meta-verse, where it seems virtually impossible to connect with a human being associated with the administration of the platform, rules are rules, and it appears they would prefer to suppress anything that might prove problematic for them.

Exactly. This accurately describes the power imbalance in even buying ads on Meta’s platforms. Advertisers are Meta’s customers and, unless one is a big spender, they receive little to no guidance. There are only automated checks and catch-all support contacts, neither of which is particularly helpful for anything other than obvious issues.

A short while later in the editorial, however, things take a wrong turn again:

The implications of such policies for our democracy are alarming. Why should corporate entities be able to dictate what type of speech or content is acceptable?

In a centralized social network like Facebook, the same automated technologies which flagged this post also flag and remove posts which make the community worse. We already know how lax moderation policies turn out, and why those laissez-faire theories do not survive in the real world.

Of course, in a decentralized social network, it is possible to create communities with different policies. The same spec that underpins Mastodon, for example, also powers Gab and Truth Social. Perhaps that is more similar to the system which Kendall would prefer — but that is not how Facebook is built.

Whatever issue Facebook flagged regarding those ads — Kendall is not clear, and I suspect that is because Facebook is not clear either — the problems with its poor response were compounded later that day.

Clay Wirestone and Sherman Smith, opinion editor and editor-in-chief, respectively, of the Kansas Reflector:

Sometime between 8:20 and 8:50 a.m. Thursday, Facebook removed all posts linking to Kansas Reflector’s website.

This move not only affected Kansas Reflector’s Facebook page, where we link to nearly every story we publish, but the pages of everyone who has ever shared a story from us.

[…]

Coincidentally, the removals happened the same day we published a column from Dave Kendall that is critical of Facebook’s decision to reject certain types of advertising: “When Facebook fails, local media matters even more for our planet’s future.”

Marisa Kabas, writing in the Handbasket:

Something strange started happening Thursday morning: Facebook users who’d at some point in the past posted a link to a story from the Kansas Reflector received notifications that their posts had violated community standards on cybersecurity. “It looks like you tried to gather sensitive information, or shared malicious software,” the alert said.

[…]

Shortly after 4, it appeared most links to the site were posting properly on Meta properties — Facebook, Instagram, Threads — except for one: Thursday’s column critical of Facebook.

If you wanted to make a kind-of-lame modern conspiracy movie, this is where the music swells and it becomes a fast-paced techno-thriller. Kabas followed this article with one titled “Here’s the Column Meta Doesn’t Want You to See”, republishing Kendall’s full article “in an attempt to sidestep Meta’s censorship”.

While this interpretation of a deliberate effort by Facebook to silence critical reporting is kind of understandable, given its poor communication and the lack of adequate follow-up, it hardly strikes me as realistic. In what world would Meta care so much about tepid criticism published by a small news operation that it would take deliberate manual action to censor it? Even if you believe Meta would be more likely to kneecap a less visible target than, say, a national news outlet, it does not make sense for Facebook to be this actively involved in hiding any of the commentary I have linked to so far.

Facebook’s explanation sounds more plausible to me. Sherman Smith, Kansas Reflector:

Facebook spokesman Andy Stone in a phone call Friday attributed the removal of those posts, along with all Kansas Reflector posts the day before, to “a mistaken security issue that popped up.” He wouldn’t elaborate on how the mistake happened and said there would be no further explanation.

[…]

“It was a security issue related to the Kansas Reflector domain, along with the News From The States domain and The Handbasket domain,” Stone added. “It was not this particular story. It was at the domain level.”

If some system at Meta erroneously flagged Kendall’s original attempt to boost a post as a threat, it makes sense that related stories and domains would also be flagged. Consider how beneficial this same chain of effects could be if there were actually a malicious link: it blocks not only the main offending link, but also any adjacent links that look similar, and any copycats or references. That is an entirely fair way to prevent extreme platform abuse. In this case, with large numbers of people trying to post one link that had already been flagged, alongside other similar links, it is easy to see how Meta’s systems might interpret that as suspicious behaviour.
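To make that chain of effects concrete, here is a minimal sketch, in Python, of how domain-level escalation might work. Everything in it, the class, the threshold, the example URLs, is invented for illustration; it shows the general technique, not Meta’s actual systems:

```python
from urllib.parse import urlparse

DOMAIN_THRESHOLD = 3  # hypothetical: flagged URLs before the whole domain is blocked


class LinkModerator:
    """Toy model of domain-level flag propagation."""

    def __init__(self):
        self.flagged_urls = set()
        self.domain_flags = {}       # domain -> count of flagged URLs
        self.blocked_domains = set()

    def flag(self, url):
        """Flag one URL; escalate to the whole domain past a threshold."""
        domain = urlparse(url).netloc
        if url not in self.flagged_urls:
            self.flagged_urls.add(url)
            self.domain_flags[domain] = self.domain_flags.get(domain, 0) + 1
        if self.domain_flags.get(domain, 0) >= DOMAIN_THRESHOLD:
            self.blocked_domains.add(domain)

    def is_blocked(self, url):
        """A URL is blocked if it, or its entire domain, has been flagged."""
        return url in self.flagged_urls or urlparse(url).netloc in self.blocked_domains


moderator = LinkModerator()
for slug in ("story-one", "story-two", "story-three"):
    moderator.flag(f"https://kansasreflector.com/{slug}")

# Every link on the domain is now blocked, even ones never flagged directly.
print(moderator.is_blocked("https://kansasreflector.com/brand-new-column"))  # True
```

Once a handful of URLs from one domain trip the threshold, every link on that domain is treated as hostile, including ones nobody ever flagged, which lines up with Stone’s description of a block “at the domain level”.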

For an even simpler example, consider how someone forgetting the password to their account looks, to the server, exactly the same as someone trying to break into it. On any website worth its salt, you will be slowed down or prevented from making more than some small number of password attempts, even if you are the actual account owner. This is common security behaviour; Meta’s is merely more advanced.
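A toy lockout policy makes the point; the threshold and window below are made up for illustration, not taken from any real system:

```python
import time

MAX_ATTEMPTS = 5          # hypothetical lockout threshold
WINDOW_SECONDS = 15 * 60  # hypothetical sliding window


class LoginRateLimiter:
    """Toy lockout policy: the server cannot distinguish a forgetful
    owner from an attacker, so it locks out both."""

    def __init__(self):
        self.failures = {}  # username -> timestamps of recent failures

    def record_failure(self, username):
        now = time.monotonic()
        recent = [t for t in self.failures.get(username, []) if now - t < WINDOW_SECONDS]
        recent.append(now)
        self.failures[username] = recent

    def is_locked(self, username):
        now = time.monotonic()
        recent = [t for t in self.failures.get(username, []) if now - t < WINDOW_SECONDS]
        return len(recent) >= MAX_ATTEMPTS


limiter = LoginRateLimiter()
for _ in range(5):
    limiter.record_failure("owner@example.com")

# Five failures in the window lock the account, whoever is at the keyboard.
print(limiter.is_locked("owner@example.com"))  # True
```

From the server’s side there is no difference between the rightful owner mistyping a password five times and a stranger guessing it five times; the safe default is to lock out both.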

This is not to say Meta got this right — not even a little bit. I have no reason to carry water for Meta and I have plenty to criticize; more on that later. Unfortunately, the coverage of this non-story has been wildly disproportionate and misses the actual problems. CNN reported that Meta was “accused of censoring” the post. The Wrap said definitively that it “block[ed] Kansas Reflector and MSNBC columnist over op-ed criticizing Facebook”. An article in PC Magazine claimed “Facebook really, really doesn’t want you to read” Kendall’s story.

This is all nonsense.

What is true and deeply frustrating is the weak approach of companies like Meta and Google toward customer service. Both have offloaded the administrative work of approving or rejecting ads to largely automated systems, with often vague and unhelpful responses, because they have prioritized scale above quality from their earliest days.

For contrast, consider how apps made available in Apple’s App Store have always received human review. There are plenty of automated processes, too, which can detect obvious problems like the presence of known malware — but if an app passes those tests, a person sees it before approving or rejecting it. Of course, this system is also deeply flawed; see the vast number of articles and links I have posted over the years about the topic. Any developer can tell you that Apple’s support has problems, too. But you can see a difference in approaches between companies which have scaled with human intervention, and those which have avoided it.

Criticism of Meta in this case is absolutely warranted. It should be held to a higher standard, with more options available for disputing its moderation judgements, and its pathetic response in this case deserves the scrutiny and scorn it is receiving. This is particularly true as it rolls out its de-prioritization of “political” posts in users’ feeds, while continuing to dodge meaningful explanations of what will be affected.

Dion Lefler, the Wichita Eagle:

Both myself and Eagle investigative reporter Chance Swaim have tried to contact Facebook/Meta — although we knew before we started that it’s a waste of time and typing.

Their corporate phone number is a we-don’t-give-a-bleep recording that hangs up on you after two repeats. And their so-called media relations department is where press emails go to die.

Trying to understand how these megalithic corporations make decisions is painful enough, and their ability to dodge the press gives the impression they are not accountable to anybody. They may operate our social spaces and digital marketplaces, but they are oftentimes poor stewards. There will always be problems at this scale. Yet it often seems as though public-facing tech businesses, in particular, behave as though they are still scrappy upstarts with little responsibility to a larger public.

Meta is proud to say its products “empower more than 3 billion people around the world”. I cannot imagine what it is like to design systems which affect that many people. But it is important to criticize the company when it messes up this badly, without resorting to conspiracy theories or misleading narratives. The press can do better. But Meta also needs to be more responsive, less hostile, and offer better explanations of how these systems work because, like just about any massive entity, nobody should take it at its word.