Let’s Play the Blame Game, for Sure
There are tons of these stories. The question is how you deal with them in ways that don’t throw out all of the good aspects of the internet. How do you distinguish someone running a defamation campaign from someone with a legitimate grievance?
I guess my final point: humanity & society are messy. Sometimes the internet reflects that mess. We shouldn’t immediately jump to the conclusion that, because the internet reflects that mess, it is the cause of that mess or that it can solve that mess.
Dare Obasanjo quoted from the above Twitter thread and added:
I like to frame up this question; who do you blame for drunk driving? The drivers, automobile companies, beer companies or bars?
A lot of the discussion about bad behavior online is like only blaming car companies for not requiring a breathalyzer as part of the ignition process.
I think this is a decent analogy, so let’s take it just a step further to explain why I think its implied conclusion misses the mark.
The reason drunk driving is a problem is not the intoxication itself, in a vacuum, but the effects it is likely to have on the driver, passengers, and others. To be perfectly clear, I am not condoning drunk driving in any context. But we are not worried about the action of literally drinking too much and then driving a car so much as we are about its likely effects. So we have widespread campaigns to dissuade people from driving under the influence. And that’s very good.
But we do not stop there, because some people will shamelessly ignore their personal responsibility and drive when they should not: when they are drunk, or high, or tired, or highly irritable, or using their phone. Some of the drivers on the road at any given time should not be behind the wheel, but that is where they are. Please stop using your phone while behind the wheel.
Cars and roads have also changed as regulators recognized that they, too, play a role in a driver’s safety. Seatbelts, airbags, always-on lights, and crumple zones are well-known improvements. Technology has helped usher in ABS, automatic braking, and traction control. There have been subtler changes to car cabins as designs and materials are now chosen to reduce the likelihood of injury when they impact passengers. Roads are now designed to more effectively drain water, improve visibility, and reduce sudden drop-offs. These are not directly a response to drinking and driving, but they help lessen its worst effects.
It was not so long ago that vehicle collisions were treated as a matter of personal responsibility or bad luck; it was widely assumed that you should expect to be injured or die when a bunch of steel impacts you, no matter whether you are a driver, passenger, pedestrian, or in another vehicle. But it is now understood that lots of people will crash lots of cars into lots of different things for lots of reasons, and that there are many ways to reduce the likelihood of serious injury or death. Personal responsibility undoubtedly remains an important factor, but there are lots of things that can be adjusted to lessen the impact of bad decisions.
That brings me back to technology and platforms. The technology landscape of the early 2000s was obsessed with growth; in many ways, it still is. Venture capital firms were happy to lose huge sums of money over many years while platforms grew, with the hope that one day they could slap some ads on everything and call it a business. Moderation was treated more like an impediment to scale, and less like a safety obligation.
A decade and a half later, in the immortal words of @screaminbutcalm:
Me sowing: Haha fuck yeah!!! Yes!!
Me reaping: Well this fucking sucks. What the fuck.
Platform moderation is hard — unquestionably. It is more difficult for images and video than it is for plain text, and it only gets harder as a platform grows more popular. But we can only definitively say that is the case for platforms as they are designed and built today, with little advance consideration for reducing abuse.
If we used that “Men in Black” neuralyzer to forget how tech platforms work today and had to rebuild them with the dangers of lax moderation in mind, they would probably look and feel somewhat different. But they could be designed and built with more consideration for the real-world effects of abusive users.
If you want me to bring it back to a car analogy, consider the Lamborghini Countach. The original Bertone-designed shape was sadly made lumpier with every new version. But nothing uglified the car more than the plastic bumpers fitted to U.S. models to comply with new safety regulations. The problem was not with the regulations. It was that the car was not designed to accommodate them, so this attempt at compliance looked dreadful. Lamborghini still designs jaw-dropping cars. But now they have been designed to incorporate modern safety regulations so, in addition to looking amazing, they are safer for their occupants and the victims of collisions.
Online platforms face a similar reckoning today. It is not directly their fault that there are horrible people who do horrible things, but they ought to recognize that they can play a role in reducing the predictably horrible effects. Their current efforts resemble the Countach’s plastic bumper extension because the likelihood of abuse continues to be underestimated. They can try to do better, and I welcome these attempts. But I am skeptical that Facebook, Reddit, Twitter, or YouTube are willing to make radical changes. They can’t, really; they are big companies now.
Whatever is being designed and built now that will one day dethrone today’s giants ought to treat safety as a foundational tenet. That should be encouraged by its funding and business model, too, so it does not become something that is auctioned off at a later date.