A Different Perspective on the ‘Design Choices’ Social Media Company Verdicts ⇥ techdirt.com
Mike Masnick, of Techdirt, unsurprisingly opposes the verdicts earlier this week finding Meta and Google liable for how their products impact children’s safety. I think it is a perspective worth reading. Unlike the Wall Street Journal, Masnick respects your intelligence and brings actual substance. Still, I have some disagreements.
Masnick, on the “design choices” argument:
This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
This sounds like a reasonable retort until you think about it for three more seconds and realize that the lack of neutrality in the outcomes of these decisions is the entire point. Users post all kinds of stuff on social media platforms, and those posts can be delivered in all kinds of different ways, as Masnick also writes. They can be shown in reverse-chronological order in a lengthy scroll, or they can be shown one at a time, as with Stories. The posts someone sees might be limited to just the accounts they have opted into, or broadened to any account from anyone in the world. Twitter used to have a public “firehose” feed.
But many of the biggest and most popular platforms have coalesced around a feed of material users did not ask for. This is not like television, where each show has been produced and vetted by human beings, and there are expectations for what is on at different times of the day. This is automated, and users have virtually no control within the platforms themselves. If you do not like what Instagram is serving you on your main feed, your only choice is to stop using Instagram entirely, even if you like and use other features.
Platforms know people will post objectionable and graphic material if they are given a text box or an upload button. We know it is “impossible” to moderate a platform well at scale. But we are supposed to believe they have basically no responsibility for what users post and what their systems surface in users’ feeds? Pick one.
Masnick, on the risks of legal accountability for smaller platforms:
And this is already happening. TikTok and Snap were also named as defendants in the California case. They both settled before trial — not because they necessarily thought they’d lose on the merits, but because the cost of fighting through a multi-week jury trial can be staggering. If companies the size of TikTok and Snap can’t stomach the expense, imagine what this means for mid-size platforms, small forums, or individual website operators.
I am going to need a citation for the claim that TikTok and Snap caved because they could not afford to keep fighting. It seems just as plausible they could see which way the wind was blowing, given what I have read so far in the evidence that has been released.
Masnick:
One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.
The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”
Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.
This is undeniably a worrisome precedent. I will note that Raúl Torrez, New Mexico’s Attorney General and the man who brought this case against Meta, says he wants to restrict encrypted communications for minors only. How that would be implemented is an obvious question, though one that mandated age-gating would admittedly make straightforward.
Meta cited low usage when it announced earlier this month that it would be turning off end-to-end encryption in Instagram. If there is a safety or liability rationale, it is one Meta would probably find difficult to articulate, given that end-to-end encryption remains available and enabled by default in Messenger and WhatsApp. An executive raised concerns about the feature when it was being planned, drawing a distinction between it and WhatsApp because the latter “does not make it easy to make social connections, meaning making Messenger e2ee will be far, far worse”.
I think Masnick makes some good arguments in this piece and raises some good questions. It is very possible, even likely, this all gets unwound on appeal. I, too, expect the ripple effects of these cases to create some chaos. But I do not think the correct response to a lack of corporate accountability — or, frankly, standards — is, in Masnick’s words, “actually funding mental health care for young people”. That is not to say mental health care should not be funded, only that it is a red herring of a response. In the U.S., total spending on children’s mental health care rose by 50% between 2011 and 2017; it continued to rise through the pandemic, of course. Perhaps that is not enough. But it is also extraordinary to think that we should allow companies to do knowingly harmful things and expect everyone else to correct for the predictable outcomes.