TikTok’s Design Breaches Digital Services Act, According to Preliminary European Commission Findings ⇥ ec.europa.eu
The European Commission:
The Commission’s investigation preliminarily indicates that TikTok did not adequately assess how these addictive features could harm the physical and mental wellbeing of its users, including minors and vulnerable adults.
For example, by constantly ‘rewarding’ users with new content, certain design features of TikTok fuel the urge to keep scrolling and shift the brain of users into ‘autopilot mode’. Scientific research shows that this may lead to compulsive behaviour and reduce users’ self-control.
Additionally, in its assessment, TikTok disregarded important indicators of compulsive use of the app, such as the time that minors spend on TikTok at night, the frequency with which users open the app, and other potential indicators.
It is fair for regulators to question the efficacy of measures that claim to “promote healthier sleep habits”. This wishy-washy verbiage is just as irritating here as it is when employed by supplement companies, and it should be more strictly regulated.
Trying to isolate infinite scrolling as a key factor in encouraging unhealthy habits is, I think, an oversimplification. Despite the conclusions some have drawn, I am not sure that is what the Commission is suggesting. It appears to have found infinite scrolling is one part of a constellation of features intended to increase the time users spend in the app, whatever the impact on those users. In an article published last year in Perspectives on Public Health, two psychologists sought to distinguish this kind of compulsive use from other internet-driven phenomena, arguing that short-form video “has been particularly effective at triggering psychological patterns that keep users in a continuous scrolling loop”, and pointing to a 2023 article in Proceedings of the ACM on Human-Computer Interaction. It is a mix of the engaging quality of video with the unknown of what comes next: like flipping through television channels, only entirely tailored to what each user has previously become enamoured with.
Casey Newton reported on the Commission’s investigation and a similar U.S. lawsuit. Here is the lede:
The old way of thinking about how to make social platforms safer was that you had to make them do more content moderation. Hire more people, take down more posts, put warning labels on others. Suspend people who posted hate speech, and incitements to violence, or who led insurrections against their own governments.
At the insistence of lawmakers around the world, social platforms did all of this and more. But in the end they had satisfied almost no one. To the left, these new measures hadn’t gone nearly far enough. To the right, they represented an intolerable infringement of their freedom of expression.
I find this left–right framing of the outcomes entirely unproductive and, frankly, dumb. Even as a broad generalization, it makes little sense: there are plenty of groups across the political spectrum arguing their speech is being suppressed. I am not saying these individual complaints are necessarily invalid. I just think Newton’s argument is silly.
Adequate moderation is an effective tool for limiting the spread of potentially harmful posts to users of all ages. Substack may be totally cool with Nazis, but that stance rarely makes for a healthy community. Marginalizing harmful speech and setting relatively strict boundaries for what is permissible encourages better behaviour, even from pseudonymous users. Moderation is difficult to do well, impossible to do right, and, of course, insufficient on its own, but it is not an old, outdated way of thinking, regardless of what Mark Zuckerberg argues.
Newton:
Of course, Instagram Reels and YouTube Shorts work in similar ways. And so, whether on the stand or before the commission, I hope platform executives are called to answer: if you did want to make your products addictive, how different would they really look from the ones we have now?
This is a very good argument. All of these platforms are deliberately designed to maximize user time. They are not magic, nor are they casting some kind of spell on users, but we are increasingly aware they pose risks to people of all ages. Is it so unreasonable for regulators to have a role?