FTC Staff Report on Social Media Data Collection ftc.gov

The U.S. Federal Trade Commission:

A new Federal Trade Commission staff report that examines the data collection and use practices of major social media and video streaming services shows they engaged in vast surveillance of consumers in order to monetize their personal information while failing to adequately protect users online, especially children and teens.

The staff report is based on responses to 6(b) orders issued in December 2020 to nine companies including some of the largest social media and video streaming services: Amazon.com, Inc., which owns the gaming platform Twitch; Facebook, Inc. (now Meta Platforms, Inc.); YouTube LLC; Twitter, Inc. (now X Corp.); Snap Inc.; ByteDance Ltd., which owns the video-sharing platform TikTok; Discord Inc.; Reddit, Inc.; and WhatsApp Inc.

This is, even for me, a surprisingly dry report. I struggled to get through it, in part, perhaps, because many of these behaviours are well known to me and, probably, to you. But it is, I think, worthwhile to have a single document laying out how hostile these companies are to personal privacy.

My copy contains dozens of highlighted passages where companies report ingesting and exploiting non-user data, inferring demographic and personal details users never disclosed, and enriching their own collected data with third-party libraries. The latter is illustrated on page 33 like a biology diagram of creepy behaviour. Other highlights include an entire section dedicated to U.S. users’ access to the rights the GDPR confers on European users, poor or nonexistent user testing of privacy controls, and bad documentation of data handling and minimization practices.

Much of this is stuff I either knew or could have assumed, but that is not to say I learned nothing. One notable finding is that “most Companies did not proactively delete inactive or abandoned accounts”, which makes sense in a vacuum; paired with the possibility of that data being used to train machine learning and A.I. features, it is less comforting.

As the Commission says, this report also looks at how children and teens use these platforms, and what safeguards are in place. I understand this is a controversial issue, and it is unclear which findings are legitimately concerning and which reflect a moral panic. I will note that endless machine-powered suggestions are a relatively new phenomenon; Instagram only switched its feed to that format eight years ago. I think it is fair to be worried about the effects these devices and services are having on young brains without following that worry to Jonathan Haidt’s questionable conclusions.