
Remember how, in 2023, the U.S. Office of the Director of National Intelligence published a report acknowledging mass stockpiling of third-party data it had purchased? It turns out there is so much private information about people available for purchase that it is creating a big headache for intelligence agencies — not because of any laws or ethical qualms, but simply because of the sheer volume.

Sam Biddle, the Intercept:

The Office of the Director of National Intelligence is working on a system to centralize and “streamline” the use of commercially available information, or CAI, like location data derived from mobile ads, by American spy agencies, according to contract documents reviewed by The Intercept. The data portal will include information deemed by the ODNI as highly sensitive, that which can be “misused to cause substantial harm, embarrassment, and inconvenience to U.S. persons.” The documents state spy agencies will use the web portal not just to search through reams of private data, but also run them through artificial intelligence tools for further analysis.

Apparently, the plan is to feed all this data purchased from brokers and digital advertising companies into artificial intelligence systems. The DNI says it has rules about purchasing and using this data, so there is nothing to worry about.

By the way, the DNI’s Freedom of Information Act page was recently updated to remove links to released records and FOIA logs. They were live on May 5 but, as of May 16, those pages have been removed, and direct links no longer resolve either. Strange.

Update: The ODNI told me its “website is currently under construction”.

Joseph Cox, 404 Media:

Flock’s new product, called Nova, will supplement license plate data with a wealth of personal information sourced from other companies and the wider web, according to the material obtained by 404 Media. “You’re going to be able to access data and jump from LPR to person and understand what that context is, link to other people that are related to that person […] marriage or through gang affiliation, et cetera,” a Flock employee said during an internal company meeting, according to an audio recording. “There’s very powerful linking.” One Slack message said that Nova supports 20 different data sources that agencies can toggle on or off.

According to Cox’s reporting, this includes the usual smattering of creepy data brokers but, also, private data made public due to illegal breaches. One may consider that “open source” in much the same way as Meta benefitting from pirated books, which is to say that it is something for which a normal person would be criminally prosecuted, but is apparently fine for corporations. It may be unwise for the limits of our privacy to be governed by the greed of capitalism.

Update: The good news is that Flock has decided not to use leaked data. The bad news is this is apparently a decision the company can make instead of a law it needs to follow.

Dell Cameron, Wired:

A cache of more than two dozen police records recently reviewed by WIRED show US law enforcement agencies regularly trained on how to take advantage of “connected cars,” with subscription-based features drastically increasing the amount of data that can be accessed during investigations. The records make clear that law enforcement’s knowledge of the surveillance far exceeds that of the public and reveal how corporate policies and technologies — not the law — determine driver privacy.

On Bluesky, Cameron published a screenshot of what is available by car make and model year “as a treat”. Owners of Ferraris will be delighted to know they are not on this list.

Cameron’s reporting indicates law enforcement is able to obtain information from automakers and cellular network operators. Four years ago, Joseph Cox, then at Vice, reported on capabilities offered by the Ulysses Group, previously linked. Then, last year, Kashmir Hill of the New York Times reported the sharing of data by General Motors to insurance companies and data brokers. Each of these depicts an entirely different avenue by which individual vehicles may be surveilled, stockpiling data which may be produced without a warrant.

Kendra Barnett, AdWeek:

Filed on March 31 in the Central District of California, one of the class action cases takes aim at The Trade Desk’s Unified ID 2.0 (UID2) identifier, alleging that the tracking tech collects personally identifiable information, like email addresses and phone numbers, and uses it to enable user profiling and real-time bidding.

[…]

The Trade Desk, plaintiffs allege in this case, operates like a data broker as it tracks, profiles, and de-anonymizes internet users without their knowledge or consent via its Adsrvr Pixel, which follows users across different parts of the web and across devices.

I mentioned UID2 in passing a couple of years ago. The claims (PDF) in these lawsuits (PDF), untested in court, are certainly worrisome but only as much as any of these user identification and enrichment services. At least California has privacy laws to hold the Trade Desk accountable instead of relying on the Federal Trade Commission’s less direct processes.

Joseph Cox and Dhruv Mehrotra, in an article jointly published by 404 Media and Wired:

Last year, a media investigation revealed that a Florida-based data broker, Datastream Group, was selling highly sensitive location data that tracked United States military and intelligence personnel overseas. At the time, the origin of that data was unknown.

Now, a letter sent to US senator Ron Wyden’s office that was obtained by an international collective of media outlets — including WIRED and 404 Media — claims that the ultimate source of that data was Eskimi, a little-known Lithuanian ad-tech company. Eskimi, meanwhile, denies it had any involvement.

The letter was apparently sent by Datastream, which means it either has no idea where it got this extremely precise location information, or Eskimi is being dishonest. That is kind of the data broker industry in a nutshell: a vast sea of our personal information being traded indiscriminately by businesses worldwide — whose names we have never heard of — with basically no accountability or limitations.

Ingo Dachwitz and Sebastian Meineck, Netzpolitik:

A new data set obtained from a US data broker reveals for the first time about 40,000 apps from which users’ data is being traded. The data set was obtained by a journalist from netzpolitik.org as a free preview sample for a paid subscription. It is dated to a single day in the summer of 2024.

Among other things, the data set contains 47 million “Mobile Advertising IDs”, to which 380 million location data points from 137 countries are assigned. In addition, the data set contains information on devices, operating systems and telecommunication providers.

This is, somehow, different from the Gravy Analytics breach. The authors note this data set includes fairly precise location information about specific users, and they got all this in a free sample of one day of Real Time Bidding data. All of this is legal — at least in the U.S.; German authorities are investigating and have threatened sanctions — and can be collected by anyone willing to either pay or become a participant in RTB themselves.
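To make RTB concrete: each time an ad slot loads, a bid request is broadcast to dozens or hundreds of companies, any of which can log it. Here is a hypothetical, much-simplified sketch of such a record, loosely modelled on OpenRTB field names; all values are invented.

```python
# Hypothetical, much-simplified sketch of a real-time bidding request,
# loosely modelled on OpenRTB field names. All values are invented;
# real bid requests carry many more fields.
bid_request = {
    "id": "b7f9c2e4",                              # auction ID
    "app": {"bundle": "com.example.weather"},      # app showing the ad
    "device": {
        "ifa": "3f2a9c1e-0000-4b6d-9e7a-1234567890ab",  # mobile ad ID
        "os": "iOS",
        "carrier": "310-410",
        "geo": {"lat": 51.0447, "lon": -114.0719, "type": 1},  # GPS-derived
    },
}

# Every company that receives this request, whether or not it wins the
# auction, can log the advertising ID next to a precise location and a
# timestamp. Joined over days, those logs become the movement histories
# described in these stories.
print(bid_request["device"]["geo"])
```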

Joseph Cox, 404 Media:

Hackers claim to have compromised Gravy Analytics, the parent company of Venntel which has sold masses of smartphone location data to the U.S. government. The hackers said they have stolen a massive amount of data, including customer lists, information on the broader industry, and even location data harvested from smartphones which show peoples’ precise movements, and they are threatening to publish the data publicly.

You remember Gravy Analytics, right? It is the one from the stories and the FTC settlements, though it should not be confused with all the other ones.

Cox, again, 404 Media:

Included in the hacked Gravy data are tens of millions of mobile phone coordinates of devices inside the US, Russia, and Europe. Some of those files also reference an app next to each piece of location data. 404 Media extracted the app names and built a list of mentioned apps.

The list includes dating sites Tinder and Grindr; massive games such as Candy Crush, Temple Run, Subway Surfers, and Harry Potter: Puzzles & Spells; transit app Moovit; My Period Calendar & Tracker, a period tracking app with more than 10 million downloads; popular fitness app MyFitnessPal; social network Tumblr; Yahoo’s email client; Microsoft’s 365 office app; and flight tracker Flightradar24. The list also mentions multiple religious-focused apps such as Muslim prayer and Christian Bible apps; various pregnancy trackers; and many VPN apps, which some users may download, ironically, in an attempt to protect their privacy.

This location data, some of it more granular than others, appears to be derived from real-time bidding on advertising, much like the Patternz case last year. In linking to — surprise — Cox’s reporting on Patternz, I also pointed to a slowly developing lawsuit against Google. In a filing (PDF) from the plaintiffs, so far untested in court, there are some passages that can help contextualize the scale and scope of real-time bidding data (emphasis mine):

As to the Court’s second concern about the representative nature of the RTB data produced for the plaintiffs (the “Plaintiff data”), following the Court’s Order, Google produced six ten-minute intervals of class-wide RTB bid data spread over a three-year period (2021-2023) (the “Class data”). Further Pritzker Decl., ¶ 17. Prof. Shafiq analyzed this production, encompassing over 120 terabytes of data and almost [redacted] billion RTB bid requests. His analysis directly answers the Court’s inquiry, affirming that the RTB data are uniformly personal information for the plaintiffs and the Class, and that the Plaintiff data is in fact representative of the Class as a whole.

[…]

[…] For the six ten-minute periods of Class data Google produced, Prof. Shafiq finds that there were at least [redacted] different companies receiving the bid data located in at least [redacted] countries, and that the companies included some of the largest technology companies in the world. […]

This is Google, not Gravy Analytics, but still — this entire industry is morally bankrupt. It should not be a radical position that using an app on your phone or browsing the web should not opt you into such egregious violations of basic elements of your privacy.

Jonathan Stempel, Reuters:

Apple agreed to pay $95 million in cash to settle a proposed class action lawsuit claiming that its voice-activated Siri assistant violated users’ privacy.

A preliminary settlement was filed on Tuesday night in the Oakland, California federal court, and requires approval by U.S. District Judge Jeffrey White.

Alex Hern, who wrote the 2019 Guardian story forming the basis of many complaints in the lawsuit, today on Bluesky:

There’s two claims in one case and one of them Apple is bang to rights on (“Siri records accidental interactions”) and the other is worth far far more than $95m to disprove (“those recordings are shared with advertisers”)

The original complaint (PDF), filed just a couple of weeks after Hern’s story broke, does not once mention advertising. A revised complaint (PDF), filed a few months later, mentions it once and only in passing (emphasis mine):

Apple’s actions were at all relevant times knowing, willful, and intentional as evidenced by Apple’s admission that a significant portion of the recordings it shares with its contractors are made without use of a hot word and its use of the information to, among other things, improve the functionality of Siri for Apple’s own financial benefit, to target personalized advertising to users, and to generate significant profits. Apple’s actions were done in reckless disregard for Plaintiffs’ and Class Members’ privacy rights.

This is the sole mention in the entire complaint, and there is no citation or evidence for it. However, a further revision (PDF), filed in 2021, contains plenty of anecdotes:

Several times, obscure topics of Plaintiff Lopez’s and Plaintiff A.L.’s private conversations were used by Apple and its partners to target advertisements to them. For example, during different private conversations, Plaintiff Lopez and Plaintiff A.L. mentioned brand names including “Olive Garden,” “Easton bats,” “Pit Viper sunglasses,” and “Air Jordans.” These advertisements were targeted to Plaintiffs Lopez and A.L. Subsequent to these private conversations, Plaintiff Lopez and Plaintiff A.L., these products began to populate Apple search results and Plaintiffs also received targeted advertisements for these products in Apple’s Safari browser and in third party applications. Plaintiffs Lopez and A.L. had not previously searched for these items prior to the targeted advertisements. Because the intercepted conversations took place in private to the exclusion of others, only through Apple’s surreptitious recording could these specific advertisements be pinpointed to Plaintiffs Lopez and A.L.

I am filing this in the needs supporting evidence column alongside other claims of microphones being used to target advertising. I sympathize with the plaintiffs in this case, but nothing about their anecdotes — more detail on pages 8 and 10 of the complaint — is compelling, as alternative explanations are possible.

For example, one plaintiff discussed a particular type of surgery with his doctor, and then saw ads on his iPhone related to the condition it treats. While it seems possible that Siri was erroneously activated, that Apple received a copy of the recording, and that its contents were automatically transcribed and sold to data brokers, this is massively speculative compared to what we know ad tech companies do. Perhaps the doctor’s office was part of a geofenced ad campaign. Or, perhaps the doctor was searching related keywords and then, because the plaintiff’s phone was in the proximity of the doctor’s devices, some cross-device targeting became confused. Neither of these explanations involves microphones, let alone Siri.
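Geofencing, for what it is worth, is trivial to build from the ping data brokers already sell. A hypothetical sketch, with invented coordinates and radius:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in metres.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

CLINIC = (34.0522, -118.2437)  # invented point of interest
RADIUS_M = 150

def in_audience(ping):
    # A device whose ping falls inside the fence joins the ad segment.
    lat, lon = ping
    return haversine_m(lat, lon, *CLINIC) <= RADIUS_M

# One advertising-ID ping from a waiting room is enough to put a device
# in a "visited this location" segment. No microphone required.
print(in_audience((34.0523, -118.2438)))  # True
```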

Yet, because Apple settled this lawsuit, it looks like it is not interested in fighting these claims. It creates another piece of pseudo-evidence for people who believe microphone-equipped devices are transforming idle conversations into perfectly targeted ads.

None of these stories have so far been proven, and there is not a shred of direct evidence it is occurring — but I can understand why people are paranoid. While businesses have exploited private data to sell ads for decades, we have dramatically increased the number of devices we own and the time we spend with them, with few meaningful steps taken toward user privacy. We are feeding every part of this nauseating industry more data with, in many countries, about the same regulatory oversight.

I could be entirely wrong. Apple could have settled this case because it is, indeed, doing more-or-less what the plaintiffs say. To that possibility, I say: show me real evidence. I have no problem admitting I got something wrong.

Update: Apple has issued a statement in which it says it has “never used Siri data to build marketing profiles, never made it available for advertising, and never sold it to anyone for any purpose”.

A press release from U.K. consumer advocacy group Which?:

The consumer champion rated products across four categories and gave them overall privacy scores for factors including consent and what data access they want. Researchers found data collection often went well beyond what was necessary for the functionality of the product – suggesting data could, in some cases, be being shared with third parties for marketing purposes. Which? is calling for firms to prioritise privacy over profits.

This includes products as pedestrian as air fryers, which apparently wanted the precise location of users and permission to record audio. There could be a valid reason for these permissions — for example, perhaps the app allows you to automate the air fryer to preheat when you return home; or, perhaps there is voice control functionality which, for understandable reasons, is not delineated in a permissions request for “recording” one’s voice.

I downloaded the Xiaomi app to look into these possibilities, but I was unable to proceed unless I created an account and connected a relevant product. I also looked at manuals for different smart air fryers from these brands, but that did not clear anything up because — wisely — these manufacturers do not include app-related information in their printed documentation.

Even if these permissions requests are perfectly innocent and were correctly documented — according to Which?, they are not — it is ridiculous that buyers need to consider all this just to use some appliance.

Matthew Gault, Gizmodo:

But it shouldn’t be this way. Every piece of tech shouldn’t be a devil’s bargain where we allow a tech company to read through our phone’s contact list so we can remotely shut off an oven. More people are pissed about this issue and complaining to their government. Watchdog groups in the U.K. and the U.S. are paying attention.

We can do something about this. We can have comprehensive privacy laws with the backing of well-funded regulators. But until that happens, everything “smart” is capable of lucrative contributions to the broader data broker and surveillance advertising markets, just because people want to use the product’s features.

Liv McMahon and Lily Jamali, BBC News:

TikTok’s bid to overturn a law which would see it banned or sold in the US from early 2025 has been rejected.

[…]

TikTok says it will now take its fight to the US Supreme Court, the country’s highest legal authority.

The court’s opinion (PDF) is not particularly long. As this is framed as a question of national security, the court gives substantial deference to the government’s assessment of TikTok’s threat. It also views the legislation passed earlier this year to limit data brokers as a complementary component of this TikTok divest-or-ban law.

I still do not find this argument particularly compelling. There is still too much dependence on classified information and too little public evidence. A generous interpretation of this is the court knows something I do not, and perhaps this is completely justified. But who knows? The paranoia over this app is leaking but the proof is not.

Donald Trump’s victory in the 2024 US Presidential Election may also present a lifeline for the app.

Despite unsuccessfully attempting to ban TikTok during his first term in 2020, he said in the run-up to the November elections he would not allow the ban on TikTok to take effect.

I would be shocked if the incoming administration remains committed to overturning this ban, and not just because of its historically flaky reputation. This very decision references the actions of the first Trump presidency, though it owes more to the more tailored policies of the Biden administration.

If the U.S. Supreme Court does not stay this order and TikTok’s U.S. operations are not jettisoned from its global business, the ban will go into effect the day before Trump’s inauguration.

Out of the U.S. today comes a slew of new proposed restrictions against data brokers and their creepy practices.

The Consumer Financial Protection Bureau:

[…] The proposed rule would limit the sale of personal identifiers like Social Security Numbers and phone numbers collected by certain companies and make sure that people’s financial data such as income is only shared for legitimate purposes, like facilitating a mortgage approval, and not sold to scammers targeting those in financial distress. The proposal would make clear that when data brokers sell certain sensitive consumer information they are “consumer reporting agencies” under the Fair Credit Reporting Act (FCRA), requiring them to comply with accuracy requirements, provide consumers access to their information, and maintain safeguards against misuse.

The Federal Trade Commission:

The Federal Trade Commission will prohibit data broker Mobilewalla, Inc. from selling sensitive location data, including data that reveals the identity of an individual’s private home, to settle allegations the data broker sold such information without taking reasonable steps to verify consumers’ consent.

And also the Federal Trade Commission:

The Federal Trade Commission is taking action against Gravy Analytics Inc. and its subsidiary Venntel Inc. for unlawfully tracking and selling sensitive location data from users, including selling data about consumers’ visits to health-related locations and places of worship.

Both of the proposed FTC orders require these businesses to “maintain a sensitive location data program designed to develop a list of sensitive locations and prevent the use, sale, license, transfer, sharing, or disclosure of consumers’ visits to those locations”. These include, for example and in addition to those in the above quotes, shelters, labour union offices, correctional facilities, and military installations. This order was previewed last month in Wired.

As usual, I am conflicted about these policies. While they are yet another example of Lina Khan’s FTC and other government bureaucrats cracking down on individually threatening data brokers, it would be far better for everyone if this were not handled on a case-by-case basis. These brokers have already caused a wealth of damage around the world, and only they are being required to stop. Other players in the rest of the data broker industry will either self-govern or hope they do not fall into the FTC’s crosshairs, and if you believe the former is more likely, you have far greater faith in already-shady businesses than I do.

There is another wrench in these proposals: we are less than two months away from a second Trump presidency, and the forecast for the CFPB looks unfriendly. It was kneecapped during the first administration and it is on the chopping block for those overseeing an advisory committee masquerading as a government agency. The future of the FTC is murkier, with some indicators it will continue its current path — albeit from a Republican-skewed perspective — while others suggest a reversal.

The centring of the U.S. in the digital activity of a vast majority of us gives it unique power on privacy — power it has, so far, used in only very small doses. The future of regulatory agencies like these has relevance to all of us.

Dhruv Mehrotra and Dell Cameron, Wired:

A joint investigation by WIRED, Bayerischer Rundfunk (BR), and Netzpolitik.org reveals that US companies legally collecting digital advertising data are also providing the world a cheap and reliable way to track the movements of American military and intelligence personnel overseas, from their homes and their children’s schools to hardened aircraft shelters within an airbase where US nuclear weapons are believed to be stored.

A collaborative analysis of billions of location coordinates obtained from a US-based data broker provides extraordinary insight into the daily routines of US service members. The findings also provide a vivid example of the significant risks the unregulated sale of mobile location data poses to the integrity of the US military and the safety of its service members and their families overseas.

Yet another entry in the ongoing series of stories documenting how we have created a universal unregulated tracking system accessible to basically anyone so that, incidentally, it will make someone slightly more likely to buy a specific brand of cereal. This particular demonstration feels like a reversal of governments using this data to surveil people with less oversight and fewer roadblocks.

The FTC is apparently planning to address this by, according to these reporters, “formally recogniz[ing] US military installations as protected sites”, which is a truly bananas response. The correct answer is for lawmakers to pass a strong privacy framework that restricts data collection and retention, but doing so would be economically costly and would impede the exploitation of this data by the U.S. and its allies. Instead, the world’s most powerful military is going to tell scummy data brokers not to track people within specific areas all over the world.

Reporters and researchers, meanwhile, will continue to point out how this mass data collection makes everyone vulnerable. It feels increasingly like splitting hairs between the surveillance volunteered by U.S. industry, and that which is mandated by more oppressive governments. I recognize there is a difference — the force is the difference — but the effect is comparable.

Brian Krebs:

In an interview, Atlas said a private investigator they hired was offered a free trial of Babel Street, which the investigator was able to use to determine the home address and daily movements of mobile devices belonging to multiple New Jersey police officers whose families have already faced significant harassment and death threats.

[…]

Atlas says the Babel Street trial period allowed its investigator to find information about visitors to high-risk targets such as mosques, synagogues, courtrooms and abortion clinics. In one video, an Atlas investigator showed how they isolated mobile devices seen in a New Jersey courtroom parking lot that was reserved for jurors, and then tracked one likely juror’s phone to their home address over several days.

Krebs describes a staggering series of demonstrations by the investigator for Atlas, plaintiff in a suit against Babel Street: precise location tracking of known devices, or dragnet-style tracking of a cluster of devices, basically anywhere. If you or I collected device locations and shared them with others, it would rightly be seen as creepy — at the very least. Yet these intrusive behaviours have been normalized. They are not normal. What these companies are doing ought to be criminal.

It is not just Babel Street. Other names have popped up over the years, including Venntel and Fog Data Science. Jack Poulson, who writes All-Source Intelligence, has an update on the former:

According to a public summary of a contract signed in early August, the U.S. Federal Trade Commission has opened an inquiry into the commercial cellphone location-tracking data broker Venntel and its parent company Gravy Analytics. […]

Gravy Analytics’ data, via Venntel, is apparently one of the sources for Babel Street’s tracking capabilities.

You might remember Babel Street; I have linked to several stories about the company. This reporting was most often done by Byron Tau, then at the Wall Street Journal, and Joseph Cox, then at Vice. Tau wrote a whole book about the commercial surveillance apparatus. Both reporters were also invited to the same demo Krebs saw; Tau’s story, at Notus, is login-walled:

The demonstration offers a rare look into how easily identifiable people are in these location-based data sets, which brokers claim are “anonymized.”

Such claims do not hold up to scrutiny. The tools in the hands of capable researchers, including law enforcement, can be used to identify specific individuals in many cases. Babel’s tool is explicitly marketed to intelligence analysts and law enforcement officers as a commercially available phone-tracking capability — a way to do a kind of surveillance that once required a search warrant inside the U.S. or was conducted by spy agencies when done outside the U.S.
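It is worth spelling out how thin that “anonymization” is. A toy sketch with invented pings: keep a device’s overnight location reports, and the most common rounded coordinate is, for most people, a home address.

```python
from collections import Counter
from datetime import datetime

# Invented pings for a single advertising ID: (ISO timestamp, lat, lon).
pings = [
    ("2024-06-01T02:14:00", 51.0447, -114.0719),
    ("2024-06-01T23:41:00", 51.0448, -114.0721),
    ("2024-06-02T03:05:00", 51.0446, -114.0720),
    ("2024-06-02T12:30:00", 51.0781, -114.1320),  # daytime, elsewhere
]

def likely_home(pings, night_hours=range(0, 6)):
    # Round coordinates to roughly 100-metre cells and count overnight
    # visits; the modal cell is, for most people, where they sleep.
    cells = Counter(
        (round(lat, 3), round(lon, 3))
        for ts, lat, lon in pings
        if datetime.fromisoformat(ts).hour in night_hours
    )
    return cells.most_common(1)[0][0]

print(likely_home(pings))  # (51.045, -114.072)
```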

Cox now writes at 404 Media:

Atlas also searched a school in Philadelphia, which returned nearly 7,000 devices. Due to the large number of phones, it is unlikely that these only include adult teachers, meaning that Babel Street may be holding onto data belonging to children too.

All these stories are worth your time. Even if you are already aware of this industry. Even if you remember that vivid New York Times exploration of an entirely different set of data brokers published six years ago. Even if you think Apple is right to allow users to restrict access to personal data.

This industry is still massive and thriving. It is still embedded in applications on many of our phones, by way of third-party SDKs for analytics, advertising, location services, and more. And it is deranged that the one government that can actually do something about this — the United States — is doing so one company and one case at a time. Every country should be making it illegal to do what Babel Street is capable of. But perhaps it is too rich a source.

Sarah Perez, TechCrunch:

iOS apps that build their own social networks on the back of users’ address books may soon become a thing of the past. In iOS 18, Apple is cracking down on the social apps that ask users’ permission to access their contacts — something social apps often do to connect users with their friends or make suggestions for who to follow. Now, Apple is adding a new two-step permissions pop-up screen that will first ask users to allow or deny access to their contacts, as before, and then, if the user allows access, will allow them to choose which contacts they want to share, if not all.

Kevin Roose, New York Times, in an article with the headline “Did Apple Just Kill Social Apps?”:

Now, some developers are worried that they may struggle to get new apps off the ground. Nikita Bier, a start-up founder and advisor who has created and sold several viral apps aimed at young people, has called the iOS 18 changes “the end of the world,” and said they could render new friend-based social apps “dead on arrival.”

That might be a little melodramatic. I recently spent some time talking to Mr. Bier and other app developers and digging into the changes. I also heard from Apple about why they believe the changes are good for users’ privacy, and from some of Apple’s rivals, who see it as an underhanded move intended to hurt competitors. And I came away with mixed feelings.

Leaving aside the obviously incendiary title, I think this article’s framing is pretty misleading. Apple’s corporate stance is the only one favourable to these limitations. Bier is the only on-the-record developer who thinks these changes are bad; while Roose interviewed others who said contact uploads had slowed since iOS 18’s release, they were not quoted “out of fear of angering the Cupertino colossus”. I suppose that is fair — Apple’s current relationship with developers seems to be pretty rocky. But this article ends up poorly litigating Bier’s desires against Apple giving more control to users.

Bier explicitly markets himself as a “growth expert”; his bio on X is “I make apps grow really fast”. He has, to quote Roose, “created and sold several viral apps” in part by getting users, even children, to share their contact lists. Bier’s first hit app, TBH, was marketed to teenagers and — according to several sources I could find, including a LinkedIn post by Kevin Natanzon — it “requested address book access before actually being able to use the app”. A more respectful way of offering this feature would be to ask for contacts permission only when users want to add friends. Bier’s reputation for success is built on this growth hacking technique, so I understand why he is upset.

What I do not understand is granting Bier’s objections the imprimatur of a New York Times story when one can see the full picture of Bier’s track record. On the merits, I am unsympathetic to his complaints. Users can still submit their full contact list if they so choose, but now they have the option of permitting only some access to an app I have not even decided I trust.

Roose:

Apple’s stated rationale for these changes is simple: Users shouldn’t be forced to make an all-or-nothing choice. Many users have hundreds or thousands of contacts on their iPhones, including some they’d rather not share. (A therapist, an ex, a random person they met in a bar in 2013.) iOS has allowed users to give apps selective access to their photos for years; shouldn’t the same principle apply to their contacts?

The surprise is not that Apple is allowing more granular contacts access; it is that it has taken this long for the company to do so. Developers big and small have abused this feature to a shocking degree. Facebook ingested the contact lists of a million and a half users unintentionally — and millions of users intentionally — a massive collection of data which was used to inform its People You May Know feature. LinkedIn is famously creepy and does basically the same thing. Clubhouse borrowed from the TBH playbook by slurping up contacts before you could use the app.1 This has real consequences, surfacing connections many people would want to keep hidden.

Even a limited capacity of allowing users to more easily invite friends can go wrong. When Tribe offered such a feature, it spammed users’ contacts. It settled a resulting class action suit in 2018 for $200,000 without admitting wrongdoing. That may have been accidental. Circle, on the other hand, was deliberate in its 2013 campaign.

Apple’s position is, therefore, a reasonable one, but it is strange to see no voices from third-party experts favourable to this change. Well-known iOS security researchers Mysk celebrated it; why did Roose not talk to them? I am sure there are others who would happily adjudicate Apple’s claims. The cool thing about a New York Times email address is that people will probably reply, so it seems like a good idea to put that power to use. Instead, all we get is this milquetoast company-versus-growth-hacker narrative, with some antitrust questions thrown in toward the end.

Roose:

Some developers also pointed out that the iOS 18 changes don’t apply to Apple’s own services. iMessage, for example, doesn’t have to ask for permission to access users’ contacts the way WhatsApp, Signal, WeChat and other third-party messaging apps do. They see that as fundamentally anti-competitive — a clear-cut example of the kind of self-preferencing that antitrust regulators have objected to in other contexts.

I am not sure this is entirely invalid, but it seems like an overreach. The logic of requiring built-in apps to request the same permissions as third-party apps is, I think, understandable on fairness grounds, but there is a reasonable argument to be made for implied consent as well. Assessing this is a whole different article.

But Messages accesses the contacts directory on-device, while many other apps will transport the list off-device. That is a huge difference. Your contact list is almost certainly unique. The specific combination of records is a goldmine for social networks and data brokers wishing to individually identify you, and understand your social graph.
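A rough sketch of why that uniqueness matters, with invented numbers: hashing contact lists, which uploaders sometimes present as a privacy measure, does not stop a collector from matching the same list across services.

```python
import hashlib

def fingerprint(contacts):
    # Normalize and hash each phone number. Hashing sounds protective,
    # but identical numbers always produce identical hashes, so sets of
    # hashes can still be compared across uploads.
    return {
        hashlib.sha256(num.replace(" ", "").replace("-", "").encode()).hexdigest()
        for num in contacts
    }

def overlap(a, b):
    # Jaccard similarity between two uploaded address books.
    return len(a & b) / len(a | b)

upload_1 = fingerprint({"+1 403 555 0100", "+1 403 555 0101", "+1 587 555 0199"})
upload_2 = fingerprint({"+14035550100", "+14035550101", "+1 587 555 0199"})

# A high overlap suggests the same person, or two closely connected
# people, even if neither upload carries a name.
print(overlap(upload_1, upload_2))  # 1.0
```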

I have previously argued that permission to access contacts is conceptually being presented to the wrong person — it ought to, in theory, be required by the people in your contacts instead. Obviously that would be a terrible idea in practice. Yet each of us has only given our contact information to a person; we may not expect them to share it more widely.

As in so many other cases, the answer here is found in comprehensive privacy legislation. You should not have to worry that your phone number in a contact list or used for two-factor authentication is going to determine your place in the global social graph. You should not have to be concerned that sharing your own contact list in a third-party app will expose connections or send an unintended text to someone you have not spoken with in a decade. Data collected for a purpose should only be used for that purpose; violating that trust should come with immediate penalties, not piecemeal class action settlements and FTC cases.

Apple’s solution is imperfect. But if it stops the Biers of the world from building apps which ingest wholesale the contact lists of teenagers, I find it difficult to object.


  1. Remember when Clubhouse was the next big thing, and going to provide serious competition to incumbent giants? ↥︎

Lawrence Abrams, Bleeping Computer:

Almost 2.7 billion records of personal information for people in the United States were leaked on a hacking forum, exposing names, social security numbers, all known physical addresses, and possible aliases.

The data allegedly comes from National Public Data, a company that collects and sells access to personal data for use in background checks, to obtain criminal records, and for private investigators.

National Public Data is believed to scrape this information from public sources to compile individual user profiles for people in the US and other countries.

Troy Hunt, creator of Have I Been Pwned?:

So, this data appeared in limited circulation as early as 3 months ago. It contains a huge amount of personal information (even if it isn’t “2.9B people”), and then to make matters worse, it was posted publicly last week:

[…]

[…] Instead, we’re left with 134M email addresses in public circulation and no clear origin or accountability. […]

Connor Jones, the Register:

The data broker at the center of what may become one of the more significant breaches of the year is telling officials that just 1.3 million people were affected.

Jones got this number from a report National Public Data was required to file with the Maine attorney general which, for whatever reason, is not embedded or linked to in this story — here it is. My bet is National Public Data is bad at filing breach notifications. It says, for example, the breach was discovered “December 30, 2023”, the same day on which it occurred. Yet in the notice it is mailing to affected Maine residents, it says there were “potential leaks of certain data in April 2024 and summer 2024”, which would be difficult to know in December 2023.

Brian Krebs:

New details are emerging about a breach at National Public Data (NPD), a consumer data broker that recently spilled hundreds of millions of Americans’ Social Security Numbers, addresses, and phone numbers online. KrebsOnSecurity has learned that another NPD data broker which shares access to the same consumer records inadvertently published the passwords to its back-end database in a file that was freely available from its homepage until today.

This is not the first time a huge amount of compromised data has been traced back to some legitimate but nevertheless scummy broker. There was Exactis with 340 million records, People Data Labs with 622 million, and Apollo with around 200 million. The only reason most of us have heard of these businesses is because they hoard our information and — critically — do not protect it. These giant brokers evidently do not care about basic data privacy practices and should not be allowed to operate, and their executives should be held responsible for their failure.

Saba Aziz, Global News:

Ticketmaster has finally notified its users who may have been impacted by a data breach — one month after Global News first reported that the personal information of Canadian customers was likely stolen.

In an email to its customers on Monday, Ticketmaster said that their personal information may have been obtained by an unauthorized third party from a cloud database that was hosted by a separate third-party data services provider.

Ticketmaster says this might include “encrypted credit card information” from “some customers”.

Jason Koebler, 404 Media:

Monday, the hacking group that breached Ticketmaster released new data that they said can be used to create more than 38,000 concert tickets nationwide, including to sought after shows like Olivia Rodrigo, Bruce Springsteen, Hamilton, Tyler Childers, the Jonas Brothers, and Los Angeles Dodgers games. The data would allow someone to create and print a ticket that was already sold to someone else, creating a situation where Ticketmaster and venues might have to sort out which tickets are from legitimate buyers and which are the result of the hack for shows that are taking place as early as today.

These are arguably problems created by the scale and scope of Ticketmaster’s operations. This series of data releases affects so many people and events because parent company Live Nation is a chokepoint for entertainment thanks to a merger approved by U.S. authorities. If this industry were more distributed, it would certainly present more opportunities for individual breaches, but the effect of each would be far smaller.

Finally. The government of the United States finally passed a law that would allow it to force the sale of, or ban, software and websites from specific countries of concern. The target is obviously TikTok — it says so right in its text — but crafty lawmakers have tried to add enough caveats and clauses and qualifiers to, they hope, avoid it being characterized as a bill of attainder, and to permit future uses. This law is very bad. It is an ineffective and illiberal position that abandons democratic values over, effectively, a single app. Unfortunately, TikTok panic is a very popular position in the U.S. and, also, here in Canada.

The adversaries the U.S. is worried about are the “covered nations” defined in 2018 to restrict the acquisition by the U.S. of key military materials from four countries: China, Iran, North Korea, and Russia. The idea behind this definition was that it was too risky to procure magnets and other important components of, say, missiles and drones from a nation the U.S. considers an enemy, lest those parts be compromised in some way. So the U.S. wrote down its least favourite countries for military purposes, and that list is now being used in a bill intended to limit TikTok’s influence.

According to the law, it is illegal for any U.S. company to make available TikTok and any other ByteDance-owned app — or any app or website deemed a “foreign adversary controlled application” — to a user in the U.S. after about a year unless it is sold to a company outside the covered countries, and with an ownership stake less than twenty percent from any combination of entities in those four named countries. Theoretically, the parent company could be based nearly anywhere in the world; practically, if there is a buyer, it will likely be from the U.S. because of TikTok’s size. Also, the law specifically exempts e-commerce apps for some reason.

This could be interpreted as either creating an isolated version specifically for U.S. users or, as I read it, moving the global TikTok platform to a separate organization not connected to ByteDance or China. ByteDance’s ownership is messy, though mostly U.S.-based, but politicians worried about its Chinese origin have had enough, to the point they are acting with uncharacteristic vigour. The logic seems to be that it is necessary for the U.S. government to influence and restrict speech in order to prevent other countries from influencing or restricting speech in ways the U.S. thinks are harmful. That is, the problem is not so much that TikTok is foreign-owned, but that it has ownership ties to a country often antithetical to U.S. interests. TikTok’s popularity might, it would seem, be bad for reasons of espionage or influence — or both.

Power

So far, I have focused on the U.S. because it is the country that has taken the first step to require non-Chinese control over TikTok — at least for U.S. users but, due to the scale of its influence, possibly worldwide. It could force a business to entirely change its ownership structure. So it may look funny for a Canadian to explain their views of what the U.S. ought to do in a case of foreign political interference. This is a matter of relevance in Canada as well. Our federal government raised the alarm on “hostile state-sponsored or influenced actors” influencing Canadian media and said it had ordered a security review of TikTok. There was recently a lengthy public inquiry into interference in Canadian elections, with a special focus on China, Russia, and India. Clearly, the popularity of a Chinese application is, in the eyes of these officials, a threat.

Yet it is very hard not to see the rush to kneecap TikTok’s success as a protectionist reaction to its shaking of U.S. dominance in consumer technologies, as convincingly expressed by Paris Marx at Disconnect:

In Western discourses, China’s internet policies are often positioned solely as attempts to limit the freedoms of Chinese people — and that can be part of the motivation — but it’s a politically convenient explanation for Western governments that ignores the more important economic dimension of its protectionist approach. Chinese tech is the main competitor to Silicon Valley’s dominance today because China limited the ability of US tech to take over the Chinese market, similar to how Japan and South Korea protected their automotive and electronics industries in the decades after World War II. That gave domestic firms the time they needed to develop into rivals that could compete not just within China, but internationally as well. And that’s exactly why the United States is so focused not just on China’s rising power, but how its tech companies are cutting into the global market share of US tech giants.

This seems like one reason why the U.S. has so aggressively pursued a divestment or ban since TikTok’s explosive growth in 2019 and 2020. On its face it is similar to some reasons why the E.U. has regulated U.S. businesses that have, it argues, disadvantaged European competitors, and why Canadian officials have tried to boost local publications that have seen their ad revenue captured by U.S. firms. Some lawmakers make it easy to argue it is a purely xenophobic reaction, like Senator Tom Cotton, who spent an exhausting minute questioning TikTok’s Singaporean CEO Shou Zi Chew about where he is really from. But I do not think it is entirely a protectionist racket.

A mistake I have made in the past — and which I have seen some continue to make — is assuming those who are in favour of legislating against TikTok are opposed on principle to the kinds of dirty tricks it is accused of. This is false. Many of these same people would be all too happy to allow U.S. tech companies to do exactly the same. I think the most generous version of this argument is one in which it is framed as a shared anxiety among the U.S. and its democratic allies about the government of China — ByteDance is necessarily connected to the autocratic state — spreading messaging that does not align with democratic government interests. This is why you see few attempts to reconcile common objections over TikTok with the quite similar behaviours of U.S. corporations, government arms, and intelligence agencies. To wit: U.S.-based social networks also suggest posts with opaque math which could, by the same logic, influence elections in other countries. They also collect enormous amounts of personal data that is routinely wiretapped, and are required to secretly cooperate with intelligence agencies. The U.S. is not authoritarian as China is, but the behaviours in question are not unique to authoritarians. Those specific actions are unfortunately not what the U.S. government is objecting to. What it is disputing, in a most generous reading, is a specifically antidemocratic government gaining any kind of influence.

Espionage and Influence

It is easiest to start by dismissing the espionage concerns because they are mostly misguided. The peek into Americans’ lives offered by TikTok is no greater than that offered by countless ad networks and data brokers — something the U.S. is also trying to restrict more effectively through a comprehensive federal privacy law. So long as online advertising is dominated by a privacy-hostile infrastructure, adversaries will be able to take advantage of it. If the goal is to restrict opportunities for spying on people, it is idiotic to pass legislation against TikTok specifically instead of limiting the data industry.

But the charge of influence seems to have more to it, even though nobody has yet shown that TikTok is warping users’ minds in a (presumably) pro-China direction. Some U.S. lawmakers described its danger as “theoretical”; others seem positively terrified. There are a few different levels to this concern: are TikTok users uniquely subjected to Chinese government propaganda? Is TikTok moderated in a way that boosts or buries videos to align with Chinese government views? Finally, even if both of these things are true, should the U.S. be able to revoke access to software if it promotes ideologies or viewpoints — and perhaps explicit propaganda? As we will see, it looks like TikTok sometimes tilts in ways beneficial — or, at least, less damaging — to Chinese government interests, but there is no evidence of overt government manipulation and, even if there were, it is objectionable to require it to be owned by a different company or ban it.

The main culprit, it seems, is TikTok’s “uncannily good” For You feed that feels as though it “reads your mind”. Instead of users telling TikTok what they want to see, it just begins showing videos and, as people use the app, it figures out what they are interested in. How it does this is not actually that mysterious. A 2021 Wall Street Journal investigation found recommendations were made mostly based on how long you spent watching each video. Deliberate actions — like sharing and liking — play a role, sure, but if you scroll past videos of people and spend more time with a video of a dog, it learns you want dog videos.
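That mechanism is simple enough to caricature. A toy dwell-time recommender, which is an illustration and not TikTok’s actual system, ends up steering hard toward whatever a user lingers on, without a single like or share:

```python
import random
from collections import defaultdict

# Toy dwell-time recommender illustrating the Journal's finding; this
# is a caricature, not TikTok's actual system.
videos = [("dog", f"dog_{i}") for i in range(50)] + \
         [("news", f"news_{i}") for i in range(50)]

watch_seconds = defaultdict(lambda: 1.0)  # accumulated per topic

def next_video():
    # Sample a topic in proportion to accumulated watch time, then pick
    # any video from that topic.
    topics = sorted({t for t, _ in videos})
    topic = random.choices(topics, weights=[watch_seconds[t] for t in topics])[0]
    return topic, random.choice([v for t, v in videos if t == topic])

for _ in range(200):
    topic, video = next_video()
    # This simulated user lingers on dog videos and skips news quickly.
    watch_seconds[topic] += 12.0 if topic == "dog" else 1.5

print({t: round(s) for t, s in watch_seconds.items()})
# After a couple hundred videos, "dog" dominates with no likes needed.
```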

That is not so controversial compared to the opacity in how TikTok decides what specific videos are displayed and which ones are not. Why is this particular dog video in a user’s feed and not another similar one? Why is it promoting videos reflecting a particular political viewpoint or — so a popular narrative goes — burying those with viewpoints uncomfortable for its Chinese parent company? The mysterious nature of an algorithmic feed is the kind of thing into which you can read a story of your choosing. A whole bunch of X users are permanently convinced they are being “shadow banned” whenever a particular tweet does not get as many likes and retweets as they believe it deserved, for example, and were salivating at the thought of the company releasing its ranking code to solve a nonexistent mystery. There is a whole industry of people who say they can get your website to Google’s first page for a wide range of queries using techniques that are a mix of plausible and utterly ridiculous. Opaque algorithms make people believe in magic. An alarmist reaction to TikTok’s feed should be expected particularly as it was the first popular app designed around entirely recommended material instead of personal or professional connections. This has now been widely copied.

The mystery of that feed is a discussion which seems to have been ongoing basically since the 2018 merger of Musical.ly and TikTok, escalating rapidly to calls for it to be separated from its Chinese owner or banned altogether. In 2020, the White House attempted to force a sale by executive order. In response, TikTok created a plan to spin off an independent entity, but nothing materialized from this tense period.

March 2023 brought a renewed effort to divest or ban the platform. Chew, TikTok’s CEO, was called to a U.S. Congressional hearing and questioned for hours, to little effect. During that hearing, a report prepared for the Australian government was cited by some of the lawmakers, and I think it is a telling document. It is about eighty pages long — excluding its table of contents, appendices, and citations — and shows several examples of Chinese government influence on other products made by ByteDance. However, the authors found no such manipulation on TikTok itself, leading them to conclude:

In our view, ByteDance has demonstrated sufficient capability, intent, and precedent in promoting Party propaganda on its Chinese platforms to generate material risk that they could do the same on TikTok.

“They could do the same”, emphasis mine. In other words, if they had found TikTok was boosting topics and videos on behalf of the Chinese government, they would have said so — so they did not. The closest thing I could find to a covert propaganda campaign on TikTok anywhere in this report is this:

The company [ByteDance] tried to do the same on TikTok, too: In June 2022, Bloomberg reported that a Chinese government entity responsible for public relations attempted to open a stealth account on TikTok targeting Western audiences with propaganda”. [sic]

If we follow the Bloomberg citation — shown in the report as a link to the mysterious Archive.today site — the fuller context of the article by Olivia Solon disproves the impression you might get from reading the report:

In an April 2020 message addressed to Elizabeth Kanter, TikTok’s head of government relations for the UK, Ireland, Netherlands and Israel, a colleague flagged a “Chinese government entity that’s interested in joining TikTok but would not want to be openly seen as a government account as the main purpose is for promoting content that showcase the best side of China (some sort of propaganda).”

The messages indicate that some of ByteDance’s most senior government relations team, including Kanter and US-based Erich Andersen, Global Head of Corporate Affairs and General Counsel, discussed the matter internally but pushed back on the request, which they described as “sensitive.” TikTok used the incident to spark an internal discussion about other sensitive requests, the messages state.

This is the opposite conclusion to how this story was set up in the report. Chinese government public relations wanted to set up a TikTok account without any visible state connection and, when TikTok management found out about this, it said no. This Bloomberg article makes TikTok look good in the face of government pressure, not like it capitulates. Yes, it is worth being skeptical of this reporting. Yet if TikTok acquiesced to the government’s demands, surely the report would provide some evidence.

While this report for the Australian Senate does not show direct platform manipulation, it does present plenty of examples where it seems like TikTok may be biased or self-censoring. Its authors cite stories from the Washington Post and Vice finding that searches for hashtags like #HongKong and #FreeXinjiang returned results favourable to the official Chinese government position. Sometimes, related posts did not appear in search results, which is not unique to TikTok — platforms regularly use crude search term filtering to restrict discovery for lots of reasons. I would not be surprised if there were bias or self-censorship to blame for TikTok minimizing the visibility of posts critical of the subjugation of Uyghurs in China. However, it is basically routine for every social media product to be accused of suppression. The Markup found different types of posts on Instagram, for example, had captions altered or would no longer appear in search results, though it is unclear to anyone why that is the case. Meta said it was a bug, an explanation also offered frequently by TikTok.

The authors of the Australian report conducted a limited quasi-study comparing results for certain topics on TikTok to results on other social networks like Instagram and YouTube, again finding a handful of topics which favoured the government line. But there was no consistent pattern, either. Search results for “China military” on Instagram were, according to the authors, “generally flattering”, and X searches for “PLA” scarcely returned unfavourable posts. Yet results on TikTok for “China human rights”, “Tianamen”, and “Uyghur” were overwhelmingly critical of Chinese official positions.

The Network Contagion Research Institute published its own report in December 2023, similarly finding disparities between the total number of posts with specific hashtags — like #DalaiLama and #TiananmenSquare — on TikTok and Instagram. However, the study contained some pretty fundamental errors, as pointed out by — and I cannot believe I am citing these losers — the Cato Institute. The study’s authors compared total lifetime posts on each social network and, while they say they expect 1.5–2.0× the posts on Instagram because of its larger user base, they do not factor in how many of those posts could have existed before TikTok was even launched. Furthermore, they assume similar cultures and a similar use of hashtags on each app. But even benign hashtags have ridiculous differences in how often they are used on each platform. There are, as of writing, 55.3 million posts tagged “#ThrowbackThursday” on Instagram compared to 390,000 on TikTok, a ratio of 141:1. If #ThrowbackThursday were part of this study, the disparity on the two platforms would rank similarly to #Tiananmen, one of the greatest in the Institute’s report.
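The baseline problem is easy to state with the figures above: raw lifetime counts reflect each platform’s age and hashtag culture, so even a benign tag produces a dramatic ratio.

```python
# Using the figures above: a benign hashtag shows a disparity as large
# as the study's most alarming examples, which is the baseline problem.
instagram_posts = {"ThrowbackThursday": 55_300_000}
tiktok_posts = {"ThrowbackThursday": 390_000}

for tag in instagram_posts:
    ratio = instagram_posts[tag] / tiktok_posts[tag]
    print(f"#{tag}: {int(ratio)}:1")  # 141:1, as noted above
```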

The problem with most of these complaints, as their authors acknowledge, is that there is a known input and a perceived output, but there are oh-so-many unknown variables in the middle. It is impossible to know how much of what we see is a product of intentional censorship, unintentional biases, bugs, side effects of other decisions, or a desire to cultivate a less stressful and more saccharine environment for users. A report by Exovera (PDF) prepared for the U.S.–China Economic and Security Review Commission indicates exactly the latter: “TikTok’s current content moderation strategy […] adheres to a strategy of ‘depoliticization’ (去政治化) and ‘localization’ (本土化) that seeks to downplay politically controversial speech and demobilize populist sentiment”, apparently avoiding “algorithmic optimization in order to promote content that evangelizes China’s culture as well as its economic and political systems” which “is liable to result in backlash”. Meta, on its own platforms, said it would not generally suggest “political” posts to users, but did not define exactly what qualifies. It said it was limiting posts on social issues in response to user demand, though these types of posts have also been difficult to moderate. A difference in which posts are found on each platform for specific search terms is not necessarily reflective of government pressure, deliberate or not. Besides, it is not as though there is no evidence for straightforward propaganda on TikTok. One just needs to look elsewhere to find it.

Propaganda

The Office of the Director of National Intelligence recently released its annual threat assessment summary (PDF). It is unclassified and has few details, so the only thing it notes about TikTok is that “accounts run by a PRC propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022”. It seems likely to me this is a reference to this article in Forbes, though that is a guess, as there are no citations. The state-affiliated TikTok account in question — since made private — posted a bunch of news clips which portray the U.S. in an unflattering light. There is a related account, also marked as state-affiliated, which continues to post the same kinds of videos. It has over 33,000 followers, which sounds like a lot, but each post typically gets only a few hundred views. Some have been viewed thousands of times, others as few as thirteen times as of writing — on a platform with exaggerated engagement numbers. Nonetheless, the conclusion is obvious: these accounts are government propaganda, and TikTok willingly hosts them.

But that is something it has in common with all social media platforms. The Russian RT News network and China’s People’s Daily newspaper have X and Facebook accounts with follower counts in the millions. Until recently, the North Korean newspaper Uriminzokkiri operated accounts on Instagram and X. It and other North Korean state-controlled media used to have YouTube channels, too, but they were shut down by YouTube in 2017 — a move that was protested by academics studying the regime’s activities. That is the irony of U.S.-based platforms helping to disseminate propaganda from the country’s adversaries: access to it can be useful for understanding those adversaries better. Making propaganda available, and even promoting it, is both a risk and a benefit of generous speech permissions.

The DNI’s unclassified report has no details about whether TikTok is an actual threat, and the FBI has “nothing to add” in response to questions about whether TikTok is currently doing anything untoward. More sensitive information was apparently provided to U.S. lawmakers ahead of their March vote and, though few details of what, exactly, was said have emerged, several lawmakers were not persuaded by what they heard, including Rep. Sara Jacobs of California:

As a member of both the House Armed Services and House Foreign Affairs Committees, I am keenly aware of the threat that PRC information operations can pose, especially as they relate to our elections. However, after reviewing the intelligence, I do not believe that this bill is the answer to those threats. […] Instead, we need comprehensive data privacy legislation, alongside thoughtful guardrails for social media platforms – whether those platforms are funded by companies in the PRC, Russia, Saudi Arabia, or the United States.

Lawmakers like Rep. Jacobs were an exception among U.S. Congresspersons who, across party lines, were eager to make the case against TikTok. Ultimately, the divest-or-ban bill got wrapped up in a massive and politically popular spending package agreed to by both chambers of Congress. Its passage was enthusiastically received by the White House and it was signed into law within hours. Perhaps that outcome is the democratic one since polls so often find people in the U.S. support a sale or ban of TikTok.

I get it: TikTok scoops up private data, suggests posts based on opaque criteria, its moderation appears to be susceptible to biases, and it is a vehicle for propaganda. But you could replace “TikTok” in that sentence with any other mainstream social network and it would be just as true, albeit less scary on its face to the U.S. and its allies.

A Principled Objection

Forcing TikTok to change its ownership structure, whether worldwide or only for a U.S. audience, is a betrayal of liberal democratic principles. To borrow from Jon Stewart, “if you don’t stick to your values when they’re being tested, they’re not values, they’re hobbies”. It is not surprising that a Canadian intelligence analysis specifically pointed out how those very same values are being taken advantage of by bad actors. This is not new. It is true of basically all positions hostile to democracy — from domestic nationalist groups in Canada and the U.S., to those which originate elsewhere.

Julian G. Ku, for China File, offered a seemingly reasonable rebuttal to this line of thinking:

This argument, while superficially appealing, is wrong. For well over one hundred years, U.S. law has blocked foreign (not just Chinese) control of certain crucial U.S. electronic media. The Protect Act [sic] fits comfortably within this long tradition.

Yet this counterargument falls apart both in its details and in its broader consequences. As Martin Peers writes at the Information, the U.S. does not prohibit all foreign ownership of media. And governing the internet like public airwaves gets far more complicated if you stretch it any further. Canada has broadcasting laws, too, and it is not alone. Should every country begin requiring that social media platforms comply with laws designed for ownership of broadcast media? Does TikTok need disconnected local versions of its product in each country in which it operates? It either fundamentally upsets the promise of the internet, or it mandates the use of protocols instead of platforms.

It also looks hypocritical. Countries with a more authoritarian bent, which openly censor the web, have responded to even modest U.S. speech rules with mockery. When RT America — technically a U.S. company with Russian funding — was required to register as a foreign agent, its editor-in-chief sarcastically applauded U.S. free speech standards. The response from Chinese government officials and media outlets to the proposed TikTok ban has been similarly scoffing. Perhaps U.S. lawmakers are unconcerned about how their policies are received by adversarial states, but that reception indicates how these policies are portrayed in those countries: a real-life “we are not so different, you and I” setup which, while falsely equivalent, makes it easy for authoritarian states to claim that democracies have no values and cannot work. Unless we want to contribute to the fracturing of the internet — please, no — we cannot govern social media platforms by mirroring policies we ostensibly find repellant.

The way the government of China seeks to shape the global narrative is understandably concerning given its poor track record on speech freedoms. An October 2023 U.S. State Department “special report” (PDF) explored several instances where China boosted favourable narratives, buried critical ones, and pressured other countries — sometimes overtly, sometimes quietly. The government of China and associated businesses reportedly use social media to create the impression of dissent toward human rights NGOs, and apparently everything from university funding to new construction is a vector for espionage. On the other hand, China is terribly ineffective in its disinformation campaigns, and many of the cases profiled in that State Department report end in failure for the Chinese government initiative. In Nigeria, a pitch for a technologically oppressive “safe city” was rejected; an interview published in the Jerusalem Post with Taiwan’s foreign minister was not pulled down despite threats from China’s embassy in Israel. The report’s authors speculate about “opportunities for PRC global censorship”. But their only evidence is a “list [maintained by ByteDance] identifying people who were likely blocked or restricted” from using the company’s many platforms, and even its purpose is a matter of speculation.

The problem is that trying to address this requires better media literacy and better recognition of propaganda. That is a notoriously daunting problem. We are exposed to a more destabilizing cocktail of facts and fiction, but there is declining trust in experts and institutions to help us sort it out. Trying to address TikTok as a symptomatic or even causal component of this is frustratingly myopic. This stuff is everywhere.

Also everywhere is corporate propaganda arguing regulations would impede competition in a global business race. I hate to be mean by picking on anyone in particular, but a post from Om Malik has shades of this corporate slant. Malik is generally very good on the issues I care about, but this is not one we appear to agree on. After a seemingly impressed observation of how quickly Chinese officials were able to eject popular messaging apps from the App Store in the country, Malik compares the posture of each country’s tech industries:

As an aside, while China considers all its tech companies (like Bytedance) as part of its national strategic infrastructure, the United States (and its allies) think of Apple and other technology companies as public enemies.

This is laughable. Presumably, Malik is referring to the chillier reception these companies have faced from lawmakers, and antitrust cases against Amazon, Apple, Google, and Meta. But that tougher impression is softened by the U.S. government’s actual behaviour. When the E.U. announced the Digital Markets Act and Digital Services Act, U.S. officials sprang to the defence of tech companies. Even before these cases, Uber expanded in Europe thanks in part to its close relationship with Obama administration officials, as Marx pointed out. The U.S. unquestionably sees its tech industry dominance as a projection of its power around the world, hardly treating those companies as “public enemies”.

Far more explicit were the narratives peddled by lobbyists: from Targeted Victory in 2022, about TikTok’s dangers; and from American Edge, beginning in 2020, about how regulations would make the U.S. uncompetitive with China and allow TikTok to win. Both organizations were paid by Meta to spread those messages; the latter was reportedly founded after a single large contribution from Meta. Restrictions on TikTok would obviously benefit Meta’s business.

If you wanted to boost the industry — and I am not saying Malik does — that is how you would describe the situation: the U.S. is fighting corporations instead of treating them as pals in order to win this supposed race. It is not the kind of framing one would use to dissuade people from the notion that this is a protectionist dispute over the popularity of TikTok. But it is the kind of thing you hear from corporations via their public relations staff and lobbyists, and it trickles into public conversation.

This Is Not a TikTok Problem

TikTok’s divestment would not be unprecedented. The Committee on Foreign Investment in the United States — henceforth, CFIUS, pronounced “siff-ee-us” — demanded, after a 2019 review, that Beijing Kunlun Tech Co Ltd sell Grindr. CFIUS concluded the risk to users’ private data was too great for Chinese ownership given Grindr’s often stigmatized and ostracized user base. After its sale, with Grindr now safe in U.S. hands, a priest was outed thanks to data the app had been selling since before it was acquired by the Chinese firm, and it is being sued for allegedly sharing users’ HIV status with third parties. Also, because it transacts with data brokers, it potentially still leaks users’ private information to Chinese companies (PDF), undermining the very concern that triggered this divestment.

Perhaps there is comfort in Grindr’s owner residing in a country where same-sex marriage is legal rather than in one where it is not. I think that makes a lot of sense, actually. But there remain plenty of problems unaddressed by its sale to a U.S. entity.

Similarly, this U.S. TikTok law does not actually address potential espionage or influence, for a few reasons. The first is that it has not been established that either is an actual problem with TikTok. Surely, if this were something we ought to be concerned about, there would be a pattern of evidence, instead of what we actually have: a fear that something bad could happen and that there would be no way to stop it. But many things could happen. I am not opposed to prophylactic laws so long as they address reasonable objections. Yet it is hard not to see this law as an outgrowth of Cold War fears over leaflets of communist rhetoric. It seems completely reasonable to be less concerned about TikTok specifically while harbouring worries about democratic backsliding worldwide and the growing power of authoritarian states like China in international relations.

Second, the Chinese government does not need local ownership if it wants to exert pressure. The world wants the country’s labour and it wants its spending power, so businesses comply without a fight, and often preemptively. Hollywood films are routinely changed before, during, and after production to fit the expectations of state censors in China, a pattern which has been pointed out using the same “Red Dawn” anecdote in story after story after story. (Abram Dylan contrasted this phenomenon with U.S. military cooperation.) Apple is only too happy to acquiesce to the government’s many demands — see the messaging apps issue mentioned earlier — including, reportedly, in its media programming. Microsoft continues to operate Bing in China, and its censorship requirements have occasionally spilled elsewhere. Economic leverage over TikTok may seem different because it does not need access to the Chinese market — TikTok is banned in the country — but perhaps a new owner would be reliant upon China.

Third, the law permits an ownership stake no greater than twenty percent from a combination of any of the “covered nations”. I would be shocked if everyone who is alarmed by TikTok today would be totally cool if its parent company were only, say, nineteen percent owned by a Chinese firm.

If we are worried about bias in algorithmically sorted feeds, there should be transparency around how things are sorted, and more controls for users, including the ability to opt out entirely. If we are worried about privacy, there should be laws governing the collection, storage, use, and sharing of personal information. If ownership ties to certain countries are concerning, there are more direct actions available to monitor behaviour. I am mystified why CFIUS and TikTok apparently abandoned (PDF) a draft agreement that would have given U.S.-based entities full access to the company’s systems, software, and staff, and would have allowed the government to end U.S. access to TikTok at a moment’s notice.

Any of these options would be more productive than this legislation. It is a law which empowers the U.S. president — whoever that may be — to declare the owner of an app with a million users a “covered company” if it is from one of those four nations. And it has been passed. TikTok will head to court to dispute it on free speech grounds, and the U.S. may respond by justifying its national security concerns.

Obviously, the U.S. government has concerns about the connections between TikTok, ByteDance, and the government of China, which have been extensively reported. Rest of World says ByteDance put pressure on TikTok to improve its financial performance and has taken greater control by bringing in management from Douyin. The Wall Street Journal says U.S. user data is not fully separated. And, of course, Emily Baker-White has reported — first for Buzzfeed News and now for Forbes — a litany of stories about TikTok’s many troubling behaviours, including spying on her. TikTok is a well scrutinized app, and reporters have found conduct that has understandably raised suspicions. But virtually all of these stories focus on data obtained from users, which Chinese agencies could do — and probably are doing — without relying on TikTok. None of them has shown evidence that TikTok’s suggestions are being manipulated at the behest or demand of Chinese officials. The closest they get is an article from Baker-White and Iain Martin alleging TikTok “served up a flood of ads from Chinese state propaganda outlets”, which waits until the third-to-last paragraph to acknowledge that “Meta and Google ad libraries show that both platforms continue to promote pro-China narratives through advertising”. All three platforms label state-run media outlets, albeit inconsistently. Meanwhile, U.S.-owned X no longer labels any outlets with state editorial control. It is not clear to me that TikTok would necessarily operate in the best interests of the U.S. even if it were owned by some well-financed individual or corporation based there.

For whatever it is worth, I am not particularly tied to the idea that the government of China would not use TikTok as a vehicle for influence. The government of China is clearly involved in propaganda efforts both overt and covert. I do not know how much of my concerns are a product of living somewhere with a government and a media environment that focuses intently on the country as particularly hostile, and not necessarily undeservedly. The best version of this argument is one which questions the platform’s possible anti-democratic influence. Yes, there are many versions of this which cross into moral panic territory — a new Red Scare. I have tried to put this in terms of a more reasonable discussion, and one which is not explicitly xenophobic or envious. But even this more even-handed position is not well served by the law passed in the U.S., one which was passed without evidence of influence much more substantial than some choice hashtag searches. TikTok’s response to these findings was, among other things, to limit its hashtag comparison tool, which is not a good look. (Meta is doing basically the same by shutting down CrowdTangle.)

I hope this is not the beginning of similar isolationist policies among democracies worldwide, and that my own government takes this opportunity to recognize the actual privacy and security threats at the heart of its own TikTok investigation. Unfortunately, the head of CSIS is really leaning on the espionage angle. For years, the Canadian government has been pitching sorely needed updates to privacy legislation, and it would be better to see real progress made to protect our private data. We can do better than being a perpetual recipient of decisions made by other governments. I mean, we cannot do much — we do not have the power of the U.S. or China or the E.U. — but we can do a little bit in our own polite Canadian way. If we are worried about the influence of these platforms, a good first step would be to strengthen the rights of users. We can do that without trying to govern apps individually, or treating the internet like we treat broadcasting.

To put it more bluntly, the way we deal with a possible TikTok problem is by recognizing it is not a TikTok problem. If we care about espionage or foreign influence in elections, we should address those concerns directly instead of focusing on a single app or company that — at worst — may be a medium for those anxieties. These are important problems, and it would be inexcusable to let them get lost in the distraction of deciding whether TikTok is individually blameworthy.


  1. Because this piece has taken me so long to write, a whole bunch of great analyses have been published about this law. I thought the discussion on “Decoder” was a good overview, especially since two of the three panelists are former lawyers. ↥︎

John Gruber, in 2020:

Just because there is now a multi-billion-dollar industry based on the abject betrayal of our privacy doesn’t mean the sociopaths who built it have any right whatsoever to continue getting away with it. They talk in circles but their argument boils down to entitlement: they think our privacy is theirs for the taking because they’ve been getting away with taking it without our knowledge, and it is valuable. No action Apple can take against the tracking industry is too strong.

Ian Betteridge contrasted this view against one of Gruber’s recent articles, in which his stance appears to have softened on the seriousness of tracking:

I wonder what happened to turn John’s attitude from “no action Apple can take against the tracking industry is too strong” to defending Facebook’s “right” to choose how it invades people’s privacy? Or is he suggesting that a private company is entitled to defend people’s privacy, but governments are not?

To put it another way, should people have an expectation of how private information is used and collected, or should that be wildly different depending on which companies they interact with? Is the status quo of handling private data in the U.S. the optimal legal balance?

John Gruber, responding:

I’ve seen a bit of pushback along this line recently, more or less asking: How come I was against Meta’s tracking but now seem for it? I don’t see any contradiction or change in my position though. The only thing I’d change in the 2020 piece Betteridge quotes is this sentence, which Betteridge emphasizes: “No action Apple can take against the tracking industry is too strong.” I should have inserted an adjective before “tracking” — it’s non-consensual tracking I object to, especially tracking that’s downright surreptitious. Not tracking in and of itself.

Given my review of Byron Tau’s new book, you might expect me to wholly disagree with the idea that anyone can provide consent. I do not — in theory. But in practice and in most circumstances right now, it probably is impossible for users to provide meaningful consent to all of the digital products and services they use.

Consider what full informed consent looks like for Facebook — and just Facebook. One would need to indicate they have read and understood each section of its simplified privacy policy, not just tick the blanket “I Agree” box or permit it using the App Tracking Transparency dialog. Facebook should show exactly what it is collecting and how it is using this information. Every time a policy changes, Facebook should get an affirmative agreement from each user, too; none of this “by continuing to use the product, you indicate your agreement” nonsense.

And this is just Facebook. Imagine that across all your accounts everywhere. We have a taste of that on the web with cookie consent panels, and on iOS with the myriad dialogs thrown by app features like accessing your contacts. A typical camera app will likely ask you for four different permissions out of the gate: camera, microphone, photo library, and location access. Adding yet more consents and dialog boxes is hardly an effective solution.
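To see why, consider the bookkeeping alone. Below is a minimal sketch, in Python, of a per-section, per-version consent ledger of the sort the paragraphs above imply; every field and function name is hypothetical, and nothing here reflects how Facebook or Apple actually store consent.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical model of affirmative, per-revision consent. Nothing here
# reflects a real platform's implementation.
@dataclass(frozen=True)
class ConsentRecord:
    user_id: str
    service: str         # e.g. "facebook"
    policy_section: str  # each section requires its own agreement
    policy_version: str  # re-consent is required after every revision
    granted_at: datetime

ledger: list[ConsentRecord] = []

def has_valid_consent(user_id: str, service: str,
                      section: str, current_version: str) -> bool:
    """Consent only counts for the exact policy version the user agreed to."""
    return any(
        record.user_id == user_id and record.service == service
        and record.policy_section == section
        and record.policy_version == current_version
        for record in ledger
    )

# Every policy revision invalidates prior records and forces a new prompt.
# Multiply that by every section, every service, and every user.
```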

Meta is probably one of the more agreeable players in this racket, too. It hoards data; it does not share much of it. And it has a brand to protect. Data brokers are far worse because nobody knows who they are or what they collect, share, and merge. Scale the informed consent model above across all data brokers you interact with, in each app or website you use. As an example, Het Laatste Nieuws, a popular Dutch-language news site in Belgium, shows in its cookie consent dialog it has over one hundred advertising partners, among the lowest numbers I have seen. (For comparison, Le Monde has over five hundred.) True consent requires you to understand those privacy policies, too. What does Nexxen collect? Which other websites, apps, or products do you use which also partner with Nexxen? Can you find Nexxen in HLN’s partner list? (Probably not — the privacy policies for the first three advertisers I was going to use as an example in that sentence returned 404 errors, and I only found Nexxen because I clicked on the policy for Unruly, which rebranded last year.)

This is a mess from the perspective of users and site operators. A core principle of informed consent is an understanding of risk. Are people presented with adequate information about the risks of accepting tracking? No, not really. Meanwhile, website owners do not want to interrupt visitors with cookie consent forms; they want to interrupt them with email newsletter sign-up forms. Nobody wants to manage a vast database of specific consent agreements.

Gruber is reacting to a draft decision (PDF) by the European Data Protection Board — specifically:

It has to be concluded that, in most cases, it will not be possible for large online platforms to comply with the requirements for valid consent if they confront users only with a binary choice between consenting to processing of personal data for behavioural advertising purposes and paying a fee.

The EDPB’s justification for this is based largely on arguments similar to those I have made above, though it limits the scope of this decision to platforms of gatekeeper scale, for much the same interconnected rationales it has used to define those platforms’ unique responsibilities. Interestingly, the EDPB says the mere existence of a fee is enough to call into question whether there is a truly free choice, even when a no-cost option is also available. It seems to want a third way: no behaviourally informed advertising, at no financial cost to users.

I am not sure there is a good reason to limit these restrictions on behavioural advertising to gatekeepers. There need to be stricter controls around tracking so that users may give informed consent, regardless of whether the tracker is a corporate behemoth, a news publisher, or a weather app. If we want informed consent, we should have it; the status quo is a poor excuse for truly informed, truly free consent.

In the 1970s and 1980s, in-house researchers at Exxon began to understand how crude oil and its derivatives were leading to environmental devastation. They were among the first to comprehensively connect the use of their company’s core products to the warming of the Earth, and they predicted some of the harms which would result. But their research was treated as mere suggestion by Exxon because the effects of the obvious legislative response would “alter profoundly the strategic direction of the energy industry”. It would be a business nightmare.

Forty years later, the world has concluded its warmest year in recorded history by starting another. Perhaps we would have been better able to act if businesses like Exxon had equivocated less all these years. Instead, they publicly created confusion and kept lawmakers underinformed. The continued success of their industry lay in keeping these secrets.


“The success lies in the secrecy” is a shibboleth of the private surveillance industry, as described in Byron Tau’s new book, “Means of Control”. It is easy to find parallels to my opening anecdote throughout, though, to be clear, a direct comparison to human-led ecological destruction is a knowingly exaggerated metaphor. The erosion of privacy and civil liberties is horrifying in its own right, and it shares key attributes: those in the industry knew what they were doing and allowed it to persist because it was lucrative and, in a post-9/11 landscape, ostensibly justified.

Tau’s byline is likely familiar to anyone interested in online privacy. For several years at the Wall Street Journal, he produced dozens of deeply reported articles about the intertwined businesses of online advertising, smartphone software, data brokers, and intelligence agencies. Tau no longer writes for the Journal, but “Means of Control” is an expansion of that earlier work and carefully arranged into a coherent set of stories.

Tau’s book, like so many others describing the current state of surveillance, begins with the terrorist attacks of September 11, 2001. Those were the early days, when Acxiom realized it could connect its consumer data set to flight and passport records. The U.S. government ate it up, and its appetite proved insatiable. Tau documents the growth of an industry that did not exist — could not exist — before the invention of electronic transactions, targeted advertising, virtually limitless digital storage, and near-universal smartphone use. This rapid transformation occurred not only with little regulatory oversight, but with government encouragement, including through investments in startups like Dataminr, GeoIQ, PlaceIQ, and PlanetRisk.

In near-chronological order, Tau tells the stories which have defined this era. Remember when documentation released by Edward Snowden showed how data created by mobile ad networks was being used by intelligence services? Or how a group of Colorado Catholics bought up location data for outing priests who used gay-targeted dating apps? Or how a defence contractor quietly operates nContext, an adtech firm, which permits the U.S. intelligence apparatus to effectively wiretap the global digital ad market? Regarding the latter, Tau writes of a meeting he had with a source who showed him a “list of all of the advertising exchanges that America’s intelligence agencies had access to”, and who told him American adversaries were doing the exact same thing.

What impresses most about this book is not the volume of specific incidents — though it certainly delivers on that front — but the way they are all woven together into a broader narrative perhaps best summarized by Tau himself: “classified does not mean better”. That is true of the data’s volume and variety, and it is also true of the relative ease with which it is available. Tracking someone halfway around the world no longer requires flying people in or even paying off people on the ground. Someone in a Virginia office park can just make that happen, and likely so, too, can other someones in Moscow and Sydney and Pyongyang and Ottawa, all powered by data from companies based in friendly and hostile nations alike.

The tension running through Tau’s book is in the compromise I feel he attempts to strike between acknowledging the national security utility of a surveillance state and describing how the U.S. has abdicated the standards of privacy and freedom it has long claimed are foundational rights. His reporting often reads as an understandable combination of awe and disgust. The U.S. has, it seems, slid in the direction of the kinds of authoritarian states its administration routinely criticizes. But Tau is right to clarify in the book’s epilogue that the U.S. is not, for example, China, separated from the standards of the latter by “a thin membrane of laws, norms, social capital, and — perhaps most of all — a lingering culture of discomfort” with concentrated state power. However, the preceding chapters of the book show that questions about power do not fully extend into the private sector, where there has long been pride in the scale and global reach of U.S. businesses alongside concern about their influence. Tau’s reporting shows how U.S. privacy standards have been exported worldwide. For a more pedestrian example, consider the frequent praise–complaint sandwiches directed at Amazon, Meta, Starbucks, and Walmart, to throw a few names out there.

Corporate self-governance is an entirely inadequate response. Just about every data broker and intermediary from Tau’s writing which I looked up promised it was “privacy-first” or used similar language. Every business insists in its marketing literature that it is concerned about privacy and careful about how it collects and uses information, and businesses have been saying so for decades — yet here we are. Entire industries have been built on the backs of tissue-thin user consent and a flexible definition of “privacy”.

When polled, people say they are concerned about how corporations and the government collect and use data. Still, when lawmakers mandate choices for users about their data collection preferences, the results do not appear to show a society that cares about personal privacy.

In response to the E.U.’s General Data Protection Regulation, websites decided they wanted to continue collecting and sharing loads of data with advertisers, so they created the now-ubiquitous cookie consent sheet. The GDPR does not explicitly mandate this mechanism, and many implementations remain non-compliant with both the rules and the intention of the law, but it has become a particularly common form of user consent. However, if you arrive at a website and it asks whether you are okay with it sharing your personal data with hundreds of ad tech firms, are you providing meaningful consent with a single button click? Hardly.

Similarly, something like 10–40% of iOS users agree to allow apps to track them. In the E.U., the cost of opting out of Meta’s tracking will be €6–10 per month, which, I assume, few people will pay.

All of these examples illustrate how inadequately we assess cost, utility, and risk. It is tempting to think of this as a personal responsibility issue akin to cigarette smoking but, as we are so often reminded, none of this data is particularly valuable in isolation — it must be aggregated in vast amounts. It is therefore much more like an environmental problem.

As with global warming, exposé after exposé after exposé is written about how our failure to act has produced extraordinary consequences. All of the technologies powering targeted advertising have enabled grotesque and pervasive surveillance, as Tau documents so thoroughly. Yet these are abstract concerns compared to a fee to use Instagram, or the prospect of reading hundreds of privacy policies with a lawyer and negotiating each of them so that one may have a smidge of control over their private information.

There are technical answers to many of these concerns, and there are also policy answers. There is no reason both should not be used.

I have become increasingly convinced the best legal solution is one which creates a framework limiting the scope of data collection, restricting it to only that which is necessary to perform user-selected tasks, and preventing mass retention of bulk data. Above all, users should not be able to choose a model that puts them in obvious future peril. Many of you probably live in a society where so much is subject to consumer choice. What I wrote sounds pretty drastic, but it is not. If anything, it is substantially less radical than the status quo that permits such expansive surveillance on the basis that we “agreed” to it.
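For a sense of what such a framework might look like in practice, here is a small sketch of task-scoped data minimization. The task names, fields, and function are all invented for illustration; this is a reading of the principle, not of any actual bill.

```python
# Sketch of task-scoped data minimization: collection is permitted only for
# fields necessary to a task the user actually selected. All names invented.
ALLOWED_FIELDS = {
    "weather_forecast": {"coarse_location"},
    "turn_by_turn_navigation": {"precise_location", "heading"},
}

def collectable(task: str, requested: set[str],
                user_selected_tasks: set[str]) -> set[str]:
    """Permit only the fields required by a task the user opted into."""
    if task not in user_selected_tasks:
        return set()  # no selected task, no collection at all
    return requested & ALLOWED_FIELDS.get(task, set())

# A weather app asking for precise location gets only the coarse variant:
print(collectable("weather_forecast",
                  {"precise_location", "coarse_location"},
                  {"weather_forecast"}))  # {'coarse_location'}
```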

Any such policy should also be paired with something like the Fourth Amendment is Not For Sale Act in the U.S. — similar legislation is desperately needed in Canada as well — to prevent sneaky exclusions from longstanding legal principles.

Last month, Wired reported that Near Intelligence — a data broker you can read more about in Tau’s book — was able to trace dozens of individual trips to Jeffrey Epstein’s island. That could be a powerful investigative tool. It is also very strange and pretty creepy that all that information was held by some random company you probably have not heard of or thought about outside stories like these. I am obviously not defending the horrendous shit Epstein and his friends did. But it is really, really weird that Near is capable of producing this data set. When interviewed by Wired, Eva Galperin, of the Electronic Frontier Foundation, said “I just don’t know how many more of these stories we need to have in order to get strong privacy regulations.”

Exactly. Yet I have long been convinced an effective privacy bill could not be implemented in either the United States or the European Union, and certainly not with any degree of urgency. And, no, Matt Stoller: de facto rules built on the backs of specific FTC decisions do not count. Real laws are needed. But the products and services which would be affected are too popular and too powerful. The E.U. is home to dozens of ad tech firms that promise full identity resolution. The U.S. would not want to destroy such an important economic sector, either.

Imagine my surprise when, while I was in the middle of writing this review, U.S. lawmakers announced the American Privacy Rights Act (PDF). If passed, it would give individuals more control over how their information — including biological identifiers — may be collected, used, and retained. Importantly, it requires data minimization by default. It would be the most comprehensive federal privacy legislation in the U.S., and it also promises various security protections and remedies, though I think lawmakers’ promise to “prevent data from being hacked or stolen” might be a smidge unrealistic.

Such rules would more-or-less match the GDPR in setting a global privacy regime that other countries would be expected to meet, since so much of the world’s data is processed in the U.S. or otherwise under U.S. legal jurisdiction. The proposed law borrows heavily from the state-level California Consumer Privacy Act, too. My worry is that it will be treated by corporations similarly to the GDPR and CCPA by continuing to offload decision-making to users while taking advantage of a deliberate imbalance of power. Still, any progress on this front is necessary.

So, too, is it useful for anyone to help us understand how corporations and governments have jointly benefitted from privacy-hostile technologies. Tau’s “Means of Control” is one such example. You should read it. It is a deep exploration of one specific angle of how data flows from consumer software to surprising recipients. You may think you know this story, but I bet you will learn something. Even if you are not a government target — I cannot imagine I am — it is a reminder that the global private surveillance industry only functions because we all participate, however unwillingly. People get tracked based on their own devices, but also on the devices of those around them. That is perhaps among the most offensive conclusions of Tau’s reporting. We have all been conscripted into the service of any government buying this data. It only works because it is everywhere and used by everybody.

For all the ways they have erred, democracies are not authoritarian societies. Without reporting like Tau’s, we would be unable to see what our own governments are doing and — just as important — how that differs from actual police states. As Tau writes, “in China, the state wants you to know you’re being watched. In America, the success lies in the secrecy”. Well, the secret is out. We now know what is happening despite the best efforts of an industry to keep it quiet, just like we know the Earth is heating up. Both problems massively affect our lived environment. Nobody — least of all me — would seriously compare the two. But we can say the same about each of them: now we know. We have the information. Now comes the hard part: regaining control.

Alfred Ng, Politico:

The data-privacy bill passed Wednesday, the Protecting Americans’ Data from Foreign Adversaries Act, H.R. 7520, is highly targeted: It prevents any companies considered data brokers — third-party buyers and sellers of personal information — from selling that information to China, Russia or other “foreign adversaries.”

Though far narrower than previous bills, it would establish a precedent as the first federal law to govern data privacy in years.

Unlike the TikTok bill, this is meaningful privacy legislation for people in the United States — and it passed without a single negative vote. It is also likely to make its way to a Senate vote, according to Ng. It is similar to the executive order signed last month and therefore has similar caveats. Data brokers can still sell U.S. data to another broker in any non-adversarial country, for example, and it could be re-sold from there.
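That caveat is easy to see if you model sales as a graph: the law removes direct edges from U.S. brokers to adversary countries, but says nothing about paths through intermediaries elsewhere. A toy sketch, with invented broker names:

```python
# Sketch of the resale loophole: the law prohibits direct sales to adversaries,
# but data can still reach them through brokers elsewhere. Names are invented.
ADVERSARIES = {"CN", "RU"}

sales = {  # broker -> countries of its downstream buyers
    "us_broker": {"SG"},  # legal: Singapore is not a covered nation
    "sg_broker": {"CN"},  # outside the law's reach entirely
}

def reachable(start: str, hops: dict[str, set[str]],
              brokers_by_country: dict[str, str]) -> set[str]:
    """Follow resale chains from one broker and collect destination countries."""
    seen: set[str] = set()
    frontier = [start]
    while frontier:
        broker = frontier.pop()
        for country in hops.get(broker, set()):
            if country not in seen:
                seen.add(country)
                if country in brokers_by_country:
                    frontier.append(brokers_by_country[country])
    return seen

print(reachable("us_broker", sales, {"SG": "sg_broker"}) & ADVERSARIES)  # {'CN'}
```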

This may not be stellar legislation, since it neither limits the activity of data brokers within the U.S. nor restricts the kind of mass data collection that permits these sales, but it is progress.