After the Supreme Court of the United States overturned Roe v. Wade last year, several of the corporations which have built their businesses on knowing people’s physical locations without their knowledge or explicit permission said they would not permit that information to be used for health-related purposes. Google promised it would delete records of visits to abortion providers, domestic violence shelters, and rehab centres; when Accountable Tech checked several months later, it found much of that information was still retained.

Geoffrey Fowler, of the Washington Post, decided to look again:

To test Google’s privacy promise, I’ve been running an experiment. Over the last few weeks, I visited a dozen abortion clinics, medical centers and fertility specialists around California, using Google Maps for directions. A colleague visited two more in Florida.

In about half of the visits, I watched Google retain a map of my activity that looked like it could have been made by a private investigator.

[…]

This didn’t happen every time. After I sat for 15 minutes in the parking lots of two clinics south of San Francisco, Google deleted each from my location history within 24 hours. It did the same for my colleague’s two visits to clinics in Florida.

To state the obvious: about half the time, Google kept a record of a visit to a sensitive location even though a user might not have been aware it was tracking them. That is better than all the time, but substantially worse than none of the time.

Google is one of the most valuable businesses in the world, and it cannot reliably avoid retaining records of visits to sensitive locations. What about a smaller business similarly built on violating individuals’ privacy?

Byron Tau and Patience Haggin, the Wall Street Journal:

A Midwest antiabortion group used cellphone location data to target online content to visitors of certain Planned Parenthood clinics, according to people familiar with the matter and documents reviewed by The Wall Street Journal.

[…]

Near, the location broker whose data was used to geofence the Planned Parenthood clinics and extract the mobile-phone data, was one among a number of similar companies that faced inquiries from regulators last year over whether its data set could be used for tracking people seeking abortions.

The company told investors it received an inquiry from members of the House of Representatives about abortion tracking in July. “We communicated to the Members that the company doesn’t allow the use of its data for law enforcement or healthcare purposes, including the disclosure of reproductive rights information,” Near said in a filing with the U.S. Securities and Exchange Commission.

That anybody is wasting their time pretending companies with this business model are willing and able to protect users’ privacy would be laughable if it were not so dangerous. This is a solvable problem, but the answer will not be provided by organizations like these.

In August, the FTC sued location broker Kochava over its sale of device-specific location data. The company asked for the suit to be dropped and, this week, it got its wish.

Jessica Lyons Hardcastle, the Register:

An FTC lawsuit against Kochava, alleging the data broker harmed Americans by selling records of their whereabouts, has failed.

That said, a federal court has given the US government agency 30 days to come up with a better legal argument and try again.

If this feels familiar, it may be because a judge rejected the first FTC complaint against Facebook — filed in December 2020 — before agreeing to hear an amended version.

Hardcastle:

In a ruling on Thursday, the federal court agreed with Kochava in that the FTC’s lawsuit didn’t make a strong enough case to prove consumer injury.

“Although the FTC’s first legal theory of consumer injury is plausible, the FTC has not made sufficient factual allegations to proceed,” US District Court Judge Lynn Winmill wrote [PDF]. “To do so, it must not only claim that Kochava’s practices could lead to consumer injury, but that they are likely to do so, as required by the statute.”

This is a frustrating argument. Whatever the merits of asking whether injury from Kochava’s sale of personal data is likely or merely possible, there is little awareness of data collection by this specific company. Consumers do not know they are providing their location to Kochava for resale when using a given app. They are increasingly aware of the privacy risks of using smartphones, I am sure, but not of these specific contracts. How should any consumer realistically protect themselves when data brokers are prevalent yet invisible in their lives?

Meanwhile, it is worth noting, the harm of an unregulated market for private data is far from a theoretical concern. Trying to figure out whether it is definitively the data bought from a specific broker’s market which leads to some awful circumstance is a profoundly idiotic pursuit. The only reasonable course of action is a comprehensive prophylactic federal data privacy law.

Michelle Boorstein and Heather Kelly, Washington Post:

A group of conservative Colorado Catholics has spent millions of dollars to buy mobile app tracking data that identified priests who used gay dating and hookup apps and then shared it with bishops around the country.

[…]

One report prepared for bishops says the group’s sources are data brokers who got the information from ad exchanges, which are sites where ads are bought and sold in real time, like a stock market. The group cross-referenced location data from the apps and other details with locations of church residences, workplaces and seminaries to find clergy who were allegedly active on the apps, according to one of the reports and also the audiotape of the group’s president.

Boorstein and Kelly say some of those behind this group also outed a priest two years ago using similar tactics, which makes it look like a test case for this more comprehensive effort. As they write, a New York-based Reverend said at the time it was justified to expose priests who had violated their celibacy pledge. That is a thin varnish on what is clearly an effort to discriminate against queer members of the church. These operations have targeted clergy using data derived almost exclusively from gay dating apps.

Data brokers have long promised the information they supply is anonymized but, time and again, this is shown to be an ineffective means of protecting users’ privacy. That ostensibly de-identified data was used to expose a single priest’s use of Grindr in 2021, and the organization in question has not stopped. Furthermore, nothing would prevent this sort of exploitation by groups based outside the United States, which may be able to obtain similar data to produce the same — or worse — outcomes.

This is some terrific reporting by Boorstein and Kelly.

Rachel Gilmore, writing for Global News on February 27:

The Canadian government is banning the use of the popular short-form video application TikTok on all government-issued mobile devices, Treasury Board President Mona Fortier announced on Monday.

Effective Tuesday, TikTok “will be removed from government-issued mobile devices,” Fortier said in a statement.

“Following a review of TikTok, the Chief Information Officer of Canada determined that it presents an unacceptable level of risk to privacy and security,” she added.

This comes days after the Office of the Privacy Commissioner of Canada announced it was beginning an investigation into TikTok’s practices. Bans on government devices have since been echoed at the provincial and municipal levels.

Nick Logan, CBC News:

The government has not indicated it wants to widen the ban but there are discussions in the U.S. about banning TikTok outright and preventing ByteDance from doing business there.

Kristen Csenkey, a PhD candidate at the University of Waterloo’s Balsillie School of International Affairs, sees problems with this because of the app’s roles as both a social platform and a source of income for millions of people.

“We need to consider what the implications are,” she said. “It’s not just a technology or an app that’s just used for one purpose.”

Thomas Germain, Gizmodo:

Some 28,251 apps use TikTok’s software development kits (SDKs), tools which integrate apps with TikTok’s systems — and send TikTok user data — for functions like ads within TikTok, logging in, and sharing videos from the app. That’s according to a search conducted by Gizmodo and corroborated by AppFigures, an analytics company. But apps aren’t TikTok’s only source of data. There are TikTok trackers spread across even more websites. The type of data sharing TikTok is doing is just as common on other parts of the internet.

You have probably seen me and others make similar arguments before, and the reality of this situation has not changed since. The way online advertising is structured has made it impossible to create privacy for users on a case-by-case or app-by-app basis. There is too much information being collected about too many people at all times to make that a viable response. Despite increasing attempts at legislation, the United States remains a haven for the kinds of businesses which depend on subverting our expectations of privacy. Even in countries like Canada — with provincial and national privacy laws — there is work to be done. Perfect is the enemy of good, as they say, so if a national ban of TikTok were a productive effort for improving individual privacy, I think it would be a worthwhile step to take. But it would only be theatre.

Even if there were a coordinated response in countries that view China as an antagonist — the U.S., Canada, E.U. nations, the U.K., and so on — to prohibit TikTok and its SDKs, it would be a waiting game until the next big app from a developer with connections to the Chinese government comes along. In the meantime, that government could happily acquire vast amounts of individualized information from the existing online advertising and data broker markets. It would be a response that, at the very least, has the appearance of new Red Scare xenophobia, yet delivers little actual benefit to user privacy.

Also:

If the US government starts blocking traffic from going to a particular company, or country for that matter, it starts to look a lot like practices the US has spent years criticizing China for. The so-called “great firewall of China” sets up significant filters that censor and monitor the Chinese internet, keeping out businesses that pose threats to the nation’s economy and political control. If you ask the Chinese Communist Party why it does this, it will tell you it’s for the good of the Chinese people, and it protects national security concerns. It also limits free expression.

About that.

Jon Keegan, the Markup:

When you hit the checkout line at your local supermarket and give the cashier your phone number or loyalty card, you are handing over a valuable treasure trove of data that may not be limited to the items in your shopping cart. Many grocers systematically infer information about you from your purchases and “enrich” the personal information you provide with additional data from third-party brokers, potentially including your race, ethnicity, age, finances, employment, and online activities. Some of them even track your precise movements in stores. They then analyze all this data about you and sell it to consumer brands eager to use it to precisely target you with advertising and otherwise improve their sales efforts.

[…]

“I think the average consumer thinks of a loyalty program as a way to save a few dollars on groceries each week. They’re not thinking about how their data is going to be funneled into this huge ecosystem with analytics and targeted advertising and tracking,” said John Davisson, director of litigation at Electronic Privacy Information Center (EPIC) in an interview with The Markup. Davisson added, “And I also think that’s by design.”

Some people surely understand that loyalty programs have at least a minor privacy trade-off, in that they permit stores to track which items are popular among specific demographics. Like so many privacy-hostile practices enabled by insufficient regulation and a collect-it-all mindset, this goes so far beyond reason and expectation. Kroger brags of holding “over two thousand” data attributes for each shopper. Allowing a margin for some marketing bullshit, that is still a staggering amount of information to collect about people buying groceries. Even the most fundamental building block of life — food — has been leveraged as yet another piece of this abhorrent data marketplace.

Brian Krebs:

The website for the settlement — equifaxbreachsettlement.com — also includes a lookup tool that lets visitors check whether they were affected by the breach; it requires your last name and the last six digits of your Social Security Number.

But be aware that phishers and other scammers are likely to take advantage of increased public awareness of the payouts to snooker people. Tim Helming, security evangelist at DomainTools.com, today flagged several new domains that mimic the name of the real Equifax Breach Settlement website and do not appear to be defensively registered by Equifax, including equifaxbreechsettlement[.]com, equifaxbreachsettlementbreach[.]com, and equifaxsettlements[.]co.

So far, those URLs do not contain anything more than parked domain advertising, but it is not difficult to imagine how they could be used — recall how something similar happened earlier in the Equifax breach. Is there a legal requirement for settlement websites like Equifax’s or the Apple butterfly keyboard suit to be separate from either party’s own hosting? I can imagine why that would be desired, but the use of these generic domains is an opportunity for scammers.

Krebs:

Of course, most of those earnings come from Equifax’s continued legal ability to buy and sell eye-popping amounts of financial and personal data on U.S. consumers. As one of the three major credit bureaus, Equifax collects and packages information about your credit, salary, and employment history. It tracks how many credit cards you have, how much money you owe, and how you pay your bills. Each company creates a credit report about you, and then sells this report to businesses who are deciding whether to give you credit.

This is a choice. In addition to 143 million Americans, thousands of Britons and Canadians were also compromised. An investigation by the Office of the Privacy Commissioner of Canada found Equifax retained consumer data longer than Canadian law and its own internal policies allowed — data that was later stolen. The broker market in Canada is different from that in the U.S. but, so long as the market here is dominated by American firms like Equifax and TransUnion, the lack of a culture of privacy will be a liability.

Johana Bhuiyan, the Guardian:

The tech advocacy group Accountable Tech conducted an experiment in August and October to test Google’s pledge. Using a brand new Android device, researchers with the group analyzed their Google activity timeline, where the company shows what information is logged about an account holder’s actions. This activity helps make Google’s services “more useful” to users, according to the company – for instance, by “helping you rediscover the things that you’ve searched for, read and watched”. However, any information collected by Google is potentially subject to law enforcement requests, including the data logged in “My Activity”.

The group found that searches for directions to abortion clinics on Google Maps, as well as the routes taken to visit two Planned Parenthood locations, were stored in their Google activity timeline for weeks after it occurred. At the time of this article’s publication, the information was still stored and available at myactivity.google.com.

Not exactly surprising but still worrisome. In a narrower scope, it points to Google’s confusing mess of privacy settings, in which it treats location privacy as separate from searches and directions in Google Maps. The best thing you can do right now, regardless of who you are or what you think you will search for in the future, is to turn off Web and App Activity.

If you widen the scope, though, it is obvious such controls should not be left up to individual users to figure out, nor should it be the decision of specific data brokers whether to retain or flush sensitive information. This is a systemic issue that requires a systemic legislative response.

Ina Fried, Axios:

While TikTok had no official presence at the Code Conference, the Chinese-owned firm was the talk of the annual gathering of tech world notables this week — serving as the foil of choice for a parade of tech executives, pundits and even some government officials.

[…]

[Scott] Galloway, who took every chance to call out the dangers of TikTok, was the sharpest critic in calling for it to be banned, but others were happy to join in.

Galloway repeated that demand on “Real Time with Bill Maher”. In fairness to Galloway, his disagreement with TikTok’s practices is not unique. He has repeatedly treated Facebook with disdain and dislikes surveillance advertising. But his claims about the control exerted by TikTok are on another level.

Taylor Lorenz on Twitter [sic]:

“Tiktok is flooding our children with Chinese propaganda all day” mf have u been on tiktok like once ever please stop. And before ppl come and twist my words, I’m not saying tiktok is “good” just that there’s no evidence of what he’s constantly alleging

Karl Bode, Techdirt:

As we’ve noted several times, you could ban TikTok tomorrow with a giant patriotic hammer and the Chinese government could nab all the same U.S. consumer data from just an absolute parade of companies and dodgy data brokers. And they can do that because U.S. privacy and security standards have been a trash fire for decades, especially when it comes to things like sensitive user location data.

And they’ve been a trash fire for decades because most of the same folks crying about TikTok prioritized making money over consumer privacy standards. None of these folks, nor the operators of conferences like Code, seem particularly keyed in to any of this.

I am certain some people are truly concerned about an internet where an autocratic state has an increased presence. I get it. I do not think everyone with these worries is xenophobic. I also do not believe an American-dominated internet is a universally acceptable alternative. But it is the status quo, and a lot of the world’s private data is held by U.S. companies with few regulations and little oversight.

It would be worrisome for TikTok fears to be used as an excuse against U.S. privacy regulations on competition grounds. Unfortunately, that is the case being made by advocacy firms working on behalf of big American technology companies.

Bennett Cyphers, Electronic Frontier Foundation:

The company, Fog Data Science, has claimed in marketing materials that it has “billions” of data points about “over 250 million” devices and that its data can be used to learn about where its subjects work, live, and associate. Fog sells access to this data via a web application, called Fog Reveal, that lets customers point and click to access detailed histories of regular people’s lives. This panoptic surveillance apparatus is offered to state highway patrols, local police departments, and county sheriffs across the country for less than $10,000 per year.

The records received by EFF indicate that Fog has past or ongoing contractual relationships with at least 18 local, state, and federal law enforcement clients; several other agencies took advantage of free trials of Fog’s service. EFF learned about Fog after filing more than 100 public records requests over several months for documents pertaining to government relationships with location data brokers. EFF also shared these records with The Associated Press.

Cyphers found several connections between Fog Data Science and a data broker called Venntel. While Fog Data focuses on smaller police departments, Venntel works mostly with national agencies and, according to Cyphers’ reporting, also provides data to other law enforcement-connected location companies like Babel Street and X-Mode. Venntel is well-connected in Washington. The Department of Homeland Security is a current user of its software; in the past, it has also held contracts with the FBI, DEA, ICE, and IRS, according to a search of USAspending.gov.

Cyphers:

Together, the “area search” and the “device search” functions allow surveillance that is both broad and specific. An area search can be used to gather device IDs for everyone in an area, and device searches can be used to learn where those people live and work. As a result, using Fog Reveal, police can execute searches that are functionally equivalent to the geofence warrants that are commonly served to Google.
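Neither of these capabilities is technically exotic. As a rough sketch (not Fog’s actual implementation, which has not been published), an “area search” and a “device search” over a table of pseudonymous location pings amount to little more than the following; all names and fields here are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Ping:
    device_id: str  # pseudonymous advertising identifier
    lat: float
    lon: float
    ts: datetime

def area_search(pings, lat_min, lat_max, lon_min, lon_max, start, end):
    """Collect the ID of every device seen inside a bounding box during
    a time window; functionally, a geofence query."""
    return {
        p.device_id for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and start <= p.ts <= end
    }

def device_search(pings, device_id):
    """Return one device's full location history in time order."""
    return sorted((p for p in pings if p.device_id == device_id),
                  key=lambda p: p.ts)
```

Chaining the two, an area search around a sensitive location followed by a device search on each returned ID, is what makes the product functionally equivalent to a geofence warrant, minus the warrant.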

The EFF says Fog Reveal will display a proprietary hash of the advertiser ID for devices within a geofence instead of the actual ID. But that may not be the case for all users.

Will Greenberg, EFF:

Federal users have access to an interface for converting between Fog’s internal device IDs (“FOG IDs”) and the device’s actual Advertiser ID:

This is eyebrow raising for a couple reasons. First, if this feature is operational, it would contradict assurances made in a sample State search warrant Fog sends to customers that FOG IDs can’t be converted back into Advertiser IDs. Second, if users could retrieve the Advertiser IDs of all devices in a query’s results, it would make Reveal far more capable of unmasking the identities of those device’s owners. This is due to the fact that if you have access to a device, you can read its Advertiser ID, and thus law enforcement would be able to verify if a specific person’s device was part of a query’s results.

To be clear, the EFF does not know if this extra level of federal functionality is available to end users. The U.S. Marshals had a two-year contract with Fog Data, which ended in 2020. It is the only national-level contract the EFF could find, and there is no evidence the Marshals or any Fog Data customer has access to unhashed advertiser IDs.
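Fog has not published how FOG IDs are derived, but the general point is easy to illustrate: a keyed or salted hash is only irreversible to someone who lacks the key, and whoever computes the hashes can keep, or rebuild, a lookup table. A hypothetical sketch, assuming something HMAC-like:

```python
import hashlib
import hmac

SECRET_KEY = b"operator-held-secret"  # hypothetical; the system operator holds this

def fog_id(advertiser_id: str) -> str:
    # To an outsider without the key, this output looks like an opaque,
    # irreversible identifier.
    return hmac.new(SECRET_KEY, advertiser_id.encode(), hashlib.sha256).hexdigest()

# But the operator sees every advertiser ID as it is ingested, so
# "converting back" is just a dictionary lookup.
reverse_table: dict[str, str] = {}

def ingest(advertiser_id: str) -> str:
    hashed = fog_id(advertiser_id)
    reverse_table[hashed] = advertiser_id
    return hashed
```

And if the conversion interface works the way the EFF’s documents suggest, an officer who reads a known device’s advertiser ID could check it against unmasked query results directly; no cryptographic cleverness required.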

Even so, the presence of this functionality is worrisome. Last year, Joseph Cox of Vice explained how “identity resolution” companies like BIGDBM and FullContact brag about their ability to tie advertising identifiers to individual profiles of people: their names, physical addresses, IP addresses, property records, and more. If a law enforcement agency has contracts with a device location aggregator like Fog Data and an identity resolution company, and has access to this feature, officers could create full named profiles of people’s movements without a warrant.

Even if an agency does not have access to an unhashed device identifier, the repeated presence of a device at an address is a strong indicator that its owner lives there. It is hard to overstate how easy it is to link an address back to a name and phone number with free and publicly accessible web tools. That is, even though Fog Data may not collect what it deems personally identifiable information — which, somehow, does not include device advertising identifiers — it is trivial to tie what it does show back to a specific person. And, again, police somehow do not need a warrant for this because the location data is bought from data brokers which harvest it from apps instead of cell towers.

Andrew Paul, Popular Science:

Hosted by comedian Wanda Sykes, the show originates from MGM Studios (itself a subsidiary of Amazon), and promises “friends and family a fun new way to enjoy time with one another” via doorbell cams, although the ensuing online reaction has been less than promising. Despite the rising popularity of smart home security systems such as Ring, it seems as though some audiences can see through the upcoming show’s premise to know it sounds less “family friendly” than a thinly-veiled surveillance state infomercial attempting to push more home monitoring products.

Just some lighthearted yuks in support of the private police state. Order your Ring today, brought to you by Amazon, and you, too, can help cops violate civil liberties while providing material for this long-form advertisement.

From the FTC’s press release:

In a complaint filed against Kochava, the FTC alleges that the company’s customized data feeds allow purchasers to identify and track specific mobile device users. For example, the location of a mobile device at night is likely the user’s home address and could be combined with property records to uncover their identity. In fact, the data broker has touted identifying households as one of the possible uses of its data in some marketing materials.

[…]

The FTC alleges that Kochava fails to adequately protect its data from public exposure. Until at least June 2022, Kochava allowed anyone with little effort to obtain a large sample of sensitive data and use it without restriction. The data sample the FTC examined included precise, timestamped location data collected from more than 61 million unique mobile devices in the previous week. Using Kochava’s publicly available data sample, the FTC complaint details how it is possible to identify and track people at sensitive locations […]
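The FTC’s example of night-time location as a proxy for a home address is worth dwelling on because the inference is almost embarrassingly simple. A minimal sketch of the idea, with invented field names and thresholds:

```python
from collections import Counter
from datetime import datetime

def likely_home(pings: list[tuple[float, float, datetime]],
                night_start: int = 22, night_end: int = 6):
    """Guess a device's home as its most frequent night-time position,
    rounded to four decimal places (roughly ten metres)."""
    night = [
        (round(lat, 4), round(lon, 4))
        for lat, lon, ts in pings
        if ts.hour >= night_start or ts.hour < night_end
    ]
    return Counter(night).most_common(1)[0][0] if night else None
```

From that coordinate, a property-records join or a free reverse-address lookup yields a name, which is exactly the chain the complaint describes.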

Lauren Feiner, CNBC:

“This lawsuit shows the unfortunate reality that the FTC has a fundamental misunderstanding of Kochava’s data marketplace business and other data businesses,” Kochava Collective General Manager Brian Cox said in a statement. “Kochava operates consistently and proactively in compliance with all rules and laws, including those specific to privacy.”

Cox said the company announced a new ability to block location data from sensitive locations prior to the FTC’s lawsuit. He said the company engaged with the FTC for weeks explaining the data collection process and hoped to come up with “effective solutions” with the agency.

By “engaging with the FTC for weeks”, Cox appears to mean filing a lawsuit against the Commission earlier this month in an attempt to block the FTC from filing this complaint.

Marketing and data companies are eager to put on a privacy-respecting guise when it suits them while promising services completely antithetical to that. For example, Kochava says it offers in its data marketplace the ability to match mobile devices — perhaps the billion unique mobile devices it also brags about — to email addresses and precise locations. Its marketing materials say it can tie those devices to households and their respective behaviour and purchasing data. Of course, on the same page, it says it is “privacy-first by design” — one wonders how that is possible when the sample data set viewed by the FTC apparently pinpoints specific users by time and location.

Want to opt out? Thanks to regulation in Europe, some U.S. states, and elsewhere, that is made possible. But Kochava is uniquely dickish about it:

[…] You may submit a request to delete all your personal information by emailing Kochava at privacy@kochava.com or by contacting the legal department via telephone at 855-562-4282. However, please bear in mind that when you contact Kochava with such a request, because of the precautions we have proactively taken to protect your privacy, you are actually volunteering more personally identifying information to Kochava as a result of lodging the request than Kochava would have ever had prior to you initiating contact.

I call bullshit. What identifiers could you possibly give Kochava, to opt out of its privacy-hostile practices, that it does not already know and has not already enriched with other data sources?

Kochava obviously wants to promote itself as uniquely precise to its audience of marketers who crave that kind of fidelity. Its claims warrant some skepticism. But time and time again this industry has proved itself to be as creepy as the brochures claim, at least in how much it collects. How it interprets that information is, in my experience, more questionable.

The FTC does not come out of this looking particularly good, either. Megan Gray on Twitter:

Methinks the agency knows it’s going to lose. Picked this company b/c thought it would settle. Oopsy. Then when company preemptively filed case, agency was in a corner and doesn’t want to be perceived as backing down from a fight.

Gray, continued:

The agency had until MID OCTOBER to respond to the DJ (and could’ve gotten an extension for further time). This was clearly rushed to capture the press cycle. I genuinely feel bad for staff.

It looks really bad for regulators to get financial settlements and modest concessions out of these cases without pushing for an admission of wrongdoing. It makes it look as though these cases are primarily for revenue generation instead of exposing heinous behaviour and setting standards for others to follow.

Alfred Ng, Politico:

Congress has never been closer to passing a federal data privacy law — and the brokers that profit from information on billions of people are spending big to nudge the legislation in their favor.

[…]

The brokers, including U.K.-based data giant RELX and credit reporting agency TransUnion, want changes to the bill — such as an easing of data-sharing restrictions that RELX says would hamper investigations of crimes. Some data brokers also want clearer permission to use third-party data for advertising purposes.

The only surprising part of this is that data brokers are bragging about being treated as an extension of law enforcement. Imagine being thrilled to live in a police state, so long as it is privatized.

A unique consequence of writing about the biggest computer companies, which are all based in the United States, from most any other country is a lurking sense of invasion. I do not mean this in an anti-American sense; it is perhaps inherent to any large organization emanating from the world’s most powerful economy. But there is always a sense that the hardware, software, and services we use are designed by Americans often for Americans. You can see this in a feature set inevitably richer in the U.S. than elsewhere, language offerings that prioritize U.S. English, pricing often pegged to the U.S. dollar, and — perhaps more subtly — in the values by which these products are created and administered.

These are values that I, as someone who resides in a country broadly similar to the U.S., often believe are positive forces. A right to free expression is among those historically espoused by these companies in the use of their products. But over the past fifteen years of their widespread use, platforms like Facebook, Instagram, Twitter, and YouTube have established rules of increasing specificity and caution to restrict what they consider permissible. That, in a nutshell, is the premise of Jillian C. York’s 2021 book, Silicon Values.

Though it was published last year, I only read it recently. I am glad I did, especially with several new stories questioning the impact of a popular tech company an ocean away. TikTok’s rapid rise after decades of industry dominance by American giants is causing a re-evaluation of an America-first perspective. Om Malik put it well:

For as long as I can remember, American technology habits did shape the world. Today, the biggest user base doesn’t live in the US. Billion-plus Indians do things differently. Ditto for China. Russia. Africa. These are giant markets, capable of dooming any technology that attempts a one-size-fits-all approach.

The path taken by York in Silicon Values gets right up to the first line of this quote from Malik. In the closing chapter, York (228) writes:

I used to believe that platforms should not moderate speech; that they should take a hands-off approach, with very few exceptions. That was naïve. I still believe that Silicon Valley shouldn’t be the arbiter of what we can say, but the simple fact is that we have entrusted these corporations to do just that, and as such, they must use wisely the responsibility that they have been given.

I am not sure this is exactly correct. We often do not trust the judgements of moderation teams, as evidenced by frequent complaints about what is permissible and, more often, what gets flagged, demonetized, or removed. As I was writing this article, reporters noted that Twitter took moderation action against doctors and scientists posting factual, non-controversial information about COVID-19. This erroneous flagging was reverted, but it is another in a series of stories about questionable decisions made by big platforms.

In fact, much of Silicon Values is about the tension between the power of these giants to shape the permissible bounds of public conversations and their disquieting influence. At the beginning of the book, York points to a 1946 U.S. Supreme Court decision, Marsh v. Alabama, which held that private entities can become sufficiently large and public to require them to be subject to the same Constitutional constraints as government entities. Though York says this ruling has “not as of this writing been applied to the quasi-public spaces of the internet” (14), I found a case which attempted to use Marsh to push against a moderation decision. In an appellate decision in Prager University v. Google, Judge M. Margaret McKeown wrote (PDF) “PragerU’s reliance on Marsh is not persuasive”. More importantly, McKeown reflected on the tension between influence and expectations:

Both sides say that the sky will fall if we do not adopt their position. PragerU prophesizes living under the tyranny of big-tech, possessing the power to censor any speech it does not like. YouTube and several amicus curiae, on the other hand, foretell the undoing of the Internet if online speech is regulated. While these arguments have interesting and important roles to play in policy discussions concerning the future of the Internet, they do not figure into our straightforward application of the First Amendment.

All of the subjects concerned being American, it makes sense to judge these actions on American legal principles. But even if YouTube were treated as an extension of government due to its size and required to retain every non-criminal video uploaded to its service, it would make as much of a political statement elsewhere, if not more. In France and Germany, it — like any other company — must comply with laws that require the removal of hate speech, laws which in the U.S. would be unconstitutional. York (19) contrasts their eager compliance with Facebook’s memorable inaction to rein in hate speech that contributed to the genocide of Rohingya people in Myanmar. Even if this is a difference of legal policy — that France and Germany have laws but Myanmar does not — it is clearly unethical for Facebook to have inadequately moderated this use of its platform.

The concept of an online world no longer influenced largely by U.S. soft power brings us back to the tension with TikTok and its Chinese ownership. It understandably makes some people nervous that the most popular social media platform for many Americans has the backing of an authoritarian regime. Some worry about the possibility of external government influence on public policy and discourse, though one study I found reflects a clear difference in moderation principles between TikTok and its Chinese-specific counterpart Douyin. Some are concerned about the mass collection of private data. I get it.

But from my Canadian perspective, it feels like most of the world is caught up in an argument between a superpower and a near-superpower, with continued dominance by the U.S. preferable only by comparison and familiarity. Several European countries have banned Google Analytics because it is impossible for their citizens to be protected against surveillance by American intelligence agencies. The U.S. may have legal processes to restrict ad hoc access by its spies, but those are something of a formality. Its processes are conducted in secret and with poor public oversight. What is known is that it rarely rejects warrants for surveillance, and that private companies must quietly comply with document requests with little opportunity for rebuttal or transparency. Sometimes, these processes are circumvented entirely. The data broker business permits surveillance for anyone willing to pay — including U.S. authorities.

The privacy angle holds little more weight. While it is concerning for an authoritarian government to be on the receiving end of surveillance technologies rather than advertising and marketing firms, it is unclear that any specific app disproportionately contributes to this sea of data. Banning TikTok does not make for a meaningful reduction of visibility into individual behaviours.

Even concerns about how much a recommendation algorithm may sway voter intent smell funny. Like Facebook before it, TikTok has downplayed the seriousness of its platform by framing it as an entertainment venue. As with other platforms, disinformation on TikTok spreads and multiplies. These factors may have an effect on how people vote. But the sudden alarm over yet-unproved allegations of algorithmic meddling in TikTok to boost Chinese interests is laughable to those of us who have been at the mercy of American-created algorithms despite living elsewhere. American state actors have also taken advantage of the popularity of social networks in ways not dissimilar from political adversaries.

However, it would be wrong to conclude that both countries are basically the same. They obviously differ in their means of governance and the freedoms afforded to people. The problem is that I should not be able to find so many similarities in the use of technology as a form of soft power, and certainly not for spying, between a democratic nation and an authoritarian one. The mount from which Silicon Values are being shouted looks awfully short from this perspective.

You do not need me to tell you that decades of undermining democracy within our countries have caused a rise in autocratic leanings, even in countries assumed stable. The degradation of faith in democratic institutions is part of a downward spiral caused by internal undermining and a failure to uphold democratic values. Again, there are clear differences and I do not pretend otherwise. You will not be thrown in jail for disagreeing with the President or Prime Minister, and please spare me the cynical and ridiculous “yet!” responses.

I wish there were a clear set of instructions about where to go from here. Silicon Values is, understandably, not a book about solutions; it is an exploration of often conflicting problems. York delivers compelling defences of free expression on the web, maddening cases where newsworthy posts were removed, and the inequity of platform moderation rules. It is not a secret, nor a compelling narrative, that rules are applied inconsistently, and that famous and rich people are treated with more lenience than the rest of us. But what York notes is how aligned platforms are with the biases of upper-class white Americans; not coincidentally, the boards and executive teams of these companies are dominated by people matching that description.

The question of how to apply more local customs and behaviours to a global platform is, I believe, the defining challenge of the next decade in tech. One thing seems clear to me: the world’s democracies need to do better. It should not be so easy to point to similarities in egregious behaviour; corruption of legal processes should not be so common. I worry that regulators in China and the U.S. will spend so much time negotiating which of them gets to treat the internet as their domain while the rest of us get steamrolled by policies that maximize their self-preferencing.

This is especially true as waves of stories have been published recently alleging TikTok and its adjacent companies have suspicious ties to arms of an autocratic state. Lots of TikTok employees apparently used to work for China’s state media outlets and, in another app from ByteDance, TikTok’s owner, pro-China stories were regularly promoted while critical news was minimized. ByteDance sure seems to be working more closely with government officials than operators of other social media platforms. That is probably not great; we all should be able to publish negative opinions about lawmakers and big businesses without fear of reprisal.

There is a laundry list of reasons why we must invest more in our democratic institutions. One of them is, I believe, to ensure a clear set of values projected into the world. One way to achieve that is to prefer protocols over platforms. It is impossible for Facebook or Twitter or YouTube to be moderated to the full expectations of its users, and the growth of platforms like Rumble is a natural offshoot of that. But platforms like Rumble which trumpet their free speech bonafides are missing the point: moderation is good, normal, and reinforces free speech principles. It is right for platform owners to decide the range of permissible posts. What is worrying is the size and scope of them. Facebook moderates the discussions of billions — with a b and an s — of people worldwide. In some places, this can permit greater expression, but it is also an impossible task to monitor well.

The ambition of Silicon Valley’s biggest businesses has not gone unnoticed outside of the U.S. and, from my perspective, feels out of place. Yes, the country’s light touch approach to regulation and generous support of its tech industry has brought the world many of its most popular products and services. But it should not be assumed that we must rely on these companies built in the context of middle- and upper-class America. That is not an anti-American statement; nothing in this piece should be construed as anti-American. Far from it. But I am dismayed after my reading of Silicon Values. What I would like is an internet where platforms are not so giant, common moderation actions are not viewed as weapons, and more power is in more relevant hands.

Salvador Rodriguez, Wall Street Journal:

In the years before the change, Apple suggested a series of possible arrangements that would earn the iPhone maker a slice of Facebook’s revenue, according to people who either participated in the meetings or were briefed about them. As one person recalled: Apple officials said they wanted to “build businesses together.”

One idea that was discussed: creating a subscription-based version of Facebook that would be free of ads, according to people familiar with the discussions. Because Apple collects a cut of subscription revenue for apps in its App Store, that product could have generated significant revenue for the Cupertino, Calif., giant.

[…]

Apple has discussed similar business models with many developers, according to a person familiar with the conversations.

If Apple was, indeed, planning a relentless and self-preferencing campaign against Facebook beginning in 2016, as Rodriguez reports, for a feature previewed in 2020, that would be pretty terrible. But 2016 is the time when Apple enabled subscriptions for all types of apps and launched its Search Ads initiative. Apple executives, including Phil Schiller, explained these changes in press briefings, and the company privately discussed them with developers, too.

Lauren Goode of the Verge in June 2016:

One popular app developer, who had been clued in to Apple’s App Store changes, says the new subscription offerings are “an earthquake in my world, in a good way.”

“It’s hard to emphasize how significantly this can change the viability of companies like mine and their growth trajectory,” says Itai Tsiddon, the co-founder of Lightricks, which makes top-selling apps like Facetune and Enlight.

If Apple discussed its changes with Tsiddon and, implicitly or explicitly, encouraged him to adopt subscriptions, why would it not do the same for big developers like Facebook?

Maybe this was a scheme five years in the making and explicitly targeted at Facebook. Its apps were a “persistent frustration for some Apple executives”, according to Rodriguez’s sources, because Apple did not get a cut of ad revenue. If that is the case, it confirms some of the harshest critiques of the App Store model and Apple’s entitlement to a cut of revenue. On the other hand, without context, it is unclear why Facebook’s apps so chafed Apple’s leadership. Maybe it is because they were popular, free, and enormous, likely costing Apple huge amounts of money every time a new version of Instagram was released. That is partly Apple’s problem; the App Store is designed in a way that disincentivizes in-app transactions, incentivizes free apps, and does not care about file size or bandwidth use.

Here is the part of Rodriguez’s story that is not getting as much attention:

The Facebook executives who internally proposed ending the collection of third-party data argued that by ceasing its reliance on such data, the social-media giant could also reduce the company’s dependence on Apple and Google’s mobile operating systems.

Mr. Zuckerberg opted instead to leave the bulk of its data-collection practices in place. The company shut down an ad-targeting option that relied on information collected by data brokers shortly after the Cambridge Analytica scandal was reported in March 2018, but otherwise Facebook continued to rely on third-party data to target users with personalized ads.

Executives at Facebook knew, in 2018, that its reliance on third-party data was a risk, and that regulatory and platform changes could make its use very difficult. It could have switched to an entirely first-party model at the time. I assume its executives now regret that decision.

App Tracking Transparency (hereafter, “ATT”) is in the news again because many advertising-supported companies have reported a particularly bad earnings quarter attributable, they say, to several factors, perhaps best summarized by Mobile Dev Memo’s Eric Seufert:

It’s impractical, if not impossible, to try to tease out the individual burden of any of these dynamics on mobile advertising performance, generally. And it’s also largely beside the point: it is the “perfect storm” combination of these three conditions that compounds to such painful detriment to advertising performance.

This is perhaps true, but it has not stopped Seufert and others from calling out ATT as a key factor. Seufert published the third instalment of his series about how unfair ATT is earlier this month after news broke of new App Store ad formats, and it is, as is typical, an excoriation of the Apple-imposed question of whether users want to be tracked by third-party services:

Note that Apple’s ad network utilizes app install and in-app purchase data, to which Apple has exclusive first-party access under the restrictions of ATT, to target ads to users with its ad network. It’s worth underscoring that, with ATT, the scope and substance of consumer data utilized to target ads remains unchanged, except that only Apple has access to it. To be fair: Apple does employ privacy controls with its own ad network that are superior to the pre-ATT status quo. But my primary contention with ATT is that it does not facilitate real consumer choice and that it deprives consumers of widespread ad relevancy and advertisers and publishers of commercial opportunity.

Those are actually three “primary” concerns, and I think it is worth responding to them. But first, I think we should ask whether ATT really is cratering mobile advertising in the way both its critics and its proponents seem to believe. That includes me, by the way. I have previously linked to stories about the apparently enormous impact ATT has had on big ad companies like Alphabet, Meta, and Snap. But I thought it would be worth a deeper look.


As Seufert says, it is very difficult to figure out what specific effect ATT has because there are so many factors involved. But it is fair to think that, if it is affecting publishers’ revenue as Seufert says, it should also be affecting advertisers’ revenue. And, while these companies do not separate revenue by platform, they do offer geographic breakdowns. North America is the only region where the iPhone is more popular than Android; elsewhere, the reverse is true, and often overwhelmingly. We also know ATT was rolled out at the end of April 2021. With time given for users to update, we should expect North American revenue to begin faltering in the third calendar quarter of 2021 compared to the rest of the world.
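Concretely, the check I am describing is simple arithmetic over each company’s reported segments: compute year-over-year growth per region and see whether North America lags. A sketch with placeholder figures rather than anyone’s actual filings:

```python
# Year-over-year revenue growth by region. The figures are illustrative
# placeholders, not taken from any company's actual filings.
def yoy_growth(prior: float, current: float) -> float:
    return (current - prior) / prior * 100

regions = {
    # region: (Q3 2020 revenue, Q3 2021 revenue), in millions
    "North America": (400.0, 640.0),
    "Rest of world": (270.0, 425.0),
}

for name, (prior, current) in regions.items():
    print(f"{name}: {yoy_growth(prior, current):+.1f}% YoY")

# If ATT were the dominant factor, North America, the only region where
# the iPhone outsells Android, should trail the rest of the world markedly.
```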

The actual figures tell a much murkier story. I do not think it is fair to suggest ATT does nothing, but its effect does not seem as pronounced as either its biggest supporters or its biggest naysayers suggest.

Snap, for example, is a company that has no major revenue stream outside of ad placements in its smartphone apps. But in Q3 2021, a full quarter after ATT’s public debut, Snap posted year-over-year revenue growth of 57% overall. In North America, it reported 60% growth — higher than in any other region.

The following quarters all show overall revenue gains in North America just one percentage point below the company’s total growth. It is a pattern that more closely mimics the number of daily active users. Snap has only posted modest, single-digit year-over-year gains in North American users, but decent double-digit growth elsewhere. Meanwhile, its growth in the average revenue per user has been stronger in North America since ATT’s debut than anywhere else.

If ATT were so significantly kneecapping revenue, I would think we would see a pronounced skew against North America compared to elsewhere. But that is not the case. Revenue in North America is only slightly off compared to the company total, and it is increasing how much it earns per North American user compared to the rest of the world.

What about Alphabet? It has actually posted year-over-year revenue gains in the United States — one of few countries where iOS is dominant — higher than those in Africa, Asia, or Europe in its first and second quarters this year. In fairness, its gains were much stronger in “Other Americas”, which comprises Canada plus Mexico and countries to its south.

Meta’s business is the one everyone appears to be watching because two quarters this year have been rough. In its most recent, it reported its first-ever year-over-year revenue decline: down about a billion dollars in Europe and about $600 million in the U.S. and Canada. That is alarming for the company, to be sure, but it still does not track with ATT causality for two reasons:

  • iOS is far more popular in the U.S. and Canada than it is in Europe, but Meta incurred a greater revenue decline — in absolute terms and, especially, in percentage terms — in Europe.

  • Meta was still posting year-over-year gains in both those regions until this most recent quarter, even though ATT rolled out over a year ago.

Those are all big, well-known companies. What about pure advertising businesses? Surprisingly few are publicly traded. Even so, I pulled the earnings from a few popular programmatic display ad providers. Magnite, for example, calls itself the “world’s largest independent sell-side ad platform”. In its most recent quarter, the proportion of revenue it derived from the U.S. increased year-over-year compared to the rest of the world. The most recent investor report from Criteo, a major provider of retargeted ads, showed an overall decline year-over-year, but the Americas performed far better than African, Asian, or European markets.

Perhaps the most favourable evidence for ATT’s effects lies in the earnings reports from Publicis Groupe, which has acquired dozens of name-brand agencies — like Leo Burnett and Saatchi & Saatchi — and also runs a digital ad platform. It is such a multifaceted business that it is hard to see where the effects of ATT may come into play. In the first half of 2022, its “organic” growth in North America was the lowest of any region. But it ranked in the middle in total growth over 2021, posting higher gains than Asia or Europe. In the same press release, Publicis also specifically calls out the performance of Epsilon, its internal data brokerage service, as a reason for its U.S. growth.

Though I did not examine every available earnings report, I am not cherry picking. I looked through the list of companies on Martech Map, checked to see if they were significant enough and had investor information, and then looked for geographic breakdowns. It is possible I have my assumptions all wrong, too; please let me know if you believe that is the case. I am not arguing this is a perfect analogue, only that it paints a murkier picture of ATT’s apparent financial effects on the ad tech industry.


I think Seufert’s criticisms of ATT have been among the most cogent and thoughtful, and I do not intend for this to be a full article about him, specifically. But he does articulate some of the more common problems I see being raised with ATT. There are legal questions, being investigated by British and German authorities, about Apple’s simultaneous offering of “personalized” App Store ads; I will focus only on the moral questions on which I think I can fairly comment.

There is a fairly significant ethical problem out of the gate: there are those who believe highly-targeted advertisements are a fair trade-off because they offer businesses a more accurate means of finding their customers, and the behavioural data collected from all of us is valuable only in the aggregate. That is, as I understand it, the view of analysts like Seufert, Benedict Evans, and Ben Thompson. Frequent readers will not be surprised to know I disagree with this premise. Regardless of how many user agreements we sign and privacy policies we read, we cannot know the full extent of the data economy. Personal information about us is being collected, shared, combined, and repackaged. It may only be profitable in aggregate, but it is useful with finer granularity, so it is unsurprising that it is indefinitely warehoused in detail. You can prove this to yourself by viewing the browsing history collected by Facebook and Google, or requesting a copy of your personal data from major brokers. Some make that process very easy: you can often complete a form on the company’s website. Others require you to send an email with the personal identifiers you would like to obtain a records check on, like your name, email addresses, phone number, and device IDs. Some will display user data to those who ask anywhere in the U.S. or worldwide, while others will only comply with requests from California or Vermont, or wherever laws require. You may find some companies you have never heard of have a lot of information about you, often a mix of scraped public sources and data shared or collected in private deals.

What you will likely find after completing several of these requests, especially if you live in the U.S., is how much information about you is being held by these brokers and marketing companies. Even though Canadian privacy laws give me some cover from the worst abuses, I have still found brokers that held my full name, my full street address, and other personal identifiers. These attributes are often not relevant to targeted advertising — what does it matter what my apartment number is? Why are brokers not dividing the world into general areas in the style of What Three Words? — but they hold it all because it is cheap enough to do so, even at scale. All so, the story goes, a neighbourhood restaurant can precisely advertise a special offer when it is close to my partner’s birthday.

In a passionate defence of targeted ads, Seufert asked, rhetorically, “what happens when ads aren’t personalized?”, answering “digital ads resemble TV ads: jarring distractions from core content experience. Non-personalized is another way of saying irrelevant, or at best, randomly relevant.”

For what it is worth, a relevant ad has never serendipitously graced my screen, even before I took steps to avoid targeted advertising. My friends and family barely see well-targeted ads, either. Most often, they see the same ad — on every other webpage and in every app they use — for an online store they visited once, begging them to return, sometimes in French when their device is set to English. What is the solution to this — more data collection? That is absurd. Even at their absolute best, targeted ads are seen by viewers as creepy. People do not want irrelevant ads, but they do not want to feel followed or harassed either. Targeted advertising enables the latter. Even if there were a significant payoff for publishers — and there is not — does it make sense to build the internet’s economy on the backs of a few hundred brokers none of us have heard of, trading and merging our personal information in the hope of generating a slightly better click-through rate?

Earlier, I quoted Seufert:

But my primary contention with ATT is that it does not facilitate real consumer choice and that it deprives consumers of widespread ad relevancy and advertisers and publishers of commercial opportunity.

ATT may not be worded fairly — though Seufert’s proposed solution is similarly vague and unhelpful — but he is right to argue it does not offer real choice, though probably not in the way he intends: users can still be tracked, and apps from well-known developers have been found to ignore opt-outs.

Then there is the much bigger question of whether people should even be able to opt into such widespread tracking. We simply cannot be informed consumers in every aspect of our lives, and we cannot foresee how this information will be used and abused in the fullness of time. It sounds boring, but what is so wrong with requiring data minimization at every turn, permitting only the most relevant personal data to be collected, and restricting the ability for this information to be shared or combined?

Does ATT really “[deprive] consumers of widespread ad relevancy and advertisers and publishers of commercial opportunity”? Even if it does — which I doubt — has that commercial opportunity really existed with meaningful consumer awareness and choice? Or is this entire market illegitimate, artificially inflated by our inability to avoid becoming its subjects?

I wonder how many of ad tech’s woes are really ascribable to ATT, and how many are the fault of the myriad other problems the industry is running into: currency fluctuations, regulation, pandemic effects, and changes in user behaviour all come to mind.

Cal Newport, the New Yorker:

[…] TikTok is estimated to have a billion active monthly users, a number it achieved in a breathtakingly short time, and according to some reports it boasts an average session length of 10.85 minutes, which, if true, would be far longer than that of any other major social-media app. Meanwhile, Facebook’s parent company recently lost more than two hundred and thirty billion dollars in market capitalization in a single day after the company announced that user growth had stalled. Analysts identified TikTok as an important factor in this slowdown.

Is it possible the social media giants from California are facing waning relevance? Is ATT perhaps a convenient scapegoat whose actual effect is questionable? I am not sure it is possible to say from the outside looking in, but I am also not sure we can draw any conclusions from one or two quarters this year, over a year after ATT was launched to the public.


In theory, ATT is a very good option for users. Its biggest problem is that the company which makes it also has an advertising division, and it appears to have engaged in some quiet self-preferencing behaviours. Legal questions aside, it is disappointing to see such an obvious user benefit so easily undermined. Apple’s own App Store ads give ATT’s critics a clear conflict of interest to point to; they also look tacky and create an unpleasant experience. ATT’s reliance on a very specific definition of “tracking” — one that allows Apple to segment users based on what they read in News and what they buy in third-party apps — is far more permissive than I think it ought to be for a company that so loudly trumpets its privacy bona fides. But advertising that relies on first-party data can accurately be described as better for privacy than advertising based on the third-party data economy. Whether it is fair for Apple to treat itself, as the platform creator, as the root-level first party with an infinitely bigger observation window is another question. I do not think it is.

Conflicts like these are one of many reasons why privacy rights should be established by regulators, not individual companies. Privacy must not be a luxury good, or something you opt into, and it should not be a radical position to say so. We all value different degrees of privacy, but it should not be possible for businesses to be built on whether we have rights at all. The digital economy should not be built on such rickety and obviously flawed foundations.

While we are on the subject of data marketplaces, here is Joseph Cox, of Vice:

Placer.ai, a location data firm that Motherboard previously revealed was providing heatmaps of approximately where abortion clinic visitors live, has admitted that people have obtained data related to these visits in the past.

A different location data company, INRIX, offers census block-level aggregate statistics of Planned Parenthood visitors. But it is somewhat irrelevant what individual data brokers offer and what limitations they place on themselves, because the value of this stuff is in the aggregate, and users have little individual control. As an example, one data platform, Narrative, boasts connections to seventeen different location providers claiming two billion mobile identifiers. “Always present” in this data set are the latitude and longitude, timestamp, and device identifier. In May, it removed data collected from some health-related apps from its platform, but it relies on platform users following its terms and conditions.
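
To make concrete how little is needed, here is a minimal sketch, with hypothetical structures and invented names throughout, of how a buyer of such a feed could geofence a single location using only those four always-present fields:

```swift
import Foundation

// Hypothetical broker-style location "ping": these four fields match what
// Narrative says is always present in its feed.
struct Ping {
    let deviceID: String   // mobile advertising identifier
    let latitude: Double
    let longitude: Double
    let timestamp: Date
}

// Great-circle distance in metres between two coordinates (haversine formula).
func metresBetween(_ lat1: Double, _ lon1: Double, _ lat2: Double, _ lon2: Double) -> Double {
    let radius = 6_371_000.0
    let dLat = (lat2 - lat1) * .pi / 180
    let dLon = (lon2 - lon1) * .pi / 180
    let a = sin(dLat / 2) * sin(dLat / 2)
          + cos(lat1 * .pi / 180) * cos(lat2 * .pi / 180) * sin(dLon / 2) * sin(dLon / 2)
    return 2 * radius * asin(sqrt(a))
}

// Every device identifier observed within `radius` metres of a target --
// a clinic, a shelter, a place of worship -- falls out of a simple filter.
func devicesNear(_ pings: [Ping], latitude: Double, longitude: Double, radius: Double) -> Set<String> {
    Set(pings
        .filter { metresBetween($0.latitude, $0.longitude, latitude, longitude) <= radius }
        .map(\.deviceID))
}
```

Nothing about this requires sophistication or insider access; it is an ordinary filter over a feed anyone can license.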

Narrative is just one example of a massive and insidious industry relying on a lack of knowledge among users and a failure to regulate.

Kristin Cohen, of the U.S. Federal Trade Commission:

The conversation about technology tends to focus on benefits. But there is a behind-the-scenes irony that needs to be examined in the open: the extent to which highly personal information that people choose not to disclose even to family, friends, or colleagues is actually shared with complete strangers. These strangers participate in the often shadowy ad tech and data broker ecosystem where companies have a profit motive to share data at an unprecedented scale and granularity.

This sounds promising. Cohen says the FTC is ready to take action against companies and data brokers misusing health information, in particular, in a move apparently spurred or accelerated by the overturning of Roe v. Wade. So what is the FTC proposing?

[…] There are numerous state and federal laws that govern the collection, use, and sharing of sensitive consumer data, including many enforced by the Commission. The FTC has brought hundreds of cases to protect the security and privacy of consumers’ personal information, some of which have included substantial civil penalties. In addition to Section 5 of the FTC Act, which broadly prohibits unfair and deceptive trade practices, the Commission also enforces the Safeguards Rule, the Health Breach Notification Rule, and the Children’s Online Privacy Protection Rule.

I am no lawyer, so it would be ridiculous for me to try to interpret these laws. But what is there sure seems limited in scope — in order: personal information entrusted to financial companies, security breaches of health records, and children under 13 years old. This seems like the absolute bottom rung on the ladder of concerns. It is obviously good that the FTC is reiterating its enforcement capabilities, though doing so is revealing of how limited its authority really is. What is it about those laws that will permit the agency to take meaningful action against the myriad anti-privacy practices covered by over-broad Terms of Use agreements?

Companies may try to placate consumers’ privacy concerns by claiming they anonymize or aggregate data. Firms making claims about anonymization should be on guard that these claims can be a deceptive trade practice and violate the FTC Act when untrue. Significant research has shown that “anonymized” data can often be re-identified, especially in the context of location data. One set of researchers demonstrated that, in some instances, it was possible to uniquely identify 95% of a dataset of 1.5 million individuals using four location points with timestamps. Companies that make false claims about anonymization can expect to hear from the FTC.

Many digital privacy advocates have been banging this drum for years. Again, I am glad to see it raised as an issue the FTC is taking seriously. But given the exuberant data broker market, how can any company that collects dozens or hundreds of data points honestly assert its de-identified data cannot be associated with real identities?
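
The mechanics behind that 95-percent finding are not exotic. Here is a toy sketch, with entirely invented data, of how a handful of coarse location points narrows a “de-identified” feed to a single device:

```swift
import Foundation

typealias Point = String  // a rounded coordinate plus an hour, e.g. "51.05,-114.07@09"

// Hypothetical broker feed: device identifier -> coarse points where it was seen.
let feed: [String: Set<Point>] = [
    "idfa-0001": ["51.05,-114.07@09", "51.04,-114.06@12", "51.05,-114.08@18"],
    "idfa-0002": ["51.05,-114.07@09", "51.03,-114.10@12", "51.02,-114.05@18"],
    "idfa-0003": ["51.01,-114.02@09", "51.04,-114.06@12", "51.05,-114.08@18"],
]

// Which devices were seen at every one of the known points? Each added
// point shrinks the candidate list until only one device remains.
func candidates(for known: Set<Point>) -> [String] {
    feed.compactMap { id, seen in known.isSubset(of: seen) ? id : nil }
}

print(candidates(for: ["51.05,-114.07@09"]).count)                       // 2 devices match
print(candidates(for: ["51.05,-114.07@09", "51.04,-114.06@12"]).count)  // only 1 remains
```

Scale the feed to millions of devices and the principle holds: each additional spatiotemporal point multiplies the distinguishing power, which is why so few were needed in the cited research.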

The only solution is for those companies to collect less user data and to pass even fewer points on to brokers. But will the FTC be given the tools to enforce this? Its funding is being increased significantly, so it will hopefully be able to make good on its cautionary guidance.

Since the parallel rise over the past couple of years of TikTok and concerns about the service’s connections to China — or, more specifically, its intelligence and military arms — I have been mulling over this piece. I have dropped bits and pieces of it before, but I feel like now is a good time to bring all those thoughts together, given a letter sent by FCC commissioner Brendan Carr jointly to Tim Cook and Sundar Pichai. Unfortunately, it has solely been posted to Twitter as a series of images without descriptive text, because I guess Carr hates people who use screen readers:

I am writing the two of you because Apple and Google hold themselves out as operating app stores that are safe and trusted places to discover and download apps. Nonetheless, Apple and Google have reviewed and approved the TikTok app for inclusion in your respective app stores. Indeed, statistics show that TikTok has been downloaded in the U.S. from the Apple App Store and the Google Play Store nearly 19 million times in the first quarter of this year alone. It is clear that TikTok poses an unacceptable national security risk due to its extensive data harvesting being combined with Beijing’s apparently unchecked access to that sensitive data. But it is also clear that TikTok’s pattern of conduct and misrepresentations regarding the unfettered access that persons in Beijing have to sensitive U.S. user data — just some of which is detailed below — puts it out of compliance with the policies that both of your companies require every app to adhere to as a condition of remaining available on your app stores. Therefore, I am requesting that you apply the plain text of your app store policies to TikTok and remove it from your app stores for failure to abide by those terms.

As a reminder, Carr works for the FCC, not the FTC. Nor does Carr work for the Department of Commerce, which was most recently tasked with eradicating TikTok from the United States. While frequent readers will know how much I appreciate a regulator doing their job and making tough demands, I feel Carr’s fury is misplaced and, perhaps, a little disingenuous.

Carr’s letter follows Emily Baker-White’s reporting earlier this month for Buzzfeed News about the virtually nonexistent wall between U.S. user data collected by TikTok and employees at ByteDance, its parent company in China. The concerns, Baker-White writes, centre on claims of persistent backdoors connected to Chinese military or intelligence services which allow access to users’ “nonpublic data”. The ostensible severing of ties between ByteDance and TikTok’s U.S. users is referred to internally as “Project Texas”:

TikTok’s goal for Project Texas is that any data stored on the Oracle server will be secure and not accessible from China or elsewhere globally. However, according to seven recordings between September 2021 and January 2022, the lawyer leading TikTok’s negotiations with CFIUS and others clarify that this only includes data that is not publicly available on the app, like content that is in draft form, set to private, or information like users’ phone numbers and birthdays that is collected but not visible on their profiles. A Booz Allen Hamilton consultant told colleagues in September 2021 that what exactly will count as “protected data” that will be stored in the Oracle server was “still being ironed out from a legal perspective.”

In a recorded January 2022 meeting, the company’s head of product and user operations announced with a laugh that unique IDs (UIDs) will not be considered protected information under the CFIUS agreement: “The conversation continues to evolve,” they said. “We recently found out that UIDs are things we can have access to, which changes the game a bit.”

What the product and user operations head meant by “UID” in this circumstance is not clear — it could refer to an identifier for a specific TikTok account, or for a device. Device UIDs are typically used by ad tech companies like Google and Facebook to link your behavior across apps, making them nearly as important an identifier as your name.

It has become a cliché by now to point out that TikTok’s data collection practices are no more invasive or expansive than those of American social media giants. It is also a shallow comparison. The concerns raised by Carr and others are explicitly related to the company’s Chinese parentage, not simply the pure privacy violations of collecting all that information.

But, you know, maybe they should be worried about that simpler situation. I think Baker-White buried the lede in that big, long Buzzfeed story:

Project Texas’s narrow focus on the security of a specific slice of US user data, much of which the Chinese government could simply buy from data brokers if it so chose, does not address fears that China, through ByteDance, could use TikTok to influence Americans’ commercial, cultural, or political behavior.

This piece is almost entirely about users’ private data being accessible by staff apparently operating as agents of a foreign government; almost none of it is about its algorithm influencing behaviour.1 So it is wild to read, in the first half of this sentence, that a great deal of the piece’s concerns about TikTok collecting user data can be effectively undone if its management clicks the “Add to Cart” button on a data broker’s website. Those are the privacy concerns churning away in the back room. What is happening in the front office?

From Carr’s letter:

In March 2020, researchers discovered that TikTok, through its app in the Apple App Store, was accessing users’ most sensitive data, including passwords, cryptocurrency wallet addresses, and personal messages.

This seems to be a reference to Talal Haj Bakry and Tommy Mysk’s research showing that TikTok and a large number of other popular apps were automatically reading the iOS clipboard — or “pasteboard”, in iOS parlance. It sounds bad, but it is not clear that TikTok actually received any of this pasted data.

There are also many non-insidious reasons why this could have been the case. In a statement to Ars Technica, TikTok said it was related to an anti-spam feature. I am not sure that is believable, but I also do not have a good reason to think it was secretly monitoring users’ copied data when there are other innocent explanations. Google’s Firebase platform, for example, automatically retrieves the pasteboard by default when using its Dynamic Links feature. TikTok probably does not use Firebase, but all I am saying is that pasteboard access is not, in and of itself, a reason to think the worst. At any rate, suspicious pasteboard access is one reason pasting stuff in iOS has become increasingly irritating, and why there is a whole new paste control in iOS 16.
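
For context, reading the pasteboard requires no entitlement or permission prompt. A minimal sketch, and emphatically not TikTok’s actual code, of the pattern Mysk and Haj Bakry observed looks something like this:

```swift
import UIKit

// An app reading the shared pasteboard the moment it is foregrounded. On
// iOS 14 and later this fires the system "pasted from" banner; iOS 16 goes
// further and interposes an explicit paste prompt.
class AppDelegate: UIResponder, UIApplicationDelegate {
    func applicationDidBecomeActive(_ application: UIApplication) {
        // Whatever the user last copied -- a password, a wallet address, a
        // private message -- is readable by any foreground app this way.
        if let copied = UIPasteboard.general.string {
            print("Pasteboard contents: \(copied)")
        }
    }
}
```

That the banner and, later, the paste control had to be added at the system level tells you how commonplace this pattern was.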

Also Carr:

In August 2020, TikTok circumvented a privacy safeguard in Google’s Android operating system to obtain data that allowed it to track users online.

This appears to reference TikTok’s collection of MAC addresses — a behaviour which, while against Google’s policies, is not exclusive to TikTok. It may be more concerning when TikTok does it, but it could obtain the same information from data brokers (PDF).

That is really what this is all about. The problem of TikTok is really the problem of worldwide privacy failures. There are apps on your phone collecting historically unprecedented amounts of information about your online and offline behaviour. There are companies that buy all of this — often nominally de-identified — and many of them offer “enrichment” services that mix information from different sources to create more comprehensive records. Those businesses, in turn, provide those more complex profiles to other businesses — or journalists — which now have the ability to individually identify you. It is often counterproductive to do so, and they often promise they would never do such a thing, but it is completely possible and entirely legal — in the U.S., at least. It should be noted that, while the world is grappling with privacy problems, some of those failures are specific to the United States.
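
The “enrichment” step is mechanically trivial. Here is a minimal sketch, with invented names and data, of how two nominally de-identified datasets combine into something far more revealing:

```swift
import Foundation

// Two feeds from different sellers share nothing but a device identifier.
let locationHistory: [String: [String]] = [        // device ID -> places visited
    "idfa-0001": ["clinic", "pharmacy", "office"],
]
let adInterests: [String: [String]] = [            // device ID -> inferred interests
    "idfa-0001": ["fertility apps", "parenting"],
]

// Joining on the shared identifier yields a profile neither seller offers alone.
var enriched: [String: (places: [String], interests: [String])] = [:]
for (id, places) in locationHistory {
    enriched[id] = (places: places, interests: adInterests[id] ?? [])
}

// Each record is still nominally anonymous -- but one more dataset mapping
// the ID to a name, email address, or street address identifies the person.
```

Every additional dataset folded in this way makes the identifier more valuable and the “de-identification” more fictional.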

One of the concerns Carr enumerated in his letter is TikTok’s collection of “biometric identifiers, including faceprints […] and voiceprints”. When this language was added last year, it seemed to give the company legal cover for features like automatic captions and face filters, both of which involve things that are considered biometric identifiers. But the change was only made to the U.S.-specific privacy policy; TikTok has three. When I looked at them, the U.S. one seemed more permissive than those for Europe or the rest of the world, but I am not a lawyer, and things may have changed since.2

One of the frustrating characteristics of Carr’s letter is that he is, in many ways, completely right — I just wish he had raised these concerns about everything else they apply to. From the perspective of a non-American, his concerns about intrusive surveillance reflect those I have about my data being stored under the control of American companies operating under American laws. Sure, Canada is both an ally and a participant in the Five Eyes group. But it is hard to be reassured by that when the U.S. has lost its moral high ground by wiretapping allies and entire countries.

It is worse that an authoritarian surveillance state may be doing the snooping. But the moral and ethical problems are basically the same, so it is hard to read even the most extreme interpretation of Carr’s letter without sensing some amount of hypocrisy. The U.S. decided not to pass adequate privacy legislation at any point in the past fifteen years of accelerating intrusive practices — by tech companies, internet service providers, ad networks, and the data brokers tying them all together — and it has its own insidious and overreaching intelligence practices. Even if Carr has a point, TikTok is not the problem; it is one entity taking advantage of a wildly problematic system.


  1. On the question of influence, there is room for nuance. A response to that is a whole different article. ↥︎

  2. TikTok is far from the only company to have region-specific privacy policies. I would love to see a legal analysis and comparison. ↥︎

Jia Tolentino, the New Yorker:

If you become pregnant, your phone generally knows before many of your friends do. The entire Internet economy is built on meticulous user tracking — of purchases, search terms — and, as laws modelled on Texas’s S.B. 8 proliferate, encouraging private citizens to file lawsuits against anyone who facilitates an abortion, self-appointed vigilantes will have no shortage of tools to track and identify suspects. (The National Right to Life Committee recently published policy recommendations for anti-abortion states that included criminal penalties for anyone who provides information about self-managed abortion “over the telephone, the internet, or any other medium of communication.”) A reporter for Vice recently spent a mere hundred and sixty dollars to purchase a data set on visits to more than six hundred Planned Parenthood clinics. Brokers sell data that make it possible to track journeys to and from any location — say, an abortion clinic in another state. In Missouri, this year, a lawmaker proposed a measure that would allow private citizens to sue anyone who helps a resident of the state get an abortion elsewhere; as with S.B. 8, the law would reward successful plaintiffs with ten thousand dollars. The closest analogue to this kind of legislation is the Fugitive Slave Act of 1793.

Two data brokers, Safegraph and Placer.ai, said they removed Planned Parenthood visits from their data sets. They could reverse that decision at any time, and there is nothing preventing another company from offering its own package of users seeking a form of healthcare that is now illegal in a dozen states. People have little choice about which third-party providers receive data from the apps and services they use. Anyone using a period-tracking app is at risk of that data being subpoenaed; while some vendors say they do not pass health records to brokers, some of those same apps were found to be inadvertently sharing records with Facebook.

If the U.S. had more protective privacy laws, it would not make today’s ruling any less of a failure to uphold individuals’ rights in the face of encroaching authoritarian policies. But it would make it a whole lot harder for governments and those deputized on their behalf to impose their fringe views against medical practitioners, clinics, and people seeking a safe abortion.

Muyi Xiao, Paul Mozur, Isabelle Qian, and Alexander Cardia of the New York Times put together a haunting short documentary about the state of surveillance in China. It shows a complete loss of privacy, in which any attempt to maintain one’s sense of self is regarded as suspicious. From my limited perspective, I cannot imagine making such a fundamental sacrifice.

This is why it is so important to match the revulsion we feel over things like that Cadillac Fairview surreptitious facial recognition incident or Clearview AI — in its entirety — with strong legislation. These early-stage attempts at building surveillance technologies that circumvent legal processes forecast an invasive future for everyone.