Search Results for: data broker

Kashmir Hill, New York Times:

But in other instances, something much sneakier has happened. Modern cars are internet-enabled, allowing access to services like navigation, roadside assistance and car apps that drivers can connect to their vehicles to locate them or unlock them remotely. In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis.

[…]

After LexisNexis and Verisk get data from consumers’ cars, they sell information about how people are driving to insurance companies. […]

If this subject feels familiar to you, you may have read Mozilla’s September report about mass data collection. At the time, I wrote that it was possible a car’s privacy policy suggests a greater amount of data collection than is actually occurring. (As an aside, how messed up is it that a car has a privacy policy?) This is something I felt compelled to note, as one of the most circulated disclaimers was Nissan’s permission to collect drivers’ “sexual activity” and “genetic information”, something Mozilla noted was possibly boilerplate. There was lots else in Mozilla’s report I found credible, but this clause in particular read to me like a standard cover-your-ass contract thing — perhaps a template that a law office would keep on hand. While it is not impossible Nissan could, somehow, inadvertently collect this kind of information, it seems unlikely it is actually trying to do so. Why would Nissan want it in the first place?

What Hill’s investigation found is more plausible and understandable — not only because Hill has evidence, but also because everybody involved has a clear financial incentive. Insurance companies can use this information to fiddle with their rates or entirely decline to serve someone. Data brokers and automakers each get paid. Car dealers also get financial bonuses for signing people up — perhaps without their full knowledge.

There is also an obvious mechanism for collecting this information: the car itself knows how you accelerate and brake, and how fast you drive. No comparably obvious pathway was present in the Nissan example.
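To make that pathway concrete, here is a minimal sketch of the kind of driving-behaviour event a connected-car service might record and forward into a driving-score program. Every field name here is hypothetical; it illustrates the concept, not any automaker’s actual telematics format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DrivingEvent:
    """A hypothetical driving-behaviour record of the sort a connected-car
    service could forward to a broker-run driving-score program."""
    vehicle_id: str        # vehicle or account identifier
    timestamp: str         # when the event occurred, ISO 8601
    event_type: str        # e.g. "hard_brake", "rapid_accel", "speeding"
    speed_kmh: float       # speed at the time of the event
    peak_accel_ms2: float  # peak acceleration (negative while braking)

# Events like this, aggregated over trips, are all an insurer needs to
# score a driver; no extra hardware is required beyond the car itself.
event = DrivingEvent(
    vehicle_id="example-vehicle-001",
    timestamp=datetime(2024, 3, 1, 8, 14, tzinfo=timezone.utc).isoformat(),
    event_type="hard_brake",
    speed_kmh=72.0,
    peak_accel_ms2=-4.8,
)
print(json.dumps(asdict(event), indent=2))
```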

If drivers were fully aware of how their behaviour behind the wheel was disclosed, and there were adequate privacy laws restricting its use to insurance purposes, I think that would be a fair trade-off. I would welcome my insurance provider virtually riding along; my one redeeming quality is that I drive like a saint. But laundering this arrangement through data brokers under the guise of driver aides is skeezy in the same way this sort of thing is skeezy everywhere: data is valuable, and there are far too many avenues to quietly monetize it.

Update: General Motors has apparently stopped sharing OnStar data with LexisNexis and Verisk.

Kalhan Rosenblatt and Kyle Stewart, NBC News:

A bill that would force TikTok’s Chinese owner ByteDance to divest the popular social media company passed a crucial vote Thursday, as the company sought to rally users to its defense with a call to action that flooded some congressional offices with phone calls.

The House Energy and Commerce Committee voted unanimously to pass the bill.

On Thursday, TikTok sent push notifications to some of its users to encourage them to call their representatives and ask them to vote against the bill. It displayed text that read “Stop a TikTok shutdown” in big, white letters before urging users to “speak up now.”

Apparently, representatives did not much appreciate having a bunch of people decades younger calling their offices and expressing their opinion on this legislation, in a manner not dissimilar from campaigns by Airbnb and Uber.

The introduction of this legislation makes no secret of its specific targets: ByteDance and TikTok, and the platform’s data collection practices and opaque algorithmic feed. However, the bill (PDF) itself appears to apply more broadly to large internet companies operated by a “foreign adversary country”, excluding travel and product review websites for some reason.1

Even so, this proposal has many of the same problems as the last time I wrote about a TikTok ban about a year ago. Instead of attempting to protect users’ private data generally, it is trying to wall off select countries and, therefore, is subject to the same caveats as that recent Executive Order. Also, U.S. lawmakers love to paint TikTok’s recommendation engine as some kind of Chinese mind control experiment, but this should not be interpreted as an objection to political influence in social media feeds more generally. After all, U.S. social media companies have eagerly exported customized feeds worldwide, driven by opaque American systems. Obviously, the U.S. is happier projecting its own power than letting China do the same, but it would be a mistake to assume lawmakers are outraged by the idea of foreign influence as a concept.

During tonight’s State of the Union address, U.S. President Joe Biden said it was a priority to “pass bipartisan privacy legislation to protect our children online”. It is unclear what this is referencing.


  1. Unfortunately, there is not a cool forced acronym in the title:

    This Act may be cited as the ‘‘Protecting Americans from Foreign Adversary Controlled Applications Act’’.

    Or “PAFACA” for short, I suppose. ↥︎

U.S. President Joe Biden today signed an executive order, previously covered, which intends to limit the sale and distribution of Americans’ sensitive data to “countries of concern”:

To address this threat and to take further steps with respect to the national emergency declared in Executive Order 13873, the order authorizes the Attorney General, in coordination with the Secretary of Homeland Security and in consultation with the heads of relevant agencies, to issue, subject to public notice and comment, regulations to prohibit or otherwise restrict the large-scale transfer of Americans’ personal data to countries of concern and to provide safeguards around other activities that can give those countries access to sensitive data. […]

According to a fact sheet (PDF) from the U.S. Department of Justice, six countries are being considered for restrictions: “China (including Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela”. The sensitive data which will be covered includes attributes like a person’s name, their location, and health and financial information.

This sounds great in theory, but it will be difficult to enforce in practice as data brokers operating outside the U.S. will not have the same restrictions. That is not to say it is useless. However, it is not as effective as creating conditions hostile to this kind of exploitation to begin with. You should not have to worry that your precise location is being shared with a data broker somewhere just because you checked the weather, nor should you need to be extremely diligent in reviewing the specific policies of each app or website you visit.

See Also: Dell Cameron, Wired.

Byron Tau, in an excerpt from his new book “Means of Control”, as published in Wired with a clarification in brackets by me:

Initially, PlanetRisk was sampling data country by country, but it didn’t take long for the team to wonder what it would cost to buy the entire world. The sales rep at UberMedia provided the answer: For a few hundred thousand dollars a month, the company would provide a global feed of [the location of] every phone on earth that the company could collect on. The economics were impressive. For the military and intelligence community, a few hundred thousand a month was essentially a rounding error — in 2020, the intelligence budget was $62.7 billion. Here was a powerful intelligence tool for peanuts.

Locomotive, the first version of which was coded in 2016, blew away Pentagon brass. One government official demanded midway through the demo that the rest of it be conducted inside a SCIF, a secure government facility where classified information could be discussed. The official didn’t understand how or what PlanetRisk was doing but assumed it must be a secret. A PlanetRisk employee at the briefing was mystified. “We were like, well, this is just stuff we’ve seen commercially,” they recall. “We just licensed the data.” After all, how could marketing data be classified?

Government officials were so enthralled by the capability that PlanetRisk was asked to keep Locomotive quiet. It wouldn’t be classified, but the company would be asked to tightly control word of the capability to give the military time to take advantage of public ignorance of this kind of data and turn it into an operational surveillance program.

In the where are they now? vein, UberMedia was acquired by Near, a name you might recognize from recent coverage of how its data was used to target visitors to abortion clinics. Sen. Ron Wyden has requested (PDF) an investigation from the FTC and SEC; the former has been on a roll settling data broker and privacy violation cases.

Alfred Ng, Politico:

A company allegedly tracked people’s visits to nearly 600 Planned Parenthood locations across 48 states and provided that data for one of the largest anti-abortion ad campaigns in the nation, according to an investigation by Sen. Ron Wyden, a scope that far exceeds what was previously known.

[…]

Wyden’s letter asks the Federal Trade Commission and the Securities and Exchange Commission to investigate Near Intelligence, a location data provider that gathered and sold the information. The company claims to have information on 1.6 billion people across 44 countries, according to its website.

Scrutiny over Near Intelligence first began at the Markup before the Wall Street Journal reported how its data was used for this ad campaign.

Data brokers like Near provide the critical link that allows precise targeting for ad campaigns like this one. People are overwhelmingly concerned about the exploitation of their private data, yet have little understanding of how it works. It is hard to blame anyone for finding this industry impenetrable. That makes it easier for data brokers like Near to dampen even the most modest attempts at restricting their business and, because regulators have limited legal footing on privacy grounds, they must resort to finding procedural infractions. It is like Al Capone’s imprisonment on tax offences.

An effective privacy framework would make it more difficult for third parties to collect users’ data, would limit its use, and would require its destruction after it has served its purpose. Unfortunately, a policy like that would also destroy the data broker industry, sharply curtail Silicon Valley advertising giants, and limit intelligence gathering efforts. So, instead, users must nominally consent and pretend they — we — have meaningful control.

Suzanne Smalley, the Record:

An Idaho federal judge on Saturday ruled that a Federal Trade Commission (FTC) enforcement action against the data broker Kochava — which the agency asserts sells vast amounts of non-anonymized data belonging to millions of people — may continue, a reversal of a prior ruling to dismiss the case.

Privacy advocates consider the court decision to be significant for several reasons, including that the case is the FTC’s first against a geolocation data broker to be fought in court. The decision also lays the foundation for a widely anticipated FTC rulemaking on commercial surveillance, which could further limit data brokers’ activities.

The FTC, under Lina Khan, is scoring victories for consumers worldwide by beating back data brokers like Kochava and X-Mode, since those brokers cannot be certain they are only collecting data about U.S. customers. If the FTC prevails in this suit, it could put significant restrictions on this industry based on the fundamental principle that exposing private data inherently subjects people to potential harm.

Joseph Cox, 404 Media (this page may be login-walled, which 404 justifies for business reasons):

Hundreds of thousands of ordinary apps, including popular ones such as 9gag, Kik, and a series of caller ID apps, are part of a global surveillance capability that starts with ads inside each app, and ends with the apps’ users being swept up into a powerful mass monitoring tool advertised to national security agencies that can track the physical location, hobbies, and family members of people to build billions of profiles, according to a 404 Media investigation.

[…]

Patternz’s marketing material explicitly mentions real time bidding. This is where companies in the online ad industry try to outbid one another to have their ad placed in front of a certain type of user. But a side effect is that companies, including surveillance firms, can obtain data on individual devices such as the latitude and longitude of the device. Patternz says it is analyzing data from various types of ad formats, including banner, native, video, and audio.

It is important to be cautious about the claims made by any company, especially one which says it operates at unprovable scale and markets itself to win rich government contracts. It does not seem possible to know for sure whether Patternz really processes ninety terabytes of data daily (PDF), for example, but the company claims it creates a direct link between online advertising networks and global surveillance for intelligence agencies. It does not sound far-fetched.
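To make the mechanism Cox describes a little more concrete, here is a hedged sketch of the kind of bid request an ad exchange broadcasts to prospective bidders during real-time bidding. It is loosely modelled on the OpenRTB format; the app, identifiers, and coordinates are invented, and a real request carries many more fields.

```python
import json

# A simplified real-time bidding request, loosely modelled on OpenRTB.
# Every party receiving bid requests sees fields like these whether or not
# it wins the auction, which is how firms listening to the exchange can
# obtain a device's precise location and advertising identifier.
bid_request = {
    "id": "auction-0001",
    "app": {
        "bundle": "com.example.flashlight",  # invented app identifier
        "name": "Example Flashlight",
    },
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
        "os": "iOS",
        "geo": {
            "lat": 51.0447,    # latitude of the device
            "lon": -114.0719,  # longitude of the device
            "type": 1,         # GPS-derived, in OpenRTB's convention
        },
    },
    "user": {"id": "example-user-123"},
}

print(json.dumps(bid_request, indent=2))
```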

Cox’s story builds upon reports published in November by the Irish Council for Civil Liberties: one regarding Europe (PDF), and a slightly different one focused on the U.S. (PDF). Both of those reports cite exploitations of real-time bidding beyond Patternz. All of these stories paint a picture of an advertising system which continues to ingest huge amounts of highly personal, real-time information that is then purchased by spooks. It is not only agencies nominally accountable to the public monitoring the globe with a sweeping, pervasive, all-seeing eye; private businesses are in this racket too, all because of how we are told which soap and lawn care products we ought to buy.

Even if you believe targeted advertising is a boon for publishers — something which seems increasingly hard to justify — it has turned the open web into the richest and most precise spyware the world has ever known. That is not the correct trade-off.

Probably worth keeping an eye on a case in California’s Northern District, filed in 2021, which alleges the privacy problems of Google’s real-time bidding system amount to a contract breach.

Riley Griffin, Bloomberg:

The administration plans to soon unveil the new executive order, which will direct the US Attorney General and Department of Homeland Security to issue new restrictions on transactions involving data that, if obtained, could threaten national security, according to three people familiar with the matter, who asked not to be named as the details are still private.

The draft order focuses on ways that foreign adversaries are gaining access to Americans’ “highly sensitive” personal data — from genetic information to location — through legal means. That includes obtaining information through intermediaries, such as data brokers, third-party vendor agreements, employment agreements or investment agreements, according to a draft of the proposed order.

Like the X-Mode settlement earlier this year, this does not reflect a principled stance on data brokers. The Biden administration has not reached the conclusion that this shady industry trading barely-legal data is terrible for Americans’ privacy and should be abolished. Its objection to the ways in which this information can be used by political adversaries is real, sure, but it can still be made available if passed through intermediaries. There is no reliable way of geofencing the kind of data sold by these brokers; I have found my own (Canadian) personal information in databases of brokers who insist they only hoard details about U.S. citizens.

The only way to effectively restrict this trade is to reduce the amount of data that can be collected. Cut it off at the source.

In 2020, Joseph Cox of Vice published an investigation into HYAS, explaining how it received precise location data from X-Mode; I linked to this story at the time. The latter company, now Outlogic, obtained that data from, according to Cox’s reporting, an SDK embedded “in over 400 apps [which] gathers information on 60 million global monthly users on average”. It sold access to that data to marketers, law enforcement, and intelligence firms. Months later, Apple and Google said apps in their stores would be prohibited from embedding X-Mode’s SDK.

Even in the famously permissive privacy environment of the United States, it turns out some aspects of the company’s behaviour could be illegal and, in 2022, the FTC filed a complaint (PDF) alleging seven counts of “unfair and deceptive” trade. Today, the Commission has announced a settlement.

Lesley Fair of the FTC:

[…] Among other things, the proposed order puts substantial limits on sharing certain sensitive location data and requires the company to develop a comprehensive sensitive location data program to prevent the use and sale of consumers’ sensitive location data. X-Mode/Outlogic also must take steps to prevent clients from associating consumers with locations that provide services to LGBTQ+ individuals or with locations of public gatherings like marches or protests. In addition, the company must take effective steps to see to it that clients don’t use their location data to determine the identity or location of a specific individual’s home. And even for location data that may not reveal visits to sensitive locations, X-Mode/Outlogic must ensure consumers provide informed consent before it uses that data. Finally, X-Mode/Outlogic must delete or render non-sensitive the historical data it collected from its own apps or SDK and must tell its customers about the FTC’s requirement that such data should be deleted or rendered non-sensitive.

This all sounds good — it really does — but a closer reading of the reasoning behind the consent order (PDF) leaves a lot to be desired. Here are the seven counts from the original complaint (linked above) as described by the section title for each:

  • “X-Mode’s location data could be used to identify people and track them to sensitive locations”

  • “X-Mode failed to honour consumers’ privacy choices”

  • “X-Mode failed to notify users of its own apps of the purposes for which their location data would be used”

  • “X-Mode has provided app publishers with deceptive consumer disclosures”

  • “X-Mode fails to verify that third-party apps notified consumers of the purposes for which their location data would be used”

  • “X-Mode has targeted consumers based on sensitive characteristics”

  • “X-Mode’s business practices cause or are likely to cause substantial injury to consumers”

These are not entirely objections to X-Mode’s sale of location data in gross violation of consumers’ privacy. These are mostly procedural violations, which you can see more clearly in the analysis of the proposed order (PDF). The first and sixth counts concern harms to protected classes; the second is an allegation of data collection after users had opted out. But the other four are all related to providing insufficient notice or consent, which is the kind of weak justification that illustrates the boundaries of U.S. privacy law. Meaningful privacy regulation would not allow the exploitation of real-time location data even if a user had nominally agreed to it. Khan’s FTC is clearly working with the legal frameworks that are available, not the ones that are needed.
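One part of the order worth making concrete is the requirement to prevent clients from associating consumers with sensitive locations and to render historical data non-sensitive. Here is a minimal sketch of what such a filter could look like, with invented coordinates and an arbitrary suppression radius; it illustrates the concept only, not what X-Mode/Outlogic is actually required or able to implement.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical list of sensitive places whose visits must be suppressed;
# the coordinates are invented for this sketch.
SENSITIVE_PLACES = [
    (51.046, -114.070),  # e.g. a clinic
    (51.050, -114.080),  # e.g. a place of worship
]
RADIUS_M = 250  # suppression radius, an arbitrary choice for the sketch

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def scrub(pings):
    """Drop any ping that falls within RADIUS_M of a sensitive place."""
    return [
        p for p in pings
        if all(distance_m(p["lat"], p["lon"], slat, slon) > RADIUS_M
               for slat, slon in SENSITIVE_PLACES)
    ]

pings = [
    {"device": "device-a", "lat": 51.0461, "lon": -114.0702},  # near a clinic
    {"device": "device-a", "lat": 51.1000, "lon": -114.2000},  # elsewhere
]
print(scrub(pings))  # only the second ping survives
```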

Sen. Ron Wyden’s office, which ran an investigation into X-Mode’s practices, is optimistic with reservations. Wyden correctly observes that this should not be decided on a case-by-case basis; everyone deserves a minimum standard of privacy. Though this post and case are U.S.-focused, that expectation holds worldwide, and we ought to pass much stricter privacy laws here in Canada.

A little over a year ago, the U.S. Federal Trade Commission sued Kochava, a data broker, for doing things deemed too creepy even for regulators in the United States. In May, a judge required the FTC to narrow its case.

Ashley Belanger, Ars Technica:

One of the world’s largest mobile data brokers, Kochava, has lost its battle to stop the Federal Trade Commission from revealing what the FTC has alleged is a disturbing, widespread pattern of unfair use and sale of sensitive data without consent from hundreds of millions of people.

US District Judge B. Lynn Winmill recently unsealed a court filing, an amended complaint that perhaps contains the most evidence yet gathered by the FTC in its long-standing mission to crack down on data brokers allegedly “substantially” harming consumers by invading their privacy.

[…]

Winmill said that for now, the FTC has provided enough support for its allegations against Kochava for the lawsuit to proceed.

And what a complaint it is. Even with the understanding that its claims are unproven and many are based on Kochava’s marketing documents, it is yet another reminder of how much user data is captured and resold by brokers like these with virtually no oversight or restrictions.

I noticed something notable on page 31 of the complaint. The FTC shows a screenshot of an app with a standard iOS location services consent dialog overtop. The name of the app is redacted, but it appears to be Ibotta based on a screenshot in this blog post. The FTC describes this as a “consent screen that Kochava provided”, and later says it is “not a Kochava app, nor is Kochava mentioned anywhere on the consent screen”. The FTC alleges location data collected through this app still made its way to Kochava without users’ awareness. While the FTC muddles the description of this consent screen, it is worth mentioning that a standard iOS consent screen appears to be, in this framing, inadequately informative.

Earlier this month, the Wall Street Journal published an article authored by four reporters — Byron Tau, Andrew Mollica, Patience Haggin, and Dustin Volz — explaining how data collected by ad tech ends up in the hands of U.S. government agencies. They cited the case of Near Intelligence, which received data from apps like Life360 and then sold that data through intermediaries to U.S. military and intelligence.

I found this story via an Electronic Frontier Foundation post, which calls it an “investigation from the Wall Street Journal” which “identified a company called Near Intelligence”. An earlier version of the story referred to it as a “WSJ News Exclusive”.

But this was not a Journal investigation nor an exclusive — not really. The Journal did not break the news of Near Intelligence, it was not the first to report on Life360’s privacy violating side hustle, and it did not discover links between Near and Life360. All of those stories were first reported by two journalists at the Markup, Jon Keegan and Alfred Ng, in 2021. That September, they wrote about Near Intelligence in a story about data brokers more generally, the first instance I can find of an article where any journalist scrutinized the company. Then, in December, those same reporters broke the story of Life360 selling location data collected from its users and feeding it to the data broker industry, including Near Intelligence. Because of the outcry over the latter story, Life360 said it would reduce location data sales.

None of this solid journalism is acknowledged in the Journal story. There is not one link or credit to the Markup despite repeated references to the results of its investigations. To be fair, the four reporters for the Journal did add plenty of their own: they obtained internal emails showing senior leadership at Near was fully aware it was selling data without permission, among other added colour and context. But Keegan and Ng at the Markup laid the groundwork for this story and ought to be recognized.

Jen Caltrider, Misha Rykov, and Zoë MacDonald, of Mozilla:

Car makers have been bragging about their cars being “computers on wheels” for years to promote their advanced features. However, the conversation about what driving a computer means for its occupants’ privacy hasn’t really caught up. While we worried that our doorbells and watches that connect to the internet might be spying on us, car brands quietly entered the data business by turning their vehicles into powerful data-gobbling machines. Machines that, because of all those brag-worthy bells and whistles, have an unmatched power to watch, listen, and collect information about what you do and where you go in your car.

All 25 car brands we researched earned our *Privacy Not Included warning label — making cars the official worst category of products for privacy that we have ever reviewed.

General Motors announced earlier this year that its cars will no longer support CarPlay. The objective, according to Brian Sozzi of Yahoo Finance, is that GM is “aiming to collect more of its own data to not only better understand drivers, but to pad its longer-term profit margins”. According to Mozilla’s researchers, GM’s major brands — including Chevrolet and GMC — are among the worst of this very bad bunch.

Mozilla has supplementary articles about how automakers collect and use personal data, and they are worth a read. It is entirely possible these privacy policies reflect an overly broad approach, that cars do not actually collect vast amounts of personal information, and that the data brokers who have partnered with automakers are marketing themselves more ambitiously than they are able to deliver. But is that better? Automakers either collect vast amounts of private information which they share with data brokers and use for targeted advertising efforts, or they are lying and only wish they were collecting and sharing vast amounts of private information.

Joseph Cox at the newly launched 404 Media:

This is the result of a secret weapon criminals are selling access to online that appears to tap into an especially powerful set of data: the target’s credit header. This is personal information that the credit bureaus Experian, Equifax, and TransUnion have on most adults in America via their credit cards. Through a complex web of agreements and purchases, that data trickles down from the credit bureaus to other companies who offer it to debt collectors, insurance companies, and law enforcement.

A 404 Media investigation has found that criminals have managed to tap into that data supply chain, in some cases by stealing former law enforcement officer’s identities, and are selling unfettered access to their criminal cohorts online. The tool 404 Media tested has also been used to gather information on high profile targets such as Elon Musk, Joe Rogan, and even President Joe Biden, seemingly without restriction. 404 Media verified that although not always sensitive, at least some of that data is accurate.

[…]

“It should absolutely not be allowed,” Rob Shavell, CEO of DeleteMe said of credit bureaus feeding credit header data to wider industries. Of all the entities that are the root cause of this data, “the credit bureaus are number one,” Shavell added. “They are the ones that should be subject to the strictest compliance and ultimately be held to a higher privacy standard by the federal government and by state governments than they are being,” he said.

The newest part of what Cox found appears to be the Telegram bot which automates the lookup based on stolen credentials, but there are hundreds of services which offer more-or-less similar information because of how leaky the credit reporting industry is. You can look on DeleteMe’s site and find a whole bunch of websites which promise results, so long as you pinky promise not to break the law.

Even credit reporting agencies themselves have been prone to leaks, reported as recently as January by Brian Krebs, not to mention that whole Equifax thing.

Tressie McMillan Cottom:

When you get a new mortgage they sell your number and this happens.

Chelsey Cox, CNBC, last week:

The White House on Tuesday held a roundtable examining potentially harmful data broker practices, part of an administration wide push to protect Americans’ privacy in the era of AI.

This, like so many privacy issues, has seemed critically important for decades yet has seen little to no progress. At Techdirt, Karl Bode points out two good reasons for that: the industry is rich, and it creates a workaround to avoid messy things like civil liberties.

Dell Cameron, Wired:

A “must-pass” defense bill wending its way through the United States House of Representatives may be amended to abolish the government practice of buying information on Americans that the country’s highest court has said police need a warrant to seize. Though it’s far too early to assess the odds of the legislation surviving the coming months of debate, it’s currently one of the relatively few amendments to garner support from both Republican and Democratic members.

[…]

Congressional staffers and others privy to ongoing conferencing over privacy matters on Capitol Hill say that regardless of whether the amendment succeeds, the focus on data brokers is just a prelude to a bigger fight coming this fall over the potential sunsetting of one of the spy community’s powerful tools, Section 702 of the Foreign Intelligence Surveillance Act, the survivability of which is anything but assured.

Cameron cites his February story, reporting bipartisan skepticism over the reauthorization of Section 702. But Karoun Demirjian, of the New York Times, is bizarrely framing the abuse of this law and disagreement with its renewal as a “far-right” concern:

Since the program was last extended in 2018, the G.O.P.’s approach to law enforcement and data collection has undergone a dramatic transformation. Disdain for the agencies that benefit from the warrantless surveillance program has moved into the party mainstream, particularly in the House, where Republicans assert that the F.B.I.’s investigations of Mr. Trump were biased and complain of a broader plot by the government to persecute conservatives — including some of those charged for storming the Capitol on Jan. 6, 2021 — for their political beliefs. They argue that federal law enforcement agencies cannot be trusted with Americans’ records, and should be prevented from accessing them.

If you do not read it very carefully, this is a shining profile of a Republican party which now appears to be standing up for privacy rights. But it is not the party with a track record of supporting reform. Rep. Matt Gaetz, of Florida, is cited as a Republican who sees Section 702 as overreaching and will not reauthorize it — though Demirjian notes he did so last time it was up for a vote. At the time, though there was plenty of evidence that surveillance authorized by the law was being routinely abused by intelligence agencies, Gaetz rejected an attempt to introduce new restrictions. Now that Republicans have apparently been the victims of Section 702 violations, they are more on board. As for Democrats?

In recent years, Capitol Hill has welcomed several new Democrats with backgrounds in national security who favor extending the program. But convincing others is a challenge, as most members of the party — including Representative Hakeem Jeffries of New York, the minority leader — have voted against extensions. Even President Biden voted against the law to legalize the program in 2008, when he was a senator.

U.S.-based readers concerned about privacy and surveillance might wish to ensure there is bipartisan consensus on restricting private data purchases and warrantless wiretaps.

For the first time in more than a decade, it truly feels like we are experiencing massive changes in how we use computers, and in how that will change in the future. The ferociously burgeoning industry of artificial intelligence, machine learning, LLMs, image generators, and other nascent inventions has become part of our lives first gradually, then suddenly. The growth of this new industry provides an opportunity to reflect on how it ought to be grown while avoiding problems similar to those which have come before.

A frustrating quality of industries and their representatives is a general desire to avoid scrutiny of their inventions and practices. High technology is no different. They begin by claiming things are too new or that worries are unproven and that, therefore, there is no need for external policies governing their work. They argue industry-created best practices are sufficient to curtail bad behaviour. After a period of explosive growth, as regulators move to corral growing concerns, those same industry voices protest that regulations will kill jobs and destroy businesses. It is a very clever series of arguments which can conveniently be repurposed for any issue.

Eighteen years ago, EPIC reported on the failure of trusting data brokers and online advertising platforms to self-regulate. It compared them unfavourably to the telemarketing industry, which pretended to self-police for years before the Do Not Call list was introduced. At the time, it was a rousing success; unfortunately, regulators were underfunded and failed to keep pace with technological change. Due to overwhelming public frustration with the state of robocalls, the U.S. government began rolling out call verification standards in 2019, and Canadian regulators followed suit. For U.S. numbers, these verification standards will be getting even more stringent just nine days from now.

These are imperfect rules and they are producing mixed results, but they are at least an attempt at addressing a common problem with some success. Meanwhile, a regulatory structure for personal privacy remains elusive. That industry still believes self-regulation is effective despite all evidence to the contrary, as my regular readers are fully aware.

Artificial intelligence and machine learning services are growing in popularity across a wide variety of industries, which makes this a perfect opportunity to create a regulatory structure and a set of ideals for safer development. The European Union has already proposed a set of restrictions based on risk. Some capabilities — like when automated systems are involved in education, law enforcement, or hiring contexts — would be considered “high risk” and subject to ongoing assessment. Other services would face transparency requirements. I do not know if these rules are good but, on their face, the behavioural ideals which the E.U. appears to be constructing are fair. The companies building these tools should be expected to disclose how models were trained and, if they do not do so, there should be consequences. That is not unreasonable.

This is about establishing a set of principles to which new developments in this space must adhere. I am not sure what those look like, but I do not think the correct answer is in letting businesses figure it out before regulators struggle to catch up years later with lobbyist-influenced half-measures. Things can be different this time around if there is a demand and an expectation for doing so. Written and enforced correctly, these regulations can help temper the worst tendencies of this industry while allowing it to flourish.

Brian Fung, CNN:

Five US senators are set to reintroduce legislation Wednesday that would block companies including TikTok from transferring Americans’ personal data to countries such as China, as part of a proposed broadening of US export controls.

The bipartisan bill led by Oregon Democratic Sen. Ron Wyden and Wyoming Republican Sen. Cynthia Lummis would, for the first time, subject exports of US data to the same type of licensing requirements that govern the sale of military and advanced technologies. It would apply to thousands of companies that rely on routinely transferring data from the United States to other jurisdictions, including data brokers and social media companies.

This is an updated version of a bill introduced last June which, itself, was a revision to legislation proposed in April 2021. It represents a real, concrete step the U.S. could take for domestic users which would require all apps to operate within a specific framework of privacy expectations. Nevertheless, after last year’s iteration of the bill, the Center for Strategic and International Studies pointed out that a better option would be to “impose legal boundaries on how all businesses operating in the United States amass, store, and share user information in the first place”.

The U.S. Office of the Director of National Intelligence:

As part of our ongoing transparency efforts to enhance public understanding of the Intelligence Community’s (IC) work and to provide insights on national security issues, ODNI today is releasing this declassified IC report dated January 2022.

This report (PDF) was released Friday and, while its (redacted) authors admit the use of what they call “commercially available information”1 purchased from brokers can be useful in investigations, the tone is occasionally scathing.

Dell Cameron, Wired:

Perhaps most controversially, the report states that the government believes that it can “persistently” track the phones of “millions of Americans” without a warrant so long as it pays for the information. Were the government to simply demand access to a device’s location instead, it would be considered a Fourth Amendment “search” and would require a judge’s signoff. Because the same companies are willing to sell the information — not only to the US government but to other companies as well — the government considers it “publicly available” and therefore “can purchase it.”

It is no secret, the report adds, that it is often trivial “to deanonymize and identify individuals” from data that was packaged as ethically fine for commercial use because it had been “anonymized” first. Such data may be useful, it says, to “identify every person who attended a protest or rally based on their smartphone location or ad-tracking records.” Such civil liberties concerns are prime examples of how “large quantities of nominally ‘public’ information can result in sensitive aggregations.” What’s more, information collected for one purpose “may be reused for other purposes,” which may “raise risks beyond those originally calculated,” an effect called “mission creep.”

While relationships between data brokers and intelligence agencies are not new, the report’s authors acknowledge that an increase in the “volume and sensitivity” of this information in “recent years” presents an opportunity for “substantial harm, embarrassment, and inconvenience”. Its use represents one of the most significant changes in surveillance since the whistleblower disclosures of ten years ago.
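The report’s point about deanonymization is easy to illustrate. Here is a hedged sketch, with entirely made-up pings and an invented address lookup, of how “anonymized” location data can be re-identified by inferring a device’s overnight dwell point; real datasets and techniques are messier, but the principle is the same.

```python
from collections import Counter

# Made-up "anonymized" pings: (device_id, hour_of_day, rounded lat, rounded lon).
pings = [
    ("device-a", 2, 51.045, -114.072),
    ("device-a", 3, 51.045, -114.072),
    ("device-a", 14, 51.078, -114.132),
    ("device-a", 23, 51.045, -114.072),
]

# Hypothetical address-to-resident lookup, the kind of record a broker or
# public registry could supply.
residents = {(51.045, -114.072): "example resident"}

def infer_home(device_pings):
    """Guess a device's home as its most frequent overnight location."""
    overnight = [(lat, lon) for _, hour, lat, lon in device_pings
                 if hour <= 5 or hour >= 22]
    return Counter(overnight).most_common(1)[0][0] if overnight else None

home = infer_home(pings)
print("inferred home:", home)
print("re-identified as:", residents.get(home, "unknown"))
```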

But, while the DNI trumpets its “ongoing transparency efforts” in releasing this report, it would be surprising if anything immediately came of it. Contrary to popular belief, nihilism is not the default position for most people. In surveys in Canada and the United States, people have indicated they care deeply about their privacy — even online. Regulations have been slowly taking effect around the world which more accurately reflect these views. But there remains little national control in the U.S. over the collection and use of private data, either commercially or by law enforcement and intelligence agencies; and, because of the U.S.’ central position in how many of us use the internet, it represents the biggest privacy risk. Even state-level policies — like California’s data broker law — are ineffectual because the onus continues to be placed on individual users to find and remove themselves from brokers’ collections, which is impractical at best.


  1. Because the intelligence apparatus loves acronyms, this is referred to as “CAI” in the report and is — rather predictably — confused with the CIA in at least one instance that I spotted. ↥︎

I received more feedback than I had expected on my recent link to an article about people who routinely — and often permanently — share their live location with friends, and I thought it was worth highlighting here.

A reader sent me this by email, which I am publishing with permission:

[…] I do share my location with a couple of long-time trusted friends. I’m a full-time RVer, and I not only move around from place to place, but spend a lot of time boondocking in the desert or on other public lands. Having my friends able to see my location if necessary makes me feel a bit safer. I know other full-time RVers who do this, but we’re a small minority of the general population.

And, on a similar note, Stuart Breckenridge shares this use case:

It’s common in cycling clubs to share live location should something untoward happen. Garmin/Telegram/Find My (etc.) are all useful for this.

And Nathan Snelgrove:

I also actually know women who share locations with friends and tell them when they’re out with men they don’t know and where they should be on the map. That use case is very real.

Jennings also mentioned that use case in the article I linked to, citing a 2019 TechCrunch piece by Rae Witte, and noted how often families use it to know the whereabouts of their children. All of these responses make sense to me. This angle unfortunately explains the popularity of the Life360 app, which is now being sued for selling the location data of children to third-party data brokers. Safety is such a smart rationale that it is disappointing to see such private data entrusted to garbage businesses with exploitative side hustles.

I received a few other replies for why people use permanent location sharing, too, most commonly within families. One, also from Snelgrove:

My wife and my in-laws all do it. We started doing it together when we would ski together, because it’s so easy to lose each other skiing. But far and away the most common use now is checking to see how far away they are if we know they are coming over for dinner.

From Felix, via email, also published with permission:

Indispensable for one use case: Picking up kids from kindergarten. Spares us the question “got time to pick ’em up today?” As I can see whether my wife has already left the office or is still miles away. Also: I’d rather share my location with her than to feel guilty not seeing her message “where you at?” (Which, to me, feels more intrusive, ironically)

These both feel like good examples of the convenience of sharing locations within a family, which reflects my own use case: when I am making a timing-sensitive dinner, I occasionally check my partner’s location on her way home from work. But neither reflects the apparently common case of sharing with a bunch of friends.

Wil Turner, in 2015, wrote a lovely piece about how the Find My network led to chance encounters with friends while travelling:

I wait on an overhead walkway in the reflected lights of a Las Vegas evening for a friend. We live five hundred miles apart, and are lucky to be briefly so close. He is here with friends from high school, I with some from Houston, some from San Francisco. In a small bar we have a drink and he puts Johnny Cash on the record player. It’s a brief break from the rest of our weekends, which are a brief break from the rest of our lives.

Except in so many ways neither of these are a break, both of our lives are a mishmash of locations and people that we have somehow managed to keep up with for a decade or more. Thanks to jobs, education, and opportunities that take us from one place to another and to technology, from Instagram to Find my Friends, we’re in fact growing more connected to more people.

I guess I need to get out more.

After the Supreme Court of the United States overturned Roe v. Wade last year, a bunch of the corporations which have built their business on knowing the physical locations of people without their knowledge or explicit permission said they would not permit the use of that information for health-related reasons. Google promised it would delete records of visits to abortion providers, domestic violence shelters, and rehab centres; when Accountable Tech checked several months later, it found much of that information was still retained.

Geoffrey Fowler, of the Washington Post, decided to look again:

To test Google’s privacy promise, I’ve been running an experiment. Over the last few weeks, I visited a dozen abortion clinics, medical centers and fertility specialists around California, using Google Maps for directions. A colleague visited two more in Florida.

In about half of the visits, I watched Google retain a map of my activity that looked like it could have been made by a private investigator.

[…]

This didn’t happen every time. After I sat for 15 minutes in the parking lots of two clinics south of San Francisco, Google deleted each from my location history within 24 hours. It did the same for my colleague’s two visits to clinics in Florida.

To state the obvious, about half the time, Google will keep a record of a visit to a sensitive location even though a user might not have been aware it was tracking them, which is better than all the time, but substantially worse than none of the time.

Google is one of the most valuable businesses in the world and cannot reliably avoid retaining a record of a visit to a sensitive location. What about a smaller business similarly built on violating individuals’ privacy?

Byron Tau and Patience Haggin, the Wall Street Journal:

A Midwest antiabortion group used cellphone location data to target online content to visitors of certain Planned Parenthood clinics, according to people familiar with the matter and documents reviewed by The Wall Street Journal.

[…]

Near, the location broker whose data was used to geofence the Planned Parenthood clinics and extract the mobile-phone data, was one among a number of similar companies that faced inquiries from regulators last year over whether its data set could be used for tracking people seeking abortions.

The company told investors it received an inquiry from members of the House of Representatives about abortion tracking in July. “We communicated to the Members that the company doesn’t allow the use of its data for law enforcement or healthcare purposes, including the disclosure of reproductive rights information,” Near said in a filing with the U.S. Securities and Exchange Commission.

That anybody is wasting their time pretending companies with this business model are willing and able to protect users’ privacy would be laughable if it were not so dangerous. This is a solvable problem, but the answer will not be provided by organizations like these.

In August, the FTC sued location broker Kochava over its sale of device-specific location data. The company asked for the suit to be dropped and, this week, it got its wish.

Jessica Lyons Hardcastle, the Register:

An FTC lawsuit against Kochava, alleging the data broker harmed Americans by selling records of their whereabouts, has failed.

That said, a federal court has given the US government agency 30 days to come up with a better legal argument and try again.

If this feels familiar, it may be because a judge rejected the first FTC complaint against Facebook — filed in December 2020 — before agreeing to hear an amended version.

Hardcastle:

In a ruling on Thursday, the federal court agreed with Kochava in that the FTC’s lawsuit didn’t make a strong enough case to prove consumer injury.

“Although the FTC’s first legal theory of consumer injury is plausible, the FTC has not made sufficient factual allegations to proceed,” US District Court Judge Lynn Winmill wrote [PDF]. “To do so, it must not only claim that Kochava’s practices could lead to consumer injury, but that they are likely to do so, as required by the statute.”

This is a frustrating argument. Whatever the questions about whether injury to consumers from Kochava’s sale of personal data is likely or merely possible, there is little awareness of data collection by this specific company. Consumers do not know they are providing their location to Kochava for resale when using a given app. They are increasingly aware of the privacy risks of using smartphones, I am sure, but not of these specific contracts. How should any consumer realistically protect themselves when data brokers are prevalent yet invisible in their lives?

Meanwhile, it is worth noting, the harm of an unregulated market for private data is far from a theoretical concern. Trying to figure out whether it is definitively the data bought from a specific broker’s market which leads to some awful circumstance is a profoundly idiotic pursuit. The only reasonable course of action is a comprehensive prophylactic federal data privacy law.