U.S. President Joe Biden today signed an executive order, previously covered, which is intended to limit the sale and distribution of Americans’ sensitive data to “countries of concern”:

To address this threat and to take further steps with respect to the national emergency declared in Executive Order 13873, the order authorizes the Attorney General, in coordination with the Secretary of Homeland Security and in consultation with the heads of relevant agencies, to issue, subject to public notice and comment, regulations to prohibit or otherwise restrict the large-scale transfer of Americans’ personal data to countries of concern and to provide safeguards around other activities that can give those countries access to sensitive data. […]

According to a fact sheet (PDF) from the U.S. Department of Justice, six countries are being considered for restrictions: “China (including Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela”. The sensitive data which will be covered includes attributes like a person’s name, their location, and health and financial information.

This sounds great in theory, but it will be difficult to enforce in practice as data brokers operating outside the U.S. will not have the same restrictions. That is not to say it is useless. However, it is not as effective as creating conditions hostile to this kind of exploitation to begin with. You should not have to worry that your precise location is being shared with a data broker somewhere just because you checked the weather, nor should you need to be extremely diligent in reviewing the specific policies of each app or website you visit.

See Also: Dell Cameron, Wired.

What people with Big Business Brains often like to argue about the unethical but wildly successful ad tech industry is that it is not as bad as it looks because your individual data does not have any real use or value. Ad tech vendors would not bother retaining such granular details because, they say, the data is beneficial only in a more aggregated and generalized form.

The problem with this argument is that it keeps getting blown up by their demonstrable behaviour.1 For a recent example, consider Avast, an antivirus and security software provider, which installed on users’ computers a web browser toolbar that promised to protect against third-party tracking but, in actual fact, was collecting browsing history for — and you are not going to believe this — third-party tracking and advertising companies on behalf of the Avast subsidiary Jumpshot. The data was supposed to be anonymized but, according to the U.S. Federal Trade Commission, Avast’s “proprietary algorithm” was so ineffective that the six petabytes of browsing history collected between 2014 and 2020 remained revealing. Then, it sold access (PDF):

[…] For example, from May 2017 to April 2019, Jumpshot granted LiveRamp, a data company that specializes in various identity services, a “world-wide license” to use consumers’ granular browsing information, including all clicks, timestamps, persistent identifiers, and cookie values, for a number of specified purposes. […]

One agreement between LiveRamp and Jumpshot stated that Jumpshot would use two services: first, “ID Syncing Services,” in which “LiveRamp and [Jumpshot] will engage in a synchronization and matching of identifiers,” and second, “Data Distribution Services,” in which “LiveRamp will ingest online Client Data and facilitate the distribution of Client’s Data (i.e., data segments and attributes of its users associated with Client IDs) to third-party platforms for the purpose of performing ad targeting and measurement.” These provisions permit the targeting of Avast consumers using LiveRamp’s ability to match Respondents’ persistent identifiers to LiveRamp’s own persistent identifiers, thereby associating data collected from Avast users with LiveRamp’s data.

We know of these allegations because of the FTC’s settlement — though, I should say, the claims have not been proven in court, since Avast instead paid a $16.5 million penalty and said it would not use any of the data it collected “for advertising purposes”. That caveat makes this settlement feel a little incomplete to me. While there are other ways aggregated personal data can be used, like in market research, it does not seem Avast and Jumpshot were all that careful about obtaining consent when this software was first rolled out. When they did ask, the results were predictable (PDF):

Respondents had direct evidence that many consumers did not want their browsing information to be sold to third parties, even when they were told that the information would only be shared in de-identified form. In 2019, when Avast asked users of other Avast antivirus software to opt-in to the collection and sale of de-identified browsing information, fewer than 50% of consumers did so.

I am interpreting “fewer than 50%” as “between 40% and 49%”; if 18% of users had opted in, I expect the FTC would have said “fewer than 20%”. Most people do not want to be tracked. For comparison, this seems to be at the upper end of App Tracking Transparency opt-in rates.

I noted the LiveRamp connection when I first linked to investigations of Avast’s deceptive behaviour, though it seems Wolfie Christl beat me to the punch in December 2019. Christl also pointed out Jumpshot’s supply of data to Lotame, something the FTC also objected to. LiveRamp’s whole thing is resolving audiences based on personal information, though it says it will not return this information directly. Still, this granular identity resolution is not the kind of thing most people would like to participate in. Even if they consent, it is unclear if they are fully aware of the consequences.
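To make the mechanics concrete, here is a minimal sketch of how an ID sync of the sort described in the settlement could work in principle. Every identifier, record, and audience segment below is invented for illustration; this is not LiveRamp’s or Jumpshot’s actual system.

```python
# Hypothetical sketch of an "ID sync": two companies match their own
# persistent identifiers through a shared table so that one firm's
# behavioural data can be attached to the other's identity profiles.
# Every identifier, URL, and segment here is invented.

browsing_log = [  # broker A's "anonymized" clickstream, keyed by device ID
    {"device_id": "dev-481", "timestamp": 1561000000,
     "url": "https://example.com/mortgage-rates"},
    {"device_id": "dev-481", "timestamp": 1561000300,
     "url": "https://example.com/oncology-clinic"},
]

id_sync_table = [  # the sync: broker A's IDs matched to broker B's IDs
    {"device_id": "dev-481", "partner_id": "prs-9002"},
]

identity_graph = {  # broker B's persistent IDs, resolved to profiles
    "prs-9002": {"household": "h-77", "segments": ["new-parent", "high-income"]},
}

# The join that turns "anonymous" browsing into a targetable profile.
sync = {row["device_id"]: row["partner_id"] for row in id_sync_table}
for click in browsing_log:
    partner_id = sync.get(click["device_id"])
    if partner_id is not None:
        profile = identity_graph[partner_id]
        print(click["url"], "->", partner_id, profile["segments"])
```

Neither dataset needs to contain a name for the combination to be identifying; the join key does all the work.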

This is just one settlement but it helps illustrate the distribution and mingling of granular user data. Marketers may be restricted to larger audiences and it may not be possible to directly extract users’ personally identifiable information — though it is often trivial to do so. But it is not comforting to be told collected data is only useful as part of a broader set. First of all, it is not: there are existing albeit limited ways it is possible to target small numbers of people. Even if that were true, though, this highly specific data is the foundation of larger sets. Ad tech companies want to follow you as specifically and closely as they can, and there are only nominal safeguards because collecting it all is too damn valuable.
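On the “often trivial” point: even a “de-identified” clickstream can re-identify itself, because URLs routinely embed user-specific content. A toy example, with entirely made-up addresses:

```python
# Toy example: spotting URLs in a "de-identified" log that give the
# user away on their own. All addresses are made up.
clicks = [
    "https://example.com/weather",
    "https://social.example/profile/jane-doe-1987/edit",   # only the owner sees /edit
    "https://maps.example/directions?home=51.04,-114.07",  # home coordinates
]

for url in clicks:
    if "/edit" in url or "home=" in url:
        print("re-identifying URL:", url)
```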


  1. Well, and also how weird it is to be totally okay with collecting a massive amount of data with virtually no oversight or regulations so long as industry players pinky promise to only use some of it. ↥︎

Alexander Saeedy and Alexandra Bruell, Wall Street Journal:

Vice Media said it would stop publishing content on its flagship website and plans to cut hundreds of jobs, following a failed effort by owner Fortress Investment Group to sell the embattled digital publisher and its brands.

From the internal memo sent by Bruce Dixon, Vice CEO:

We create and produce outstanding original content true to the Vice brand. However, it is no longer cost-effective for us to distribute our digital content the way we have done previously. Moving forward, we will look to partner with established media companies to distribute our digital content, including news, on their global platforms, as we fully transition to a studio model. As part of this shift, we will no longer publish content on vice.com, instead putting more emphasis on our social channels as we accelerate our discussions with partners to take our content to where it will be viewed most broadly.

The way Vice has “distributed [its] digital content […] previously” is by having a website. That it is no longer “cost-effective” to run one has fuelled rumours that vice.com is about to be shuttered without any real effort at preservation.

This is a real shame; Vice had some of the best privacy and security coverage in the industry. I am sure I have referenced the site’s work at least dozens of times. Its record is imperfect, especially recently, but it has published solid, creative reporting for decades. Four of its former writers founded 404 Media last year, and others have found new gigs. Still, if all these articles disappear from everywhere but the Internet Archive, it will be a deep loss.

Matthew Gault, Vice:

In January of 2022, Harvey Murphy was arrested and thrown in jail while trying to get his driver’s license renewed at a local DMV. According to a $10 million lawsuit Murphy has since filed, a “loss prevention” agent working for a Sunglass Hut retail store used facial recognition software to accuse Murphy of perpetrating an armed robbery at a store in Houston, Texas. In reality, Murphy was more than 2,000 miles away at the time of the robbery.

According to a lawsuit 61-year-old Murphy has filed against Macy’s and Sunglass Hut, “he was arrested and put into an overcrowded maximum-security jail with violent criminals. While in jail trying to prove his innocence, he was beaten, gang-raped, and left with permanent and awful life-long injuries. Hours after being beaten and gang-raped, the charges against him were dropped and he was released.”

Via Tim Cushing, Techdirt:

Law enforcement loves to portray detentions that only last “hours” to be so minimally detrimental to people’s rights as to not be worthy of additional attention, much less the focus of a civil rights lawsuit. But it only takes seconds to violate a right. And once it’s violated, it stays violated, even if charges are eventually dropped.

As usual, nobody is posting the original text of the lawsuit, so I pulled a copy myself.

What happened to Murphy is obviously the product of several failures at a human level. However, it is hard for me to believe any of this would have materialized without a false positive facial recognition match which, allegedly, was produced by EssilorLuxottica and Macy’s, not law enforcement. Even for the regulation-reluctant, this must surely be one area where much stricter oversight and policies should be enforced.

Update: I adjusted the headline for clarity.

In 2020, Joseph Cox of Vice published an investigation into HYAS, explaining how it received precise location data from X-Mode; I linked to this story at the time. The latter company, now Outlogic, obtained that data from, according to Cox’s reporting, an SDK embedded “in over 400 apps [which] gathers information on 60 million global monthly users on average”. It sold access to that data to marketers, law enforcement, and intelligence firms. Months later, Apple and Google said apps in their stores would be prohibited from embedding X-Mode’s SDK.

Even in the famously permissive privacy environment of the United States, it turns out some aspects of the company’s behaviour could be illegal and, in 2022, the FTC filed a complaint (PDF) alleging seven counts of “unfair and deceptive” trade practices. Today, the Commission has announced a settlement.

Lesley Fair of the FTC:

[…] Among other things, the proposed order puts substantial limits on sharing certain sensitive location data and requires the company to develop a comprehensive sensitive location data program to prevent the use and sale of consumers’ sensitive location data. X-Mode/Outlogic also must take steps to prevent clients from associating consumers with locations that provide services to LGBTQ+ individuals or with locations of public gatherings like marches or protests. In addition, the company must take effective steps to see to it that clients don’t use their location data to determine the identity or location of a specific individual’s home. And even for location data that may not reveal visits to sensitive locations, X-Mode/Outlogic must ensure consumers provide informed consent before it uses that data. Finally, X-Mode/Outlogic must delete or render non-sensitive the historical data it collected from its own apps or SDK and must tell its customers about the FTC’s requirement that such data should be deleted or rendered non-sensitive.

This all sounds good — it really does — but a closer reading of the reasoning behind the consent order (PDF) leaves a lot to be desired. Here are the seven counts from the original complaint (linked above) as described by the section title for each:

  • “X-Mode’s location data could be used to identify people and track them to sensitive locations”

  • “X-Mode failed to honour consumers’ privacy choices”

  • “X-Mode failed to notify users of its own apps of the purposes for which their location data would be used”

  • “X-Mode has provided app publishers with deceptive consumer disclosures”

  • “X-Mode fails to verify that third-party apps notified consumers of the purposes for which their location data would be used”

  • “X-Mode has targeted consumers based on sensitive characteristics”

  • “X-Mode’s business practices cause or are likely to cause substantial injury to consumers”

These are not entirely objections to X-Mode’s sale of location data in gross violation of users’ privacy. These are mostly procedural violations, which you can see more clearly in the analysis of the proposed order (PDF). The first and sixth counts are both violations of the rights of protected classes; the second is an allegation of data collection after users had opted out. But the other four are all related to providing insufficient notice or consent, which is the kind of weak justification that illustrates the boundaries of U.S. privacy law. Meaningful privacy regulation would not allow the exploitation of real-time location data even if a user had nominally agreed to it. Khan’s FTC is clearly working with the legal frameworks that are available, not the ones that are needed.

Sen. Ron Wyden’s office, which ran an investigation into X-Mode’s practices, is optimistic with reservations. Wyden correctly observes that this should not be decided on a case-by-case basis; everyone deserves a minimum standard of privacy. Though this post and case is U.S.-focused, that expectation is true worldwide, and we ought to pass much stricter privacy laws here in Canada.

Emily Badger, Ben Blatt, and Josh Katz, New York Times (via Clive Thompson):1

Sometime around 2009, American roads started to become deadlier for pedestrians, particularly at night. Fatalities have risen ever since, reversing the effects of decades of safety improvements. And it’s not clear why.

What’s even more perplexing: Nothing resembling this pattern has occurred in other comparably wealthy countries. In places like Canada and Australia, a much lower share of pedestrian fatalities occurs at night, and those fatalities — rarer in number — have generally been declining, not rising.

This is a fascinating article precisely because of this second quoted paragraph. It has become commonplace to blame increased pedestrian deaths on the growing presence of massive trucks and SUVs, but I am skeptical of that reasoning.

The Times article does not really get into this, but the Canadian and U.S. auto markets are pretty similar, with basically the same vehicles for sale, the same trend of trucks replacing cars, and the same overwhelming preference for vehicles with automatic transmissions instead of those with a clutch pedal. Canadians do drive less than Americans — an average of 15,366 kilometres per year compared to 21,687 kilometres per year in the U.S. — though the latest data I can find for Canada is from 2009. It is rather bizarre to me that the Times ignores such a close comparison after acknowledging it exists, instead choosing to treat the U.S. data in isolation for much of the rest of the article.

We also seem to do the same selfish and shameful things when we drive, though this is where I get a little out of my wheelhouse. Here is what I can find: U.S. drivers get arrested more often for driving under the influence: around 312 times per 100,000 people.2 The rate in Canada is lower, at 228 per 100,000, but it is not like assholes do not drink and drive here. We also legalized marijuana nationally in 2018, but trying to reliably measure drug impairment is considerably more difficult, resulting in a controversial criminal code change.
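The U.S. figure is, as the footnote explains, simple arithmetic on the FBI’s 2019 numbers; spelled out as a quick sanity check:

```python
# DUI arrests per 100,000 people, using the FBI figures cited in the footnote.
us_dui_arrests = 1_024_508
us_population = 328_239_523
print(round(us_dui_arrests / us_population * 100_000))  # 312
```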

Plenty of people also use their phones while driving. I question the reliability of these reports; the Times cites a report from Cambridge Mobile Telematics, a company which produces an “AI-driven platform” for safer driving that “gathers sensor data from millions of IoT devices — including smartphones, proprietary Tags, connected vehicles, dashcams, and third-party devices — and fuses them with contextual data to create a unified view of vehicle and driver behavior”. I have only a vague sense of what that means, and I am not sure I can trust its precise-sounding finding of 2 minutes and 11 seconds per hour of driving time in the U.S. compared to a U.K. average of just 44 seconds.
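For scale, those precise-sounding figures work out to small but clearly different shares of each hour behind the wheel:

```python
# Converting the quoted phone-use figures into shares of an hour of driving.
us_seconds = 2 * 60 + 11   # 2 minutes 11 seconds (U.S.)
uk_seconds = 44            # 44 seconds (U.K.)
print(f"U.S.: {us_seconds / 3600:.1%}")  # 3.6% of driving time
print(f"U.K.: {uk_seconds / 3600:.1%}")  # 1.2% of driving time
```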

I cannot trust the self-reported data of surveys conducted by Travelers Insurance, but if it is even remotely close to accurate, it suggests one reason why there is such a divergence. In the United States, 57% of survey respondents admitted to using a handheld device while driving, compared to just 17% in Canada. There is a similar split in people who read texts or emails — 57% in the U.S. and 21% in Canada — and in phone calls: 80% in the U.S. compared to 48% in Canada. Again, I do not think it is worth reading too much into this or any distracted driving statistic because it is obviously hard to accurately measure what drivers do when they are alone behind the wheel.

National collision data may be more reliable. In the U.S., the NHTSA says (PDF) around 8% of fatal collisions in 2021 were “distraction-affected”. According to Transport Canada, distraction was a “contributing factor” in an estimated 19.7% of fatal collisions the same year. These seem to be reporting the same thing, but the numbers are so different that I am not sure they are comparable. I was going to remove this section because I do not know how to reconcile these numbers, but then I would be left with only that dumb self-reported Travelers survey.

As I noted, the above few paragraphs are out of my lane of expertise. Call this more of a working-it-out-in-public post than an authoritative perspective, because this specific divergence of the safety of pedestrians in Canada and the U.S. — on roads which, as discussed, are fairly similar — is something which has haunted me since I wrote about it in July. I kept seeing articles blaming increasing vehicle size for pedestrian deaths, and it made sense to me: the top of the hood of a stock Chevrolet Silverado is very nearly the same height as the roof of a Volkswagen Golf like the one we have. I am most often a pedestrian and cyclist, and it can be terrifying to cross in front of these massive trucks, even though I am fairly tall. I fully bought this premise in part because I do think trucks and SUVs should have a legislated maximum size, and that most people should really just buy a hatchback or station wagon. I have heard all the excuses for why people in Canada need to daily drive a large vehicle, and they all fall flat. Other countries have dry-wallers and plumbers and larger families and camping holidays, and they all get by without needing to go everywhere in an apartment building on alloy rims.

The proliferation of these massive cars makes life worse for everyone else on the road who does not have one. At night, the headlights of oncoming traffic are pointed at my eye line, which means my eyes compensate and make it impossible to see the dark road ahead. When I am entering an intersection with an SUV parked on either side of the perpendicular road, I cannot see oncoming traffic until I nearly pull into its path. When I drive down the narrow-for-Calgary streets in my area, it is an increasingly tight squeeze when most other cars are massive and their drivers often less confident.

But all of these issues are shared between Canada and the U.S. — and only one of those countries is seeing soaring pedestrian death rates. The Times says this is happening particularly often at night, but we also have night, and commuting, and Daylight Saving Time.

In related news, Apple still says its new and more immersive version of CarPlay will appear in vehicle announcements in “late 2023” which, if I am not mistaken, is approximately now. It still worries me that drivers will soon be surrounded by screens with more information, more open apps, and more to be distracted by. A driver does not need to simultaneously see, as Apple illustrates, the current weather, international clocks, a list of smart home devices, a calendar, and the current song while doing 72 kilometres (45 miles) per hour. They need to look at and around the road — not drunk, not high, not using their phone, and not speeding — and recognize the responsibility of guiding a multi-tonne automobile on roads alongside other people who may not be enrobed in metal.

I know this is a very boring public service announcement but, in fairness, I do not want to die on the grille of a Chevrolet. It is unfortunate that so much else is more exciting than driving, yet in many Canadian and American cities, someone who is unable to focus behind the wheel has no real option of doing their daily tasks by public transit instead.


  1. The passive voice of the Times headline irks me:

    Why Are So Many American Pedestrians Dying at Night?

    Pedestrians are not “dying”. They are actually being killed, and at an alarming rate. ↥︎

  2. The FBI reported 1,024,508 DUI arrests in 2019 in a country of 328,239,523 people. Then I did math. ↥︎

Lorenzo Franceschi-Bicchierai, TechCrunch:

On Friday, genetic testing company 23andMe announced that hackers accessed the personal data of 0.1% of customers, or about 14,000 individuals. The company also said that by accessing those accounts, hackers were also able to access “a significant number of files containing profile information about other users’ ancestry.” But 23andMe would not say how many “other users” were impacted by the breach that the company initially disclosed in early October.

As it turns out, there were a lot of “other users” who were victims of this data breach: 6.9 million affected individuals in total.

The announcement Friday was made in a financial disclosure, and the company updated an old blog post a day after this TechCrunch article was published. According to 23andMe, the information disclosed by the “DNA Relatives” feature will at minimum include a display name derived from one’s (presumably real) name, recent site activity, and “predicted” relationship.

Jason Koebler, 404 Media:

Every few years, I write an article about how it is generally not a good idea to voluntarily give your immutable genetic code to a for-profit company (or any other genetic database, for that matter), and how it is an even worse deal to pay money to do so. It is also not wise or ethical to gift a 23andMe Saliva Collection Kit to your loved ones for Christmas, their birthday, or any other reason.

Give your family and friends the gift of not subjecting their genetics to businesses with a data breach record of, as of writing and I cannot stress this enough, half their customer base.
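That “half” follows from quick arithmetic on the figures quoted above:

```python
# If 14,000 directly accessed accounts are 0.1% of customers, the customer
# base is about 14 million; 6.9 million affected people is roughly half.
directly_accessed = 14_000
total_customers = directly_accessed / 0.001   # 14,000,000
affected = 6_900_000
print(f"{affected / total_customers:.0%}")    # 49%
```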

Update: A very important postscript, via Brian Sutorius. Matthew Cortland:

So what measures has 23andMe announced to mitigate the tremendous harm their negligence has caused? If you guessed, “updating their Terms of Service to force customers – including everyone who has used 23andMe since their first product became available in the United States in 2007 – into binding arbitration” you’d be correct. 23andMe is updating their TOS to strip victims of the company’s negligence of the right to seek justice in a court of law, instead forcing those harmed by 23andMe’s conduct into binding arbitration. […]

Notification of the updated Terms of Service was sent to 23andMe users one day before it disclosed the results of its investigation. If you are a user, there are specific steps you need to follow this month to opt out of binding arbitration. Read Cortland’s post in full for more information.

Recently, you may recall, Elon Musk amplified some antisemitic conspiracy theories on the social media platform he owns, of which he is, notably, the most popular user, and caused widespread outrage. Which conspiracy theory? Which backlash? Well, it depends on how far back you want to look — but you need not rewind the clock very much at all.

David Gilbert, Vice:

Musk was repeating an oft-repeated and widely debunked claim that [George] Soros is attempting to help facilitate the replacement of Western civilization with immigrant populations, a conspiracy known as the Great Replacement Theory.

[…]

Musk also responded to tweets spreading other Soros conspiracy theories, including false claims that Soros, a Holocaust survivor, helped round up Jews for the Nazis, and claims that Soros is somehow linked to the Rothschilds, an entirely separate antisemitic conspiracy theory about Jewish bankers which the Soros conspiracies have largely replaced.

This was from six months ago. I think that qualifies as “recent”. If I were a major advertiser, I would still be hesitant to write cheques today to promote my products in the vicinity of posts like these and others far, far worse.

So that is May; in June, Musk decided to reply to an explicitly antisemitic tweet — an action which, due to Twitter’s design, would have pushed both the reply and the context of the original tweet into some number of users’ feeds.

Which brings us to September.

Judd Legum and Tesnim Zekeria, Popular Information:

Musk quickly lost interest in banning the ADL and began discussing suing the organization. In a series of posts, Musk said the ADL “has been trying to kill this platform by falsely accusing it & me of being anti-Semitic” and “almost succeeded.” He claimed that the ADL was “responsible for most of our revenue loss” and said he was considering suing them for $4 billion. In a subsequent post, he upped the figure to $22 billion.

“To clear our platform’s name on the matter of anti-Semitism, it looks like we have no choice but to file a defamation lawsuit against the Anti-Defamation League … oh the irony!,” Musk said.

The ADL, however, never accused Musk or X of being anti-Semitic. The group reported, correctly, that X was hosting anti-Semitic content and Musk had rolled back efforts to combat hate speech. And the ADL, exercising its First Amendment rights, encouraged advertisers to spend their money elsewhere unless and until Musk changed course. The notion that the ADL, a Jewish group, has the power to force corporations to bend to its will is rooted in anti-Semitic tropes about Jewish power over the business world.

Perhaps you feel like being charitable to Musk, for some reason, and would like to assume that he does not understand the tropes and innuendo with which he has engaged. That seems overly kind to me, and I am impressed you are more willing than I to give him the benefit of the doubt. But it sure seems like Musk took the condemnation of his tweets seriously, as he hosted Benjamin Netanyahu, the prime minister of Israel, in San Francisco in an attempt to smooth things over. How did that go?

Well, on November 15, Musk doubled down.

Lora Kolodny, CNBC:

Musk, who has never reserved his social media posts for business matters alone, drew attention to a tweet that said Jewish people “have been pushing the exact kind of dialectical hatred against whites that they claim to want people to stop using against them.”

Musk replied to that tweet in emphatic agreement, “You have said the actual truth.”

That, and several other things, is a likely explanation for why major advertisers decided to pause or stop spending on the platform. On Friday, Ryan Mac and Kate Conger of the New York Times reported that Twitter may miss up to $75 million in ad revenue this year as a result of these withdrawals; Twitter disputes that number. Some companies have also stopped posting.

Clearly, this is all getting out of hand for Musk. But his big dumb posting fingers have gotten him into trouble before, and he knows just what to do: an apology tour.

Jenna Moon, Semafor:

Elon Musk toured the site of the Oct. 7 massacre by Hamas in southern Israel on Monday, as the billionaire made a wartime visit to the nation amid allegations of antisemitism.

How long are the remaining advertisers on Musk’s platform going to keep propping it up? How many times do they need to see that he is openly broadcasting agreement with disturbing and deeply bigoted views? I selected just the stories with an antisemitic component, and only those from this year; Musk routinely dips his fingers into other extremist views in a way that can most kindly be compared to a crappy edgelord.

I will leave you with the story of what happened when Henry Ford bought the Dearborn Independent.

Aaron Gordon, writing at the remaining shell of Vice:

For the last three months, I’ve been trying to find an answer to a basic question at the heart of this theft wave: Why didn’t the U.S. follow Canada’s lead and mandate immobilizers, too? If it had, either around the same time as Canada or when it considered new regulations in the mid-2010s, the method of stealing Kias and Hyundais widely popularized in online videos would not be possible, as evidenced by the fact that no similar theft wave is occurring north of the border. (Canada is experiencing its own problems with auto thefts, as explained below, but the trend is tied to organized crime and not centered around Kias and Hyundais or engine immobilizers).

I appreciate Gordon looking into this more than I was able to when I wrote about it earlier this year. TikTok and other social media platforms are being scapegoated for not preventing the spread of the technique behind this wave of thefts when it could have been prevented in the first place by regulators and automakers.

Benedict Evans:

Whenever anyone proposes new rules or regulations, the people affected always have reasons why this is a terrible idea that will cause huge damage. This applies to bankers, doctors, farmers, lawyers, academics… and indeed software engineers. They always say ‘no’ and policy-makers can’t take that at face value: they discount it by some percentage, as a form of bargaining. But when people say ‘no’, they might actually mean one of three different things, and it’s important to understand the difference.

This is a good piece categorizing the three types of “no” by industries facing new policies as: an aversion to lawmakers telling them what to do, plausible negative consequences of regulation, and technical impossibilities. But Evans does not give nearly enough weight to how often big industry players and their representatives simply lie. They often claim the effects of new regulations will be of the second or third type when there is no evidence to support their claims.

Corporations lying to get their way is not news, of course. A common thread among the examples cited by Evans is that policies which actually do fall into the categories of causing unintended negative effects or being impossible to achieve are noted as such by truly independent experts.

In 2015, after Uber launched in Calgary, the city proposed reasonable and sensible rules, which Uber claimed were entirely “unworkable” for ride sharing as a genre. Many, including popular media outlets, concurred with Uber and begged the city to fold. But it compromised on only a single rule; everything else was passed, meaning that Uber drivers were subject to the same sorts of regulations as taxi drivers because they do the same job. And guess what? Uber has been happily operating in Calgary ever since.

Apple spent years opposing repair legislation on the basis that people would hurt themselves replacing batteries, and that any state which passed such laws would become a “mecca for bad actors”. That line of argument was echoed by some, only for Apple to now support such legislation — with caveats — despite using exactly the same type of battery it says is dangerous for people to swap themselves.

After multiple reports — including several stories by Reveal in 2019 and 2020 — of serious injuries at Amazon warehouses caused in part by its infamously rigorous quotas, and a general avoidance of workers’ rights, lawmakers in California proposed corrective legislation in early 2021. Lobbyists freaked out. In the interim, Reveal found itself smeared “on background” by Amazon’s public relations team, which tells “outright lies” according to multiple reporters. The legislation was signed into law anyway in 2021. It is certainly too early to know its long-term effects, but injury rates at Amazon facilities fell in 2022, though its rates remain double (PDF) the rate of the rest of the industry.

Evans:

I think the structural problem here, across all three kinds of ‘no’, is that this is pretty new to most of us. I often compare regulation of tech to regulation of cars – we do regulate cars, but it’s complicated and there are many different kinds of question. ‘Should cars have different emissions requirements?’ is a different kind of question to ‘does the tax code favour too much low-density development?’ and both questions are complicated. It’s a lot easier to want less congestion in cities than to achieve it, and it’s a lot easier to worry about toxic content on social media than to solve it, or even agree what ‘solve’ would mean.

But we all grew up with cars. We have a pretty good idea of how roads work, and what gearboxes are, even if we’ve never seen one, and if someone proposed that cars should not come with seats or headlights because that’s unfair competition for third-party suppliers, we could all see the problem. When policy-makers ask for secure encryption with a back door, we do not always see that this would be like telling Ford and GM to stop their cars from crashing, and to make them run on gasoline that doesn’t burn. Well yes, that would be nice, but how? They say ‘no’? Easy – just threaten them with a fine of 25% of global revenue and they’ll build it!

The comparison to regulating cars is apt, though bungled by Evans. In the earliest days, cars killed a lot of people — drivers, passengers, and others. Some manufacturers introduced features like seatbelts, but safety was not an effective sales pitch (PDF). The U.S. federal government responded in 1966 by passing the National Traffic and Motor Vehicle Safety Act, which mandated the installation of various safety features, a year after Ralph Nader published “Unsafe at Any Speed”. Laws were passed to encourage use of seatbelts and discourage drunk driving, and the rate of death and serious injury has fallen even as the sales and use of automobiles have risen. These laws were disputed by automakers and the public, but they worked well.

Ever since, most laws which govern cars have trended toward increasing their efficiency, decreasing their damage to the environment, and improving their safety. These laws have arguably not gone far enough. Because many of the pickup trucks and SUVs which are sold to suburban families who dodge potholes and puddles are very heavy, they are exempt from stricter U.S. fuel economy standards. And I would be filled with regret if I did not remind you of the extensive lobbying orchestrated by automakers over the past century-and-a-bit to make walking across a road a crime, reduce taxes on auto sales, and lots of other things.

Lawmakers should of course be attentive to instances where everyone who knows about a topic is telling them it is impossible to do in the way they are proposing. It is not possible to create encryption which ensures no criminals or rogue states are able to intrude but permits execution of a secret wiretap warrant.

But can you really blame lawmakers when regulations are disputed by industry representatives? It sure does not help when the press and public repeat the myths they have created. If industries want to be regulated more effectively, they need to start by being honest. The press can help by more carefully scrutinizing corporate claims; even conservative, business-focused publications should be able to see that simply opposing regulation by parroting public relations pap is a worthless use of their time and words.

Steve Stecklow and Norihiko Shirouzu, Reuters:

Tesla years ago began exaggerating its vehicles’ potential driving distance – by rigging their range-estimating software. The company decided about a decade ago, for marketing purposes, to write algorithms for its range meter that would show drivers “rosy” projections for the distance it could travel on a full battery, according to a person familiar with an early design of the software for its in-dash readouts.

[…]

Tesla supervisors told some virtual team members to steer customers away from bringing their cars into service whenever possible. One current Tesla “Virtual Service Advisor” described part of his job in his LinkedIn profile: “Divert customers who do not require in person service.”

Such advisors handled a variety of issues, including range complaints. But last summer, Tesla created the Las Vegas “Diversion Team” to handle only range cases, according to the people familiar with the matter.

[…]

Tesla also updated its phone app so that any customer who complained about range could no longer book service appointments, one of the sources said. Instead, they could request that someone from Tesla contact them. It often took several days before owners were contacted because of the large backlog of range complaints, the source said.

A large portion of this report conveys Tesla’s internal decision-making based on the word of a single source. I am a little surprised Reuters decided to publish it.

Scott Case, of Recurrent, which is a service that monitors electric vehicle batteries and provides information to current owners and buyers of used vehicles:

The reality is that the laws of physics apply to Tesla, too – Tesla is not much different than other automakers. When you need to heat and cool your car – and your battery – in hot and cold weather conditions, you can’t drive as far. That impact is substantially lessened when a car is equipped with a heat pump and advanced thermal management, which many newer Teslas (and other cars) are.

It’s also worth noting that it’s not like other manufacturers have perfected accurate range estimates either. Actual range varies according to all kinds of different factors, like speed, temperature or use of climate control. Every other automaker has a different approach to sharing those estimates, but it often tends to be closer to reality than Tesla’s approach.

All electric cars are affected by temperature because people turn on the heater or air conditioner. So, too, are cars with internal combustion engines — the Canadian government estimates fuel consumption is increased by up to 20% because of air conditioners.

But there are chemistry changes which are specific to electric vehicles, and the combined impact on range is notable. Recurrent’s data indicates the Chevrolet Bolt and Ford Mustang Mach-E both get around 65% of their EPA-estimated range in sub-freezing temperatures, while Tesla models get less than half their estimated range. Even at more temperate spring or autumn temperatures, Teslas only give around 60% of the range estimated by the EPA, while it is between 90% and a little over 100% for the Bolt and Mustang. What is unique about the Tesla models is how they consistently display a range of around 90% of the EPA sticker, no matter the conditions.

The Reuters report portrays the range estimate as deliberately misleading in the sense that it is intelligently giving drivers an optimistic number: it uses “algorithms for its range meter that […] show drivers ‘rosy’ projections”. But I think the reality is even worse — it is actually a very dumb estimate which is not adjusted based on any real-world factors. These cars are supposed to be so smart they nearly drive themselves, but they use a range calculation that is the definition of ‘ignorance is bliss’? Choosing to use this misleading estimate is obviously beneficial to Tesla because it is not affected by actual driving. Even in a best-case scenario, the data collected by Recurrent suggests a Tesla’s displayed range is not actually achievable.
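To make the contrast concrete, here is a toy comparison of a static display against real-world range. It is an illustration only, not any automaker’s actual range meter: the 500 kilometre EPA rating is made up, and the achieved fractions are assumptions based on Recurrent’s approximate figures.

```python
# Toy comparison (not any automaker's real algorithm): a static range
# display versus achieved range, using assumed fractions of EPA range.
EPA_RANGE_KM = 500  # made-up sticker rating

def displayed_range_static(epa_km: float) -> float:
    # Static estimate: roughly 90% of the EPA sticker, in all conditions.
    return 0.90 * epa_km

def achieved_range(epa_km: float, temperature_c: float) -> float:
    # Assumed fractions, per Recurrent's reported data for Tesla models:
    # under half of EPA range sub-freezing, about 60% in mild weather.
    fraction = 0.47 if temperature_c < 0 else 0.60
    return fraction * epa_km

for temp in (-10, 15):
    print(f"{temp}°C: displayed ≈ {displayed_range_static(EPA_RANGE_KM):.0f} km, "
          f"achieved ≈ {achieved_range(EPA_RANGE_KM, temp):.0f} km")
```

Under these assumptions, the display overshoots by 150 to 215 kilometres, and by the same amount in every season, which is exactly the pattern Recurrent’s data describes.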

Recurrent’s data also shows why so many electric cars seem to be overbuilt, and please forgive me for this slow-to-realize lightbulb moment. There are plenty of people for whom a car with a 100 kilometre range would be acceptable, and it would make electric cars more affordable.1 But if it gets closer to 50 or 60 kilometres on one charge for several months a year when it is brand new and at its best, that is a severe compromise in the ugly sprawling car-centric cities of Canada and the United States.

Reuters’ source also told the reporters that customers complaining about range problems would be denied an appointment for service. That makes sense: Tesla allegedly knew its range estimates were not a reflection of reality, so there really was nothing wrong with the cars themselves. But it is horrible for customer trust. Just two days ago, Tesla settled a class-action lawsuit which claimed the company would sign a contract for installing solar panels on customers’ homes, then argue their roofs had too many angular bits and increase the cost. This does not appear to be a company that prides itself on service or communication.


  1. Whether there is a buying market for these cars is another matter entirely. Most buyers of pickup trucks and SUVs do not need a large vehicle with those capabilities. But they are routinely the best-selling vehicles in Canada and the U.S. because people buy what they want, not what actually fits their life or their garage. ↥︎

The biggest story in tech for the past fifteen years has been the convergence of a bag full of stuff into a single, pocket-sized, take-everywhere product. From its beginnings on the hips of Wall Street types, it rapidly became the best-selling piece of consumer electronics ever — and it is not even a close race.

I mean, of course it is a success without equal. Many of us can leave our houses with scarcely more than our phone and a set of keys, and the latter is becoming optional, too.

But its Jack-of-all-trades status of course implies it is a master of none. And, as great as a smartphone is, there are still things which other devices do better. That argument was the premise for the introduction of the iPad. It is the reason why I drafted the first notes for this on my phone, but I am currently writing it on a laptop. A smartphone is by no means the best camera you can buy, for example, so it is not uncommon to see people carrying a dedicated camera even if they own a smartphone. I am one of those people.

What if there are other categories for which most people currently find a smartphone useful, but which a dedicated device could do a better job? What if the big story in tech for the next fifteen years — aside from the rise of A.I. — is an undoing of this great convergence, at least in part?

This is not entirely speculative; or, at least, not any more so than the future of tech is in general. The device Humane previewed at TED earlier this year is approximately a standalone version of Siri, for example. Whether it will be a success is a good question, and I have doubts. But some people clearly believe someone would buy one of these things for use in addition to a smartphone, if not to replace one entirely for some people.

So, this is an article of mostly guesswork. I have no confidence in this; let us not even call them “predictions”. But there seems to be something worth exploring here and, since this website has no market swaying powers, I feel totally fine with spending a few hundred words thinking more deeply about this.

Back to Humane. Its product looks like an unbundled and perhaps better personal assistant. Smart speakers are already one example of a device extricated from the confines of the smartphone world, and Humane’s product is effectively one which you can wear, having seemingly similar benefits and restrictions. You cannot watch a movie on one, but you can ask it for nearby recommendations or to translate something. It is a peek at a world seamlessly augmented by high technology.

That future is something which is apparently in the works at every giant computer company. Microsoft released a video in 2008 — you can tell it was 2008 because everything is typeset in Gotham — predicting magic translation glasses by the year 2040. Google actually released augmented reality glasses in 2012 without success. Scaled-back attempts at similar devices have been released by Snap and Meta. The latter is also reportedly working on a more capable product to the point it staked its very identity on its ability to deliver. Apple might be working on some kind of augmented reality glasses as well.

The devices we have today already allow us a taste of an augmented reality experience. It works fine, I suppose. I have used it to place furniture in my living room and try on eyeglasses. I have also used it to plunk a giant skeleton inside my house.

The devices which have been released after the smartphone seem more specialized than ever. Perhaps that is in part because nearly anything looks more specialized than a smartphone, but there are also whole categories of seemingly niche products. Headphones were barely thought of as a device before the craze for wireless earbuds; the market for advanced fitness trackers and smart watches has been booming for years. These were niche markets, yes, until they were not.

This is what got me thinking about this more deeply: these are products which do not need to do everything better all of the time; they are things which can do a lot of things better some of the time, or a handful of things better a lot of the time.

Products with an increased degree of specialization have business justifications, too, since there are more products to sell. It may be very difficult to beat the smartphone in terms of raw sales of another single product, but it is possible to get similar results in the aggregate. It seems like this would benefit tightly integrated businesses, too.

One reason the smartphone is so popular is because it has become possible to make very good phones for not very much money — partly thanks to standardization, partly thanks to components no longer needing to be cutting-edge to be very good, and partly due to exploitative labour practices. As a result, it has become possible for people across income brackets and around the world to use a smartphone. As remarkable as that may be, it is worth remembering technology is not a panacea. Smartphones will not correct the inequality we see in cities or around the world. That said, these devices have been beneficial in developing regions and for individuals of a wide range of incomes. They are the best-selling devices ever created for a reason: they connect just about everyone. People are able to make a living by selling goods through WhatsApp, and can find jobs and services locally.

It therefore seems unlikely to me for the smartphone to disappear in the near future. But, for some, perhaps it becomes increasingly optional. Perhaps the story for them is of less convergence and more specialization. That was an early vision for the Apple Watch. Maybe some of those ideas, while premature, will finally begin to come to fruition in a more meaningful sense for more of us. For what it is worth, I cannot imagine giving up my iPhone but, then again, I could not imagine how truly great a smartphone could be before I saw one.

Chloe Xiang, Vice:

This blog post and OpenAI’s recent actions — all happening at the peak of the ChatGPT hype cycle — is a reminder of how much OpenAI’s tone and mission have changed from its founding, when it was exclusively a nonprofit. While the firm has always looked toward a future where [Artificial General Intelligence] exists, it was founded on commitments including not seeking profits and even freely sharing code it develops, which today are nowhere to be seen.

[…]

Will this AI be shared responsibly, developed openly, and without a profit motive, as the company originally envisioned? Or will it be rolled out hastily, with numerous unsettling flaws, and for a big payday benefitting OpenAI primarily? Will OpenAI keep its sci-fi future closed-source?

This was published February 28, roughly two weeks before GPT-4 was launched.

Ben Schmidt, of Nomic AI, on Twitter:

I think we can call it shut on ‘Open’ AI: the 98 page paper introducing GPT-4 proudly declares that they’re disclosing *nothing* about the contents of their training set.

James Vincent, the Verge:

Speaking to The Verge in an interview, Ilya Sutskever, OpenAI’s chief scientist and co-founder, expanded on this point. Sutskever said OpenAI’s reasons for not sharing more information about GPT-4 — fear of competition and fears over safety — were “self evident”:

“On the competitive landscape front — it’s competitive out there,” said Sutskever. “GPT-4 is not easy to develop. It took pretty much all of OpenAI working together for a very long time to produce this thing. And there are many many companies who want to do the same thing, so from a competitive side, you can see this as a maturation of the field.”

In addition to effort and competition, Sutskever also raises questions about what it would mean for safety if the company were more transparent — something Schmidt pushes back on — while Vincent documents potential legal liability. But are these not foreseeable complications, at least for competition and safety? Why maintain the artifice of the OpenAI non-profit and the suggestive name? A growing problem with technologies like these is their trustworthiness; why not pick a new name that is not, you know, objectively incorrect?

Jason Koebler, Vice:

“For years we’ve had to deal with the fact that an entire copy of our phone lives on a server that’s outside of our control. Now the data on that server is under our control. That’s really all that’s changed here,” Matthew Green, associate professor at Johns Hopkins University, told Motherboard in an online chat. “I think it’s an extremely important development.”

In my tests, the process of setting up Advanced Data Protection was a bit buggy, but if the system delivers what it promises, iCloud’s new security add-on could be a game changer for people who have avoided cloud backup tools due to the lack of end-to-end encryption.

There are two reasons to mistrust cloud storage providers. The first is that other people, not authorized by the user, could be able to access stored data; Advanced Data Protection solves that problem. The second reason is that users may be unable to access something for which they are authorized due to a bug, systems failure, or data loss.1

Do not get me wrong; I am impressed with the surprise release and rapid rollout of Advanced Data Protection. It is worth pointing out that it appears to include availability in China, contrary to comments from some speculators at the feature’s announcement. I feel more secure knowing iCloud Drive operates more like an extension of my personal devices. Where I would like to see progress next is in ongoing stability and quality improvements. iCloud is a higher quality service than it used to be, but it can be better.
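As a rough sketch of the underlying idea (emphatically not Apple’s implementation), end-to-end encryption means the key is generated and kept on the client, so the server only ever stores ciphertext it cannot read:

```python
# Minimal sketch of the end-to-end idea, not Apple's implementation:
# the key lives only on the user's devices; the server stores ciphertext.
from cryptography.fernet import Fernet

device_key = Fernet.generate_key()   # never leaves the user's devices
client = Fernet(device_key)

server_storage = {}                  # all the "cloud" ever sees

def upload(name: str, data: bytes) -> None:
    server_storage[name] = client.encrypt(data)

def download(name: str) -> bytes:
    return client.decrypt(server_storage[name])

upload("note.txt", b"meet at noon")
print(server_storage["note.txt"][:16], "...")  # opaque to the server
print(download("note.txt"))                    # b'meet at noon'
```

The second risk follows directly from this model: if the only key is on your devices, the provider cannot recover your data for you, which is why stability and quality matter even more.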


  1. A third fear is the possibility of irreversible changes made by others, something which was apparently possible in iCloud Drive. ↥︎

Maddie Stone, Grist:

The passage of the Digital Fair Repair Act last June reportedly caught the tech industry off guard, but it had time to act before Governor Kathy Hochul would sign it into law. Corporate lobbyists went to work, pressing Albany for exemptions and changes that would water the bill down. They were largely successful: While the bill Hochul signed in late December remains a victory for the right-to-repair movement, the more corporate-friendly text gives consumers and independent repair shops less access to parts and tools than the original proposal called for. (The state Senate still has to vote to adopt the revised bill, but it’s widely expected to do so.)

[…]

Hochul’s office sent TechNet’s revised draft to repair advocates to get their reaction. Those advocates shared the TechNet-edited version of the bill with Fahy’s staff, which gave it to the Federal Trade Commission, or FTC, the agency charged with protecting American consumers. Documents that Repair.org shared with Grist show that FTC staff were highly critical of many of the changes. The parts assembly provision, one commission staffer wrote in response to TechNet’s edits, “could be easily abused by a manufacturer” to create a two-tiered system in which individual components like batteries are available only to authorized repair partners. Another of TechNet’s proposed changes — deleting a requirement that manufacturers give owners and independent shops the ability to reset security locks in order to conduct repairs — could result in a “hollow right to repair” in which security systems thwart people from fixing their stuff, the staffer wrote.

TechNet is an industry lobbying group with members like Amazon, Apple, Google, Meta, and Samsung.

I do not think the concerns raised by TechNet should be dismissed out of hand as simple influence peddling. There are real security and privacy concerns if it is possible to disable lockout features, even if it is being done with the best of intentions and with full permission. But there ought to be a solution; I think John Bumstead’s idea is worth considering. Security risks should not be used as a convenient excuse for restricting third-party repairability.

Roshan Abraham, Vice:

“Algorithmic wage discrimination allows firms to personalize and differentiate wages for workers in ways unknown to them, to behave in ways that the firm desires, perhaps as little as the system determines that they may be willing to accept,” [Veena] Dubal writes. The wages are “calculated with ever-changing formulas using granular data on location, individual behavior, demand, supply, and other factors,” she adds.

In a study combining legal analysis and interviews with gig workers, Dubal concludes that Prop 22 has turned working into gambling. From a driver’s point of view, every time they log in to work they are essentially gambling for wages, as the algorithm provides no reason why those wages are what they are.

In a statement to Vice, an Uber spokesperson vehemently denied the claims in Dubal’s preprint study, emphasizing that its pricing algorithms do not include “factors like a driver’s race [or] ethnicity”. From what I can tell, Dubal never makes any such claim in her study, only stating that automatic fare structures may exacerbate existing pay disparities.

Even if these fee structures were made more transparent, Dubal’s study acknowledges the dangers of normalizing them. She writes of jobs which already have unpredictable wages, and which could be made worse by a wider rollout of pay determined dynamically by factors out of workers’ control. “Gig economy” drivers may be the first to experience it, but you just know there are employers salivating at the thought of saving money by allowing computers to make constant adjustments to worker pay.
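To illustrate what “ever-changing formulas” could mean in principle, here is an entirely hypothetical pay function. It is not Uber’s algorithm, nor any firm’s; every factor and weight is invented.

```python
# Entirely hypothetical sketch of algorithmic wage personalization.
# This is no real firm's formula; every factor and weight is invented.
def offer_pay(base_fare: float, demand: float, supply: float,
              driver_accept_rate: float) -> float:
    surge = max(1.0, demand / max(supply, 1.0))
    # The contested step: drivers who historically accept low offers
    # can be offered less for identical work.
    willingness_discount = 0.85 if driver_accept_rate > 0.9 else 1.0
    return round(base_fare * surge * willingness_discount, 2)

# Two drivers, same trip, same market conditions, different pay:
print(offer_pay(10.0, demand=120, supply=100, driver_accept_rate=0.95))  # 10.2
print(offer_pay(10.0, demand=120, supply=100, driver_accept_rate=0.60))  # 12.0
```

From the driver’s side, only the final number is visible, which is the gambling-like opacity Dubal describes.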

We are officially one month into Elon Musk’s ownership of Twitter. One month of needlessly cruel layoffs, of cozying up to far right goons, of uncertainty about the direction my favourite bar is taking. It is under new management which believes almost nobody should be unwelcome, regardless of their behaviour, and which fired most of the bouncers, so there are fewer people keeping an eye out for the things that drive others away. At best, he is spineless. At worst, he is enabling and even welcoming terrible people; that is certainly how they read it.

Is it any wonder advertisers are reportedly spooked?

Now he has decided to take on what used to be his biggest advertiser after they, in the words of Musk, “threatened to withhold Twitter” from the App Store, apparently without explanation. But it does not take a close Apple watcher to speculate on why it would be newly concerned about the Twitter app: it requires all apps which permit user submissions to have functional filtering, blocking, and reporting mechanisms. This is not a mystery. Apple is probably — understandably — worried about Musk’s statements and the laying off of thousands of moderators. In fairness, Twitter does not have a spectacular track record of ridding its platform of even the most heinous material but, also in fairness, eliminating all but one person tasked with removing CSAM in the world’s most populous region will make it harder to solve this problem, despite claims to the contrary.

Musk framed Apple’s reduced advertising spend as an attack on free speech. That is a wild accusation to throw at a company that, as Jason Koebler at Vice pointed out, twice challenged the FBI when the Bureau attempted to compromise encryption. Apple’s control of native app distribution on iOS devices means it is uniquely positioned to influence acceptable limits of speech and, as Musk also complained about today, it extracts fees from digital businesses. Those are also concerning factors — ones which I have repeatedly written about. But Musk has no credibility in framing its ad spending as a free speech issue.

Of note, Twitter has also been a staunch defender of free speech. This bar I love has long been home to anonymous users and a crack legal team pushing back against worldwide interference. It has also established internal boundaries to try to improve the comfort of its guests. Many of the people making those decisions have been pushed out, replaced by people more obedient to the whims of an owner who believes none of that is necessary. He says he will comply with regulators while laying off staff responsible for that. This bar is filling up with assholes who are making many of us uncomfortable and driving some away. Hopefully, the new spot can fill the void. Even so, it still feels like a loss.

Chloe Xiang, Vice:

On Tuesday, the Edmonton Police Service (EPS) shared a computer generated image of a suspect they created with DNA phenotyping, which it used for the first time in hopes of identifying a suspect from a 2019 sexual assault case. Using DNA evidence from the case, a company called Parabon NanoLabs created the image of a young Black man. The composite image did not factor in the suspect’s age, BMI, or environmental factors, such as facial hair, tattoos, and scars. The EPS then released this image to the public, both on its website and on social media platforms including its Twitter, claiming it to be “a last resort after all investigative avenues have been exhausted.”

This is not the first time police in Canada have turned to Parabon to create DNA-based predictive composites of suspects.

Sarah Rieger produced some terrific reporting on the use of this tool and its murky ethics while she was at CBC News. Here is Rieger in 2018 after a Parabon portrait was used to try to find the mother of a baby abandoned in Calgary:

Benedikt Hallgrímsson, a biological anthropologist and evolutionary biologist who studies the significance of phenotypic variation and variability at the University of Calgary, said he wouldn’t recommend phenotyping be used as a regular technique by law enforcement.

[…]

Hallgrímsson said the risk of these composite images is “twofold.” First, the image might lead to someone being falsely accused of a crime. Second, the actual suspect might not look anything like the picture and could be overlooked.

And here is a 2018 followup story from Rieger, answering the question of why they are used at all in Canada instead of family matches from the national databank of DNA from convicted criminals, missing persons, and volunteered samples:

A public affairs spokesperson told CBC that Canada is one of the only western countries not to allow familial DNA typing, even though it has been used to solve dozens of cases in the United States and around the globe.

“Jurisdictions that currently do use familial searching do so either on the basis of explicit legislative permission, or in some cases, more disturbingly, in the absence of any legislation explicitly prohibiting it,” Patricia Kosseim, the senior general counsel at the office of Canada’s privacy commissioner, said during a 2015 speech at the Canadian Institute on the Administration of Justice.

Rieger says consumer DNA databases like those used to crack cold cases in the U.S. would still be permissible for police to search. All of these options make me uncomfortable, but permitting exploratory use of the national database of criminals’ DNA seems like it could incentivize its expansion. When it was launched in 1998, only serious crimes required the collection of a convicted offender’s DNA. In 2008, amidst Stephen Harper’s crime-and-punishment tenure, law enforcement was permitted to collect offenders’ DNA for less violent criminal convictions. It would be worrisome if there were more reasons for more Canadians’ DNA to be in that databank. The whole point of the DNA Identification Act was to ensure the database does not become a means to collect a biological identity marker for everyone.

At the end of the second story, Rieger points out that the mother of the abandoned baby would not be identified using familial DNA matching unless one of her relatives was a convicted criminal. Rieger also notes that, at the time of writing, many of the cases involving Parabon’s predictive portraits remained open.

One of those cases was the 1998 murder of Renee Sweeney in Sudbury. In January 2017, police used Parabon’s software to update a sketch of the suspect. The suspect was arrested in December 2018, but police said they only received a tip that November. Even in favourable coverage, police do not seem to want to draw a straight line between Parabon’s generated portrait and an arrest nearly two years after its release.

Do you remember having the capacity for shock?

To be fair, it may have been muted by years of relentless news stories exploring an entire industry of privacy invasions. Some of these articles might involve subjects familiar to you; perhaps you were an early worrier about how Facebook apps could harvest data on users’ friends, a practice the company later found was happening at shocking scale. Unfortunately, most of the general-audience press began paying attention to these concerns only after the 2016 U.S. election, when that Facebook scandal was disproportionately blamed for a particularly idiotic presidency. But, at last, mainstream newsrooms did cover these problems, and they brought the budget, sources, and access to uncover some truly horrifying news items, with such regularity that my ability to be shocked has been blunted.

This made my jaw drop.

Joseph Cox, Vice:

Multiple branches of the U.S. military have bought access to a powerful internet monitoring tool that claims to cover over 90 percent of the world’s internet traffic, and which in some cases provides access to people’s email data, browsing history, and other information such as their sensitive internet cookies, according to contracting data and other documents reviewed by Motherboard.

[…]

“The network data includes data from over 550 collection points worldwide, to include collection points in Europe, the Middle East, North/South America, Africa and Asia, and is updated with at least 100 billion new records each day,” a description of the Augury platform in a U.S. government procurement record reviewed by Motherboard reads. It adds that Augury provides access to “petabytes” of current and historical data.

The NSA and GCHQ have, for years, intercepted and ingested data as it flows from server farms through fibre optic cables and across the internet. These programs built upon previous general surveillance efforts like the FBI’s Carnivore software.

These wildly intrusive and untargeted capabilities, once the domain of government intelligence gathering efforts, now appear to be offered to anyone who can afford whatever Team Cymru is charging. Regardless of your opinion of the programs operated by the NSA and GCHQ, at least they had the appearance of formal controls and specific goals. As Cox reports, now that the monitoring is done by a private business, pesky roadblocks like warrants are no longer required.

This is wild, too:

Beyond his day job as CEO of Team Cymru, Rabbi Rob Thomas also sits on the board of the Tor Project, a privacy focused non-profit that maintains the Tor software. That software is what underpins the Tor anonymity network, a collection of thousands of volunteer-run servers that allow anyone to anonymously browse the internet.

I am not sure if the dissidents and drug seekers who rely on Tor should be worried, but I do not know what to make of this conflict. The Tor Project says there is no conflict of interest, though, so I feel silly.

Bennett Cyphers, Electronic Frontier Foundation:

The company, Fog Data Science, has claimed in marketing materials that it has “billions” of data points about “over 250 million” devices and that its data can be used to learn about where its subjects work, live, and associate. Fog sells access to this data via a web application, called Fog Reveal, that lets customers point and click to access detailed histories of regular people’s lives. This panoptic surveillance apparatus is offered to state highway patrols, local police departments, and county sheriffs across the country for less than $10,000 per year.

The records received by EFF indicate that Fog has past or ongoing contractual relationships with at least 18 local, state, and federal law enforcement clients; several other agencies took advantage of free trials of Fog’s service. EFF learned about Fog after filing more than 100 public records requests over several months for documents pertaining to government relationships with location data brokers. EFF also shared these records with The Associated Press.

Cyphers found several connections between Fog Data Science and a data broker called Venntel. While Fog Data focuses on smaller police departments, Venntel works mostly with national agencies and, according to Cyphers’ reporting, also provides data to other law enforcement-connected location companies like Babel Street and X-Mode. Venntel is well-connected in Washington. The Department of Homeland Security is a current user of its software; in the past, it has also held contracts with the FBI, DEA, ICE, and IRS, according to a search of USAspending.gov.

Cyphers:

Together, the “area search” and the “device search” functions allow surveillance that is both broad and specific. An area search can be used to gather device IDs for everyone in an area, and device searches can be used to learn where those people live and work. As a result, using Fog Reveal, police can execute searches that are functionally equivalent to the geofence warrants that are commonly served to Google.
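Strip away the web interface and that two-step pattern is just a pair of queries over a table of location pings. The sketch below is my own illustration; the function names and data are invented, and Fog Reveal is a point-and-click web app, not a public API.

```python
from datetime import datetime

# Each record: (hashed device ID, latitude, longitude, timestamp). In the
# real product these rows come from ad-tech data brokers, not a list literal.
PINGS = [
    ("a1b2", 51.0447, -114.0719, datetime(2022, 6, 1, 14, 5)),
    ("a1b2", 51.0500, -114.0850, datetime(2022, 6, 1, 2, 10)),
    ("c3d4", 51.0447, -114.0719, datetime(2022, 6, 1, 14, 7)),
]

def area_search(lat, lon, radius, start, end):
    """Step one: every device seen inside a crude box geofence in a window."""
    return {
        device
        for device, plat, plon, ts in PINGS
        if abs(plat - lat) <= radius and abs(plon - lon) <= radius
        and start <= ts <= end
    }

def device_search(device):
    """Step two: the full recorded movement history of one device."""
    return [(plat, plon, ts) for d, plat, plon, ts in PINGS if d == device]

# Composed, the two queries behave like a geofence warrant, minus the warrant.
for device in area_search(51.0447, -114.0719, 0.001,
                          datetime(2022, 6, 1), datetime(2022, 6, 2)):
    print(device, device_search(device))
```

The composition is the whole trick: step one harvests identifiers from a place and time, and step two turns each identifier into a movement history.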

The EFF says Fog Reveal will display a proprietary hash of the advertiser ID for devices within a geofence instead of the actual ID. But that may not be the case for all users.

Will Greenberg, EFF:

Federal users have access to an interface for converting between Fog’s internal device IDs (“FOG IDs”) and the device’s actual Advertiser ID:

This is eyebrow raising for a couple reasons. First, if this feature is operational, it would contradict assurances made in a sample State search warrant Fog sends to customers that FOG IDs can’t be converted back into Advertiser IDs. Second, if users could retrieve the Advertiser IDs of all devices in a query’s results, it would make Reveal far more capable of unmasking the identities of those devices’ owners. This is due to the fact that if you have access to a device, you can read its Advertiser ID, and thus law enforcement would be able to verify if a specific person’s device was part of a query’s results.

To be clear, the EFF does not know if this extra level of federal functionality is available to end users. The U.S. Marshals had a two-year contract with Fog Data, which ended in 2020. It is the only national-level contract the EFF could find, and there is no evidence the Marshals or any Fog Data customer has access to unhashed advertiser IDs.
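It is worth spelling out why a conversion interface is even possible: a “proprietary hash” is only one-way to people who do not hold the mapping, and whoever computes the hashes can keep a lookup table as a side effect. A minimal sketch, with an HMAC standing in for whatever Fog actually uses:

```python
import hashlib
import hmac

SECRET_KEY = b"fog-internal-key"  # hypothetical; the real scheme is not public

def fog_id(advertiser_id: str) -> str:
    """A keyed hash: one-way to outsiders, trivially mapped by the keyholder."""
    return hmac.new(SECRET_KEY, advertiser_id.encode(), hashlib.sha256).hexdigest()[:16]

# The provider sees every advertiser ID it ingests, so building the reverse
# mapping is a side effect of doing the hashing in the first place.
REVERSE: dict[str, str] = {}

def ingest(advertiser_id: str) -> str:
    hashed = fog_id(advertiser_id)
    REVERSE[hashed] = advertiser_id  # this table is the "conversion interface"
    return hashed

hashed = ingest("38400000-8cf0-11bd-b23e-10b96e40000d")
print(REVERSE[hashed])  # the "irreversible" ID, recovered instantly

# And even without the table: anyone who can read a specific device's
# advertiser ID can hash it and test membership in a query's results.
assert fog_id("38400000-8cf0-11bd-b23e-10b96e40000d") == hashed
```

That second property is why Greenberg’s point matters even if Fog never hands anyone the table: a hash only pseudonymizes against people who cannot guess or obtain the input.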

Even so, the presence of this functionality is worrisome. Last year, Joseph Cox of Vice explained how “identity resolution” companies like BIGDBM and FullContact brag about their ability to tie advertising identifiers to individual profiles of people: their names, physical addresses, IP addresses, property records, and more. If a law enforcement agency has contracts with a device location aggregator like Fog Data and an identity resolution company, and has access to this feature, officers could create full named profiles of people’s movements without a warrant.

Even if an agency does not have access to an unhashed device identifier, the repeated presence of a device at an address is a strong indicator that its owner lives there. It is hard to overstate how easy it is to link an address back to a name and phone number with free and publicly accessible web tools. That is, even though Fog Data may not collect what it deems personally identifiable information — which, somehow, does not include device advertising identifiers — it is trivial to tie what it does show back to a specific person. And, again, police somehow do not need a warrant for this because the location data is bought from data brokers which harvest it from apps instead of cell towers.
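To show how little analysis that inference takes, here is a toy version of it: bucket a device’s night-time pings by rounded coordinates and take the most common bucket as the probable home. The data, the coordinate rounding, and the definition of “night” are all invented.

```python
from collections import Counter
from datetime import datetime

# Toy home-location inference from ad-tech style pings. The data, rounding,
# and "night" window are invented for illustration.
pings = [
    (51.0301, -114.0512, datetime(2022, 6, 1, 23, 40)),
    (51.0302, -114.0511, datetime(2022, 6, 2, 1, 15)),
    (51.0449, -114.0718, datetime(2022, 6, 2, 12, 30)),  # daytime, elsewhere
    (51.0301, -114.0513, datetime(2022, 6, 3, 2, 5)),
]

def probable_home(pings, night_start=21, night_end=6):
    """Most frequent rough location among night-time pings."""
    at_night = [
        (round(lat, 3), round(lon, 3))
        for lat, lon, ts in pings
        if ts.hour >= night_start or ts.hour < night_end
    ]
    return Counter(at_night).most_common(1)[0][0] if at_night else None

# Prints (51.03, -114.051): feed that into any free reverse-address lookup
# and the "anonymous" device suddenly has a street address, and likely a name.
print(probable_home(pings))
```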