Search Results for: ostensibly

In the 1970s and 1980s, in-house researchers at Exxon began to understand how crude oil and its derivatives were leading to environmental devastation. They were among the first to comprehensively connect the use of their company’s core products to the warming of the Earth, and they predicted some of the harms which would result. But their research was treated by Exxon as mere suggestion because the effects of the obvious legislative responses would “alter profoundly the strategic direction of the energy industry”. It would be a business nightmare.

Forty years later, the world has concluded its warmest year in recorded history by starting another. Perhaps we would have been better able to act if businesses like Exxon had equivocated less all these years. Instead, they publicly sowed confusion and kept lawmakers under-informed. The continued success of their industry lay in keeping these secrets.


“The success lies in the secrecy” is a shibboleth of the private surveillance industry, as described in Byron Tau’s new book, “Means of Control”. It is easy to find parallels to my opening anecdote throughout, though a direct comparison to human-led ecological destruction is, to be clear, a knowingly exaggerated metaphor. The erosion of privacy and civil liberties is horrifying in its own right, and it shares key attributes: those in the industry knew what they were doing and allowed it to persist because it was lucrative and, in a post-9/11 landscape, ostensibly justified.

Tau’s byline is likely familiar to anyone interested in online privacy. For several years at the Wall Street Journal, he produced dozens of deeply reported articles about the intertwined businesses of online advertising, smartphone software, data brokers, and intelligence agencies. Tau no longer writes for the Journal, but “Means of Control” is an expansion of that earlier work, carefully arranged into a coherent set of stories.

Tau’s book, like so many others describing the current state of surveillance, begins with the terrorist attacks of September 11, 2001. Those were the early days, when Acxiom realized it could connect its consumer data set to flight and passport records. The U.S. government ate it up, and its appetite proved insatiable. Tau documents the growth of an industry that did not exist — could not exist — before the invention of electronic transactions, targeted advertising, virtually limitless digital storage, and near-universal smartphone use. This rapid transformation occurred not only with little regulatory oversight, but with government encouragement, including through investments in startups like Dataminr, GeoIQ, PlaceIQ, and PlanetRisk.

In near-chronological order, Tau tells the stories which have defined this era. Remember when documentation released by Edward Snowden showed how data created by mobile ad networks was being used by intelligence services? Or how a group of Colorado Catholics bought up location data to out priests who used gay-targeted dating apps? Or how a defence contractor quietly operates nContext, an adtech firm that permits the U.S. intelligence apparatus to effectively wiretap the global digital ad market? Regarding the latter, Tau writes of a meeting he had with a source who showed him a “list of all of the advertising exchanges that America’s intelligence agencies had access to”, and who told him American adversaries were doing the exact same thing.

What impresses most about this book is not the volume of specific incidents — though it certainly delivers on that front — but the way they are all woven together into a broader narrative perhaps best summarized by Tau himself: “classified does not mean better”. That is true of volume and variety, and it is also true of the relative ease with which commercially available data can be obtained. Tracking someone halfway around the world no longer requires flying people in or even paying off people on the ground. Someone in a Virginia office park can just make that happen, and likely so, too, can other someones in Moscow and Sydney and Pyongyang and Ottawa, all powered by data from companies based in friendly and hostile nations alike.

The tension running through Tau’s book is in the compromise I feel he attempts to strike between acknowledging the national security utility of a surveillance state and describing how the U.S. has abdicated the standards of privacy and freedom it has long claimed as foundational rights. His reporting often reads as an understandable combination of awe and disgust. The U.S. has, it seems, slid in the direction of the kinds of authoritarian states its administration routinely criticizes. But Tau is right to clarify in the book’s epilogue that the U.S. is not, for example, China, separated from the standards of the latter by “a thin membrane of laws, norms, social capital, and — perhaps most of all — a lingering culture of discomfort” with concentrated state power. However, the preceding chapters show how those questions about concentrated power do not fully extend into the private sector, where pride in the scale and global reach of U.S. businesses has long coexisted with concern about their influence. Tau’s reporting shows how lax U.S. privacy standards have been exported worldwide. For a more pedestrian example, consider the frequent praise–complaint sandwiches directed at Amazon, Meta, Starbucks, and Walmart, to throw a few names out there.

Corporate self-governance is an entirely inadequate response. Just about every data broker and intermediary from Tau’s writing which I looked up promised it was “privacy-first” or used similar language. Every business insists in its marketing literature that it is concerned about privacy and careful about how it collects and uses information, and businesses have been saying so for decades — yet here we are. Entire industries have been built on the backs of tissue-thin user consent and a flexible definition of “privacy”.

When polled, people say they are concerned about how corporations and the government collect and use data. Still, when lawmakers mandate choices for users about their data collection preferences, the results do not appear to show a society that cares about personal privacy.

In response to the E.U.’s General Data Protection Regulation, websites decided they wanted to continue collecting and sharing loads of data with advertisers, so they created the now-ubiquitous cookie consent sheet. The GDPR does not explicitly mandate this mechanism, and many of these sheets remain non-compliant with both the rules and the intention of the law, but they have become a particularly common form of obtaining user consent. However, if you arrive at a website and it asks you whether you are okay with it sharing your personal data with hundreds of ad tech firms, are you providing meaningful consent with a single button click? Hardly.

Similarly, something like 10–40% of iOS users agree to allow apps to track them. In the E.U., the cost of opting out of Meta’s tracking will be €6–10 per month, which, I assume, few people will pay.

All of these examples illustrate how inadequately we assess cost, utility, and risk. It is tempting to think of this as a personal responsibility issue akin to cigarette smoking but, as we are so often reminded, none of this data is particularly valuable in isolation — it must be aggregated in vast amounts. It is therefore much more like an environmental problem.

As with global warming, exposé after exposé after exposé is written about how our failure to act has produced extraordinary consequences. All of the technologies powering targeted advertising have enabled grotesque and pervasive surveillance as Tau documents so thoroughly. Yet these are abstract concerns compared to a fee to use Instagram, or the prospect of reading hundreds of privacy policies with a lawyer and negotiating each of them so that one may have a smidge of control over their private information.

There are technical answers to many of these concerns, and there are also policy answers. There is no reason both should not be used.

I have become increasingly convinced the best legal solution is one which creates a framework limiting the scope of data collection, restricting it to only that which is necessary to perform user-selected tasks, and preventing mass retention of bulk data. Above all, users should not be able to choose a model that puts them in obvious future peril. Many of you probably live in a society where so much is subject to consumer choice, so what I wrote may sound pretty drastic, but it is not. If anything, it is substantially less radical than the status quo that permits such expansive surveillance on the basis that we “agreed” to it.

Any such policy should also be paired with something like the Fourth Amendment is Not For Sale Act in the U.S. — similar legislation is desperately needed in Canada as well — to prevent sneaky exclusions from longstanding legal principles.

Last month, Wired reported that Near Intelligence — a data broker you can read more about in Tau’s book — was able to trace dozens of individual trips to Jeffrey Epstein’s island. That could be a powerful investigative tool. It is also very strange and pretty creepy that all that information was held by some random company you probably have not heard of or thought about outside stories like these. I am obviously not defending the horrendous shit Epstein and his friends did. But it is really, really weird that Near is capable of producing this data set. When interviewed by Wired, Eva Galperin, of the Electronic Frontier Foundation, said “I just don’t know how many more of these stories we need to have in order to get strong privacy regulations.”

Exactly. Yet I have long been convinced an effective privacy bill could not be implemented in either the United States or the European Union, and certainly not with any degree of urgency. And, no, Matt Stoller: de facto rules built on the backs of specific FTC decisions do not count. Real laws are needed. But the products and services which would be affected are too popular and too powerful. The E.U. is home to dozens of ad tech firms that promise full identity resolution. The U.S. would not want to destroy such an important economic sector, either.

Imagine my surprise when, while I was in the middle of writing this review, U.S. lawmakers announced the American Privacy Rights Act (PDF). If passed, it would give individuals more control over how their information — including biological identifiers — may be collected, used, and retained. Importantly, it requires data minimization by default. It would be the most comprehensive federal privacy legislation in the U.S., and it also promises various security protections and remedies, though I think lawmakers’ promise to “prevent data from being hacked or stolen” might be a smidge unrealistic.

Such rules would more or less match the GDPR in setting a global privacy regime that other countries would be expected to meet, since so much of the world’s data is processed in the U.S. or otherwise under U.S. legal jurisdiction. The proposed law borrows heavily from the state-level California Consumer Privacy Act, too. My worry is that corporations will treat it much as they have the GDPR and CCPA, continuing to offload decision-making to users while taking advantage of a deliberate imbalance of power. Still, any progress on this front is necessary.

So, too, is it useful for anyone to help us understand how corporations and governments have jointly benefitted from privacy-hostile technologies. Tau’s “Means of Control” is one such example. You should read it. It is a deep exploration of one specific angle of how data flows from consumer software to surprising recipients. You may think you know this story, but I bet you will learn something. Even if you are not a government target — I cannot imagine I am — it is a reminder that the global private surveillance industry only functions because we all participate, however unwillingly. People get tracked based on their own devices, but also on the devices of those around them. That is perhaps among the most offensive conclusions of Tau’s reporting. We have all been conscripted for any government buying this data. It only works because it is everywhere and used by everybody.

For all they have erred, democracies are not authoritarian societies. Without reporting like Tau’s, we would be unable to see what our own governments are doing and — just as important — how that differs from actual police states. As Tau writes, “in China, the state wants you to know you’re being watched. In America, the success lies in the secrecy”. Well, the secret is out. We now know what is happening despite the best efforts of an industry to keep it quiet, just like we know the Earth is heating up. Both problems massively affect our lived environment. Nobody — least of all me — would seriously compare the two. But we can say the same about each of them: now we know. We have the information. Now comes the hard part: regaining control.

Maxwell Zeff, Gizmodo:

Just over half of Amazon Fresh stores are equipped with Just Walk Out. The technology allows customers to skip checkout altogether by scanning a QR code when they enter the store. Though it seemed completely automated, Just Walk Out relied on more than 1,000 people in India watching and labeling videos to ensure accurate checkouts. The cashiers were simply moved off-site, and they watched you as you shopped.

Zeff says, paraphrasing the Information’s reporting, that 70% of sales needed human review as of 2022, though Amazon says that is inaccurate.

Based on this story and reporting from the Associated Press, it sounds like Amazon is only ending Just Walk Out support in its own stores. According to the AP and Amazon’s customer website and retailer marketing page, several other stores will still use a technology it continues to say works by using “computer vision, sensor fusion, and deep learning”.

How is this not basically a scam? It certainly feels that way: if I were sold this ostensibly automated feat of technology, I would feel cheated by Amazon if it was mostly possible because someone was watching a live camera feed and making manual corrections. If the Information’s reporting is correct, only 30% of transactions are as magically automated as Amazon claims. However, Amazon told Gizmodo that only a “small minority” of transactions need human review today — but, then again, Amazon has marketed this whole thing from the jump as though it is just computers figuring it all out.

Amazon says it will be replacing Just Walk Out with its smart shopping cart. Just like those from Instacart, it will show personalized ads on a cart’s screen.

Semafor published, in a new format it calls Signals — sponsored by Microsoft, though I am earnestly sure no editorial lines were crossed — aggregated commentary about the U.S. iPhone antitrust case:

If the government wins the suit, “the walls of Apple’s walled garden will be partially torn down,” wrote New York Times opinion columnist Peter Coy, meaning its suite of products will be “more like a public utility,” available to its rivals to use. “That seems to me like stretching what antitrust law is for,” Coy wrote. Tech policy expert Adam Kovacevich agreed, writing on Medium that people have long gone back and forth between iPhones and Android devices. “People vote with their pocketbooks,” Kovacevich said. “Why should the government force iPhones to look more like Androids?”

Many argue that this is an issue of consumer choice, and the government shouldn’t intervene to help companies such as Samsung gain a better footing in the market. The Consumer Choice Center’s media director put it this way: “Imagine the classroom slacker making the case to the teacher that the straight-A student in the front of the class is being anti-competitive by not sharing their lecture notes with them.”

The Kovacevich article this links to is the same one I wrote about over the weekend. His name caught my eye, but not nearly as much as the way he is described: as a “tech policy expert”. That is not wrong, but it is incomplete. He is the CEO of the Chamber of Progress, an organization that lobbies for policies favourable to the large technology companies that fund it.

It also seems unfair to attribute the latter quote to the Consumer Choice Center without describing what it represents — though I suppose its name makes it pretty obvious. It positions itself at the centre of “the global grassroots movement for consumer choice”, and you do not need the most finely tuned bullshit detector to be suspicious of the “grassroots” nature of an organization promoting the general concept of having lots of stuff to buy.

Indeed, the Center acknowledges being funded by a wide variety of industries, including “energy” — read: petroleum — nicotine, and “digital”. According to tax documents, it pulled in over $4 million in 2022. It shares its leadership with another organization, Consumer Choice Education, which brought in $1.5 million in 2022, over half of it from the Atlas Network, a network of libertarian think tanks that counts among its supporters petroleum companies and the billionaire Koch brothers. The ostensibly people-centred Center, merely promoting the rights of consumers, is very obviously supported by corporations either directly or via other pro-business organizations that also get their funding either directly from corporations or via other — oh, you understand how this works.

None of that inherently invalidates the claims made by either Kovacevich or Stephen Kent for the Consumer Choice Center, but I fault Semafor for the lack of context for either quote. Both people surely believe what they wrote. But organizations that promote the interests of big business exist to provide apparently independent supporting voices because it is more palatable than those companies making self-interested arguments.

Has the rapid availability of images generated by A.I. programs duped even mainstream news into unwittingly using them in coverage of current events?

That is the impression you might get if you read a report from Cam Wilson, at Crikey, about generated photorealistic graphics available from Adobe’s stock image service that purportedly depict real-life events. Wilson pointed to images which suggested they depict the war in Gaza despite being created by a computer. When I linked to it earlier this week, I found similar imagery ostensibly showing Russia’s war on Ukraine, the terrorist attacks of September 11, 2001, and World War II.

This story has now been widely covered but, aside from how offensive it seems for Adobe to be providing these kinds of images — more on that later — none of these outlets seem to be working hard enough to understand how these images get used. Some publications which referenced the Crikey story, like Insider and the Register, implied these images were being used in news stories without knowing or acknowledging they were generative products. This seemed to be, in part, based on a screenshot in that Crikey report of one generated image. But when I looked at the actual pages where that image was being used, it was a more complicated story: there were a couple of sketchy blog posts, sure, but a few of them were referencing an article which used it to show how generated images could look realistic.1

This is just one image and a small set of examples. There are thousands more A.I.-generated photorealistic images that apparently depict real tragedies, ongoing wars, and current events. So, to see if Adobe’s A.I. stock library is actually tricking newsrooms, I spent a few nights this week looking into this in the interest of constructive technology criticism.

Here is my methodology: on the Adobe Stock website, I searched for terms like “Russia Ukraine war”, “Israel Palestine”, and “September 11”. I filtered the results to only show images marked as A.I.-generated, then sorted the results by the number of downloads. Then, I used Google’s reverse image search with popular Adobe images that looked to me like photographs. This is admittedly not perfect and certainly not comprehensive, but it is a light survey of how these kinds of images are being used.
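For those who want to replicate this, here is a minimal sketch of that workflow in TypeScript. The query parameter names are hypothetical stand-ins rather than Adobe Stock’s actual URL scheme, and the reverse image lookup is only stubbed because those searches were done manually in a browser.

```typescript
// Sketch of the survey workflow described above. The parameter names are
// hypothetical; Adobe Stock's real URL scheme may differ.
const searchTerms = ["Russia Ukraine war", "Israel Palestine", "September 11"];

function buildStockSearchUrl(term: string): string {
  const params = new URLSearchParams({
    k: term,                   // hypothetical: keyword query
    "filters[gen_ai]": "only", // hypothetical: only images marked as A.I.-generated
    order: "downloads",        // hypothetical: sort by number of downloads
  });
  return `https://stock.adobe.com/search?${params.toString()}`;
}

// The reverse image lookups were done manually in a browser; this URL shape
// is an assumption for illustration, not a documented Google API.
function reverseImageSearchUrl(imageUrl: string): string {
  return `https://lens.google.com/uploadbyurl?url=${encodeURIComponent(imageUrl)}`;
}

for (const term of searchTerms) {
  console.log(buildStockSearchUrl(term));
}
console.log(reverseImageSearchUrl("https://example.com/some-stock-image.jpg"));
```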

Then, I would contact people and organizations which had used these images and ask them if they were aware the images were marked as A.I.-generated, and if they had any thoughts about using A.I. images.

I found few instances where a generated image was being used by a legitimate news organization in an editorial context — that is, an A.I.-generated image being passed off as a photo of an event described by a news article. I found no instances of this being done by high-profile publishers. This is not entirely surprising to me because none of these generated images are visible on Adobe Stock when images are filtered to Editorial Use only; and, also, because Adobe is not a major player in editorial photography to the same extent as, say, AP Photos or Getty Images.

I also found many instances of fake local news sites — similar to these — using these images, and examples from all over the web used in the same way as commercial stock photography.

This is not to suggest some misleading uses are okay, only to note a difference in gravity between egregious A.I. use and that which is a question of taste. It would be extremely deceptive for a publisher to use a generated image in coverage of a specific current event, as though the image truly represents what is happening. It seems somewhat less severe should that kind of image be used by a non-journalistic organization to illustrate a message of emotional support, to use a real example I found. And it seems less severe still for a generated image of a historic event to be used by a non-journalistic organization as a kind of stock photo in commemoration.

But these are distinctions of severity; it is never okay for media to mislead audiences into believing something is a photo related to the story when it is neither. For example, here are relevant guidelines from the Associated Press:

We avoid the use of generic photos or video that could be mistaken for imagery photographed for the specific story at hand, or that could unfairly link people in the images to illicit activity. No element should be digitally altered except as described below.

[…]

[Photo-based graphics] must not misrepresent the facts and must not result in an image that looks like a photograph – it must clearly be a graphic.

From the BBC:

Any digital manipulation, including the use of CGI or other production techniques (such as Photoshop) to create or enhance scenes or characters, should not distort the meaning of events, alter the impact of genuine material or otherwise seriously mislead our audiences. Care should be taken to ensure that images of a real event reflect the event accurately.

From the New York Times:

Images in our pages, in the paper or on the Web, that purport to depict reality must be genuine in every way. No people or objects may be added, rearranged, reversed, distorted or removed from a scene (except for the recognized practice of cropping to omit extraneous outer portions). […]

[…]

Altered or contrived photographs are a device that should not be overused. Taking photographs of unidentified real people as illustrations of a generic type or a generic situation (like using an editor or another model in a dejected pose to represent executives being laid off) usually turns out to be a bad idea.

And from NPR:

When packages call for studio shots (of actors, for example; or prepared foods) it will be obvious to the viewer and if necessary it will be made perfectly clear in the accompanying caption information.

Likewise, when we choose for artistic or other reasons to create fictional images that include photos it will be clear to the viewer (and explained in the caption information) that what they’re seeing is an illustration, not an actual event.

I have quoted generously so you can see a range of explanations of this kind of policy. In general, news organizations say that anything which looks like a photograph should be immediately relevant to the story, anything which is edited for creative reasons should be obviously differentiated both visually and in a caption, and that generic illustrative images ought to be avoided.

I started with searches for “Israel Palestine war” and “Russia Ukraine war”, and stumbled across an article from Now Habersham, a small news site based in Georgia, USA, which originally contained this image illustrating an opinion story. After I asked the paper’s publisher Joy Purcell about it, they told me they “overlooked the notation that it was A.I.-generated” and said they “will never intentionally publish A.I.-generated images”. The article was updated with a real photograph. I found two additional uses of images like this one by reputable if small news outlets — one also in the U.S., and one in Japan — and neither returned requests for comment.

I next tried some recent events, like wildfires in British Columbia and Hawaii, an “Omega Block” causing flooding in Greece and Spain, and aggressive typhoons this summer in East Asia. I found images marked as generated by A.I. in Adobe Stock used to represent those events, but not indicated as such in use — in an article in the Sheffield Telegraph; on Futura, a French science site; on a news site for the debt servicing industry; and on a page of the U.K.’s National Centre for Atmospheric Science. Claire Lewis, editor of the Telegraph’s sister publication the Sheffield Star, told me they “believe that any image which is AI generated should say that in the caption” and would “arrange for its removal”. Requests for comment from the other three organizations were not returned.

Next, I searched “September 11”. I found plenty of small businesses using generated images of first responders among destroyed towers and a firefighter in New York in commemorative posts. And seeing those posts changed my mind about the use of these kinds of images. When I first wrote about this Crikey story, I suggested Adobe ought to prohibit photorealistic images which claim to depict real events. But I can also see an argument that an image representative of a tragedy used in commemoration could sometimes be more ethical than a real photograph. It is possible the people in a photo do not want to be associated with a catastrophe, or that its circulation could be traumatizing.

It is Remembrance Day this weekend in Canada — and Veterans Day in the United States — so I reverse-searched a few of those images and spotted one on the second page of a recent U.S. Department of Veterans Affairs newsletter (PDF). Again, in this circumstance, it serves only as an illustration in the same way a stock photo would, but one could make a good argument that it should portray real veterans.

Requests for comment made to the small businesses which posted the September 11 images, and to Veterans Affairs, went unanswered.

As a replacement for stock photos, A.I.-generated images are perhaps an okay substitute. There are plenty of photos representing firefighters and veterans posed by models, so it seems to make little difference if that sort of image is generated by a computer. But in a news media context these images seem like they are, at best, an unnecessary source of confusion, even if they are clearly labelled. Their use only perpetuates the impression that A.I. is everywhere and nothing can be verified.

It is offensive to me that any stock photo site would knowingly accept A.I.-generated graphics of current events. Adobe told PetaPixel that its stock site “is a marketplace that requires all generative AI content to be labeled as such when submitted for licensing”, but it is unclear to me how reliable that is. I found a few of these images for sale from other stock photo sites without any disclaimers. That means these were erroneously marked as A.I.-generated on Adobe Stock, or that other providers are less stringent — and that people have been using generated images without any possibility of foreknowledge. Neither option is great for public trust.

I do think there is more that Adobe could do to reduce the likelihood of A.I.-generated images used in news coverage. As I noted earlier, these images do not appear when the “Editorial” filter is selected. However, there is no way to configure an Adobe account to search this selection by default.2 Adobe could permit users to set a default set of search filters — to only show editorial photos, for example, or exclude generative A.I. entirely. Until that becomes possible from within Adobe Stock itself, I made a bookmark-friendly empty search which shows only editorial photographs. I hope it is helpful.

Update: On November 11, I updated the description of where one article appeared. It was in the Sheffield Telegraph, not its sister publication, the Sheffield Star.


  1. The website which published this article — Off Guardian — is a crappy conspiracy theory site. I am avoiding linking to it because I think it is a load of garbage and unsubtle antisemitism, but I do think its use of the image in question was, in a vacuum, reasonable. ↥︎

  2. There is also no way to omit editorial images by default, which makes Adobe Stock frustrating to use for creative or commercial projects, as editorial images are not allowed to be manipulated. ↥︎

Gerrit De Vynck, Washington Post:

A paper from U.K.-based researchers suggests that OpenAI’s ChatGPT has a liberal bias, highlighting how artificial intelligence companies are struggling to control the behavior of the bots even as they push them out to millions of users worldwide.

The study, from researchers at the University of East Anglia, asked ChatGPT to answer a survey on political beliefs as it believed supporters of liberal parties in the United States, United Kingdom and Brazil might answer them. They then asked ChatGPT to answer the same questions without any prompting, and compared the two sets of responses.

The survey in question is the Political Compass.

Arvind Narayanan on Mastodon:

The “ChatGPT has a liberal bias” paper has at least 4 *independently* fatal flaws:

– Tested an older model, not ChatGPT.

– Used a trick prompt to bypass the fact that it actually refuses to opine on political q’s.

– Order effect: flipping q’s in the prompt changes bias from Democratic to Republican.

– The prompt is very long and seems to make the model simply forget what it’s supposed to do.

Colin Fraser appears to be responsible for finding that the order in which the terms appear affects the political alignment displayed by ChatGPT.

Narayanan and Sayash Kapoor tried to replicate the paper’s findings:

Here’s what we found. GPT-4 refused to opine in 84% of cases (52/62), and only directly responded in 8% of cases (5/62). (In the remaining cases, it stated that it doesn’t have personal opinions, but provided a viewpoint anyway). GPT-3.5 refused in 53% of cases (33/62), and directly responded in 39% of cases (24/62).

It is striking to me how widely the claims of this paper were repeated as apparent confirmation that tech companies are responsible for pushing the liberal beliefs that are ostensibly a reflection of mainstream news outlets.

Mike Masnick, Techdirt:

[…] Last year, the US and the EU announced yet another deal on transatlantic data flows. And, as we noted at the time (once again!) the lack of any changes to NSA surveillance meant it seemed unlikely to survive yet again.

In the midst of all this, Schrems also went after Meta directly, claiming that because these US/EU data transfer agreements were bogus, that Meta had violated data protection laws in transferring EU user data to US servers.

And that’s what this fine is about. The European Data Protection Board fined Meta all this money based on the fact that it transferred some EU user data to US servers. And, because, in theory, the NSA could then access the data. That’s basically it. The real culprit here is the US being unwilling to curb the NSA’s ability to demand data from US companies.

As noted, this aligns with other examples of GDPR violations.

There is one aspect of Masnick’s analysis which I dispute:

Of course, the end result of all this could actually be hugely problematic for privacy around the globe. That might sound counterintuitive, seeing as here is Meta being dinged for a data protection failure. But, when you realize what the ruling is actually saying, it’s a de facto data localization mandate.

And data localization is the tool most frequently used by authoritarian regimes to force foreign internet companies (i.e., US internet companies) to host user data within their own borders where the authoritarian government can snoop through it freely. Over the years, we’ve seen lots of countries do this, from Russia to Turkey to India to Vietnam.

Just because data localization is something used by authoritarian governments does not mean it is an inherently bad idea. Authoritarian governments are going to do authoritarian government things — like picking through private data — but that does not mean people who reside elsewhere would face similar concerns.

While housing user data in the U.S. may offer protection for citizens, it compromises the privacy and security of others. Consider that non-U.S. data held on U.S. servers lacks the protections ostensibly placed on U.S. users’ information, meaning U.S. intelligence agencies are able to pick through it with little oversight. (That is, after all, the E.U.’s argument in its charges against Meta.) Plenty of free democracies also have data localization laws for at least some personal information without a problem. For example, while international agreements prevent the Canadian government from requiring data residency as a condition for businesses, privacy regulations require some types of information to be kept locally, while other types must have the same protections as Canadian-hosted data if stored elsewhere.

Michelle Boorstein and Heather Kelly, Washington Post:

A group of conservative Colorado Catholics has spent millions of dollars to buy mobile app tracking data that identified priests who used gay dating and hookup apps and then shared it with bishops around the country.

[…]

One report prepared for bishops says the group’s sources are data brokers who got the information from ad exchanges, which are sites where ads are bought and sold in real time, like a stock market. The group cross-referenced location data from the apps and other details with locations of church residences, workplaces and seminaries to find clergy who were allegedly active on the apps, according to one of the reports and also the audiotape of the group’s president.

Boorstein and Kelly say some of those behind this group also outed a priest two years ago using similar tactics, which makes it look like a test case for this more comprehensive effort. As they write, a New York-based reverend said at the time it was justified to expose priests who had violated their celibacy pledge. That is a thin varnish on what is clearly an effort to discriminate against queer members of the church. These operations have targeted clergy using data derived almost exclusively from the use of gay dating apps.

Data brokers have long promised the information they supply is anonymized but, time and again, this is shown to be an ineffective means of protecting users’ privacy. That ostensibly de-identified data was used to expose a single priest’s use of Grindr in 2021, and the organization in question has not stopped. Furthermore, nothing would prevent this sort of exploitation by groups based outside the United States, which may be able to obtain similar data to produce the same — or worse — outcomes.
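To make concrete why de-identification offers so little protection, here is a minimal sketch, with entirely invented data, of the kind of cross-referencing described in the report: joining pseudonymous advertising-ID pings against a short list of known addresses re-identifies a device without a single name appearing in the data set.

```typescript
// Hypothetical illustration only: joining "anonymous" ad-ID pings with
// known places of residence re-identifies people without any names.
type Ping = { adId: string; lat: number; lon: number; hour: number };
type KnownPlace = { label: string; lat: number; lon: number };

const pings: Ping[] = [
  { adId: "a1b2-c3d4", lat: 38.9341, lon: -77.1773, hour: 2 },  // overnight ping
  { adId: "a1b2-c3d4", lat: 38.8977, lon: -77.0365, hour: 14 }, // daytime ping
];

const knownResidences: KnownPlace[] = [
  { label: "parish residence, Anytown", lat: 38.9342, lon: -77.1772 },
];

// Crude proximity check: about 0.001 degrees, roughly a hundred metres here.
const near = (a: { lat: number; lon: number }, b: { lat: number; lon: number }): boolean =>
  Math.abs(a.lat - b.lat) < 0.001 && Math.abs(a.lon - b.lon) < 0.001;

// Any ad ID that repeatedly spends the night at a known address is,
// in effect, identified, even though the data set contains no names.
for (const ping of pings) {
  if (ping.hour <= 5) {
    for (const place of knownResidences) {
      if (near(ping, place)) {
        console.log(`Ad ID ${ping.adId} likely resides at: ${place.label}`);
      }
    }
  }
}
```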

This is some terrific reporting by Boorstein and Kelly.

Kareem Abdul-Jabbar:

What we need to always be aware of is that how we treat any one marginalized group is how we will treat all of them—given the chance. There is no such thing as ignoring the exploitation of one group hoping they won’t come for you.

This goes for us individually, but especially for a paper with a massive platform like the New York Times, which Abdul-Jabbar is responding to. A recent episode of Left Anchor is a good explanation of why the Times’ ostensibly neutral just-asking-questions coverage of trans people and issues is so unfairly slanted as to be damaging.

Chance Miller, 9to5Mac:

If you view the November [2020] announcement [of the first M1 Macs] as the start of the transition process, Apple would have needed to have everything wrapped up by November 2022. This deadline, too, has passed. This means Apple has missed its two-year transition target regardless of which deadline you consider.

[…]

So that leaves us where we are today. You have Apple Silicon options for every product category in the Mac lineup, with the exception of the Mac Pro. During its March event, Apple exec John Ternus teased that the Mac Pro with Apple Silicon was an announcement “for another day.” That day, however, hasn’t yet come.

Miller also notes that an Intel version of the Mac Mini remains available. But it hardly matters that Apple has technically missed its goal, since all of its mainstream Macs have transitioned to its own silicon, and it has released an entirely new Mac — in the form of the Mac Studio — and begun the rollout of its second generation of chips in that timeframe. Also, it sure helps that people love these new Macs.

Update: The December 18 version of Mark Gurman’s newsletter contains more details about the forthcoming Mac Pro:

An M2 Extreme [Gurman’s own term for two M2 Ultras] chip would have doubled that to 48 CPU cores and 152 graphics cores. But here’s the bad news: The company has likely scrapped that higher-end configuration, which may disappoint Apple’s most demanding users — the photographers, editors and programmers who prize that kind of computing power.

[…]

Instead, the Mac Pro is expected to rely on a new-generation M2 Ultra chip (rather than the M1 Ultra) and will retain one of its hallmark features: easy expandability for additional memory, storage and other components.

I am interested to see how this works in practice. One of the trademarks of Macs based on Apple’s silicon is the deep integration of all these components, ostensibly for performance reasons.

Nur Dayana Mustak, Bloomberg:

Zuckerberg, 38, now has a net worth of $38.1 billion, according to the Bloomberg Billionaires Index, a stunning fall from a peak of $142 billion in September 2021. While many of the world’s richest people have seen their fortunes tumble this year, Meta’s chief executive officer has seen the single-biggest hit among those on the wealth list.

As if you needed more reasons to be skeptical of billionaires’ motivations for ostensibly charitable uses of their wealth, here is another. Zuckerberg has tied the success of Meta to his family’s Chan Zuckerberg Initiative by funding it through their personally-held shares in Meta. According to its website, that foundation — an LLC, not a charity — is focused on finding cures for diseases, reducing youth homelessness, and improving education. If you like the sound of those things, you should therefore hope for a skyrocketing Meta stock price. If, on the other hand, shareholders are concerned that Meta’s business model is detrimental to society at large and do not approve of the company’s vision for its future, they are compromising the efforts of Zuckerberg’s foundation.

L’affaire the Wire sure has taken a turn since yesterday. First, Kanishk Karan, one of the security researchers who ostensibly verified the Wire’s evidence, has denied doing any such thing:

It has come to my attention that I’ve been listed as one of the “independent security researchers” who supposedly “verified” the Wire’s report on FB ‘Xcheck’ in India. I would like to confirm that I did NOT DO the DKIM verification for them.

Aditi Agrawal, of Newslaundry, confirmed the non-participation of both researchers cited by the Wire:

The first expert was initially cited in the Wire’s Saturday report to have verified the DKIM signature of a contested internal email. He is a Microsoft employee. Although his name was redacted from the initial story, his employer and his positions in the company were mentioned.

This expert – who was later identified by [Wire founding editor Siddharth] Varadarajan in a tweet – told Newslaundry he “did not participate in any such thing”.

Those factors plus lingering doubts about its reporting have led to this un-bylined note from the Wire:

In the light of doubts and concerns from experts about some of this material, and about the verification processes we used — including messages to us by two experts denying making assessments of that process directly and indirectly attributed to them in our third story — we are undertaking an internal review of the materials at our disposal. This will include a review of all documents, source material and sources used for our stories on Meta. Based on our sources’ consent, we are also exploring the option of sharing original files with trusted and reputed domain experts as part of this process.

An internal review is a good start, but the Wire damaged its credibility when it stood by its reporting for a week as outside observers raised questions. This was a serious process failure that stemmed from a real issue — a post was removed for erroneous reasons, though it has since been silently reinstated. The best case scenario is that, in trying to report that out, this publication relied on sources who appear to have fabricated evidence. This kind of scandal is rare but harmful to the press at large. An internal review may not be enough to overcome this breach of trust.

Last week, New Delhi-based the Wire published what seemed like a blockbuster story, claiming that posts reported by high-profile users protected by Meta’s XCheck program would be removed from Meta properties with almost no oversight — in India, at least, but perhaps elsewhere. As public officials’ accounts are often covered by XCheck, this would provide an effective way for them to minimize criticism. But Meta leadership disputed the story, pointing to inaccuracies in the supposed internal documentation obtained by the Wire.

The Wire stood by its reporting. On Saturday, Devesh Kumar, Jahnavi Sen and Siddharth Varadarajan published a response with more apparent evidence. It showed that @fb.com email addresses were still in use at Meta, in addition to newer @meta.com addresses, but that merely indicated the company is forwarding messages; the Wire did not show any very recent emails from Meta leadership using @fb.com addresses. The Wire also disputed Meta’s claim that instagram.workplace.com is not an actively used domain:

The Wire’s sources at Meta have said that the ‘instagram.workplace.com’ link exists as an internal subdomain and that it remains accessible to a restricted group of staff members when they log in through a specific email address and VPN. At The Wire’s request, one of the sources made and shared a recording of them navigating the portal and showing other case files uploaded there to demonstrate the existence and ongoing use of the URL.

Meta:

The account was set up externally as a free trial account on Meta’s enterprise Workplace product under the name “Instagram” and using the Instagram brand as its profile picture. It is not an internal account. Based on the timing of this account’s creation on October 13, it appears to have been set up specifically in order to manufacture evidence to support the Wire’s inaccurate reporting. We have locked the account because it’s in violation of our policies and is being used to perpetuate fraud and mislead journalists.

The screen recording produced for the Wire shows the source navigating to internalfb.com to log in. That is a real domain registered to Meta and with Facebook-specific domain name records. Whoever is behind this apparent hoax is working hard to make it believable. It is trivial for a technically sophisticated person to recreate that login page and point the domain on their computer to a modified local version instead of Meta’s hosted copy. I do not know if that is the case here, but it is plausible.
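For what it is worth, the redirection half of such a hoax requires nothing more exotic than a hosts file override. A minimal sketch of what that could look like on the demonstrating machine, purely as an illustration of the technique:

```
# /etc/hosts (macOS or Linux); illustration of the technique only.
# Requests for Meta's real domain resolve to a local web server
# hosting a look-alike copy of the login page instead of Meta's servers.
127.0.0.1    internalfb.com
```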

The Wire also produced a video showing an apparent verification of the DKIM signature in the email Andy Stone ostensibly sent to the “Internal” and “Team” mailing lists.1 However, the signature shown in the screen recording appears to have some problems. For one, the timestamp appears to be incorrect; for another, the signature is missing the “to” field which is part of an authentic DKIM signature for emails from fb.com, according to emails I have received from that domain.
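For readers unfamiliar with DKIM: the signature’s h= tag lists exactly which headers were covered when the message was signed, which is what makes the missing “to” field checkable. Here is a minimal sketch in TypeScript that parses a DKIM-Signature value by hand and reports whether “to” was among the signed headers; the example header is invented for illustration, not the one from the Wire’s video.

```typescript
// Parse a DKIM-Signature header's tags and check which headers were signed.
// The example value is illustrative, not the actual header in question.
const dkimSignature =
  "v=1; a=rsa-sha256; d=fb.com; s=example-selector; c=relaxed/simple; " +
  "h=from:subject:date:message-id:mime-version:content-type; " +
  "t=1665600000; bh=...; b=...";

function parseTags(header: string): Record<string, string> {
  const tags: Record<string, string> = {};
  for (const part of header.split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key) tags[key] = rest.join("="); // rejoin "=" inside base64 values
  }
  return tags;
}

const signedHeaders = (parseTags(dkimSignature)["h"] ?? "")
  .split(":")
  .map((h) => h.trim().toLowerCase());

// Genuine fb.com mail, per the emails referenced above, signs the "to"
// header; a signature whose h= list omits it is a red flag.
console.log(signedHeaders.includes("to") ? "'to' is signed" : "'to' is NOT signed");
```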

The Wire issued a statement acknowledging a personal relationship between one of its sources and a reporter. The statement was later edited to remove that declaration; the Wire appended a note to its statement saying it was changed to be “clearer about our relationships with our sources”. I think it became less clear as a result. The Wire also says Meta’s purpose in asking for more information is to expose its sources. I doubt that is true. When the Wall Street Journal published internal documents leaked by Frances Haugen, Meta did not claim they were faked or forged. For what it is worth, I believe Meta when it says the documentation obtained by the Wire is not real.

But this murky case still has one shred of validity: when posts get flagged, how does Meta decide whether the report is valid and what actions are taken? The post in question is an image that had no nudity or sexual content, yet it was reported and removed for that reason. Regardless of the validity of this specific story, Meta ought to be more accountable, particularly when it comes to moderating satire and commentary outside the United States. At the very least, the appearance of political interference under the banner of an American company does not look good.

Update: Alex Stamos tweeted about another fishy edit made by the Wire — a wrong timestamp on screenshots of emails from the experts who verified the DKIM signatures, silently changed after publishing.

Update: Pranesh Prakash has been tweeting through his discoveries. The plot thickens.


  1. There is, according to one reporter, no list at Meta called “Internal”. It also does not pass a smell test: what function would such an email list serve? This is wholly subjective, for sure, but think about what purpose an organization’s email lists serve, and then consider why a big organization like Meta would need one with a vague name like “Internal”. ↥︎

Felix Krause:

The iOS Instagram and Facebook app render all third party links and ads within their app using a custom in-app browser. This causes various risks for the user, with the host app being able to track every single interaction with external websites, from all form inputs like passwords and addresses, to every single tap.

This is because apps are able to manipulate the DOM and inject JavaScript into webpages loaded in in-app browsers. Krause elaborated today:

When you open any link on the TikTok iOS app, it’s opened inside their in-app browser. While you are interacting with the website, TikTok subscribes to all keyboard inputs (including passwords, credit card information, etc.) and every tap on the screen, like which buttons and links you click.

[…]

Instagram iOS subscribes to every tap on any button, link, image or other component on external websites rendered inside the Instagram app.

[…]

Note on subscribing: When I talk about “App subscribes to”, I mean that the app subscribes to the JavaScript events of that type (e.g. all taps). There is no way to verify what happens with the data.

Is TikTok a keylogger? Is Instagram monitoring every tap on a loaded webpage? It is impossible to say, but it does not look good that either of these privacy-invasive apps is so reckless with users’ ostensibly external activity.
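To illustrate what that kind of event subscription looks like in practice, here is a minimal sketch, in TypeScript, of the sort of script an in-app browser could inject into every page. It reflects the general pattern Krause describes rather than any app’s actual code, and the native “bridge” message handler name is hypothetical.

```typescript
// Sketch of instrumentation an in-app browser could inject into any page.
// Generic pattern only; the handler name "bridge" is hypothetical.
declare const webkit: {
  messageHandlers: { bridge: { postMessage(msg: unknown): void } };
};

function report(event: string, detail: Record<string, unknown>): void {
  // On iOS, script injected via a WKUserScript can hand data back to the
  // native app through a WKScriptMessageHandler like this one.
  webkit.messageHandlers.bridge.postMessage({ event, ...detail });
}

// Every tap: which element was touched and where.
document.addEventListener("click", (e) => {
  const target = e.target as HTMLElement;
  report("tap", { tag: target.tagName, x: e.clientX, y: e.clientY });
});

// Every key press in any field, including password and credit card inputs.
document.addEventListener("keydown", (e) => {
  report("key", { key: e.key });
});
```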

It reminds me of when iOS 14 revealed a bunch of apps, including TikTok, were automatically reading pasteboard data. It cannot be known for certain what happened to all of the credit card numbers, passwords, phone numbers, and private information collected by these apps. Perhaps some strings were discarded because they did not match the format an app was looking for, like a parcel tracking number or a URL. Or perhaps some ended up in analytics logs collected by the developer. We cannot know for sure.

What we do know is how invasive big-name applications are, and how little their developers really care about users’ privacy. There is no effort at minimization. On the contrary, there is plenty of evidence for maximizing the amount of information collected about each user at as granular a level as possible.

Kristin Cohen, of the U.S. Federal Trade Commission:

The conversation about technology tends to focus on benefits. But there is a behind-the-scenes irony that needs to be examined in the open: the extent to which highly personal information that people choose not to disclose even to family, friends, or colleagues is actually shared with complete strangers. These strangers participate in the often shadowy ad tech and data broker ecosystem where companies have a profit motive to share data at an unprecedented scale and granularity.

This sounds promising. Cohen says the FTC is ready to take action against companies and data brokers misusing health information, in particular, in a move apparently spurred or accelerated by the overturning of Roe v. Wade. So what is the FTC proposing?

[…] There are numerous state and federal laws that govern the collection, use, and sharing of sensitive consumer data, including many enforced by the Commission. The FTC has brought hundreds of cases to protect the security and privacy of consumers’ personal information, some of which have included substantial civil penalties. In addition to Section 5 of the FTC Act, which broadly prohibits unfair and deceptive trade practices, the Commission also enforces the Safeguards Rule, the Health Breach Notification Rule, and the Children’s Online Privacy Protection Rule.

I am no lawyer, so it would be ridiculous for me to try to interpret these laws. But what is there sure seems limited in scope — in order: personal information entrusted to financial companies, security breaches of health records, and children under 13 years old. This seems like the absolute bottom rung on the ladder of concerns. It is obviously good that the FTC is reiterating its enforcement capabilities, though revealing of its insipid authority, but what is it about those laws which will permit it to take meaningful action against the myriad anti-privacy practices covered by over-broad Terms of Use agreements?

Companies may try to placate consumers’ privacy concerns by claiming they anonymize or aggregate data. Firms making claims about anonymization should be on guard that these claims can be a deceptive trade practice and violate the FTC Act when untrue. Significant research has shown that “anonymized” data can often be re-identified, especially in the context of location data. One set of researchers demonstrated that, in some instances, it was possible to uniquely identify 95% of a dataset of 1.5 million individuals using four location points with timestamps. Companies that make false claims about anonymization can expect to hear from the FTC.

Many digital privacy advocates have been banging this drum for years. Again, I am glad to see it raised as an issue the FTC is taking seriously. But given the exuberant data broker market, how can any company that collects dozens or hundreds of data points honestly assert their de-identified data cannot be associated with real identities?

The only solution is for those companies to collect less user data and to pass even fewer points onto brokers. But will the FTC be given the tools to enforce this? Its funding is being increased significantly, so it will hopefully be able to make good on its cautionary guidance.

Cristiano Lima, Washington Post:

An academic study finding that Google’s algorithms for weeding out spam emails demonstrated a bias against conservative candidates has inflamed Republican lawmakers, who have seized on the results as proof that the tech giant tried to give Democrats an electoral edge.

[…]

That finding has become the latest piece of evidence used by Republicans to accuse Silicon Valley giants of bias. But the researchers said it’s being taken out of context.

[Muhammad] Shahzad said while the spam filters demonstrated political biases in their “default behavior” with newly created accounts, the trend shifted dramatically once they simulated having users put in their preferences by marking some messages as spam and others as not.

Shahzad and the other researchers who authored the paper have disputed the sweeping conclusions of bias drawn by lawmakers. Their plea for nuance has been ignored. Earlier this month, a group of senators introduced legislation to combat this apparent bias. It would prohibit email providers from automatically flagging any political messages as spam, and would require providers to publish quarterly reports detailing how many emails from political parties were filtered.

According to reporting from Mike Masnick at Techdirt, it looks like this bill was championed by Targeted Victory, which also promoted the study to conservative media channels. You may remember Targeted Victory from their involvement in Meta’s campaign against TikTok.

Masnick:

Anyway, looking at all this, it is not difficult to conclude that the digital marketing firm that Republicans use all the time was so bad at its job spamming people, that it was getting caught in spam filters. And rather than, you know, not being so spammy, it misrepresented and hyped up a study to pretend it says something it does not, blame Google for Targeted Victory’s own incompetence, and then have its friends in the Senate introduce a bill to force Google to not move its own emails to spam.

I am of two minds about this. A theme you may have noticed developing on this website over the last several years is a deep suspicion of automated technologies, however they are branded — “machine learning”, “artificial intelligence”, “algorithmic”, and the like. So I do think some scrutiny may be warranted in understanding how automated systems determine a message’s routing.

But it does not seem at all likely to me that a perceived political bias in filtering algorithms is deliberate, so any public report indicating the number or rate of emails from each political party being flagged as spam is wildly unproductive. It completely de-contextualizes these numbers and ignores decades of spam filters being inaccurate from time to time for no good reason.

A better approach to transparency around automated systems is one that helps the public understand how these decisions are made without playing to the bias perceived by parties with a victim complex. Simply counting the number of emails flagged as spam from each party is an idiotic approach. I, too, would like to know why many of the things recommended to me by algorithms are entirely misguided. This is not the way.

By the way, politicians have a long and proud history of exempting themselves from unfavourable regulations. Insider trading laws virtually do not apply to U.S. congresspersons, even with regulations that ostensibly rein it in. In Canada, politicians excluded themselves from laws governing unsolicited communications by phone and email. Is it any wonder polls have shown declining trust in institutions for decades?

Kashmir Hill, New York Times:

For $29.99 a month, a website called PimEyes offers a potentially dangerous superpower from the world of science fiction: the ability to search for a face, finding obscure photos that would otherwise have been as safe as the proverbial needle in the vast digital haystack of the internet.

A search takes mere seconds. You upload a photo of a face, check a box agreeing to the terms of service and then get a grid of photos of faces deemed similar, with links to where they appear on the internet. The New York Times used PimEyes on the faces of a dozen Times journalists, with their consent, to test its powers.

PimEyes found photos of every person, some that the journalists had never seen before, even when they were wearing sunglasses or a mask, or their face was turned away from the camera, in the image used to conduct the search.

You do not even need to pay the $30 per month fee. You can test PimEyes’ abilities for free.

PimEyes disclaims responsibility for the results of its search tool through some ostensibly pro-privacy language. In a blog post published, according to metadata visible in the page source, one day before the Times’ investigation, it says its database “contains no personal information”, like someone’s name or contact details. The company says it does not even have any photos, storing only “faceprint” data and URLs where matching photos may be found.

Setting aside the question of whether a “faceprint” ought to be considered personal information — it is literally information about a person, so I think it should — perhaps you have spotted the sneaky argument PimEyes is attempting to make here. It can promote the security of its database and its resilience against theft all it wants, but its real privacy problems are created entirely through its front-end marketed features. If its technology works anywhere near as well as marketed, a search will lead to webpages that do contain the person’s name and contact details.
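
For what it is worth, a “faceprint” in this context is, as far as I understand it, an embedding vector: a list of numbers produced by a model such that two photos of the same person land close together. Below is a minimal sketch of how such an index behaves, with a placeholder embed_face function standing in for whatever model PimEyes actually uses. Even though the stored data is only vectors and URLs, querying it with a new photo of someone still returns the pages where they appear.

```python
import numpy as np

def embed_face(face_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model.

    A real service would run a neural network that maps a face crop to a
    fixed-length vector; PimEyes' model is not public, so this stub only
    illustrates the shape of the data it says it stores.
    """
    vec = np.resize(face_pixels.astype(np.float64).ravel(), 128)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# The "database": no photos and no names, only faceprints and source URLs.
index: list[tuple[np.ndarray, str]] = []

def add_to_index(face_pixels: np.ndarray, url: str) -> None:
    index.append((embed_face(face_pixels), url))

def search(face_pixels: np.ndarray, top_k: int = 5) -> list[str]:
    """Return the URLs whose stored faceprints are most similar to the query."""
    query = embed_face(face_pixels)
    ranked = sorted(index, key=lambda item: -float(query @ item[0]))
    return [url for _, url in ranked[:top_k]]
```

That retrieval step is the entire product, which is why I struggle to see how the stored vectors could be anything other than personal information.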

PimEyes shares a problem common to all of these people-finding tools, no matter their source material: none of them seems dangerous in isolation, but together they can coalesce and correlate different data points into a complete profile. Take a picture of anyone, then dump it into PimEyes to find their name and, perhaps, a username or email address correlated with the image. Use a different people-based search engine to find profiles across the web that share the same online handle, or accounts registered with that email address. Each of those searches will undoubtedly lead to greater pools of information, and all of this is perfectly legal. The only way to avoid being a subject is to submit an opt-out request to services that offer it. Otherwise, if you exist online in any capacity, you are a token in this industry.

Hill:

PimEyes users are supposed to search only for their own faces or for the faces of people who have consented, Mr. Gobronidze said. But he said he was relying on people to act “ethically,” offering little protection against the technology’s erosion of the long-held ability to stay anonymous in a crowd. PimEyes has no controls in place to prevent users from searching for a face that is not their own, and suggests a user pay a hefty fee to keep damaging photos from an ill-considered night from following him or her forever.

This is such transparent bullshit. Gobronidze has to know that not everybody using the service is searching for pictures of themselves or of those who have consented. As Hill later writes, PimEyes requires more stringent validation for a request to opt out of its results than it does for a request to search.

Update: On July 16, Mara Hvistendahl of the Intercept reported on a particularly disturbing use of PimEyes:

The online facial recognition search engine PimEyes allows anyone to search for images of children scraped from across the internet, raising a host of alarming possible uses, an Intercept investigation has found.

It would be more acceptable if this service were usable only by a photo subject or their parent or guardian. As it is, PimEyes stands by its refusal to gate image searches, permitting any creep to search for images of anyone else through facial recognition.

Joseph Cox, Vice:

The Centers for Disease Control and Prevention (CDC) bought access to location data harvested from tens of millions of phones in the United States to perform analysis of compliance with curfews, track patterns of people visiting K-12 schools, and specifically monitor the effectiveness of policy in the Navajo Nation, according to CDC documents obtained by Motherboard. The documents also show that although the CDC used COVID-19 as a reason to buy access to the data more quickly, it intended to use it for more general CDC purposes.

Location data is information on a device’s location sourced from the phone, which can then show where a person lives, works, and where they went. The sort of data the CDC bought was aggregated — meaning it was designed to follow trends that emerge from the movements of groups of people — but researchers have repeatedly raised concerns with how location data can be deanonymized and used to track specific people.

Remember, during the early days of the pandemic, when the Washington Post published an article chastising Apple and Google for not providing health organizations full access to users’ physical locations? In the time since it was published, the two companies released their jointly developed exposure notification framework which, depending on where you live, has been either somewhat beneficial or mostly inconsequential. Perhaps unsurprisingly, regions with more consistent messaging and better privacy regulations seemed to find it more useful than places where there were multiple competing crappy apps.

I bring that up because it turns out a new app that invades your privacy in the way the Post seemed to want was unnecessary: a bunch of other apps on your phone already do that job just fine. And, for the record, that is terrible.

In a context vacuum, it would be better if health agencies were able to collect physical locations in a regulated and safe way for all kinds of diseases. But there have been at least two stories of wild overreach during this pandemic alone: this one, in which the CDC wanted location data for all sorts of uses beyond contact tracing, and Singapore’s acknowledgement that data from its TraceTogether app — not based on the Apple–Google framework — was made available to police. These episodes do not engender confidence.

Also — and I could write these words for any of the many posts I have published about the data broker economy — it is super weird how this data can be purchased by just about anyone. Any number of apps on our phones report our location to hundreds of these companies we have never heard of, and then a government agency or a media organization or some dude can just buy it in ostensibly anonymized form. This is the totally legal but horrific present.
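
To make the deanonymization concern concrete, here is a toy sketch, my own illustration rather than any agency’s or broker’s actual method, of why stripping names from location data offers so little protection: a trace of timestamped coordinates usually contains a home and a workplace, and that pair alone is nearly unique.

```python
from collections import Counter

def anchors(trace, night_hours=range(0, 6), day_hours=range(9, 17)):
    """Return the most common night-time and daytime coordinates in a trace,
    a rough proxy for a person's home and workplace.

    Each ping in the trace is a (lat, lon, hour) tuple with no name attached.
    """
    night = Counter((round(lat, 3), round(lon, 3))
                    for lat, lon, hour in trace if hour in night_hours)
    day = Counter((round(lat, 3), round(lon, 3))
                  for lat, lon, hour in trace if hour in day_hours)
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work

def reidentify(anonymous_trace, known_people):
    """Match a nameless trace against a directory of known home/work pairs."""
    pair = anchors(anonymous_trace)
    return [name for name, known_pair in known_people.items() if known_pair == pair]
```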

Reports like these underscore how frustrating it was to see the misplaced privacy panic over stuff like the Apple–Google framework or digital vaccine passports. Those systems were generally designed to require minimal information, report as little externally as possible, and use good encryption for communications. Meanwhile, the CDC can just click “add to cart” on the location of millions of phones.

Taylor Lorenz and Drew Harwell, the Washington Post:

Facebook parent company Meta is paying one of the biggest Republican consulting firms in the country to orchestrate a nationwide campaign seeking to turn the public against TikTok.

The campaign includes placing op-eds and letters to the editor in major regional news outlets, promoting dubious stories about alleged TikTok trends that actually originated on Facebook, and pushing to draw political reporters and local politicians into helping take down its biggest competitor. These bare-knuckle tactics, long commonplace in the world of politics, have become increasingly noticeable within a tech industry where companies vie for cultural relevance and come at a time when Facebook is under pressure to win back young users.

Employees with the firm, Targeted Victory, worked to undermine TikTok through a nationwide media and lobbying campaign portraying the fast-growing app, owned by the Beijing-based company ByteDance, as a danger to American children and society, according to internal emails shared with The Washington Post.

Zac Moffatt, Targeted Victory’s CEO, disputed this reporting on Twitter, but many of his complaints are effectively invalid. He complains that only part of the company’s statement was included by the Post, but the full statement fits into a tweet and is pretty vacuous. The Post says the company refused to answer specific questions, which Moffatt has not disputed.

Moffatt also says the Post called two letters to the editor a “scorched earth campaign”, but the oldest copy of the story I could find, captured just twenty minutes after publishing and well before Moffatt tweeted, does not contain that phrasing, and neither does the current copy. I am not sure where that is from.

But one thing Moffatt does nail the Post on, a little bit, is its own reporting on TikTok moral panics. For example, the “slap a teacher challenge” was roundly debunked when it began making headlines in early October 2021 and was traced back to rumours appearing on Facebook a month earlier, but that did not stop the Post from reporting on it. It appears Targeted Victory used the Post’s reporting, along with that of other publications, to stoke concerns about this entirely fictional story. That is embarrassing for the Post, which cited teachers and school administrators for its story.

The Post should do better. But it is agencies like Targeted Victory that the Post and other media outlets should be steeling themselves against, as well as in-house corporate public relations teams. When reporters receive a tip about a company’s behaviour — positive or negative — the source of that information can matter as much as the story itself. It is why I still want more information about the Campaign for Accountability’s funders: it has been successful in getting media outlets to cover its research critical of tech companies, but its history with Oracle has muddied the waters of its ostensibly pure concern. Oracle also tipped off Quartz reporters to that big Google location data scandal a few years ago. These sources are not neutral. While the stories may be valid, readers should not be misled about their origin.

Troy Hunt on Twitter:

Why are you still claiming this @digicert? This is extremely misleading, anyone feel like reporting this to the relevant advertising standards authority in their jurisdiction? https://www.digicert.com/faq/when-to-use-ev-ssl.htm

The linked page touted some supposed benefits of Extended Validation SSL certificates. Those are the certificates that promise to tie a company’s identity to its website, a claim ostensibly confirmed by the company’s name appearing in a web browser’s address bar alongside the HTTPS icon.
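
To be clear, the identity information EV certificates carry still exists; browsers simply stopped displaying it. As a rough illustration using Python’s standard library, with the hostname only as an example, anyone can pull a site’s certificate and read the organisation name from its subject, the field EV and OV certificates populate:

```python
import socket
import ssl

def certificate_subject(host: str, port: int = 443) -> dict:
    """Fetch a site's TLS certificate and return its subject fields as a dict."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'subject' is a tuple of relative distinguished names, for example
    # ((('organizationName', 'Example Corp'),), ...)
    return {name: value for rdn in cert["subject"] for name, value in rdn}

if __name__ == "__main__":
    subject = certificate_subject("www.digicert.com")
    print(subject.get("organizationName", "no organisation listed"))
```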

Troy Hunt:

I have a vehement dislike for misleading advertising. We see it every day; weight loss pills, make money fast schemes and if you travel in the same circles I do, claims that extended validation (EV) certificates actually do something useful:

[…]

Someone had reached out to me privately and shared the offending page as they’d taken issue with the false claims DigiCert was making. My views on certificate authority shenanigans spinning yarns on EV are well known after having done many talks on the topic and written many blog posts, most recently in August 2019 after both Chrome and Firefox announced they were killing it. When I say “kill”, that never meant that EV would no longer technically work, but it killed the single thing spruikers of it relied upon – being visually present beside the address bar. That was 2 and a half years ago, so why is DigiCert still pimping the message about the green bar with the company name? Beats me (although I could gue$$), but clearly DigiCert had a change of heart after that tweet because a day later, the offending image was gone. You can still see the original version in the Feb 9 snapshot on archive.org.

Website identity is a hard thing to prove, even to those who are somewhat technically literate. Bad security advice is commonplace, but it is outrageous to see companies like DigiCert leaning on such frail justifications as marketing fodder.