Written by Nick Heer.

The Pit of Bad Decisions Made by Clearview AI Simply Has No Bottom

A distinguishing quality of inherently unethical companies is that, once the narrative thread starts to unravel, the whole thing collapses pretty quickly. As reporters begin digging and those with insider knowledge begin to speak up, it becomes hard for their public relations teams to keep everything wrapped in a tidy, well-packaged story.

So, let’s look at a few developments regarding Clearview AI.

Dave Gershgorn, OneZero:

Clearview AI worked to build a national database of every mug shot taken in the United States during the past 15 years, according to an email obtained by OneZero through a public records request.

It’s unclear how many images a national database of mug shots would add to the online sources Clearview AI has already scraped. For context, the FBI’s national facial recognition database contains 30 million mug shots. Vigilant Solutions, another facial recognition company, has also compiled a database of 15 million mug shots from public sources.

Caroline Haskins, Ryan Mac, and Logan McDonald, Buzzfeed News:

Clearview AI, the secretive company that’s built a database of billions of photos scraped without permission from social media and the web, has been testing its facial recognition software on surveillance cameras and augmented reality glasses, according to documents seen by BuzzFeed News.

Clearview, which claims its software can match a picture of any individual to photos of them that have been posted online, has quietly been working on a surveillance camera with facial recognition capabilities. That device is being developed under a division called Insight Camera, which has been tested by at least two potential clients according to documents.

On its website — which was taken offline after BuzzFeed News requested comment from a Clearview spokesperson — Insight said it offers “the smartest security camera” that is “now in limited preview to select retail, banking and residential buildings.”

Kashmir Hill, New York Times:

In response to the criticism, Clearview published a “code of conduct,” emphasizing in a blog post that its technology was “available only for law enforcement agencies and select security professionals to use as an investigative tool.”

The post added: “We recognize that powerful tools always have the potential to be abused, regardless of who is using them, and we take the threat very seriously. Accordingly, the Clearview app has built-in safeguards to ensure these trained professionals only use it for its intended purpose: to help identify the perpetrators and victims of crimes.”

The Times, however, has identified multiple individuals with active access to Clearview’s technology who are not law enforcement officials. And for more than a year before the company became the subject of public scrutiny, the app had been freely used in the wild by the company’s investors, clients and friends.

Those with Clearview logins used facial recognition at parties, on dates and at business gatherings, giving demonstrations of its power for fun or using it to identify people whose names they didn’t know or couldn’t recall.

Any one of these stories would, in isolation, be worrying. But taken together, particularly in the context of the things I’ve linked to about Clearview over the past several weeks, they shine a light on a distressing nascent industry. I strongly suspect that there are other companies just like Clearview that are taking steps to avoid similar exposure.

This industry simply should not exist.