An Investigation Into Clearview AI, a Highly Accurate Facial Recognition Company That Uses Images Scraped From Social Media
Kashmir Hill, New York Times:
His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.
Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.
And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.
This investigation was published on Saturday. I’ve read it a few times, and it has profoundly disturbed me on every pass, but it hasn’t surprised me. I’m not cynical; it’s just unsurprising that an entirely unregulated industry, motivated to push privacy ethics to its revenue-generating limits, would move in this direction.
Clearview’s technology makes my skin crawl; the best you can say about the company is that its limited access prevents the most egregious privacy violations. When something like this is more widely available, it will be dangerous for those who already face greater threats to their safety and privacy — women, in particular, but also those who are marginalized for their race, skin colour, gender, and sexual orientation. Nothing will change on this front if we don’t set legal expectations that limit how technologies like this may be used.