BuzzFeed News Obtains Clearview AI’s Client List ⇥ buzzfeednews.com
About a week ago, Hoan Ton-That, the CEO of Clearview AI — the creepy facial recognition company revealed by the New York Times in January, which has built a database of photos posted to social media — claimed in an interview on Fox Business that his company’s technology was “strictly for law enforcement to do investigations”. That claim has been exposed as a lie after BuzzFeed News obtained a leaked copy of Clearview’s client list.
Ryan Mac, Caroline Haskins, and Logan McDonald:
The internal documents, which were uncovered by a source who declined to be named for fear of retribution from the company or the government agencies named in them, detail just how far Clearview has been able to distribute its technology, providing it to people everywhere, from college security departments to attorneys general offices, and in countries from Australia to Saudi Arabia. BuzzFeed News authenticated the logs, which list about 2,900 institutions and include details such as the number of log-ins, the number of searches, and the date of the last search. Some organizations did not have log-ins or did not run searches, according to the documents, and BuzzFeed News is only disclosing the entities that have established at least one account and performed at least one search.
[…]
“This is completely crazy,” Clare Garvie, a senior associate at the Center on Privacy and Technology at Georgetown Law School, told BuzzFeed News. “Here’s why it’s concerning to me: There is no clear line between who is permitted access to this incredibly powerful and incredibly risky tool and who doesn’t have access. There is not a clear line between law enforcement and non-law enforcement.”
Ryan Mac on Twitter:
Reporting this story was surreal. Numerous organizations initially denied that they had ever used Clearview. We then followed up, and those same orgs later found that employees had signed up and used the software without approval from higher ups. This happened multiple times.
A lack of general privacy principles written into law makes it possible for Clearview to indiscriminately sell its highly accurate facial recognition software with little oversight. That is extremely concerning. It should not be so easy for a company to reduce everyone’s expectation of privacy to zero in pursuit of profit.
Update: Alanna Smith, Calgary Herald:
The Calgary Police Service has confirmed two of its officers tested controversial facial-recognition software made by Clearview AI, Postmedia has learned.
While the police service doesn’t use Clearview AI in any capacity, it said two of its members had tested the technology to see if it was worthwhile for potential investigative use.
[…]
Both the Calgary Police Service and the Edmonton Police Service had denied use of the software earlier this month, but both have since come forward with reports that several of their officers had tested the Clearview AI software.
As Mac pointed out, there’s a curious pattern to these responses: agencies that vehemently denied using Clearview are now turning around and admitting that they have used it, at least in some capacity.