Search Results for: Clearview

John Oliver:

This technology raises troubling philosophical questions about personal freedom, and, right now, there are also some very immediate practical issues. Even though it is currently being used, this technology is still very much a work in progress, and its error rate is particularly high when it comes to matching faces in real time. In fact, in the U.K., when human rights researchers watched police put one such system to the test, they found that only eight out of 42 matches were ‘verifiably correct’ — and that’s even before we get into the fact that these systems can have some worrying blind spots, as one researcher found out when testing numerous algorithms, including Amazon’s own Rekognition system:

At first glance, MIT researcher Joy Buolamwini says that the overall accuracy rate was high, even though all companies better detected men’s faces than women’s. But the error rate grew as she dug deeper.

“Lighter male faces were the easiest to guess the gender on, and darker female faces were the hardest.”

One system couldn’t even detect whether she had a face. The others misidentified her gender. White guy? No problem.

Yeah: “white guy? No problem” which, yes, is the unofficial motto of history, but it’s not like what we needed right now was to find a way for computers to exacerbate the problem. And it gets worse. In one test, Amazon’s system even failed on the face of Oprah Winfrey, someone so recognizable her magazine only had to type the first letter of her name and your brain autocompleted the rest.
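
Buolamwini’s method, breaking an evaluation down by subgroup instead of reporting one headline accuracy figure, is worth making concrete. Here is a minimal, hypothetical sketch in Python; the records are invented for illustration and are not the Gender Shades data:

```python
from collections import defaultdict

# Hypothetical evaluation records: (subgroup, predicted gender, actual gender).
# These rows are invented for illustration; the Gender Shades study used a
# balanced benchmark of parliamentarians' portraits.
results = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("lighter_female", "female", "female"),
    ("darker_male", "male", "male"),
    ("darker_female", "male", "female"),  # a misclassification
    ("darker_female", "female", "female"),
]

totals, errors = defaultdict(int), defaultdict(int)
for subgroup, predicted, actual in results:
    totals[subgroup] += 1
    if predicted != actual:
        errors[subgroup] += 1

# One headline number hides the disparity between subgroups.
overall = 1 - sum(errors.values()) / len(results)
print(f"overall accuracy: {overall:.0%}")
for subgroup in totals:
    print(f"{subgroup}: error rate {errors[subgroup] / totals[subgroup]:.0%}")
```

The aggregate number can look reassuring while one subgroup’s error rate is dramatically worse, which is exactly what the research found.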

Oliver covers a wide range of things that fit under the umbrella term “facial recognition” — everything from Face ID to police databases and Clearview AI.

Today, the RCMP and Clearview suspended their contract; the RCMP was, apparently, Clearview’s last remaining client in Canada.

Such a wide range of technologies raises complex questions about regulation. A sweeping ban could inadvertently prohibit something like Face ID or Windows Hello, while even a consent requirement would make it difficult to build something like the People library built into Photos. Here’s how Apple describes it:

Face recognition and scene and object detection are done completely on your device rather than in the cloud. So Apple doesn’t know what’s in your photos. And apps can access your photos only with your permission.

Apple even put together a lengthy white paper (PDF) that, in part, describes how iOS and macOS keep various features in Photos private to the user. In this case, however, the question is not about the privacy of one’s own data, but about whether it is fair for someone to use facial recognition privately. It is a question of agency. Is it fair for anyone to have their face used, without their permission, to automatically group pictures of them? Perhaps it is; but is it then fair to do the same more publicly, as Facebook does? Where is the comfortable line?

I don’t mean that as a rhetorical question. As Oliver often says, “the answer to the question of ‘where do we draw the line?’ is somewhere”, and I think there is a “somewhere” in the case of facial recognition. But the legislation to define it will need to be very nuanced.
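
As an aside, the general shape of a feature like the People library is straightforward to sketch, which is part of why the consent question presses. What follows is a hedged illustration, not Apple’s implementation: each detected face is reduced to an embedding vector, and faces are grouped by proximity. The vectors are invented stand-ins:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Stand-in face embeddings: in a real system, an on-device neural network
# maps each detected face to a vector such that the same person's faces
# land close together. These values are invented for illustration.
rng = np.random.default_rng(0)
person_a = rng.normal(loc=0.0, scale=0.05, size=(4, 128))
person_b = rng.normal(loc=1.0, scale=0.05, size=(3, 128))
embeddings = np.vstack([person_a, person_b])

# Group faces without knowing how many people there are, by merging
# any faces closer together than a distance threshold.
clusters = AgglomerativeClustering(
    n_clusters=None, distance_threshold=5.0
).fit_predict(embeddings)

print(clusters)  # e.g. [0 0 0 0 1 1 1]: two "People" groups, no names attached
```

Everything above can run on-device, which addresses the data privacy concern; it does nothing to address the agency question of whose faces get grouped.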

Rebecca Heilweil, Vox:

So it seems that as facial recognition systems become more ambitious — as their databases become larger and their algorithms are tasked with more difficult jobs — they become more problematic. Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, told Recode that facial recognition needs to be evaluated on a “sliding scale of harm.”

When the technology is used in your phone, it spends most of its time in your pocket, not scanning through public spaces. “A Ring camera, on the other hand, isn’t deployed just for the purpose of looking at your face,” Guariglia said. “If facial recognition was enabled, that’d be looking at the faces of every pedestrian who walked by and could be identifying them.”

[…]

A single law regulating facial recognition technology might not be enough. Researchers from the Algorithmic Justice League, an organization that focuses on equitable artificial intelligence, have called for a more comprehensive approach. They argue that the technology should be regulated and controlled by a federal office. In a May proposal, the researchers outlined how the Food and Drug Administration could serve as a model for a new agency that would be able to adapt to a wide range of government, corporate, and private uses of the technology. This could provide a regulatory framework to protect consumers from what they buy, including devices that come with facial recognition.

This is such a complex field of technology that it will take a while to establish ground rules and expectations. Something like Clearview AI’s system should not be allowed; it is a heinous abuse of publicly visible imagery. Real-time recognition is extremely creepy and, I believe, should also be prohibited.

There are further complications: while the U.S. may still be sorting out its comfort level, boundaries that were drawn elsewhere are already proving hard to enforce.

Adam Satariano, New York Times:

The law, known as the General Data Protection Regulation, or G.D.P.R., created new limits on how companies can collect and share data without user consent. It gave governments broad authority to impose fines of up to 4 percent of a company’s global revenue, or to force changes to its data-collection practices. The policy served as a model for new privacy rules in Brazil, Japan, India and elsewhere.

But since the law was enacted, in May 2018, Google has been the only giant tech company to be penalized — a fine of 50 million euros, worth roughly $54 million today, or about one-tenth of what Google generates in sales each day. No major fines or penalties have been announced against Facebook, Amazon or Twitter.

The inaction is creating tension within European governments, as some leaders call for speedier enforcement and broader changes. Privacy groups and smaller tech companies complain that companies like Facebook and Google are avoiding tough oversight. At the same time, the public’s experience with the G.D.P.R. has been a frustrating number of pop-up consent windows to click through when visiting a website.

It seems bizarre to treat the total number of punishments issued as a barometer of a law’s effectiveness. Surely a regulation’s primary purpose is to change behaviour, particularly as GDPR was passed two years before it went into effect. Indeed, the law seems to have made some headway: tech companies now offer ways for users to download their personal data — something that was never possible before and is increasingly important — users can now revoke permissions, and companies of all types have been forced to reevaluate how much personal information they collect. The New York Times itself changed its ad policies in the E.U., and total penalties assessed have been in the range of hundreds of millions of euros. The law is, to some extent, working — and those cookie consent forms are broadly illegal.

But it is frustrating to read how poorly resourced GDPR investigators have been, particularly in Ireland, where big tech companies have historically situated their E.U. headquarters for tax avoidance reasons. For what it’s worth, I doubt any regulator could have fully unravelled the rats’ nest of data harvesting technologies in two years, but the effort is certainly not helped by insufficient funding and corporate obfuscation tactics.

Thomas Smith, OneZero:

What does a Clearview profile contain? Up until recently, it would have been almost impossible to find out. Companies like Clearview were not required to share their data, and could easily build massive databases of personal information in secret.

Thanks to two landmark pieces of legislation, though, that is changing. In 2018, the European Union began enforcing the General Data Protection Regulation (GDPR). And on January 1, 2020, an equivalent piece of legislation, the California Consumer Privacy Act (CCPA), went into effect in my home state.

[…]

Within a week of the Times’ exposé, I submitted my own CCPA request to Clearview. For about a month, I got no reply. The company then asked me to fill out a web form, which I did. Another several weeks passed. I finally received a message from Clearview asking for a copy of my driver’s license and a clear photo of myself.

I provided these. In minutes, they sent back my profile.

Companies like Clearview AI are the next step beyond the kind of “data enrichment” firms that suffered a massive data breach last year. After that breach, I submitted requests to every big data enrichment company I could find to see what they had on me. Many had nothing, but a few had built extremely accurate profiles of me based solely on whatever they could scrape. I had never heard of these companies; I had to ask them, individually, to delete anything they had on me.

A unique quality of inherently unethical companies is that, once the narrative thread starts to unravel, the whole thing collapses pretty quickly. As reporters begin digging in and those with insider knowledge begin to speak up, it becomes hard for a public relations team to keep everything in a tidy, well-packaged story.

So, let’s look at a few developments regarding Clearview AI.

Dave Gershgorn, OneZero:

Clearview AI worked to build a national database of every mug shot taken in the United States during the past 15 years, according to an email obtained by OneZero through a public records request.

[…]

It’s unclear how many images a national database of mug shots would add to the online sources Clearview AI has already scraped. For context, the FBI’s national facial recognition database contains 30 million mug shots. Vigilant Solutions, another facial recognition company, has also compiled a database of 15 million mug shots from public sources.

Caroline Haskins, Ryan Mac, and Logan McDonald, Buzzfeed News:

Clearview AI, the secretive company that’s built a database of billions of photos scraped without permission from social media and the web, has been testing its facial recognition software on surveillance cameras and augmented reality glasses, according to documents seen by BuzzFeed News.

Clearview, which claims its software can match a picture of any individual to photos of them that have been posted online, has quietly been working on a surveillance camera with facial recognition capabilities. That device is being developed under a division called Insight Camera, which has been tested by at least two potential clients according to documents.

On its website — which was taken offline after BuzzFeed News requested comment from a Clearview spokesperson — Insight said it offers “the smartest security camera” that is “now in limited preview to select retail, banking and residential buildings.”

Kashmir Hill, New York Times:

In response to the criticism, Clearview published a “code of conduct,” emphasizing in a blog post that its technology was “available only for law enforcement agencies and select security professionals to use as an investigative tool.”

The post added: “We recognize that powerful tools always have the potential to be abused, regardless of who is using them, and we take the threat very seriously. Accordingly, the Clearview app has built-in safeguards to ensure these trained professionals only use it for its intended purpose: to help identify the perpetrators and victims of crimes.”

The Times, however, has identified multiple individuals with active access to Clearview’s technology who are not law enforcement officials. And for more than a year before the company became the subject of public scrutiny, the app had been freely used in the wild by the company’s investors, clients and friends.

Those with Clearview logins used facial recognition at parties, on dates and at business gatherings, giving demonstrations of its power for fun or using it to identify people whose names they didn’t know or couldn’t recall.

Any one of these stories would, in isolation, be worrying. But seeing all three together — particularly in the context of the things I’ve linked to about Clearview over the past several weeks — shines a light on a distressing nascent industry. I strongly suspect that there are other companies exactly like Clearview that are taking steps to avoid exposure.

This industry simply should not exist.

Logan McDonald, Ryan Mac, and Caroline Haskins, Buzzfeed News:

In distributing its app for Apple devices, Clearview, which BuzzFeed News reported earlier this week has been used by more than 2,200 public and private entities including Immigration and Customs Enforcement (ICE), the FBI, Macy’s, Walmart, and the NBA, has been sidestepping the Apple App Store, encouraging those who want to use the software to download its app through a program reserved exclusively for developers. In response to an inquiry from BuzzFeed News, Apple investigated and suspended the developer account associated with Clearview, effectively preventing the iOS app from operating.

An Apple spokesperson told BuzzFeed News that the Apple Developer Enterprise Program should only be used to distribute apps within a company. Companies that violate that rule, the spokesperson said, are subject to revocation of their accounts. Clearview has 14 days to respond to Apple.

Zack Whittaker, TechCrunch:

TechCrunch found Clearview AI’s iPhone app on a public Amazon S3 storage bucket on Thursday, despite a warning on the page that the app is “not to be shared with the public.”

The page asks users to “open this page on your iPhone” to install and approve the company’s enterprise certificate, allowing the app to run.

But this, according to Apple’s policies, is prohibited if the app’s users are outside of Clearview AI’s organization.
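
For context on the mechanism: enterprise, or “in-house”, iOS apps are installed through Apple’s itms-services URL scheme, which points the device at a signed manifest describing the app. That is why a simple web page is enough to offer the install. A minimal sketch of how such a link is formed, with a hypothetical manifest URL:

```python
from urllib.parse import urlencode

# Enterprise-distributed iOS apps install via the itms-services:// scheme,
# which points Safari at a signed manifest.plist describing the .ipa file.
# The manifest URL below is hypothetical, for illustration only.
manifest_url = "https://example.com/app/manifest.plist"
install_link = "itms-services://?" + urlencode({
    "action": "download-manifest",
    "url": manifest_url,
})
print(install_link)
# itms-services://?action=download-manifest&url=https%3A%2F%2Fexample.com%2Fapp%2Fmanifest.plist
```

Anyone who opens a link like this and trusts the associated enterprise certificate can run the app, which is precisely why Apple limits the program to internal distribution.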

Dell Cameron, Dhruv Mehrotra, and Shoshana Wodinsky of Gizmodo found the Android version of the app yesterday as well. They were unable to log in, but observed connections being opened to third-party app analytics providers:

Clearview CEO Hoan Ton-That said in an email to Gizmodo that the companion app is a prototype and “is not an active product.” RealWear, another company, which makes “a powerful, fully-rugged, voice operated Android computer” that is “worn on the head,” is also mentioned in the app, though it’s not immediately clear what for.

The app also contains a script created by Google for scanning barcodes in connection with drivers licenses. (The file is named “Barcode$DriverLicense.smali”) Asked about the feature, Ton-That responded: “It doesn’t scan drivers licenses.” Gizmodo also inquired about the app’s so-called “private search mode” but did not get a response.

The company frequently dodges difficult but legitimate questions, and its clients deny all knowledge before recanting when presented with evidence. Everything about Clearview is skeevy, and it should not exist. I propose that everything it has ever created be sunk into the ocean.

About a week ago, Hoan Ton-That, the CEO of Clearview AI — the creepy facial recognition company that the New York Times revealed in January and which has a database filled with photos posted to social media — claimed in an interview on Fox Business that his company’s technology was “strictly for law enforcement to do investigations”. That was revealed to be a lie when Buzzfeed News acquired a leaked copy of Clearview’s client list.

Ryan Mac, Caroline Haskins, and Logan McDonald:

The internal documents, which were uncovered by a source who declined to be named for fear of retribution from the company or the government agencies named in them, detail just how far Clearview has been able to distribute its technology, providing it to people everywhere, from college security departments to attorneys general offices, and in countries from Australia to Saudi Arabia. BuzzFeed News authenticated the logs, which list about 2,900 institutions and include details such as the number of log-ins, the number of searches, and the date of the last search. Some organizations did not have log-ins or did not run searches, according to the documents, and BuzzFeed News is only disclosing the entities that have established at least one account and performed at least one search.

[…]

“This is completely crazy,” Clare Garvie, a senior associate at the Center on Privacy and Technology at Georgetown Law School, told BuzzFeed News. “Here’s why it’s concerning to me: There is no clear line between who is permitted access to this incredibly powerful and incredibly risky tool and who doesn’t have access. There is not a clear line between law enforcement and non-law enforcement.”

Ryan Mac on Twitter:

Reporting this story was surreal. Numerous organizations initially denied that they had ever used Clearview. We then followed up, and those same orgs later found that employees had signed up and used the software without approval from higher ups. This happened multiple times.

A lack of general privacy principles written into law makes it possible for Clearview to indiscriminately sell its highly accurate facial recognition software with little oversight. That is extremely concerning. It should not be so trivial to reduce the overall expectation of privacy to zero for a company’s profits.

Update: Allana Smith, Calgary Herald:

The Calgary Police Service has confirmed two of its officers tested controversial facial-recognition software made by Clearview AI, Postmedia has learned.

While the police service doesn’t use Clearview AI in any capacity, it said two of its members had tested the technology to see if it was worthwhile for potential investigative use.

[…]

Both the Calgary Police Service and the Edmonton Police Service had denied use of the software earlier this month, but both have since come forward with reports that several of their officers had tested the Clearview AI software.

As Mac pointed out, there’s a curious pattern to these responses: agencies that vehemently denied using Clearview are now turning around and admitting that they have used it, at least in some capacity.

CBS News, in an unbylined story:

Google and YouTube have sent a cease-and-desist letter to Clearview AI, a facial recognition app that scrapes images from websites and social media platforms, CBS News has learned. The tech companies join Twitter, which sent a similar letter in January, in trying to block the app from taking pictures from their platforms.

[…]

Ton-That argued that Clearview AI has a First Amendment right to access public data. “The way we have built our system is to only take publicly available information and index it that way,” he said.

I’m no lawyer, but that’s certainly a creative interpretation of the First Amendment.

For what it’s worth, a 2018 ruling indicates that website scraping does have some First Amendment protections, but that case was decided in the context of research; it is unclear whether the same reasoning would extend to copying third-party materials for first-party profit.

Facebook and Venmo also said scraping was against their policies, but have so far not sent cease-and-desist letters.

Peter Thiel sits on Facebook’s board and is also an investor in Clearview. You’d think he could pass along the message.

Bruce Schneier, in an op-ed for the New York Times:

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect. The large internet surveillance companies like Facebook and Google collect dossiers on us more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

This is a timely article — not only because of the publicity Clearview AI has received, but also because the European Commission is considering a ban on the use of facial recognition in public, something London’s Metropolitan Police have announced they will be deploying widely.

Ryan Mac, Caroline Haskins, and Logan McDonald, Buzzfeed News:

Originally known as Smartcheckr, Clearview was the result of an unlikely partnership between Ton-That, a small-time hacker turned serial app developer, and Richard Schwartz, a former adviser to then–New York mayor Rudy Giuliani. Ton-That told the Times that they met at a 2016 event at the Manhattan Institute, a conservative think tank, after which they decided to build a facial recognition company.

While Ton-That has erased much of his online persona from that time period, old web accounts and posts uncovered by BuzzFeed News show that the 31-year-old developer was interested in far-right politics. In a partial archive of his Twitter account from early 2017, Ton-That wondered why all big US cities were liberal, while retweeting a mix of Breitbart writers, venture capitalists, and right-wing personalities.

It is revealing that the people behind tools that are borderline unethical and threaten our privacy expectations often also happen to be aggressively protective of their own privacy.

Nelson Minar, building on a report published last month by Bellingcat about Yandex’s superior reverse image search:

Right now an ordinary person still can’t, for free, take a random photo of a stranger and find the name for him or her. But with Yandex they can. Yandex has been around a long time and is one of the few companies in the world that is competitive to Google. Their index is heavily biased to Eastern European data, but they have enough global data to find me and Andrew Yang.

If you use Google Photos or Facebook you’ve probably encountered their facial recognition. It’s magic, the matching works great. It’s also very limited. Facebook seems to only show you names for faces of people you have some sort of Facebook connection to. Google Photos similarly doesn’t volunteer random names. They could do more; Facebook could match a face to any Facebook user, for instance. But both services seem to have made a deliberate decision not to be a general purpose facial recognition service to identify strangers.

At the time that I linked to the Bellingcat report, I wondered why Google’s reverse image search, in particular, was so bad in comparison. In tests, it even missed imagery from Google Street View, despite Google regularly promoting its abilities in machine learning, image identification, and so on. It is now clear to me that this is no oversight: Google’s image search is bad at finding faces because Google designed it that way. Otherwise, Google would have launched something like Yandex or Clearview AI, and that would be dangerous.

Google’s restraint is admirable. What’s deeply worrying is that it is optional — that Google could, at any time, change its mind. There are few regulations in the United States that would prevent Google or any other company from launching its own invasive and creepy facial recognition system. Recall that the only violation that could be ascribed to Clearview’s behaviour — other than an extraordinary violation of simple ethics — is that the company scraped social media sites’ images without permission. It is a pretty stupid idea to rely solely upon copyright law as a means of reining in facial recognition.
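
To underline how thin the technical barrier has become, the core matching step is available today in open-source software. Here is a hedged sketch using the freely available face_recognition package; the filenames are placeholders, and each photo is assumed to contain one detectable face:

```python
import face_recognition

# A tiny "database" of one known face; the filename is a placeholder.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A new photo of an unknown person, e.g. scraped from the public web.
unknown_image = face_recognition.load_image_file("unknown.jpg")
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Compare: distances below roughly 0.6 are conventionally treated as
# the same person by this library.
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
print(f"distance={distance:.2f}, match={match}")
```

The hard part of a Clearview-scale system is not the matching; it is the scraped, billions-of-images index behind it.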

Kashmir Hill, New York Times:

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.

This investigation was published on Saturday. I’ve read it a few times, and it has profoundly disturbed me on every pass; it has not, however, surprised me. I’m not cynical, but an entirely unregulated industry motivated to push privacy ethics to their revenue-generating limits was always likely to move in this direction.
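
Part of what makes it unsurprising is how ordinary the underlying search step is. The lookup the Times describes, one probe photo against billions of stored faces, is a standard nearest-neighbour query over embedding vectors. A minimal sketch with invented data follows; a real system would use an approximate index rather than this brute-force scan:

```python
import numpy as np

# Invented stand-in embeddings: a real system would hold billions of
# vectors, one per scraped face, behind an approximate-nearest-neighbour
# index rather than this brute-force scan.
rng = np.random.default_rng(1)
database = rng.normal(size=(10_000, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)

probe = database[1234] + rng.normal(scale=0.01, size=128)  # a noisy re-shot
probe /= np.linalg.norm(probe)

# Cosine similarity against every stored face; the top hits are the "matches".
scores = database @ probe
top = np.argsort(scores)[-5:][::-1]
print(top)  # indices of the closest stored faces; 1234 should rank first
```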

Clearview’s technology makes my skin crawl; the best you can say about the company is that its limited access prevents the most egregious privacy violations. When something like this is more widely available, it will be dangerous for those who already face greater threats to their safety and privacy — women, in particular, but also those who are marginalized for their race, skin colour, gender, and sexual orientation. Nothing will change on this front if we don’t set legal expectations that limit how technologies like this may be used.