Search Results for: Clearview

David Jeans, Forbes:

Clearview co-founder Hoan Ton-That stepped back as CEO in December and took on the role of president. Then, on Friday, following a Forbes inquiry, he told the company he was resigning, noting he was leaving to start the “next chapter of my life.” In a statement to Forbes, Ton-That said he would continue serving as a board member.

Jeans does not say why Forbes would have recently contacted Clearview or Ton-That personally. I am curious about the circumstances of his resignation.

Jeans:

Early investor and board member Hal Lambert took over as co-CEO in December, alongside cofounder Richard Schwartz, who is overseeing day-to-day operations, Lambert told Forbes in an interview. A former fundraiser for President Trump, Lambert said that he had stepped in to help Clearview “with the new administration,” adding, “There’s some opportunities there. I’m going to be helping with that effort.”

Near-universal facial recognition in the hands of this administration — what a frightening prospect. Keep an eye on that terrible settlement.

Kashmir Hill, New York Times:

[Clearview AI] A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database.

This is an awful move by an awful company. It turns U.S. victims of its global privacy invasion into people who are invested and complicit in its success.

Bryan Carney, the Tyee, March 2019:

The RCMP has been quietly running an operation monitoring individuals’ Facebook, Twitter, Instagram and other social media activity for at least two years, The Tyee has learned.

[…]

“There is a position taken that this is public information and does not constitute private information, and that is an inaccurate assessment of the way that Canadian law assesses public and private in this country as far as I’m concerned,” he [Chris Parsons of Citizen Lab] said.

Carney, of the Tyee, in a November 2020 follow-up article:

A 3,000-page batch of internal communications from the RCMP obtained by The Tyee provides a window into how the force builds its capabilities to spy on internet users and works to hide its methods from the public.

[…]

Back on Dec. 28, 2016, the RCMP ordered “optional goods” — extra software and features — in a Babel X contract found in the documents, but the list was blanked out. No contract or procurement documents naming Babel X appeared on Public Services and Procurement Canada websites until 2020.

Last year, the U.S. Office of the Director of National Intelligence published a report acknowledging it collects vastly more information than it needs for immediate investigative purposes.

Philippe Dufresne, Privacy Commissioner of Canada, in the introduction to a similarly scathing report about the RCMP’s Project Wide Awake program, published Thursday:

These issues are at the heart of the Office of the Privacy Commissioner of Canada’s (OPC) investigation into the Royal Canadian Mounted Police’s (RCMP) Project Wide Awake initiative.

The initiative uses privacy impactful third-party services to collect personal information from a range of sources, including social media, forums, the dark web, location-based services and fee-for-access private databases. The data is used for a variety of policing purposes, including investigating suspected unlawful activity, locating missing persons, identifying suspects, detecting threats at public events attended by high-profile individuals, and maintaining situational awareness during an active situation.

The OPC’s investigation identified concerns related to both accountability and transparency, namely that the RCMP did not take the necessary steps to ensure that the personal information collection practices of all of its service providers were compliant with Canadian privacy law.

The Commissioner found possible violations of privacy law, particularly with the use of Babel X, and says the office made three specific recommendations, “none of which were accepted by the RCMP”. Alas, this office has little recourse; Facebook and Clearview could simply ignore the results of similar investigations.

Eyal Press, the New Yorker:

In June, an appellate court ordered the N.Y.P.D. to turn over detailed information about a facial-recognition search that had led a Queens resident named Francisco Arteaga to be charged with robbing a store. The court requested both the source code of the software used and information about its algorithm. Because the technology was “novel and untested,” the court held, denying defendants access to such information risked violating the Brady rule, which requires prosecutors to disclose all potentially exculpatory evidence to suspects facing criminal charges. Among the things a defendant might want to know is whether the photograph that had been used in a search leading to his arrest had been digitally altered. DataWorks Plus notes on its Web site that probe images fed into its software “can be edited using pose correction, light normalization, rotation, cropping.” Some systems enable the police to combine two photographs; others include 3-D-imaging tools for reconstructing features that are difficult to make out.

This example is exactly why artificial intelligence needs regulation. Many paragraphs in this piece contain alarming details about overconfidence in facial recognition systems, but proponents of letting things play out under current legislation might chalk that up to human fallibility. Yes, software might present a too-rosy impression of its capabilities, one might argue, but it is ultimately the operator’s responsibility to cross-check things before executing an arrest and putting an innocent person in jail. After all, there are similar problems with lots of forensic tools.

Setting aside how much incentive there is for makers of facial recognition software to be overconfident in their products, and how much leeway law enforcement seems to give them — agencies kept signing contracts with Clearview, for example, even after stories of false identification and arrests based on its technology — one could at least believe searches use photographs. But that is not always the case. DataWorks Plus markets tools which allow searches using synthesized faces based on real images, as Press reports — but you will not find that on its website. When I went looking, DataWorks Plus appeared to have pulled the page where those tools were described; happily, the Internet Archive captured it. You can see in its examples how it fills in the entire right-hand side of someone’s face with its “pose correction” feature.

It is plausible to defend this as just a starting point for an investigation, and a way to generate leads. If it does not pan out, no harm, right? But it does seem to this layperson like a computer making its best guess about someone’s facial features is not an ethical way of building a case. This is especially true when we do not know how systems like these work, and it does not inspire confidence that there are no specific standards with which “A.I.” tools must comply.

It should have been a surprise to exactly nobody that Threads, Meta’s Twitter-alike, was going to seem hungrier for personal data than Bluesky or Mastodon.

That is how Meta makes its money: its products surveil as much of your behaviour as possible, then they let others buy targeted advertising. That differs from the other two services I mentioned. Bluesky says its business model “must be fundamentally different” because anyone will be able to spin up their own server; it has raised venture capital and is selling custom domain names. Mastodon is similarly free; its development is supported by various sponsors and it has a Patreon account pulling in nearly half a million dollars annually, while most individual servers are donationware.

Meta is not currently running advertising on Threads, but one day it will. Even so, its listings in Apple’s App Store and Google’s Play Store suggest a wide range of privacy infractions, and this has not gone unnoticed. Reece Rogers, of Wired, compared the privacy labels of major Twitter-like applications and services on iOS, and Tristan Louis did the same with a bunch of social apps on Android. Michael Kan, of PC Magazine, noticed before Threads launched that its privacy label indicated it collected all the same data as Twitter plus “Health & Fitness, Financial Info, Sensitive Info, and ‘Other Data’”. That seems quite thorough.

Just as quickly as these critical articles were published came those who rationalized and even defended the privacy violations implied by these labels. Dare Obasanjo — who, it should be noted, previously worked for Meta — said it was “a list of features used by the app framed in the scariest way possible”. Colin Devroe explained Threads had to indicate such a vibrant data collection scheme because Threads accounts are linked to Instagram accounts. My interpretation of this is that, because you can shop with Instagram, for example, billing information can be linked to a profile.

That there is any coverage whatsoever of the specific privacy impacts of these applications is a testament to the direct language used in these labels. Even though Meta’s privacy policy and the supplemental policy for Threads have been written with comprehension in mind, they are nowhere near as readable as these simplified privacy labels.

Whether those labels are being accurately comprehended, though, is a different story, as indicated in the above examples. The number of apparent privacy intrusions in which Threads engages is alarming at first glance. But many of the categories, at least on iOS, require that users grant permission first, including health information, photos, contacts, and location. Furthermore, it is not clear to me how these data types are used. I only see a couple of passing references to the word “health” in Meta’s privacy policy, for example, and nothing says it communicates at all with the Health database in iOS. Notably, not only does Threads not ask for access to Health information, it also does not request access to any type of protected data — there is no way to look for contacts on Threads, for example — nor does it ask to track when launched. In covering all its bases, Meta has created a privacy label which suggests it is tracking possibly everything, but perhaps not, and there is no way to tell how close that is to reality nor exactly what is being done with that information.

This is in part because privacy labeling is determined by developers, and the consequences for misrepresentation are at the discretion of Apple and Google. Ironically, because each of the players involved is a giant business, it seems to me like there are limits to how thoroughly these privacy labels can be policed. If Apple or Google were to de-list or block Meta’s apps, you know some lawyers would be talking about possibly anti-competitive motives.

That is not to say privacy labels are useless. A notable difference between the privacy label for Threads and some other apps is not found in the list of categories of information collected. Instead, it is in the title of that list: “Data Linked to You”. It should not be worrisome for a developer to collect diagnostic information, for example, but is it necessary to associate it with a specific individual? Sure enough, while some apps — like Tapbots’ Ivory and the first-party Mastodon client — say they collect nothing, others, like Bluesky and Tooot — a Mastodon client — acknowledge collecting diagnostic information, but say they do not associate it with individual users. Apple is also pushing for greater transparency by requiring that developers disclose third-party SDKs which collect user data.

All of this, however, continues to place the onus of responsibility on individual users. Somehow, we must individually assess the privacy practices of the apps we use and the SDKs they use. We must be able to forecast how our granting of specific types of data access today will be abused tomorrow, and choose to avoid all features and apps which stray too far. Perhaps the most honest disclosure statements are in the form of the much-hated cookie consent screens which — at their best — give users the option of agreeing to each possible third-party disclosure, or agreeing or disagreeing in bulk. While they provide an aggressive freedom of choice, they are overwhelming and easily ignored.

A better option may be found in not giving users a choice.

The rate at which tech products have changed and evolved has made it impossible to foresee how today’s normal use becomes tomorrow’s privacy exploitation. Vacation photos and selfies posted ten or twenty years ago are now part of at least one massive facial recognition database. Doorbell cameras become a tool for vigilante justice. Using the web and everyday devices normally subjects everyone to unchecked surveillance, traces of which persist for years. The defaults are all configured against personal privacy, and it is up to individuals to find ways of opting out of this system where they can. Besides, blaming users for not fully comprehending all possible consequences of their actions is the weakest rebuttal to reasonable consumer protections.

Privacy labeling, which appeared first in the App Store before it was added to the Play Store, was inspired by food nutrition labels. I am happy to extend that metaphor. At the bottom of many restaurant menus is printed a statement which reads something like “eating raw or lightly cooked foods of animal origin may increase your risk of food poisoning”. There are good reasons (PDF) to be notified of that risk and make judgements based on your personal tolerance. But nobody expects the secret ingredient added by a restaurant to their hollandaise to be salmonella. This reasonable disclosure statement does not excuse kitchen staff from taking reasonable precautions to avoid poisoning patrons.

We can only guess at some pretty scary ways these everyday exploitations of our private data may be used, but we do not have to. We have plenty of evidence already that we need more protections against today’s giant corporations and tomorrow’s startups. It should not be necessary to compare ambiguous labels against company privacy policies and imagine what they could do with all that information just to have a text-based social media account. Frivolity should not be so poisoned.

From a press release issued by the Office of the Privacy Commissioner of Canada:

The privacy protection authorities for Canada, Québec, British Columbia and Alberta announced today that they will jointly investigate the short-form video streaming application TikTok.

[…]

The four privacy regulators will examine whether the organization’s practices are in compliance with Canadian privacy legislation and in particular, whether valid and meaningful consent is being obtained for the collection, use and disclosure of personal information. The investigation will also determine if the company is meeting its transparency obligations, particularly when collecting personal information from its users.

This comes after multiple European authorities have investigated TikTok or are in the process of doing so; the company has been fined for its practices in France and the Netherlands. It will be interesting to see what Canadian regulators can dig up.

A quirk of the OPC is that it can make recommendations but has no authority to prosecute. After a similar investigation into Clearview’s facial recognition systems, it concluded the company conducted “mass surveillance of Canadians”, but could not issue fines or order Clearview to make changes. The company’s response was predictably weak: it created a manual opt-out mechanism and pulled out of the Canadian market. But Clearview is still conducting mass surveillance on all Canadians who have not requested removal.

Similarly, while the OPC may find embarrassing and dangerous things TikTok could be doing in Canada, ByteDance can simply deny any wrongdoing and carry on — unless the OPC pursues the matter in court.

Gilad Edelman, Wired:

Now comes an even bigger surprise: A new version of the ADPPA has taken shape, and privacy advocates are mostly jazzed about it. It just might have enough bipartisan support to become law — meaning that, after decades of inaction, the United States could soon have a real federal privacy statute.

Perhaps the most distinctive feature of the new bill is that it focuses on what’s known as data minimization. Generally, companies would only be allowed to collect and make use of user data if it’s necessary for one of 17 permitted purposes spelled out in the bill — things like authenticating users, preventing fraud, and completing transactions. Everything else is simply prohibited. Contrast this with the type of online privacy regime most people are familiar with, which is all based on consent: an endless stream of annoying privacy pop-ups that most people click “yes” on because it’s easier than going to the trouble of turning off cookies. That’s pretty much how the European Union’s privacy law, the GDPR, has played out.

If this law is as described and passes more-or-less intact, it could fundamentally reshape the economy of the web and be a model for the rest of the world.

The Electronic Frontier Foundation is “disappointed”:

We have three initial objections to the version that the committee passed this week. Before a floor vote, we urge the House to fix the bill and use this historic opportunity to strengthen — not diminish — the country’s privacy landscape now and for years to come.

The Foundation is concerned about rollbacks of FCC authority, weak individual right of action provisions, and the preemption of state laws by this national law. The latter is a particularly fraught matter: a federal regulation simplifies compliance, reduces reliance on weak state-level laws lobbied for by tech companies, and improves international competitiveness, but it could mean privacy rollbacks for those in states with more stringent laws. The Foundation points to a few examples, undermining Edelman’s claim that “it goes further than any of the state laws it would preempt — even California’s”.

Look out for reactions to this bill from technology company front groups like the Competitiveness Coalition and American Edge. Both have been focused on the American Innovation and Choice Online Act — perhaps an indication of tech companies’ priorities — but keep an eye out. The Interactive Advertising Bureau unsurprisingly opposes the bill, saying it would “impose heavier regulations than any state currently does” — a demonstrably untrue claim.

Muyi Xiao, Paul Mozur, Isabelle Qian, and Alexander Cardia of the New York Times put together a haunting short documentary about the state of surveillance in China. It shows a complete loss of privacy, and any attempt to maintain one’s sense of self is regarded as suspicious. From my limited perspective, I cannot imagine making such a fundamental sacrifice.

This is why it is so important to match the revulsion we feel over things like that Cadillac Fairview surreptitious facial recognition incident or Clearview AI — in its entirety — with strong legislation. These early-stage attempts at building surveillance technologies that circumvent legal processes forecast an invasive future for everyone.

This settlement is significant, but perhaps not as triumphant as the ACLU makes it out to be:

The central provision of the settlement restricts Clearview from selling its faceprint database not just in Illinois, but across the United States. Among the provisions in the binding settlement, which will become final when approved by the court, Clearview is permanently banned, nationwide, from making its faceprint database available to most businesses and other private entities. The company will also cease selling access to its database to any entity in Illinois, including state and local police, for five years.

This does not eliminate the need for stronger privacy laws in the United States. Outside the U.S., it seems that Clearview AI is able to continue developing and selling its product under the cover of American jurisdiction, unless expressly prohibited by local laws. Clearview is still expanding.

This settlement does prohibit Clearview from providing free trial access — one of its biggest sales tactics — without supervisor approval. Good.

Matt O’Brien and Tali Arbel, the Associated Press:

A controversial face recognition company that’s built a massive photographic dossier of the world’s people for use by police, national governments and — most recently — the Ukrainian military is now planning to offer its technology to banks and other private businesses.

[…]

The new “consent-based” product would use Clearview’s algorithms to verify a person’s face, but would not involve its ever-growing trove of some 20 billion images, which [Clearview CEO Hoan] Ton-That said is reserved for law enforcement use. Such ID checks that can be used to validate bank transactions or for other commercial purposes are the “least controversial use case” of facial recognition, he said.

Remember when the company promised to only allow law enforcement uses? Ton-That killed that principle earlier this year. If Clearview could have operated with individual consent, it would have obtained it already.

Every day this company is allowed to keep operating represents an increasing policy failure.

Thomas Brewster, Forbes:

[…] On Wednesday, deputy prime minister and head of the Digital Transformation Ministry in Ukraine, Mykhailo Fedorov, confirmed on his Telegram profile that surveillance technology was being used in this way, a matter of weeks after Clearview AI, the New York-based facial recognition provider, started offering its services to Ukraine for those same purposes. Fedorov didn’t say what brand of artificial intelligence was being used in this way, but his department later confirmed to Forbes that it was Clearview AI, which is providing its software for free. They’ll have a good chance of getting some matches: In an interview with Reuters earlier this month, Clearview CEO Hoan Ton-That said the company had a store of 10 billion users’ faces scraped from social media, including 2 billion from Russian Facebook alternative Vkontakte. Fedorov wrote in a Telegram post that the ultimate aim was to “dispel the myth of a ‘special operation’ in which there are ‘no conscripts’ and ‘no one dies.’”

Tim Cushing, Techdirt:

Or maybe it’s just Clearview jumping on the bandwagon by supporting a country that already has the support of the most powerful governments in the world. Grabbing onto passing coattails and contacting journalists to get the word out about the company’s reverse-heel turn is savvy marketing. But it’s little more than that. The tech may prove useful (if the Ukraine government is even using it), but that shouldn’t be allowed to whitewash Clearview’s (completely earned) terrible reputation. Even if it’s useful, it’s only useful because the company was willing to do what no other company was: scrape millions of websites and sell access to the scraped data to anyone willing to pay for it.

It has been abundantly clear for a long time that accurate facial recognition can have its benefits, just as recording everyone’s browser history could make it easier to investigate crime. Even if this seems helpful, it is still an uneasy technology developed by an ethically bankrupt company. It is hard for me to see this as much more than Clearview cynically using a war as a marketing opportunity given that it spread news of its participation weeks before anyone in the Ukrainian government confirmed it.

Kate Kaye, Protocol:

When it comes to today’s data-centric business models, algorithmic systems and the data used to build and train them are intellectual property, products that are core to how many companies operate and generate revenue. While in the past the FTC has required companies to disgorge ill-gotten monetary gains obtained through deceptive practices, forcing them to delete algorithmic systems built with ill-gotten data could become a more routine approach, one that modernizes FTC enforcement to directly affect how companies do business.

[…]

The winds inside the FTC seem to be shifting. “Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” former FTC Commissioner Rohit Chopra, now director of the Consumer Financial Protection Bureau, wrote in a statement related to the Everalbum case. He said requiring the company to “forfeit the fruits of its deception” was “an important course correction.”

As with authorities requiring Clearview to delete identification data, I am confused about how it is possible to extricate illegally-acquired materials from software like machine learning models. In the Everalbum case (PDF), for example, the FTC ordered the company to delete data derived from the collection of faces from users who deactivated their accounts, as well as any algorithms or models created from that information. But it is possible some or all of those photos were used to train the machine learning models used by all users. Without rolling the model back to a state before the creation of any data relevant to the FTC’s order, how is this possible? I am genuinely curious. This sounds like a fair way to treat businesses that have exploited illegally acquired data, but I am unclear how it works.
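
As best I understand it — and this is only my own illustrative sketch, with made-up data and scikit-learn standing in for whatever Everalbum actually used — the difficulty is that a trained model’s parameters already encode every example it was fit on, so honouring a deletion order effectively means filtering the training set and retraining from scratch:

```python
# A toy sketch (not Everalbum's system) of why "deleting" training data from a
# model is hard: the fitted parameters already reflect every example they saw.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))    # stand-ins for face embeddings
labels = rng.integers(0, 2, size=1000)     # stand-ins for whatever is being predicted
user_ids = rng.integers(0, 50, size=1000)  # hypothetical: which user supplied each photo

original_model = LogisticRegression(max_iter=1000).fit(features, labels)

# A deletion order arrives for users 3 and 7. Their rows can be dropped from the
# dataset, but original_model's coefficients still reflect them.
keep = ~np.isin(user_ids, [3, 7])
retrained_model = LogisticRegression(max_iter=1000).fit(features[keep], labels[keep])

# There is no supported way to subtract those users from original_model after the
# fact; complying effectively means discarding it and using the retrained one.
print("coefficients changed:", not np.allclose(original_model.coef_, retrained_model.coef_))
```

If retraining on a filtered dataset is the only reliable option, an order to delete “any algorithms or models” built from the data amounts to throwing the old model away entirely — which, I assume, is the point.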

The headline on this article is ridiculous, by the way. It claims this penalty strategy “spells death for algorithms”, but the article clarifies “the term ‘algorithm’ can cover any piece of code that can make a software application do a set of actions”. Whoever picked this headline entirely divorced it from Kaye’s excellent reporting.

Big scoop for Drew Harwell at the Washington Post:

The facial recognition company Clearview AI is telling investors it is on track to have 100 billion facial photos in its database within a year, enough to ensure “almost everyone in the world will be identifiable,” according to a financial presentation from December obtained by The Washington Post.

[…]

And the company wants to expand beyond scanning faces for the police, saying in the presentation that it could monitor “gig economy” workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar.

Slides Harwell posted on Twitter reveal a stunning expansion of personal recognition. For example, Clearview is exploring ways of tying identity to license plates, movements, and locations. This is a brazen theft of personal liberty undertaken by a private company with virtually no accountability or regulation.

Harwell’s story is full of outrageous details, one paragraph after another. But one must be mindful they are sourced from an investor presentation that puts the company in the best possible light for people with deep pockets. While unearned optimism is allowed, outright lies are not supposed to be included — though some of the companies whose logos appear in the presentation, apparently as Clearview partners, denied any connection when Harwell asked.

With that in mind, Clearview says it has thousands of clients in the United States alone, even as it is increasingly banned in other countries. It says it is ingesting photos at a rate of one-and-a-half billion per month, and wants to identify pretty much everyone on Earth, though that presumably excludes children and everyone from Australia, Canada, and France. It should also be excluding images posted to major social media networks, many of which have told Clearview to cease and desist its scraping practices.

It has plenty of ideas for how it wishes to use its database and, according to the slides posted by Harwell, it also wants to license those capabilities to third parties for their own uses. How does that square with its promise to only permit law enforcement uses?

Clearview has dismissed criticism of its data collection and surveillance work by saying it is built exclusively for law enforcement and the public good. In an online “principles” pledge, the company said that it works only with government agencies and that it limits its technology to “lawful investigative processes directed at criminal conduct, or at preventing specific, substantial, and imminent threats to people’s lives or physical safety.”

[…]

In his statement to The Post, [founder Hoan Ton-That] said: “Our principles reflect the current uses of our technology. If those uses change, the principles will be updated, as needed.”

If your principles change because of financial incentives, you do not have principles. Clearview is sketchy as hell and has no place being in business.

Ton-That told The Post the document was shared with a “small group of individuals who expressed interest in the company.” It included proposals, he said, not just for its main facial-search engine but also for other business lines in which facial recognition could be useful, such as identity verification or secure-building access.

He said Clearview’s photos have “been collected in a lawful manner” from “millions of different websites” on the public Internet. A person’s “public source metadata” and “social linkage information,” he added, can be found on the websites that Clearview has linked to their facial photos.

I found the second sentence here confusing, but it seems to mean that a user of Clearview is able to see where an image in the database was sourced from. The way Ton-That phrases it makes it sound like a reassurance or something, but nothing could be further from the truth. The company still makes people individually opt out of the mass surveillance machine it is hoping to grow and license — and only if they happen to live in California or Illinois. Otherwise, Clearview says it will scrape anything it can access, and it is your responsibility to remove from the web anything you do not wish to contribute to its business.

Any reasonable nation should be working hard on legislation that would prevent anything like Clearview from being used, and level crippling penalties for any of its citizens’ data found to be in its systems. That is my baseline expectation. I am not optimistic it will be achieved.

Tonya Riley, CyberScoop:

In fact, CyberScoop identified more than 20 federal law enforcement contracts with a total overall ceiling of over $7 million that included facial recognition in the award description or to companies whose primary product is facial recognition technology since June, when a government watchdog released a report warning about the unmitigated technology. Even that number, which was compiled from a database of government contracts created by transparency nonprofit Tech Inquiry and confirmed with federal contracting records, is likely incomplete. Procurement awards often use imprecise descriptions and sometimes the true beneficiary of the award is obscured by subcontractor status.

Among the contracts CyberScoop cites is one between the FBI and Clearview AI. Quite a stark contrast with countries like Canada and France, which have banned the company from operating within their borders or using any citizens’ data.

Natasha Lomas, TechCrunch:

France’s privacy watchdog said today that Clearview has breached Europe’s General Data Protection Regulation (GDPR).

In an announcement of the breach finding, the CNIL also gives Clearview formal notice to stop its “unlawful processing” and says it must delete user data within two months.

Good; keep these orders coming. As with previous deletion demands, there are likely problems with ascertaining who in Clearview’s database is covered, but at least there is collective action by countries that have laws concerning individuals’ privacy. It is a stance that declares its entire operation an unacceptable violation. I see nothing wrong with putting Clearview out of business and discouraging others from replicating it.

John Paczkowski, Buzzfeed News:

Australia’s national privacy regulator has ordered controversial facial recognition company Clearview AI to destroy all images and facial templates belonging to individuals living in Australia, following a BuzzFeed News investigation.

On Wednesday, the Office of the Australian Information Commissioner (OAIC) said Clearview had violated Australians’ privacy by scraping their biometric information from the web and disclosing it via a facial recognition tool built on a vast database of photos scraped from Facebook, Instagram, LinkedIn, and other websites.

This sounds great, but it faces some of the same problems as removing Canadian faces. Does Clearview know which faces in its collection are Australian? In the Commissioner’s determination, Clearview told the office that it, in the words of the Commissioner, “collects images without regard to geography or source”. Presumably, it can tie some photos to people living in Australia. But does it reliably retain information about, say, an Australian living abroad? It seems like one of those edge cases that could affect about a million people.

Also, in that determination, I found it telling that Clearview “repeatedly asserted that it is not subject to the Privacy Act” largely because it is a company based in the U.S. and, under U.S. law, it is arguing its collection of biometric information from published images is legal. The U.S. is like a tax haven but for privacy abuses. (And taxes too.)

Will Knight, Wired:

The company’s cofounder and CEO, Hoan Ton-That, tells WIRED that Clearview has now collected more than 10 billion images from across the web — more than three times as many as has been previously reported.

[…]

Some of Clearview’s new technologies may spark further debate. Ton-That says it is developing new ways for police to find a person, including “deblur” and “mask removal” tools. The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; the second tries to envision the covered part of a person’s face using machine learning models that fill in missing details of an image using a best guess based on statistical patterns found in other images.

I am stunned Clearview is allowed to remain in business, let alone continue to collect imagery and advance new features, given how invasive, infringing, and dangerous its technology is.

Sometimes, it makes sense to move first and wait for laws and policies to catch up. Facial recognition is not one of those times. And, to make matters worse, policymakers have barely gotten started in many jurisdictions. We are accelerating toward catastrophe and Clearview is leading the way.

The marketplace for exploits and software of an ethically questionable nature is a controversial one, but something even I can concede has value. If third-party vendors are creating targeted surveillance methods, it means that the vast majority of us can continue to have secure and private systems without mandated “back doors”. It seems like an agreeable compromise so long as those vendors restrict their sales to governments and organizations with good human rights records.

NSO Group, creator of the Pegasus spyware, seems to agree. Daniel Estrin, reporting last month at NPR:

NSO says it has 60 customers in 40 countries, all of them intelligence agencies, law enforcement bodies and militaries. It says in recent years, before the media reports, it blocked its software from five governmental agencies, including two in the past year, after finding evidence of misuse. The Washington Post reported the clients suspended include Saudi Arabia, Dubai in the United Arab Emirates and some public agencies in Mexico.

Pegasus can have legitimate surveillance use, but it has great potential for abuse. NSO Group would like us to believe that it cares deeply about selling only to clients that will use the software to surveil possible terrorists and valuable criminal targets. So, how is that going?

Bill Marczak, et al., Citizen Lab:

We identified nine Bahraini activists whose iPhones were successfully hacked with NSO Group’s Pegasus spyware between June 2020 and February 2021. Some of the activists were hacked using two zero-click iMessage exploits: the 2020 KISMET exploit and a 2021 exploit that we call FORCEDENTRY.

[…]

At least four of the activists were hacked by LULU, a Pegasus operator that we attribute with high confidence to the government of Bahrain, a well-known abuser of spyware. One of the activists was hacked in 2020 several hours after they revealed during an interview that their phone was hacked with Pegasus in 2019.

As Citizen Lab catalogues, Bahrain’s record of human rights failures and internet censorship should have indicated to NSO Group that misuse of its software was all but guaranteed.

NSO Group is just one company offering software with dubious ethics. Remember Clearview? When Buzzfeed News reported last year that the company was expanding internationally, Hoan Ton-That, Clearview’s CEO, brushed aside human rights concerns:

“Clearview is focused on doing business in USA and Canada,” Ton-That said. “Many countries from around the world have expressed interest in Clearview.”

Later last year, Clearview went a step further and said it would terminate private contracts, and its Code of Conduct promises that it only works with law enforcement entities and that searches must be “authorized by a supervisor”. You can probably see where this is going.

Ryan Mac, Caroline Haskins, and Antonio Pequeño IV, Buzzfeed News:

Like a number of American law enforcement agencies, some international agencies told BuzzFeed News that they couldn’t discuss their use of Clearview. For instance, Brazil’s Public Ministry of Pernambuco, which is listed as having run more than 100 searches, said that it “does not provide information on matters of institutional security.”

But data reviewed by BuzzFeed News shows that individuals at nine Brazilian law enforcement agencies, including the country’s federal police, are listed as having used Clearview, cumulatively running more than 1,250 searches as of February 2020. All declined to comment or did not respond to requests for comment.

[…]

Documents reviewed by BuzzFeed News also show that Clearview had a fledgling presence in Middle Eastern countries known for repressive governments and human rights concerns. In Saudi Arabia, individuals at the Artificial Intelligence Center of Advanced Studies (also known as Thakaa) ran at least 10 searches with Clearview. In the United Arab Emirates, people associated with Mubadala Investment Company, a sovereign wealth fund in the capital of Abu Dhabi, ran more than 100 searches, according to internal data.

As noted, this data only covers up until February last year; perhaps the policies governing acceptable use and clientele were only implemented afterward. But it is alarming to think that a company which bills itself as the world’s best facial recognition provider ever felt comfortable enabling searches by regimes with poor human rights records, private organizations, and individuals in non-supervisory roles. It does jibe with Clearview’s apparent origin story, and that should be a giant warning flag.

These companies can make whatever ethical promises they want, but money talks louder. Unsurprisingly, when faced with a choice about whether to allow access to their software judiciously, they choose to gamble that nobody will find out.

Kashmir Hill, the New York Times:

Clearview AI is currently the target of multiple class-action lawsuits and a joint investigation by Britain and Australia. That hasn’t kept investors away.

The New York-based start-up, which scraped billions of photos from the public internet to build a facial-recognition tool used by law enforcement, closed a Series B round of $30 million this month.

The investors, though undeterred by the lawsuits, did not want to be identified. Hoan Ton-That, the company’s chief executive, said they “include institutional investors and private family offices.”

It makes sense that these investors would want their association with the company kept secret, since identifying them as supporters of a creepy facial recognition company is more embarrassing than their inability to understand irony. Still, it shows how the free market is betting that this company will grow and prosper despite its disregard for existing laws, proposed legislation, and a general sense of humanity or ethics.

Dismantle this company and legislate its industry out of existence. Expose the investors who are propping it up.

Kashmir Hill has continued to report on Clearview AI after breaking the news of its existence early last year. Today, for the New York Times Magazine, she shared an update on the company:

It seemed entirely possible that Clearview AI would be sued, legislated or shamed out of existence. But that didn’t happen. With no federal law prohibiting or even regulating the use of facial recognition, Clearview did not, for the most part, change its practices. Nor did it implode. While it shut down private companies’ accounts, it continued to acquire government customers. Clearview’s most effective sales tool, at first, was a free trial it offered to anyone with a law-enforcement-affiliated email address, along with a low, low price: You could access Clearview AI for as little as $2,000 per year. Most comparable vendors — whose products are not even as extensive — charged six figures. The company later hired a seasoned sales director who raised the price. “Our growth rate is crazy,” Hoan Ton-That, Clearview’s chief executive, said.

Clearview has now raised $17 million and, according to PitchBook, is valued at nearly $109 million. As of January 2020, it had been used by at least 600 law-enforcement agencies; the company says it is now up to 3,100. […]

Any way you cut it, this is disturbing. The public’s reaction to news of Clearview’s existence was overwhelmingly negative, but police saw that article as an advertisement.

Shameless companies will not change from public pressure.

Hill:

Clearview is now fighting 11 lawsuits in the state [Illinois], including the one filed by the A.C.L.U. in state court. In response to the challenges, Clearview quickly removed any photos it determined came from Illinois, based on geographical information embedded in the files it scraped — but if that seemed on the surface like a capitulation, it wasn’t.

Clearview assumes that it can scrape, store, and transform anything in the public realm unless it is certain it would be prohibited from doing so. Data is inherently valuable to the company, so it is incentivized to capture as much as possible.

But that means there is likely a whole bunch of stuff in its systems that it cannot legally use — and it has no way of knowing which. For example, there are surely plenty of photos taken in Illinois that do not have GPS coordinates in their metadata. Why would any of those be cleared from Clearview’s inventory? Clearview also allows people to request removal from its systems, but there are surely photographs from those people that are not positively matched, so the company has no way of identifying them as part of a removal request.
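
To make the geotag point concrete — this is only an illustrative sketch, not Clearview’s pipeline, and the file name is hypothetical — here is roughly what checking a scraped photo for embedded coordinates looks like. Most platforms strip this metadata on upload, and many cameras never record it, so the lookup simply comes back empty and nothing marks the photo as having been taken in Illinois:

```python
# A minimal sketch (not Clearview's code) of checking a scraped JPEG for the
# GPS block in its Exif metadata, using Pillow.
from PIL import Image


def embedded_gps(path: str):
    """Return the photo's GPS Exif tags as a dict, or None if there are none."""
    with Image.open(path) as image:
        exif = image.getexif()
        gps = exif.get_ifd(0x8825)  # 0x8825 is the Exif GPSInfo IFD tag
        return dict(gps) if gps else None


# "scraped.jpg" is a hypothetical file name.
location = embedded_gps("scraped.jpg")
if location is None:
    print("No coordinates: nothing here ties the photo to Illinois or anywhere else.")
else:
    print("Embedded GPS tags:", location)
```

Geotag-based removal can only ever catch the subset of photos that happened to carry this data in the first place.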

This is an aside, but that raises an interesting question: if images scraped without legal consent were used to train Clearview’s machine learning models, is it truly possible to remove those illegal images?

If Clearview were even slightly more ethical, it would only scrape the images it has explicit permission to access. I would still disagree with that on its face, but at least it would be done with permission. But this is the perhaps inevitable consequence of the Uber-like fuck your rules philosophy — as Hill writes, it is a “gamble that the rules would successfully be bent in their favor”.

Sadly, that Silicon Valley indifference to legality and ethics will not remain localized. There is no way to know for certain that Clearview has complied with the Privacy Commissioner’s recommendation that the company delete all collected data on Canadians.

Hill digs into Clearview’s origin story, too, which of course involves Peter Thiel and someone who is even more detestable:

After I broke the news about Clearview AI, BuzzFeed and The Huffington Post reported that Ton-That and his company had ties to the far right and to a notorious conservative provocateur named Charles Johnson. I heard the same about Johnson from multiple sources. So I emailed him. At first, he was hesitant to talk to me, insisting he would do so only off the record, because he was still frustrated about the last time he talked to a New York Times journalist, when the media columnist David Carr profiled him in 2014.

“Provocateur” is an awfully kind description of Johnson, though Hill expands further in the successive paragraphs. Just so we’re clear here, Johnson is a hateful subreddit in human form; a moron attached to a megaphone. Johnson has a lengthy rap sheet of crimes against intelligence, decency, facts, and ethics. He has denied the Holocaust, and did Jacob Wohl’s dumb bit before Wohl was old enough to vote.

Johnson is, apparently, a sort of unofficial cofounder of Clearview, and he agreed to talk with Hill seemingly because he thought it would rehabilitate his image. Reading between the lines, as of earlier this month he still held shares in a company that seeks to eradicate privacy on a global scale, so I am not sure how that is supposed to make me think more highly of him.

I thought this was amusing:

Johnson believes that giving this superpower only to the police is frightening — that it should be offered to anyone who would use it for good. In his mind, a world without strangers would be a friendlier, nicer world, because all people would be accountable for their actions.

I thought “cancel culture” was a scourge; I guess some fairly terrible people want to automate it.

Let’s not give this “superpower” to anyone.