Search Results for: Clearview

The Office of the Privacy Commissioner of Canada has been investigating Clearview’s behaviour since Kashmir Hill of the New York Times broke the story a little more than a year ago. In its overview, the Office said:

Clearview did not attempt to seek consent from the individuals whose information it collected. Clearview asserted that the information was “publicly available”, and thus exempt from consent requirements. Information collected from public websites, such as social media or professional profiles, and then used for an unrelated purpose, does not fall under the “publicly available” exception of PIPEDA, PIPA AB or PIPA BC. Nor is this information “public by law”, which would exempt it from Quebec’s Private Sector Law, and no exception of this nature exists for other biometric data under LCCJTI. Therefore, we found that Clearview was not exempt from the requirement to obtain consent.

Furthermore, the Offices determined that Clearview collected, used and disclosed the personal information of individuals in Canada for inappropriate purposes, which cannot be rendered appropriate via consent. We found that the mass collection of images and creation of biometric facial recognition arrays by Clearview, for its stated purpose of providing a service to law enforcement personnel, and use by others via trial accounts, represents the mass identification and surveillance of individuals by a private entity in the course of commercial activity. We found Clearview’s purposes to be inappropriate where they: (i) are unrelated to the purposes for which those images were originally posted; (ii) will often be to the detriment of the individual whose images are captured; and (iii) create the risk of significant harm to those individuals, the vast majority of whom have never been and will never be implicated in a crime. Furthermore, it collected images in an unreasonable manner, via indiscriminate scraping of publicly accessible websites.

The Office said that Clearview should entirely exit the Canadian market and remove data it collected about Canadians. But, as Kashmir Hill says, it is not a binding decision, and it is much easier said than done:

The commissioners, who noted that they don’t have the power to fine companies or make orders, sent a “letter of intention” to Clearview AI telling it to cease offering its facial recognition services in Canada, cease the scraping of Canadians’ faces, and to delete images already collected.

That is a difficult order: It’s not possible to tell someone’s nationality or where they live from their face alone.

The weak excuse for a solution that Clearview has come up with is to tell Canadians to individually submit a request to be removed from its products. To be removed, you must give Clearview your email address and a photo of your face. Clearview assumes it is allowed to run facial recognition on every single person whose images are available unless they manually opt out. It insists that it does not need consent because the images it collects are public. But, as the Office correctly pointed out, the transformative use of these images requires explicit consent:

Beyond Clearview’s collection of images, we also note that its creation of biometric information in the form of vectors constituted a distinct and additional collection and use of personal information, as previously found by the OPC, OIPC AB and OIPC BC in the matter of Cadillac Fairview.

[…]

In our view, biometric information is sensitive in almost all circumstances. It is intrinsically, and in most instances permanently, linked to the individual. It is distinctive, unlikely to vary over time, difficult to change and largely unique to the individual. That being said, within the category of biometric information, there are degrees of sensitivity. It is our view that facial biometric information is particularly sensitive. Possession of a facial recognition template can allow for identification of an individual through comparison against a vast array of images readily available on the Internet, as demonstrated in the matter at hand, or via surreptitious surveillance.
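That potency is easy to demonstrate. Below is a minimal sketch, in Python, of identification by template comparison. Everything in it is illustrative: the 128-dimensional vectors are random stand-ins for the output of a face embedding model, not any vendor’s actual pipeline, and real systems differ in model, distance metric, and scale.

```python
# Illustrative sketch: why possession of one facial template is enough
# to identify someone against a corpus of scraped images. The vectors
# here are random stand-ins for a face embedding model's output.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(template: np.ndarray, corpus: dict, threshold: float = 0.6):
    """Compare one template against every scraped image's vector and
    return the best-matching identity, if any clears the threshold."""
    best_name, best_score = None, threshold
    for name, vector in corpus.items():
        score = cosine_similarity(template, vector)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# With vectors computed for billions of scraped photos, a single
# template puts a name to a face:
rng = np.random.default_rng(0)
corpus = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = corpus["person_42"] + rng.normal(scale=0.05, size=128)
print(identify(probe, corpus))  # -> "person_42"
```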

The Office also found that scraping online profiles does not match the legal definition of “publicly available” information.

This is such a grotesque violation of privacy that there is no question in my mind that Clearview and companies like it cannot continue to operate. United States law has an unsurprisingly permissive attitude towards this sort of thing, but its failure to legislate at a national level should not be exported to the rest of the world.

Unfortunately, this requires global participation. Every country must have better regulation of this industry because, as Hill says, there is no way to determine nationality from a photo. If Clearview is outlawed in the U.S., what is there to stop it registering in another country with similarly weak regulation?

Clearview is almost certainly not the only company scraping the web with the intent of eradicating privacy as we know it. Decades of insufficient regulation have brought us to this point. We cannot give up on the basic right to privacy. But I fear that it has been sacrificed to a privatized version of the police state.

If a government directly created something like the Clearview system, it would be seen as a human rights violation. How is there any moral difference when it is instead created by private industry?

Jason Snell again graciously allowed me to participate in the annual Six Colors Apple report card, so I graded the performance of a multi-trillion-dollar company from my low-rent apartment. There simply aren’t enough column inches in his report card for all of my silly thoughts. I have therefore generously given myself some space here to share them with you.

As much as 2020 was a worldwide catastrophe, it was impressive to see Apple handle pandemic issues remarkably well and still deliver excellence in the hardware, software, and services that we increasingly depended on. Had there not been widespread disease, Apple’s year could have played out nearly identically, and I do not imagine it would have been received any differently.

Now, onto specific categories, graded from 1–5, 5 being best and 1 being Apple TV-iest. Spoiler alert!

Mac: 4

It will be a while before we know if 2020 was to personal computers what 2007 was to phones, but the M1 Macs feel similarly impactful on the industry at large. Apple demonstrated a scarcely believable leap by delivering Macs powered by its own SoCs that got great battery life and outperformed just about any other Mac that has ever existed. And to make things even more wild, Apple shoehorned this combination into the least-expensive computers it makes. A holy crap revolutionary year, and it is only an appetizer for forthcoming iMac and MacBook Pro models.

Aside from the M1 models, Apple updated nearly all of its Mac product range. The exceptions: the Mac Pro went untouched, and the iMac Pro merely dropped its 8-core configuration, leaving it otherwise unchanged since its debut three years ago.

The best news, aside from the M1 lineup, is that the loathed butterfly keyboard was finally banished from the Mac. Good riddance.

MacOS Big Sur is a decent update by recent MacOS standards. The new design language is going in a good direction, but there are contrast and legibility problems. It is, thankfully, night-and-day more stable than Catalina, which I am thrilled I skipped on my iMac and annoyed I installed on a MacBook Air that will not get a Big Sur update. Fiddlesticks. But Big Sur has its share of new and old bugs that, while doing nothing so dramatic as forcing the system to reboot, indicate to me that the technical debt of years past is not being settled. More in the Software Quality section.

iPhone: 4

I picked a great year to buy a new iPhone; I picked a terrible year to buy a new iPhone. The five new phones released in 2020 made for the easiest product line to understand and the hardest to choose from. Do I get the 12 Mini, the size I have been begging Apple to make? Do I get the 12 Pro Max with its ridiculously good camera? How about one of the middle models? What about the great value of the SE? It was a difficult decision, but I got the Pro. And then, because I wish the Pro was lighter and smaller, I seriously considered swapping it for the Mini, but didn’t because ProRAW was released shortly after. Buying a telephone is just so hard.

iOS 14 is a tremendous update as well. Widgets are a welcome addition to everyone’s home screen and have spurred a joyous customization scene. ProRAW is a compelling feature for the iPhone 12 Pro models, and is implemented thoughtfully and simply. The App Library is excellent for a packrat like me.

2019 was a rough year for Apple operating system stability but, while iOS 13 was better for me than Catalina, iOS 14 has been noticeably less buggy and more stable. I hope this commitment to features and quality can be repeated every year.

Consider my 4-out-of-5 grade a very high 4, but not quite a 5. The iPhone XR remains in the lineup and feels increasingly out of place, and I truly wish the Pro came in a smaller and lighter package. I considered going for a perfect score but, well, it’s my report card.

iPad: 3

The thing the iPad lineup needed most through the late 2010s was clarity; for the past few years, that is what it has gotten. 2020 brought good hardware updates that have made each iPad feel more accurately placed in the line — with the exception of the Mini, which remains a year behind its entry-level sibling.

But the biggest iPad updates this year were in accessories and in software. Trackpad and mouse support adapted a legacy input method for a modern platform, and its introduction was complemented by the new Magic Keyboard case. iPadOS 14 added further improvements like sidebars, pull-down menus, and system components that no longer cover the entire screen.

Despite all of these changes, I remain hungry for more. This is only the second year the iPad has had “iPadOS” and, while it is becoming more of its own thing, its roots in a smartphone operating system are still apparent in a way that sometimes impedes its full potential.

After many difficult years, it seems like Apple is taking the iPad seriously again. I would like to see more steady improvements so that every version of iPadOS feels increasingly like its own operating system even if it continues to look largely like iOS. This one is tougher to grade. I have waffled between 3 and 4, but I settled on the lower number. Think of it as a positive and enthusiastic 3-out-of-5.

Wearables: 3

Grades were submitted separately for the Apple Watch and wearables. I have no experience with the Apple Watch this year, so I did not submit a grade for it.

Only one new AirPods model was introduced in 2020, but it was big. The AirPods Max certainly live up to their name in weight alone.

Aside from that, rattling in the AirPods Pro was a common complaint from the time they were released, and it took until October 2020, a full year after the product’s launch, for Apple to correct the problem. Customers can exchange their problematic pair for free, but the environmental waste of even a small percentage of flawed models is hard to bat away.

AirPods continue to be the iconic wireless headphone in the same way that white earbuds were to the iPod. I wish they were less expensive, though, particularly since the batteries have a lifespan of only a couple of years of regular use.

Apple TV: 1

I guess my lowest grade must go to the product that seems like Apple’s lowest priority. It is kind of embarrassing at this point.

The two Apple TV models on sale today were released three and five years ago, and have remained unchanged since. It isn’t solely a problem of device age or cost; it is that these products feel like they were introduced for a different era. This includes the remote, by the way. I know it is repetitive to complain about, but it still sucks, and there appears to be no urgency to ship a replacement.

On the software side, tvOS 14 contains few updates. It now supports 4K videos in YouTube and through AirPlay, and HomeKit camera monitoring. Meanwhile, the Music app still does not work well, screensavers no longer match the time of day so there are sometimes very bright screensavers at night, and the overuse of slow animations makes the entire system feel sluggish. None of these things are new in tvOS 14; they are all very old problems that remain unfixed.

The solution to a good television experience remains elusive — and not just for Apple.

Services: 4

No matter whether you look at Apple’s balance sheet or its product strategy, it is clear that it is now fully and truly a services company. That focus has manifested in an increasingly compelling range of things you can give Apple five or ten dollars a month for; or, if you are fully entranced, you can get the whole package for a healthy discount in the new Apple One bundle subscription. Cool.

It has also brought increased reliability to the service offerings. Apple’s internet products used to be a joke, but they have shown near-perfect stability in recent years. Cloud-based services across the industry had a rocky 2020, and iCloud was no exception around Christmastime, but the general reliability of these services instills confidence.

New for this year were the well-received Fitness+ workout service and a bevy of new TV+ shows. Apple also rolled out services to a bunch more countries. But this focus on services has not come without its foibles: Apple aggressively promotes subscriptions throughout its products in advertisements, up-sells, and push notifications, to the irritation of anyone who wishes not to subscribe. Some of these services also introduce liabilities in antitrust and corporate behaviour, something I will explore later.

HomeKit

I have no experience with HomeKit so I did not grade it.

Hardware Reliability: 3

2020 was the year we bid farewell to the butterfly keyboard and, with it, the most glaring hardware reliability problem in Apple’s lineup. A quick perusal of Apple’s open repair programs and the “hardware” tag on Michael Tsai’s blog shows a few notable quality problems:

  • “Stained” appearance with the anti-reflective coating on Retina display-equipped notebooks

  • Display problems with iPhone 11 models manufactured into May 2020

  • AirPods Pro crackling problems that were only resolved a full year after the product’s debut

Overall, an average year for hardware quality, but an improvement in the sense that you can no longer buy an Apple laptop with a defective keyboard design.

I suppose this score could have gone one notch higher.

Software Quality: 4

The roller coaster ride continues. 2019? Not good! 2020? Pretty good!

Big Sur is stable, but its redesign contains questionable choices that impair usability, some of which I strongly feel should not have shipped — to name two, notifications and the new alert style. Outside of redesign issues, I have seen new graphical glitches when editing images in Photos or using Finder’s Quick Look feature on my iMac. The Music app, while better than the one in Catalina, is slower and more buggy than iTunes used to be. There are older problems, too: with PDF rendering in Preview, with APFS containers in Finder (and Finder’s overall speed), and with data loss in Mail.

iOS 14 is much more stable and without major bugs; or, at least, none that I have seen. There are animation glitches here and there, and I wish Siri suggestions were better.

On the other end of the scale, tvOS 14 is mediocre, some first-party apps have languished, and using Siri in any context is an experience that still ranges from lacklustre to downright shameful. I hope software quality improves in the future, particularly on the Mac. MacOS has never seemed less like it will cause a whole-system crash, but the myriad bugs introduced in the last several years have made it feel brittle.

I am now thinking I mixed up the scores for software and hardware quality. Oops.

Developer Relations: 2

An absolutely polarized year for developer relations.

On the one hand, Apple introduced a new mechanism for developers to challenge App Review rulings and added a program that reduces commissions to 15% for developers making less than $1 million annually. Running WWDC virtually was also a silver lining in a dark year. It is the first WWDC I have attended: hotels cost thousands of dollars, but my apartment carries no extra charge.

On the other — oh boy, where do we begin? Apple is being sued by Epic Games along antitrust lines; Epic’s arguments are being supported by Facebook, Microsoft, and plenty of smaller developers. One can imagine ulterior motives for the plaintiff’s side, but it does not speak well for Apple’s status among developers that it is being sued. Also, there was that matter of the Hey app’s rejection just days before WWDC, and the difficulty of trying to squeeze the streaming game app model into Apple’s App Store model. Documentation still stinks, and Apple still has communication problems with developers.

Apple’s relationship with developers hit its lowest point in recent memory in 2020, but it also spurred the company to make changes. Developers should be excited to build apps for the Mac instead of relying on shitty cross-platform frameworks like Electron. They should be motivated by the jewellery-like quality of the iPhone 12 models and build apps that match in fit and finish. But I have seen enough comments this year that indicate that everyone — from one-person shops to moderate indies to big names — is worried that their app will be pulled from the store for some new interpretation of an old rule, or that Apple’s services push will raid their revenue model. There must be a better way.

Social/Societal Impact: 2

As with its developer relations, Apple’s 2020 environmental and social record sits at the extreme ends of the scale.

Apple’s response to the pandemic is commendable, from what I could see on the outside. Its store closures often outpaced restrictions from local health authorities in Canada and the U.S., but it kept retail staff on and found ways for them to work from home. It was also quick to allow corporate employees to work remotely, something it generally resists.

In a year of intensified focus on racial inequities, Apple pledged $100 million to projects intended to help right long-standing wrongs, and committed to diversity-supporting corporate practices. There is much more progress that it can make internally, particularly in leadership roles, but its recent hiring practices indicate that it is trying to do better.

Apple continues to invest in privacy and security features across its operating system and services lineup, like allowing users to decline third-party tracking in iOS apps. It also bucked another ridiculous request from the Justice Department and disabled an enterprise distribution certificate used by the creepy facial recognition company Clearview AI.

But a report at the beginning of 2020 drew a connection between discussions with the FBI and Apple’s decision not to offer end-to-end encryption for iCloud backups. It remains unclear whether one directly followed the other. Apple’s encryption policies remain confusing when it comes to knowing exactly which parties have access to what data. Still, Apple’s record on privacy sets a high standard that its peers will never meet unless they change their business models.

China remains Apple’s biggest liability on two fronts: its supply chain, and services like the App Store and Apple TV Plus. Several reports in 2020 said that Apple was uniquely deferential to Chinese government sensitivities in its App Store policies and its original media. Many other big-name companies, wary of being excluded from the Chinese market, have faced similar accusations. But it is hard to think of one other than Apple that must balance those demands against its entire manufacturing capability. No company should be complicit in the Chinese government’s inhumane treatment of Uyghurs.

Apple is also facing increased antitrust scrutiny around the world for the way it runs the App Store, the commissions it charges third-party developers, and the way it uses private APIs.

Apple’s environmental record is less of a mixed bag. It is recycling more of the materials used in its products, and new iPhones come in much smaller boxes containing nearly no plastic. Apple also says that its own operations are entirely carbon neutral, and that its supply chain will follow by 2030.

For environmental reasons, many new products no longer ship with AC adapters in the box, and to prove it wasn’t screwing around, Apple made Lisa Jackson announce this while standing on the roof of its headquarters. Reactions to this change were predictably mixed, but it seems plausible that this has a big impact at Apple’s scale. I’m still not convinced that it makes sense to sell its charging mat without one.

Apple still isn’t keen on third-party repairs of its products, but it expanded its independent repair shop program to allow servicing of Macs.

If this were two separate categories, I think Apple’s environmental record would be a 4/5 and its social record a 2/5 — at best. I am not averaging those grades because I see the liabilities with China and antitrust as too significant.

Closing Remarks

As I wrote at the top, 2020 was a standout year in Apple’s history — even without considering the many obstacles created by this ongoing pandemic. As my workflow is dependent on these products and services, I appreciate the hard work that has gone into improving their features, but I am even happier that everything I use is, on the whole, more reliable.

What the heck is up with the Apple TV, though?

John Oliver:

This technology raises troubling philosophical questions about personal freedom, and, right now, there are also some very immediate practical issues. Even though it is currently being used, this technology is still very much a work in progress, and its error rate is particularly high when it comes to matching faces in real time. In fact, in the U.K., when human rights researchers watched police put one such system to the test, they found that only eight out of 42 matches were ‘verifiably correct’ — and that’s even before we get into the fact that these systems can have some worrying blind spots, as one researcher found out when testing numerous algorithms, including Amazon’s own Rekognition system:

At first glance, MIT researcher Joy Buolamwini says that the overall accuracy rate was high, even though all companies better detected men’s faces than women’s. But the error rate grew as she dug deeper.

“Lighter male faces were the easiest to guess the gender on, and darker female faces were the hardest.”

One system couldn’t even detect whether she had a face. The others misidentified her gender. White guy? No problem.

Yeah: “white guy? No problem” which, yes, is the unofficial motto of history, but it’s not like what we needed right now was to find a way for computers to exacerbate the problem. And it gets worse. In one test, Amazon’s system even failed on the face of Oprah Winfrey, someone so recognizable her magazine only had to type the first letter of her name and your brain autocompleted the rest.
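The kind of audit Buolamwini ran is simple to express: report error rates per subgroup instead of a single overall accuracy number. A minimal sketch, with invented records standing in for her data:

```python
# Disaggregated evaluation: one overall accuracy figure can hide a
# much higher error rate for a specific subgroup. Records are invented
# for illustration; they are not Buolamwini's data.
from collections import defaultdict

# (subgroup, predicted_gender, actual_gender)
results = [
    ("lighter male", "male", "male"),
    ("lighter female", "female", "female"),
    ("darker male", "male", "male"),
    ("darker female", "male", "female"),   # misclassification
    ("darker female", "female", "female"),
]

tallies = defaultdict(lambda: [0, 0])  # subgroup -> [wrong, total]
for group, predicted, actual in results:
    tallies[group][0] += predicted != actual
    tallies[group][1] += 1

# Overall accuracy here is 80%, but the per-group view tells the story:
for group, (wrong, total) in tallies.items():
    print(f"{group}: {wrong / total:.0%} error rate")
```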

Oliver covers a broad scope of different things that fit under the umbrella definition of “facial recognition” — everything from Face ID to police databases and Clearview AI.

Today, the RCMP and Clearview suspended their contract; the RCMP was, apparently, Clearview’s last remaining client in Canada.

Such a wide range of technologies raises complex questions about regulation. Sweeping bans could prohibit the use of something like Face ID or Windows Hello, but even restricting use based on consent would make it difficult to build something like the People library in Photos. Here’s how Apple describes it:

Face recognition and scene and object detection are done completely on your device rather than in the cloud. So Apple doesn’t know what’s in your photos. And apps can access your photos only with your permission.
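For illustration, a People-style library can be built by clustering face embeddings computed locally, so recurring faces group together without naming anyone or contacting a server. This sketch assumes a generic embedding model and uses random vectors in its place; it is one plausible approach, not Apple’s actual pipeline:

```python
# One way on-device face grouping can work: cluster locally computed
# face embeddings so recurring faces form groups, with no identity
# database and no server round-trip. Illustrative only; the random
# vectors stand in for an on-device embedding model's output.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Pretend library: 30 photos of person A, 20 of person B, 5 strangers.
a = rng.normal(scale=0.05, size=(30, 128)) + rng.normal(size=128)
b = rng.normal(scale=0.05, size=(20, 128)) + rng.normal(size=128)
strangers = rng.normal(size=(5, 128))
embeddings = np.vstack([a, b, strangers])

# Density-based clustering: recurring faces cluster together; one-off
# faces are left unlabelled as noise (-1).
labels = DBSCAN(eps=0.4, min_samples=3, metric="cosine").fit_predict(embeddings)
print(labels)  # e.g. 30 zeros, 20 ones, then -1s for the strangers
```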

Apple even put together a lengthy white paper (PDF) that, in part, describes how iOS and MacOS keep various features in Photos private to the user. However, in this case, the question is not about the privacy of one’s own data, but whether it is fair for someone to use facial recognition privately. It is a question of agency. Is it fair for anyone to have their face used, without their permission, to automatically group pictures of them? Perhaps it is, but is it then fair to do so more publicly, as Facebook does? What is a comfortable line?

I don’t mean that as a rhetorical question. As Oliver often says, “the answer to the question of ‘where do we draw the line?’ is somewhere”, and I think there is a “somewhere” in the case of facial recognition. But the legislation to define it will need to be very nuanced.

Rebecca Heilweil, Vox:

So it seems that as facial recognition systems become more ambitious — as their databases become larger and their algorithms are tasked with more difficult jobs — they become more problematic. Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, told Recode that facial recognition needs to be evaluated on a “sliding scale of harm.”

When the technology is used in your phone, it spends most of its time in your pocket, not scanning through public spaces. “A Ring camera, on the other hand, isn’t deployed just for the purpose of looking at your face,” Guariglia said. “If facial recognition was enabled, that’d be looking at the faces of every pedestrian who walked by and could be identifying them.”

[…]

A single law regulating facial recognition technology might not be enough. Researchers from the Algorithmic Justice League, an organization that focuses on equitable artificial intelligence, have called for a more comprehensive approach. They argue that the technology should be regulated and controlled by a federal office. In a May proposal, the researchers outlined how the Food and Drug Administration could serve as a model for a new agency that would be able to adapt to a wide range of government, corporate, and private uses of the technology. This could provide a regulatory framework to protect consumers from what they buy, including devices that come with facial recognition.

This is such a complex field of technology that it will take a while to establish ground rules and expectations. Something like Clearview AI’s system should not be allowed; it is a heinous abuse of publicly visible imagery. Real-time recognition is extremely creepy and, I believe, should also be prohibited.

There are further complications: though the U.S. may be attempting to sort out its comfort level, those boundaries have been breached elsewhere.

Adam Satariano, New York Times:

The law, known as the General Data Protection Regulation, or G.D.P.R., created new limits on how companies can collect and share data without user consent. It gave governments broad authority to impose fines of up to 4 percent of a company’s global revenue, or to force changes to its data-collection practices. The policy served as a model for new privacy rules in Brazil, Japan, India and elsewhere.

But since the law was enacted, in May 2018, Google has been the only giant tech company to be penalized — a fine of 50 million euros, worth roughly $54 million today, or about one-tenth of what Google generates in sales each day. No major fines or penalties have been announced against Facebook, Amazon or Twitter.

The inaction is creating tension within European governments, as some leaders call for speedier enforcement and broader changes. Privacy groups and smaller tech companies complain that companies like Facebook and Google are avoiding tough oversight. At the same time, the public’s experience with the G.D.P.R. has been a frustrating number of pop-up consent windows to click through when visiting a website.

It seems bizarre to treat the total number of punishments issued as a barometer for a law’s effectiveness. Surely a regulation’s primary purpose is to change behaviour, particularly as GDPR was passed two years before it went into effect. Indeed, it seems as though this law has made some headway: tech companies now offer ways for users to download their personal data, something that was never possible before and is increasingly important; users can now revoke permissions; and companies of all types have been forced to reevaluate how much personal information they collect. The New York Times itself changed its ad policies in the E.U., and total penalties assessed have been in the range of hundreds of millions of euros. The law is, to some extent, working — and those cookie consent forms are broadly illegal.
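For a sense of scale, the arithmetic behind that fine-to-revenue comparison is easy to check. A back-of-envelope sketch, assuming Alphabet’s roughly $160 billion in 2019 revenue (the 4 percent figure is GDPR’s cap; the revenue number is approximate):

```python
# Back-of-envelope check on the comparison in the quoted passage. The
# revenue figure is approximate; the point is orders of magnitude.
GDPR_CAP = 0.04            # GDPR fines can reach 4% of global revenue
annual_revenue = 160e9     # ~ Alphabet's 2019 revenue, USD (assumed)
fine = 54e6                # the EUR 50M fine, roughly USD 54M

daily_revenue = annual_revenue / 365
print(f"Maximum GDPR fine: ${GDPR_CAP * annual_revenue / 1e9:.1f}B")
print(f"Daily revenue: ${daily_revenue / 1e6:.0f}M")
print(f"Fine as share of one day's sales: {fine / daily_revenue:.0%}")
# -> the actual fine is about 12% of a single day's sales, in line
#    with the "about one-tenth" figure above.
```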

But it is frustrating to read how poorly resources have been allocated to GDPR investigators, particularly in Ireland, where big tech companies have historically situated their E.U. headquarters for tax avoidance reasons. For what it’s worth, I doubt any officials could have fully mapped the rats’ nest of data harvesting technologies in two years, but the effort is surely not helped by insufficient funding and corporate obfuscation tactics.

Thomas Smith, OneZero:

What does a Clearview profile contain? Up until recently, it would have been almost impossible to find out. Companies like Clearview were not required to share their data, and could easily build massive databases of personal information in secret.

Thanks to two landmark pieces of legislation, though, that is changing. In 2018, the European Union began enforcing the General Data Protection Regulation (GDPR). And on January 1, 2020, an equivalent piece of legislation, the California Consumer Privacy Act (CCPA), went into effect in my home state.

[…]

Within a week of the Times’ exposé, I submitted my own CCPA request to Clearview. For about a month, I got no reply. The company then asked me to fill out a web form, which I did. Another several weeks passed. I finally received a message from Clearview asking for a copy of my driver’s license and a clear photo of myself.

I provided these. In minutes, they sent back my profile.

Companies like Clearview AI are the next level of “data enrichment” firm, the type that suffered a massive data breach last year. After that breach, I submitted requests to every big data enrichment company I could find to see what they had on me. Many had nothing, but a few had built extremely accurate profiles of me based solely on whatever they could scrape. I had never heard of these companies; I had to ask them, individually, to delete anything they had on me.

A unique quality of companies that are inherently unethical is that once the narrative thread starts to unravel, the whole thing collapses pretty quickly. As reporters begin digging into it and those with insider knowledge begin to speak up, it’s hard for their public relations teams to keep everything in a nice well-packaged story.

So, let’s look at a few developments regarding Clearview AI.

Dave Gershgorn, OneZero:

Clearview AI worked to build a national database of every mug shot taken in the United States during the past 15 years, according to an email obtained by OneZero through a public records request.

[…]

It’s unclear how many images a national database of mug shots would add to the online sources Clearview AI has already scraped. For context, the FBI’s national facial recognition database contains 30 million mug shots. Vigilant Solutions, another facial recognition company, has also compiled a database of 15 million mug shots from public sources.

Caroline Haskins, Ryan Mac, and Logan McDonald, Buzzfeed News:

Clearview AI, the secretive company that’s built a database of billions of photos scraped without permission from social media and the web, has been testing its facial recognition software on surveillance cameras and augmented reality glasses, according to documents seen by BuzzFeed News.

Clearview, which claims its software can match a picture of any individual to photos of them that have been posted online, has quietly been working on a surveillance camera with facial recognition capabilities. That device is being developed under a division called Insight Camera, which has been tested by at least two potential clients according to documents.

On its website — which was taken offline after BuzzFeed News requested comment from a Clearview spokesperson — Insight said it offers “the smartest security camera” that is “now in limited preview to select retail, banking and residential buildings.”

Kashmir Hill, New York Times:

In response to the criticism, Clearview published a “code of conduct,” emphasizing in a blog post that its technology was “available only for law enforcement agencies and select security professionals to use as an investigative tool.”

The post added: “We recognize that powerful tools always have the potential to be abused, regardless of who is using them, and we take the threat very seriously. Accordingly, the Clearview app has built-in safeguards to ensure these trained professionals only use it for its intended purpose: to help identify the perpetrators and victims of crimes.”

The Times, however, has identified multiple individuals with active access to Clearview’s technology who are not law enforcement officials. And for more than a year before the company became the subject of public scrutiny, the app had been freely used in the wild by the company’s investors, clients and friends.

Those with Clearview logins used facial recognition at parties, on dates and at business gatherings, giving demonstrations of its power for fun or using it to identify people whose names they didn’t know or couldn’t recall.

Any one of these stories would, in isolation, be worrying. But seeing all three together — particularly with the context of the things I’ve linked to about Clearview over the past several weeks — shines a light on a distressing nascent industry. I strongly suspect that there are other companies exactly like Clearview that are taking steps to avoid exposure.

This industry simply should not exist.

Logan McDonald, Ryan Mac, and Caroline Haskins, Buzzfeed News:

In distributing its app for Apple devices, Clearview, which BuzzFeed News reported earlier this week has been used by more than 2,200 public and private entities including Immigration and Customs Enforcement (ICE), the FBI, Macy’s, Walmart, and the NBA, has been sidestepping the Apple App Store, encouraging those who want to use the software to download its app through a program reserved exclusively for developers. In response to an inquiry from BuzzFeed News, Apple investigated and suspended the developer account associated with Clearview, effectively preventing the iOS app from operating.

An Apple spokesperson told BuzzFeed News that the Apple Developer Enterprise Program should only be used to distribute apps within a company. Companies that violate that rule, the spokesperson said, are subject to revocation of their accounts. Clearview has 14 days to respond to Apple.

Zack Whittaker, TechCrunch:

TechCrunch found Clearview AI’s iPhone app on a public Amazon S3 storage bucket on Thursday, despite a warning on the page that the app is “not to be shared with the public.”

The page asks users to “open this page on your iPhone” to install and approve the company’s enterprise certificate, allowing the app to run.

But this, according to Apple’s policies, is prohibited if the app’s users are outside of Clearview AI’s organization.

Dell Cameron, Dhruv Mehrotra, and Shoshana Wodinsky of Gizmodo found the Android version of the app yesterday as well. They were unable to log in, but observed connections being opened to third-party app analytics providers:

Clearview CEO Hoan Ton-That said in an email to Gizmodo that the companion app is a prototype and “is not an active product.” RealWear, another company, which makes “a powerful, fully-rugged, voice operated Android computer” that is “worn on the head,” is also mentioned in the app, though it’s not immediately clear what for.

The app also contains a script created by Google for scanning barcodes in connection with drivers licenses. (The file is named “Barcode$DriverLicense.smali”) Asked about the feature, Ton-That responded: “It doesn’t scan drivers licenses.” Gizmodo also inquired about the app’s so-called “private search mode” but did not get a response.

The company frequently demurs when asked difficult but legitimate questions, and its clients deny all knowledge before backtracking when presented with evidence. Everything about Clearview is skeevy and it should not exist. I propose that everything it has ever created be sunk into the ocean.

About a week ago, Hoan Ton-That, the CEO of Clearview AI — the creepy facial recognition company that the New York Times revealed in January and which has a database filled with photos posted to social media — claimed in an interview on Fox Business that his company’s technology was “strictly for law enforcement to do investigations”. That has been revealed to be a lie after Buzzfeed News acquired a leaked copy of Clearview’s client list.

Ryan Mac, Caroline Haskins, and Logan McDonald:

The internal documents, which were uncovered by a source who declined to be named for fear of retribution from the company or the government agencies named in them, detail just how far Clearview has been able to distribute its technology, providing it to people everywhere, from college security departments to attorneys general offices, and in countries from Australia to Saudi Arabia. BuzzFeed News authenticated the logs, which list about 2,900 institutions and include details such as the number of log-ins, the number of searches, and the date of the last search. Some organizations did not have log-ins or did not run searches, according to the documents, and BuzzFeed News is only disclosing the entities that have established at least one account and performed at least one search.

[…]

“This is completely crazy,” Clare Garvie, a senior associate at the Center on Privacy and Technology at Georgetown Law School, told BuzzFeed News. “Here’s why it’s concerning to me: There is no clear line between who is permitted access to this incredibly powerful and incredibly risky tool and who doesn’t have access. There is not a clear line between law enforcement and non-law enforcement.”

Ryan Mac on Twitter:

Reporting this story was surreal. Numerous organizations initially denied that they had ever used Clearview. We then followed up, and those same orgs later found that employees had signed up and used the software without approval from higher ups. This happened multiple times.

A lack of general privacy principles written into law makes it possible for Clearview to indiscriminately sell its highly accurate facial recognition software with little oversight. That is extremely concerning. It should not be so trivial to reduce the overall expectation of privacy to zero for a company’s profits.

Update: Allana Smith, Calgary Herald:

The Calgary Police Service has confirmed two of its officers tested controversial facial-recognition software made by Clearview AI, Postmedia has learned.

While the police service doesn’t use Clearview AI in any capacity, it said two of its members had tested the technology to see if it was worthwhile for potential investigative use.

[…]

Both the Calgary Police Service and the Edmonton Police Service had denied use of the software earlier this month, but both have since come forward with reports that several of their officers had tested the Clearview AI software.

As Mac pointed out, there’s a curious pattern to these responses: agencies that vehemently denied using Clearview are now turning around and admitting that they have, at least in some capacity.

CBS News, in an un-bylined story:

Google and YouTube have sent a cease-and-desist letter to Clearview AI, a facial recognition app that scrapes images from websites and social media platforms, CBS News has learned. The tech companies join Twitter, which sent a similar letter in January, in trying to block the app from taking pictures from their platforms.

[…]

Ton-That argued that Clearview AI has a First Amendment right to access public data. “The way we have built our system is to only take publicly available information and index it that way,” he said.

I’m no lawyer, but that’s certainly a creative interpretation of the First Amendment.

For what it’s worth, a 2018 ruling indicates that website scraping does have some First Amendment protections, but that was decided in the context of research, and it is unclear whether it would extend to copying third-party materials for first-party profit.

Facebook and Venmo also said scraping was against their policies, but have so far not sent cease-and-desist letters.

Peter Thiel sits on Facebook’s board and is also an investor in Clearview. You’d think he could pass along the message.

Bruce Schneier, in an op-ed for the New York Times:

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect. The large internet surveillance companies like Facebook and Google collect dossiers on us more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

This is a timely article — not only because of the publicity Clearview AI has received, but also because the European Commission is considering a ban on the use of facial recognition in public, a practice London’s Metropolitan Police have announced they will adopt widely.

Ryan Mac, Caroline Haskins, and Logan McDonald, Buzzfeed News:

Originally known as Smartcheckr, Clearview was the result of an unlikely partnership between Ton-That, a small-time hacker turned serial app developer, and Richard Schwartz, a former adviser to then–New York mayor Rudy Giuliani. Ton-That told the Times that they met at a 2016 event at the Manhattan Institute, a conservative think tank, after which they decided to build a facial recognition company.

While Ton-That has erased much of his online persona from that time period, old web accounts and posts uncovered by BuzzFeed News show that the 31-year-old developer was interested in far-right politics. In a partial archive of his Twitter account from early 2017, Ton-That wondered why all big US cities were liberal, while retweeting a mix of Breitbart writers, venture capitalists, and right-wing personalities.

It is revealing that the people behind tools that are borderline unethical and threaten our privacy expectations often also happen to be aggressively protective of their own privacy.

Nelson Minar, building on a report published last month by Bellingcat about Yandex’s superior reverse image search:

Right now an ordinary person still can’t, for free, take a random photo of a stranger and find the name for him or her. But with Yandex they can. Yandex has been around a long time and is one of the few companies in the world that is competitive to Google. Their index is heavily biased to Eastern European data, but they have enough global data to find me and Andrew Yang.

If you use Google Photos or Facebook you’ve probably encountered their facial recognition. It’s magic, the matching works great. It’s also very limited. Facebook seems to only show you names for faces of people you have some sort of Facebook connection to. Google Photos similarly doesn’t volunteer random names. They could do more; Facebook could match a face to any Facebook user, for instance. But both services seem to have made a deliberate decision not to be a general purpose facial recognition service to identify strangers.

At the time that I linked to the Bellingcat report, I wondered why Google’s reverse image recognition, in particular, was so bad in comparison. In tests, it even missed imagery from Google Street View, despite Google regularly promoting its abilities in machine learning, image identification, and so on. It is now clear to me that this is no oversight: Google’s image search is so bad at this because Google designed it that way. Otherwise, it would have launched something like Yandex’s search or Clearview AI, and that would be dangerous.

Google’s restraint is admirable. What’s deeply worrying is that it is optional — that Google could, at any time, change its mind. There are few regulations in the United States that would prevent Google or any other company from launching its own invasive and creepy facial recognition system. Recall that the only violation that could be ascribed to Clearview’s behaviour — other than an extraordinary violation of simple ethics — is that the company scraped social media sites’ images without permission. It’s a pretty stupid idea to solely rely upon copyright law as a means of reining in facial recognition.

Kashmir Hill, New York Times:

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.

This investigation was published on Saturday. I’ve read it a few times and it has profoundly disturbed me on every pass, but I haven’t been surprised by it. I’m not cynical, but it doesn’t surprise me that an entirely unregulated industry motivated to push privacy ethics to their revenue-generating limits would move in this direction.

Clearview’s technology makes my skin crawl; the best you can say about the company is that its limited access prevents the most egregious privacy violations. When something like this is more widely available, it will be dangerous for those who already face greater threats to their safety and privacy — women, in particular, but also those who are marginalized for their race, skin colour, gender, and sexual orientation. Nothing will change on this front if we don’t set legal expectations that limit how technologies like this may be used.