Jerome Pesenti, VP at Meta:
In the coming weeks, we will shut down the Face Recognition system on Facebook as part of a company-wide move to limit the use of facial recognition in our products. As part of this change, people who have opted in to our Face Recognition setting will no longer be automatically recognized in photos and videos, and we will delete the facial recognition template used to identify them.
But like most challenges involving complex social issues, we know the approach we’ve chosen involves some difficult tradeoffs. For example, the ability to tell a blind or visually impaired user that the person in a photo on their News Feed is their high school friend, or former colleague, is a valuable feature that makes our platforms more accessible. But it also depends on an underlying technology that attempts to evaluate the faces in a photo to match them with those kept in a database of people who opted-in. The changes we’re announcing today involve a company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication.
Good. Pesenti says this will affect over a billion users, or about one-third of Facebook’s user base. When the feature launched in 2010, users were opted into it by default; it took until 2019 for the company to require that users switch it on themselves. The risks of facial recognition — stalking, abuse, false matches, devalued privacy — are too great to be left to ad hoc regulatory intervention.
The format of this announcement is interesting. It tries desperately to strike a positive tone, with several paragraphs citing specific examples of the benefits of facial recognition and only gesturing at its potential for harm and abuse. I am glad Facebook sees so many great uses for it; I see them, too. But I wish the company were anywhere near as specific in acknowledging the harms. As presented, the announcement reads as defensive.
Kashmir Hill and Ryan Mac, New York Times:
When the Federal Trade Commission fined Facebook a record $5 billion to settle privacy complaints in 2019, the facial recognition software was among the concerns. Last year, the company also agreed to pay $650 million to settle a class-action lawsuit in Illinois that accused Facebook of violating a state law that requires residents’ consent to use their biometric information, including their “face geometry.”
While Facebook says it is deleting the information used to recognize individual faces, it is not clear whether the products of that data will — or even can — be deleted. If the faces of a billion users have already been incorporated into machine learning models, their influence is likely inseparable from those models and impossible to revoke.