Artificial Intelligence and Automation Bias

Eyal Press, the New Yorker:

In June, an appellate court ordered the N.Y.P.D. to turn over detailed information about a facial-recognition search that had led a Queens resident named Francisco Arteaga to be charged with robbing a store. The court requested both the source code of the software used and information about its algorithm. Because the technology was “novel and untested,” the court held, denying defendants access to such information risked violating the Brady rule, which requires prosecutors to disclose all potentially exculpatory evidence to suspects facing criminal charges. Among the things a defendant might want to know is whether the photograph that had been used in a search leading to his arrest had been digitally altered. DataWorks Plus notes on its Web site that probe images fed into its software “can be edited using pose correction, light normalization, rotation, cropping.” Some systems enable the police to combine two photographs; others include 3-D-imaging tools for reconstructing features that are difficult to make out.

This example is exactly why artificial intelligence needs regulation. Many paragraphs in this piece contain alarming details about overconfidence in facial recognition systems, but proponents of letting things play out under current laws might chalk that up to human fallibility. Yes, software might present an overly rosy impression of its capabilities, one might argue, but it is ultimately the operator’s responsibility to cross-check things before making an arrest and putting an innocent person in jail. After all, there are similar problems with lots of forensic tools.

Setting aside how much incentive makers of facial recognition software have to be overconfident in their products, and how much leeway law enforcement seems to give them (agencies kept signing contracts with Clearview, for example, even after stories of false identifications and wrongful arrests based on its technology), one could at least believe searches use photographs. But that is not always the case. As Press reports, DataWorks Plus markets tools that allow searches using synthesized faces based on real images, but you will not find that on its website. When I went looking, the page where it appeared seems to have been pulled; happily, the Internet Archive captured it. You can see in its examples how the “pose correction” feature fills in the entire right-hand side of someone’s face.

It is plausible to defend this as just a starting point for an investigation, a way to generate leads. If it does not pan out, no harm, right? But it does seem to this layperson like a computer making its best guess about someone’s facial features is not an ethical way of building a case. That is especially true when we do not know how systems like these work, and it does not inspire confidence that there are no specific standards with which “A.I.” tools must comply.