Pixel Envy

Written by Nick Heer.

Unproven Biometrics Are Increasingly Being Used at Border Crossings and by Law Enforcement

Hilary Beaumont, the Walrus:

In recent years, and whether we realize it or not, biometric technologies such as face and iris recognition have crept into every facet of our lives. These technologies link people who would otherwise have public anonymity to detailed profiles of information about them, kept by everything from security companies to financial institutions. They are used to screen CCTV camera footage, for keyless entry in apartment buildings, and even in contactless banking. And now, increasingly, algorithms designed to recognize us are being used in border control. Canada has been researching and piloting facial recognition at our borders for a few years, but — at least based on publicly available information — we haven’t yet implemented it on as large a scale as the US has. Examining how these technologies are being used and how quickly they are proliferating at the southern US border is perhaps our best way of getting a glimpse of what may be in our own future—especially given that any American adoption of technology shapes not only Canada–US travel but, as the world learned after 9/11, international travel protocols.


Canada has tested a “deception-detection system,” similar to iBorderCtrl, called the Automated Virtual Agent for Truth Assessment in Real Time, or AVATAR. Canada Border Services Agency employees tested AVATAR in March 2016. Eighty-two volunteers from government agencies and academic partners took part in the experiment, with half of them playing “imposters” and “smugglers,” which the study labelled “liars,” and the other half playing innocent travellers, referred to as “non-liars.” The system’s sensors recorded more than a million biometric and nonbiometric measurements for each person and spat out an assessment of guilt or innocence. The test showed that AVATAR was “better than a random guess” and better than humans at detecting “liars.” However, the study concluded, “results of this experiment may not represent real world results.” The report recommended “further testing in a variety of border control applications.” (A CBSA spokesperson told me the agency has not tested AVATAR beyond the 2018 report and is not currently considering using it on actual travellers.)
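“Better than a random guess” is a remarkably low bar once base rates are taken into account. A quick sketch, using entirely hypothetical numbers rather than anything reported in the CBSA study, shows why a screening system that beats chance can still flag mostly innocent travellers:

```python
# Illustrative base-rate arithmetic. All figures are hypothetical
# assumptions for the sake of the example, not CBSA or AVATAR data.

travellers = 1_000_000
prevalence = 0.005    # assume 0.5% of travellers are actual "liars"
sensitivity = 0.80    # assumed true-positive rate (beats a coin flip)
specificity = 0.80    # assumed true-negative rate

liars = round(travellers * prevalence)
honest = travellers - liars

true_positives = round(liars * sensitivity)
false_positives = round(honest * (1 - specificity))

flagged = true_positives + false_positives
precision = true_positives / flagged

print(f"travellers flagged: {flagged:,}")
print(f"share of flagged who are actual 'liars': {precision:.1%}")
```

Under these assumptions, roughly 203,000 travellers get flagged and only about two percent of them are the people the system is looking for. The rarer the behaviour being screened for, the worse this gets.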

These technologies are deeply concerning from a privacy perspective. The risks of their misuse are so great that their implementation should be prohibited — at least until a legal framework is in place, though I think the ban should be permanent. There is no reason we should test them on a “trial” basis; biometric systems solve no new problem by being deployed sooner.

But I am curious about our relationship with their biases and accuracy. The fundamental concerns about depending on machine learning boil down to two questions: whether suspicions about its reliability are grounded in reality, and whether we become less inclined to examine its results in depth. I have always been skeptical of machines replacing humans in jobs that require a high level of judgement. But I began questioning that broad assumption last summer after reading a convincing argument from Aaron Gordon at Vice that speed cameras are actually fine:

Speed and red light cameras are a proven, functional technology that make roads safer by slowing drivers down. They’re widely used in other countries and can also enforce parking restrictions like not blocking bus or bike lanes. They’re incredibly effective enforcers of the law. They never need coffee breaks, don’t let their friends or coworkers off easy, and certainly don’t discriminate based on the color of the driver’s skin. Because these automated systems are looking at vehicles, not people’s faces, they avoid the implicit bias quandaries that, say, facial recognition systems have, although, as Dave Cooke from the Union of Concerned Scientists tweeted, “the equitability of traffic cameras is dependent upon who is determining where to place them.”

Loath as I am to admit it, Gordon and the researchers in his article have a point. Few situations are as unambiguous as a vehicle speeding or running a red light. If the equipment is accurately calibrated and there is ample amber light time, the biggest frustration for drivers is that they can no longer speed with abandon or race through changing lights — which are things they should not have been doing in any circumstances. I am not arguing that we should put speed cameras every hundred metres on every road, nor that punitive measures are the only or even the best behavioural correction, merely that these cameras can actually reduce bias. Please do not send hate mail.

Facial recognition, iris recognition, gait recognition — these biometric methods are clearly more complex than identifying whether a car was speeding. But I have to wonder whether some assume a linear and logical progression from one to the other, when there simply is not one. Biometrics are more like forensics, and courtrooms still accept junk science. Machine learning, it seems, does little more than disguise the assumptions involved in matching one part of a person’s body or behaviour to their entire self.

It comes back to Maciej Cegłowski’s aphorism that “machine learning is money laundering for bias”:

When we talk about the moral economy of tech, we must confront the fact that we have created a powerful tool of social control. Those who run the surveillance apparatus understand its capabilities in a way the average citizen does not. My greatest fear is seeing the full might of the surveillance apparatus unleashed against a despised minority, in a democratic country.

What we’ve done as technologists is leave a loaded gun lying around, in the hopes that no one will ever pick it up and use it.

Well, we’re using it now, and we have done little to ensure there are no bystanders in the path of the bullet.