Once just an element of science fiction, facial recognition has become a popular commodity. The technology has developed from an obscure 1960s lab project into an asset for governments, law enforcement, businesses, and consumers worldwide. You’ve likely used it in the past day – it has become a key part of Apple’s devices, Facebook’s platforms, and even your Snapchat dog filter.

Beyond that, you’ve probably also been affected in ways you may never know. If you are an American adult, there is roughly a 50 per cent chance that a biometric scan of your face is held in a database somewhere. Of course, you likely did not consent to this – a fact that the international community has begun to recognize.

Advocacy groups such as the American Civil Liberties Union (ACLU) have recently taken steps to address what many call the “Pandora’s box” of facial recognition. In their view, the technology is dangerously unchecked, legally unregulated, and ripe for abuse. In its current state, it violates privacy and tramples civil liberties, among other concerns.

These claims begin with the mass, nonconsensual accumulation of people’s biometric facial data. In a cutthroat market to build the most effective software, competitors require massive training sets of faces to increase the accuracy and versatility of what the AI is able to “recognize”. Without regulation, corners are cut. Notoriously, companies have covertly extracted personal photos via their mobile photo apps. This collection occurs without permission or disclosure, in violation of the International Covenant on Civil and Political Rights.

In the US, the use of facial recognition has sparked a debate on constitutionality as well. Having your face recognized is a form of identification, which creates conflicts with the First Amendment. The FBI itself has suggested that the use of these systems can easily “lead to self-censorship and inhibition”. Anonymity acts as an important assurance that people will not be prosecuted for the beliefs they express – part of the “vital relationship between freedom to associate and privacy in one’s associations.” Without the right to anonymity, freedom of speech is impaired.

In a 2016 report, researchers illustrate how invasive this process of identification is. Once the scan of a subject’s face reveals who that person is, the system can scour the internet for access to their data and digital footprint. Facial recognition therefore also constitutes a type of “search.” By scanning your face, someone can extract information about who you are, what you are doing, and so on. It is the digital equivalent of having your license plate read and your car subsequently searched.

The ease with which this “search” can be achieved – instantaneously and without the awareness of the subject – means that it can be deployed indiscriminately and without accountability. In other words, law enforcement can use it to search without reason or probable cause, despite the protections of the Fourth Amendment. This dilemma highlights another way in which technology poses a challenge to constitutional standards.

Facial recognition surveillance also facilitates discrimination. Despite assurances by developers, the systems are often deeply flawed technically. Specifically, their accuracy varies along racial and gender lines: in practice, certain demographics yield higher error rates, or “false positive” identifications, than others. These groups consequently run a higher risk of being mistaken for somebody else and wrongly convicted of a crime.

Statistically, minorities are the ones who suffer the consequences. According to one study, women of colour yield an error rate of 23.8 per cent to 36 per cent, whereas Caucasian males yield a rate of 0.0 per cent to 1.6 per cent. To expose these flaws, the ACLU publicly conducted a test, running photos of members of the US Congress against a database of convicted felons. Using a system marketed to law enforcement agencies, the test produced 28 false positives, a disproportionate share of them Congresspeople of colour.

Interpretations of the origins of such discrepancies vary; many, however, believe they are a product of the environment in which the systems were developed – an industry dominated by white men. Under this line of thought, the developers’ human bias is reflected in the performance of the systems they create. People are naturally better at identifying members of their own demographic, and because a relatively homogeneous group of developers assembles and validates the training data, their systems are predisposed to recognizing white males and misidentifying minorities.

This challenges the idea that AI performs impartially, free from the biases of human thought. Instead, it affirms that AI is code constructed by humans, and inevitably reflects the socially constructed biases that dominate the way humans think. In practice, this means that without regulatory performance standards, facial recognition increases the risk that minorities will be disproportionately convicted of crimes. This reinforces patterns of systemic oppression in the United States and internationally.

While openly wary of the dangers that misused facial recognition systems can bring, groups such as the ACLU do not advocate an unconditional end to their use. For one, that is practically impossible: with the industry expected to be worth $9 billion by 2022, the technology has unstoppable economic momentum. In addition, transparent use in places like airport security and voting booths, where identification already takes place, has been shown to drastically reduce rates of crime and fraud. Personal uses, such as unlocking your iPhone, bring huge convenience with little risk.

The line is drawn, however, when the technology is used without transparency and in ways that compromise privacy, safety, and civil liberties. Just as courts have addressed other controversial technologies like geotracking and drones, the ACLU is pushing for regulatory legal oversight of facial recognition. Their goal is to ensure that the people – those with their faces on the line – are the ones who dictate how the technology is used. They hope to guarantee not only that this use is safe, but that the dystopian surveillance state remains an element of science fiction.

Edited by Rebecka Pieder.

The opinions expressed in this article are solely those of the author and they do not reflect the position of the McGill Journal of Political Studies or the Political Science Students’ Association. 

Image via Flickr Creative Commons