On February 21st, the federal Office of the Privacy Commissioner of Canada and its three provincial counterparts in British Columbia, Quebec, and Alberta announced that they are teaming up to investigate whether Clearview AI's powerful facial recognition software breaks Canadian law.

The investigation into the U.S. firm was initiated in the wake of media reports alleging that the company had amassed over 3 billion images without consent.

Canada currently does not have a policy regarding the collection of biometrics, which are physical characteristics that can be used to identify people digitally. Due to a lack of regulations, facial recognition in Canada is not subject to minimum standards for privacy, mitigation of risk, or public transparency. 

In this legal vacuum, several police departments across Canada have started using facial recognition technology. The Calgary Police Service became the first police agency in Canada to adopt facial recognition software, deploying it as a screening tool in 2014, while the Toronto Police Service ran a pilot project in 2018 that led to the identification of previously unknown criminal suspects.

Although the pilot began in 2018, Toronto's use of facial recognition technology was not made known to the public until after the project had ended, leading civil liberties advocates to raise concerns about the lack of transparency surrounding the technology.

Clearview AI’s facial recognition software appears to pose a much larger privacy threat to Canadians than any other facial recognition software tested in Canada. While Toronto and Calgary’s mugshot inventories are limited to 1.5 million and 300,000 images respectively, Clearview AI has amassed over 3 billion images that the company claims it obtained from social media websites such as Facebook, Instagram, and YouTube. Clearview AI’s powerful technology can also reveal items of personal information, such as a person’s name, phone number, or address, based on nothing more than a photo, even if the person has no criminal record.

Further complicating the issue is the fact that police agencies across Canada have been testing Clearview AI’s technology without public knowledge, and often without systematic oversight from the police agencies themselves. On February 13th, Toronto Police Service spokesperson Meaghan Gray stated that several officers had begun informally testing Clearview AI in October 2019, unbeknownst to Toronto Police Chief Mark Saunders. Although Gray stated that Saunders directed officers to stop using the technology as soon as he was made aware of the tests, the admission came just one month after Toronto police had denied using facial recognition through Clearview AI.

Similarly, the Edmonton Police Service is conducting a review after three of its officers used Clearview AI’s software before the department had approved the technology. The officers learned about the software at a law enforcement conference and obtained a code to try out the technology on a trial basis, without any organizational oversight.

York Regional Police has also admitted to using Clearview AI after previously denying using the controversial software. York Regional Police stated that it had been unaware that several of its officers were using Clearview AI’s free trial without the authorization of the police department.

In January, the RCMP released a statement that it would neither confirm nor deny its use of Clearview AI’s technology, saying that the RCMP “does not comment on specific investigative tools or techniques.” However, in the wake of heightened privacy concerns and the joint investigation by federal and provincial privacy watchdogs, the RCMP confirmed on February 27th that the police force had used Clearview AI’s software in fifteen child exploitation investigations over the previous four months, resulting in the identification and rescue of two children. In light of the RCMP’s announcement, the Office of the Privacy Commissioner of Canada has opened a separate investigation into the RCMP’s use of facial recognition technology.

Since 2017, Clearview AI has explicitly marketed itself to law enforcement agencies across Canada and the United States as a powerful tool with the potential to revolutionize crime-solving by identifying suspects in a matter of seconds. In doing so, the US firm has broken a long-standing taboo surrounding facial recognition technology. In 2011, Eric Schmidt, Google’s CEO at the time, stated that although the company had built software capable of facial recognition, it had refrained from releasing the technology out of fear that it could fall into the wrong hands.

In general, beyond privacy concerns, tech companies have been hesitant to develop facial recognition software due to the unintended consequences of “false positive” matches in criminal cases. Artificial intelligence and facial recognition technology have been shown to be susceptible to racial and gender biases, which risks perpetuating the systemic biases that have historically plagued Canadian law enforcement agencies.

Although Clearview AI claims a 75 per cent success rate in finding suspect matches from surveillance cameras or witnesses’ pictures, the rate of false matches remains unknown, as the technology has not been tested by an independent third party.

While Clearview AI is not available for public use, privacy advocates warn that the likelihood of a copycat app is high, given the growing normalization of the technology and the prospect of a lucrative market. In the wake of this proliferation of facial recognition technology, privacy commissioners across Canada have been urging federal and provincial governments to modernize privacy laws to keep pace with technological change.

Although Clearview AI claims to work only with law enforcement agencies, a recent data breach exposed the firm’s client list, revealing that companies such as Best Buy and Walmart, as well as the NBA, have used the technology on a trial basis. As Canada begins to investigate the use of facial recognition technology by law enforcement, the imperative for more stringent and robust privacy laws will only grow stronger.

Edited by Eyitayo Kunle-Oladosu.

The opinions expressed in this article are solely those of the author and they do not reflect the position of the McGill Journal of Political Studies or the Political Science Students’ Association. 


Image via Flickr Creative Commons.