Privacy Commissioners Reprimand Clearview AI: What’s Next for Facial Recognition?

“He is seen, but he does not see; he is the object of information, never a subject in communication…Hence the major effect of the Panopticon: to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power” – Michel Foucault (Discipline and Punish, 1975)

In 2019, Clearview AI began licensing facial recognition software to law enforcement agencies in Canada and the United States. In addition to regional police forces, the Royal Canadian Mounted Police (RCMP) used it to conduct child sexual exploitation investigations. At first, the product seemed promising: Clearview AI’s application helped law enforcement officers track down otherwise unidentifiable criminals. However, when news reports began to circulate raising questions and concerns about Clearview AI’s facial recognition technology, the Privacy Commissioners of Canada, Alberta, British Columbia and Quebec launched a joint investigation into the company.

The problem with Clearview AI’s application is that it requires an enormous biometric dataset to operate. To build that dataset, Clearview AI has scraped the Internet, collecting over 3.3 billion images of faces and associated data from publicly accessible online sources, including Facebook, YouTube and Instagram. Clearview AI uses facial recognition software to create a biometric array for each image. When a user uploads a photograph, Clearview AI computes its biometric data and retrieves images with corresponding arrays from its database. Because each image in the database carries metadata and a link to its original source, users can cross-reference and identify people from images found online.
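To make the mechanics concrete, the sketch below illustrates the general idea behind this kind of matching: each face image is reduced to a numeric “biometric array” (an embedding vector), and a query photo is compared against the database to find the closest matches. This is a minimal illustration of embedding-based similarity search, not Clearview AI’s actual system; the database size, embedding dimension, and source URLs are hypothetical placeholders.

```python
# Illustrative sketch of embedding-based face matching (NOT Clearview AI's
# actual system). A face-recognition model would turn each image into an
# embedding vector; matching then reduces to nearest-neighbour search.
import numpy as np

EMBEDDING_DIM = 128  # hypothetical embedding size

# Hypothetical database: one embedding per scraped image, plus its source link.
rng = np.random.default_rng(0)
database_embeddings = rng.normal(size=(1000, EMBEDDING_DIM))
database_sources = [f"https://example.com/photo/{i}" for i in range(1000)]


def normalize(vectors: np.ndarray) -> np.ndarray:
    """Scale vectors to unit length so dot products equal cosine similarity."""
    return vectors / np.linalg.norm(vectors, axis=-1, keepdims=True)


def match(query_embedding: np.ndarray, top_k: int = 5):
    """Return the source links of the top_k most similar database images."""
    scores = normalize(database_embeddings) @ normalize(query_embedding)
    best = np.argsort(scores)[::-1][:top_k]
    return [(database_sources[i], float(scores[i])) for i in best]


# In practice a face-recognition model would compute the query embedding
# from an uploaded photograph; a random vector stands in for that step here.
query = rng.normal(size=EMBEDDING_DIM)
for url, score in match(query):
    print(f"{score:.3f}  {url}")
```

Because every stored embedding links back to its original web source, a single match can reveal names, social media profiles, and other personal information tied to the photograph.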

Now, almost a year after news first broke about Clearview AI, Canada’s Privacy Commissioners have concluded their investigation into the company. In a report issued on February 2, 2021, the Commissioners collectively condemned Clearview AI for collecting, using and disclosing personal information without the requisite consent. As a private sector entity subject to Canadian privacy laws, Clearview AI contravened principle 4.3 of Schedule 1 and section 6.1 of PIPEDA, section 7(1) of PIPA AB, sections 6-8 of PIPA BC, and sections 6 and 12-14 of Quebec’s Private Sector Act.

Clearview AI ceased offering its services in Canada last year. However, many are now calling on the Trudeau government to ban federal law enforcement and intelligence agencies from using facial recognition for surveillance purposes altogether. Last July, individuals and organizations representing privacy, human rights and civil liberties advocates penned an open letter to Public Safety Minister Bill Blair, calling on the federal government to “[e]stablish clear and transparent policies and laws regulating the use of facial recognition in Canada, including reforms to the Personal Information Protection and Electronic Documents Act (PIPEDA) and the Privacy Act.”

Law enforcement agencies can now identify suspects in a matter of seconds. With facial recognition, a single photograph is enough to obtain a wealth of personal information about an individual. However, everything comes at a cost, and using facial recognition for law enforcement is no exception. If I have learned anything from Michel Foucault, it is that collective security should not come at the expense of individual autonomy. Do you disagree?

Lamont Abramczyk is a JD Candidate at Osgoode Hall Law School. He is the Deputy Director of the Osgoode Art Law Society and an IP Osgoode Innovation Clinic Fellow.