The Toronto police are facing controversy after members admitted to using Clearview AI – a ground-breaking facial recognition app that, according to a New York Times investigation, could “end privacy as we know it.” To use the system, you take a picture of a person, upload it, and the app returns public photos of that person along with other identifying information such as their name, phone number, and address. The database comprises more than three billion images that Clearview claims to have scraped from Facebook, Instagram, Venmo, and millions of other websites.
Clearview AI has been used by police services in the United States to solve identity theft, credit card fraud, and assault cases. Canadian police services, on the other hand, have been quiet about the extent to which Clearview AI has been used. While the benefits of such a tool to law enforcement in identifying criminals are apparent, its use raises significant privacy concerns.
Ann Cavoukian, Ontario’s former Information and Privacy Commissioner, says that scraping information from these social media sites amounts to stealing. Although the information shared with LinkedIn, Facebook, and other social media websites can be made publicly available, their terms of service prohibit people from scraping users’ images. In response to the New York Times story, Twitter sent a cease-and-desist letter demanding that Clearview stop taking data from the social media website “for any reason” and delete data previously collected.
The use of facial recognition technology by the police is concerning because the AI is not perfect. Studies have repeatedly found racial bias in facial recognition technology. For example, a recent study from the National Institute of Standards and Technology in the US found that many facial recognition systems have a higher rate of false positives for people of colour than for white people – that is, these systems reported matches where there were none. In a policing context, this type of inaccuracy could lead to people of colour being misidentified as suspects in crimes they did not commit.
Indigenous peoples are already overrepresented in our criminal justice system, and systems that reflect racial bias could further exacerbate this issue. The fact that Clearview AI’s accuracy has not been tested by an independent third party is concerning given its use by police services across Canada and the US.
Following reports of its use by Toronto police, Chief Mark Saunders ordered officers to stop using Clearview AI. Similarly, the Ontario Privacy Commissioner released a statement saying that police services should stop using this technology until the Commissioner’s office can examine it.
But the cat is out of the bag. Facial recognition has become normalized in everyday life, from unlocking phones to applying face filters. Before Clearview’s emergence, technology companies capable of building a program of this calibre, like Google, declined to do so because of the potential for abuse. Barring a federal or provincial law banning this type of technology, as San Francisco has done at the municipal level in the United States, we will likely have to figure out how to regulate its use, and fast.
Regulating these corporations is complicated, since third parties must be regulated as well, and it is not always clear with which third parties information is shared. Some legal scholars assert that our current system is broken because it rests on an underlying assumption that any use of data is permissible absent explicit government regulation. Instead, there should be a regulatory framework in which organizations must gain approval from the Privacy Commissioner before implementing such technology, given the potential for misidentification and abuse.
Government institutions should also be transparent with the public about how technology is used. PEI, for example, only recently disclosed that it has been using facial recognition technology since 2007 to prevent identity theft and to stop suspended drivers from obtaining a licence. Toronto police have likewise used a more limited form of facial recognition for years: essentially a virtual book of approximately 1.5 million mugshots, lawfully obtained and vetted by the Privacy Commissioner.
Some legal commentators have called for a moratorium on “governmental and commercial use of facial recognition” until the risks are thoroughly studied, giving lawmakers time to put adequate regulations in place. Given that Canada’s data privacy and digital governance laws have not kept pace with technological advances, this may be a viable option. Ultimately, the onus may fall on individuals to become aware of the repercussions of the terms they consent to in their everyday lives.
Written by Nikita Munjal, guest editor and JD candidate at Osgoode Hall Law School