Photo by: Antoine Beauvillain (Unsplash)
Junghi Woo is an IPilogue Writer and a 3L JD Candidate at Osgoode Hall Law School
We’ve come a long way from the days when the only faces in a government database were the driver’s licence photos we were legally required to sit for, now stored in provincial facial recognition data banks. Facial recognition technology, despite various controversies, has kept its foothold in society. Its biggest client? Law enforcement.
Despite Canadian privacy regulators finding its practices unlawful, ongoing investigations, and multiple class-action lawsuits in Britain and Australia, the U.S. start-up Clearview AI raised $30 million from investors, a sign that it is here to stay. Facial recognition technology is only going to get more precise, but at what cost?
How does it work?
While facial recognition may have various uses, the general process is the same: the software first builds a database of photos and videos scraped from the internet, then uses biometrics and matching algorithms to identify the same people in other photos and videos. In particular, law enforcement can use it to “identify” faces captured by surveillance cameras.
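At its core, the matching step works by converting each face into a numeric “embedding” and comparing embeddings for similarity. The sketch below is a simplified illustration only, assuming hypothetical two-dimensional embeddings and an arbitrary cosine-similarity threshold; real systems use proprietary models, high-dimensional vectors, and tuned thresholds:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_match(probe, database, threshold=0.9):
    """Return the ID of the most similar enrolled face, or None.

    `probe` is the embedding of the face to identify; `database`
    maps person IDs to enrolled embeddings. The threshold is meant
    to guard against declaring a match for a merely similar stranger,
    but as the cases below show, it is not a guarantee.
    """
    best_id, best_score = None, threshold
    for person_id, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

The key point for the discussion that follows: a “match” is just a similarity score crossing a threshold, not a certainty, which is why downstream reliance on these outputs matters so much.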
Privacy Concerns: Did I consent to this?
Most likely not. Clearview AI, for example, collected billions of images of people from the internet with no indication that it did so lawfully. This led to Canada’s investigation into the RCMP’s use of Clearview AI, which resulted in the company halting its services in Canada. Four privacy commissioners have stated that Clearview AI violated privacy laws by collecting data without consent and using personal information for inappropriate purposes. Clearview AI disagreed with these findings and was unwilling to follow the privacy authorities’ recommendations. You are probably thinking back to all those Snapchats of your face, your selfie uploads, and your phone’s facial recognition unlock setting. I am too.
Biases in Facial Recognition Technology
Technology can’t be perfect. One of the main issues to consider is whom technology sets out to serve. Studies have demonstrated clear inequities in face recognition algorithms: the 2018 “Gender Shades” project found that commercial facial analysis systems performed worst on individuals with darker skin tones, and worst of all on darker-skinned women.
Where does this become a serious problem? Ultimately, law enforcement claims to use this technology to “identify criminals”. What happens when law enforcement relies on this technology, despite its proven inaccuracies, when encountering members of racialized groups? Facial recognition technology is more likely to produce false matches for people of certain races. A prime example is the NYPD’s use of facial recognition technology during the Black Lives Matter protests earlier this year.
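The disparity researchers describe is measurable: audits compare how often a system produces a false match for members of each demographic group. A minimal sketch of that calculation, using entirely made-up audit records (the group labels and numbers here are illustrative, not real study results):

```python
def false_match_rate_by_group(results):
    """Compute the false-match rate for each demographic group.

    `results` is a list of (group, was_false_match) records from
    testing a matcher on pairs of different people; in a real audit
    these records would come from a controlled benchmark dataset.
    """
    totals, errors = {}, {}
    for group, was_false_match in results:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + int(was_false_match)
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative (hypothetical) audit records: (group, was_false_match)
audit = [
    ("Group A", True), ("Group A", False),
    ("Group B", False), ("Group B", False),
]
# false_match_rate_by_group(audit) -> {"Group A": 0.5, "Group B": 0.0}
```

When these per-group error rates diverge, equal reliance on the tool produces unequal harm, which is exactly the pattern at issue in the cases below.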
Robert Julian-Borchak Williams from Michigan was the first person known to be wrongfully arrested based on a flawed match from a facial recognition algorithm. Police, relying on surveillance video footage of a shoplifter, handcuffed Mr. Williams on his front lawn in front of his whole family. Mr. Williams was then driven to a detention centre, had his mugshot, fingerprints, and DNA taken, was held overnight, and was interrogated the next day. All because of a faulty system that could not differentiate between two Black men.
Trade Secrets Implications
Taking this a step further, we look to another concern with facial recognition technology: the lack of transparency in its processes. In the U.S., forensic algorithm vendors (including those who sell facial recognition technology) have invoked trade secret protection over their intellectual property to withhold relevant evidence at trial. Yet to properly challenge the reliability of this technology, it is essential to understand how it works. Using intellectual property protections to block that scrutiny does not serve the true purpose of trade secret law, which is to incentivize businesses to innovate.
What does the Future Hold?
Governments in both the U.S. and Canada have started to recognize the risks of using facial recognition technology and to ask whether its supposed “benefits” truly outweigh them. Representative Mark Takano of California recently introduced H.R. 2438, the Justice in Forensic Algorithms Act of 2021, which would prohibit the use of trade secret privileges to withhold relevant evidence in criminal cases.
One may ask, what is facial recognition technology truly useful for? Are any of its uses significant enough to outweigh the serious dangers it poses? Without proper transparency, we cannot ensure reliability. And if essential information rests only in the hands of vendors and law enforcement, what kind of harmful power dynamic does that create in society?