A Legal Framework for Artificial Intelligence

The current legal framework for AI can be grouped as follows:

(1) regulation specific to AI technology (e.g. automated decision making, facial recognition);

(2) regulation specific to a use case or industry application (e.g. finance, health, human resources);

(3) legal accountability for (unintended) consequences of the use of AI (e.g. criminal, civil); and

(4) voluntary ethics codes.

Regulations specific to AI technology are being introduced or proposed, such as those directed to facial recognition software. For example, some cities and regions have proposed banning the use of facial recognition technology by police and other municipal agencies, and a major body camera company voluntarily banned facial recognition software. Blunt regulatory instruments that apply to AI should clearly define the technology and limit the fields of use, to avoid overly broad application that stifles research and development.

There are also regulations specific to a use case for AI technology, such as healthcare and finance. For example, the regulatory approach to a medical decision support tool with AI software might change if the assumptions or limitations of the software are made clear, including limitations of the training data, the selection of features, and algorithmic assumptions. As another example, the use of AI as a human resources tool for hiring and promotion is subject to employment and discrimination laws.

In some cases, AI software can change as a result of machine learning, which can lead to unintended consequences such as privacy violations, criminal liability, and reputational risk. An autonomous vehicle can cause property damage. AI can generate fake images and videos that can be used to spoof facial authentication systems and commit theft, for example.

Artificial intelligence poses novel ethical considerations. These complex systems automate decisions that were traditionally in the human realm. Law attempts to codify policies, which are often driven by ethical or moral principles. There is now a (very) long list of voluntary codes for AI ethics, including ethical frameworks, principles, oaths, tool kits, and declarations. Some of the voluntary codes are directed to a global audience, while others are country specific or directed to a use case. For example, the UK has published a Data Ethics Framework, and a US Department of Defense report outlines principles for ethical AI: responsible, equitable, traceable, reliable, and governable. The principles are often described in broad terms, which makes it difficult to operationalize these codes internally. Compliance and enforcement are also challenges. A company might make misleading statements such as “only using ethical AI” or “developing AI for good” even though its operations are not in compliance with the relevant voluntary code.

In some instances, AI can be used to enforce AI ethics. For example, so-called audio “deepfakes” are computer-generated audio clips that mimic a human voice. Canadian company Dessa built a deepfake detector to help combat misuse. To discern between real and fake audio, the detector uses visual representations of audio clips called spectrograms, which are also used to train speech synthesis models. While real and fake audio sound basically identical to the unsuspecting ear, their spectrograms appear different from one another to the detector. See https://medium.com/dessa-news/detecting-audio-deepfakes-f2edfd8e2b35
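To make the spectrogram idea concrete, the following is a minimal Python sketch of a spectrogram-based audio classifier. It is illustrative only and is not Dessa’s actual implementation: it assumes the librosa library for audio feature extraction, a small scikit-learn model standing in for the real detector, and hypothetical file names for the labeled clips.

    # A minimal sketch of spectrogram-based deepfake audio detection.
    # NOT Dessa's implementation: librosa and a logistic regression
    # classifier are assumptions standing in for the real detector.
    import numpy as np
    import librosa
    from sklearn.linear_model import LogisticRegression

    def spectrogram_features(path, sr=16000):
        """Load an audio clip and summarize its mel spectrogram as a feature vector."""
        audio, _ = librosa.load(path, sr=sr)
        mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
        log_mel = librosa.power_to_db(mel)  # log scale, like a visual spectrogram
        # Summarize each frequency band over time (mean and spread)
        # into a fixed-size vector, so clips of any length compare.
        return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

    # Hypothetical labeled clips: 1 = real human speech, 0 = synthesized audio.
    paths = ["real_1.wav", "real_2.wav", "fake_1.wav", "fake_2.wav"]
    labels = np.array([1, 1, 0, 0])

    X = np.stack([spectrogram_features(p) for p in paths])
    detector = LogisticRegression(max_iter=1000).fit(X, labels)

    # Score a new clip: estimated probability that it is real human speech.
    score = detector.predict_proba(spectrogram_features("unknown.wav")[None, :])[0, 1]
    print(score)

A production detector would use a far larger labeled dataset and a deep neural network rather than logistic regression, but the pipeline is the same: convert audio into spectrogram features, then train a model to separate real from synthesized speech.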

Given the widespread adoption of AI, new law will be created that hopefully maximizes its benefits and reduces harm. Companies developing or deploying AI should diligently track legal updates.


Written by Maya Medeiros, a partner, lawyer, patent agent, and trade-mark agent at Norton Rose Fulbright Canada LLP (Toronto). Maya Medeiros’ practice focuses on the creation and management of intellectual property assets in Canada, the United States and around the world.

Reposted with permission from the author. Originally published on Social Media Law Bulletin.