AI & Industries — An Interplay That Hints at the Way to Governance

Virtually every industry now turns to artificial intelligence (“AI”) technologies to streamline processes, enhance performance, and improve service provision. As AI becomes ubiquitous in our everyday lives, we need guidelines to help us navigate the changes these advancements bring to our society. Crafting such a roadmap for AI governance is nonetheless an uphill task: it involves confronting legal, ethical, and social issues that may unfold in unpredictable ways.

“Bracing for Impact – The Artificial Intelligence Challenge (A Road Map for AI Governance in Canada)”, a conference organized by IP Osgoode earlier this year, included a panel entitled “AI & Industries” that covered a wide range of issues already arising from AI technologies and anticipated others looming on the horizon. The panelists shed light on what we may sensibly expect from AI governance in the near future and hinted at what may constitute the cornerstones of effective AI regulation.

Competent practice management

As I boiled down the main threads raised by the panel, I realized they made an interesting case for AI regulation in light of a lawyer’s duty of competence under the Law Society of Ontario’s Rules of Professional Conduct (“Rules”). Practice management by way of available systems, technologies, and other methods is one of the areas of competence in which lawyers must meet minimum standards to ensure they serve clients well, in a timely manner, and at a reasonable cost (see ss. 3.1-1(i) and 3.1-2, and commentaries 15 and 15.1). These three mandates of competent practice management provide an initial framework for addressing the concerns the panelists raised and for gauging whether a given deployment of AI meets minimum ethical standards.

Well-served society

To gauge the appropriateness of AI, one may first need to assess how well the deployment of a specific technology serves society. Generally, the results weigh heavily in favour of the machines, given the high efficiency and low error rates technology offers. Dr. Ronald Cohn, Paediatrician-in-Chief at the Hospital for Sick Children, discussed the benign effects that flow from the synergy between artificial and human intelligence, while Prof. Deirdre Mulligan, Professor at the UC Berkeley School of Information, offered a counterpoint by touching on problems related to inappropriate uses of AI.

Dr. Cohn noted that AI is a tool that allows professionals (in his case, physicians) to make better-informed decisions. AI-enabled tools eliminate neither (medical) school training nor the need for a skill set grounded in first principles (see, though, concerns with deskilling here and here). AI has been crucial, Dr. Cohn added, to the development of medicine’s three main scopes: prediction, prevention, and precision. He remarked that AI-assisted clinical trials are cheaper, faster, and more informative; yet the doctor still bears the decision-making responsibility and supplies the human element that allows for effective communication with patients. As to preventive medicine, Dr. Cohn remarked that AI data-analytics tools permit physicians to quickly identify environmental risk factors and promptly propose responses. Finally, Dr. Cohn noted that the third scope of medicine, precision, has improved substantially with AI technologies, which can identify pattern deviations and provide precise diagnoses that might otherwise take even the most experienced, senior professional years to reach.

Prof. Mulligan presented a counterpoint concerning the inappropriate use of AI, such as when it unduly interferes with professional judgement. Two examples arise in the criminal justice system and the healthcare industry. In both fields, over-reliance on algorithms that are not always free of undue bias and other flaws may produce bad and unfair results. Prof. Mulligan argued, “error avoidance is an ethical imperative, both to maximize positive, short-term consequences and to ensure that, in the long run, informatics is not associated with error or carelessness, or the kind of cavalier stance sometimes associated with high-tech boosterism.” She added, citing K. W. Goodman, that the expansion of the field should be encouraged “but with appropriate levels of scrutiny, oversight, and, indeed, caution”. Transparency and proper justification of decision-making by professionals who rely heavily on AI may offer a way to address undue bias built into the algorithms.

Time is of the essence

Prof. Ryan Calo, Professor at the University of Washington School of Law, raised the question of whether it is premature to think about regulation. Prof. Calo noted that, while it may be premature to impose top-down regulation on every aspect of life that AI touches, a sensible approach to governance involves “watching for opportunities where there is a gap between what law assumes and what is happening on the ground in practice”.

In addition to confronting the legal, ethical, and social issues that may arise from the ever-broader use of AI, Prof. Calo noted that we should also be attentive to the issues of justice and equity that lurk in AI governance. There are concerns in our community about the inappropriate use of AI and, at a macro level, there is still some uncertainty as to whether AI’s benefits and costs will be within everyone’s reach equally.

Reasonable cost

Prof. Ian Kerr, the Canada Research Chair in Ethics, Law and Technology at the University of Ottawa, tackled the problem of costs associated with AI use. In his presentation, he warned, “to accept that things will be more expensive because we will have things that we never had before is to support a neo-liberal fantasy”. As AI tends to outperform humans in a growing number of fields, and as we increasingly seek ultimate efficiency in service provision, Prof. Kerr noted that this ongoing pursuit of ever-greater efficiency may result in elevated costs wherever AI is engaged.

Indeed, it may be hasty to assume that AI, and technology in general, will always start out expensive but quickly become democratized. Although tech goods have become more affordable in recent decades, it is competition and the ‘miracle of manufacturing’ that have increased the availability of these goods in the market. One considerable cost associated with AI that must not be underrated is the ever-growing need to build, maintain, and monitor security systems. Companies and other service providers across industries that cut expenses in this area and disregard this risk factor are likely to make poor financial decisions, as they may eventually feel the backlash of elevated data-breach costs.

The monetary costs associated with cybersecurity and the endless upgrade path of technological improvement can hardly be overemphasized. As a result, cost regulation of AI may afford stakeholders greater security on a more regular basis, while also preventing AI from exploiting vulnerabilities in sensitive fields such as the healthcare industry.

Good governance and competent management of AI, as with the provision of legal services, may be tied to serving society well, in a timely manner, and at a reasonable cost. Hence, because time is of the essence, it follows that society will be underserved if efforts to bring the rule of law into the technological world are deferred.

Bruna D. Kalinoski is a contributing editor for the IPilogue and holds an LLM from the Osgoode Professional Development Program at York University.