Artificial Intelligence and Human Rights at YorkU: A Panel Discussion on Impacts and Opportunities

February 4th, 2020 marked York University’s 11th annual Inclusion Day, presented in partnership by York University’s Centre for Human Rights, Equity & Inclusion (REI), the Law Commission of Ontario, and the Sub-Committees of York’s President's Advisory Committee on Human Rights. Held in the Helliwell Centre at Osgoode Hall Law School, Inclusion Day 2020 focused on the theme of belonging, looking at equity, diversity and inclusion through the lens of human rights.

The event began with a morning forum offering an interdisciplinary exploration of artificial intelligence systems and their implications for the day’s themes. Moderated by Ryan Fritsch, Counsel with the Law Commission of Ontario, the morning’s speakers included Insiya Essajee, Professor Trevor Farrow, Professor Regina Rini and Professor Ruth Urner.

Insiya Essajee is Counsel at the Ontario Human Rights Commission. Regina Rini is a professor of philosophy at York University who teaches and writes on a number of topics in ethics, including the moral status of artificial intelligence. Professor Ruth Urner teaches in the Electrical Engineering and Computer Science department of the Lassonde School of Engineering, with a focus on machine learning and the societal aspects of this technology. Trevor Farrow is a professor at Osgoode Hall Law School, with a research and teaching focus on access to justice, and legal and judicial ethics.

The panel discussion was organized into three sections:

  • What is AI, and what kinds of concerns do these technologies raise? How might we address these challenges?

  • How does the use of AI impact York? What does the use of AI mean for human rights and inclusion?

  • Moving forward, what should we be prioritizing in responding to AI?

How is AI used day-to-day, and what kinds of concerns does its use raise?

The panel opened with moderator Ryan Fritsch describing some of the different uses for AI, along with their associated benefits and concerns. Fritsch described how AI is already in use all around us, almost everywhere we go. In certain circumstances, the benefit conferred by the technology is relatively uncontroversial. In health care, for example, the use of AI to check and compare radiology scans provides important health data at a speed that would be impossible for a human to achieve. Weather predictions made by AI allow us to plan our lives in the safest way possible.

But what happens when machine learning is used to make decisions about people, as opposed to weather and radiology? Fritsch described some instances where AI is already being deployed in these kinds of scenarios: in Public Safety Assessments, where AI is used to determine bail eligibility in various American jurisdictions based on “a person’s likelihood of returning to court for future hearings and remaining crime free while on pretrial release”; in predictive policing models currently in use by the Vancouver police department; and in computer-generated people on dating sites, used to create the illusion of diversity where it does not really exist.

With these examples in mind, Fritsch identified four areas of contention that arise in looking at AI technologies through the lens of human rights. First, he recognized the transparency and disclosure issues raised by AI technologies that we may or may not be aware of at any given moment or in any particular interaction. Questions of data, bias and discrimination came next: how is data generated? Who is developing these systems, and who is auditing the data that is produced? A third issue was “explainability”; that is, how do we ask a machine to explain the conclusion it has arrived at via machine learning? Further, how do we challenge or cross-examine AI? Finally, Fritsch identified AI’s issues with inclusion and exclusion, including the problem that AI technologies built on historically biased data reproduce that bias.
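
To make that last concern concrete, the toy sketch below (in Python, with entirely invented data and feature names; nothing here comes from the panel) “trains” a trivially simple model on hypothetical historical hiring decisions and shows how it simply replays whatever pattern, bias included, those decisions contain.

```python
# Toy illustration: a model trained on biased historical decisions
# reproduces that bias. All data here is hypothetical.

from collections import Counter, defaultdict

# Hypothetical past hiring decisions. Suppose the historical record
# favoured applicants from neighbourhood "N1" regardless of merit.
history = [
    ({"neighbourhood": "N1"}, "hire"),
    ({"neighbourhood": "N1"}, "hire"),
    ({"neighbourhood": "N1"}, "hire"),
    ({"neighbourhood": "N2"}, "reject"),
    ({"neighbourhood": "N2"}, "reject"),
    ({"neighbourhood": "N2"}, "hire"),
]

# "Training": record the past outcomes for each feature value.
outcomes = defaultdict(Counter)
for applicant, decision in history:
    outcomes[applicant["neighbourhood"]][decision] += 1

def predict(applicant):
    # The model just replays the dominant historical outcome.
    return outcomes[applicant["neighbourhood"]].most_common(1)[0][0]

print(predict({"neighbourhood": "N1"}))  # -> "hire"
print(predict({"neighbourhood": "N2"}))  # -> "reject": past bias, replayed
```

Real machine-learning systems are vastly more sophisticated, but the underlying dynamic is the same: a model optimized to match historical decisions will faithfully learn the biases embedded in them.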

How do we address these challenges?

The first question Fritsch posed to the panel was directed at Professor Urner: are the issues identified above tech issues, and if so, why is there no tech solution? According to Professor Urner, while technology shines a light on the issue of bias in data, it cannot actually provide the solutions, because the questions we’re asking and the issues we’re flagging aren’t technical at all. Addressing these challenges requires an interdisciplinary approach, involving conversations between technologists and scholars from other disciplines such as philosophy, ethics and law. Professor Urner urged us to remember that the benefits of AI come at a cost.

In response, Insiya Essajee suggested that while AI does indeed present issues, we can also look at ways to use AI to promote human rights. She gave the example of AI technologies helping organizations such as universities meet their human rights obligations through initiatives like accessible education. In response to Fritsch’s question as to whether we need new laws to deal with the human rights challenges presented by AI, Essajee told the audience that she believes we already have the appropriate laws in place, as the Human Rights Code already protects us from violations, whether they are committed by a person or a machine.

Regarding problems of transparency and disclosure in the use of AI, Professor Farrow argued that rule of law issues remain a major concern: everyone has the right to know the standard by which they are being judged, and AI can make this impossible in a number of ways. First, people often do not know that AI is being used to make important decisions with real consequences for them. Second, there is no way to cross-examine a machine to understand how it arrived at its decision. Transparency and disclosure on the part of institutions can help address some of these rule of law concerns, but solutions will need to be thought through on a continuing basis.

On the other hand, Farrow told the audience, AI has the potential to advance access to justice goals. Citing research by Matthew Dylag, Farrow described Canada as experiencing an access to justice crisis, with AI technologies offering a way to take some of the pressure off people who find themselves needing help with a legal issue. AI-driven services that predict the likely outcome of a case based on certain factors, for example, might be of use to someone who is unable to afford or secure traditional legal services.

In a similar vein, Professor Regina Rini argued that education and awareness are critical to addressing the challenges presented by AI in a meaningful way. She reminded the audience of the importance of thinking through the ways in which AI interacts with people, in order to understand AI’s capacity to reflect our own biases back to us, amplified. Machines, she cautioned, run on our own biased data, without the human judgment that might temper the resulting conclusions.

What does this mean for inclusion and diversity at York?

Ryan Fritsch provided the audience with some examples of the ways in which AI is used on campuses: facial recognition, social media monitoring, and social credit scoring, to name a few. The next question posed to the panel was: in what ways can AI help or hinder us in achieving our inclusion and diversity goals?

Professor Urner suggested that in asking this question, what we’re really asking about is the fairness of automated systems. For Professor Urner, the question becomes: “how do we force a machine to be fair?” Her answer is that an algorithm for fairness may be impossible: fairness resists a single mathematical definition, and plausible formal criteria, such as equal selection rates and equal error rates across groups, cannot in general all be satisfied at once.
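
One standard way to see this tension, drawn from the algorithmic fairness literature rather than from Professor Urner’s remarks, is a worked toy example in which two intuitive fairness criteria cannot both hold once groups have different underlying rates of qualification. The numbers below are invented purely for illustration.

```python
# Worked toy example: two intuitive fairness criteria can conflict.
# All numbers are invented for illustration.

# Suppose 50% of applicants in group A are truly qualified,
# but (because of historical inequity) only 20% in group B are.
base_rate = {"A": 0.50, "B": 0.20}

# Even a PERFECT predictor, one that selects exactly the qualified
# applicants, selects the two groups at different rates:
for group, rate in base_rate.items():
    print(f"Perfect predictor selects {rate:.0%} of group {group}")
# -> 50% vs 20%: equal selection rates ("demographic parity") fails.

# To equalize selection rates at 50% instead, the system must also
# select some unqualified group-B applicants:
target = 0.50
fp_rate_B = (target - base_rate["B"]) / (1 - base_rate["B"])
print(f"False-positive rate forced onto group B: {fp_rate_B:.0%}")
# -> group A's false-positive rate stays at 0%, group B's rises to
# about 38%, so equal error rates across groups now fails instead.
# When base rates differ, the two criteria cannot both be satisfied.
```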

In response, Essajee proposed ways that humans might audit AI for discriminatory practices. There are at least two ways to treat AI decisions: we either take the advice of machines full stop, or we supplement human decision-making with AI advice. As for auditing, there was no clear consensus on who should perform such a task: do we make programmers responsible for auditing outputs for bias, or is this a task for another discipline?
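
As one concrete illustration of what such an audit might involve, the sketch below compares a system’s selection rates across two hypothetical groups and computes a disparate-impact ratio, using the “four-fifths rule” threshold sometimes applied in U.S. employment contexts. The data and the choice of check are illustrative assumptions, not anything proposed by the panel.

```python
# Minimal sketch of one possible bias audit: compare a system's
# positive-decision rates across groups. All data is hypothetical.

from collections import defaultdict

# Hypothetical (group, decision) records from an automated system,
# where the decision is True if the system approved the applicant.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, approved in records:
    totals[group] += 1
    if approved:
        positives[group] += 1

rates = {g: positives[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Disparate-impact ratio: lowest group rate over highest group rate.
# A ratio below 0.8 (the "four-fifths rule") is one conventional
# red flag that the system deserves closer human review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for human review: selection rates differ markedly across groups.")
```

A check like this does not settle who should run it, a question the panel left open, but it shows that at least the first pass of an audit can be routine enough for either programmers or outside reviewers to perform.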

Moving forward, what should we be prioritizing in responding to AI?

Fritsch then asked the panel to share what they perceive as necessary steps for responding to AI in a world increasingly shaped by the use of such technologies.

For Essajee, ensuring the responsible use of AI moving forward requires effective systems for deciding the readiness of certain types of AI before they are unleashed. Farrow flagged the ways in which AI might be written into our notions of procedural fairness and, beyond legal rights, the need to ensure that institutions proceed in a human way even when AI is in use. Professor Urner drove home the message that we must remain focused on developing and instituting interdisciplinary approaches to creating and responding to AI technology. Professor Rini wrapped up by noting that classes on the responsible use of AI are already being offered, such as those at the Schulich School of Business, and by stressing the importance of institutional efforts to educate students on the issues raised throughout the morning’s discussion. On a broader scale, Professor Rini continued, we need to ensure the wider public is aware of the ways in which AI is used, and of the concerns associated with such use.

Takeaways

Throughout the morning, two key points were repeatedly emphasized, providing some important takeaways from the discussion.

First, education and awareness will be critical to ensuring the responsible use of AI, now and in the future. This includes the transparency and disclosure necessary to ensure that rule of law principles are respected. Both institutions and individuals have a role in this. Those developing AI technologies must ensure that they are aware of the ethical issues involved and must provide assurance that best practices are in place. Further, as mentioned by Essajee, we need to build strong systems for deciding when certain types of AI technologies are ready for use, and for deciding how the technologies will be managed moving forward. Those employing AI technologies should disclose the use of AI and be transparent in explaining the purpose for its use. Educational institutions must provide the tools necessary to ensure that people understand the ways in which AI impacts their lives.

Further, interdisciplinary approaches to developing, thinking about, and responding to AI will be crucial to ensuring meaningful awareness of the benefits and costs of AI technologies. Technologists must work together with scholars from other disciplines in order to advance conversations around, and develop solutions for, the ethical and human rights issues raised by certain AI technologies.

Thinking deeply about the issues and concerns with AI is critical in a world shaped by AI technologies. While innovation will push technology ever forward, innovative thinking by scholars in other disciplines will be critical to resolving the human rights issues raised by AI technologies.

Written by Meghan Carlin, a first-year student at Osgoode Hall Law School.