The Artificial Intelligence Awakening: From Summits to Frameworks for Action

The potential applications of artificial intelligence (AI) have been delighting and frightening the popular imagination for generations. From benevolent androids and talking cars to unwavering cyborgs and omnipotent neural networks, AI has captured a place in the collective consciousness. Now, with increased computing power and sophistication as well as over two decades of storing and assembling digitized data, the public policy and legal communities are bringing focused attention to what AI means, especially as countries around the world – and here in Canada, with impressive results – dedicate millions of dollars to the technology’s development. It was in this vein that Norton Rose Fulbright Canada held the Artificial Intelligence Summit 2017 on November 15, 2017.

The Norton Rose AI summit, which was held in conjunction with the University of Toronto’s Department of Computer Science Innovation Lab (DCSIL) AI week, brought together representatives from the legal profession, the business and academic communities, and government for a full-day discussion about the current and future state of AI from business, ethical, and legal perspectives.

Anthony de Fazekas, Partner, Head of Technology and Innovation Canada, Norton Rose Fulbright, and Maya Medeiros, Partner, Norton Rose Fulbright, helped steer the day’s discussions to fundamental concerns, including the importance of developing and deploying AI with rights and values, transparency, and accountability in mind. These and other topics are explored further on Norton Rose Fulbright’s Artificial Intelligence microsite.

The ongoing and future development of AI technologies has the potential to recast established modes of practice and institutions. As was the case with previous technological shifts, these changes will not occur in isolation and will develop in tandem with socio-cultural, political, and economic forces that both constrain and are shaped by broadly adopted technologies and practices. As Prof. Giuseppina D’Agostino, Founder and Director of IP Osgoode, pointed out during the summit’s Fireside Chat, we must evaluate technological change in the context of history and recognize how such developments often benefit some stakeholders at the expense of others.

At the AI summit, discussants stressed the need to create ethical frameworks for the creation and deployment of AI at the levels of individual firms, society, and governments. These frameworks must recognize the ethical consequences new technologies may bring to bear, and will need to mitigate safety, welfare, labour, and market concerns while providing room for beneficial creativity and innovation to occur.

In particular, discussants at the AI summit spoke about how existing biases embedded in data and data management technologies can inadvertently reinforce discriminatory socio-economic and racial exclusions. As well, the potential for AI-based technologies to displace workers must be accounted for, with the goal that these advances augment, rather than automate away, the existing labour force. For the legal profession, this will mean using LegalTech in ways that support the work of professionals by automating routine tasks so that clients are better able to access detailed, personalized, and cost-effective services going forward.

With an eye to the future, companies, such as Microsoft, and governing institutions, including the European Union (EU), are laying out principles and creating regulations to address some of the risks increased AI adoption may bring. In a 2016 piece published by Slate, Microsoft CEO Satya Nadella presented six values that need to be considered:

1. AI must be designed to assist humanity;
2. AI must be transparent;
3. AI must maximize efficiencies without destroying the dignity of people;
4. AI must be designed for intelligent privacy;
5. AI must have algorithmic accountability so that humans can undo unintended harm; and
6. AI must guard against bias.

Meanwhile, the EU’s General Data Protection Regulation (GDPR), which will come into force on May 25, 2018, will impact the data collection and management practices driving AI by foregrounding privacy concerns and data subject rights.

Here in Canada, various levels of government are promoting the promise of AI to contribute to both society and the economy. Edmonton, Montreal, and Toronto-Waterloo are home to burgeoning AI ecosystems, which are attracting international attention and investment. Protecting, commercializing, and capitalizing upon the resulting IP and other intangible assets will be important for leveraging Canada’s AI capacity for social betterment and economic growth.

To inform the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, on February 2, 2018, IP Osgoode, along with its partners, will host a full-day conference entitled “Bracing for Impact – The Artificial Intelligence Challenge (A Road Map for AI Governance in Canada)”. The conference, organized in collaboration with Osgoode PhD students Aviv Gaon and Ian Stedman, will be preceded by an invitation-only round table discussion to set the stage for shaping AI policy in Canada.

The conference and round table will engage key figures in the federal and provincial governments, senior scholars from Canada, the US, the EU, Australia, and Israel, and industry leaders. These events will in part build on the questions raised at Norton Rose Fulbright’s AI Summit 2017 and assemble an internationally renowned group of AI researchers, legal scholars, practitioners, and industry experts along with provincial and federal government representatives. Bracing for Impact will focus on the legal, cybersecurity, and ethical considerations of AI innovation.

As the development of AI, machine learning, and deep learning technologies continues apace, the onus will be on the legal, public policy, and industry communities to learn from the unintended consequences of previous technological shifts and proactively create an inclusive roadmap for AI governance. IP Osgoode looks forward to being part of these efforts and to seeing you at our conference. Registration for the conference is now open; click here to register.

Editor’s Note: Mr. de Fazekas and Ms. Medeiros collaborate with IP Osgoode as part of our Innovation Clinic.

Giuseppina D’Agostino is the Founder & Director of IP Osgoode, the IP Intensive Program, and the Innovation Clinic, the Editor-in-Chief for the IPilogue and the Intellectual Property Journal, and an Associate Professor at Osgoode Hall Law School.

Joseph F. Turcotte is a Senior Editor with the IPilogue and the IP Osgoode Innovation Clinic Coordinator. He holds a PhD from the Joint Graduate Program in Communication & Culture (Politics & Policy) at York University and Ryerson University (Toronto, Canada) and can be reached on Twitter: @joefturcotte.