Can Legislation Restrain the Looming ‘Beast’ of Artificial Intelligence?

Amidst an unprecedented number of cyber-attacks in recent years, we have quickly transitioned into an Artificial Intelligence (AI) era in which Intel predicts more than 200 billion Internet-connected devices by 2020. The use of Big Data to fuel AI development has produced groundbreaking innovations that will touch virtually every aspect of human life. Jurisdictions around the world are already embracing the technology: Saudi Arabia has granted citizenship to an AI robot named Sophia, China has opened its first AI-assisted treatment centre, and the United Kingdom is poised to bring AI to public service delivery.

The rise of AI also brings many challenges, and, as it announced in 2017, the Government of Canada wants the country committed to global leadership in AI. Are we ready? As Canada braces for the impact of AI, legal and policy stakeholders continue to strategize about how best to shape government cybersecurity policy going forward. On February 2, 2018, IP Osgoode’s Bracing for Impact: The Artificial Intelligence Challenge conference brought together experts, scholars, and technology enthusiasts from around the world. In particular, the “Cybersecurity and International Risks in the AI Era” panel, chaired by Matthew Castel, discussed how cybersecurity risks have grown in this automated era. The panelists also commented on how best to leverage AI while mitigating those risks, and on the role legislation can play in addressing these challenges.

First Off, What Really Is AI?

Traditionally, computers were thought to be creatures of instruction. Over six decades ago, however, the umbrella term “Artificial Intelligence” was coined to refer to a computer’s ability to make decisions without direct human intervention. According to Arthur Samuel, it is “a field of study that gives computers the ability to learn without being explicitly programmed.” AI’s deep learning capabilities involve complex algorithms that produce human-like responses to problems by identifying patterns in enormous pools of data. From door-opening robots, search-and-rescue drones, and self-driving cars to smart glasses beaming information directly into your eyes, AI is literally in our faces and has the potential to infiltrate nearly every aspect of our lives.
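
To make Samuel’s definition concrete, consider a minimal sketch (in Python, with invented example data): nowhere below is the rule for converting Celsius to Fahrenheit written out, yet the program learns it from examples.

```python
# A toy illustration of "learning without being explicitly programmed":
# the conversion rule is never coded; the model infers it from data.
import numpy as np
from sklearn.linear_model import LinearRegression

celsius = np.array([[-10.0], [0.0], [20.0], [37.0], [100.0]])
fahrenheit = np.array([14.0, 32.0, 68.0, 98.6, 212.0])

model = LinearRegression().fit(celsius, fahrenheit)

# The learned coefficients recover the familiar F = 1.8 * C + 32 rule.
print(model.coef_[0], model.intercept_)  # ~1.8  ~32.0
print(model.predict([[25.0]]))           # ~[77.0]
```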

AI Poses Both a Risk and an Opportunity

Cyberspace, despite its many advantages, continues to be exploited: since 2013, over 9 billion data records have been lost or stolen, and almost 1 million new pieces of malware are discovered every day. Cybercriminals always seem to be a step ahead, because anti-virus software is only as good as its most recent update. AI offers a unique opportunity to bolster cybersecurity solutions by bringing predictive analysis to bear on these threats.

AI is a game changer that can enable a more proactive and dynamic approach to cybersecurity. For example, deep learning technologies can analyze more than 10 million incidents per day, run numerous simulations, and predict potential attacks and respond accordingly.
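
As a rough illustration of this predictive approach, the sketch below trains an unsupervised anomaly detector on simulated network activity. The feature names, numbers, and model choice are assumptions for illustration, not the method of any product discussed by the panel.

```python
# A minimal sketch of ML-based anomaly detection for security events.
# All features and figures here are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" traffic: [bytes sent (KB), failed logins, requests/min]
normal = rng.normal(loc=[500, 1, 60], scale=[100, 1, 10], size=(10_000, 3))

# Train on historical activity: unlike a signature database, the model
# needs no prior sample of a specific piece of malware to flag oddities.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new events: -1 marks an outlier worth investigating.
events = np.array([
    [520, 0, 58],       # looks like routine traffic
    [50_000, 40, 900],  # exfiltration-like spike
])
print(model.predict(events))  # e.g. [ 1 -1 ]
```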

However, AI comes with a number of risks, such as threat agents using AI to develop automated attacks that learn and adapt to vulnerable systems in real time. AI models also live and die by their data, so biased inputs or false positives could adversely affect the decisions or actions an algorithm takes. Issues of accountability, and even tort liability, may arise if an AI model goes rogue and does something it was not programmed to do.
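
To see why false positives matter at scale, here is a back-of-the-envelope calculation using the 10-million-incident figure above; the attack and false-positive rates are assumptions for illustration.

```python
# Base-rate arithmetic: even a seemingly accurate detector can drown
# analysts in false alarms. Both rates below are assumed for illustration.
incidents_per_day = 10_000_000  # figure cited above
attack_rate = 1 / 100_000       # assumed: 1 in 100,000 incidents is real
false_positive_rate = 0.01      # assumed: detector wrongly flags 1%

real_attacks = incidents_per_day * attack_rate
false_alarms = incidents_per_day * (1 - attack_rate) * false_positive_rate

print(f"real attacks per day:  {real_attacks:.0f}")   # 100
print(f"false alarms per day:  {false_alarms:,.0f}")  # ~100,000
# Roughly a thousand false alarms for every real attack: noisy or biased
# models can swamp the very decisions they are meant to improve.
```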

Victor Garcia, Managing Director and CEO of ABCLive Corporation, argued that AI’s capacity to learn and evolve will undoubtedly exceed human capacity. For example, in 2016 an AI robot outperformed doctors in a surgical procedure, producing sutures that were found to be superior and done with more precision. As astounding as this is, there are still risks to consider: if such a bot were somehow compromised by a cybercriminal’s ransomware attack, human lives could be at risk.

Privacy vs. Security – Do We Trade Off or Can We Have Both?

So, how can these risks be mitigated? It appears that efforts to do so could impact privacy rights or even the fabric of a nation’s security.

According to Benjamin Franklin, “they who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety.” Jurisdictions around the world continue to struggle with this tension, especially in light of autonomous AI weapons and other national security concerns. Roy Keidar, special counsel at Yigal Arnon & Co. and former legal advisor to Israel’s National Security Council, argued that an invasion of privacy might be a necessary trade-off in some instances, such as fighting terrorism. While an individual’s privacy rights should be protected, he posited, the concept of freedom also extends to border security and other national security concerns. Preserving a nation’s security architecture is a mammoth task for governments, and the tools developed to provide that security may invariably encroach on individuals’ privacy rights.

Conversely, privacy expert Ann Cavoukian maintains that it is quite possible to have both privacy and security. Privacy, she argues, is essentially about having control over one’s data, and this should be reflected in free and democratic societies. Her ‘Privacy by Design’ (PbD) approach seeks to shift the paradigm from flawed ‘zero-sum’ models to ‘positive-sum’ models: privacy and security would no longer be competing interests, because measures safeguarding privacy would be proactively embedded into technological operations and security considerations.

Cavoukian notes that the General Data Protection Regulation (GDPR), coming into force in May 2018, will supersede the current privacy laws of all European Union member countries, making privacy the default. Data use will be ‘user-centric’, with data used only for the purposes for which it was collected, and entities that do not abide by the GDPR could face fines of up to 4% of their global revenue (for a firm with $10 billion in global revenue, that is up to $400 million). Her proposed ‘AI Ethics by Design’ would extend this thinking to algorithms, allowing for transparency and oversight with high levels of accountability, which could help facilitate ethical algorithmic designs and data symmetry.

Is Legislation the Answer?

Prominent voices around the world, including Stephen Hawking and Elon Musk, argue that AI is an existential threat to humanity and are calling on nations to regulate it before it’s too late. Indeed, there are real concerns about AI’s impact on weapons and privacy rights, but whether legislation can restrain this ‘monstrous beast’ is moot. Not only are there jurisdictional issues in regulating AI in a borderless cyberworld, but AI is constantly evolving, and parliaments do not have a good track record of keeping up with the pace of technology.

Even if policymakers manage to develop a legislative framework for AI, that addresses only one aspect of this labyrinthine technology. Other issues to grapple with include potential job losses in roles at risk of automation. A Ball State University study, for example, attributed the loss of nearly 5 million US manufacturing jobs since 2000 largely to automation taking over assembly-line work traditionally done by humans, and as much as 40 percent of Canadian jobs could likewise be at risk from automation over the next decade. Even the legal profession is not immune, with talk of IBM’s Watson possibly taking over the lower-level legal assignments carried out by articling students or junior lawyers. Governments would also need to consider wider economic implications, such as the decline in tax revenue from those jobs.

AI could boost the economy by up to $14 trillion by 2035, so policymakers should be cautious about over-regulating an invaluable resource that could drive innovation. Over-regulation could stifle growth in AI by making it a less attractive field for investors; one potential backlash is that Google and other tech giants, such as Microsoft, Amazon, Facebook, and Samsung Electronics, stop investing in Canada.

Perhaps more work needs to be done on developing ethical oversight of AI, particularly on teaching AI uniquely human values like privacy and freedom. Canada has some of the world’s brightest minds, trailblazers in AI for over 30 years, and their expertise in determining the extent to which these principles can be reflected in AI technologies could inform legislation. Canada would then need to focus its policy lens on training and research, building and sustaining its AI ecosystem.

Andrae Campbell is an IPilogue Editor and an LLM Candidate at Osgoode Hall Law School.