A few thoughts on Prof. Jean-Gabriel Castel’s talk, Fully Autonomous Artificial Super-Intelligence: Is it a threat to the human race or a blessing? How can it be controlled?
Before they died, my parents told me stories of how the world once was … They remembered a green world, vast and beautiful. Filled with laughter and hope for the future. But it’s a world I never knew… On August 29, 1997, Skynet woke up. It decided all humanity was a threat to its existence. It used our own bombs against us. Three billion people died in the nuclear fire. Survivors called it Judgement Day. – Terminator Genisys (2015, dir. Alan Taylor).
On March 22, Prof. Jean-Gabriel Castel gave a fascinating lecture, organized by the Nathanson Centre on Transnational Human Rights, Crime and Security, which challenged current perspectives on artificial intelligence (AI) and raised several well-considered reservations and concerns about the future. Indeed, at first blush, the AI discussion may seem more at home in a science fiction novel or a Terminator film script. However, the picture that Prof. Castel paints is unambiguous: the AI future is real and imminent.
The Three Stages of Development
Prof. Castel argues that there are three stages of AI development. In the first stage, AI can encode knowledge and perform several human functions. We are in this stage at present, as AI has not yet reached the capacity to do everything that humans can do. The most sophisticated AI can process information and calculate chess or Go moves (see the Computer Go competitions), for example, but no AI can reach beyond the scope of the code supplied by its programmer, and no AI can replace human reason.
AI will reach the second stage of development when it attains the cognitive level of humans in all domains. The computational capacity of the human brain is currently estimated at around 10¹⁸ computations per second; between 2007 and 2015, computers’ speed grew at a rate of 82 percent per year, and at this rate, supercomputers will reach human capacities by the year 2017. In the second stage, AI will have the same degree of autonomy and intelligence as human beings, and what one AI learns may be passed to all AI worldwide.
The third and final stage, as Prof. Castel argues, will begin when AI becomes ‘super-intelligent’. At that point, AI will possess an intellect greatly exceeding that of humans, eventually surpassing human performance in all domains. Prof. Castel fears that when AI reaches this stage, it will become invincible. AI will be able to reprogram itself, rewriting and improving its software over and over again at a computer’s speed, protecting itself against any human attempt to harm it or hinder its development. At this stage, evolved AI will be the strongest entity on the face of the Earth.
The Legal Aspects of the Future of AI
In the coming decades, humanity will be forced to address new social issues induced by scientific progress. It is reasonable to assume that these changes will affect social interactions and, subsequently, the development of legal norms. As James Boyle articulates, ‘Both the definition of legal persons and the rights accorded to those persons have changed over time … Progress may have been gradual, intermittent or savagely resisted by force. There may have been back-sliding. But, in the end, the phrase “all men” actually came to mean all men, and women too.’ Indeed, when ‘our world fills with robotic and AI technologies, our lives and relationships of social, political, and economic power will also change, posing new and unexpected challenges for law’.
In his lecture, Prof. Castel addressed a few of the difficulties we might face in the AI revolution. An interesting question raised by his lecture is whether we can attribute human rights to AI. Can we teach AI to feel? To develop compassion for other humans (or AI)? Or to express empathy and moral goodness?
The debate regarding the personhood of non-humans is well-trodden ground: the notion of considering non-humans for legal rights goes back to the Middle Ages, when churches were subject to legal rights and animals were held accountable for their ‘criminal’ behaviour. Considering AI to have legal personhood might imply that a similar status ought to be granted to other subjects, such as animals. However, I believe that there are many differences between the AI I envision and animals. For example, the second and third generations of AI are expected to possess significantly higher cognitive abilities than animals.
Assuming that AI can gain legal rights, the question of whether AI could (or should) be regarded as an author under the normative standards of copyright law will have to be addressed by the legal system. This question will generate several sub-questions, specifically: ‘who’ will own copyright, and to ‘what’ content. The ‘who’ question will explore the normative standard of authorship in the ongoing struggle between an author’s rights and the public domain. The ‘what’ question will ignite the originality debate and prompt a discussion of the standard of creation for granting legal protection to a particular work.
The originality standard involves a mix of different legal norms, reflecting differences between states. The originality debate comprises two main factions: supporters of the ‘sweat of the brow’ criterion on the one hand, generally represented by Lockean labour theory; and supporters of creativity-oriented criteria on the other, as developed by the US Supreme Court in Feist Publications, Inc v Rural Telephone Service Co. The concept of originality in Canada has shifted from the traditional ‘sweat of the brow’ standard toward that of ‘skill and judgement.’ The Supreme Court of Canada determined in the landmark CCH decision that the work ‘must be more than a mere copy of another work.’ However, the work does not need to be creative ‘in the sense of being novel or unique.’
Originality will serve as an important ‘valve’ for copyright protection for AI. In a complex future, we must adopt a mechanism that ensures that only worthy creations receive copyright protection. If we grant rights to AI, we must ensure that the balance between the public domain and AI rights does not change unexpectedly. From a normative perspective, I believe that the copyright bar should be decided on merit and not on conceptual beliefs that deny AI any legal rights. For this reason, I propose raising the bar and adopting a creativity-originality standard. This may serve the AI debate both by shaping a new standard for originality that reflects advancements in technology and, as part of a Turing (Copyright) Test, by providing a mechanism for establishing copyright for AI.
AI development can affect every aspect of human life. However, as Prof. Castel stated, no country’s legal system has yet addressed the consequences of the possible evolution of AI into super-intelligence. Faced with a choice between the radical prevention of all AI research and increased international collaboration to develop both legal and scientific solutions, Prof. Castel expressed an inclination toward the latter.
He believes (and I agree) that, as history has shown us, we cannot prevent technological advancement, so we should find ways to avert the perilous implications of such advancement ahead of time. Humanity can benefit greatly from the AI era by building a joint project to research AI technology. The recent examples of the Human Genome Project and the International Space Station have shown that when humanity combines knowledge and experience, significant achievements occur.
Prof. Castel also suggested that one possible approach to averting the danger posed by AI is to limit what AI can do. Isaac Asimov’s Three Laws could be programmed into AI, and human ethics and moral values could be incorporated as well. However, what will happen when AI finds that these laws and ethics contradict one another? It might resolve the contradiction by adopting its own laws and ethics. Moreover, in the AI super-intelligence future (and even in the second stage of AI development), AI could take over many human jobs, causing the unemployment rate to rise sharply. However, as the Industrial Revolution has taught us, the human labour force can adapt to change. We might even develop a different economy in which humans do not need to work – a world where humans can pursue their desires and dreams more fully and maybe, in the words of Star Trek, ‘boldly go where no one has gone before’.
Aviv Gaon is a PhD candidate at Osgoode Hall Law School. His research explores whether AI creations deserve protection under copyright law and, subsequently, addresses the current legal discussion concerning the standard of copyright protection.
Asimov’s Three Laws of Robotics: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey orders given it by human beings except where such orders would conflict with the First Law; and (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.