The Reasonable Robot by Ryan Abbott: Legally regulating AI—is it obvious?

Photo by Tara Winstead (Pexels)

Meena Alnajar is an IPilogue Writer, IP Innovation Clinic Fellow, and a 2L JD Candidate at Osgoode Hall Law School.

Artificial Intelligence (AI) has become both a friend and a foe to our society. IBM’s Watson, a collection of algorithms, beat Jeopardy! champions and may soon become your next doctor. But AI can also be exploited to steal millions of dollars within minutes. Should we move to ban AI altogether, or create more regulations and rules? Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey School of Law and Adjunct Assistant Professor of Medicine at UCLA, posits in his latest book, The Reasonable Robot, that regulating AI requires a different approach to law, not simply more laws. Abbott proposes a new guiding tenet for AI regulation: AI legal neutrality, under which the law should not discriminate between AI and human behaviour. As AI increasingly enters spaces previously reserved for people, it will need to be treated more like people. Abbott examines this concept in four legal areas: tax, tort, criminal, and intellectual property law.

Abbott frames each legal area in the context of growing automation, known as the Fourth Industrial Revolution. The book begins by discussing how our tax regime would change if more work became automated. As an employee, AI does not require the same health benefits and protections, so businesses have a tax incentive to automate. As more work is automated, the tax base shrinks, costing governments billions of dollars in revenue. To prevent such loss, tax neutrality between people and AI may be the solution: if automated businesses enjoy no tax advantage, and we move towards a guaranteed income so that human employees are not disadvantaged by AI’s inability to pay taxes, a neutral system for AI in the workplace can be achieved. Debates also surround AI and unemployment. Some contend that the unemployment caused by automation is overstated, while others point to AI’s potential negative impacts in the workplace. While automation does generate new jobs in the information sector, those jobs are not easily accessible: they require extensive education and have often proven to be in male-dominated industries. Increasing automation could therefore create further disparities in the workplace.

The section on intellectual property (IP) is particularly intriguing in light of recent events. On July 29, 2021, South Africa approved a patent listing an AI as the inventor: Dr. Stephen Thaler’s AI system ‘DABUS’ was named inventor on a patent related to beverage packaging and emergency lights. On July 30, 2021, Australia followed suit, overturning its rejection of Thaler’s patent and stating that the decision was “consistent with promoting innovation.” The patenting process, as Abbott predicted in this book, may need to change drastically to recognize AI as an inventor. Most patent offices have not yet issued guidance on whether an AI can be regarded as an inventor. To be patentable, an invention must be new, useful, and inventive — that is, non-obvious to someone of average skill in the field. The standards for patentability may need to be raised if AI is allowed as an inventor.

The person of ‘average skill’ becomes unclear with AI as an inventor. For instance, if AI becomes part of the biomedical field, responsible for discovering and patenting antibodies, does the AI become the person of average skill in biomedicine? Considering that AI is improving exponentially and humans are not, humans filing to patent a “non-obvious” invention may face significant challenges, especially since, under AI legal neutrality, AI and humans should not be assessed by different standards. If inventive AI is allowed to patent, and does so faster and at greater capacity than humans, could this discourage the spirit of inventorship that patent law seeks to foster? Abbott suggests the contrary: allowing AI to invent incentivizes innovation. AI legal neutrality suggests that work generated by AI be treated as no less important than work patentable by humans.[1] Further, allowing AI to be listed as an inventor provides more transparency and accuracy about a patent’s creation,[2] ensuring fairness amongst innovators. AI’s capabilities as an inventor should be recognized and encouraged, since AI’s involvement in processes such as drug discovery is to our benefit.

A future where AI invents and outperforms humans seems inevitable. Rather than ban a future dominated by AI, humans should view AI as a means to improve on what we have already built. Technology can be dangerous when used for nefarious purposes. However, when regulated within a proper framework, AI promotes ingenuity, saves lives, and eases hardships in our society. It is still within our control as legal scholars, lawyers, and leaders to create productive rules so that we can flourish alongside the reasonable, helpful robot rather than living in fear of AI’s growing capabilities.

[1] Ryan Abbott, The Reasonable Robot (Cambridge: Cambridge University Press, 2020) at 164 (Kobo eBook).

[2] Ibid at 165.
