Taxing Robots Could Save Your Job, But What Else?

There is an ongoing public debate about the future of automation. On one side, some believe work will evolve so that humans take on new kinds of creative jobs; on the other, some believe that increasing automation will mean the end of human employment. Economically, the corporate benefit of adopting automated technology is clear: employers do not need to account for taxes, benefits, or wages as they do with human workers. In this way, the law does not treat humans and technology the same, but Professor Ryan Abbott suggests it ought to.

On Monday, February 3, 2020, Professor Abbott, a prominent voice at the intersection of law and technology, joined the IP Osgoode community for his talk, “The Reasonable Robot: Artificial Intelligence and the Law”. He discussed recent advances in technology, such as Alphabet’s DeepMind AI beating human players at Go and predicting protein folding structures, and how the law should rethink the relationship between humans and robots as AI becomes more capable.

His answer is AI legal neutrality. The argument is simple on its face: the law should not discriminate between behaviour carried out by AI and behaviour carried out by humans. That does not mean the two must be treated identically in every respect, but Abbott suggests that in tax, tort, and intellectual property law it could be beneficial to assess AI and human behaviour against a similar standard. This article explores the economic and tax arguments.

Most tax revenue comes from income taxes and payroll taxes; payroll taxes alone make up 35% of federal revenue. It is therefore against the government’s interest to incentivize replacing human workers with new technology, since widespread unemployment means a widespread reduction in tax revenue. In the tax realm, AI legal neutrality would mean that the taxes tied specifically to human workers, such as payroll taxes, are no longer collected, with increased corporate taxes filling the gap in revenue. Professor Abbott suggests that this could level the playing field between people and machines.
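
To make the incentive concrete, here is a minimal sketch using entirely hypothetical figures for the wage, tax rates, and annualized automation cost (none of which are drawn from Professor Abbott’s talk). It compares what an employer pays and what the government collects when a job is held by a person versus when it is automated.

```python
# Hypothetical, simplified illustration of the tax asymmetry discussed above.
# All figures are invented for the example; real payroll tax rates, wages,
# and automation costs vary by jurisdiction and industry.

WAGE = 50_000             # annual salary paid to a human worker (hypothetical)
PAYROLL_TAX_RATE = 0.15   # employer-side payroll taxes (hypothetical rate)
INCOME_TAX_RATE = 0.20    # income tax paid by the worker (hypothetical rate)
AUTOMATION_COST = 40_000  # annualized cost of an automated replacement (hypothetical)

def employer_cost_human() -> float:
    """Employer pays the wage plus employer-side payroll taxes."""
    return WAGE * (1 + PAYROLL_TAX_RATE)

def employer_cost_robot() -> float:
    """The automated system attracts no payroll taxes."""
    return AUTOMATION_COST

def government_revenue_human() -> float:
    """Government collects payroll taxes and the worker's income tax."""
    return WAGE * PAYROLL_TAX_RATE + WAGE * INCOME_TAX_RATE

def government_revenue_robot() -> float:
    """Neither payroll nor income tax is collected once the worker is replaced."""
    return 0.0

if __name__ == "__main__":
    print(f"Employer cost (human): {employer_cost_human():>10,.0f}")
    print(f"Employer cost (robot): {employer_cost_robot():>10,.0f}")
    print(f"Tax revenue   (human): {government_revenue_human():>10,.0f}")
    print(f"Tax revenue   (robot): {government_revenue_robot():>10,.0f}")
```

With these invented numbers, the employer saves money by automating while the government loses both the payroll and income tax streams, which is the revenue gap Abbott’s proposal would fill through higher corporate taxes.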

Some have argued for a tax on robots in order to slow the adoption of AI, which is fuelling automation. Others add that businesses adopt AI not because it is more productive, but because the tax code urges them to: they save money on taxes by eliminating human workers. South Korea, the most automated country in global manufacturing as of 2016, reduced its tax incentive for automation in 2017, a move one can liken to the first instance of taxing robots, albeit indirectly.

My own view is that, rather than taxing businesses outright for using automation, other corporate strategies could put humans and robots on a more equal footing. South Korea did this by launching a “learning factories” initiative, in which low-skilled workers develop the skills to operate robots and automated machines. The Ex’tax Project is one organization that advocates taxing capital rather than labour. While there is an obvious benefit to this for one of its main causes, taxing natural resources and pollution, it is harder to justify a tax that would lead to a decrease in innovation.

Innovation is not only about productivity; it also has implications for social good, such as finding solutions to health issues, addressing pollution, and drawing important insights from data. I believe there is a risk in stifling innovation just for the sake of preserving human employment. For human contribution to remain relevant, we need to learn more about the technologies, like AI, that are leading to new discoveries. In this way, humans can work with technology to augment it, rather than act as an obstacle to its progress.

Written by Summer Lewis, a second-year JD candidate at Osgoode Hall Law School. Summer is also the Content Editor of the IPilogue.