Henry Rhyu is a 1L JD Candidate at Osgoode Hall Law School. This article is a summary of the author’s dissertation written as part of his program requirement for his MSc in Criminology at the University of Oxford.
After decades of political instability and economic turmoil during the 20th century, Singapore has advanced into one of the wealthiest countries in the world. Currently boasting the fourth-largest GDP per capita, Singapore is committed to becoming the world’s first “smart nation.” Applying the 2019 “National Artificial Intelligence Strategy,” the People’s Action Party (PAP) has increasingly integrated AI and other cutting-edge technology into addressing the city-state’s concerns, identifying “safety and security” as a priority sector in implementing this strategy.
What are “Xavier” Surveillance Robots, and Do They Help Minimize Human Bias in Police Decision-Making?
AI-powered surveillance robots are at the forefront of Singapore’s commitment to enhancing the safety and security of the city-state. On September 5, 2021, the government announced that it would deploy the “Xavier” police robots as part of a three-week trial. Developed jointly by HTX (the Home Team Science and Technology Agency) and the Agency for Science, Technology and Research, alongside several other government agencies, these twin artificial intelligence (AI) robots were stationed at a major shopping mall and a residential complex in the heart of Singapore. The robots were programmed to detect “undesirable” behaviours that amount to minor infractions, such as improperly parking a bicycle or smoking in forbidden areas.
As they gather more data with each new type of infraction they encounter, the Xaviers continue to refine their crime-prediction accuracy. Using various indicators – such as location, date, and time – these robots identify areas that demonstrate a statistical likelihood of undesirable activity.
One societal benefit of the robots is combatting potential future shortages of human police officers, as well as allowing existing officers to allocate their limited resources more efficiently.
Other proposed benefits are more difficult to verify. Local news reporting often describes the deployment of the Xaviers as a helpful method of reducing human bias in police decision-making, but this remains to be seen. Amy Zegart, a Stanford expert on cybersecurity and international security, argues that while AI is adept at recognizing patterns of behaviour, it fundamentally cannot explain or question the logic underlying the decisions it generates.
Indeed, debates surrounding the potential for racial profiling have led to pushback against predictive policing technology in some Western countries. In 2021, the European Parliament prohibited the use of AI-powered preventive justice tools because they could generate racially biased outcomes. International news sources indicate that Singaporean citizens express similar concerns. One Singaporean human rights activist even stated that the Xaviers reminded her of “Robocop,” citing the potential for this surveillance technology to encroach on citizens’ rights to privacy and due process.
What Are the Existing AI Regulations?
Presently, Singapore has no AI-specific regulations. Instead, in 2019, the Personal Data Protection Commission – the national body mandated to address AI-related concerns – established the “Model AI Governance Framework,” a non-binding ethics framework developed for Singapore-based organizations that use AI in their decision-making processes. This framework states 1) that the decision-making process of AI technology must be “explainable, transparent, and fair,” and 2) that AI-based solutions must make promoting the well-being of society their first priority.
Conclusion + Policy Implications
With more than $180 million allocated to AI research in Singapore, surveillance technology is expected to become ever more sophisticated in the city-state. Whether the existing AI regulatory framework effectively safeguards against the various potential unethical manifestations and implications of predictive policing technologies is beyond the scope of this article. However, one thing is clear: Singapore should remain wary of arming surveillance robots. While the Xaviers are not programmed to apply force against citizens, armed robots exist in other countries. In 2016, the Dallas Police Department famously used a police robot to deliver and detonate an explosive against a suspect. Singapore must therefore identify and carefully straddle the fine line between using cutting-edge surveillance technology to enhance national security and providing the police with unfettered powers that risk violating citizens’ rights to privacy and due process.