Ali Mesbahian is an IPilogue Writer and a 2L JD Candidate at Osgoode Hall Law School.
The Dystopian State of Surveillance in the Workplace
The ongoing pandemic has shaken up the world of work. With the emergence of COVID-19, distinctions between on-site and remote workers became intuitive, and reliance on technology to adapt the workforce to a pervasive contagion rapidly increased. This digital acceleration has naturally raised worries about the future of labour relations, the effects of workplace technologies, and automation.
While remote work has its benefits and popularity, we should be cautious about hastily treating the situation as liberating. After all, the flexibility and cost-effectiveness associated with remote work did not result from collective bargaining by employees and labour unions, but emerged from a public-health necessity. As Antonio Aloisi and Valerio De Stefano observed in their June 2021 International Labour Review paper, the result is an asymmetrical distribution of the benefits of digitalization, as evidenced by the remarkable rise in demand for employee surveillance technology (more than 58 percent since the beginning of the pandemic). In many firms, working from home has laid bare an extreme culture of employer distrust, as workers are often required to download apps that track how long they are away from the computer (OccupEye), record and send their web history to their employers (InterGuard), and take screenshots of their desktops (Hubstaff).
Against Technological Determinism
At the same time, it is important to dispel any conception of technology that grants it an agency of its own, as if it were an autonomous force imposing itself on societal relations, labour markets, and daily tasks. For instance, AI is often said to be inevitably replacing many jobs performed by humans; self-checkout machines in grocery stores are a common example. While there is some truth to this claim, emphasizing inevitability tends to mask the bigger picture: human decisions, more precisely those of the tech elite, policy makers, security and military officials, and employers, fundamentally shape technological progress.
As Aloisi and De Stefano emphasize, “the very same technology that are adopted to monitor workers could be used […] to improve transparency, verifiability and objectivity of managerial decisions, thus advancing inclusion of underrepresented populations and reducing socio-economic gaps.” In other words, it matters whether technology and machinery are used and developed for the purposes of extracting profit and cutting down costs, or for the purposes of building trust in work relations and increasing workers’ sovereignty over their own time. For instance, “even relatively modest gains [in workplace efficiency] from using robots and AI” may provide the conditions for establishing a four-day workweek, according to a 2018 study by the Social Market Foundation.
Moreover, automation often increases demand for complementary skills. A human-centered research mandate and policy objective would therefore identify the complementary skills borne out of the automation process in order to create jobs and increase wages. As Frank Pasquale, professor at Brooklyn Law School, rhetorically asks: “Would you rather see your parents or grandparents “cared for” by a robot […] or would you rather such technology be introduced to them by caring and well-trained professionals, who can spark conversations about the robot?” This invokes broader questions, often relegated to the sidelines, of which aspects of labour are to be replaced by technology. The issue is, of course, political at heart; a worker-affirming answer to such questions depends on structural changes that position workers as powerful stakeholders in the decision-making that underlies the direction of technological advancement.