As the 2020 Presidential Primary begins to gather steam south of the border, US Senator Elizabeth Warren’s plan to break up big tech (Google, Amazon, and Facebook – she followed up later with a plan for Apple), has once again brought tech regulation into the political realm.
But the real crux of the problem, the source of tech companies’ economic and social clout, is papered over in only one sentence. It seems likely that’s not because Senator Warren’s team doesn’t care about the issue, but because when it comes to controlling how people consent to data collection, there don’t seem to be any easy answers. That’s especially apparent when it comes to how individuals, corporations, and governments have dealt with data coming from the most plugged in, yet also one of the most vulnerable segments of society – minors.
The Law on Privacy and Minors
In the US, online collection of personal data for children is primarily governed by the Federal Children’s Online Privacy Protection Act, or COPPA, which is enforced by the Federal Trade Commission (FTC). Under the law, it is illegal to collect the data of children under the age of 13 without parental permission. Given the costs of complying with that kind of consent requirement, many companies simply take the position of disallowing children under 13 from using their platforms altogether.
Canada’s federal system, on the other hand, has led to a more complicated overlay of laws. PIPEDA, the Canadian federal privacy law, has no specific provisions regarding consent of minors. Under its guidelines, the Office of the Privacy Commissioner of Canada (OPC), which administers PIPEDA, generally considers anyone under the age of 13 incapable of giving consent. However, when it comes to provincial regulation, Alberta, British Columbia, and Quebec stick to a strict case-by-case model, rather than any blanket age restriction.
This means that, under the Canadian federal standard, parental consent is required for collecting online information from children under the age of 13. Under the case-by-case model, a child must be able to understand “the nature and consequences of the exercise of the right or power in question”. The case-by-case model also applies federally to children over the age of 13.
Two issues come to mind: are the laws on the books actually working? And furthermore, do adults really understand the nature and consequences of how they exercise their privacy rights, let alone children?
Recent Cases: Where the Law Applies and Where it Runs Out
For a look at how child consent enforcement works in practice, let’s again turn to the United States. On February 27th, 2019, ByteDance, the parent company of TikTok (formerly Musical.ly), paid a $5.7 million settlement to the FTC for collecting the private information of children. The civil penalty, the largest ever for a violation of this kind, was imposed because TikTok was found to have been facilitating the uploads and private messages of children under the age of 13. In fact, a recent study from the UK found that 1 in 4 children had connected with a stranger through apps like TikTok, and 1 in 20 had been asked to strip by a stranger during a live stream. While the company has moved to stop users under 13 from uploading videos as a result of the fine, the change has not been without its problems.
Facebook has also been in the news lately when it comes to targeting youth, though in this case it was targeting users aged 13 to 35. Participants were secretly paid up to $20 per month to install a “Facebook Research” app, with the goal of collecting vast amounts of data from users’ phones. The app demanded root access to the device it was installed on, giving it virtually complete access to the device’s data: photos and videos, web searches, private messages and texts, and location monitoring – all of which Facebook could collect continuously, regardless of encryption. The company has maintained that participation was consent-based, and that the fewer than 5 percent of participants under the age of 18 were all required to provide parental consent forms. It has since discontinued the program.
It’s not clear what the long-term effect of the TikTok FTC fine will be. That is to say, it’s not obvious that a $5.7 million fine outweighs the cost of policing 500 million users for a company recently valued at $75 billion. Instead, the way things are today, it seems entirely possible that serious harm to children is an externality of an anonymous (and extremely profitable) internet.
On the other hand, while Facebook’s behaviour might be considered by some to be unsavoury, the question becomes whether there is anything that can be done about it (especially since, unlike the TikTok example, it was not unlawful conduct). What is clear, however, is that ‘consent’ is a nebulous idea, and one not easily grasped by parents, let alone children.
Is ‘Parental Consent’ Meaningful?
For the sake of argument, let’s set aside the thornier issue of how feasible enforcing informed consent actually is. Instead, the core issue is whether individuals are informed enough to consent in the first place.
Parents often underestimate just how influential the way they handle their children’s data can be. Even before children are born, modern parents can begin creating a “digital footprint” (by posting ultrasounds for instance). Information as innocuous as sharing birthdays online can have devastating consequences for a child; reports have shown that identity thieves have been picking up on this kind of information, storing data while waiting for children to turn 18, then making credit card and loan applications in the child’s name.
‘Smart toys’ in the home are a problem in and of themselves. These devices are targets for hackers because they are Bluetooth and internet compatible with few safety mechanisms built-in (smart speakers have a similar problem, especially ones marketed to children). But despite concerns and breaches, sales of smart toys are expected to continue rising rapidly.
None of this is meant to assign any ill-intent to parents, who have a hard enough time raising their kids without worrying about hacked toys or policing their own social media presence to protect their children. Whether through ignorance or by choice, though, parents have demonstrated that they are not much better than their kids when it comes to handling data. But is there any better way to structure the system?
How DO We Handle Collection of Online Information?
Given all the problems, one might think we should just ban collection of minors’ data altogether, regardless of parental consent. Unfortunately, there are several problems with this kind of thinking.
First of all, there is the practical issue of whether it would even be possible without fundamentally reworking the anonymity of the internet (who wants to sign up for an online account by handing over government ID?). The fact that companies have effectively treated their platforms as 13+ without much success speaks to the difficulty of implementing any kind of platform-directed user vetting.
Secondly, it would still be difficult to prevent the collection of inferred data, such as what users search and view online, because consent is not directly needed for that kind of data collection.
Lastly, there are the economic implications. Online marketing and data-driven e-commerce are obviously massive fields, but it bears emphasizing just how massive – in 2018, the top six companies in the world by market capitalization were all tech-based (in order: Apple, Amazon, Alphabet, Microsoft, Facebook and Alibaba). Moving the internet to the speed of bureaucracy would therefore undoubtedly have knock-on effects on the global economy.
So should the alternative be, as per the OPC’s guidelines for teens, consent requiring an understanding of “the nature and consequences of the exercise of the right or power in question”? As should be clear by now, that does not seem to be a workable standard, given that even adults have a tough time keeping up with all the implications of the digital economy.
The Broader Problem with Privacy in the Digital Age
Returning to Senator Warren’s plan, one can see that, notwithstanding the financial merits or demerits of breaking up big tech, we aren’t really at the point as a society of having an informed discussion as to the trade-offs and moral decisions we must make if we want to continue life in an interconnected world.
Breaking up tech companies won’t address the question of who has access to private information (child or adult) and what they’re doing with it. It’s more of a sui generis problem, where the domination of an economic field by an oligopoly (the reason why some call for breaking up the banks) is fundamentally intertwined with an inability to have a broad, meaningful conversation, thanks to a lack of public knowledge (similar to how inadequate civic education affects our ability to engage in political discourse). When you factor in concerns about cyberwarfare and election meddling, the ‘big-tech’ debate simultaneously hits at all the core themes of our current world: nationalism versus globalism, social reform versus retrenchment, and wealth inequality.
The question of the global digital economy is thus as much a distillation of the political, economic, and social narrative of the early 21st century as the Cold War was of the later 20th century. In that spirit, it may be prudent to take some lessons from Dr. Strangelove, which perhaps best captured the intractability and seeming futility presented by the nuclear age. What I’m left with is the uncomfortable realization that I myself don’t fully understand the implications of my consent and privacy in the digital world; but short of unplugging, it looks like we will keep riding this bomb and hope our choices work out in the end. Is it crazy? Perhaps. But it’s not unprecedented. After all, it only takes one letter to go from MAD to ad.
Written by Peter Werhun. Peter is an IPilogue Editor and JD Candidate at Osgoode Hall Law School