All Rise For The Facebook Supreme Court of Free Expression

Earlier this year, Facebook announced the appointment of 20 members to its new Facebook Oversight Board (FOB). The announcement came amid increasing demands for Facebook to take responsibility for moderating third-party content and protecting the privacy of its users.

What is the Facebook Oversight Board?

Since at least early 2018, when the Cambridge Analytica scandal broke, Facebook has faced heavy criticism for the way the social media empire operates. Founder and CEO Mark Zuckerberg apologized in testimony before Congress and released a statement promising to “do better.” The FOB appears to be the newest endeavor in fulfilling that promise.

A quick scan of the FOB’s Charter makes it clear the FOB’s framework emulates a quasi-judicial system. For instance, the board is independently funded to ensure its autonomy from Facebook. Further, the board’s decisions are binding upon Facebook unless the decisions could violate the law.

The current board comprises an impressive 20 members, including law professors, the former Prime Minister of Denmark, and a Nobel Peace Prize laureate. Some members, like Professor Nicolas Suzor, have long criticized Facebook’s methodology. The appointment of prominent critics, combined with fixed terms for board members, lends credence to Facebook’s assertions of the board’s autonomy.

Yet many are skeptical that the FOB will have any lasting impact. One reason is that the board is not diverse enough. Realistically, for a platform like Facebook, with more than one billion users worldwide, no board could reflect the needs of all its users. But that does not excuse the lack of representation for LGBTQ+ advocates or for people from certain regions, such as the Middle East, North Africa, and Southeast Asia. The lack of diversity highlights the more significant underlying problem Facebook and the board face: drawing the line between freedom of expression and hate speech for people with vastly differing values.

Freedom of expression, at whose expense?

Facebook’s unwillingness to moderate or censor the content available to its users has had severe offline consequences. By championing “free expression,” Facebook allowed hate speech to spread against the Rohingya in Myanmar and against Muslims in Sri Lanka, and fueled social unrest and division in the United States.

The FOB will begin hearing cases later this year. It will hear appeals in cases where Facebook or Instagram (owned by Facebook) reviewed and removed content reported as offensive. In the meantime, harmful information continues to spread on Facebook’s platform with little censure. Even those within the company are displeased with its inaction and staged a walk-out.

Critics and former employees of the company have repeatedly stated that real change requires a threat to the company’s bottom line. Indeed, once companies such as The North Face, Verizon, Coca-Cola, and Unilever announced they would pull advertisements from Facebook to combat the amount of hate speech online, Mark Zuckerberg quickly announced changes to company policy. Under the new policy, Facebook will ban ads claiming that people of a specific race, ethnicity, nationality, caste, gender, sexual orientation, or immigration origin are a threat to the health or safety of anyone else.

This policy was quite the turnaround from earlier in the summer, when Facebook was adamant that it would not change its policy on free speech. Ultimately, we will have to wait until the FOB releases its first set of decisions, and until we see Facebook’s response to those decisions, to determine what values will be upheld in the name of free expression. As one of the leaders in technology, the precedent Facebook sets with the FOB could have lasting implications for how technology companies operate and are regulated.

Written by Nikita Munjal. Nikita is an IPilogue Editor, Clinic Fellow with the Innovation Clinic, and a JD/MBA Candidate at Osgoode Hall Law School.

One Comment
  1. On its face, the Facebook Oversight Board is appealing: it engenders public goodwill in the face of many scandals, but ultimately there is nothing substantive to back it up.

    The bigger question is why leave it to companies to police themselves? Facebook adheres to its own moral code and follows local and international laws only when it cannot avoid them. The FOB cannot establish an appropriate balance of free expression and moderation nor will it be vested with enough power to make changes occur.

    In the realm of privacy, one piece of legislation has had enormous positive effects: the European Union’s General Data Protection Regulation (“GDPR”) is a game changer not just for Europe but for the world. Since its inception, many non-European countries have adopted privacy statutes in the vein of the GDPR. Tech companies around the world work hard to be compliant, knowing the steep fines and reputational damage that follow a finding of breach.

    The approach to moderating social media sites could be similar. If the EU took a stand and created legislation specifically targeting how companies moderate user-generated content, other countries would likely follow. There will never be complete global consensus on what freedom of expression looks like, but there can be a set of guidelines, backed by consequences for inaction, with which these companies can be forced to comply. And that must come from legislators, not from the companies themselves.
