Earlier this year, Facebook announced the appointment of 20 members to its new Facebook Oversight Board (FOB). The announcement came amid increasing demands for Facebook to take responsibility for moderating third-party content and protecting the privacy of its users.
What is the Facebook Oversight Board?
Since at least early 2018, when the Cambridge Analytica scandal broke, Facebook has faced heavy criticism for the way the social media empire operates. Founder and CEO Mark Zuckerberg apologized before Capitol Hill and released a statement promising to “do better.” The FOB seems to be the newest endeavor in fulfilling that promise.
A quick scan of the FOB’s Charter makes it clear the FOB’s framework emulates a quasi-judicial system. For instance, the board is independently funded to ensure its autonomy from Facebook. Further, the board’s decisions are binding upon Facebook unless the decisions could violate the law.
The current board comprises an impressive 20 members, including law professors, the former Prime Minister of Denmark, and a Nobel Peace Prize Laureate. Some of the members, like Professor Nicolas Suzor, have historically criticized Facebook’s methodology. The appointment of renowned critics, combined with fixed terms for board members, lends credence to Facebook’s assertions of the board’s autonomy.
Yet, people are skeptical that the FOB will have any lasting impact. One reason for the skepticism is that the members of the board are not diverse enough. Realistically, for a platform like Facebook, which has more than one billion users worldwide, it is highly unlikely that any board could reflect all of its users’ needs. That does not excuse the lack of representation for LGBTQ+ advocates or people from certain regions, such as the Middle East, North Africa, and Southeast Asia. The lack of diversity highlights the more significant underlying problem Facebook and the board face: finding the line between freedom of expression and hate speech for people with vastly differing values.
Freedom of expression, at whose expense?
Facebook’s unwillingness to moderate or censor the content available to its users has had severe offline consequences. By championing “free expression,” Facebook allowed the spread of hate speech against the Rohingya in Myanmar and Muslims in Sri Lanka, and fueled social unrest and division in the United States.
The FOB will begin hearing cases later this year. It will hear appeals in cases where Facebook or Instagram, which Facebook owns, reviewed and removed content reported as offensive. In the meantime, harmful information continues to spread on Facebook’s platform without much censure. Even Facebook’s own employees, displeased with the company’s inaction, staged a walkout.
Critics and former employees of the company have repeatedly stated that real change would require a threat to the company’s bottom line. Indeed, once companies such as North Face, Verizon, Coca-Cola, and Unilever announced they would be pulling advertisements from Facebook to combat the amount of hate speech online, Mark Zuckerberg quickly announced changes to company policy. Under the new policy, Facebook will ban ads that claim people of a specific race, ethnicity, nationality, caste, gender, sexual orientation, or immigration origin are a threat to the health or safety of anyone else.
This policy was quite the turnaround from earlier in the summer, when Facebook was adamant it would not change its existing policy on free speech. Ultimately, we will have to wait until the FOB releases its first set of decisions, and until we see Facebook’s response to those decisions, to determine what values will be upheld in the name of free expression. As one of the leaders in technology, the precedent Facebook sets with the FOB could have lasting implications for how technology companies operate and are regulated.
Written by Nikita Munjal. Nikita is an IPilogue Editor, Clinic Fellow with the Innovation Clinic, and a JD/MBA Candidate at Osgoode Hall Law School.