Facebook’s Oversight Board, the independent body that acts as a court of appeals for the company’s content moderation decisions, has announced its rulings on the first batch of cases it picked up last month. The Board overturned Facebook’s content moderation decisions in four cases and upheld them in one. In the case concerning a post quoting alleged hate speech by former Malaysian prime minister Dr Mahathir Mohamad, the Board did not issue a decision, as the user had deleted the post.
In its decisions, the Board criticised a lack of transparency and consistency in Facebook’s enforcement of its Community Standards. It was also critical of Facebook’s automated enforcement, and recommended improving detection algorithms.
The Board, whose creation was first announced in November 2018, can review cases referred to it by users and by Facebook itself, and Facebook cannot overrule its decisions. Having started accepting appeals in October 2020, the Board picked up its first set of cases in December 2020. It subsequently picked up a case pertaining to India, a decision on which is expected in the coming days. Earlier this week, the Board took up its biggest case yet: whether former US president Donald Trump will stay suspended from Facebook.
Key recommendations by the Oversight Board
- Facebook needs to clearly notify users after moderation: The Board found that Facebook did not make it sufficiently clear to users why their posts were removed. In one case, Facebook only told a user that their post had violated its Community Standard on Hate Speech, without specifying which rule had been broken. This lack of transparency leaves users liable to believe that Facebook removed their content out of mere disagreement with it.
- Automated enforcement unsatisfactory: Facebook’s automated systems are not doing a good job of enforcing its rules. Users need to be told when their content has been moderated through automated processes, and in certain cases they should be able to appeal such decisions to a human reviewer.
- Policy on Dangerous Individuals and Organisations unclear: There is a gap between the rules the public has been told about and the rules content moderators actually enforce. Facebook should make the policy clearer, with examples.
- Policy on health misinformation unclear: The enforcement of this policy is not transparent. There is a need for a new Community Standard on health misinformation that consolidates all existing rules in one place, with definitions for terms such as “misinformation”.
Decisions by Facebook’s Oversight Board
Nudity on Instagram from Brazil (Overturned): An Instagram user in Brazil had posted a picture showing breast cancer symptoms, with visible and uncovered nipples. The post was removed automatically for violating Facebook’s policies on nudity. Facebook asked the Board to decline hearing the case, saying it had already restored the post.
Board criticises lack of human oversight: In a scathing indictment of Facebook’s automated content moderation, the Board said that Facebook’s initial removal of the post “indicates the lack of proper human oversight”, which raises human rights concerns. It said that Facebook’s automated systems were unable to recognise the words “breast cancer”, which appeared in Portuguese on the image. “Enforcement which relies solely on automation without adequate human oversight also interferes with freedom of expression,” it said.
It recommended that Facebook inform users when automated enforcement is used to moderate their content, ensure that users can appeal such decisions to human reviewers in certain cases, and improve automated detection of images with text overlay.
Post with ‘quote’ by Goebbels (Overturned): A user had posted a quote incorrectly attributed to Nazi Germany’s propaganda minister Joseph Goebbels. Facebook deleted the post for sharing a quote attributed to a “dangerous individual”, in violation of its Dangerous Individuals and Organisations policy. The Board, however, found that the quote did not support the Nazi party’s ideology, and only sought to compare Donald Trump’s presidency to the Nazi regime.
- The Board recommended that Facebook make its Dangerous Individuals and Organisations policy clearer, with examples. It also recommended that the company release a public list of all organisations and individuals designated as “dangerous”, and that it notify users of the reasons for any enforcement of its Community Standards.
Post about medical information in France (Overturned): In a post, a French user criticised the country’s health regulatory agency for refusing to authorise hydroxychloroquine as a treatment for COVID-19. Facebook removed the post for violating its rules on misinformation, saying that the post misleadingly claimed that a cure for COVID-19 exists. The Board, however, said that the user was only opposing a governmental policy.
- The Board found that Facebook’s “misinformation and imminent harm rule”, which the post supposedly violated, is “inappropriately vague and inconsistent with international human rights standards”.
“A patchwork of policies found on different parts of Facebook’s website make it difficult for users to understand what content is prohibited. Changes to Facebook’s COVID-19 policies announced in the company’s Newsroom have not always been reflected in its Community Standards, while some of these changes even appear to contradict them.” — Facebook Oversight Board
It recommended that Facebook create a new Community Standard on health misinformation and increase transparency around how it moderates such content. It also asked Facebook to publish a transparency report on how its Community Standards were enforced during the COVID-19 pandemic.
Hate speech from Myanmar against Muslims (Overturned): A Myanmar-based user had posted in Burmese, stating that there “is something wrong with Muslims (or Muslim men) psychologically or with their mindset”, while comparing the reaction to the cartoons depicting the Prophet Muhammad in France with the reaction to China’s treatment of Uyghur Muslims. Facebook had removed the content for violating its hate speech policies.
- The Board, on the other hand, felt that the text was better understood as commentary, and hence an “expression of opinion”. It said that “while the post might be considered pejorative or offensive towards Muslims, it did not advocate hatred or intentionally incite any form of imminent harm. As such, the Board does not consider its removal to be necessary to protect the rights of others.”
Slur against Azerbaijanis (Upheld): A user had used an allegedly derogatory term to describe Azerbaijanis, in a post made during the recent conflict between Armenia and Azerbaijan. Facebook had deleted the post for violating its hate speech rules. The Board agreed with Facebook’s assessment and upheld the removal.
Also read:
- Facebook’s Oversight Board Announces First Batch Of Cases, Will Consider Matters Of Hate Speech, Nudity
- Facebook’s Oversight Board Picks Up An Indian Case For Review
- Facebook’s Oversight Board Will Decide If Trump Will Stay Suspended
