After a damaging leak of internal company records triggered calls for regulatory action against Meta, independent researchers now claim that Facebook is stifling a report it commissioned them to produce on its human rights impact in India, according to a report in the Wall Street Journal. Meta has refuted the claims, saying that it is being thorough with the report rather than trying to meet an ‘arbitrary deadline’.
However, researchers claim that Facebook’s human rights team, which oversees the work on the report, has been raising technical objections and narrowing its scope in an attempt to stifle it. The company had approached the researchers to provide factual evidence of toxic content and their views on Facebook use in India, Dr. Ritumbra Manuvie, one of the researchers on the project, told MediaNama.
Facebook in India faces renewed scrutiny from the government, legislative committees, and lawmakers following recent revelations by employee-turned-whistleblower Frances Haugen.
Allegations of the researchers
According to the WSJ report, the researchers allege that the following actions are being taken through Foley Hoag, a New York-based law firm hired by Facebook to take charge of the report:
Moving of goalposts: Foley Hoag challenged content flagged as hate speech by Stichting The London Story (TLS), one of the human rights organisations behind the report. The law firm first asked whether the content had been reported on Facebook, and then whether it had been reported within specific time frames, which the researchers describe as a shifting of goalposts.
“In context of hate speech or human rights assessment of a company as big as Facebook it should not matter if the toxic content was found or flagged in a specific time frame. The fact that there was toxic content on the platform, which was widely viewed, and was not removed by Facebook despite user community flagging it – needs to be acknowledged,” Ritumbra Manuvie, the co-founder of TLS told MediaNama.
Further, the law firm asked TLS to prove that a piece of content had caused harm, which Manuvie said was a “higher bar than human rights impact assessments must typically meet and not in the spirit of assembling an independent report.”
Technical objections: Facebook raised technical objections on the report’s definition of hate speech and thus, the content flagged or included in the report, Ratik Asokan, an Indian Civil Watch researcher involved in the report, told MediaNama. Manuvie said that TLS had relied on Facebook’s definitions of hate speech, violence, etc., as per its content moderation policy.
Not removing flagged hateful material: Researchers claimed that they found and reported large amounts of hateful content on the platform, including a video that called for the extermination of Muslims and Islam and clocked 40 million views, but the content was not removed. However, a Facebook spokesperson denied this to the WSJ.
“Facebook’s toxic content flagging systems are broken and Facebook has never given a detailed breakup of content it claims it has removed from the platform (for example we do not know what protocols Facebook followed, or how much of the content was in English language or Hindi or another non-English language, from which country they removed the content, etc are some of the unanswered questions),” Manuvie said.
How the study came to be
The researchers say the study was commissioned in mid-2020. Nick Clegg, Facebook’s VP of Global Affairs, revealed that Facebook had commissioned a Human Rights Impact Assessment “several months ago.” Responding to a letter from various Indian civil society groups asking Facebook to address dangerous content in India, Clegg said that Foley Hoag would have “complete independence” in determining the methods and groups to consult, but suggested the law firm incorporate feedback from individual Facebook users and vulnerable groups, WSJ reported.
In recent years, Facebook has released executive summaries of human rights impact assessments it commissioned on its operations in Indonesia, Sri Lanka, and Cambodia. In each instance, it said that the consultants who were engaged completed their work in less than one year, as per WSJ.
Facebook’s previous tussle with researchers
In August 2021, Facebook banned the accounts of New York University researchers, threatened legal action against researchers at the Berlin-based AlgorithmWatch, and restricted data access for researchers at Princeton University.
NYU researchers: Facebook banned the personal accounts of researchers who were part of the NYU Ad Observatory and suspended the team’s access to Facebook’s Ad Library and CrowdTangle, which they were using to study political ad-targeting. Facebook alleged that the researchers were collecting data without authorisation through a plug-in they developed for the research. Multiple civil rights groups, privacy activists, and a group of US senators challenged Facebook’s decision.
AlgorithmWatch: Facebook issued legal threats to researchers at AlgorithmWatch over their research on Instagram’s algorithm, specifically on how Instagram prioritises pictures and videos in a user’s timeline. Facebook claimed that AlgorithmWatch’s study was flawed, with various issues in its methodology, in breach of Facebook’s terms of service, and in violation of the GDPR.
Princeton University: The university’s researchers withdrew from studying political advertising on the platform through its ‘Facebook Open Research and Transparency’ platform (FORT) due to stringent contractual obligations. These included allowing Facebook to review their research before publication and remove any information related to “Facebook’s products and technology, its data processing systems, policies, and platforms, in addition to personal information pertaining to its users or business partners.”
- Explained: Facebook’s tussle with researchers studying its algorithms and political ad data
- Facebook whistleblower: A summary of all 8 complaints to the US SEC