
Indian users share hateful content more readily on same-religion groups: Facebook internal docs

Respondents of the study considered WhatsApp groups to be a safe space for posting content targeting other religions.

“When users believed they were only talking to their religious group in messages or groups, they more readily shared inflammatory or misinformation content,” an internal report by Facebook on communal violence in India concluded. The report recommended various steps to reduce hateful content on Facebook and WhatsApp groups.

The findings of the internal report were based on 37 at-home interviews with Facebook users across four cities. Beyond group dynamics, the report also acknowledged that users encountered misinformation on Facebook during the Delhi Riots.

Until now, regulators in India and abroad have focused broadly on content moderation rules that apply across platforms. If hateful content spreads largely through closed groups, however, regulations on how such groups must be moderated may be called for.

What Facebook’s report said about hateful content on groups

The internal document, titled ‘Communal Conflict in India,’ was included in disclosures made to the US Securities and Exchange Commission (SEC) and provided to US Congress in redacted form by whistleblower Frances Haugen’s legal counsel. The redacted versions received by Congress were reviewed by a consortium of news organisations including MediaNama.

  • Comfort with sharing harmful content: “Some Hindu and Muslim participants felt more comfortable sharing harmful content when they believed only other members of their religion would see it,” the report said. Both communities cited WhatsApp groups as more comfortable spaces to share content that would offend another religious community, according to the report.
  • Moving to single religion spaces: When the participants of Facebook’s study experienced harassment or hate speech directed towards them, they migrated to more tightly-knit communities. “Many participants moved to perceived single religion spaces, where they felt they could express themselves freely,” the report said.

Recommendations: In light of these findings, the Facebook report made several recommendations for making groups on Facebook and WhatsApp safer:

  1. Expand enforcement against inflammatory content and hate speech in closed Facebook groups.
  2. Unpublish Facebook groups with a high number of hate strikes.
  3. Build major categories (hate, inflammatory, misinformation, violence and incitement) under which users can report content on WhatsApp.

In response to MediaNama’s queries, a Meta spokesperson told us that “we don’t allow hate speech on Facebook and we remove it when we find it or are made aware of it. We’re investing heavily in people and technology to help us find and remove this content quickly.” Earlier this year, the company also made several changes to keep groups safe, including demoting group content from members who have violated the company’s community standards.

Users encountered misinformation on Facebook, WhatsApp during Delhi Riots

The report also acknowledged that users encountered misinformation on Facebook and WhatsApp during the Delhi Riots:


“Most participants relied on Family Apps [Facebook and WhatsApp] for information during recent conflicts; much of it misinformation, and some of which they believed led to offline harms (e.g. Delhi riots)” – Facebook Internal Document

Recommendations: In order to tackle such misinformation and inflammatory content during conflicts, internal Facebook researchers suggested the following:

  • Uprank high-quality news on Facebook to debunk crisis misinformation.
  • Prioritise Facebook demotion of out-of-context images and videos, particularly during crisis events.
  • Designate hate events to enable removal/demotion of praise with quick turnaround time across Family Apps (e.g. Delhi Riots).

In a recent hearing held by the Delhi Peace and Harmony Committee, Facebook’s public policy head Shivnath Thukral refused to answer any questions about the company’s role in the Delhi Riots. You can read our detailed coverage of the hearing here.

Facebook’s failure to check hate speech in India

While Facebook’s researchers have prepared detailed documentation on hate speech and misinformation in India, the company has failed to take action on such content in several reported instances:

  • RSS, Bajrang Dal: Leaked documents showed in October this year that Facebook’s internal researchers had flagged anti-Muslim content by the Rashtriya Swayamsevak Sangh and the Bajrang Dal. The researchers specifically listed the Bajrang Dal for takedown, but the organisation’s pages remain live on Facebook.
  • Telangana: Inflammatory posts by Raja Singh, a BJP MLA from Telangana, were left on the platform despite being marked as hate speech, the Wall Street Journal reported in August 2020. In his posts, Singh had said that Rohingya Muslim immigrants should be shot, called Muslims traitors, and threatened to raze mosques.
  • Assam: Facebook flagged accounts of BJP politicians posting inflammatory content ahead of the Assam elections, but did not take them down. The company also did not remove a hateful post by Shiladitya Dev, a BJP MLA from Assam, for nearly a year, TIME reported in August 2020. Dev had shared a news report about a girl allegedly being drugged and raped by a Muslim man, saying this was how Bangladeshi Muslims target the “native people.”
  • No reason to remove Bajrang Dal: In December 2020, Facebook was questioned by the Parliamentary Standing Committee on IT regarding the allegations. Ajit Mohan, head of Facebook India, told the panel that the company has no reason to act against or take down content from Bajrang Dal.

Update (24 November, 10:03 AM): Responses from the Meta spokesperson were added.


Written By

Figuring out subscriptions and growth at MediaNama. Email: nishant@medianama.com


© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
