“When users believed they were only talking to their religious group in messages or groups, they more readily shared inflammatory or misinformation content,” an internal report by Facebook on communal violence in India concluded. The report recommended various steps to reduce hateful content on Facebook and WhatsApp groups.
The findings of the internal report were based on 37 at-home interviews with Facebook users across four cities. Apart from its findings on groups, the report also acknowledged that users encountered misinformation on Facebook during the Delhi Riots.
Till now, regulators in India and abroad have focused broadly on content moderation rules that apply across platforms. If hateful content spreads largely through closed groups, however, regulations on how such groups must be moderated may be called for.
What Facebook’s report said about hateful content on groups
The internal document, titled ‘Communal Conflict in India,’ was included in disclosures made to the US Securities and Exchange Commission (SEC) and provided to US Congress in redacted form by whistleblower Frances Haugen’s legal counsel. The redacted versions received by Congress were reviewed by a consortium of news organisations including MediaNama.
- Comfort with sharing harmful content: “Some Hindu and Muslim participants felt more comfortable sharing harmful content when they believed only other members of their religion would see it,” the report said. Both communities cited WhatsApp groups as more comfortable spaces to share content that would offend another religious community, according to the report.
- Moving to single religion spaces: When the participants of Facebook’s study experienced harassment or hate speech directed towards them, they migrated to more tightly-knit communities. “Many participants moved to perceived single religion spaces, where they felt they could express themselves freely,” the report said.
Recommendations: In light of these findings, the Facebook report made several recommendations for making groups on Facebook and WhatsApp safer:
- Expand enforcement against inflammatory content and hate speech in closed Facebook groups.
- Unpublish Facebook groups with a high number of hate strikes.
- Build major categories (hate, inflammatory, misinformation, violence and incitement) under which users can report content on WhatsApp.
In response to MediaNama’s queries, a Meta spokesperson told us that “we don’t allow hate speech on Facebook and we remove it when we find it or are made aware of it. We’re investing heavily in people and technology to help us find and remove this content quickly.” Earlier this year, the company also made several changes to keep groups safe, including demoting content in groups from members who have violated the company’s community standards.
Users got misinformation on Facebook, WhatsApp during Delhi Riots
The report also acknowledged that users encountered misinformation on Facebook and WhatsApp during the Delhi Riots:
“Most participants relied on Family Apps [Facebook and WhatsApp] for information during recent conflicts; much of it misinformation, and some of which they believed led to offline harms (e.g. Delhi riots)” – Facebook Internal Document
Recommendations: In order to tackle such misinformation and inflammatory content during conflicts, internal Facebook researchers suggested the following:
- Uprank high-quality news on Facebook to debunk crisis misinformation.
- Prioritise Facebook demotion of out of context images and videos, particularly during crisis events.
- Designate hate events to enable removal/demotion of praise with quick turnaround time across Family Apps (e.g. Delhi Riots).
In a recent hearing held by the Delhi Peace and Harmony Committee, Facebook’s public policy head Shivnath Thukral refused to answer any questions about the company’s role in the Delhi Riots. You can read our detailed coverage of the hearing here.
Facebook’s failure to check hate speech in India
While Facebook’s researchers have prepared detailed documentation on hate speech and misinformation in India, the company has failed to take action on such content in several reported instances:
- RSS, Bajrang Dal: In October this year, leaked documents showed that Facebook’s internal researchers had flagged anti-Muslim content by the Rashtriya Swayamsevak Sangh and the Bajrang Dal. The researchers specifically listed the Bajrang Dal for takedown, but the organisation’s pages remain live on Facebook.
- Telangana: Inflammatory posts by Raja Singh, a BJP MLA from Telangana, were left on the platform despite being marked as hate speech, the Wall Street Journal reported in August 2020. In his posts, Singh had said that Rohingya Muslim immigrants should be shot, called Muslims traitors, and threatened to raze mosques.
- Assam: Facebook flagged accounts of BJP politicians posting inflammatory content ahead of the Assam elections, but did not take them down. The company also did not remove a hateful post by Shiladitya Dev, a BJP MLA from Assam, for nearly a year, TIME reported in August 2020. Dev had shared a news report about a girl allegedly being drugged and raped by a Muslim man, saying this was how Bangladeshi Muslims target the “native people.”
- No reason to remove Bajrang Dal: In December 2020, Facebook was questioned by the Parliamentary Standing Committee on IT regarding the allegations. Ajit Mohan, head of Facebook India, told the panel that the company had no reason to act against or take down content from the Bajrang Dal.
Update (24 November, 10:03 AM): Responses from the Meta spokesperson were added.
Also read:
- What Facebook Told The Delhi Peace And Harmony Committee, And What Was Left Out
- Exclusive: Facebook Knew Of BJP Leaders Posting Hateful Content In Assam But Didn’t Stop Them
- How Can Facebook’s Content Decisions Resist Political Influence? Employees Knew Internally
