
Indian users want Facebook’s help in identifying fake news, internal researchers found

Leaked research files highlight how Indian users felt about misleading content on the platform, along with preferred steps.

Ahead of the 2019 general elections in India, Facebook’s misinformation team conducted research on displaying misinformation labels on fake news posts, internal documents seen by MediaNama reveal. Most surveyed users wanted Facebook’s help with identifying misinformation on the platform, the researchers found.

For the study, Facebook’s misinformation team conducted lab interviews with 30 users from Jaipur and Hyderabad. They found that users wanted Facebook to take action against misinformation, especially related to politics or religious tensions.

Labelling misinformation remains a double-edged sword for social media platforms. While such labels might improve the user experience, they can also invite controversy and political backlash. Platforms therefore need to walk a fine line between being accused of curtailing speech and allowing fake news to spread.

What did Facebook’s internal study find about misinformation?

The internal report, titled ‘Misinformation In-Feed Warnings: Findings from India’, was included in disclosures made to the US Securities and Exchange Commission (SEC) and provided to Congress in redacted form by Frances Haugen’s legal counsel. The redacted versions received by Congress were reviewed by a consortium of news organisations including MediaNama.

Here are the conclusions that Facebook researchers drew from the study:

  • Users want Facebook’s help: According to the study, 24 out of the 30 interviewed users wanted Facebook’s help with misinformation:

    This [misinformation] is a bad user experience. Users need to feel like they can distinguish between information they can and cannot trust on the platform. – Facebook’s internal research

  • Low trust in Facebook for local information: Users trusted Facebook’s fact check labels more when they were addressing international content, and less when they were placed on local issues, the document found.
  • Low recognition of fact-checkers: Users trusted fact-checks more when Facebook referred to them generally as ‘third-party fact checkers’ instead of referring to organisations like Boom or AFP, the study concluded. “Users trust fact-checkers because we trust fact-checkers,” it added.
  • The ideal intervention: The intervention most preferred by users was a stamp displayed over fake news posts, the research found. 

    Slide from Facebook’s internal report on ‘Misinformation In-Feed Warnings’

Twitter faces political backlash after displaying ‘manipulated media’ tag

While users might prefer help from social media platforms (according to Facebook’s research), extending such help comes with its own challenges.

In May this year, Twitter displayed the ‘manipulated media’ tag on posts by senior BJP leader Sambit Patra, among others. In response, Twitter received heavy criticism from the Ministry of Electronics and Information Technology.

The Delhi Police also visited Twitter’s office in Delhi to serve them a notice regarding the ‘manipulated media’ tag.

Related: Centre Asks Twitter To Remove ‘Manipulated Media’ Tag From Toolkit Tweets

Is there a middle path?

Labelling misinformation, especially from powerful figures, can bring political backlash and legal ramifications for social media platforms. At the same time, it helps contain misinformation and fake news. How should platforms deal with this challenge?


In a recent Reddit AMA, Facebook whistleblower Sophie Zhang offered a solution to the misinformation problem. Zhang suggested that instead of focusing on fact-checking, platforms should treat fake news as a distribution problem:

It’s my personal belief that it’s incorrect to see misinformation and hate speech as a content issue solved via post moderation and fact checking. To me, this is an issue of distribution.

There have always been rumors and misinformation. What distinguishes the present day is that these rumors can go viral and be widely discussed and heard without the need for coverage by outlets such as the Times of India or Dainik Bhaskar. When mob lynchings occurred in India as a result of Whatsapp rumors in 2017-2020, it was not because rumors existed, but because they were easily spread and distributed. Fact checkers would not have stopped this because of the nature of Whatsapp as a private encrypted service.

People have the right to freedom of speech, but no one has a right to freedom of distribution. You are not being censored simply because your post doesn’t make it to the front page of Reddit. And there has been considerable research done within Facebook about actions to change distribution to minimize this distribution, that it has been reported that FB resisted or refused as it would hurt activity in general.

If FB wanted to avoid the ongoing genocide in Myanmar, my personal belief is that it could have done so by turning down virality in the country. — Sophie Zhang



Written By

Figuring out subscriptions and growth at MediaNama. Email: nishant@medianama.com

MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
