Ahead of the 2019 general elections in India, Facebook’s misinformation team conducted research on displaying misinformation labels on fake news posts, internal documents seen by MediaNama reveal. Most surveyed users wanted Facebook’s help with identifying misinformation on the platform, the researchers found.
For the study, the team conducted lab interviews with 30 users in Jaipur and Hyderabad. It found that users wanted Facebook to take action against misinformation, especially content related to politics or religious tensions.
Labelling misinformation remains a double-edged sword for social media platforms. While displaying such labels might lead to a better user experience, it can often invite controversy and political backlash. Platforms must therefore walk a fine line between curtailing speech and allowing fake news to spread.
What did Facebook’s internal study find about misinformation?
The internal report, titled ‘Misinformation In-Feed Warnings: Findings from India’, was included in disclosures made to the US Securities and Exchange Commission (SEC) and provided to Congress in redacted form by Frances Haugen’s legal counsel. The redacted versions received by Congress were reviewed by a consortium of news organisations including MediaNama.
Here are the conclusions that Facebook researchers drew from the study:
- Users want Facebook’s help: According to the study, 24 out of the 30 interviewed users wanted Facebook’s help with misinformation:
This [misinformation] is a bad user experience. Users need to feel like they can distinguish between information they can and cannot trust on the platform. – Facebook’s internal research
- Low trust in Facebook for local information: Users trusted Facebook’s fact-check labels more when they addressed international content, and less when they were applied to local issues, the document found.
- Low recognition of fact-checkers: Users trusted fact-checks more when Facebook attributed them generically to ‘third-party fact checkers’ than when it named specific organisations like Boom or AFP, the study concluded. “Users trust fact-checkers because we trust fact-checkers,” it added.
- The ideal intervention: The intervention most preferred by users was a stamp displayed over fake news posts, the research found.
Twitter faces political backlash after displaying ‘manipulated media’ tag
While users might prefer help from social media platforms (according to Facebook’s research), extending such help comes with its own challenges.
In May this year, Twitter applied the ‘manipulated media’ tag to tweets by senior BJP leader Sambit Patra, among others. The move drew heavy criticism from the Ministry of Electronics and Information Technology (MeitY):
Ministry has further stated that Twitter unilaterally chose to go ahead & designate certain tweets as 'Manipulated', pending investigation. This action not only dilutes the credibility of Twitter but also puts question mark on status of Twitter as an “Intermediary”: MeitY Sources
— ANI (@ANI) May 21, 2021
The Delhi Police also visited Twitter’s office in Delhi to serve the company a notice regarding the ‘manipulated media’ tag.
Related: Centre Asks Twitter To Remove ‘Manipulated Media’ Tag From Toolkit Tweets
Is there a middle path? An interesting take
Labelling misinformation, especially from powerful figures, can come with political backlash and legal ramifications for social media platforms. At the same time, it helps contain the spread of fake news. How should platforms deal with this challenge?
In a recent Reddit AMA, Facebook whistleblower Sophie Zhang offered a different approach to the misinformation problem. Zhang suggested that instead of focusing on fact-checking, platforms should treat fake news as a distribution problem:
It’s my personal belief that it’s incorrect to see misinformation and hate speech as a content issue solved via post moderation and fact checking. To me, this is an issue of distribution.
There have always been rumors and misinformation. What distinguishes the present day is that these rumors can go viral and be widely discussed and heard without the need for coverage by outlets such as the Times of India or Dainik Bhaskar. When mob lynchings occurred in India as a result of Whatsapp rumors in 2017-2020, it was not because rumors existed, but because they were easily spread and distributed. Fact checkers would not have stopped this because of the nature of Whatsapp as a private encrypted service.
People have the right to freedom of speech, but no one has a right to freedom of distribution. You are not being censored simply because your post doesn’t make it to the front page of Reddit. And there has been considerable research done within Facebook about actions to change distribution to minimize this distribution, that it has been reported that FB resisted or refused as it would hurt activity in general.
If FB wanted to avoid the ongoing genocide in Myanmar, my personal belief is that it could have done so by turning down virality in the country. — Sophie Zhang
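Zhang’s argument is about ranking architecture rather than content policy: throttle how far a post can travel instead of adjudicating what it says. As a purely illustrative sketch of that idea, the toy ranking function below damps a post’s distribution score once its reshare chain exceeds a depth limit. Every name, threshold, and decay curve here is an assumption invented for illustration; nothing in it reflects Facebook’s actual systems or anything in Zhang’s disclosures.

```python
# Hypothetical "virality circuit breaker" for a feed-ranking pipeline.
# All identifiers and parameters are invented for illustration only.
import math
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float  # base ranking signal (likes, comments, etc.)
    reshare_depth: int       # length of the reshare chain behind this view

def distribution_score(post: Post, depth_limit: int = 5) -> float:
    """Down-rank posts the further they travel through reshare chains.

    Posts within the depth limit rank on engagement as usual; beyond it,
    the score decays exponentially, throttling viral spread without
    removing or labelling the post itself.
    """
    if post.reshare_depth <= depth_limit:
        return post.engagement_score
    excess = post.reshare_depth - depth_limit
    return post.engagement_score * math.exp(-0.5 * excess)

# Example: identical engagement, very different distribution.
organic = Post("p1", engagement_score=100.0, reshare_depth=2)
viral = Post("p2", engagement_score=100.0, reshare_depth=12)
print(distribution_score(organic))  # 100.0
print(distribution_score(viral))    # ~3.0, heavily damped
```

The design point of the sketch is that the post itself is never removed or labelled; only its reach decays. That is why a distribution-side intervention of this kind would sidestep the fact-checking and censorship controversies described above.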
Also read:
- Facebook Didn’t Take Down Fake Accounts Linked To BJP Ahead Of Elections, Whistleblower Claims
- How Can Facebook’s Content Decisions Resist Political Influence? Employees Knew Internally
