Imagine a biscuit brand shipped harmful biscuits to customers who fell ill after consuming them. Regulators have two options: they could check biscuits frequently and ask the brand to recall harmful packets, or they could frame laws that mandate systemic protections in the company’s distribution process. Which one is more efficient?
Through the IT Rules 2021, the Indian government has created a regulatory infrastructure for content takedowns. Focusing on takedowns, however, is like checking for individual bad biscuits: it’s inefficient and fails to address structural flaws.
The Facebook Papers leaked by Frances Haugen, which I have reported on for the past month, make it clear that Facebook’s failures in content moderation are systemic, not isolated incidents. The need of the hour is for lawmakers to understand the systems that amplify harmful content instead of focusing on taking down individual posts.
Why regulators need to focus on harmful algorithms
The intuitive approach to harmful content: Our intuitive understanding of the ‘bad content’ problem on Facebook is that content reviewers are not doing a good enough job of taking such content down. A criticism often levelled against Facebook is that it doesn’t have nearly enough such reviewers, or more specifically in India, that it is often unwilling to take down content by influential political figures.
Why that’s the wrong approach: While unbiased human oversight of content is crucial, Facebook has other ways at its disposal to reduce the spread of hateful content. Innumerable signals go into determining what content is distributed on Facebook and to what extent; many of these come from classifiers that answer questions such as ‘Is this potentially violative of our standards?’ and ‘Is this content likely to receive high engagement from user X?’
Keeping such metrics in mind, Facebook has set up its algorithms to prioritize certain objectives when distributing content, like increasing user engagement or prioritizing meaningful social interactions. The real power of Facebook, which can often be misused, is in its control of what its algorithms aim to achieve. Regulatory efforts need to zero in on this decision-making power of platforms.
What can be done: The objective of reducing the spread of harmful content on Facebook is achieved more efficiently if algorithms detect potential harm and simply don’t amplify that piece of content. However, there is often a tradeoff between ensuring safety and increasing engagement, and Facebook has little incentive to prioritize safety.
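To make that tradeoff concrete, here is a minimal, purely hypothetical sketch of how a distribution algorithm might weigh engagement against safety. The signal names, the weights, and the ranking_score function are our own assumptions for illustration, not Facebook’s actual system.

```python
# Hypothetical sketch of a feed-ranking objective, for illustration only.
# The signal names and weights are assumptions, not Facebook's actual code.

from dataclasses import dataclass

@dataclass
class ContentSignals:
    predicted_engagement: float  # classifier output: likelihood of likes/shares/comments (0-1)
    predicted_harm: float        # classifier output: likelihood the post violates standards (0-1)

def ranking_score(signals: ContentSignals, safety_weight: float) -> float:
    """Combine classifier outputs into a single distribution score.

    A higher score means the post is shown to more users. The platform,
    not the user or the regulator, chooses `safety_weight` -- the
    decision-making power this article argues needs oversight.
    """
    return signals.predicted_engagement - safety_weight * signals.predicted_harm

# A borderline post: highly engaging, probably harmful.
post = ContentSignals(predicted_engagement=0.9, predicted_harm=0.7)

# With engagement prioritized, the post is still amplified...
print(ranking_score(post, safety_weight=0.2))   # 0.76 -> widely distributed

# ...while a safety-first weighting demotes it without any takedown.
print(ranking_score(post, safety_weight=1.5))   # -0.15 -> not amplified
```

The point of the sketch is that a single internal parameter, set by the platform, decides whether a harmful-but-engaging post is amplified or quietly demoted, with no takedown involved.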
That’s where regulators need to step in to make sure platforms prioritize safety in their algorithms, even at the cost of engagement. The policies that govern algorithms need oversight to ensure that they are not just aimed at making platforms money by increasing engagement, but also prioritize keeping users and societies safe.
Facebook whistleblower Sophie Zhang recently emphasized the same point in a Reddit AMA:
There has been considerable research done within Facebook about actions to change distribution to minimize this distribution [of harmful content], that it has been reported that FB resisted or refused as it would hurt activity in general. If FB wanted to avoid the ongoing genocide in Myanmar, my personal belief is that it could have done so by turning down virality in the country. – Sophie Zhang (emphasis ours)
Indian regulators focus on takedowns, missing the point
Current regulations in India don’t address the policies governing Facebook’s algorithms and focus instead on giving the government the power to dictate content takedowns. Here are the clauses of the IT Rules 2021 on content takedowns:
- Disabling content within 36 hours of government order: All intermediaries have to remove or disable access to information within 36 hours of receiving a court order, or a notification from an appropriate government agency, under Section 79 of the IT Act.
- Voluntary takedowns: All intermediaries will have to take down content that violates any law; is defamatory, obscene, pornographic, paedophilic, invasive of privacy, or insulting or harassing on the basis of gender; relates to money laundering or gambling; or is “otherwise inconsistent with or contrary to the laws of India”.
- Disabling content within 24 hours of user complaint: All intermediaries will have to take down content that exposes a person’s private parts (partial or full nudity), shows any sexual act, impersonates a person, or contains morphed images, within 24 hours of an individual (user or victim) reporting it.
Do the rules mention algorithms? The closest the rules come to addressing algorithms is in asking platforms to develop automated tools to identify content depicting rape or child sexual abuse, and content that is “exactly identical” to previously removed content.
Such requirements, however, don’t do nearly enough to regulate the algorithms through which platforms distribute content. If we want social media to be safer at large, Indian regulators will need to ensure that these algorithms are geared towards that goal.
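For context, “exactly identical” matching is typically a narrow intervention: in practice it can amount to little more than a hash lookup against previously removed content. The sketch below is a hypothetical illustration under assumed names, not any platform’s actual tool, and it shows why such matching does not touch how content is ranked and distributed.

```python
# Hypothetical illustration of "exactly identical" content matching via hashing.
# Function and variable names are our own assumptions, not any platform's API.

import hashlib

# Hashes of content that moderators have previously removed (assumed store).
removed_content_hashes: set[str] = set()

def record_removal(content: bytes) -> None:
    """Remember a removed post so identical re-uploads can be caught."""
    removed_content_hashes.add(hashlib.sha256(content).hexdigest())

def is_exact_reupload(content: bytes) -> bool:
    """True only if the bytes match a removed post exactly.

    A single changed pixel, crop, or re-encode produces a different hash,
    so this catches verbatim re-uploads and nothing more.
    """
    return hashlib.sha256(content).hexdigest() in removed_content_hashes

record_removal(b"previously removed post")
print(is_exact_reupload(b"previously removed post"))   # True
print(is_exact_reupload(b"previously removed post!"))  # False: trivially altered
```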
Also read:
- Indian Users Share Hateful Content More Readily On Same-Religion Groups: Facebook Internal Docs
- Indian Users Want Facebook’s Help In Identifying Fake News, Internal Researchers Found
- Meta Curtails Its Ad-Targeting Options Referencing Race, Religion, And Politics
