
Regulations to check Facebook should focus on safer algorithms, not content removal

The need of the hour is for lawmakers to understand the systems that are amplifying harmful content.

Imagine a biscuit brand that shipped harmful biscuits to customers, who fell ill after consuming them. Regulators have two options: inspect biscuits frequently and ask the brand to recall harmful packets, or frame laws mandating systemic protections in the company’s production and distribution process. Which is more efficient?

Through the IT Rules 2021, the Indian government has created a regulatory infrastructure for content takedowns. Focusing on takedowns, however, is like checking for individual bad biscuits: it’s inefficient and fails to address structural flaws.

The Facebook Papers leaked by Frances Haugen, which I have reported on for the past month, make it clear that Facebook’s failures in content moderation are systemic, not isolated incidents. The need of the hour is for lawmakers to understand the systems that amplify harmful content instead of focusing on taking down individual posts.

Why regulators need to focus on harmful algorithms

The intuitive approach to harmful content: Our intuitive understanding of the ‘bad content’ problem on Facebook is that content reviewers are not doing a good enough job of taking such content down. A criticism often levelled against Facebook is that it doesn’t have nearly enough such reviewers, or more specifically in India, that it is often unwilling to take down content by influential political figures.

Why that’s the wrong approach: While unbiased human oversight of content is crucial, Facebook has other means at its disposal for reducing the spread of hateful content. Innumerable signals go into determining what content is distributed on Facebook and to what extent; among them are classifiers that answer questions such as ‘Is this potentially violative of our standards?’ and ‘Is this content likely to receive high engagement from user X?’
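To make the idea concrete, here is a minimal sketch of the kind of per-post signals such a system might compute. The function names, stub logic, and thresholds below are illustrative assumptions, not Facebook’s actual classifiers:

```python
# Hypothetical per-post distribution signals (illustrative only).

def estimate_violation_probability(post: dict) -> float:
    """Stub classifier: 'Is this potentially violative of our standards?'"""
    flagged_terms = {"hate", "abuse"}
    words = set(post["text"].lower().split())
    return 0.9 if words & flagged_terms else 0.1

def estimate_engagement(post: dict) -> float:
    """Stub classifier: 'Is this likely to receive high engagement?'"""
    return min(1.0, post.get("author_followers", 0) / 10_000)

def distribution_signals(post: dict) -> dict:
    """Bundle the signals a ranking system could weigh when distributing a post."""
    return {
        "violation_probability": estimate_violation_probability(post),
        "predicted_engagement": estimate_engagement(post),
    }

signals = distribution_signals({"text": "harmless holiday photos",
                                "author_followers": 5_000})
print(signals)  # {'violation_probability': 0.1, 'predicted_engagement': 0.5}
```

A real system would use machine-learned models rather than keyword stubs, but the point stands: these scores exist before any human reviewer sees a post, and they shape how far it travels.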


Keeping such metrics in mind, Facebook has set up its algorithms to prioritize certain objectives when distributing content, like increasing user engagement or prioritizing meaningful social interactions. The real power of Facebook, which can often be misused, is in its control of what its algorithms aim to achieve. Regulatory efforts need to zero in on this decision-making power of platforms.

What can be done: The objective of reducing the spread of harmful content on Facebook is achieved more efficiently if algorithms detect potential harm and simply don’t amplify that piece of content. However, there is often a tradeoff between ensuring safety and increasing engagement, and Facebook has little incentive to prioritize safety.
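The safety-versus-engagement tradeoff can be sketched in a few lines. This scoring formula and its weights are assumptions for illustration, not any platform’s real algorithm:

```python
# Hedged sketch: instead of a binary takedown, a ranker can demote content
# in proportion to predicted harm. Illustrative formula only.

def ranking_score(predicted_engagement: float,
                  harm_probability: float,
                  safety_weight: float = 1.0) -> float:
    """Engagement-driven score, discounted by predicted harm.

    safety_weight = 0 reproduces a pure engagement objective; higher values
    trade engagement for safety -- the dial regulators could oversee.
    """
    return predicted_engagement * (1.0 - safety_weight * harm_probability)

# A likely-harmful but highly engaging post is demoted, not removed:
print(round(ranking_score(0.9, 0.8), 2))         # 0.18 -> low distribution
print(ranking_score(0.9, 0.8, safety_weight=0))  # 0.9  -> engagement-only
```

The design point is that `safety_weight` is a policy choice, not a technical inevitability. A platform optimising purely for engagement sets it to zero; oversight of algorithmic objectives amounts to asking what that dial is set to, and why.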

That’s where regulators need to step in to make sure platforms prioritize safety in their algorithms, even at the cost of engagement. The policies that govern algorithms need oversight to ensure that they are not just aimed at making platforms money by increasing engagement, but also prioritize keeping users and societies safe.

Facebook whistleblower Sophie Zhang recently emphasized the same point in a Reddit AMA:

There has been considerable research done within Facebook about actions to change distribution to minimize this distribution [of harmful content], that it has been reported that FB resisted or refused as it would hurt activity in general. If FB wanted to avoid the ongoing genocide in Myanmar, my personal belief is that it could have done so by turning down virality in the country. – Sophie Zhang (emphasis ours)

Indian regulators focus on takedowns, missing the point

Current regulations in India don’t address the policies governing Facebook’s algorithms and focus instead on giving the government the power to dictate content takedowns. Here are the clauses of the IT Rules 2021 on content takedowns:

  • Disabling content within 36 hours of a government order: All intermediaries must remove or disable access to information within 36 hours of receiving a court order, or a direction from an appropriate government agency, under Section 79 of the IT Act.
  • Voluntary takedowns: All intermediaries must take down content that violates any law; is defamatory, obscene, pornographic, paedophilic, invasive of privacy, or insulting or harassing on the basis of gender; relates to money laundering or gambling; or is “otherwise inconsistent with or contrary to the laws of India”.
  • Disabling content within 24 hours of a user complaint: All intermediaries must, within 24 hours of an individual (a user or victim) reporting it, take down content that exposes a person’s private parts (partial or full nudity), depicts a sexual act, impersonates a person, or consists of morphed images.

Do the rules mention algorithms? The closest the rules come to addressing algorithms is when asking platforms to develop automated tools to identify content depicting rape, child sexual abuse and content that is “exactly identical” to previously removed content.
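The “exactly identical” standard is a narrow one. A hedged sketch of how such a check could work, hashing previously removed files and blocking byte-for-byte matches, illustrates the concept in the rules (not any platform’s actual implementation):

```python
# Illustrative exact-duplicate re-upload check via content hashing.
import hashlib

removed_content_hashes: set = set()

def fingerprint(data: bytes) -> str:
    """Hash the raw bytes of a piece of content."""
    return hashlib.sha256(data).hexdigest()

def record_removal(data: bytes) -> None:
    """Remember removed content so exact copies can be caught later."""
    removed_content_hashes.add(fingerprint(data))

def is_reupload(data: bytes) -> bool:
    """True only if the upload is byte-for-byte identical to removed content."""
    return fingerprint(data) in removed_content_hashes

record_removal(b"previously removed image bytes")
print(is_reupload(b"previously removed image bytes"))  # True
print(is_reupload(b"slightly different image bytes"))  # False
```

Note that changing a single byte defeats an exact-match check entirely, which is one reason a mandate framed around “exactly identical” content does little to constrain how ranking algorithms amplify harm.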

Such requirements, however, don’t do nearly enough to regulate the algorithms through which platforms distribute content. If we want social media to be safer at large, Indian regulators will need to ensure that these algorithms are geared towards that goal.



Written By

Figuring out subscriptions and growth at MediaNama. Email: nishant@medianama.com


MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
