
WhatsApp says it banned 3 million accounts in India, but the actual number could be higher

The monthly compliance reports by Facebook and WhatsApp indicate a spike in how much content is being moderated.

WhatsApp banned over 3 million Indian accounts from its platform between June and July 2021, a new compliance report by the messaging platform revealed. Facebook, Instagram, and WhatsApp all released their monthly compliance reports on August 31, as mandated by the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

Overall, the reports show an increase over the previous reporting period (May and June 2021) both in the amount of content that the platforms flagged (through their automated systems) and took action against, and in the number of user grievances received. These monthly reports show that major social media platforms like Facebook and WhatsApp are looking to stay in compliance with the IT Rules, since failure to do so could cost them their immunity under the IT Act, 2000.

Content moderation by WhatsApp

The 3 million accounts were banned by WhatsApp through automated detection and in-app reporting by its users. This number, WhatsApp said, does not include user grievances received by email and post by WhatsApp’s Grievance Officer in India, Paresh B. Lal.

WhatsApp received 594 user grievances across the topics of account support, ban appeals, product support, safety, and others. Of the 594 grievances, 74 were actioned, which, according to the report, could mean banning an account or restoring a previously banned one.

The remaining 490 were not actioned due to one or more of the following reasons:

  • The user required assistance to access their account
  • The user required assistance to use one of WhatsApp’s features
  • The user wrote to provide feedback
  • The user requested restoration of a banned account and the request was denied
  • The reported account did not violate the laws of India or WhatsApp’s Terms of Service

Content moderation by Facebook and Instagram

Facebook and Instagram disclosed their user-grievance and content-actioning numbers together in a separate report. On almost every count, the two platforms detected and actioned more problematic content, and received more user grievances, than in their previous report.

Content actioned by Facebook

Here’s a breakdown of all problematic content Facebook took action against along with the percentage of such content its automated systems flagged:

Source: Facebook monthly compliance report

Content actioned by Instagram

Unlike Facebook, Instagram does not yet report a metric for spam content. Here’s a breakdown of all the other content it reviewed:

Source: Facebook monthly compliance report

User grievances received by Facebook

In all, Facebook said that it received 1,504 user grievances through the contact form on its website and its Grievance Officer in India, Spoorthi Priya. All of these were responded to; however, subsequent action for resolution was taken in 1,326 cases, which included providing users with tools to report content for specific violations, self-remediation flows through which they can download their data, avenues to address hacked accounts, and so on.

The remaining 178 grievances were subject to specialised review by Facebook, after which 44 were actioned. These actions include removing a post, covering it with a warning, or disabling an account.

Here’s a breakdown, by topic, of these grievances:

Source: Facebook monthly compliance report

User grievances received by Instagram

Instagram received 265 user grievances, a sharp rise from the 36 grievances it noted in its earlier report. Of these, 181 users were provided with tools for resolution, while the remaining 84 were subject to specialised review, after which 18 were actioned.

Source: Facebook monthly compliance report

What the IT Rules 2021 require

The IT rules require social media intermediaries to:

  • Proactively identify and take down content: This includes content moderation (through automated mechanisms) of posts that are defamatory, obscene, pornographic, paedophilic, invasive of privacy, insulting or harassing on the basis of gender, among other categories.
  • Publish periodic compliance reports: These reports should be published every month and have details of complaints received, action taken, and “other relevant information”.
  • Appoint key managerial roles: Significant social media intermediaries (with more than 50 lakh registered users) must appoint a chief compliance officer, nodal contact person, and resident grievance officer, all of whom must be Indian residents and employees of the platform.
  • Disable content within 36 hours of a government order: Intermediaries must take down content within 36 hours of receiving a lawful order. The Rules also require them to provide information for identity verification, or to assist any government agency with crime prevention and investigation, no later than 72 hours after receiving a lawful order, and to preserve records of disabled content for 180 days.


Written By

I cover health technology for MediaNama, among other things. Reach me at anushka@medianama.com


MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
