
Facebook and Instagram release partial compliance report as per new IT rules


Both social media platforms use machine-learning technologies to proactively detect content that violates their community guidelines. 

Facebook and Instagram released their interim compliance report on Friday in partial adherence with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Covering the period from May 15 to June 15, the report provides information on actions taken by Facebook and Instagram against violating content, and the percentage of violating content the platforms detected proactively.

According to the new IT rules, which came into effect on May 26, Significant Social Media Intermediaries (SSMIs) have to release monthly compliance reports detailing the complaints received, the action taken on them, and the number of links or pieces of information removed. SSMIs are defined as social media intermediaries with more than 50 lakh (5 million) registered users in India, such as Facebook, Google, Twitter, and Koo.

What does the report say?

According to the report, Facebook proactively detected 99.9 percent of the 25 million pieces of ‘Spam’ content and 2.5 million pieces of ‘Violent and Graphic’ content that it took action against. Of the 1.8 million pieces of content actioned under ‘Adult Nudity and Sexual Activity’ and 589,000 under ‘Suicide and Self-Injury’, 99.6 percent and 99.7 percent, respectively, were detected proactively.

The lowest proactive detection rates were for ‘Firearms’ (2,000 pieces) and ‘Bullying and Harassment’ (118,000 pieces), of which 89.4 percent and 36.7 percent, respectively, were detected proactively.



On Instagram, actioned content involving suicide and self-injury was higher than on Facebook, at 699,000 pieces, 99.8 percent of which were proactively detected by the photo-sharing platform. Under Bullying and Harassment, Facebook took action against 118,000 pieces of content while Instagram took action against 108,000 pieces, of which it had proactively detected 43.1 percent. Instagram proactively detected 99.7 percent of 490,000 violent and graphic content pieces, and 99.6 percent of posts related to adult nudity and sexual activity. The 200 actioned posts on firearms, 1,100 posts on drugs, and 6,200 posts on organised hate saw detection rates between 87 and 88 percent.

While Facebook shared metrics on spam content, Instagram said that the metric was not yet available to it and that it was working on providing it.

How the two metrics are measured

Content, which includes comments, posts, photos, and videos, is removed when it does not follow Facebook’s community guidelines. According to Facebook’s policy page on the content-actioned metric, when a post contains multiple photos or videos, each photo or video is counted as one piece of content. This differs from Instagram, where the whole post is counted as one piece of content if any part of it is found to be violating.
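The counting convention described above can be illustrated with a short sketch (a hypothetical helper for clarity, not Facebook’s actual code):

```python
def count_content_pieces(platform: str, media_in_post: int) -> int:
    """Return how many 'pieces of content' one violating post
    contributes to the content-actioned metric (illustrative only)."""
    if platform == "facebook":
        # Facebook counts each photo/video in the post separately.
        return max(media_in_post, 1)
    # Instagram counts the whole post as a single piece of content.
    return 1

# A post with four photos counts as 4 pieces on Facebook, 1 on Instagram.
print(count_content_pieces("facebook", 4))   # → 4
print(count_content_pieces("instagram", 4))  # → 1
```

This difference means the two platforms’ actioned-content figures are not directly comparable post-for-post.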


For both platforms, actions taken include removing the problematic content or issuing a content warning over it, and proactive detection is a result of machine learning technologies flagging content. This content is later looked at by trained human reviewers.

On its website, Facebook says that the proactive detection percentage is calculated as ‘the number of pieces of content acted on that we found and flagged before people using Facebook or Instagram reported them, divided by the total number of pieces of content we took action on’.
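As a minimal sketch of that calculation (illustrative figures; not Facebook’s implementation), the percentage works out as:

```python
def proactive_rate(proactively_flagged: int, total_actioned: int) -> float:
    """Percentage of actioned content flagged by automated systems
    before any user report (illustrative sketch, not Facebook's code)."""
    if total_actioned == 0:
        return 0.0
    return round(100 * proactively_flagged / total_actioned, 1)

# Using the spam figures from the report: ~25 million items actioned,
# of which roughly 24,975,000 were flagged before any user report.
print(proactive_rate(24_975_000, 25_000_000))  # → 99.9
```

A low rate, as with Bullying and Harassment, therefore means most actioned content was first surfaced by user reports rather than by the platform’s own systems.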

However, the measure on actioned content does not include any accounts, pages, or groups that were disabled or fake accounts that were prevented from being created. The report adds that the metrics also don’t take into account violating content that may have been posted by users masking the country that they are posting from (for example, through VPNs).

Full report yet to come

Facebook is expected to release its full compliance report on the number of user complaints received and action taken on July 15, which will include data related to the instant messaging app WhatsApp.

Notably, in its report, Facebook mentioned that it expects to publish subsequent editions with a 3-45 day lag over the reporting period. Google, in its report released on June 30, had also said that it will have a two-month lag in reporting.

Yesterday, Union Information Technology and Law Minister Ravi Shankar Prasad tweeted in appreciation of compliance reports released by Google and Facebook.


Meanwhile, Twitter has not yet indicated when it would release its compliance report under the IT rules.

Written By

I cover health technology for MediaNama, among other things. Reach me at anushka@medianama.com



© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
