The amount of sexually exploitative content related to children that was actioned by Instagram shot up in November, according to Meta’s latest compliance report released on December 30, 2021. The increase was nearly seven-fold, from 1,75,800 pieces of such content actioned in October to 1.2 million in the subsequent month.
Instagram received 424 user grievances while Facebook saw 519 through the IT Rules-mandated redressal mechanisms, the report added.
Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021, significant social media intermediaries (like Meta’s platforms) have to publish periodic reports on the number of user grievances they received, the action taken, and the amount of content proactively removed by the companies.
In September, The Wall Street Journal first reported that Facebook knew details about how using Instagram adversely impacts the mental health of its teenage users. Soon after, Facebook paused its plans to roll out an Instagram version exclusively for kids.
The amount of content actioned by Facebook dips in comparison
Across most content categories, the amount of content actioned by Facebook declined, with slight changes in the proactive rates as well. An exception to the trend was content related to firearms, where the proactive rate dropped from 88.3% in October to 60% in November. The number of firearms-related links on Facebook fell from 6,800 to 2,900.
According to Meta, ‘content actioned’ means that a photo, video, or text post was removed, covered with a warning label, or otherwise acted upon. The proactive rate, meanwhile, measures the share of actioned content that the platform detected before users reported it.
On content categorised as organised hate, Facebook actioned 12,400 pieces of content, 99.4% of which were detected proactively. Over 1,00,100 pieces of content related to hate speech were removed, of which 91.1% were removed proactively.
519 user grievances received by Facebook
All in all, Facebook said that it received 519 user grievances through the contact form on its website and its Grievance Officer in India, Spoorthi Priya. Of these, Facebook provided tools for 461 user grievances. The tools include mechanisms to report content for specific violations, self-remediation flows for users to download their data, avenues to address hacked-account issues, and so on. Of the remaining 58 user grievances, Facebook reviewed and took action on 13, while 45 were not acted upon for reasons such as insufficient information or the content not violating its policies.
Here’s a breakdown of user grievances by category:
Instagram’s proactive removal rate for hate speech lower than before
Instagram’s proactive rate for content related to organised hate was 87.3%, down about 7 percentage points from October (94.1%), while for hate speech the rate was 74%, down 12 percentage points from October (86%). Under organised hate and hate speech, Instagram actioned 1,400 and 24,900 pieces of content, respectively.
The figures for content classified under other categories are shown here:
424 user grievances received by Instagram
Instagram responded to all 424 user grievances for the month; in October, it had received 652 grievances. Of the 424, it provided users with tools for resolution in 332 cases, while the remaining 92 were reviewed by the platform itself, which acted on 42 of them.
Here’s a breakdown of user grievances by category:
What the IT Rules 2021 require
The IT Rules require social media intermediaries to:
- Proactively identify and take down content: This includes content moderation (through automated mechanisms) of posts that are defamatory, obscene, pornographic, paedophilic, invasive of privacy, or insulting or harassing on the basis of gender, among other types.
- Publish periodic compliance reports: These reports should be published every month and have details of complaints received, action taken, and “other relevant information”.
- Appoint key managerial roles: Significant social media intermediaries (those with more than 50 lakh registered users) must appoint a chief compliance officer, a nodal contact person, and a resident grievance officer, all of whom must be Indian residents and employees of the platform.
- Disable content within 36 hours of a government order: The Rules also require intermediaries to provide information for identity verification, or to assist any government agency with crime prevention and investigation, within 72 hours of receiving a lawful order. They must also preserve records of disabled content for 180 days.
Also read:
- WhatsApp, Facebook, and Instagram publish compliance reports mandated by IT Rules
- Facebook compliance reports show WhatsApp banned 2mn Indian users
- Twitter’s compliance report reveals 94 grievances from May 26 to June 26
- A review of OTT streaming services compliance with IT Rules 2021 in August
Have something to add? Subscribe to MediaNama here and post your comment.
I cover health technology for MediaNama, among other things. Reach me at anushka@medianama.com
