YouTube removes more than 800k videos and 78k channels that violated its child sexual abuse imagery guidelines

Young children looking at a computer screen

In the first quarter of 2019, YouTube removed over 78,000 channels and 800,000 videos that compromised child safety.

The company, in a blog post, highlighted policies it has enforced in the recent past to protect children from appearing in, or being exposed to, inappropriate content on YouTube:

  • Video removal and channel termination: YouTube enforces a three-strike system for content that violates its Community Guidelines: a strike results in a temporary suspension of the channel, and three strikes within a 90-day period result in permanent termination. Videos that compromise child safety are subject to this system. In the first quarter of 2019, 78,507 channels (2.8% of all channels removed) and 807,676 videos (9.7% of all videos removed) were taken down for compromising child safety.
  • Restrict live features: Younger minors will be barred from live streaming unless they are clearly accompanied by an adult. It’s not clear how YouTube intends to monitor live streams to determine whether a child is accompanied by an adult.
  • Disable comments: Comments are disabled on videos featuring minors.
  • Reduce recommendations: Videos that feature minors in risky situations, and are borderline in terms of guideline violations, will be recommended only in a limited manner.
  • CSAI Match technology: YouTube’s proprietary technology that identifies known Child Sexual Abuse Imagery (CSAI) content online. Once CSAI content is matched, it is flagged to YouTube’s partners so that it can be reported in accordance with local laws and regulations. CSAI Match is designed for video, while Google’s Content Safety API offers machine-learning-based classification for images.
  • YouTube Kids: An app with content only for children under 13, with greater controls for parents. Since children under 13 are not allowed on the main YouTube platform, such accounts, when discovered, are terminated.
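The three-strike enforcement described above — a strike triggers a temporary suspension, and three strikes within a rolling 90-day window trigger permanent termination — can be sketched as a small state machine. This is an illustrative sketch based only on the description above, not YouTube’s actual implementation; all names are hypothetical.

```python
from datetime import datetime, timedelta

STRIKE_WINDOW = timedelta(days=90)   # strikes count within a rolling 90-day window
STRIKE_LIMIT = 3                     # three active strikes -> permanent termination

class Channel:
    def __init__(self, name):
        self.name = name
        self.strikes = []            # timestamps of guideline strikes
        self.terminated = False

    def add_strike(self, when):
        """Record a strike and return the resulting enforcement action."""
        if self.terminated:
            return "already terminated"
        # Drop strikes that have aged out of the 90-day window.
        self.strikes = [t for t in self.strikes if when - t < STRIKE_WINDOW]
        self.strikes.append(when)
        if len(self.strikes) >= STRIKE_LIMIT:
            self.terminated = True
            return "permanent termination"
        return "temporary suspension"

now = datetime(2019, 1, 1)
ch = Channel("example")
print(ch.add_strike(now))                        # temporary suspension
print(ch.add_strike(now + timedelta(days=30)))   # temporary suspension
print(ch.add_strike(now + timedelta(days=60)))   # permanent termination
```

Note that because the window is rolling, two old strikes that fall outside 90 days no longer count toward termination.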

The regulation of content for children in India

The Indian government and judiciary have also been fighting the uphill battle of protecting children from ‘harmful’ content online. In the last couple of years, they have taken stringent action on issues such as teenage suicide allegedly linked to social media, pornography, and cyber-bullying.

On the alleged Blue Whale Challenge and teenage suicide

In November 2017, the Supreme Court of India disposed of a plea seeking a ban on the Blue Whale game, which allegedly gave children and teenagers a series of increasingly dangerous tasks, culminating in an order for players to kill themselves. Instead, the Supreme Court directed all states and union territories to spread awareness in schools about the dangers of such games.

During a previous hearing, the court had also directed Doordarshan and private TV channels to air a prime-time programme warning people about the grisly game.

Here are the steps that the Union government took:

  • A high-level committee was constituted to investigate suicides allegedly committed as a result of the Blue Whale challenge; after technical analysis of internet activity, device activity, social media activity, and call records, the committee found no relation between the suicides and the challenge
  • The HRD ministry issued circulars to schools under its purview
  • MeitY issued directives to Google India, Microsoft India, Facebook India and Yahoo India, directing them to remove links to the Blue Whale Challenge and similar games from their platforms

On TikTok and allegations of pornography

On April 3, the Madras High Court temporarily banned TikTok after it found that the platform hosted pornographic content. Since the Chinese video-sharing app is popular amongst teenagers, the court said that it was worried about the content affecting children and making them vulnerable to sexual predators. As a result, MeitY directed Google and Apple to remove the app from their app stores. However, users who had downloaded the app prior to the ban and removal from app stores were able to continue using the app.

However, on April 22, the Supreme Court directed the Madras High Court to take a decision on the interim ban by April 24, failing which, the ban would be lifted. The Madras HC lifted the interim ban on April 24.

Since the ban, TikTok has taken several self-regulatory steps:

  • Age gate: After the interim ban, TikTok announced an age gate feature that prevents new users under 13 from creating an account.
  • Content moderation: TikTok’s content moderation strategy combines technology with a human moderation team, which together monitor and act upon offending content. It covers major Indian languages, including Hindi, Tamil, Telugu, and Gujarati.
  • Filter Comments: TikTok’s comment control feature allows users to better manage comments, by filtering those containing words they deem undesirable. Users can choose up to 30 such words.
  • Anti-Bullying Guidelines: Part of TikTok’s general user guidelines, which now offer an easier way for users to report content.
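The comment-filter feature described above — a user chooses up to 30 words, and comments containing any of them are filtered — can be sketched as follows. This is an illustrative sketch of the behaviour the article describes, not TikTok’s actual code; all names are hypothetical.

```python
MAX_FILTER_WORDS = 30  # users can choose up to 30 filtered words

def set_filter_words(words):
    """Normalise a user's filter list and cap it at 30 words."""
    cleaned = [w.strip().lower() for w in words if w.strip()]
    return cleaned[:MAX_FILTER_WORDS]

def is_hidden(comment, filter_words):
    """Hide a comment if it contains any filtered word (case-insensitive)."""
    tokens = comment.lower().split()
    return any(w in tokens for w in filter_words)

words = set_filter_words(["Spam", "scam"])
print(is_hidden("This looks like a SCAM to me", words))  # True
print(is_hidden("Nice video!", words))                   # False
```

A real implementation would also need to handle punctuation, transliteration, and the multiple Indian languages the moderation programme covers; the whole-word, case-insensitive match here is the simplest possible variant.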

On PUBG and violence, aggression, and cyber-bullying

In response to an 11-year-old’s PIL seeking a ban on PUBG for promoting violence, aggression and cyber-bullying, the Bombay HC directed the Union government to look into the app and ban it, if necessary.

In some cities, including Rajkot, the game had been temporarily banned. To combat addiction amongst Indian players, PUBG Mobile implemented a six-hour playing limit on Indian players.

Update: This story has been rewritten following editorial direction.

Written By

Send me tips at aditi@medianama.com. Email for Signal/WhatsApp.


MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
