
Social media platforms cannot act as government’s proxy for censorship: UN Special Rapporteur David Kaye

Governments and nation-states should not use internet companies or social media platforms as proxies for restricting free speech, the United Nations said in its report on the protection of freedom of expression and the regulation of online hate speech. It made several recommendations to governments and companies, including that companies should follow international human rights standards when governing hate speech on their platforms. It also noted that automated filters do not understand the context of speech and may curb legitimate speech, and recommended against the use of upload filters. The report also called India’s internet shutdowns, especially in Kashmir, disproportionate and problematic.

The report, ‘Promotion and Protection of the Right to Freedom of Opinion and Expression’, was submitted on October 9 by David Kaye, the UN Special Rapporteur on freedom of opinion and expression. It highlighted that international law gives equal protection to online and offline speech, and that states should not compel intermediaries, directly or indirectly, to do what the states themselves cannot do under international law.

“Hate speech” is a vaguely defined term: The report said that hate speech has acquired a double-edged ambiguity, which allows governments to misuse the label, much like “fake news”, to attack political enemies, non-believers, dissenters and critics. The same ambiguity also stops countries and companies from addressing genuine hate speech that incites violence or discrimination against vulnerable and marginalised communities, according to the report.

The report also pointed out that no speech is hate speech unless there is “advocacy of incitement”. For instance, a person advocating a minority or even offensive interpretation of a religious belief or historical event, or a person sharing examples of hatred and incitement to report on them or raise awareness, cannot be silenced under international law, as there is no advocacy of hatred in either case.

Internet shutdowns in Kashmir ‘disproportionate’

While the report criticised India for “disproportionate” internet shutdowns in Kashmir, it appreciated certain Indian states for creating hotlines to report WhatsApp content to law enforcement agencies and for establishing “social media labs” to monitor online hate speech.


The report called out Myanmar for failing to protect people against incitement to genocide against Rohingya Muslims through Facebook posts. It also called out Russia and China for the fierce online censorship in those countries.

Define hate speech clearly, establish clear laws: recommendations to govts

Establish clear laws around hate speech: Governments should clearly define what constitutes hate speech in their legal systems and should refrain from criminalising such speech except in the gravest situations. Such laws should be formulated with public participation. Further, the report said that any law around hate speech should meet the requirements of legality, legitimacy, necessity and proportionality.

  • The law should also provide effective remedies, such as compensation, to persons affected by hate speech.

Judicial oversight: Private companies are not in a position to assess threats to national security and public order, and hate speech restrictions on those grounds should only be based on legal orders from the state and not left to assessment by companies.

Safe Harbor: States’ intermediary liability rules should adhere strictly to human rights standards and should not demand that companies restrict expression that the states themselves would be unable to restrict directly through legislation.

Follow the Rabat Plan of Action: States should adopt the interpretations of hate speech contained in the Rabat Plan of Action, which was a result of the UN’s efforts to clarify the scope of hate speech in international law. Singling out Germany’s intermediary liability law (NetzDG), the report called it vague, as it does not define key terms like “incite” and “hatred”.

Evaluate speech in context, avoid upload filters among recommendations for companies

Avoid automated filters for content moderation: Automated filters often curb free speech even before it occurs on a platform, the report said. The use of filters is disproportionate: because filters key on “words” and often fail to understand the context of speech, they can end up curbing legal speech. Moreover, filters give the user or content creator little opportunity to challenge the wrongful removal of content.

  • The report recommended that any use of automation or Artificial Intelligence tools to detect or remove hate speech should involve a human in the loop.

Develop less restrictive alternatives to blocking hate speech: Companies can develop tools that promote freedom of expression, with mechanisms for de-amplification, de-monetisation, education, counter-speech, reporting, and training, as alternatives to outright bans and content removals.

Define hate speech ‘lucidly’: The report recommended that companies “lucidly” define what they consider to be hate speech, with reasoned explanations and approaches that are consistent across jurisdictions. It criticised China’s WeChat and Russia’s VK for their vague and broad definitions of hate speech.


Assess impact on human rights: Companies in the ICT sector should periodically evaluate the impact of their products and services on the human rights of their users and the public, and make these human rights impact assessments publicly available.

Industry collaboration: The report said that big companies should bear the burden of developing less restrictive resources, and should share their knowledge and tools widely, as open source, to ensure that smaller companies and smaller markets have access to such technology.

MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
