Facebook CEO Mark Zuckerberg wants regulation of Big Tech, he wrote in an op-ed on February 16, saying that he doesn’t want private companies alone to make decisions on fundamental democratic values. He said that companies like Facebook make trade-offs on “important social values”, such as between free expression and safety, privacy and law enforcement, and between creating open systems and locking down data. Zuckerberg has previously called for regulation that focuses on harmful content, election integrity, privacy and data portability. He also said that Facebook is looking at allowing an external audit of its content moderation systems.
Following Zuckerberg’s op-ed, Facebook released a White Paper on online content regulation. What’s significant is that both the op-ed and the White Paper suggest measures that Facebook has already taken, and the White Paper justifies steps the company has taken in the past, most notably its Oversight Board.
Five considerations for content regulation
The White Paper suggests that any regulation should consider the following aspects:
- Incentives for companies,
- Global nature of the internet and online communication,
- Freedom of expression,
- Constraints and affordances of the different technologies at play, and
- Proportionality and necessity.
Why rules for offline speech will not work for online speech
One interesting aspect of the paper is that it tries to distinguish online speech from offline speech, and argues that the rules (and hence laws) for offline speech will not work for online speech because:
- Legal environments and speech norms vary across countries, which makes country-specific policies difficult to implement and risks hampering cross-border communication.
- Norms vary according to the kind of content. What is acceptable in private messaging might not be acceptable as a public post and vice versa.
- Context matters: Enforcement will always be imperfect, given the scale of online speech and difficulties in discerning context (jest, satire, cultural context, etc.) of content.
- Companies are intermediaries, not speakers, of content, and therefore cannot approve every post before it is published. Requiring such approval would result in companies restricting the service to a limited set of users and erring on the side of censorship.
Nikhil adds: It’s important to note that in the Shreya Singhal judgment, the Supreme Court of India had also said that “there is an intelligible differentia between speech on the internet and other mediums of communication for which separate offences can certainly be created by legislation”. The SC had specified the difference between “the medium of print, broadcast and real live speech as opposed to speech on the internet”, saying that:
- “… the internet gives any individual a platform which requires very little or no payment through which to air his views”,
- “… something posted on a site or website travels like lightning and can reach millions of persons all over the world”,
- “… there can be creation of offences which are applied to free speech over the internet alone as opposed to other mediums of communication.”
How Facebook thinks harmful speech can be reduced while still preserving free expression
Facebook believes, as per the White Paper, that platforms could be:
- Held accountable for having certain systems and procedures in place, such as user-friendly channels for reporting content, external oversight of policies and enforcement decisions, and periodic public reporting of enforcement data. It cites two examples of such norms already in place: the Global Network Initiative Principles and the European Union Code of Conduct on Countering Illegal Hate Speech Online. Facebook prefers this approach and does not highlight any concerns with it in the White Paper.
- Required to meet specific performance targets for violative content, such as decreasing the prevalence of content that violates a site’s hate speech policies or responding to user/government reports of policy violations within a defined time frame. Missing these targets could lead to greater oversight or fines. The White Paper highlights three challenges to this approach: first, companies could define harmful speech more narrowly to reduce compliance burdens, making it harder for people to report policy violations; second, they could stop proactively identifying unreported violating content; and third, a focus on response times could mean that platforms address reports in a first-in, first-out manner, instead of taking the severity of the content into account or proactively searching for harmful content. Note that India is looking to impose demands for proactive monitoring and takedown of content within 24 hours, though the rules haven’t been finalised yet.
- Required to restrict hateful speech beyond what is illegal: Certain kinds of hateful speech require prompt action from platforms. Facebook therefore wants regulators to define hateful speech that might not be illegal, so that companies can act quickly on it without the user or the company having to go through a long legal process to establish its (il)legality. Since harmful content varies across regions, the White Paper suggests that governments:
- Create a practically enforceable standard that can work at scale even with limited context of speech
- Adopt a segmented approach to different kinds of platforms depending on the nature of the service (search engine versus social media) and nature of the audience (private chat versus public post)
- Be flexible about regulation so that platforms can evolve policies to keep pace with changes in language trends (such as Voldemorting) and efforts to avoid enforcement.
Facebook looks at external audit of its content moderation systems, hails its Oversight Board
Zuckerberg’s revelation that Facebook is considering an external audit of its content moderation systems is significant because recent reports by The Verge (here and here) have shown that content moderation at Facebook, most of which is outsourced to Cognizant, Accenture and Genpact, is a mentally and emotionally taxing task that has even resulted in a death. Content moderators have filed a lawsuit against the company, alleging “psychological trauma and symptoms of post-traumatic stress disorder”. Facebook isn’t alone in failing to protect its content moderators from the psychological harms of their job; YouTube and Twitter don’t fare well either.
Lauding its own Oversight Board, the independent body that Facebook had formed to review its content decisions, Zuckerberg said that such oversight is necessary. The White Paper wants regulation to ensure that internet content moderation systems are consultative, transparent and subject to meaningful independent oversight. Regulation could require companies to:
- Publish their content standards
- Allow users to report violating content and respond to such reports with a decision
- Notify users about content removal
- Let users appeal (non-)removal of content
- Give insight into the development of their content standards: It is not specified what this “insight” would look like and to whom.
- Consult with stakeholders when making significant changes to standards: The Paper does not specify who these stakeholders are or whether they include (all) users.
- Get user inputs on content standards: The Paper does not specify how such inputs would be processed, especially given the scale at which Facebook operates.
- Publish transparency reports on policy enforcement: All major social media companies already publish them. The Paper warns that such reports could lead to companies cutting corners in unmeasured areas to boost performance in measured areas.
Protecting competition: The White Paper warns against regulation becoming a barrier to entry for new competitors because of compliance burdens; regulation should instead adopt a graded approach depending on the size and capacity of the company.
“Regulation can have unintended consequences, especially for small businesses that can’t do sophisticated data analysis and marketing on their own. Millions of small businesses rely on companies like ours to do this for them.” — Mark Zuckerberg
Other issues Zuckerberg raised
- Political ads: Zuckerberg claimed that advertising on Facebook is more transparent than on other media because of Facebook’s Ads Library and Transparency Reports, but he left the question of regulating political ads up in the air. This comes a few months after the company doubled down on its position of not moderating any political content, both ads and content from political candidates, even if it violates the site’s hate speech rules or other policies, on the grounds that such content is “newsworthy”. However, in the op-ed, Zuckerberg supported the Honest Ads Act and the Deter Act, which seek to “prevent election interference”. The White Paper does not even mention the conundrum that political ads pose.
- Rules for data portability are a must: Zuckerberg called data portability a necessary good, but left the question of data ownership open-ended. Facebook’s earlier White Paper on data portability had raised similar questions about ownership of co-created data, but did not offer any solutions. This time, Zuckerberg categorically called for clear rules on portability, warning that otherwise “strong privacy laws” would encourage companies to lock down data to minimise regulatory risks. The content regulation White Paper offers no insights on data portability and data sharing.
- Facebook supports OECD’s global tax rules: While talking about content regulation, Zuckerberg also said that Facebook supports OECD’s efforts to create fair global tax rules for the internet.
Send me tips at aditi@medianama.com. Email for Signal/WhatsApp.
