
Summary: All eight complaints made by Facebook Whistleblower Frances Haugen

Read the Facebook whistleblower’s redacted complaints to the US SEC that have been summarised for your benefit.


Recently, former Facebook employee turned whistleblower Frances Haugen filed eight complaints against the social media giant, based on thousands of internal Facebook documents that she secretly copied before leaving the company, CBS News reported.

Our anonymous client is disclosing original evidence showing that Facebook, Inc. (NASDAQ: FB) has, for years past and ongoing, violated U.S. securities laws by making material misrepresentations and omissions in statements to investors and prospective investors, including, inter alia, through filings with the SEC, testimony to Congress, online statements and media stories — Frances Haugen’s lawyers

During an interview with CBS News’ 60 Minutes, Haugen had said, “The things I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimise for its own interests, like making more money.”

MediaNama has reviewed the whistleblower’s redacted complaints to the US Securities and Exchange Commission (SEC) and has summarised all eight documents for your benefit.

#1 Facebook misled investors and public about equal enforcement

This particular disclosure alleged that Facebook, through its XCheck (pronounced “cross check”) program, ‘white-listed’ high-profile users and furthered harmful content, “causing significant and long-term risks to Facebook and its investors”. The complaint also said that these findings run contrary to sworn testimony given by Mark Zuckerberg to the US Congress. Facebook had earlier claimed that it only provides “additional review” for high-visibility users such as celebrities and high-paying advertisers.

Key takeaways

  • Cross Check entities were exempted from enforcement: Internal Facebook records show that in 2020, Cross Check entities were shielded from the majority of Integrity actions. “There are 109 daily XCheck exempted actions and 90 actions per day without XCheck protection,” the whistleblower said, citing an internal Facebook record. Another internal document said, “Over the years, many XChecked people and entities have been exempted from enforcement. That means for select members of our community, we are not enforcing our policies and standards. Unlike the rest of our community, these people can violate our standards without any consequences… Since we are currently reviewing less than 10 percent of XChecked content… without optimising list of people XChecked, it would require significant investment in staffing (>10x increase).”
  • Facebook identified its content reviewing efforts as a material issue: “We are making significant investments in… content review efforts to combat the misuse of our services… Any of the foregoing developments may negatively affect user trust and engagement, harm our reputations and brands…,” Facebook was quoted as saying in the complaint.

#2 Facebook does not have adequate resources to cull hate speech in Bengali, Hindi

The complaint cites internal company records showing that Facebook lacks adequate systems to moderate content in languages other than English.

This document has particular relevance to India, as one of the internal records blames Facebook users affiliated with the Rashtriya Swayamsevak Sangh (RSS) for fear-mongering and promoting anti-Muslim narratives in Hindi and Bengali. Facebook found that, due to the lack of Hindi and Bengali classifiers, much of this content was never flagged.

RSS [the ideological parent of India’s ruling party] Users, Groups, and Pages promote fear-mongering, anti-Muslim narratives targeted pro-Hindu populations with V&I [violence and incitement] intent. …There were a number of dehumanizing posts comparing Muslims to ‘pigs’ and ‘dogs’ and misinformation claiming the Quran calls for men to rape their female family members. Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned, and we have yet to put forth a nomination for designation of this group given political sensitivities. — Excerpt of Facebook internal document in SEC complaint
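
To make the classifier gap concrete, here is a minimal, purely illustrative sketch of how a moderation pipeline with per-language classifiers can silently skip content in unsupported languages. The helper names and the keyword-based “classifier” are hypothetical stand-ins, not Facebook’s actual systems:

```python
# Illustrative sketch only: a pipeline that can only review content in
# languages it has classifiers for. All names here are hypothetical.

HATE_SPEECH_CLASSIFIERS = {
    # Stand-in for a real machine-learned English model.
    "en": lambda text: "hate" in text.lower(),
    # No "hi" (Hindi) or "bn" (Bengali) entries: content in those
    # languages falls through unexamined, as the complaint describes.
}

def detect_language(text: str) -> str:
    # Hypothetical detector: treats Devanagari characters as Hindi.
    if any("\u0900" <= ch <= "\u097F" for ch in text):
        return "hi"
    return "en"

def review(text: str) -> str:
    classifier = HATE_SPEECH_CLASSIFIERS.get(detect_language(text))
    if classifier is None:
        return "not reviewed"  # the enforcement gap
    return "flagged" if classifier(text) else "cleared"
```

Under this sketch, an English post containing flagged terms is actioned, while an equivalent Hindi post is never even examined.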

Other key takeaways

  • Facebook frequently observed activities on its platforms by problematic actors: In an internal record, Facebook said that it frequently observed activities on its family of apps and services by ‘problematic actors’, including states, foreign actors, and actors with a record of criminal, violent or hateful behaviours, aimed at “promoting social violence, promoting hate…”
  • Facebook prioritises certain countries: For instance, in 2020 it categorised Brazil’s municipal elections and the US presidential elections as Tier 1. Tier 0 included Brazil, India, and the United States. “Only Tier 1 has selections for investment in proactive technical enforcement; EOC staffing, regular proactive manual fan-outs….” the complaint said.
  • Facebook’s record in Afghanistan, Israel, and Iraq: A document about Facebook’s lack of resources in Iraq revealed that Instagram did not detect hate speech in the Arabic language. In Afghanistan, the action rate for hate speech was 0.23 percent. Facebook had also questioned whether the platform was flagging posts of former Israeli Prime Minister Benjamin Netanyahu “similarly to how Twitter flagged Trump’s post”, as per the complaint.

#3 Facebook misled public about its promotion of human trafficking

This complaint by the whistleblower claimed that despite Facebook reassuring investors and the public that a lot of work has been done on human trafficking and related issues, Facebook’s internal investigation confirmed that the “platform enables all three stages of human exploitation lifecycle”. A review of the documents also showed that Facebook, despite identifying such instances of human trafficking, did not take any action against the posts.

Key takeaways

  • Facebook found that its platforms were enabling human exploitation: A Facebook document said, “Our investigative findings demonstrate that … our platforms enables all three stages of the human exploitation lifecycle (recruitment, facilitation, exploitation) via complex real-world networks… The traffickers, recruiters and facilitators from these ‘agencies’ used FB profiles, IG profiles, pages, Messenger and WhatsApp.”
  • Apple threatened to pull Facebook and IG from its App Store: For instance, after a BBC report from October 2019 showed that domestic workers were being illegally bought and sold on Instagram and other apps, “Apple threatened to pull Facebook and IG from Apple store. In an internal document, it was questioned whether the human trafficking issue that was reported by BBC was earlier known to Facebook. The response was yes,” the complaint said.

#4 Negative consequences of Facebook algorithms

Key takeaways 

  • Facebook’s algorithms play a key role in viral misinformation and negative posts: One of the documents said, “Our ranking systems have specific separate predictions for not just what you would engage with, but what we think you may pass along so that others may engage with. Unfortunately, research has shown how outrage and misinformation are more likely to be viral…” The complaint quoted another excerpt of the Facebook internal document which said, “The more negative comments a piece of content instigates, the higher likelihood for the link to get more traffic…might reach conclusion that darker and more divisive content is better for business.” (A simplified sketch of such engagement-weighted ranking follows this list.)
  • Facebook may well never be able to combat integrity problems on its platforms: In another excerpt, Facebook was quoted as saying, “The problem is we do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas… Hate speech is one of the big three community integrity problems on Facebook (along with nudity and pornography and graphic violence).”

    Since Facebook’s algorithm was observed to promote more negative and divisive content, Facebook research conducted in the European Union showed that political parties “were being forced” to skew negative in their communications on Facebook. “For example in Poland ‘one party’s social media management team estimate that they have shifted the proportion of their posts from 50/50 positive/negative to 80% negative and 20% positive, explicitly as a function of the change to the algorithm…”

  • Hybrid MSI model for India: After recognising that its algorithm-heavy strategy was hurting Android in India, Facebook adopted a hybrid meaningful social interaction (MSI) approach wherein emphasis on video was increased in the form of in-feed recommendations (IFR).
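
To illustrate the dynamic these documents describe, here is a minimal sketch of an engagement-weighted (MSI-style) ranking score. The weights, signal names, and predicted probabilities are invented for the example and are not Facebook’s actual values:

```python
# Illustrative sketch only: an engagement-weighted ranking score.
# Weighting comments and reshares far above likes boosts content that
# provokes reactions, whether positive or negative.

WEIGHTS = {"like": 1, "comment": 15, "reshare": 30}  # assumed weights

def rank_score(predicted: dict[str, float]) -> float:
    """Weighted sum of predicted engagement probabilities for a post."""
    return sum(WEIGHTS[signal] * p for signal, p in predicted.items())

calm_post    = {"like": 0.30, "comment": 0.02, "reshare": 0.01}
outrage_post = {"like": 0.10, "comment": 0.20, "reshare": 0.10}

print(rank_score(calm_post))     # 0.9
print(rank_score(outrage_post))  # 6.1, roughly 7x higher
```

Under these assumed weights, a post predicted to draw arguments and reshares outscores a better-liked but calmer post, which is the incentive the Polish party excerpt describes.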

#5 Facebook knew details about Instagram’s impact on teen health

This particular complaint alleges that Facebook made misrepresentations and omissions regarding the impact of its products such as Instagram and Facebook Blue on the mental and physical health of teenagers.

Key takeaways

  • 13.5% of teen girls on Instagram say the platform makes thoughts of suicide and self-injury worse
  • 17% of teen girls on Instagram say the platform makes eating issues worse
  • “We make body image issues worse for 1 in 3 teen girls”

#6 Facebook removed only 3-5% of identified hate speech

Despite Facebook’s reassurances to investors and the public that it proactively removes hate speech, the company’s internal records show that only a small single-digit percentage of such content is identified and removed, the complaint said.

Key takeaways

  • Facebook does not have a model to capture a majority of integrity harms: Facebook in an internal document said, “We do not… have a model that captures even a majority of integrity harms, particularly in sensitive areas… We only take action against approximately 2% of the hate speech on the platform.” Another excerpt of a Facebook internal document showed that the social media platform recognises its shortcomings when it comes to resources for dealing with actionable content: “Our current approach of grabbing a hundred thousand pieces of content, paying people to label them as Hate or Not Hate, training a classifier, and using it to automatically delete content at 95% precision is just never going to make a dent.” (A short worked example of this precision-versus-coverage gap follows this list.)
  • Facebook’s role in promoting hate on its platforms: Facebook, in an excerpt of an internal document, also took cognisance of the fact that it is responsible for the hate being spread on the platform: “Actively ranking content in news feed and promoting content on recommendations…makes us responsible for any harm caused by exposure to that content…we are responsible for harmful experiences on any surface where we actively present content.” The document also noted that accounts of 99% of multiple offenders for spreading hate and misinformation remain active, “and some of them have passed dozens of authenticity checks.”
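
A 95% precision figure only limits how often automated deletions are wrong; it says nothing about how much of the total hate speech is caught, which is what the 2-5% figures measure. A back-of-the-envelope illustration with invented volumes:

```python
# Illustrative arithmetic only; the volumes below are assumed, not from
# Facebook. Precision bounds wrongful deletions, while recall (coverage)
# determines whether enforcement "makes a dent".

total_hate_posts = 1_000_000  # hypothetical amount of hate speech
auto_deleted     = 40_000     # posts the classifier fires on
precision        = 0.95       # share of deletions that really are hate speech

true_positives = auto_deleted * precision   # 38,000 correct removals
recall = true_positives / total_hate_posts  # fraction of all hate speech caught

print(f"Recall: {recall:.1%}")  # Recall: 3.8%, in the 3-5% range cited above
```

A classifier can therefore be simultaneously very precise and almost irrelevant to the overall volume of hate speech, which is the point the internal document makes.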

#7 Misinformation related to 2020 election and January 6 Capitol Hill attack

The whistleblower complaint said that Facebook records confirm that Facebook knowingly chose to permit political misinformation and violent content/groups, and failed to adopt or continue measures to combat these issues, including those related to the 2020 US election and the January 6 insurrection. Excerpts from the documents said:

  • Zuckerberg rejected an intervention that would have reduced misinformation on the platform: An excerpt of a Facebook internal document in the SEC complaint said that CEO Mark Zuckerberg rejected an intervention, involving the removal of certain algorithms, that would “significantly decrease the amount of hateful, polarization or violence inciting content from News Feed.”

    Through most of 2020, we saw non-violating content promoting QAnon spreading through our platforms. Belief in the QAnon conspiracy took hold in multiple communities, and we saw multiple cases in which such belief motivated people to kill or conspire to kill perceived enemies… Policies don’t fully cover harms [] — Facebook internal document in SEC complaint

Other key takeaways

  • Facebook’s algorithms steer people towards misinformation: Facebook’s algorithms steer people interested in conservative topics towards radical or polarising ideas and groups/pages.

    Page and Group recommendation System Risks: After a small number of high quality/verified conservative interest follows (Fox News, Donald Trump, Melania Trump — all official pages), within just one day page recommendations had already devolved towards polarizing content. Although the account set out to follow conservative political news and humour content generally, and began by following verified/high quality conservative pages, Page recommendations began to include conspiracy recommendations after only 2 days — Excerpt of Facebook internal document in SEC complaint

  • Facebook did not take action against misinformation despite identifying trends: Facebook’s internal research highlighted in the complaint showed that even though the platform identified trends of misinformation surrounding the US presidential elections, there was no ‘treatment’ for a false item.
  • Facebook’s methods to combat misinformation are mired in conflicts of interest: Although Facebook identified ways to combat misinformation, internal documents show that decisions are based on inherent conflicts of interest.

    Facebook’s decision-making on content policy is routinely influenced by political considerations. Facebook currently has no firewall to insulate content-related decisions from external pressures. It could have one. — Facebook internal document

#8 Facebook’s misrepresentation of core metrics, content production

Facebook has allegedly misrepresented core metrics to investors and advertisers, including the amount of content produced on its platforms and growth in individual users. The whistleblower complaint also alleged that since Facebook did not properly account for Single Users With Multiple Accounts (SUMA), it misrepresented to advertisers the true number of individual users.
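
As a purely illustrative example of the SUMA problem, counting accounts rather than deduplicated people inflates the headline figure; the data below is invented:

```python
# Illustrative sketch only: invented data showing how counting accounts
# instead of people (SUMA) overstates the number of individual users.

accounts = [
    {"account_id": 1, "person": "A"},
    {"account_id": 2, "person": "A"},  # same person, second account
    {"account_id": 3, "person": "B"},
    {"account_id": 4, "person": "B"},
    {"account_id": 5, "person": "C"},
]

raw_count   = len(accounts)                         # 5 accounts
individuals = len({a["person"] for a in accounts})  # 3 people

print(raw_count, individuals)  # headline says 5, true reach is 3
```

The gap between the two counts is exactly what the complaint says advertisers were not told about.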

Key takeaways

  • Facebook did not provide details of reduction in user base: The whistleblower complaint said, “Facebook has failed to disclose internal data showing a contraction of the user base in important demographics, including American teenagers and young adults. The company has also hidden the extent to which content production per user has been in long-term decline. Internal records also show that there has been a decline in young adults’ engagement in key markets other than the US — Middle East, Africa, Asia Pacific, and so on. During Covid, every age cohort’s use of Facebook increased except for those 23 and under, which continued to decline, the Facebook internal records show. Facebook’s internal records also confirm that teens and young adults in developed economies are using the platform less.”
  • Facebook targets youth and teens to get more people on their platforms: Internal documents show that youth and teens were allegedly being deliberately targeted for Instagram in order to bring their family members onto Facebook’s platforms.

    Teens shape the household’s perception of Instagram; Insight #1: Teens were often the reason other household members joined Instagram… One member within a household has the power to influence acquisition and retention for others. Family-first acquisition strategies are proven effective (e.g. TikTok) and warrant exploration on Instagram — Excerpt of Facebook internal document in SEC complaint
