
Facebook answered some questions and didn’t answer many others about election integrity

We missed this earlier

In late July, several core members of the Facebook team came together to speak about the company's new and expanded election strategy. The agenda covered election strategy, political advertising, News Feed integrity, controlling the spread of fake news, and how content on the News Feed is rated and distributed.

Here are some highlights:

–Pages of political candidates now have an ‘Issues’ tab where users can learn more about the candidate and their views. The platform also provides for comparison of candidates, reminders to vote (cheeky), and so on.
–Facebook has partnered with 27 third-party fact-checking organisations in 17 countries to root out and/or limit the spread of false news.
–Facebook’s election integrity team now amounts to 15,000 people, and will be close to 20,000 by the end of this year (US mid-term elections are in November). The platform says the team reviews content in over 50 languages, and claims to stop over a million fake accounts per day at the ‘point of creation.’

What we didn’t know before

Facebook will reduce visibility of fake news instead of deleting it

Facebook will outright remove anything that violates its Community Standards. However, if a post does not violate the Community Standards but ‘does undermine the authenticity of our platform’, Facebook will reduce the distribution of such content. “For example, we show stories rated false by fact-checkers lower in News Feed so dramatically fewer people see them,” said Tessa Lyons, Product Manager for News Feed. Lyons acknowledged that not everyone agrees all the time on what the community standards should be. But if certain content is rated as having low credibility and comes close to false information, its distribution will be reduced.


If you are who you say you are and you’re not violating our Community Standards, we don’t believe we should stop you from posting on Facebook. This approach means that there will be information posted on Facebook that is false and that many people, myself included, find offensive. . . Just because something is allowed to be on Facebook doesn’t mean it should get distribution. . . We know people don’t want to see false information at the top of their News Feed and we believe we have a responsibility to prevent false information from getting broad distribution. This is why our efforts to fight disinformation are focused on reducing its spread.
–Tessa Lyons, Product Manager for News Feed
(emphasis added)

On how Facebook detects disinformation threats targeting fair elections

Facebook’s work in identifying and disrupting information operations and coordinated threats includes both manual and automated detection of such networks. The team’s manual investigations look for unique patterns and behaviours that are common to threat actors.

Manual investigations are the bread and butter of this team. These are like looking for a needle in a haystack. The challenge with manual investigations is that by themselves they don’t scale; they can’t be our only tool to tackle information operations. For each investigation, we identify particular behaviors that are common across threat actors. And then we work with our product and engineering colleagues as well as everyone else on this call to automate detection of these behaviors and even modify our products to make those behaviors much more difficult.
–Nathaniel Gleicher, Head of Cybersecurity Policy

Automated systems then learn the patterns and methods used to carry out a threat or attack, and narrow down those possibilities. “Our goal is to create this virtuous circle where we use manual investigations to disrupt sophisticated threats and continually improve our automation and products based on the insights from those investigations,” Gleicher said.

Facebook also disclosed a partnership with Digital Forensic Research Lab at the Atlantic Council which will offer “real-time insights and intelligence on emerging threats and disinformation campaigns.”

Political ads labelling and ads archive

–From May onward, all political and issue ads on Facebook have to be labelled, and the advertiser has to disclose who has paid for the ad.
–Users can view the active ads on a page, even if the ads aren’t targeted at them. Reporting an ad is also enabled.
–The ads archive has a separate section to distinguish between sponsored articles from news organizations and political ads.

We also introduced a searchable archive for political content that houses these ads for up to seven years, and a broad policy to determine which ads go in the archive, which ads require a label, and which require the person placing them to confirm their identity. Some of the recent top keyword searches we’ve seen in the archive have been California, Clinton, Elizabeth Warren, Florida, Kavanaugh, North Carolina, and Trump.
–Rob Leathern, Product Manager, Ads Team.

What Facebook didn’t answer (or just ignored)

–Will Facebook be transparent about its down-ranking system for posts it finds to be less authentic, and whose distribution it will reduce? Is it tracking the pages/persons who are being down-ranked? Will these users be informed when they are down-ranked? Will Facebook make this information public at a later stage?
–Will Facebook give up on political ad revenue (meaning will they stop all political ads) to ensure there is no interference, given that they anticipate interference?
–Incendiary ads cost less for advertisers to distribute than neutral/positive ads. Is Facebook comfortable with the economics of this? Is it doing anything to change the kind of ads that succeed in the Facebook ad auction system?
–Has there been any evidence or indication of disinformation campaigns leading up to the US mid-term elections scheduled for November?

Written By

I cover health, policy issues such as intermediary liability, data governance, internet shutdowns, and more. Hit me up for tips.


MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
