We missed this earlier
In late July, several core members of the Facebook team came together to speak about Facebook’s new and expanded election strategy. The agenda covered election strategy, political advertising, News Feed integrity, controlling the spread of fake news, and how content on the News Feed is rated and distributed.
Here are some highlights:
–Pages of political candidates now have an ‘Issues’ tab where users can learn more about the candidate and their views. The platform also lets users compare candidates, get reminders to vote (cheeky), and so on.
–Facebook has partnered with 27 third-party fact-checking organisations in 17 countries to root out and/or limit the spread of false news.
–Facebook’s election integrity team now numbers 15,000 people, and will be close to 20,000 by the end of this year (US mid-term elections are in November). The platform says the team reviews content in over 50 languages, and claims to block over a million fake accounts per day at the ‘point of creation.’
What we didn’t know before
Facebook will reduce visibility of fake news instead of deleting it
Facebook will outright remove anything that violates its Community Standards. However, if a post does not violate the Community Standards but ‘does undermine the authenticity of our platform’, Facebook will reduce the distribution of that content. “For example, we show stories rated false by fact-checkers lower in News Feed so dramatically fewer people see them,” said Tessa Lyons, Product Manager for News Feed. Lyons clarified that not everyone will always agree on where the Community Standards draw the line. But if content is rated as having low credibility and comes close to false information, its distribution will be reduced (a rough sketch of what such down-ranking could look like follows the quote below).
If you are who you say you are and you’re not violating our Community Standards, we don’t believe we should stop you from posting on Facebook. This approach means that there will be information posted on Facebook that is false and that many people, myself included, find offensive. . . Just because something is allowed to be on Facebook doesn’t mean it should get distribution. . . We know people don’t want to see false information at the top of their News Feed and we believe we have a responsibility to prevent false information from getting broad distribution. This is why our efforts to fight disinformation are focused on reducing its spread.
–Tessa Lyons, Product Manager for News Feed
(emphasis added)
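Facebook hasn’t published the mechanics of this down-ranking. As a rough, hypothetical sketch, the effect Lyons describes amounts to a score penalty applied at ranking time; the Story fields, rank_score function, and penalty values below are entirely our own illustration, not Facebook’s actual system:

```python
from dataclasses import dataclass

# Hypothetical penalty multipliers; Facebook has not disclosed actual values.
FALSE_RATING_PENALTY = 0.2       # "dramatically fewer people see them"
LOW_CREDIBILITY_PENALTY = 0.5    # milder penalty for borderline content

@dataclass
class Story:
    story_id: str
    base_relevance: float          # the ranker's usual relevance score
    rated_false: bool = False      # rated false by a third-party fact-checker
    low_credibility: bool = False  # comes close to false information

def rank_score(story: Story) -> float:
    """Down-rank rather than delete: the story stays on the platform,
    but its News Feed distribution is reduced."""
    if story.rated_false:
        return story.base_relevance * FALSE_RATING_PENALTY
    if story.low_credibility:
        return story.base_relevance * LOW_CREDIBILITY_PENALTY
    return story.base_relevance

stories = [
    Story("debunked", base_relevance=0.9, rated_false=True),
    Story("ordinary", base_relevance=0.7),
    Story("borderline", base_relevance=0.6, low_credibility=True),
]
feed = sorted(stories, key=rank_score, reverse=True)
print([s.story_id for s in feed])  # ['ordinary', 'borderline', 'debunked']
```

The key design point is that nothing is deleted: the debunked story keeps its place on the platform but sinks in the sorted feed.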
On how Facebook detects disinformation threats targeting fair elections
Facebook’s work in identifying and disrupting information operations and coordinated threats combines manual and automated investigation to detect such networks. The team’s manual investigations look for unique patterns and behaviours that are common to threat actors.
Manual investigations are the bread and butter of this team. These are like looking for a needle in a haystack. The challenge with manual investigations is that by themselves they don’t scale; they can’t be our only tool to tackle information operations. For each investigation, we identify particular behaviors that are common across threat actors. And then we work with our product and engineering colleagues as well as everyone else on this call to automate detection of these behaviors and even modify our products to make those behaviors much more difficult.
–Nathaniel Gleicher, Head of Cybersecurity Policy
Automated systems then learn the patterns and methods used to carry out a threat or attack, and detection of those behaviours is built into the product to make them harder to repeat. “Our goal is to create this virtuous circle where we use manual investigations to disrupt sophisticated threats and continually improve our automation and products based on the insights from those investigations.”
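The “virtuous circle” Gleicher describes is essentially a feedback loop: investigators codify the behaviours they find, and automation then scans for them at scale. A minimal sketch, assuming a simple rule registry (the signal names and thresholds below are invented; Facebook’s real detection features are not public):

```python
from typing import Callable, Dict, List, Tuple

# Registry of behavioural signatures distilled from manual investigations.
# Signal names are illustrative only; Facebook's actual signals are undisclosed.
SIGNATURES: Dict[str, Callable[[dict], bool]] = {}

def signature(name: str):
    """Investigators codify a new pattern once a manual case uncovers it."""
    def wrap(fn):
        SIGNATURES[name] = fn
        return fn
    return wrap

@signature("burst_account_creation")
def burst_creation(account: dict) -> bool:
    return account.get("signups_from_same_ip_last_hour", 0) > 50

@signature("coordinated_posting")
def coordinated_posting(account: dict) -> bool:
    return account.get("identical_posts_in_cluster", 0) > 20

def scan(accounts: List[dict]) -> List[Tuple[str, List[str]]]:
    """Automated pass: flag any account matching a known signature."""
    flagged = []
    for acct in accounts:
        hits = [name for name, rule in SIGNATURES.items() if rule(acct)]
        if hits:
            flagged.append((acct["id"], hits))
    return flagged

# Each new manual investigation grows SIGNATURES, so the automated scan
# covers more behaviours over time -- the "virtuous circle".
print(scan([{"id": "acct-1", "signups_from_same_ip_last_hour": 120}]))
```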
Facebook also disclosed a partnership with the Digital Forensic Research Lab at the Atlantic Council, which will offer “real-time insights and intelligence on emerging threats and disinformation campaigns.”
Political ads labelling and ads archive
–From May onward, all political and issue ads on Facebook have to be labelled, and the advertiser has to disclose who has paid for the ad.
–Users can view all the active ads on a Page, even if the ads aren’t targeted at them. Reporting an ad is also enabled.
–The ads archive has a separate section to distinguish between sponsored articles from news organizations and political ads.
We also introduced a searchable archive for political content that houses these ads for up to seven years, and a broad policy to determine which ads go in the archive, which ads require a label, and the person placing them to confirm their identity. Some of the recent top keyword searches we’ve seen in the archive have been California, Clinton, Elizabeth Warren, Florida, Kavanaugh, North Carolina, and Trump.
–Rob Leathern, Product Manager, Ads Team.
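Facebook did not detail on this call how the archive can be queried programmatically. For illustration, here is a hedged sketch of a keyword search against such an archive, modelled on the Graph API ‘ads_archive’ endpoint Facebook exposed around this time; the API version, field names, and token below are assumptions:

```python
import requests

# Hedged sketch: modelled on the Graph API "ads_archive" endpoint.
# Parameter and field names may differ from what Facebook actually shipped.
resp = requests.get(
    "https://graph.facebook.com/v3.1/ads_archive",
    params={
        "search_terms": "Kavanaugh",              # a top search cited above
        "ad_reached_countries": "['US']",
        "fields": "page_name,funding_entity,ad_creation_time",
        "access_token": "YOUR_ACCESS_TOKEN",      # placeholder
    },
    timeout=10,
)
for ad in resp.json().get("data", []):
    # funding_entity corresponds to the "paid for by" disclosure on the label
    print(ad.get("page_name"), "--", ad.get("funding_entity"))
```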
What Facebook didn’t answer (or just ignored)
–Will Facebook be transparent about its system for down-ranking posts that it finds less authentic and whose distribution it reduces? Is it tracking the pages/persons being down-ranked? Will those users be informed when they are down-ranked? Will Facebook make this information public at a later stage?
–Will Facebook give up political ad revenue (that is, stop running political ads altogether) to ensure there is no interference, given that it anticipates interference?
–Incendiary ads cost advertisers less to distribute than neutral/positive ads. Is Facebook comfortable with the economics of this? Is it doing anything to change the kind of ads that succeed in its ad auction system? (A rough sketch of the auction mechanics follows this list.)
–Has there been any evidence or indication of disinformation campaigns leading up to the US mid-term elections scheduled for November?
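On the economics question above: ad auctions typically rank a bid weighted by predicted engagement, which is why provocative creative can win impressions more cheaply. A back-of-envelope illustration with invented numbers:

```python
# Why incendiary ads can cost less: auctions rank bid x predicted engagement,
# so higher-engagement creative wins the same slot at a lower price.
# All numbers below are invented for illustration.

def auction_score(bid_usd: float, predicted_engagement: float) -> float:
    """Effective auction rank: advertiser bid weighted by engagement rate."""
    return bid_usd * predicted_engagement

incendiary = auction_score(bid_usd=1.00, predicted_engagement=0.08)
neutral = auction_score(bid_usd=1.00, predicted_engagement=0.02)

# The incendiary ad scores 4x higher at the same bid, so it can win the
# same slot while bidding only a quarter as much:
breakeven = 1.00 * (0.02 / 0.08)
print(incendiary, neutral, breakeven)  # 0.08 0.02 0.25
```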
I cover health and policy issues such as intermediary liability, data governance, internet shutdowns, and more. Hit me up for tips.
