“Facebook has effectively ended all this work,” said one researcher, referring to her team’s work identifying political disinformation and vaccine misinformation on the platform.
In August alone, it came to light that Facebook had banned the accounts of a group of NYU researchers, threatened legal action against researchers at AlgorithmWatch, and imposed rigid data access restrictions on researchers at Princeton University.
These researchers play an important role in holding social platforms accountable by studying fake news and disinformation patterns, algorithmic bias, ad targeting practices, the spread of hate speech, and more. But Facebook has restricted such research in the name of protecting privacy, a claim many privacy advocates have called bogus. Here’s a round-up of what went down.
NYU Ad Observatory
What happened? On August 3, Facebook banned the personal accounts of researchers involved in the NYU Ad Observatory and suspended the team’s access to Facebook’s Ad Library and CrowdTangle, two platforms that provide data on how often particular posts are viewed, liked, and shared.
What is NYU Ad Observatory? It is a project studying ad transparency and the spread of misinformation on social media platforms. To carry out their work, the researchers developed a browser plug-in called Ad Observer, which collects data from users on which political ads are shown to them and why they were targeted. The plug-in’s website states that it does not collect any personally identifying information, and the team has published the code for anyone to audit.
Why does this research project matter?
“Online ads are usually seen only by the audience the advertiser wants to target, and then they disappear. This makes it difficult for the public to monitor them and hold advertisers accountable. While platforms have developed some transparency libraries for political ads, these libraries are missing many ads featuring political content and often don’t include vital information such as ad targeting. This isn’t a partisan issue. We think it’s important to democracy to be able to check who is trying to influence the public and how.” – NYU Ad Observatory
Why did Facebook ban the accounts? In a blog post explaining the reasons for its decision to ban the researchers, Facebook said:
- Unauthorized data collection: “NYU’s Ad Observatory project studied political ads using unauthorized means to access and collect data from Facebook, in violation of our Terms of Service,” Facebook stated. The platform said that the Ad Observer plug-in was programmed to evade its detection systems and scrape data such as usernames, ads, and links to user profiles, even from users who did not install it or consent to the collection.
- In line with the FTC order: “We took these actions to stop unauthorized scraping and protect people’s privacy in line with our privacy program under the FTC Order,” Facebook said. Facebook is referring to the FTC order that followed the Cambridge Analytica scandal, which involved a $5 billion fine and a number of privacy checks.
- Made repeated attempts to work with NYU Ad Observatory: Facebook claims it told the researchers a year ago that their plug-in would violate the platform’s terms and that it was willing to provide the researchers with “the precise access they’ve asked for in a privacy-protected way.”
- We offer researchers a number of official methods to collect data: “We welcome research that holds us accountable, and doesn’t compromise the security of our platform or the privacy of the people who use it. That’s why we created tools like the Ad Library and launched initiatives like Data for Good and Facebook Open Research & Transparency (FORT) — to provide privacy-protected APIs and data sets for the academic community,” the company said.
Reactions to Facebook’s decision:
- Don’t invoke privacy and FTC as a pretext to advance other aims: The Acting Director of the Bureau of Consumer Protection Samuel Levine sent a letter to Mark Zuckerberg regarding Facebook’s misleading claim that the company was working according to an FTC order. “The consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest. Indeed, the FTC supports efforts to shed light on opaque business practices, especially around surveillance-based advertising. While it is not our role to resolve individual disputes between Facebook and third parties, we hope that the company is not invoking privacy – much less the FTC consent order – as a pretext to advance other aims,” Levine wrote.
- Facebook’s misleading accusation: While Facebook appears to suggest that data about private individuals was collected without consent, Facebook is actually “referring to advertisers’ accounts, including the names and profile pictures of public Pages that run political ads and the contents of those ads,” Protocol reported.
- Facebook’s claims do not hold water: “In our view, those claims simply do not hold water. We know this, because before encouraging users to contribute data to the Ad Observer, which we’ve done repeatedly, we reviewed the code ourselves,” Mozilla’s chief security officer Marshall Erwin said.
- Project uncovered systemic flaws: “Over the last several years, we’ve used this access to uncover systemic flaws in the Facebook Ad Library, identify misinformation in political ads including many sowing distrust in our election system, and to study Facebook’s apparent amplification of partisan misinformation. By suspending our accounts, Facebook has effectively ended all this work,” NYU researcher Laura Edelson tweeted.
- Stops work on vaccine misinformation: “Facebook has also effectively cut off access to more than two dozen other researchers and journalists who get access to Facebook data through our project, including our work measuring vaccine misinformation with the Virality Project and many other partners who rely on our data,” Edelson tweeted.
- Cambridge Analytica is nothing like Ad Observer: “Let’s be clear; Cambridge Analytica is nothing like Ad Observer. Cambridge Analytica did its dirty work by deceiving users, tricking them into using a “personality quiz” app that siphoned away both their personal data and that of their Facebook “friends,” using a feature provided by the Facebook API.[…] The slimy practices of the Cambridge Analytica firm bear absolutely no resemblance to the efforts of the NYU researchers, who have prioritized consent and transparency in all aspects of their project,” the Electronic Frontier Foundation wrote.
- Senators write to Facebook: Several US senators wrote a letter to Facebook asking the company to explain its decision to block the NYU researchers’ accounts and why it had not contacted the FTC before invoking its order as a pretext.
- Over 200 academics sign a letter of solidarity: Over 200 academics signed a letter in solidarity with the banned researchers, asking Facebook to reinstate their accounts and make data available to researchers. They also urged regulators to compel access to data for research purposes.
- Congress can pass a law: Tech journalist Casey Newton suggested that Congress could pass a law and create a dedicated carveout for qualified academic researchers. “It could require platforms to disclose more data in general, to academics and everyone else. It could establish a federal agency dedicated to the oversight of online communication platforms,” he said.
AlgorithmWatch
What happened? On August 13, researchers at Berlin-based AlgorithmWatch said in a blog post that they had to terminate their research project monitoring the Instagram algorithm after legal threats from Facebook. “Ultimately, an organization the size of AlgorithmWatch cannot risk going to court against a company valued at one trillion dollars,” the organization stated.
What work was AlgorithmWatch doing? In March, AlgorithmWatch launched a project to monitor Instagram’s newsfeed algorithm by allowing users to install a browser add-on that scraped their Instagram feeds. The organization studied this data to find how Instagram prioritizes pictures and videos in a user’s timeline.
Why does what they do matter? The research allowed AlgorithmWatch “to show that Instagram likely encouraged content creators to post pictures that fit specific representations of their body and that politicians were likely to reach a larger audience if they abstained from using text in their publications (Facebook denied both claims). Although we could not conduct a precise audit of Instagram’s algorithm, this research is among the most advanced studies ever conducted on the platform,” the organization stated.
What did Facebook say?
- The research was flawed: Initially, Facebook refused to comment on the findings, but later said that the “research [was] flawed in a number of ways” and that they “found a number of issues with methodology.”
- Breach of terms of service: In May 2021, Facebook called for a meeting in which it told AlgorithmWatch that the project breached Facebook’s Terms of Service and that it would have to “move to more formal engagement” if the organization did not resolve the issue.
- Violation of GDPR: Facebook also claimed that the project violated the GDPR because some of the collected data came from users who had never agreed to the project. However, AlgorithmWatch says that such data was deleted immediately upon arriving at its server.
Reactions to the incident:
- Constant threat of being sued: “Facebook’s reaction shows that any organization that attempts to shed light on one of their algorithms is under constant threat of being sued,” AlgorithmWatch stated. “There are probably more cases of bullying that we do not know about. We hope that by coming forward, more organizations will speak up about their experiences,” the organization added.
- Only users’ own data collected: While Facebook terms state that one “may not access or collect data from [Facebook’s products] using automated means,” AlgorithmWatch said that “users of the plug-in were only accessing their own feed, and sharing it with us for research purposes.”
- Cannot rely on data provided by Facebook: In response to Facebook’s argument that it provides data to researchers in a privacy-safe manner, AlgorithmWatch said, “researchers cannot rely on data provided by Facebook because the company cannot be trusted.” “Even Facebook’s Ad Library, one of the company’s flagship transparency projects, suffered ‘bugs’ that harmed its credibility. In December 2019, a few days before the United Kingdom’s general election, almost half of the British advertisements stored in the Library disappeared,” AlgorithmWatch pointed out.
- Ensure access to data through Digital Services Act: AlgorithmWatch suggested that intermediary institutions could be established with the mandate to enable data access frameworks for public interest research. “European lawmakers have the chance, with the Digital Services Act, to ensure that public interest researchers, including academia, journalists, and civil society organizations, have access to the data we need from large platforms,” AlgorithmWatch stated.
Princeton University
What happened? Princeton University researchers who applied to Facebook for access to its political ad data through the FORT program pulled the plug on the would-be project because of the social media platform’s “rigid contractual requirements,” Digiday reported.
Why did the researchers abandon their plans for the project? “For Princeton researchers including Orestis Papakyriakopoulos, a Ph.D. at the University’s Center for Information Technology Policy, the key sticking point was a contract Facebook requires research institutions to sign before accessing its data. In particular, he and others on his digital tech policy research team were concerned that agreeing to the contract would give Facebook the right to remove information from their research findings had they actually went through with the project,” the Digiday report stated. “It doesn’t make sense for us to do research for six months and then not be able to publish it,” Papakyriakopoulos told Digiday.
What did Facebook’s “rigid contract” stipulate? The contract that researchers had to sign “states that research findings resulting from analysis ‘may not disclose any Confidential Information or any Personal Data’ and gives Facebook the opportunity to review publication drafts ‘to identify any Confidential Information or any Personal Data that may be included or revealed in those materials and which need to be removed prior to publication or disclosure.’ According to the contract, Confidential Information includes information relating to Facebook’s products and technology, its data processing systems, policies and platforms, in addition to personal information pertaining to its users or business partners,” Digiday reported.
What has Facebook said? A Facebook spokesperson told Digiday: “The questions these researchers ask and conclusions they draw are not restricted by Facebook. […] As of now, we have not rejected any research papers as a part of our standard review process to ensure no personal data or confidential information is included.”