August 9, 2021

Decoder Newsletter: Let’s Talk Research

[Image: A Facebook logo created from pictures of Facebook users worldwide, pictured in the company’s data center.]
Margaret Sessa-Hawkins & Viviana Padelli

This past week, Facebook banned NYU researchers from using a plug-in that tracks advertising information on its platform. The move comes shortly after Facebook relocated the team responsible for CrowdTangle, a tool popular with academics and social media researchers. In this Decoder, we look at social media, research, and transparency: the fallout from Facebook’s move against the NYU team, why research into social media is so critical, and the barriers researchers face, along with highlights from recent studies. The Decoder is a weekly roundup of news about digital deception. Know someone who might be interested? Ask them to sign up!

  • Leadership position fighting disinformation: Decode Democracy is seeking an Executive Director to lead our policy and advocacy work, which includes advocacy for government action to reduce disinformation and hold social media companies accountable. We understand that democracy reform requires centering racial justice, equity, and diversity in every aspect of our work. If this is a position you, or someone you know, might be interested in, you can find more information here.
  • Facebook v. NYU: Last week, Laura Edelson, a researcher at NYU, tweeted that Facebook had suspended the accounts of several people associated with the school’s Cybersecurity for Democracy team, effectively cutting off their ability to conduct research. In a blog post, Mike Clark, Facebook’s Product Management Director, wrote that the accounts were disabled for using methods that violated Facebook’s terms of use and for research that was not authorized under a Federal Trade Commission order, a claim the FTC subsequently called misleading. Three Democratic senators have now sent a letter to Mark Zuckerberg asking him to explain why the company suspended the researchers’ accounts.
  • What’s going on here: The tensions between Facebook and NYU date back to November 2020, when Facebook demanded that the researchers stop using a browser plug-in to study the company’s political ads. The plug-in collects information about which ads are running on the platform, as well as how they are targeted. Facebook has long claimed that because the plug-in “scrapes” data from Facebook, it violates the company’s terms of service. In fact, users have to consent to use the plug-in, and Protocol reported in March that the “scraping” Facebook claims to be worried about actually involves advertisers’ accounts. Although Facebook has an ad library, it does not include targeting information, and FORT (Facebook Open Research and Transparency), a tool that does, is often criticized as too limited.
  • Reactions: Since the news first broke, a diverse host of organizations has come out against Facebook’s actions. Mozilla, maker of the Firefox web browser, wrote a blog post condemning the move and arguing that Facebook’s claims about privacy “don’t hold water.” In Wired, senior writer Gilad Edelman agreed that Facebook’s claims don’t hold up. The Anti-Defamation League has also condemned Facebook, while in Tech Policy Press, David Carroll of The New School argues that the ban only shows how much Facebook is in need of regulation. In another Tech Policy Press piece, Nathalie Maréchal of Ranking Digital Rights argues that the move shows once again that Facebook is an ad tech company and should be regulated as such. In The Verge, Casey Newton writes that to prevent fights like this in the future, either Facebook should enable more research or Congress should require more transparency from it.
  • History: This is not, of course, the first time Facebook has shown itself to be wary of transparency. Olivia Solon of NBC has a thread looking at times Facebook has ignored research that reflects badly on the platform, and there were also issues when researchers tried to analyze its 2020 election data. In 2019, Facebook restricted the use of several ad transparency tools, including one from ProPublica; that same year, Axel Bruns wrote a research paper on how social media companies were restricting researchers’ access to data. More recently, Facebook sidelined CrowdTangle, one of the most popular tools used by researchers and journalists, and then blocked the newsletter FWIW’s access to the tool after it published a story on vaccine misinformation using CrowdTangle data.
  • Research: While we’re discussing social media and research, some interesting studies have come out recently that are well worth a read. The Brennan Center has a new report on how content moderation rules across social media are often written in a way that creates a double standard: marginalized communities are subjected to increased scrutiny but not protected from harm. InfluenceMap reports that despite publicly supporting climate action, Facebook continues to allow fossil fuel interests to use its platform to spread disinformation; the researchers noted that their work was made more difficult by a lack of transparency from social media platforms. In MIT Technology Review, Mar Hicks looks at how women’s voices are still being silenced in the tech world.
  • Election disinformation: The Tech Transparency Project has found that disinformation surrounding election audits is rife on Facebook. The groups spreading these claims remain active despite the company’s pledge to crack down on election-related disinformation, and many of their posts carry no flags or labels. In The New Yorker, Jane Mayer has written about the groups and donors fueling Donald Trump’s election-disinformation efforts. Meanwhile, The Washington Post has published a nifty timeline of Trump’s efforts to overturn the election.
  • Advertising: The Information reports that Facebook is looking into targeting encrypted messages, like those sent on WhatsApp, for advertising. The research centers on “homomorphic encryption,” a technique that allows computations to run on encrypted data without ever decrypting it (see the sketch after this list); Microsoft, Amazon, and Google are also apparently working on similar projects. Meanwhile, Representative Anna Eshoo has reintroduced legislation to ban microtargeting in political ads. “Microtargeting is particularly dangerous when it comes to online political advertising, allowing political actors to spread harmful and divisive messaging with little transparency for voters. Decode Democracy applauds this bill for working to eliminate this serious threat to democratic debate online,” Decode Democracy’s President, Daniel G. Newman, said.
  • Apple Privacy: Shortly after Apple introduced a new tool meant to spot child sexual abuse material in iCloud accounts, the announcement drew criticism for opening a privacy can of worms, especially when it comes to encryption. The feature that notifies parents if a child sends sexually explicit material has come under special criticism, and a large number of organizations have now signed a letter decrying the move. The Electronic Frontier Foundation has a good piece outlining why these changes put Apple on a very slippery, and scary, slope.
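For readers curious what “computing on encrypted data” actually looks like: Facebook has not published details of its approach, so what follows is only a minimal, illustrative Python sketch of the classic Paillier cryptosystem, one well-known additively homomorphic scheme. Everything here, including the toy key sizes and the choice of Paillier itself, is an assumption for illustration, not a description of any company’s system. The point is the last two lines: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, with no decryption along the way.

    import math
    import secrets

    # Toy Paillier cryptosystem: an additively homomorphic scheme.
    # Illustration-only parameters; real deployments use ~2048-bit
    # primes, and this sketch omits padding and input validation.

    def keygen(p, q):
        n = p * q
        lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
        mu = pow(lam, -1, n)  # modular inverse of lam mod n (Python 3.8+)
        return n, (lam, mu)

    def encrypt(n, m):
        n2 = n * n
        while True:  # pick a random r coprime with n
            r = secrets.randbelow(n - 1) + 1
            if math.gcd(r, n) == 1:
                break
        # g = n + 1 is the standard simple generator choice
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def decrypt(n, priv, c):
        lam, mu = priv
        ell = (pow(c, lam, n * n) - 1) // n  # Paillier's L function
        return (ell * mu) % n

    n, priv = keygen(293, 433)                 # toy primes
    a, b = encrypt(n, 42), encrypt(n, 58)
    combined = (a * b) % (n * n)               # multiply ciphertexts...
    assert decrypt(n, priv, combined) == 100   # ...and the plaintexts add

Addition alone is far from what ad targeting would require; supporting richer computation at practical speed is exactly why fully homomorphic encryption remains an active research area, and why the reported projects are still research rather than products.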


Join the fight against online political deception.