Today’s Decoder returns to a grimly familiar topic: electoral disinformation. The theme has resurfaced primarily in light of the disinformation swirling around the California recall election, but it is also relevant to the ongoing investigations into the January 6th Capitol riot, and to Facebook’s continued efforts to stymie researchers trying to find out more about electoral disinformation on the platform. We also look at the ongoing problem of coronavirus disinformation on platforms, and what can be done about it. The Decoder is a weekly roundup of news about digital deception. Know someone who might be interested? Ask them to sign up!
- Recall disinformation: Amid a slew of misinformation around the California recall election, Decode Democracy coordinated with a dozen civic organizations to urge major social media and tech companies to limit the spread of disinformation ahead of the election next week. In an open letter to the chief executives of Facebook, Twitter, YouTube, and Google, the organizations ask the companies to address disinformation by elevating reputable voting information, prioritizing fact-checking of electoral content, providing real-time data to researchers, and addressing non-English disinformation. In an article for the Sacramento Bee, Decode Democracy policy director and former Chair of the Federal Election Commission Ann Ravel stressed the importance of acting on misinformation, saying, “At its core, we know that misinformation does have an impact on democracy.”
- Capitol riot threats: House Republican Leader Kevin McCarthy threatened retaliation Tuesday against companies complying with a congressional committee investigating the Capitol riot. The threat comes after the committee asked firms — including social media platforms — to preserve phone and social media records of 11 far-right members of Congress. On Friday, the watchdog group Citizens for Responsibility and Ethics in Washington (CREW) asked the Office of Congressional Ethics to investigate whether McCarthy violated House rules with his threat.
- Research and electoral disinformation: Separately, Politico reported that many social media posts relating to the Jan. 6th events had been missing from a Facebook transparency tool, CrowdTangle, since at least May. The company told Politico the posts were accidentally removed from the tool, but it is unclear when the situation will be remedied. A forthcoming study from researchers at NYU and Université Grenoble Alpes in France also found that, during the election cycle, news publishers putting out misinformation received six times the engagement of reputable sources. While Facebook criticized the study for only looking at engagement metrics, Dr. Rebekah Tromble points out that the company doesn’t make impressions data available.
- Covid disinformation: In a new op-ed for CNN, Decode Democracy’s Ann Ravel and Kristin Urquiza, who founded Marked by Covid after her father’s death from the disease, argue that social media companies must take greater steps to reduce vaccine disinformation. Reddit has also banned a forum filled with health disinformation after 135 subreddits ‘went dark’ in protest of the site’s inaction. In California, San Diego County has become the first in the US to declare coronavirus misinformation a public health crisis. For CNN Business, Claire Duffy looks at different social media platforms’ strike policies, and how they could address health disinformation. In Bloomberg Businessweek, Daniel Zuidijk reports that to cut down on Covid disinformation, we need to follow the money.
- Around the world: In Brazil, President Jair Bolsonaro has signed a decree limiting social media companies’ power to take down accounts and content. Bolsonaro has frequently had his own social media posts removed for spreading coronavirus disinformation. Texas is close to passing a similar (unconstitutional) law. Facebook has also expanded its plans to reduce political content. In an update to a February blog post extolling the success of the experiment so far, the company announced that it would be reducing political content in news feeds in four new countries: Costa Rica, Sweden, Spain and Ireland. In the Columbia Journalism Review, Matthew Ingram dives into the new changes, and what they might mean.
- Safeguarding social media research: In Nature, Ethan Zuckerman, director of the Initiative for Digital Public Infrastructure, outlined five tenets for safeguarding researchers’ access to social media data. These include ensuring access to ad-targeting information, allowing users to ‘donate’ their data to research, and creating protections for research in the public interest.
- Research: A new study from the University of Maryland has found that facial recognition software continues to exhibit racial and gender biases in significant and easily detectable ways. The study comes as Facebook’s AI labeled footage of Black men as ‘primates.’ In Rest of World, Vittoria Elliot writes about new research from the Mozilla Foundation looking at disinformation influencers for hire in Kenya. The Media Manipulation Casebook has also published a new study on disinformation in the ongoing conflict in Tigray. A new study in the journal Policy and Internet looks at the threats digital technology poses to democracy. One of the authors, Dr. Kate Dommett, has a good thread about the findings.