November 8, 2021

Decoder Newsletter: How Disinformation Affects the World

Margaret Sessa-Hawkins

Last week, we dedicated the Decoder to covering the nexus of social media and climate change. This week, we’re taking another global approach to the issue of online deception, looking at two new climate disinformation studies, platforms’ failures to address disinformation in non-English languages, and potential tech regulations coming from Europe. Looking ahead to next year’s midterm elections, we’re also announcing the launch of Decode Democracy’s Voter Empowerment Plan. The Decoder is a weekly roundup of news about digital deception. Know someone who might be interested? Ask them to sign up!

  • Voter Empowerment: Today, with the midterm elections exactly one year away, Decode Democracy is launching a Voter Empowerment Plan. So far, lawmakers have done remarkably little to protect voters from the online disinformation and deception that have plagued recent elections. Combining new and existing policy proposals, the Voter Empowerment Plan outlines steps to fight disinformation and give voters the tools to be active participants in our democracy. Click here to sign the letter telling your federal representatives to support the Voter Empowerment Plan and protect our elections from online disinformation.
  • Climate Change I: As the COP26 U.N. Conference to address the climate crisis continues in Glasgow, a new study from the Center for Countering Digital Hate, “The Toxic Ten,” finds that 10 fringe publishers fuel nearly two-thirds of climate change denial content on Facebook. In a write-up of the study, Cat Zakrzewski of the Washington Post reports that the Toxic Ten includes domestic right-wing news sites such as Breitbart, outlets with ties to the fossil fuel industry such as the Media Research Center, and publishers with links to foreign governments, including RT.
  • Climate Change II: The Real Facebook Oversight Board, SumOfUs, and Stop Funding Heat also released #InDenial, a report finding that climate misinformation on Facebook is viewed between 800,000 and 1.36 million times daily. In the wake of the studies and last month’s hearing on fossil fuel industries and climate disinformation, Rep. Sean Casten (D-Ill.) tweeted that platforms need to be held accountable for the disinformation they are spreading. Climate Monitor’s latest report on digital climate ad spending finds that, following the hearing, ExxonMobil halted a massive Facebook ad campaign.
  • Spanish Disinformation: Following previous accusations that Facebook CEO Mark Zuckerberg axed plans to create a voting information center in Spanish on WhatsApp because it wouldn’t be “politically neutral,” Sen. Amy Klobuchar (D-MN) and Rep. Ben Ray Luján (D-NM) have written a letter to the CEO raising concerns and questions. In particular, they ask whether the reports that such an initiative was blocked are true, and whether any electoral or voting information was available on Facebook or WhatsApp in English but not in Spanish. New research from Free Press, the Global Disinformation Index, and SumOfUs finds that Google’s ad service placed advertisements on Spanish-language coronavirus disinformation sites.
  • Other Language Issues: In light of Facebook’s failure to invest in addressing non-English-language disinformation on the platform, it’s no surprise that whistleblower Frances Haugen raised the issue in an interview with the German news outlet Deutsche Welle. In Rest of World, Nilesh Christopher spoke to misinformation researcher Tarunima Prabhakar about Big Tech’s issues with non-English-language disinformation, and some fixes that could be implemented.
  • Trends in Ethiopia: In her interview with Deutsche Welle, Haugen specifically cited Facebook’s failures regarding the current situation in Ethiopia. In order to stem incitements to violence, Twitter recently announced that it was disabling Trends in the country. Nicolas Henin, a senior fellow at Open Diplomacy, has a thread on why Trends can be so harmful. In Rest of World, Tomiwa Ilori examines social media’s role both in spreading disinformation and in helping governments abuse “disinformation laws.”
  • Tough on Tech: A UK official has indicated the country may take a tougher stance on social media companies in the future, Natasha Lomas reports in TechCrunch. Nadine Dorries, the newly appointed culture secretary, said she wants to make it easier for a proposed online safety bill to tackle tech companies. Under current proposals, social media companies would have two years before facing potential criminal sanctions — including jail time for executives — if they fail to address illegal or harmful content spreading on their platforms. Dorries, however, supports shortening that two-year time frame to three to six months, saying of the companies, “They know what they’re doing wrong.”
  • Goodbye Facebook Facial Recognition: Facebook announced last Tuesday, via its new parent company Meta, that it was shutting down its facial recognition software. The news was generally received positively, but came with some pretty major caveats. In Gizmodo, Shoshana Wodinsky writes about the many privacy issues still rife on Meta’s social media platforms, including Meta’s possession of DeepFace, a facial recognition algorithm that learned from the facial templates Facebook is now deleting. Cybersecurity law professor Jeff Kosseff points out that there are still many reasons we need a national privacy law addressing facial recognition.
  • Monetizing Groups: Reuters reports that Facebook has also been testing ways to make money from groups through subscriptions. The article notes, “Facebook Groups have been under scrutiny from lawmakers and researchers who argue that they provide closed spaces for health misinformation, violent rhetoric and extremism to proliferate without being policed properly.” Given how problematic Facebook’s closed groups already are, tying revenue to their growth makes the move even more concerning.
  • Research: In Nieman Lab, Ethan Zuckerman explores why we don’t have complete information about disinformation on Facebook, and how the platform could remedy the issue. Axios reports that a new poll from YouGov and the Center for Growth and Opportunity found ongoing distrust of tech platforms. A new article in Policy Review looks at the recent European Union trend of making disinformation illegal, and examines the potential pitfalls of trying to create and enforce a legal definition of disinformation. In Protocol, Issie Lapowsky reports that Facebook’s practice of withholding some protections from groups of users seemed to barely move the needle, indicating the protections weren’t doing much in the first place.


Join the fight against online political deception.