August 16, 2021

Decoder Newsletter: Can Social Media Change?

Margaret Sessa-Hawkins & Viviana Padelli

Last week in the Decoder, we looked at the fallout from Facebook’s decision to suspend the accounts of NYU academics who were conducting research into the platform’s ads. This week, we look at whether the increased pressure that decision has brought down on the company will engender any change. We also examine other developments that could lead to change for social media, including a renewed Federal Trade Commission focus on algorithms and Facebook’s failure to address racist harassment of England soccer players. The Decoder is a weekly roundup of news about digital deception. Know someone who might be interested? Ask them to sign up!

  • Afghanistan: Yesterday, Kabul fell to the Taliban, effectively putting Afghanistan under the group’s control. Of course, where there is a crisis, there are also people trying to peddle disinformation about it. First Draft’s ‘Daily Briefing’ newsletter had a good roundup of many of the top disinformation narratives circulating. In the Columbia Journalism Review, Jon Allsop pointed out that much of the coverage of Afghanistan, including the political blamesmanship that disinformation capitalizes on, is pulling focus from the Afghans endangered by the current situation.
  • Social Media Research Update: Social media companies don’t love transparency. We focused on this in last week’s Decoder when Facebook suspended a team of NYU researchers. A week later, researchers at Princeton said they had shut down a study of political ads on Facebook, reports Kate Kaye in Digiday. The researchers balked at a contract the company asked them to sign to access data, worried that it would give Facebook the right to suppress their findings. AlgorithmWatch has also revealed that it shut down a research project in July, following pressure from Facebook.
  • Continued Backlash: Since last week, there has been extensive pushback against social media companies’ opposition to transparency. For the Electronic Frontier Foundation, Rory Mir and Cory Doctorow write that we need regulations that protect research into social media. The researchers themselves have written op-eds in both The Guardian and The New York Times explaining how Facebook’s actions could have a chilling effect on other research, and calling for legal protections. In The New York Times, Ashley Boyd of Mozilla goes broader than just Facebook, writing that while many social media companies want to appear transparent, they rarely are; she lists specific actions companies have taken to stifle transparency, as well as suggestions for what they could do to increase it. Twitter, meanwhile, has launched a new initiative promoting academic research in certain areas.
  • Algorithmic bias: The results of Twitter’s ‘bias bounty’ competition to find biases in its algorithm are out. The first-place winner was a student who proved that Twitter’s image-cropping algorithm is biased toward lighter-skinned, thinner faces. The second-place winner found that images of the elderly and disabled were also marginalized. Twitter has a thread recapping the competition and winners. Speaking of algorithms, Federal Trade Commission commissioner Rebecca Kelly Slaughter has written a paper outlining the algorithmic harms that threaten to undermine economic and civil justice, as well as examining existing tools the FTC could use to address those harms.
  • Health disinformation: As the Delta variant has surged, so have virus dis- and misinformation, reports The New York Times. The rise in disinformation comes after a lull over the spring. In CodaStory, Erica Hellerstein looks at how distrust in institutions fuels vaccine hesitancy, using Haiti as a case study. Hellerstein also reported on local governments working with micro-influencers to counter vaccine hesitancy in their communities. In The Washington Post, Caroline Anders reports that YouTube has had to grapple with how to handle videos of local government meetings that contain disinformation in their public comments, while Media Matters for America looks at one example, a ‘new Plandemic-like’ video.
  • Electoral Disinformation: Mike Lindell, of MyPillow fame, held a conference this weekend where he was meant to present evidence the 2020 election was stolen. He did not, but the danger posed by electoral disinformation is still acute, with Homeland Security Intelligence chief John Cohen telling CNN that calls for violence online are similar to those circulating before January 6th. The Cato Institute’s Julian Sanchez has a thread on why people buy into conspiracy theories like this, and the overlap with vaccine hesitancy.
  • Ad makeover: In The Verge, Alex Heath reports that Facebook, fearing impending regulation, is working on changing how its ads function so that more of users’ privacy is retained. In The Markup, Corin Faife used the Citizen Browser project to look at how far-right firebrand The Daily Wire targets ads compared to the New York Times.
  • Suspensions: On Tuesday, Twitter suspended Rep. Marjorie Taylor Greene’s account for a week following yet another violation of the platform’s health disinformation rules. Though Twitter has declined to say how many times Greene has violated the terms, she is believed to be at four violations, which means one more could result in a permanent ban. YouTube also suspended Sen. Rand Paul for a week for violating its health policies.
  • Research: Researchers at the University of Washington are looking for YouTube creators to help test a new tool for managing comments more efficiently. More information can be found here. Social media has often been blamed for polarizing society, but this week in Tech Policy Press, Justin Hendrix interviewed UC Berkeley’s Jonathan Stray about whether social media can help with depolarization. Pathmatics looked at the companies that joined the #BoycottFacebook campaign calling for stricter policies on hate speech to see how their ad spending compared a year later. The Tech Transparency Project traced who gets Big Tech funding and found that Big Tech’s reach is fairly pervasive. This week Microsoft will be holding a research lecture series on Race and Technology. More information and registration can be found here.

Join the fight against online political deception.