October 18, 2021

Decoder Newsletter: Social Media Leaks Keep Coming

Margaret Sessa-Hawkins

This past week has brought more social media revelations based on whistleblowing and leaks of internal documents. From the New York Times reporting on Instagram’s attempts to maintain its teenage ‘pipeline’, to a new installment in the Facebook Files, to the Intercept’s release of Facebook’s ‘Dangerous Individuals and Organizations’ list, the leaks keep steadily dripping. Facebook, at least, is trying to plug its own (as the New York Times reported, ironically, thanks to a leaked document), but in the meantime the question remains whether any of these revelations are leading to actual change.

  • The Pipeline: In the New York Times, Sheera Frenkel, Ryan Mac, and Mike Isaac have written about Instagram’s terror at losing teen users, and its efforts to hang onto them. The anxiety comes as Facebook and Instagram face aging user bases, with teens moving to TikTok. The key line in the piece, from a marketing document, is: “If we lose the teen foothold in the U.S. we lose the pipeline.” For those interested in more analysis of Facebook’s desperation and decline, New York Times tech columnist Kevin Roose’s segment from last week’s On The Media is worth a listen.
  • AI Content Moderation: The Wall Street Journal released a new installment in its ‘Facebook Files’ series, this one examining Facebook’s use of artificial intelligence in content moderation. While Facebook claims that AI will keep excessive violence and hate speech in check on its platform, the Journal reports that its AI cannot reliably identify first-person shooting videos or racist rants. Wired, meanwhile, has an interesting piece on a UK charity taking Facebook to court over its treatment of content moderators. The Guardian looks once again at Facebook’s massive failure to provide adequate content moderation in non-Western countries, profiling the independent researchers and volunteers reporting hate speech in Ethiopia to the company.
  • Whistleblowing: Speaking of Facebook’s content moderation failures outside Western countries, Sophie Zhang, the whistleblower behind the ‘Facebook loophole’ series in The Guardian, testified before the UK Parliament today. In her testimony, Zhang argued that the chronic under-resourcing of Facebook’s content moderation teams reveals the company’s priorities, noting that the ads team is never described as under-resourced. In an interview with Protocol, Zhang speculates on why it took a second whistleblower (Frances Haugen) before people started paying attention to what she had been saying for six months. In other leak news, in an internal announcement itself leaked to the New York Times, Facebook told employees it would be making some internal discussion groups private in order to cut down on leaks and whistleblowing.
  • Dangerous Individuals & Organizations: The Intercept has published the list of “Dangerous Individuals and Organizations” that Facebook uses to guide its content moderation. The list, which had been kept secret for more than a decade despite being cited as a cornerstone of Facebook’s moderation policies, appears to track US foreign policy priorities and could, the article argues, be used to censor marginalized groups.
  • Antitrust: Sens. Amy Klobuchar (D-MN) and Chuck Grassley (R-IA) have announced they will introduce antitrust legislation aimed at stopping dominant tech platforms from using their gatekeeper position to squash competition. The bill is being seen as a test of whether bipartisan anti-tech sentiment will translate into meaningful legislation. In CNBC, Reps. David Cicilline (D-RI) and Ken Buck (R-CO) have an op-ed on why antitrust action is necessary. The calls for antitrust action come as Reuters reports that Amazon has been repeatedly accused of creating knockoffs of goods sold on its platform, and The Markup reports that Amazon consistently ranks products affiliated with the company above outside products in its marketplace.
  • Section 230: In more reform news, House Democrats have introduced a bill to amend Section 230 of the Communications Decency Act. The bill, the Justice Against Malicious Algorithms Act, would hold social media platforms liable when their personalized recommendations contribute to a user’s physical or severe emotional harm. Daphne Keller of the Stanford Cyber Policy Center has a thread diving into the bill and analyzing some of its potential weaknesses. Techdirt has an interesting article (published before the bill’s introduction) on why any Section 230 reform essentially amounts to its repeal.
  • Political Ad Data: Politico reports that the European Commission is considering forcing companies like Google and Facebook to provide detailed information on how political groups target individuals with ads. While Commission officials had floated the possibility of an all-out ban on targeted political advertising, the Commission rejected that idea on the grounds that smaller political groups would be penalized if they could not target voters.
  • Health Disinformation: A group of state Attorneys General sent a letter to Facebook CEO Mark Zuckerberg asking whether the company’s ‘whitelist’ of high-profile users exempt from ordinary content moderation includes any of the so-called ‘disinformation dozen’, the super-spreaders of health disinformation. In the MIT Technology Review, Charlotte Jee writes about a new report finding that Covid conspiracy theories are helping to disseminate antisemitic beliefs.
  • Research: A team of researchers from Spain and Australia showed that Facebook’s ad targeting tools can be used to direct an advertisement to a single individual. First Draft has a report on anti-vaccine misinformation circulating in Black communities on social media. A new survey from Gallup and the Knight Foundation finds that while Americans agree across party lines that misinformation is a problem on social media, they split along partisan lines over whether misinformation or censorship is the bigger concern. The Reuters Institute looks at how journalists can address misinformation on Telegram. The New York Times also reports that content moderation policies on one platform can have ‘spillover effects’ on another. Digital Journalism has published a new study asking whether social media is ‘killing’ local news. In a Twitter thread, Nick Matthews, one of the authors, writes: “While we cannot say definitively that social media is ‘killing’ local news, on balance and put mildly Facebook does not appear helpful to its survival.”
