April 19, 2021

Decoder Newsletter: The Perils of Being a Whistleblower

[Image: A protest sign that reads “power to truth.”]
Margaret Sessa-Hawkins & Viviana Padelli

Produced by Decode Democracy, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we’ll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up! 

  • Whistleblowers: Continuing its interviews with Facebook whistleblower Sophie Zhang, the Guardian has published a series of articles called “The Facebook Loophole,” which details how the company allows abuses in smaller, poorer countries so that it can focus on fixing abuses in richer countries that attract more media attention. In Protocol, Issie Lapowsky has written about the long-term challenges and barriers whistleblowers face. Sunday’s episode of the Tech Policy Press podcast looked at political manipulation on social media and featured Guardian tech writer Julia Carrie Wong (who wrote the Facebook Loophole series) and Maria Ressa of Rappler.
  • More whistleblowing: A Facebook content moderator who resigned has written a blistering farewell note. Among other things, the note deals with the trauma of moderation work and the lack of communication between moderators and the people who craft policy.
  • Domestic extremism: In a hearing held by the Senate Intelligence Committee, FBI Director Christopher Wray said that social media companies play a central role in spreading the messaging of violent domestic extremists. The Washington Post also analyzed data showing that domestic terrorism surged over the past few years, peaking in 2020.
  • Profiles in domestic extremism: The New York Times interviewed an alt-right videographer about what becomes successful on YouTube. In the article, he describes working to feed the platform’s algorithm, noting that it rewards extremism and conflict. Bloomberg has profiled Nick Lim, a computer programmer who hosts websites cut off by other providers, including white-nationalist sites and QAnon message boards.
  • Regulating Big Tech: The House Judiciary Committee officially approved a report detailing how Big Tech companies abuse their market power to the detriment of consumers and smaller firms, setting out a blueprint for antitrust legislation. Republicans on the Energy and Commerce Committee are also circulating a memo laying out ideas for regulating Big Tech. Both Axios and Bloomberg Government have pieces looking at the implications of the two moves.
  • Vaccine information: The Senate Subcommittee on Communications, Media and Broadband held a hearing on communicating trusted vaccine information. Chairing the hearing, Sen. Ben Ray Luján (D-NM) spoke about the importance not only of spreading trusted information but also of stopping the spread of disinformation. On the day of the hearing, the top vaccine post on Facebook was a video from Tucker Carlson suggesting that vaccines do not work.
  • Weak privacy: The Markup has analyzed the weak privacy bills Big Tech is pushing in several states. Experts agree the legislation is meant to preempt potentially stricter regulation, and The Markup found that the bills are backed by “a who’s who of Big Tech-funded interest groups.” Bills less friendly to Big Tech are mostly dying or being rewritten.
  • Climate lies: Bloomberg reports that despite its promises, Facebook is not doing enough to address climate disinformation. Currently, the company relies on labeling such posts rather than countering them or taking them down. Facebook claims that because climate change is a long-term problem, the posts do not pose an imminent threat of harm.
  • Clubhouse content moderation: And in this week’s episode of “the content moderation debate comes for everyone”: the Anti-Defamation League says that Clubhouse is hosting extremists and Holocaust deniers. Clubhouse responded by shutting down some rooms and suspending users. The reactions were what you would expect.
  • Oversight expansion: Facebook’s Oversight Board has announced that it will now accept appeals over content that has not been removed from Facebook (rather than only content that has been taken down). Vox has a piece detailing how the process works. In The Guardian, Emily Bell writes that the move seems mostly like a PR stunt and that Facebook itself remains highly problematic. Bell also calls out the access-journalism model Facebook seems to be pursuing. And for those hoping for the board’s Trump decision soon: don’t hold your breath.
  • Responsible AI?: Twitter has introduced a new “responsible machine learning initiative.” The company says the initiative is meant to assess the effects of its algorithms over time, analyze any harms they cause, be transparent about the resulting decisions, and incorporate user feedback. In Europe, lawmakers are considering fines of up to 4% of global annual turnover for certain prohibited uses of AI, according to leaked documents. The comic XKCD has also taken a swing at inherent AI bias.
  • Surveillance advertising: A digital rights group in Ireland has said that it will sue Facebook over a massive 2019 data breach that resurfaced recently. Under Europe’s GDPR rules, the company could be liable for up to 4% of its global annual turnover. The breach has also raised questions about Facebook’s pervasive surveillance advertising. Jason Kint of Digital Content Next points out that most people don’t realize how far Facebook’s surveillance goes.
  • No to kid Instagram: A coalition of consumer advocacy groups and child development experts is urging Facebook to scrap its planned Instagram for children. The coalition argues that social media is detrimental to children’s mental and physical well-being. In The Guardian, Rebecca Nicholson also argues against Facebook’s move, pointing out that many of the harms adults already experience from social media’s algorithms and advertising will be far worse for children.
  • Research: Using data from NYU’s Ad Observer browser extension, The Markup found that companies are creating different — and oftentimes contradictory — ads for users with different political beliefs. A study published in PNAS looks at how misinformation gets amplified by mainstream news organizations. The Institute for Strategic Dialogue has released a series of papers examining potential avenues for transparency and regulation in the technology sector. The Open Technology Institute maintains a regularly updated report outlining the metrics companies use in their transparency reports.


Join the fight against online political deception.