Facebook’s horrible, no good, very bad, terrible few weeks continue with the landing of the Facebook Papers. The articles, from a consortium of 17 major news outlets, are based on documents provided by whistleblower Frances Haugen. Is the steady drumbeat of negative press leading down the path to regulatory reform? It’s still unclear, but as the negative attention mounts, it does seem more likely. In other news, Facebook’s Oversight Board also released its first transparency report, and Twitter revealed that its algorithm has a rightwing bias. The Decoder is a weekly roundup of news about digital deception. Know someone who might be interested? Ask them to sign up!
- Facebook Papers: We learned this past week that Facebook whistleblower Frances Haugen gave a trove of internal Facebook documents to reporters at 17 major news outlets. Stories from those documents, known as the Facebook Papers, started emerging Friday. For those interested, Ben Smith, The New York Times’ media columnist, has a fascinating chronicle of the backstory of the Facebook Papers. Before any of the articles were published, Facebook called them “an orchestrated ‘gotcha’ campaign.” Former Facebook Chief Security Officer Alex Stamos has suggested that the journalists consider publishing the underlying documents, provided the documents were redacted for privacy. In Politico, Emily Birnbaum reports that Haugen has been financially supported in her whistleblowing efforts by Pierre Omidyar, a billionaire tech critic.
- The Toplines (Domestic): The Facebook Papers further strengthened the accusation that Facebook prioritizes profits over people. NBC, for example, chronicled how Facebook knew its algorithm pushed users toward QAnon content a full year before it banned the conspiracy theory, while USA Today delved into the divisiveness of the algorithm and its impact on individuals. Both The New York Times and The Washington Post looked at Facebook’s failures when it came to electoral disinformation — with the Times examining how much electoral disinformation was circulating unimpeded on the site and The Post pointing out the harms of Facebook’s policy of leniency towards politicians. The Washington Post also reported that Steve Bannon’s rightwing news outlet Breitbart was added to a Facebook “white list” that de facto exempted it from normal content moderation.
- The Toplines (Global): There was also extensive reporting on Facebook’s global failures. The Verge reported on Facebook’s practice of sorting the world into “tiers,” which it then uses to determine how much moderation a country gets, The Washington Post looked at Facebook’s practice of acquiescing to the demands of world leaders when it’s in the company’s financial interest, and the Associated Press examined how language gaps compound the company’s content moderation problems in non-Western countries. This macro-perspective on Facebook’s global failure is underpinned by stories of Facebook’s failures in individual instances, including its failure to stop incitements to violence in Ethiopia, its failure to address both religious violence and electoral interference in India, and its failure to stop human trafficking on its platform.
- The Articles: Though articles have been emerging throughout the weekend, there will undoubtedly be more to come this week. For those interested in future pieces, as well as reading more of what has already been published, Tech Policy Press has an updating list of all the stories.
- Building Blocks: Most of the reporting in the Facebook Papers series adds to previous articles outlining issues at the company. Articles on electoral disinformation, for example, build on BuzzFeed’s story about an internal report on the company’s failures leading up to the January 6th insurrection. Warnings about the danger of Facebook groups are also not new — they have been reported in previous years in The Guardian and The Verge, among other outlets.
- The Fallout: Two days before the reporting came out, Facebook announced changes to its groups policy — saying that it would downrank groups that broke rules (although some doubts have been expressed about how well the policy will be enforced). Speculation about possible regulation is also rife as Facebook gets hammered with negative publicity. Within the Facebook Papers reporting, Wired looks at how many of Facebook’s researchers have already provided good blueprints and suggestions for improving the platform. In Politico, Leah Nylen analyzes the ways in which the documents could support antitrust cases.
- Testimony: Facebook whistleblower Frances Haugen testified before the UK Parliament today. Most of her testimony focused on the harms of Facebook’s algorithm — notably its amplification of hate, anger, and divisiveness. Haugen especially emphasized how detrimental the harm has been to non-Western countries. She also focused on the abundance of ways in which these harms could be mitigated, speaking especially about algorithmic designs that prioritize safety and methods for introducing friction into hyper-sharing. When questioned as to why Facebook hasn’t implemented these changes already, Haugen pointed out that slowing reshares, for example, could result in small cuts to revenue, and she noted that “Facebook has been unable to accept even little slivers of profit being sacrificed for safety.”
- Oversight: The Facebook Oversight Board also released its first transparency report on Thursday. As Issie Lapowsky reports in Protocol, the Board criticized Facebook for its lack of transparency over its cross-check (or X-Check) system. The Board also shared data on its work, which shows a focus on Western countries over non-Western, despite the fact that, as the Board noted, “we have reason to believe that users in Asia, Sub-Saharan Africa, and the Middle East experience more, not fewer, problems with Facebook than parts of the world with more appeals.”
- Twitter’s Algorithmic Bias: Twitter, which has often been cited as being more transparent than Facebook, announced on Thursday that its algorithms amplify rightwing political content. Rumman Chowdhury, Twitter’s Software Engineering Director, has a thread breaking down many of the findings. Perhaps the most important point, however, as Protocol reports, is that the company is still in the dark as to why this right-leaning bias is happening, a topic it will explore in future research. Daniel Kreiss points out that the research also highlights why banning political ads altogether on social media would be problematic.