With online disinformation threatening the health of democracy in the United States, major social media platforms made more than 300 changes to their policies in just a year and a half in areas including civic integrity, violence and extremism, and public health. These changes have largely failed to alleviate the growing problem of online disinformation, and most came only in response to crises and public pressure that forced the platforms to act.
Our new report, All Change, No Fix: How Big Tech’s “Self Regulation” Fails Our Democracy, documents and analyzes policy changes adopted since August 2019 across nine platforms: Google, Facebook, Twitter, YouTube, WhatsApp, Instagram, Reddit, Snapchat, and TikTok. In all, the companies made at least 321 policy changes affecting disinformation and democracy in 18 months, spanning six major categories: civic integrity; violence and extremism; public health; consumer empowerment; data policy; and platform governance and other operational features. Seventy percent of those changes occurred at Google, Facebook, and Twitter.
This patchwork of ineffective, constantly changing, and reactive policies created and enforced by private companies is simply not working. Our lawmakers must recognize that Big Tech cannot police itself and that we need stronger laws and regulations to protect our democracy from disinformation.
Because the algorithms that drive social media thrive on conflict, platforms have been reluctant to address the root cause of the problem: a business model that amplifies polarizing content to maximize engagement and profits.
Our new report reveals:
- Despite making more than 300 policy changes in 18 months, major social media platforms have largely failed to alleviate the growing problem of online disinformation.
- The major social media platforms are failing to anticipate disinformation. Instead, they are reacting to crises and public pressure.
- The major social media platforms failed to properly counter disinformation during the 2020 presidential election cycle and continued to respond in half-hearted, piecemeal fashion to disinformation after the election, with catastrophic consequences.
The report documents multiple instances of social media companies waiting to make changes until after online disinformation and hate speech resulted in significant real-world damage, including when Trump supporters, fueled by disinformation spread on social media, stormed the U.S. Capitol Building on January 6th. While the platforms had years to prepare policies limiting political disinformation heading into the 2020 election, lies circulated freely across social media throughout the election cycle. The platforms’ changes were frequently too little and too late, and required further updates or modifications. Twitter didn’t clarify how it would handle false claims of election victory until one day before the election. Facebook further tightened its policies one day after the election.
Now it’s time for our lawmakers to take action. The report offers detailed recommendations for how to do exactly that, including instituting a coordinated national response from Congress and the Biden-Harris administration, scrutinizing the business models of social media platforms to hold them to account, and passing the For the People Act (H.R. 1 / S.1) to increase online transparency.
Read our full report here: All Change, No Fix: How Big Tech’s “Self Regulation” Fails Our Democracy