NEWS

Artificial Intelligence Plays a Critical Role Fueling Online Disinformation

MapLight | September 14, 2021

Social media algorithms and machine learning are contributing to the rise of online disinformation and pose a growing threat to our democracy. This week, we released a new report examining this problem.

Read the new report: Trained for Deception: How Artificial Intelligence Fuels Online Disinformation.

The report is part of a partnership between Decode Democracy, the Anti-Defamation League, Avaaz, the Mozilla Foundation, and New America’s Open Technology Institute through the Coalition to Fight Digital Deception. It outlines how social media platforms’ increasing reliance on artificial intelligence and machine learning to moderate and curate online content, and to target and deliver ads, has amplified sensational, harmful, and deceptive content, including misinformation and disinformation.

To alleviate the problem, we’re calling on social media and technology platforms to take a number of concrete steps, including: improving transparency around their policies; directing more resources toward fact-checking and moderation efforts; giving users access to more robust controls; and granting researchers more meaningful access to data. The report also recommends that lawmakers pursue policies to promote greater transparency and accountability for social media and technology companies, starting with comprehensive federal data privacy legislation.

Three areas in particular show how artificial intelligence and machine learning fuel online disinformation: ad targeting and delivery, content moderation, and ranking and recommendation systems. The report also provides an overview of dozens of pieces of federal legislation aimed at addressing data privacy, platform transparency, and accountability requirements for platforms that use algorithmic systems.

It’s become painfully clear that we cannot rely on Big Tech to police itself and protect our democracy. It’s time for policymakers to ensure that tools like artificial intelligence and machine learning are not doing more harm than good.

“Social media companies are profiting by maximizing our engagement on their platforms, even if that means disinformation spreads rampantly and our democracy suffers. The stakes are simply too high to continue allowing a handful of companies to shape what so many of us see — and do not see — without any transparency or accountability. It’s time for stronger federal laws to protect the integrity of our democracy against online disinformation and deception.” -- Daniel G. Newman, President of Decode Democracy

"Over the past few years, internet platforms have revamped how they tackle misinformation and disinformation, and are increasingly relying on AI and ML-based tools to combat misleading information online, “But, these tools can also amplify such harmful content. Unfortunately, platforms currently fail to provide adequate transparency into how they enforce their policies and practices, which makes holding them accountable challenging.” -- Spandana Singh, Policy Analyst at New America’s Open Technology Institute

“Interrupting disinformation is core to fighting online hate and pushing domestic extremism to the fringes of the digital world. Platforms are using artificial intelligence and machine learning to moderate content and deliver ads, but this reliance has led to the amplification—and monetization—of harmful and hate-filled disinformation. Without increased oversight, investment in tools to mitigate violative content, and better transparency, harmful disinformation will continue to flourish on social media—something we cannot afford.” -- Dave Sifry, Vice President of the ADL Center for Technology and Society

“Without better transparency and oversight into the ways in which AI is wielded, it’s impossible to build more trustworthy – and ultimately less harmful – systems. We can’t trust the fox to guard the henhouse. We need strong public scrutiny to fully understand how platforms rely on AI for engagement, ad-targeting, and content moderation.” -- Kaili Lambe, Senior Campaigner at Mozilla

"Social media platforms won't solve the problem of disinformation on their own.  Algorithms that amplify lies, hatred, and other toxic content are built into the very foundations of their services,” said  “A blueprint for comprehensive tech accountability legislation exists but it's scattered across more than a dozen different bills in the House and Senate. A unified approach is badly needed, one that brings together the promising policy proposals already on the table on research, reporting, and algorithmic transparency and accountability." -- Nate Miller, Campaign Director at Avaaz  
