Online platforms must limit the spread of hate speech, harassment, disinformation, and incitement to violence.
Social media platforms have become hotspots where white supremacists and militia groups spread conspiracy theories, run propaganda campaigns, incite hatred against communities of color, recruit followers, and coordinate offline violence. The resulting radicalization and real-world violence jeopardize individual and community safety, entrench racial and economic inequality, and chill political participation by marginalized communities. At the same time, political discussion online has become so poisoned that governing based on a shared set of facts is nearly impossible.
Digital platforms should protect freedom of speech while preserving a safe and civil environment for political discourse, one that encourages political engagement and the participation of marginalized communities. We support a targeted approach that clamps down on extremism without hampering civic groups and grassroots movements. Companies should conduct risk assessments to learn how their rules and algorithms contribute to radicalization, and cooperate with government agencies, experts, consumer associations, and civil society groups to design appropriate solutions. Social media companies should also be required to contain harmful activity that poses an imminent threat to public security or public health, including disinformation, hate speech, fake accounts, harassment, and messages intended to incite violence or suppress voters.