
Misinformation & Representation: How Facebook Gives State Lawmakers a Pass to Spread COVID Misinformation

MapLight | November 11, 2021

Facebook made headlines this summer when it announced it had removed 18 million posts for violating the platform’s rules against sharing mis- and disinformation about COVID-19. At the same time, the company failed to provide necessary context: how many people had viewed those posts, and how many other posts with false COVID content it allowed to stay on the site. Based on a review by Decode Democracy, at least one powerful group of Facebook users has been largely exempt from the company’s rules prohibiting COVID mis- and disinformation: state lawmakers.

State lawmakers lack the high profiles of a president, governor, or member of the U.S. Congress. Day to day, however, they exert considerable influence over the activities of virtually every American. Besides passing laws, state legislatures typically have some degree of oversight over boards that regulate trades and professions, including lawyers, teachers, and physicians.

A Decode Democracy review of official Facebook pages finds multiple egregious examples of state lawmakers across the nation using the platform to spread or endorse misleading information about the pandemic or to promote distrust in COVID vaccines and masks, despite scientific evidence that both are safe and reduce transmission. The company has frequently left misinformation-filled posts from state lawmakers available on its site; occasionally Facebook has posted links to accurate information below the misleading posts instead of removing them. Either Facebook is failing to enforce its own standards against posts that violate its COVID content policy, or it is actively allowing state lawmakers to post dangerous, false claims.

Either way, the result is a platform where powerful users can promote misleading health claims with impunity during a public health crisis. Facebook is also using its Town Hall feature to enable constituents to follow the pages of lawmakers spreading misinformation, allowing their posts to achieve greater reach.

Widespread Disinformation

Each of the posts below violates Facebook’s community standards by promoting harmful COVID-related content. Yet in each case, Facebook has left the post available for viewing, commenting, and sharing. In some instances, Facebook has appended a small “Get Vaccine Info” label near the post.

Alaska State Sen. Lora Reinbold, an Eagle River Republican who was voted out of a committee leadership position and has been banned from Alaska Airlines for refusing to wear a mask, referred to the COVID vaccine as an “experimental medical procedure” in an Aug. 25 post, a violation of Facebook’s prohibition on false claims about the vaccine. Rather than removing the post, Facebook appended a “Get Vaccine Info” label to it.
Indiana State Rep. John Jacob, who was admonished by his Republican Party’s leadership last year for anti-Catholic and anti-Muslim Facebook posts, echoed falsehoods from a Phoenix-based evangelical pastor on his official Facebook page on July 24. Jacob’s post claimed that COVID vaccines “use dangerous technologies that no sane or honest person can call safe or tested.” The U.S. Food and Drug Administration approved the Pfizer-BioNTech vaccine on Aug. 23, noting that it and other vaccines “have met the FDA’s rigorous, scientific standards for emergency use authorization.” “Forcing these untested, dangerous vaccines upon children is the largest example of global child abuse ever seen,” Jacob reposted. Facebook has failed to add so much as a label to the post, which has been shared nearly 300 times.
New Jersey State Sen. Mike Testa cast doubt on the effectiveness of masks in preventing COVID transmission in an Aug. 6 Facebook post, saying schoolchildren “have been used as pawns in this game before, and we know the damage masks have done to their psyche and their educational and personal development.” Testa’s Facebook post linked to his official Senate website with a statement that “classrooms don’t fuel outbreaks. We won’t stand down and allow the Murphy Administration to steal one more day from our children by forcing them to surrender to unneeded and ineffective face protection.” Testa didn’t elaborate on the use of children as pawns, although New Jersey does require students to be vaccinated against diphtheria, tetanus, pertussis, polio, chickenpox, and hepatitis B. Children in the sixth grade and higher are required to be vaccinated against meningococcal disease. More than 650 COVID cases have been linked to school outbreaks in New Jersey.
Oklahoma State Sen. Rob Standridge claimed in an April 28 Facebook post that masks “are not effective in the manner in which they are being mandated … I know some will ask for sources, but I have posted many in previous posts so you can easily search thru my feed to find the sources.” He added: “Children are at practically zero risk.” A study of 340,000 people in Bangladesh found cloth or surgical face masks, worn properly, reduced the spread of COVID. Another study suggested that mask mandates may have helped prevent as many as 450,000 cases. Almost 5.9 million children have been diagnosed with COVID since the pandemic began. Almost 650 children have died.
Oregon State Rep. E. Werner Reschke said in a July 20 Facebook post that “there are effective, safe, and inexpensive therapeutics such as hydroxychloroquine/erythromycin/zinc and ivermectin.” None of Reschke’s proposed treatments has been proven effective against COVID. The number of exposures to ivermectin, a drug usually given to horses and cows to treat parasitic infections, reported to poison control centers shot up 245 percent between July and August. The FDA has rejected ivermectin as a COVID treatment. “You are not a horse,” the agency tweeted on Aug. 21. “You are not a cow. Seriously, y’all. Stop it.”
Texas State Sen. Bob Hall shared multiple videos on September 25 promoting COVID misinformation, including false claims that the delta variant was a result of the vaccine itself. Facebook has left the videos available, and they have received more than 40,000 views.

Facebook has been defensive about its role in the spread of mis- and disinformation about the pandemic, generally refusing to answer direct questions, even from members of Congress, about its algorithm and content moderation policies. While the company has insisted that it’s taking steps to prevent more COVID-related falsehoods from flooding its site, the nonprofit organization Avaaz found its “related pages” algorithm routinely sent Facebook users to pages promoting anti-vaccine content. Another study, published in August by Media Matters, found a video featuring misleading claims about COVID vaccines and masks had earned more than 90 million engagements and millions of views. And when the Center for Countering Digital Hate published a study attributing a majority of COVID disinformation to just 12 individuals, Facebook called the information “faulty” without releasing data to justify its claims. The company’s failure to act on mis- and disinformation was also exposed in a Sept. 13 Wall Street Journal article that found Facebook kept a secret list of politicians and other high-profile individuals who could violate its rules with impunity, based on whether they were “newsworthy,” “influential or popular” or “PR risky.”

Facebook Posts from State Lawmakers Spreading COVID Disinformation

Poster: Alaska State Sen. Lora Reinbold
Date: August 25, 2021
Facebook's Response: “Get Vaccine Info” label

Poster: Indiana State Rep. John Jacob
Date: July 24, 2021
Facebook's Response: None

Poster: Indiana State Rep. John Jacob
Date: August 2, 2021
Facebook's Response: “Get Vaccine Info” label

Poster: New Jersey State Sen. Mike Testa
Date: August 6, 2021
Facebook's Response: None

Poster: Oklahoma State Sen. Rob Standridge
Date: January 28, 2021
Facebook's Response: None

Poster: Oklahoma State Sen. Rob Standridge
Date: April 28, 2021
Facebook's Response: None

Poster: Oklahoma State Sen. Rob Standridge
Date: May 17, 2021
Facebook's Response: “Get Vaccine Info” label

Poster: Oregon State Rep. E. Werner Reschke
Date: July 20, 2021
Facebook's Response: None

Poster: Texas State Sen. Bob Hall
Date: February 9, 2021
Facebook's Response: “Get Vaccine Info” label

Poster: Texas State Sen. Bob Hall
Date: July 30, 2021
Facebook's Response: None

Moving Forward

Our research unveils serious gaps in how Facebook addresses harmful content related to COVID-19. By taking the five steps outlined below, Facebook can limit the impact of dangerous mis- and disinformation about COVID on its platform.

1. Stop exempting politicians’ speech from fact-checking.

Facebook relies on independent third-party fact-checkers “to identify, review and rate potential misinformation” on the platform. Each time a piece of content is rated as false, the platform reduces its distribution, applies a warning label, and sends a notification to users who shared or attempted to share the post. Unfortunately, Facebook does not allow fact-checkers to review and rate posts and ads from politicians. This exemption covers both organic posts and paid advertisements created by “candidates running for office, current office holders — and, by extension, many of their cabinet appointees,” as well as “political parties and their leaders.” Facebook claims that debunking political speech “would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words.” In fact, the exemption gives politicians aiming to spread hoaxes, lies, and conspiracy theories a massive platform to do so. Preventing fact-checkers from debunking COVID mis- and disinformation spread by politicians enables Facebook to continue amplifying inflammatory content and letting it go viral. Facebook itself has noted that “public figures often have broad influence across our platform and may therefore pose a greater risk of harm when they violate our Community Standards or Community Guidelines.”

2. Introduce ‘holding areas’ where fact-checkers can evaluate COVID-related content by politicians and high-reach accounts before posts go public.

COVID mis- and disinformation from people wielding power or influence can have an outsized impact on people’s trust in health authorities and their acceptance of public policy measures to combat the pandemic. Once false COVID-related content is shared online by a trusted source, its negative impacts can persist long after the post itself is removed. Preventing posts with harmful content from spreading in the first place is key. Facebook should ask its fact-checking partners to prioritize COVID-related content shared by politicians and high-reach accounts to minimize risks. In addition, Facebook should introduce ‘holding areas’ where fact-checkers can evaluate content before it is posted online. Posts containing COVID mis- and disinformation should be removed before going public, while other posts that fall into a grey zone should be appropriately labeled to make users aware of their disputed nature and provide easy access to authoritative information.

3. Clearly label the pages of lawmakers posting mis- and disinformation in the Town Hall feature.

Facebook’s Town Hall feature launched in 2017 and aims to connect Facebook users to their local government officials based on the user’s address. The company describes the tool as part of its effort to “build civically engaged communities and make it easier for people to have a voice in government.” But when government officials spread harmful, deceptive content that violates Facebook’s policies, their pages should not be placed in front of constituents without proper context. Facebook should clearly label the pages of state lawmakers who have violated the company’s policies so that Town Hall users have more context about the official’s history of sharing harmful content.

4. Take the local context of a post into account.

Facebook’s community standards prohibit content that contains misinformation contributing to the risk of imminent violence or physical harm. But as Facebook’s Oversight Board noted in a recent decision, the company “does not take into account local context when defining the threshold of imminent harm for enforcement of the policy on misinformation and harm.” The Oversight Board is right to recommend that Facebook more carefully consider local context when assessing risk. Facebook should take into account that false information about COVID-19 shared by state legislators creates support for reckless policy measures that increase the risk of physical harm, especially among vulnerable communities.

5. Disclose critical data about its handling of COVID mis- and disinformation.

Researchers and watchdogs can play a crucial role in preventing, identifying, and addressing the harms of COVID false information. To do that, they need dependable access to social media data. Public authorities also need access to social media data to evaluate the breadth and depth of the problem and devise effective policy strategies. While Facebook stresses the need for facts and data in the fight against COVID, the company itself makes it hard to find factual health information. Despite growing pressure from the government, civil society, and watchdogs, Facebook has yet to disclose insights about the amount of COVID-related mis- and disinformation circulating on its platforms. And the company’s latest efforts to increase transparency have been flawed and disingenuous. Instead of providing data on its handling of COVID mis- and disinformation in a piecemeal manner meant to improve the company’s public perception, Facebook should give researchers, public authorities, and watchdogs real-time access to key metrics and release a standardized and comprehensive public bulletin describing its efforts to limit the spread of health-related harmful content.