Facebook says it’s taking on Covid disinformation. So what’s all this?

One month ago, Facebook announced a harder line against Covid-19 and vaccine disinformation. A concerning amount of harmful content is still going viral.

Bold black and red text screams “Do you know what’s in a VACCINE?”, listing off alleged “toxic” ingredients in garish fonts of erratic sizes. Over 620 people reacted to the Facebook post in the four days to February 25, and many more will have seen it. But its claims are completely false.

Last month, after more than a year of misinformation and falsehoods about Covid-19 proliferating on Facebook, the platform pledged to take down incorrect claims about Covid-19 and vaccines. It listed a range of specific, outright false statements it would remove, including that Covid-19 jabs are untested, contain toxic chemicals and kill people. Yet many of those claims are still live on Facebook – and some are gaining thousands of interactions before removal.

Since Facebook updated its policies against Covid-19 misinformation on February 8, at least 3,200 posts containing the newly banned claims about Covid-19 vaccines have been published on the site. These posts have attracted at least 142,049 likes, shares and comments, according to a data analysis of posts mentioning the banned claims that appeared on Facebook Pages and public groups in the weeks after the ban.

Data from CrowdTangle, an insights tool owned and operated by Facebook, shows that posts containing banned allegations about the safety of the vaccine received over 12,400 interactions on Facebook before being taken down. These posts spread baseless claims that the Covid-19 vaccine “causes neurological disorders”, “produces sterility in 97% of women” and “changes your DNA”. Posts misleading users about the composition of vaccines – including claims that Covid-19 vaccines contain the “mark of the beast”, “toxic ingredients” or “aborted fetal cells” – also attracted at least 5,000 likes, shares and comments in two weeks. The analysis focused on English keywords, but similar claims are spreading in French, Italian, Dutch, Romanian, Finnish and Spanish.

Given Facebook’s reach (2.8 billion people use it each month), these numbers form only a small part of a far bigger misinformation problem. CrowdTangle only tracks public content – it does not show posts from closed groups or personal profiles, including those belonging to people with thousands of followers – so it is challenging to assess the full scale of Covid-19 vaccine falsehoods on the platform. But while Facebook has repeatedly announced a tightening of its moderation rules, it appears unable to enforce its own policies quickly enough to prevent the spread of misinformation.

The findings illustrate the difficulty of banning a wide patchwork of falsehoods on a platform that was never intended for extensive policing, but founded as a means of unrestrained online sharing. By design, Facebook wasn’t built for bans, says Joan Donovan, a disinformation expert and research director of Harvard University’s Shorenstein Center on Media, Politics and Public Policy. “They did not build their systems for content moderation so they’re continuously trying to implement policies they do not have the technology or competency to enforce.”

Facebook did eventually spot and remove many of the posts featuring banned claims published in the days following the ban. But the platform was often slow to act, and some egregious pieces of misinformation survived long enough to reach tens of thousands of people.

For example, a false claim that the Covid-19 vaccine has “caused 929 deaths” – and that the true figure could be “as high as 56,000” – has been pushed more than 1,300 times on Facebook since the ban, attracting at least 53,449 interactions. Several videos propping up misinformation about the vaccine from BitChute and BanThis.tv, two popular conspiracy theory websites, are still circulating on Facebook at the time of writing.

Over 2,000 BitChute videos mentioning the Covid-19 vaccine have been posted to Facebook Groups and Pages since February 8. In one of these videos – shared on Facebook more than 3,000 times since February 8 – Ohio-based osteopath and anti-vaccine activist Sherri Tenpenny falsely claims, without providing any evidence, that Covid-19 vaccines will start depopulating the world “in three to six months”. One BanThis.tv video falsely claiming the Covid-19 vaccine is “mass eugenics extermination” was viewed 891,046 times and shared 419 times on Facebook in the 24 hours after the new moderation rules were adopted.

Even when posts were taken down, there were inconsistencies in how the policies were enforced. Facebook banned anti-vaxxer Robert F Kennedy Jr from Instagram on February 10 after he shared false Covid-19 claims. However, Kennedy Jr’s personal Facebook Page and the Page promoting his website, Children’s Health Defense – both of which regularly share Covid-19 vaccine misinformation – are still live and have nearly 450,000 followers combined.

When Facebook does take action against harmful content, Donovan explains, it’s often after the problem has already spiralled. As a result, she says, the anti-vaxx movement has been left “unchecked” and has made a permanent home on the platform. In some cases, banned phrases have reached more Facebook feeds since the ban than before it. Occurrences of the phrase “depopulation Covid vaccines” increased by 256 per cent in the two weeks after the ban, compared to the same timeframe before it: prior to February 8 it appeared in 178 posts, while in the two weeks following the ban it appeared 633 times and received 60,304 interactions.

A Facebook spokesperson says the company has removed “12 million pieces of harmful misinformation about Covid-19 and approved vaccines” and that it is redirecting users to trusted online resources from health officials. Facebook is also working with 80 fact-checking organisations, the spokesperson adds.

Facebook has struggled with anti-vaxx content since the onset of the pandemic, frequently updating its policy to confront new types of falsehood as they emerged. While all social platforms have a hard time staying free of Covid-19 misinformation posted by users, Facebook has come under particular scrutiny for making and breaking promises, and for its chequered track record on countering other types of disinformation, from QAnon to Holocaust denial.

“[The problem is] bigger on Facebook because it’s a feature to have these pages and groups where you can launder disinformation – medical misinformation – across huge networks of people with little to no accountability or consequences,” says Donovan. This dynamic helps influencers, however small, grow their followings, bringing them fame and fortune.

Those peddling the claims range from conspiracy theory pages opposing “corrupt” governments and a supposed “new world order” to anti-vaxx non-profits posting misleading health articles, to local religious influencers and public figures denying the severity of Covid-19 to push an anti-lockdown agenda. Those questioning the vaccine on Facebook often future-proof their accounts by using crafty tactics to dodge moderation policies and avoid removal. Some accounts sharing anti-vaxx content have evaded detection through the use of code words for “vaccine”. An analysis of Facebook data revealed nearly 200 Facebook posts containing unconventional spellings, including “va$$ine”, “va((ine” and “va**ine”. Others tactically deploy vague language, for example by asking questions rather than making explicit claims, to circumvent moderation.
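To illustrate how such symbol-substituted spellings can slip past exact-match keyword filters, here is a minimal sketch of the kind of pattern matching an analysis might use to flag them. The regular expression and sample posts below are hypothetical, for illustration only – they are not the actual methodology behind the figures in this article.

```python
import re

# Hypothetical sketch: flag spellings of "vaccine" where a repeated
# symbol stands in for the double "c", e.g. "va$$ine", "va((ine",
# "va**ine". The pattern and sample posts are illustrative only.
EVASIVE_VACCINE = re.compile(r"va([^a-zA-Z\s])\1ine", re.IGNORECASE)

sample_posts = [
    "Do you know what's in a va$$ine?",      # symbol substitution: flagged
    "Just asking questions about the jab.",  # vague language: not caught
    "Book your vaccine appointment today.",  # normal spelling: ignored
]

flagged = [post for post in sample_posts if EVASIVE_VACCINE.search(post)]
print(flagged)  # ["Do you know what's in a va$$ine?"]
```

A real moderation system would need far broader coverage than a single rule like this – misspellings, homoglyphs and the deliberately vague phrasings described above are much harder to catch than simple symbol substitutions.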

Even if Facebook had the means to closely regulate what’s posted on its platforms, it first needs to invest in understanding why and how vaccine misinformation spreads, says Donovan. “Facebook hasn’t once recognised what they’ve built. Misinformation research argues if not for Facebook existing we would not have the problem of misinformation at scale in our society.”

Updated 10.03.2021, 16.19 GMT: This article has been updated with more information about Facebook’s partnership with fact-checking organisations.

Lydia Morrish and Carlotta Dotto are staff journalists at First Draft

This article was originally published by WIRED UK