The COVID-19 “Infodemic”
The pandemic has been accompanied by an “infodemic” of misinformation and propaganda, undermining public health responses around the world. Building resilience will require a joint effort from social media platforms, governments, and civil society.
As the coronavirus started to spread around the globe in early 2020, so did a wave of misinformation and propaganda that has jeopardized public health, helped to undermine trust in government responses, and shifted blame onto imaginary scapegoats. The sources of misinformation were diverse, including authoritarian governments, anti-vaccination activists, and the US president, but also ordinary citizens, who unwittingly shared misleading content with their friends and families. In some cases, the sharing of misinformation was reported to have had an immediate impact on public health, either directly through the intake of toxic substances falsely claimed to cure the disease, or indirectly by contradicting public health advice about mask-wearing and social distancing.
The fact that a global health crisis would also be exploited by actors engaging in disinformation—the deliberate spreading of falsehoods with political motives—was no surprise: in almost every pandemic from the Black Death to Ebola, the spread of infectious diseases has been accompanied by the diffusion of false information, with deadly consequences for the infected and collateral victims alike.
In the 14th century, the blame for unexplainable outbreaks of the plague was attributed to Jews, thousands of whom were burned amid local outbreaks, leading to the destruction of hundreds of Jewish communities across Europe. When the Spanish flu erupted in 1918, wartime censorship from various governments left their respective populations uninformed about the scale and severity of the outbreak. Potentially inspired by the KGB’s Cold War operation to attribute the spread of AIDS to a US bioweapon, the more recent outbreaks of the swine flu in 2009 and Ebola in 2014 saw the Kremlin-controlled broadcaster RT (formerly Russia Today) featuring conspiracy theories that the diseases had been created in US military labs.
In the case of COVID-19, what has been surprising to many observers is the speed and scale with which falsehoods and propaganda spread across platforms and societies, with one study estimating that content published by accounts spreading health misinformation obtained more than one billion views. The ubiquity of misleading health information led the World Health Organization (WHO) to raise the alarm and label the situation as an “infodemic,” even before invoking the status of a “pandemic” on the epidemiological side.
Key Actors and Strategies
The sources of misinformation and propaganda around COVID-19 are so diverse that this article can only provide a selective overview. To begin with, authoritarian governments have used the pandemic to promote their geopolitical interests and discredit their rivals. In a study conducted at the University of Oxford, my colleagues and I found that Russia was leveraging its state-backed media outlets and their respective distribution networks to promote the Russian response to the virus, criticize the capability of democratic societies to effectively manage the health crisis, and, similar to Chinese and Iranian outlets, provide a platform for conspiracy theories, such as the claim that the virus may have been bio-engineered in a US military lab.
In multiple cases, Russian outlets and the commentators they featured further questioned the effectiveness of masks, in some instances actively encouraging their audience “[not to] wear a face mask—not in shops, not on public transport, not anywhere. Say you have asthma or offer no explanation.”
Following this call for resistance by RT, some social media users in Germany and the United Kingdom alike called their governments’ mandatory mask policies “despotic” and likened the respective legislation to “Germany in the 1930s.” Chinese outlets such as CGTN or Xinhua further promoted various alternative theories that claimed the virus may have originated from places other than Wuhan. They also aired a fabricated video claiming that Italians had played the Chinese anthem in gratitude for support received from China. Both Russia and China deployed propaganda accompanying the orchestrated deliveries of medical aid to Europe, Africa, and South America.
Adding to these state-backed efforts, numerous conspiracy theorists have exploited the COVID-19 pandemic by spreading false or speculative information about the origins of the virus, the effectiveness of public health measures, or pseudo-medical cures. Some of these narratives have portrayed the virus and public health responses as clandestine operations for population control or mass sterilization, and have also linked the outbreak to a variety of causes including 5G or polio vaccines. One example was the Plandemic movie, which promoted numerous pieces of false health information, including that the virus was the result of laboratory manipulation, that flu vaccines increase the chance of contracting COVID-19, or that masks cause infections. The documentary-style video quickly went viral, amassing more than eight million views and hundreds of thousands of shares on social media. In addition, thousands of similar examples have spread across the Internet, including on emerging platforms such as TikTok, Telegram, or Parler.
Certain celebrities and politicians, including US President Donald Trump, have been another key source of misinformation, sharing misleading or false information and medical advice with their followers on social media. A final group of less “malignant” spreaders of misinformation has been ordinary citizens, who have shared false information either because of a lack of trustworthy information during the dynamically evolving pandemic, or due to psychological biases such as cognitive dissonance or confirmation bias.
Bad for Public Health and Democracy
One of the central and most complex questions regarding misinformation is its impact. In other words, while it is interesting to measure the amount of attention a misleading claim or false piece of advice receives, what really matters is whether the consumption of such information leads to individuals making poor health choices or losing trust in democratic institutions, both of which directly undermine public health responses.
Unfortunately, there is reason to believe that misinformation may have the capacity to seriously impact public health. For example, one study linked multiple deaths in Iran to the drinking of alcohol-based cleaning liquids shortly after false advice praising methanol as a cure for COVID-19 had circulated on social media. In India, three Muslim men were violently attacked, likely because of rumors that Muslims were spreading the virus. And after Donald Trump encouraged the preventive intake of hydroxychloroquine, hospitals from Nigeria to Vietnam recorded spikes in poisoning admissions, with one man in Arizona dying after ingesting a chloroquine-based fish tank cleaner.
While directly attributable lethal incidents remain the exception, there is evidence that coronavirus-related misinformation may have impacts on many areas of public life and health. For instance, a study from King’s College London found that individuals holding conspiracy beliefs about COVID-19 are less compliant with social distancing rules or responsible health behavior such as regular handwashing or staying home if they have symptoms. Furthermore, in the UK alone more than 70 telephone masts were vandalized or burned following the spread of conspiracy theories linking the outbreak with 5G technology. And as countries across Europe went into lockdowns, the stockpiling of goods was accompanied by viral videos of long lines in front of supermarkets, in many cases using fabricated footage and images taken out of context.
Combatting a global wave of misinformation is a complex challenge that requires an overarching and coordinated approach involving tech platforms, government officials, and scientists, but also other members of society such as journalists, or in the long run, teachers and parents. Many large social media platforms, some of which previously favored a hands-off approach to content moderation, have increasingly recognized their role in facilitating the spread of dangerous falsehoods and have become significantly more interventionist as the pandemic unfolded. For example, Facebook has not only introduced fact-checking labels under posts about COVID-19, but also introduced a policy that bans potentially harmful content. Twitter and TikTok similarly introduced warning labels, while YouTube announced that it would remove misleading health advice.
Building Information Resilience
Despite these measures, many platforms were caught off guard in the early stages of the pandemic, and their largely reactive policies were unsuccessful in preventing the spread of millions of examples of misinformation through Facebook groups, Twitter feeds, or embedded in videos uploaded to YouTube and TikTok. This lack of success in rapid content removal was further exacerbated by the opaque implementation of content moderation policies—a weakness frequently exploited by conspiracy theorists, who complained about censorship while successfully circumventing platform policies through coordinated re-uploading.
Furthermore, researchers have accused platforms of prioritizing Western audiences when allocating content moderation resources, leaving less industrialized countries in the Global South without adequate support for responding to COVID-19 health misinformation. Another blind spot is non-public messaging apps such as WhatsApp and Telegram, which play a key role in the peer-to-peer transmission of false information within networks of friends and families.
Journalists and fact-checking organizations have also made an enormous effort to help people find reliable information and identify falsehoods. However, fact-checking is a scarce resource and can only superficially solve the problem of contaminated health information during a dynamically evolving pandemic. And in the absence of reliable automation, content moderation will remain an expensive undertaking. Even rapid progress in artificial intelligence and machine learning is unlikely to solve the problem, as fact-checking often requires nuanced interpretation rather than simple binary classification. Furthermore, disseminators of disinformation are highly adaptive to platform responses, circumventing alleged censorship through coordinated re-uploading, dog-whistling, or sharing links to external sources in comment sections.
Governments and health authorities also have an important role within the fight against misinformation. As a large share of conspiracy theories and health misinformation includes criticism of government responses and public health measures, it is essential that such policies are communicated in a transparent way. This includes admitting and explaining mistakes. One example is provided by the initial advice around mask-wearing, which was first discouraged by the WHO and national governments, only for them to later change course based on updated scientific evidence.
The lack of clear communication about such policy reversals is frequently exploited by skeptics advocating against mask mandates. Furthermore, governments should pro-actively discuss regulatory countermeasures to prevent harmful health advice from proliferating through the public sphere, while balancing individual liberties and freedom of speech. For example, legislators could require social media platforms to implement more pro-active and resource-intensive moderation policies, or provide substantive funding for independent fact-checking organizations.
Only the Prologue?
Despite the ubiquity of misinformation during the past few months, there is reason to fear that the worst is yet to come. As scientists around the world race to produce a COVID-19 vaccine, a well-connected network of anti-vaccination activists is preparing to undermine public trust and sow doubt about the purpose and safety of a potential vaccine at all costs.
Anti-vaccine activism is by no means a new phenomenon. Instead, anti-vaccination movements have worked for years to demonize vaccines—falsely claiming that they are unsafe, harmful, or, even worse, part of a clandestine effort by governments to control populations. Uniting millions of supporters worldwide, they form a network of densely connected and tech-savvy individuals, including liberal yoga moms and ultra-conservative skeptics of government alike. After years of experience, the online anti-vaccination community is extremely organized and strategic, exploiting the features of social media where possible and creating an outcry about alleged censorship when restrained. In a sense, the movement has been preparing for this moment for years, and COVID-19 is the natural occasion to deploy all of its resources.
These activists are not alone in their efforts to undermine global vaccine development. Instead, domestic vaccine critics are being opportunistically joined by malign state actors seeking to undermine trust in vaccines, and in the democratic governments funding their development. With a previous track record of amplifying divisive vaccine debates in Western societies, including through the use of inauthentic “troll” accounts, Russia has recently intensified its vaccine propaganda—simultaneously promoting its own “Sputnik V” vaccine candidate and sowing doubt about the safety of Western competitors. In a recent investigation, The Times of London revealed an alleged Russian disinformation operation targeting the COVID-19 vaccine developed at the University of Oxford. The core narrative of this campaign was the claim that taking the vaccine would turn humans into monkeys.
These trends are particularly worrisome because high vaccination rates are essential for establishing herd immunity. Although it is difficult to measure how much anti-vaccination content is responsible for a recent decline in trust in a COVID-19 vaccine, a survey by the University of Hamburg indicated that by September 2020 only 57 percent of Germans said they were willing to get vaccinated once a vaccine becomes available, down from 70 percent in April. Similar figures can be found across the globe, with only 48 percent of French people and 51 percent of Americans signaling an intention to get vaccinated. At the same time, 44 percent of US Republican voters recently said they believed that “Bill Gates wants to use a mass vaccination campaign against COVID-19 to implant microchips in people that would be used to track people with a digital ID.”
Against this backdrop, making the development of a coronavirus vaccine a success will require a multidisciplinary and pro-active effort by social media platforms, governments, the scientific community, and civil society alike. This effort should include rigorous transparency about vaccine trials and potential complications, the provision of sufficient resources for strategic communication, as well as a vigilant and pre-emptive approach to detecting and countering domestic and foreign attempts to undermine trust in COVID-19 vaccines expected for early 2021.
Marcel Schliebs is a PhD candidate at the University of Oxford and Researcher in the “Computational Propaganda Project” at the Oxford Internet Institute.
IP Special, 02-2021, January 2021, pp. 46-51