The recent outbreak of COVID-19, or the “coronavirus”, has captured the world’s attention. Since it was officially deemed an international emergency in January, ceaseless reporting has examined the potential implications of the virus for global health, state security, and the international economy.
What has largely flown under the public radar, however, is the way the epidemic has also inspired a rampant spread of false or misleading information – information that has proven counterproductive to efforts to fight it. In this sense, the flood of misinformation represents a crisis of its own: not only does it impede the line of communication between experts and the general public, but it also shrouds efforts to fight the virus in confusion and misdirection.
Some of the misleading narratives gaining popularity, like the suggestion that the virus is a Chinese bioweapon, divide global efforts along political lines. Others promoting alleged “cures,” like drinking bleach, are inherently dangerous and harmful to public health.
Nonetheless, this state of hysteria surrounding the coronavirus is nothing novel. Rather, it is the latest demonstration of how false, irrational, and harmful information can spread like wildfire among the public – particularly in times of crisis. It is also the latest episode of officials scrambling to contain such a misinformation trend and limit the scope of its consequences.
The tendency to believe and share “misinformation” – or information that runs contrary to established understandings of truth – seems like irrational behaviour. It is, however, inherently human behaviour, as suggested by sociological and psychological studies.
Indeed, when the human brain selects what information to “believe”, it is rarely objectivity or impartiality that makes the choice. Instead, emotional triggers and preconceived personal beliefs play a more decisive role. In scenarios like global health epidemics, where fear and panic run high, misinformation that appeals to these triggers and beliefs finds an ideal breeding ground.
For such a spread to occur, however, a false or misleading narrative must first be created. While this can certainly happen naturally through misunderstanding or naivete, it is often done intentionally, and for strategic ends.
Exploiting the irrational ways in which humans think is a timeless practice, pursued for profit, for political objectives, or simply to incite chaos. It is the general science behind propaganda, and it has played a frequent role throughout modern history.
Yet it is undoubtedly true that the problem of misinformation has been transformed in the 21st century. The onset of the global communications era and the internet has amplified the symptoms of misinformation in two key ways.
For one, there is simply more information accessible to the general public, and it is easier for malicious actors with misleading messages to reach a greater audience. In effect, the lines between what is and isn’t “credible” are obscured, and the internet is shrouded in confusion.
Technological advances such as deepfakes and voice manipulation further exaggerate this effect. With a weaker grip on what can be considered “true”, the selection of facts based on preconceived beliefs and appeals to emotion has an even greater effect.
In addition, the tools of the internet facilitate a quicker and easier spread of misinformation. Techniques like microtargeting and automation make it feasible to engineer the spread of an idea across a population with sharp precision. Demographics of individuals most likely to believe and reproduce an idea can be preselected, and online “bots” can do much of the legwork of sharing content themselves.
Commercial platforms like Facebook and Twitter amplify this feasibility. To begin, these sites are built around networking and carry loose regulations regarding the fact-checking of the content they host. In addition, the platforms’ internal algorithms have been shown to systematically prioritize content that incites strong responses from audiences, rather than content considered “true” or credible. In effect, these sites become hotbeds for the production, as well as the distribution, of misinformation.
The US presidential election in 2016, and the UK’s Brexit referendum the same year, made these mechanisms linking the internet and misinformation apparent. Ever since, a polarising debate has taken shape: as we continue into the 21st century, will the innovations of the digital age further exacerbate the problems of misinformation, or will our ability to control this problem catch up to, or even be aided by, the advances of technology?
The coronavirus has become the newest stage for this debate to play out.
Many experts persist in the opinion that misinformation in the 21st century represents a problem that cannot be solved. They believe that appeals to sensationalism and preconceived beliefs are deeply ingrained in human psychology and cannot be removed through innovation. Instead, episodes like the coronavirus simply demonstrate how technology works to further exploit these innately human flaws.
Nonetheless, efforts by officials, such as those at the WHO, to counter such narratives have created cause for optimism. Shortly after the outbreak, the “WHO Information Network for Epidemics” program was instituted. Through campaigns that provide scientifically verified information to news outlets and social media, as well as debunk false theories, it has been attempting to establish a persuasive, factual narrative.
In addition, the WHO has coordinated with tech giants like Facebook and Twitter, as well as with major employers across the world, to boost the spread of this verified information and quell the momentum of falsehoods. These efforts demonstrate that cooperation across national lines, and between public and private sectors, can be achieved to address contemporary problems of misinformation.
Independent of the influence of misinformation, the coronavirus currently represents an alarming threat to the stability and health of global society. As has been demonstrated, however, misinformation only heightens the difficulties of combating the virus and enables its spread. Indeed, efforts to fight the virus ought to treat the spread of false ideas just as seriously as they treat the pathogen itself.
In fact, while the current coronavirus outbreak will eventually subside, misinformation is bound to be an important dimension of the next epidemic as well. Confronting the role of misinformation in the digital age is the necessary first step in addressing the global collective action problem that it represents.
Edited by Rebecka Pieder.
The opinions expressed in this article are solely those of the author and they do not reflect the position of the McGill Journal of Political Studies or the Political Science Students’ Association.
Image by NIAID via Flickr Creative Commons.