Beware the Information Lockdown

The new year has witnessed a dramatic escalation of efforts by big tech platforms to curb the spread of “misinformation,” a nebulous danger that first came into view amidst the 2016 electoral victories of Brexit and Trump. “Misinformation” sometimes refers to genuinely misleading claims; at other times, the term is deployed to discredit anything that contradicts current media consensus. In response to this spectral enemy, social media companies are now moving unevenly and inconsistently to exert more control over the content shared on their platforms. The indefinite ban of the outgoing president from his favored platform, Twitter, as well as its competitors, was a remarkable culmination of this process.

Trump was banned for inciting the rioters who occupied the Capitol on January 6th in a futile attempt to prevent the certification of the election. Twitter also banned other prominent figures who had asserted election fraud, as well as around 70,000 accounts that had made posts related to the pro-Trump QAnon conspiracy theory. Google, Apple, and Amazon also acted in sync to effectively deplatform Parler, an alternative to Twitter popular on the right. The rationale for these moves was that the circulation of false election fraud allegations had led to the catastrophic events of January 6th, which left five people dead.

This was not the first time that online platforms and hosts have removed material that allegedly incited acts of violence. However, the COVID-19 pandemic and its political fallout have since laid the groundwork for a far broader crackdown on speech. Indeed, one consequence of the virus has been a significant expansion of what counts as “dangerous content.”

Previously, anxieties about the impact of hateful or misleading posts tended to focus on shocking but statistically unusual events. For example, the 2019 Christchurch massacre led to many calls for tech companies to regulate content. But as long as the argument for more aggressive censorship depended on media coverage of rare but spectacular acts of violence, its force faded once the news cycle moved on.

Even the deadliest acts of terrorism have resulted in a tiny fraction of the deaths caused by the pandemic. As a result, if not wearing a mask or seeing friends and family outside one’s home is tantamount to “literally murdering people,” skepticism about these measures may be treated as incitement. Now that everyone can be regarded as a potential bioterrorist as soon as they step outside their home, the risks attributable to online speech have massively expanded. Any content that might cause individuals to defy public health orders, or might impede efforts to mitigate the spread of the virus, can be viewed as deadly. The parallel anxieties provoked by the virus and by “viral” misinformation have thus enabled the restrictive logic of lockdowns to ramify in internet spaces.

One of the first waves of online censorship in the past year was directed against opponents of lockdowns, masking, and other measures. Last April, YouTube’s CEO announced the platform would take down content that contradicted WHO guidelines on COVID-19—a remarkable move given that the WHO had repeatedly revised its own assessments in the prior months. Later in the year, after a group of prominent scientists and doctors published a declaration questioning the efficacy of lockdown policies, many observed that Google appeared to be steering internet users away from the document and towards articles supportive of lockdowns.

One indication of the potentially counterproductive consequences of this approach came last June, when Twitter users began to notice that the platform was automatically flagging tweets containing the words “5G” and “corona” and similar combinations. This was the platform’s response to a strange theory that claimed the virus was a myth invented to cover up the harmful impact of radiation from 5G towers. Yet, in a typical instance of the Streisand effect, thousands began to tweet “5G corona” just for the humor and novelty of triggering the warning. Containing the pandemic is now seen as dependent on containing the infodemic, but achieving this goal is no simple task: the many jocular tweets this strategy inspired surely made far more people aware of the 5G-COVID theory than would otherwise have encountered it.

Likewise, the banning of accounts, shutdowns of platforms, and deletions of posts will not entirely stop the spread of the alleged misinformation; these measures will only redirect its circulation. Fewer people overall may be exposed to the banned claims, but the most fervent adherents—who are also the likeliest to translate ideas into action—will continue to propagate the banned content via word of mouth and less restrictive channels.

The more substantive problem with suppressing and banning dissent from current public health orthodoxy is that the science of lockdowns, masks, and other measures is more uncertain than governments or tech platforms want to acknowledge. Those making the case for indefinite lockdowns have maintained the moral and political high ground by citing the risks to vulnerable groups, but the appeal of such claims to most people rests less on awareness of the established science than on the intuitive logic of fearing viral spread.

Lockdowns were originally enacted as a way of buying time until a more comprehensive course of action could be implemented. Just as blocking content on some platforms does not stop its circulation altogether, the efficacy of lockdowns for combatting the virus is limited by their incompleteness. A great deal of economic activity cannot be fully suspended or done remotely. As a result, lockdowns have not eliminated risk, but concentrated it among lower-income workers who must still work in close proximity to others and their families. Furthermore, the effects of prolonged lockdowns, even on those who can take the advice to “stay home,” may counteract some of their benefits. That our leaders continued to promote these questionable half-measures for so long betrays their failure to devise a more effectively targeted approach.

The current rollout of new vaccinations indicates the logical endpoint of the lockdown approach. Public health authorities largely rejected herd immunity acquired through natural infection as a defense against the virus, but widespread inoculation by way of vaccines is itself a form of herd immunity. The information lockdown currently being implemented has no comparable endpoint. Indeed, the viral metaphor for the spread of information falters here, because there are no vaccines for “misinformation.” If there is an equivalent to herd immunity, it is poorly understood, but contrary to prevailing assumptions, it is plausible that limiting the public’s exposure to conspiratorial and pseudoscientific claims might render us more receptive to them, not less.

Nevertheless, the increasingly censorious approach lately underwritten by COVID-19 misinformation anxiety seems unlikely to subside, even assuming the vaccine restores some degree of normalcy, because the virus offers a useful proxy for a free-floating fear of underregulated digital communication platforms—a fear that first emerged out of the political shockwaves of 2016. Ever since big tech was blamed for spreading misinformation that led to the victories of Brexit and Trump and the rise of right-wing extremists, the conviction that information could be politically fatal has become the new consensus among liberal journalists and politicians. Now that the metaphorically viral spread of information has merged with the dangers of a real virus, the logic of this argument has become even more intuitive.

The panic provoked by COVID-19 and by most governments’ confusing responses led many internet users to try to reassert control—sometimes by sharing and elaborating on explanations that contradicted the consensus medical and political positions. Aggressive counter-reactions will only gloss over the many underlying problems responsible for the ongoing crisis, including the failure to implement effective public health policies and shortfalls in infrastructure and manufacturing capacity. All of this has exacerbated people’s wariness of expertise and authority. The tightening information lockdown will only magnify the mistrust that has made people receptive to “misinformation” in the first place.