Regarding the Use of Apocalyptic Rhetoric in Policy Debates

October 29, 2014

Evan Selinger, a super-sharp philosopher of technology up at the Rochester Institute of Technology, is always alerting me to interesting new essays and articles, and this week he brought another important piece to my attention. It’s a short new article by Arturo Casadevall, Don Howard, and Michael J. Imperiale entitled “The Apocalypse as a Rhetorical Device in the Influenza Virus Gain-of-Function Debate.” The essay touches on something near and dear to my own heart: the misuse of rhetoric in debates over the risk trade-offs associated with new technologies and inventions. Casadevall, Howard, and Imperiale seek to “focus on the rhetorical devices used in the debate [over infectious disease experiments] with the hope that an analysis of how the arguments are being framed can help the discussion.”

They note that “humans are notoriously poor at assessing future benefits and risks” and that this makes many people susceptible to rhetorical ploys based on the artificial inflation of risks. Their particular focus in this essay is the debate over so-called “gain-of-function” (GOF) experiments involving influenza virus, but what they have to say about how rhetoric is being misused in that field is equally applicable to many other scientific fields and the policy debates surrounding them. The last two paragraphs of their essay are masterful and deserve everyone’s attention:

Who has the upper hand in the GOF debate? The answer to this question will be apparent only when the history of this time is written. However, it is possible that in the near future, arguments about risk will trump arguments about benefits, because the risk of a GOF experiment unleashing a devastating epidemic plays on a well-founded human fear, while the potential benefits of the research are considerably harder to articulate. In debates about benefits and risks, arguments based on positing extreme risks, however unlikely, are powerful rhetorical devices because they play into human fears. While we all agree that the risk of a GOF experiment unleashing a deadly epidemic is not zero, such an event would be at the extreme end of the likely outcomes from GOF experimentation. Arguing against GOF on the basis of pandemic dangers is a powerful rhetorical device because anyone can understand it. The problem with the use of apocalyptic scenarios in risk-benefit analysis is that they invoke the possibility for infinite suffering, irrespective of the probability of such an event, and the prospect of infinite suffering can potentially overwhelm any good obtained from knowledge gained from such experiments.

Repeatedly invoking the apocalypse can create a sophistry that we call the apocalyptic fallacy, which, when applied in a vacuum of evidence and theory, proposes consequences that are so dire, however low the probability, that this tactic can be employed to quash any new invention, technique, procedures, and/or policy. The apocalyptic fallacy is an effective rhetorical tool that is meaningless in the absence of objective numbers. We remind those who invoke the apocalypse that the DNA revolution went on to deliver a multitude of benefits without unleashing the fears of Asilomar and that the large hadron collider was turned on, the Higgs boson was discovered, the standard model in physics was validated, and we are still here. Hence, we caution individuals against overreliance on the apocalypse in the debates ahead, for rhetoric can win the day, but rhetoric never gave us a single medical advance.

This is spot-on and, again, has applicability in many other arenas. Indeed, it aligns quite nicely with what I had to say about the use and misuse of rhetoric in information technology debates in my recent law review article on “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle” (Minnesota Journal of Law, Science and Technology, Vol. 14, No. 1, Winter 2013). In that piece, I began by noting that:

Fear is an extremely powerful motivational force. In public policy debates, appeals to fear are often used in an attempt to sway opinion or bolster the case for action. Such appeals are used to convince citizens that threats to individual or social well-being may be avoided only if specific steps are taken. Often these steps take the form of anticipatory regulation based on the precautionary principle. Such “fear appeal arguments” are frequently on display in the Internet policy arena and often take the form of a full-blown “moral panic” or “technopanic.” These panics are intense public, political, and academic responses to the emergence or use of media or technologies, especially by the young. In the extreme, they result in regulation or censorship. While cyberspace has its fair share of troubles and troublemakers, there is no evidence that the Internet is leading to greater problems for society than previous technologies did. That has not stopped some from suggesting there are reasons to be particularly fearful of the Internet and new digital technologies. There are various individual and institutional factors at work that perpetuate fear-based reasoning and tactics.

I continued on to document the structure of “fear appeal” arguments, and then outlined how those arguments can be deconstructed and refuted using sound analysis and real-world evidence. The logic pattern behind fear appeal arguments looks something like this (as documented by Douglas Walton, in his outstanding textbook, Fundamentals of Critical Argumentation):

  • Fearful Situation Premise: Here is a situation that is fearful to you.
  • Conditional Premise: If you carry out A, then the negative consequences portrayed in this fearful situation will happen to you.
  • Conclusion: You should not carry out A.

In the field of rhetoric and argumentation, this logic pattern is referred to as argumentum in terrorem or argumentum ad metum. A closely related variant of this argumentation scheme is known as argumentum ad baculum, or an argument based on a threat. Argumentum ad baculum literally means “argument to the stick,” and the logic pattern in this case looks like this (again, according to Walton’s book on the subject):

  • Conditional Premise: If you do not bring about A, then consequence B will occur.
  • Commitment Premise: I commit myself to seeing to it that B will come about.
  • Conclusion: You should bring about A.
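The two schemes above can be compressed into rough propositional sketches (my own informal rendering, not Walton’s notation; here A is the proposed action, S the fearful situation, and B the threatened consequence):

```latex
% Argumentum ad metum (fear appeal):
%   Premise 1: S is a fearful situation        -- Fear(S)
%   Premise 2: carrying out A brings about S   -- A -> S
%   Conclusion: do not carry out A             -- therefore not-A
\mathit{Fear}(S),\quad A \to S \;\;\therefore\;\; \neg A

% Argumentum ad baculum (threat):
%   Premise 1: not doing A brings about B      -- not-A -> B
%   Premise 2: the speaker commits to making B happen
%   Conclusion: carry out A                    -- therefore A
\neg A \to B,\quad \mathit{Commit}(B) \;\;\therefore\;\; A
```

Written this way, it is easier to see where the schemes break down: in the fear appeal, the conditional premise (A → S) is precisely the claim that usually goes unsupported by evidence.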

The problem is that these logic patterns and rhetorical devices are logical fallacies or are based on outright myths. Once you start carefully unpacking arguments based on these logic patterns and applying reasoned, evidence-based analysis, you can quickly show why the premises are not valid.

Unfortunately, that doesn’t stop some people (including a great many policymakers) from utilizing such faulty logic or misguided rhetorical devices. Even worse, as I note in my paper,

fear appeals are facilitated by the use of threat inflation. Specifically, threat inflation involves the use of fear-inducing rhetoric to inflate artificially the potential harm a new development or technology poses to certain classes of the population, especially children, or to society or the economy at large. These rhetorical flourishes are empirically false or at least greatly blown out of proportion relative to the risk in question.

I then go on for many pages in my paper to document the use of fear appeals and threat inflation in a variety of information technology debates. I show that in every case where such tactics are common, they are unjustified once the evidence is evaluated dispassionately. Regrettably, those who employ fear tactics and use threat inflation often don’t care, because they know exactly what they are doing: The use of apocalyptic rhetoric grabs attention and sometimes ends serious deliberation. It is often an intentional ploy to scare people into action (or perhaps just into silence), even if that result is not based on a reasoned, level-headed evaluation of all the facts at hand.

The lesson here is simple: The ends do not justify the means. No matter how passionately you feel about a particular policy issue, even one that you believe involves potential life-and-death ramifications, it is wise to avoid apocalyptic rhetoric. Generally speaking, the sky is not falling, and anyone insisting that it is should back up that assertion with a substantial body of evidence. Otherwise, they are just using fear appeal arguments and apocalyptic rhetoric to needlessly scare people and shut down serious debate over issues that are likely far more complex and nuanced than their rhetoric suggests.


[For all my essays on “technopanics,” moral panics, and threat inflation, see this compendium I have assembled.]
