The Precautionary Principle in Information Technology Debates

by Adam Thierer on April 4, 2011

I’m currently plugging away at a big working paper with the running title “Argumentum in Cyber-Terrorem: A Framework for Evaluating Fear Appeals in Internet Policy Debates.” It’s an attempt to bring together a number of issues I’ve discussed in my past work on “techno-panics” and to devise a framework, drawing on tools from various disciplines, for evaluating and addressing such panics. I begin with some basic principles of critical argumentation and outline various types of “fear appeals” that usually represent logical fallacies, including argumentum in terrorem, argumentum ad metum, and argumentum ad baculum. But I’ll post more about that portion of the paper some other day. For now, I wanted to share a section of the paper entitled “The Problem with the Precautionary Principle.” I’m posting what I’ve got done so far in the hopes of getting feedback and suggestions on how to improve it and build it out a bit. Here’s how it begins…

________________

The Problem with the Precautionary Principle

“Isn’t it better to be safe than sorry?” That is the traditional response of those perpetuating techno-panics when their fear appeal arguments are challenged. This response reflects what is commonly known as “the precautionary principle.” Although the principle is most often discussed in the field of environmental law, it is increasingly on display in Internet policy debates.

The “precautionary principle” essentially holds that, because every technology and technological advance poses some theoretical danger or risk, public policy should be crafted so that no possible harm can come from a particular innovation before further progress is permitted. In other words, the law should mandate “just play it safe” as the default policy toward technological progress.

The problem with that logic, notes Kevin Kelly, author of What Technology Wants, is that because “every good produces harm somewhere… by the strict logic of an absolute precautionary principle no technologies would be permitted.”[1] Or, as journalist Ronald Bailey has summarized the principle: “Anything new is guilty until proven innocent.”[2] Under an information policy regime guided at every turn by the precautionary principle, digital innovation and technological progress would become impossible because trade-offs and uncertainty would be considered unacceptable.

This is why Aaron Wildavsky, author of the seminal 1988 book Searching for Safety, spoke of the dangers of “trial without error” as compared to trial and error. Wildavsky argued:

The direct implication of trial without error is obvious: if you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. … [E]xisting hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.[3]

Simply stated: Life involves risk, and progress requires that some level of it be accepted. While some steps to anticipate or control for unforeseen circumstances and “plan for the worst” are sensible, going overboard forecloses opportunities and experiences that offer valuable lessons for individuals and society. Legal scholar Cass Sunstein, who currently serves as Administrator of the White House Office of Information and Regulatory Affairs, has put it this way: “If the burden of proof is on the proponent of the activity or processes in question, the Precautionary Principle would seem to impose a burden of proof that cannot be met.”[4]

Importantly, Wildavsky also pointed out that the precautionary principle downplays the vital role of resiliency in human affairs. Through constant experimentation, humans learn valuable lessons about how the world works, which risks are real versus illusory or secondary, and how to assimilate new cultural, economic, and technological change into our lives. A rigid precautionary principle would prevent such a learning process from unfolding and leave us more vulnerable to the most serious problems we might face as individuals or as a society. “Allowing, indeed, encouraging, trial and error should lead to many more winners, because of (a) increased wealth, (b) increased knowledge, and (c) increased coping mechanisms, i.e., increased resilience in general.”[5]

Recent work by Sean Lawson, an assistant professor in the Department of Communication at the University of Utah, has underscored the importance of resiliency as it pertains to cybersecurity. “Research by historians of technology, military historians, and disaster sociologists has shown consistently that modern technological and social systems are more resilient than military and disaster planners often assume,” he finds.[6] “Just as more resilient technological systems can better respond in the event of failure, so too are strong social systems better able to respond in the event of disaster of any type.”[7]

Resiliency is also a wise strategy as it pertains to Internet child safety issues, online privacy concerns, and online reputation management. Some risks in these contexts – such as underage access to objectionable content or the release of too much personal information – can be prevented through anticipatory regulatory policies. Increasingly, however, information proves too challenging to bottle up. Information control efforts today are greatly complicated by five phenomena unique to the Information Age: (1) media and technological convergence; (2) decentralized, distributed networking; (3) the unprecedented scale of networked communications; (4) an explosion in the overall volume of information; and (5) unprecedented individual information sharing through user-generated content and self-revelation of data. “The truth about data is that once it is out there, it’s hard to control,” says Jeff Jonas, an engineer with IBM.[8]

This is why resiliency becomes an even more attractive strategy than anticipatory regulation. Information will increasingly flow freely across interconnected, ubiquitous digital networks, and putting those information genies back in their bottles would be an enormous challenge. Moreover, in most circumstances the costs of attempting to control information will exceed the benefits. Consequently, a strategy based on building resiliency will focus on education and empowerment, allowing for trial and error and encouraging sensible, measured responses to the challenges posed by technological change.

[Note: I next plan to go on to discuss several case studies and outline the sorts of education and empowerment-based strategies that I believe represent the better approach to coping with technological change.]


[1] Kevin Kelly, What Technology Wants (New York: Viking, 2010), pp. 247-48.

[2] Ronald Bailey, “Precautionary Tale,” Reason, April 1999, http://reason.com/archives/1999/04/01/precautionary-tale.

[3] Aaron Wildavsky, Searching for Safety (Transaction Books, 1988), p. 38.

[4] Cass Sunstein, “The Paralyzing Principle,” Regulation (Washington, DC: Cato Institute, Winter 2002-2003), p. 34, http://www.cato.org/pubs/regulation/regv25n4/v25n4-9.pdf. “The most serious problem with the Precautionary Principle is that it offers no guidance – not that it is wrong, but that it forbids all courses of action, including inaction,” Sunstein says. “The problem is that the Precautionary Principle, as applied, is a crude and sometimes perverse method of promoting [] various goals, not least because it might be, and has been, urged in situations in which the principle threatens to injure future generations and harm rather than help those who are most disadvantaged. A rational system of risk regulation certainly takes precautions. But it does not adopt the Precautionary Principle.” Id., pp. 33, 37.

[5] Wildavsky, Searching for Safety, p. 103.

[6] Sean Lawson, Beyond Cyber Doom: Cyber Attack Scenarios and the Evidence of History (Arlington, VA: Mercatus Center at George Mason University, January 25, 2011), p. 31, http://mercatus.org/publication/beyond-cyber-doom.

[7] Id., p. 29.

[8] Quoted in Jenn Webb, “The Truth about Data: Once It’s Out There, It’s Hard to Control,” O’Reilly Radar, April 4, 2011, http://radar.oreilly.com/2011/04/jeff-jonas-data-privacy-control.html.


  • http://enigmafoundry.wordpress.com eee_eff

    Adam, you don’t indicate that you even understand the Precautionary Principle. You should be able to do that before you critique it.

    I keep on seeing these critiques of the Precautionary Principle in which those who critique put up some straw man version of it, and then proceed to demolish it.

  • Glenn

    Adam, you might find some benefit in taking a look at a lengthy monograph the Washington Legal Foundation published a few years ago on the precautionary principle: http://www.wlf.org/upload/110405MONOKogan.pdf.
    G. Lammi
    WLF

  • http://www.techliberation.com Adam Thierer

    Thanks Glenn! That’s terrific. Very helpful.

  • Pingback: UK report’s “resiliency” sure looks a lot like “anticipation”

  • Pingback: San Francisco backs off controversial cell phone radiation ordinance

  • http://www.techliberation.com Adam Thierer

    just saw this new piece by Jonathan Adler: “The Problems with Precaution: A Principle without Principle.” http://www.american.com/archive/2011/may/the-problems-with-precaution-a-principle-without-principle

    He states:
    “Simply put, the precautionary principle is not a sound basis for public policy. At the broadest level of generality, the principle is unobjectionable, but it provides no meaningful guidance to pressing policy questions. In a public policy context, “better safe than sorry” is a fairly vacuous instruction. Taken literally, the precautionary principle is either wholly arbitrary or incoherent. In its stronger formulations, the principle actually has the potential to do harm.”

  • Christian Munthe

    Those of you who might be interested in a discussion of the PP and its ethical basis that takes into account that, obviously, precaution always has a price may want to have a look at my new book The Price of Precaution and the Ethics of Risk: http://www.springer.com/social+sciences/applied+ethics/book/978-94-007-1329-1

