Wendell Wallach on the Challenge of Engineering Better Technology Ethics

April 20, 2016

On May 3rd, I’m excited to be participating in a discussion with Yale University bioethicist Wendell Wallach at the Microsoft Innovation & Policy Center in Washington, DC. (RSVP here.) Wallach and I will be discussing issues we write about in our new books, both of which focus on possible governance models for emerging technologies and the question of how much preemptive control society should exercise over new innovations.

Wallach’s latest book is entitled A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. And, as I’ve noted here recently, the greatly expanded second edition of my latest book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, has just been released.

Of all the books of technological criticism or skepticism that I’ve read in recent years—and I have read stacks of them!—A Dangerous Master is by far the most thoughtful and interesting. I have grown accustomed to major works of technological criticism being caustic, angry affairs. Most of them are just dripping with dystopian dread and a sense of utter exasperation and outright disgust at the pace of modern technological change.

Although he is certainly concerned about a wide variety of modern technologies—drones, robotics, nanotech, and more—Wallach isn’t a purveyor of the politics of panic. There are some moments in the book when he resorts to some hyperbolic rhetoric, such as when he frets about an impending “techstorm” and the potential, as the book’s title suggests, for technology to become a “dangerous master” of humanity. For the most part, however, his approach is deeper and more dispassionate than what is found in the leading tracts of other modern techno-critics.

Many Questions, Few Clear Answers

Wallach does a particularly good job framing the major questions about emerging technologies and their effect on society. “Navigating the future of technological possibilities is a hazardous venture,” he observes. “It begins with learning to ask the right questions—questions that reveal the pitfalls of inaction, and more importantly, the passageways available for plotting a course to a safe harbor.” (p. 7) Wallach then embarks on a 260+ page inquiry that bombards the reader with an astonishing litany of questions about the wisdom of various forms of technological innovation—both large and small. While I wasn’t about to start an exact count, I would say that the number of questions Wallach poses in the book runs well into the hundreds. In fact, many paragraphs of the book are nothing but an endless string of questions.

Thus, if there is a primary weakness with A Dangerous Master, it’s that Wallach spends so much time formulating such a long list of smart and nuanced questions that some readers may come away disappointed when they do not find equally satisfying answers. On the other hand, the lack of clear answers is also completely understandable because, as Wallach notes, there really are no simple answers to most of these questions.

Just Slow Down!

Moving on to substance, let me make clear where Wallach and I generally see eye-to-eye and where we part ways.

Generally speaking, we agree about the need to come up with better “soft governance” systems for emerging technologies, which might include multistakeholder processes, developer codes of conduct, sectoral self-regulation, sensible liability rules, and so on. (More on those strategies in a moment.)

But while we both believe it is wise to consider how we might “bake in” better ethics and norms into the process of technological development, Wallach seems much more inclined than I am to expect that we can preordain (or potentially require?) that all of this happens before much of this experimentation and innovation actually moves forward. Wallach opens by asking:

Determining when to bow to the judgment of experts and whether to intervene in the deployment of a new technology is certainly not easy. How can government leaders or informed citizens effectively discern which fields of research are truly promising and which pose serious risks? Do we have the intelligence and means to mitigate the serious risks that can be anticipated? How should we prepare for unanticipated risks? (p. 6)

Again, many good questions here! But this really gets to the primary difference between Wallach’s preferred approach and my own: I tend to believe that many of these things can only be worked out through ongoing trial and error, the constant reformulation of the various norms that govern the process of innovation, and the development of sensible ex post solutions to some of the most difficult problems posed by turbulent technological change.

By contrast, Wallach’s general attitude toward technological evolution is probably best summarized by the phrases “Slow down!” and “Let’s have a conversation about it first!” As he puts it in his own words: “Slowing down the accelerating adoption of technology should be done as a responsible means to ensure basic human safety and to support broadly shared values.” (p. 13)

But I tend to believe that it’s not always possible to preemptively determine which innovations to slow down, or even how to determine what those “shared values” are that will help us make this determination. More importantly, I worry that there are very serious potential risks and unintended consequences associated with slowing down many forms of technological innovation that could improve human welfare in important ways. There can be no prosperity, after all, without a certain degree of risk-taking and disruption.

Getting Out Ahead of the Pacing Problem

It’s not that Wallach is completely hostile to new forms of technological innovation or blind to the many ways those innovations might improve our lives. To the contrary, he does a nice job throughout the book highlighting the many benefits associated with various new technologies, or at least acknowledging that there can be many downsides associated with efforts aimed at limiting research and experimentation with new technological capabilities.

Yet, what concerns Wallach most is the much-discussed issue from the field of the philosophy of technology, the so-called “pacing problem.” Wallach concisely defines the pacing problem as “the gap between the introduction of a new technology and the establishment of laws, regulations, and oversight mechanisms for shaping its safe development.” (p. 251) “There has always been a pacing problem,” he notes, but he is concerned that technological innovation—especially highly disruptive and potentially uncontrollable forms of innovation—is now accelerating at an absolutely unprecedented pace.

(Just as an aside for all the philosophy nerds out there…  Such a rigid belief in the “pacing problem” represents a techno-deterministic viewpoint that is, ironically, sometimes shared by technological skeptics like Wallach as well as technological optimists like Larry Downes and even many in the middle of this debate, like Vivek Wadhwa. See, for example, The Laws of Disruption by Downes and “Laws and Ethics Can’t Keep Pace with Technology” by Wadhwa. Although these scholars approach technology ethics and politics quite differently, they all seem to believe that the pace of modern technological change is so relentless as to almost be an unstoppable force of nature. I guess the moral of the story is that, to some extent, we’re all technological determinists now!)

Despite his repeated assertions that modern technologies are accelerating at such a potentially uncontrollable pace, Wallach nonetheless hopes we can achieve some semblance of control over emerging technologies before they reach a critical “inflection point.” In the study of history and science, an inflection point generally represents a moment when a situation or trend suddenly changes in a significant way and things begin moving rapidly in a new direction. These inflection points can sometimes develop quite abruptly, ushering in major changes by creating new social, economic, or political paradigms. As it relates to technology in particular, inflection points can refer to the moment when a particular technology achieves critical mass in terms of adoption or, more generally, to the time when that technology begins to profoundly transform the way individuals and institutions act.

Another related concept that Wallach discusses is the so-called “Collingridge dilemma,” which refers to the notion that it is difficult to put the genie back in the bottle once a given technology has reached a critical mass of public adoption or acceptance. The concept is named after David Collingridge, who wrote about this in his 1980 book, The Social Control of Technology. “The social consequences of a technology cannot be predicted early in the life of the technology,” Collingridge argued. “By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economic and social fabric that its control is extremely difficult.”

On “Having a Discussion” & Coming Up with “a Broad Plan”

These related concepts of inflection points and the Collingridge dilemma constitute the operational baseline of Wallach’s worldview. “In weighing speedy development against long-term risks, speedy development wins,” he worries. “This is particularly true when the risks are uncertain and the perceived benefits great.” (p. 85)

Consequently, throughout his book, Wallach pleads with us to take what I will call Technological Time Outs. He says we need to pause at times so that we can have “a full public discussion” (p. 13) and ensure there is a “broad plan in place to manage our deployment of new technologies” (p. 19) so that innovation happens only at “a humanly manageable pace” (p. 261) “to fortify the safety of people affected by unpredictable disruptions.” (p. 262) Wallach’s call for Technological Time Outs is rooted in his belief that “the accelerating pace [of modern technological innovation] undermines the quality of each of our lives.” (p. 263)

That is Wallach’s weakest assertion in the book, and he doesn’t really offer much evidence to prove that the velocity of modern technological change is hurting us rather than helping us, as many of us believe. Rather, he treats it as a widely accepted truism that necessitates some sort of collective effort to slow things down if the proverbial genie is about to exit the bottle, or to make sure those genies don’t get out of their bottles without a lot of preemptive planning regarding how they are to be released into the world. In the following passage on p. 72, Wallach very succinctly summarizes the approach he recommends throughout A Dangerous Master:

this book will champion the need for more upstream governance: more control over the way that potentially harmful technologies are developed or introduced into the larger society. Upstream management is certainly better than introducing regulations downstream, after a technology is deeply entrenched or something major has already gone wrong. Yet, even when we can assess risks, there remain difficulties in recognizing when or determining how much control should be introduced. When does being precautionary make sense, and when is precaution an over-reaction to the risks? (p. 72)

Those who have read my Permissionless Innovation book will recall that I open by framing innovation policy debates in almost exactly the same way as Wallach suggests in that last line above. I argue in the first lines of my book that:

The central fault line in innovation policy debates today can be thought of as ‘the permission question.’  The permission question asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions and risk-taking, more generally.  Two conflicting attitudes are evident.

One disposition is known as the ‘precautionary principle.’ Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

The other vision can be labeled ‘permissionless innovation.’ It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if any develop, can be addressed later.

So, by contrasting these passages, you can see that what I am setting up here is a clash of visions between what appears to be Wallach’s precautionary principle-based approach and my own permissionless innovation-focused worldview.

How Much Formal Precaution?

But that would be a bit too simplistic, because just a few paragraphs after making the statement above about “upstream management” being superior to ex post solutions formulated “after a technology is deeply entrenched,” Wallach begins slowly backing away from an overly rigid approach to precautionary principle-based governance of technological processes and systems.

He admits, for example, that “precautionary measures in the form of regulations and governmental oversight can slow the development of research whose overall societal impact will be beneficial” (p. 26) and that such measures can “be costly” and “slow innovation.” For countries, Wallach acknowledges, this can have real consequences because “Countries with more stringent precautionary policies are at a competitive disadvantage to being the first to introduce a new tool or process.” (p. 74)

So, he’s willing to admit that what we might call a hard precautionary principle usually won’t be sensible or effective in practice, but he is far more open to soft precaution. But this is where real problems begin to develop with Wallach’s approach, and it presents us with a chance to turn the tables on him a bit and begin posing some serious questions about his vision for governing technology.

Much of what follows below consists of my miscellaneous ramblings about the current state of the intellectual dialogue about tech ethics and technological control efforts. I have discussed these issues at greater length in my new book as well as in a series of essays here in past years, most notably: “On the Line between Technology Ethics vs. Technology Policy”; “What Does It Mean to ‘Have a Conversation’ about a New Technology?”; and “Making Sure the ‘Trolley Problem’ Doesn’t Derail Life-Saving Innovation.”

As I’ve argued in those and other essays, my biggest problem with modern technological criticism is that specifics are in scandalously short supply in this field! Indeed, I often find the lack of details in this arena to be utterly exasperating. Most modern technological criticism follows a simple formula:

TECHNOLOGY → POTENTIAL PROBLEMS → DO SOMETHING!

But almost all the details come in the discussion about the nature of the technology in question and the many apparent problems associated with it. Far, far less thought goes into the “DO SOMETHING!” part of the critics’ work. One reason for that is probably self-evident: There are no easy solutions. Wallach admits as much at many junctures throughout the book. But that doesn’t excuse critics from the need to give us a more concrete blueprint for identifying and then potentially rectifying the supposed problems.

Of course, the other reason that many critics are short on specifics is that what they really mean when they insist that we need to “have a conversation” about a new disruptive technology is that we need to have a conversation about stopping that technology.

Where Shall We Draw the Line between Hard and Soft Law?

But this is what I found most peculiar about Wallach’s book: He never really gives us a good standard by which to determine when we should look to hard governance (traditional top-down regulation) versus soft governance (more informal, bottom-up and non-regulatory approaches).

On one hand, he very much wants society to exercise great restraint and precaution when it comes to many of the technologies he and others worry about today. Again, he’s particularly concerned about the potential runaway development and use of drones, genetic editing, nanotech, robotics, and artificial intelligence. For at least one class of robotics—autonomous military robots—Wallach does call for immediate policy action in the form of an Executive Order to ban “killer” autonomous systems. (Incidentally, there’s also a major effort underway called the “Campaign to Stop Killer Robots” that aims to make such a ban part of international law through a multinational treaty.)

But Wallach also acknowledges the many trade-offs associated with efforts to impose preemptive controls on robotics and other technologies. Perhaps for that reason, Wallach doesn’t develop a clear test for when the Precautionary Principle should be applied to new forms of innovation.

Clearly there are times when it is appropriate, although I believe it is only in an extremely narrow subset of cases. In the 2nd Edition of my Permissionless Innovation book, I tried to offer a rough framework for when formal precautionary regulation (i.e., highly restrictive policy defaults such as operational restrictions, licensing requirements, research limitations, or even formal bans) might be necessary. I do not want to interrupt the flow of this review of Wallach’s book too much, so I have decided to just cut-and-paste that portion of Chapter 3 of my book (“When Does Precaution Make Sense?”) down below as an appendix to this essay.

The key takeaway of that passage from my book is that all of us who study innovation policy and the philosophy of technology—Wallach, myself, the whole darn movement—have done a remarkably poor job being specific about precisely when formal policy precaution is warranted. What is the test? All too often, we get lazy and apply what we might call an “I-Know-It-When-I-See-It” standard. Consider the possession of bazookas, tanks, and uranium. Almost all of us would agree that citizens should not be allowed to possess or use such things. Why? Well, it seems obvious, right? They just shouldn’t! But what is the exact standard we use to make that determination?

In coming years, I plan on spending a lot more time articulating a better test by which Precautionary Principle-based policies could be reasonably applied. Those who know me may be taken aback by what I just said. After all, I’ve spent many years explaining why Precautionary Principle-based thinking threatens human prosperity and should be rejected in the vast majority of cases. But that doesn’t excuse the lack of a serious and detailed exploration of the exact standard by which we determine when we should impose some limits on technological innovation.

Generally speaking, while I strongly believe that “permissionless innovation” should remain the policy default for most technologies, there certainly exist some scenarios where the threat of harm associated with a new innovation might be highly probable, tangible, immediate, irreversible, and catastrophic in nature. If so, that could qualify it for at least a light version of the Precautionary Principle. In a future paper or book chapter I’m just now starting to research, I hope to more fully develop those qualifiers and formulate a more robust test around them.

I would have very much liked to see Wallach articulate and defend a test of his own for when formal precaution would make sense, and, by extension, for when we should instead default to soft precaution, or to soft law and informal governance mechanisms, for emerging technologies.

We turn to that issue next.

Toward Soft Governance & the Engineering of Better Technological Ethics

Even though Wallach doesn’t provide us with a test for determining when precaution makes sense or when we should instead default to soft governance, he does a much better job explaining the various models of soft law or informal governance that might help us deal with the potential negative ramifications of highly disruptive forms of technological change.

What Wallach proposes, in essence, is that we bake a dose of precaution directly into the innovation process through a wide variety of informal governance/oversight mechanisms. “By embedding shared values in the very design of new tools and techniques, engineers improve the prospect of a positive outcome,” he claims. “The upstream embedding of shared values during the design process can ease the need for major course adjustments when it’s often too late.” (p. 261)

Wallach’s favored instrument of soft governance is what he refers to as “Governance Coordinating Committees” (GCCs). These Committees would coordinate “the separate initiatives by the various government agencies, advocacy groups, and representatives of industry” who would serve as “issue managers for the comprehensive oversight of each field of research.” (p. 250) He elaborates and details the function of GCCs as follows:

These committees, led by accomplished elders who have already achieved wide respect, are meant to work together with all the interested stakeholders to monitor technological development and formulate solutions to perceived problems. Rather than overlap with or function as a regulatory body, the committee would work together with existing institutions. (p. 250-51)

Wallach discussed the GCC idea in much greater detail in a 2013 book chapter he penned with Gary E. Marchant for a collected volume of essays on Innovative Governance Models for Emerging Technologies. (I highly recommend you pick up that book if you can afford it! It contains many terrific essays on these issues.) In their chapter, Marchant and Wallach specify some of the soft law mechanisms we might use to instill a bit of precaution preemptively. These mechanisms include: “codes of conduct, statements of principles, partnership programs, voluntary programs and standards, certification programs and private industry initiatives.”

If done properly, GCCs could provide exactly the sort of wise counsel and smart recommendations that Wallach desires. In my book and many law review articles on various disruptive technologies, I have endorsed many of the ideas and strategies Wallach identifies. I’ve also stressed the importance of many other mechanisms, such as education and empowerment-based strategies that could help the public learn to cope with new innovations or use them appropriately. In addition, I’ve highlighted the many flexible, adaptive ex post remedies that can help when things go wrong. Those mechanisms include common law remedies such as product defects law, various torts, contract law, property law, and even class action lawsuits. Finally, I have written extensively about the very active role played by the Federal Trade Commission (FTC) and other consumer protection agencies, which have broad discretion to police “unfair and deceptive practices” by innovators.

Moreover, we already have a quasi-GCC model developing today with the so-called “multistakeholder governance” model that is often used in both informal and formal ways to handle many emerging technology policy issues.  The Department of Commerce (the National Telecommunications and Information Administration in particular) and the FTC have already developed many industry codes of conduct and best practices for technologies such as biometrics, big data, the Internet of Things, online advertising, and much more. Those agencies and others (such as the FDA and FAA) are continuing to investigate other codes or guidelines for things like advanced medical devices and drones, respectively. Meanwhile, I’ve heard other policymakers and academics float the idea of “digital ombudsmen,” “data ethicists,” and “private IRBs” (institutional review boards) as other potential soft law solutions that technology companies might consider. Perhaps going forward, many tech firms will have Chief Ethical Officers just as many of them today have Chief Privacy Officers or Chief Security Officers.

In other words, there’s already a lot of “soft law” activity going on in this space. And I haven’t even begun an inventory of the many other bodies or groups in each sector that have already set forth their own industry self-regulatory codes; they exist in almost every field that Wallach worries about.

So, I’m not sure how much his GCC idea will add to this existing mix, but I would not be opposed to GCCs playing the sort of coordinating “issue manager” role he describes. But I still have many questions about GCCs, including:

  • How many of them are needed, and how will we know which one is the definitive GCC for each sector or technology?
  • If they are overly formal in character and dominated by the most vociferous opponents of any particular technology, a real danger exists that a GCC could end up granting a small cabal a “heckler’s veto” over particular forms of innovation.
  • Alternatively, the possibility of “regulatory capture” could be a problem for some GCCs if incumbent companies come to dominate their membership.
  • Even if everything went fairly smoothly and the GCCs produced balanced reports and recommendations, future developers might wonder if and why they are to be bound by older guidelines.
  • And if those future developers choose not to play by the same set of guidelines, what’s the penalty for non-compliance?
  • And how are such guidelines enforced in a world where what I’ve called “global innovation arbitrage” is an increasing reality?

Challenging Questions for Both Hard and Soft Law

To summarize, whether we are speaking of “hard” or “soft” law approaches to technological governance, I am just not nearly as optimistic as Wallach seems to be that we will be able to find consensus on these three things:

(1) what constitutes “harm” in many of these circumstances;

(2) which “shared values” should prevail when “society” debates the shaping of ethics or guiding norms for emerging technologies but has highly contradictory opinions about those values (consider online privacy as a good example, where many people enjoy hyper-sharing while others demand hyper-privacy); and,

(3) that we can create a legitimate “governing body” (or bodies) that will be responsible for formulating these guidelines in a fair way, without completely derailing the benefits of innovation in new fields, and that will remain relevant for very long.

Nonetheless, as he and others have suggested, the benefit of adopting a soft law/informal governance approach to these issues is that it at least seeks to address these questions in a more flexible and adaptive fashion. As I noted in my book, traditional regulatory systems “tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may not ever come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things.” (Permissionless Innovation, p. 120)

So, despite the questions I have raised here, I welcome the more flexible soft law approach that Wallach sets forth in his book. I think it represents a far more constructive way forward when compared to the opposite “top-down” or “command-and-control” regulatory systems of the past. But I very much want to make sure that even these new and more flexible soft law approaches leave plenty of breathing room for ongoing trial-and-error experimentation with new technologies and systems.

Conclusion

In closing, I want to reiterate that not only did I appreciate the excellent questions raised by Wendell Wallach in A Dangerous Master, but I take them very seriously. When I sat down to revise and expand my Permissionless Innovation book last year, I decided to include this warning from Wallach in my revised preface: “The promoters of new technologies need to speak directly to the disquiet over the trajectory of emerging fields of research. They should not ignore, avoid, or superficially dampen criticism to protect scientific research.” (p. 28–9)

As I noted in response to Wallach: “I take this charge seriously, as should others who herald the benefits of permissionless innovation as the optimal default for technology policy. We must be willing to take on the hard questions raised by critics and then also offer constructive strategies for dealing with a world of turbulent technological change.”

Serious questions deserve serious answers. Of course, sometimes those posing those questions fail to provide many answers of their own! Perhaps it is because they believe the questions answer themselves. Other times, it’s because they are willing to admit that easy answers to these questions typically prove quite elusive. In Wallach’s case, I believe it’s more the latter.

To wrap up, I’ll just reiterate that both Wallach and I share a common desire to find solutions to the hard questions about technological innovation. But the crucial question that probably separates his worldview and my own is this: Whether we are talking about hard or soft governance, how much faith should we place in preemptive planning versus ongoing trial-and-error experimentation to solve technological challenges? Wallach is more inclined to believe we can divine these things with the sagacious foresight of “accomplished elders” and technocratic “issue managers,” who will help us slow things down until we figure out how to properly ease a new technology into society (if at all). But I believe that the only way we will find many of the answers we are searching for is by allowing still more experimentation with the very technologies whose development he and others seek to control. We humans are outstanding problem-solvers and, among all mammals, have an uncanny ability to adapt to changing circumstances. We roll with the punches, learn from them, and become more resilient in the process. As I noted in my 2014 essay, “Muddling Through: How We Learn to Cope with Technological Change”:

we modern pragmatic optimists must continuously point to the unappreciated but unambiguous benefits of technological innovation and dynamic change. But we should also continue to remind the skeptics of the amazing adaptability of the human species in the face of adversity. [. . .] Humans have consistently responded to technological change in creative, and sometimes completely unexpected ways. There’s no reason to think we can’t get through modern technological disruptions using similar coping and adaptation strategies.

Will the technologies that Wallach fears bring about a “techstorm” that overwhelms our culture, our economy, and even our very humanity? It’s certainly possible, and we should continue to seriously discuss the issues that he and other skeptics raise about our expanding technological capabilities and the potential for many of them to do great harm. Because some of them truly could.

But it is equally plausible—in fact, some of us would say, highly probable—that, instead of being overwhelmed, we will learn how to bend these new technological capabilities to our will and make them work for our collective benefit. Instead of technology becoming “a dangerous master,” we will instead make it our helpful servant, just as we have so many times before.


APPENDIX: When Does Precaution Make Sense?

[excerpt from chapter 3 of Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Footnotes omitted. See book for all references.]

But aren’t there times when a certain degree of precautionary policymaking makes good sense? Indeed, there are, and it is important to not dismiss every argument in favor of precautionary principle–based policymaking, even though it should not be the default policy rule in debates over technological innovation.

The challenge of determining when precautionary policies make sense comes down to weighing the (often limited) evidence about any given technology and its impact and then deciding whether the potential downsides of unrestricted use are so potentially catastrophic that trial-and-error experimentation simply cannot be allowed to continue. There certainly are some circumstances when such a precautionary rule might make sense. Governments restrict the possession of uranium and bazookas, to name just two obvious examples.

Generally speaking, permissionless innovation should remain the norm in the vast majority of cases, but there will be some scenarios where the threat of tangible, immediate, irreversible, catastrophic harm associated with new innovations could require at least a light version of the precautionary principle to be applied.  In these cases, we might be better suited to think about when an “anti-catastrophe principle” is needed, which narrows the scope of the precautionary principle and focuses it more appropriately on the most unambiguously worst-case scenarios that meet those criteria.

Precaution might make sense when harm is …   | Precaution generally doesn’t make sense for asserted harms that are …
Highly probable                              | Highly improbable
Tangible (physical)                          | Intangible (psychic)
Immediate                                    | Distant / unclear timeline
Irreversible                                 | Reversible / changeable
Catastrophic                                 | Mundane / trivial

But most cases don’t fall into this category. Instead, we generally allow innovators and consumers to freely experiment with technologies, and even engage in risky behaviors, unless a compelling case can be made that precautionary regulation is absolutely necessary.  How is the determination made regarding when precaution makes sense? This is where the role of benefit-cost analysis (BCA) and regulatory impact analysis is essential to getting policy right.  BCA represents an effort to formally identify the tradeoffs associated with regulatory proposals and, to the maximum extent feasible, quantify those benefits and costs.  BCA generally cautions against preemptive, precautionary regulation unless all other options have been exhausted—thus allowing trial-and-error experimentation and “learning by doing” to continue. (The mechanics of BCA are discussed in more detail in section VII.)

This is not the end of the evaluation, however. Policymakers also need to consider the complexities associated with traditional regulatory remedies in a world where technological control is increasingly challenging and quite costly. It is not feasible to throw unlimited resources at every problem, because society’s resources are finite.  We must balance risk probabilities and carefully weigh the likelihood that any given intervention has a chance of creating positive change in a cost-effective fashion.  And it is also essential to take into account the potential unintended consequences and long-term costs of any given solution because, as Harvard law professor Cass Sunstein notes, “it makes no sense to take steps to avert catastrophe if those very steps would create catastrophic risks of their own.”  “The precautionary principle rests upon an illusion that actions have no consequences beyond their intended ends,” observes Frank B. Cross of the University of Texas. But “there is no such thing as a risk-free lunch. Efforts to eliminate any given risk will create some new risks,” he says.

Oftentimes, after working through all these considerations about whether to regulate new technologies or technological processes, the best solution will be to do nothing because, as noted throughout this book, we should never underestimate the amazing ingenuity and resiliency of humans to find creative solutions to the problems posed by technological change.  (Section V discusses the importance of individual and social adaptation and resiliency in greater detail.) Other times we might find that, while some solutions are needed to address the potential risks associated with new technologies, nonregulatory alternatives are also available and should be given a chance before top-down precautionary regulations are imposed. (Section VII considers those alternative solutions in more detail.)

Finally, it is again essential to reiterate that we are talking here about the dangers of precautionary thinking as a public policy prerogative—that is, precautionary regulations that are mandated and enforced by government officials. By contrast, precautionary steps may be far more wise when undertaken in a more decentralized manner by individuals, families, businesses, groups, and other organizations. In other words, as I have noted elsewhere in much longer articles on the topic, “there is a different choice architecture at work when risk is managed in a localized manner as opposed to a society-wide fashion,” and risk-mitigation strategies that might make a great deal of sense for individuals, households, or organizations, might not be nearly as effective if imposed on the entire population as a legal or regulatory directive.

Finally, at times, more morally significant issues may exist that demand an even more exhaustive exploration of the impact of technological change on humanity. Perhaps the most notable examples arise in the field of advanced medical treatments and biotechnology. Genetic experimentation and human cloning, for example, raise profound questions about altering human nature or abilities as well as the relationship between generations.

The case for policy prudence in these matters is easier to make because we are quite literally talking about the future of what it means to be human.  Controversies have raged for decades over the question of when life begins and how it should end. But these debates will be greatly magnified and extended in coming years to include equally thorny philosophical questions.  Should parents be allowed to use advanced genetic technologies to select the specific attributes they desire in their children? Or should parents at least be able to take advantage of genetic screening and genome modification technologies that ensure their children won’t suffer from specific diseases or ailments once born?

Outside the realm of technologically enhanced procreation, profound questions are already being raised about the sort of technological enhancements adults might make to their own bodies. How much of the human body can be replaced with robotic or bionic technologies before we cease to be human and become cyborgs? As another example, “biohacking”—efforts by average citizens working together to enhance various human capabilities, typically by experimenting on their own bodies—could become more prevalent in coming years. Collaborative forums, such as Biohack.Me, already exist where individuals can share information and collaborate on various projects of this sort. Advocates of such amateur biohacking sometimes refer to themselves as “grinders,” which Ben Popper of the Verge defines as “homebrew biohackers [who are] obsessed with the idea of human enhancement [and] who are looking for new ways to put machines into their bodies.”

These technologies and capabilities will raise thorny ethical and legal issues as they advance. Ethically, they will raise questions of what it means to be human and the limits of what people should be allowed to do to their own bodies. In the field of law, they will challenge existing health and safety regulations imposed by the FDA and other government bodies.

Again, most innovation policy debates—including most of the technologies discussed throughout this book—do not involve such morally weighty questions. In the abstract, of course, philosophers might argue that every debate about technological innovation has an impact on the future of humanity and “what it means to be human.” But few have much of a direct influence on that question, and even fewer involve the sort of potentially immediate, irreversible, or catastrophic outcomes that should concern policymakers.

In most cases, therefore, we should let trial-and-error experimentation continue because “experimentation is part and parcel of innovation” and the key to social learning and economic prosperity.  If we froze all forms of technological innovation in place while we sorted through every possible outcome, no progress would ever occur. “Experimentation matters,” notes Harvard Business School professor Stefan H. Thomke, “because it fuels the discovery and creation of knowledge and thereby leads to the development and improvement of products, processes, systems, and organizations.”

Of course, ongoing experimentation with new technologies always entails certain risks and potential downsides, but the central argument of this book is that (a) the upsides of technological innovation almost always outweigh those downsides and that (b) humans have proven remarkably resilient in the face of uncertain, ever-changing futures.

In sum, when it comes to managing or coping with the risks associated with technological change, flexibility and patience are essential. One size most certainly does not fit all. And one-size-fits-all approaches to regulating technological risk are particularly misguided when the benefits associated with technological change are so profound. Indeed, “[t]echnology is widely considered the main source of economic progress”; therefore, nothing could be more important for raising long-term living standards than creating a policy environment conducive to ongoing technological change and the freedom to innovate.
