What Does It Mean to “Have a Conversation” about a New Technology?


My colleague Eli Dourado brought this XKCD comic to my attention, and when he tweeted it out yesterday he remarked that “Half of tech policy is dealing with these people”:

The comic and Eli’s comment may be a bit snarky, but they rang true to me because, while conducting research on the impact of new information technologies on society, I often come across books, columns, blog posts, editorials, and tweets that can basically be summed up with the line from that comic: “we should stop to consider the consequences of [this new technology] before we …”  Or, equally common, the line: “we need to have a conversation about [this new technology] before we…”

But what does that really mean? Certainly “having a conversation” about the impact of a new technology on society is important. But what is the nature of that “conversation”? How is it conducted? How do we know when it is going on or when it is over?

Generally speaking, it is best to avoid guessing at motive when addressing public policy arguments. It is better to address the assertions or proposals set forth in someone’s work rather than speculate about the ulterior motives that may be driving their reasoning.

Nonetheless, I can’t help but think that sometimes what the “we-need-to-have-a-conversation” crowd is really suggesting is that we need to have a conversation about how to slow or stop the technology in question, not merely talk about its ramifications.

I see this at work all the time in the field of privacy policy. Many policy wonks craft gloom-and-doom scenarios suggesting that our privacy is all but dead. I’ve noticed a lot more of this lately in essays about the “Internet of Things” and Google Glass in particular. (See these recent essays by Paul Bernal and Bruce Schneier for good examples.) Dystopian dread drips from almost every line of these essays.

But after conjuring up a long parade of horribles and suggesting “we need to have a conversation” about new technologies, the authors of such essays almost never finish their thought. There’s no conclusion or clear alternative offered. I suppose that in some cases it is because there aren’t any easy answers. Other times, however, I get the feeling that they have an answer in mind — comprehensive regulation of the new technologies in question — but that they don’t want to come out and say it because they think they’ll sound like Luddites. Hell, I don’t know and, again, I don’t want to guess as to motive. I just find it interesting that so much of the writing being done in this arena these days follows that exact model.

But here’s the other point I want to make: I don’t think we’ll ever be able to “have a conversation” about a new technology that yields satisfactory answers because real wisdom is born of experience. This is one of the many important lessons I learned from my intellectual hero Aaron Wildavsky and his pioneering work on risk and safety. In his seminal 1988 book Searching for Safety, Wildavsky warned of the dangers of the “trial without error” mentality — otherwise known as the precautionary principle approach — and he contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. Wildavsky argued that:

The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards … Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.

This is a lesson too often overlooked not just in the field of health and safety regulation, but also in the world of information policy. This insight is the foundation of a filing I will be submitting to the FTC next week in its new proceeding on the “Privacy and Security Implications of the Internet of Things.” In that filing, I will note that, as was the case with many other new information and communications technologies, the initial impulse may be to curb or control the development of certain Internet of Things technologies to guard against theoretical future misuses or harms.

Again, when such fears take the form of public policy prescriptions, it is referred to as a “precautionary principle” and it generally holds that, because a given new technology could pose some theoretical danger or risk in the future, public policies should control or limit the development of such innovations until their creators can prove that they won’t cause any harms.

The problem with letting such precautionary thinking guide policy is that it poses a serious threat to technological progress, economic entrepreneurialism, and human prosperity. Under an information policy regime guided at every turn by a precautionary principle, technological innovation would be impossible because of fear of the unknown; hypothetical worst-case scenarios would trump all other considerations. Social learning and economic opportunities become far less likely, perhaps even impossible, under such a regime. In practical terms, it means fewer services, lower quality goods, higher prices, diminished economic growth, and a decline in the overall standard of living.

For these reasons, to the maximum extent possible, the default position toward new forms of technological innovation should be innovation allowed. This policy norm is better captured in the well-known Internet ideal of “permissionless innovation,” or the general freedom to experiment and learn through ongoing trial and error.

Stated differently, when it comes to new information technologies such as the Internet of Things, the default policy position should be an “anti-Precautionary Principle.” Paul Ohm, who recently joined the FTC as a Senior Policy Advisor, outlined the concept in his 2008 article, “The Myth of the Superuser: Fear, Risk, and Harm Online.” “Fear of the powerful computer user, the ‘Superuser,’ dominates debates about online conflict,” Ohm argued, but this superuser is generally “a mythical figure” concocted by those who are typically quick to set forth worst-case scenarios about the impact of digital technology on society. Fear of such superusers and the hypothetical worst-case dystopian scenarios they might bring about prompts policy action, since “Policymakers, fearful of his power, too often overreact by passing overbroad, ambiguous laws intended to ensnare the Superuser but which are instead used against inculpable, ordinary users.” “This response is unwarranted,” Ohm says, “because the Superuser is often a marginal figure whose power has been greatly exaggerated.”

Ohm gets it exactly right, and he could have cited Wildavsky on the matter, who noted that, “’Worst case’ assumptions can convert otherwise quite ordinary conditions… into disasters, provided only that the right juxtaposition of unlikely factors occur.” In other words, creative minds can string together some random anecdotes or stories and concoct horrific-sounding scenarios for the future that leave us searching for preemptive solutions to problems that haven’t even developed yet.

Unfortunately, fear of “superusers” and worst-case boogeyman scenarios are already driving much of the debate over the Internet of Things. Most of the fear and loathing involves privacy-related dystopian scenarios that envision a miserable panoptic future from which there is no escape. And that’s about the time the authors suggest “we need to have a conversation” about these new technologies — by which they really mean to suggest we need to find ways to put the genie back in the bottle or smash the bottle before the genie even gets out.

But how are we to know what the future holds? And even to the extent some critics believe they possess a techno-crystal ball that can forecast the future, why is it seemingly always the case that none of those possible futures involves humans gradually adapting and assimilating these new technologies into their lives the way they have countless times before? In my FTC filing next week, I will document examples of that process of initial resistance, gradual adaptation, and then eventual assimilation of various new information technologies into society. But I have already developed a model explaining this process and offering plenty of examples in my recent law review article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” as well as in this lengthy blog post, “Who Really Believes in ‘Permissionless Innovation’?”

In sum, the most important “conversations” we have about new technologies are the ones we have every day as we interact with those new technologies and with each other. Wisdom is born of experience, including experiences involving risk and the possibility of mistakes and accidents. Patience and an openness to permissionless innovation represent the wise disposition toward new technologies, not only because they provide breathing space for future entrepreneurialism, but also because they give us the opportunity to observe both the evolution of societal attitudes toward new technologies and how citizens adapt to them.
