In their paper, “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy,” my Mercatus Center colleagues Jerry Brito and Tate Watkins warned of the dangers of “threat inflation” in cybersecurity policy debates. In early 2011, Mercatus also published a paper by Sean Lawson, an assistant professor in the Department of Communication at the University of Utah, entitled “Beyond Cyber Doom” that documented how fear-based tactics and cyber-doom scenarios and rhetoric increasingly were on display in cybersecurity policy debates. Finally, in my recent Mercatus Center working paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” I extended their threat inflation analysis and developed a comprehensive framework offering additional examples of, and explanations for, threat inflation in technology policy debates.
These papers make it clear that a sort of hysteria has developed around cyberwar and cybersecurity issues. Frequent allusions are made in cybersecurity debates to the potential for a “Digital Pearl Harbor,” a “cyber cold war,” a “cyber Katrina,” or even a “cyber 9/11.” These analogies are made even though these historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to “cyber bombs” even though no one can be “bombed” with binary code. And new examples of such inflationary rhetoric seem to emerge each day.
[UPDATE: 2/14/2013: As noted here, this paper was published by the Minnesota Journal of Law, Science & Technology in their Winter 2013 edition. Please refer to that post for more details and cite this final version of the paper going forward.]
I’m pleased to report that the Mercatus Center at George Mason University has just released my huge new white paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” I’ve been working on this paper for a long time and look forward to finding it a home in a law journal some time soon. Here’s the summary of this 80-page paper:
Fear is an extremely powerful motivating force, especially in public policy debates where it is used in an attempt to sway opinion or bolster the case for action. Often, this action involves preemptive regulation based on false assumptions and evidence. Such fears are frequently on display in the Internet policy arena and take the form of a full-blown “technopanic,” a real-world manifestation of this illogical fear. While it’s true that cyberspace has its fair share of troublemakers, there is no evidence that the Internet is leading to greater problems for society.
This paper considers the structure of fear appeal arguments in technology policy debates and then outlines how those arguments can be deconstructed and refuted in both cultural and economic contexts. Several examples of fear appeal arguments are offered with a particular focus on online child safety, digital privacy, and cybersecurity. The various factors contributing to “fear cycles” in these policy areas are documented.
To the extent that these concerns are valid, they are best addressed by ongoing societal learning, experimentation, resiliency, and coping strategies rather than by regulation. If steps must be taken to address these concerns, education and empowerment-based solutions represent superior approaches to dealing with them compared to a precautionary principle approach, which would limit beneficial learning opportunities and retard technological progress.
The complete paper can be found on the Mercatus site here, on SSRN, or on Scribd. I’ve also embedded it below in a Scribd reader.
Just last week I was discussing the terrifically interesting work of Michael Sacasas, who pens The Frailest Thing, a poetic blog about technology and culture. [see: "Information Revolutions & Cultural / Economic Tradeoffs"] I highly recommend you follow his blog even if you struggle to keep up with his brilliance, as I often do. He posted another great essay today entitled “Nostalgia: The Third Wave,” in which he discusses the work of the late social critic Christopher Lasch on memory and nostalgia. Go read the entire thing since I cannot possibly do it justice here. Anyway, I posted a short comment over there that I thought I would just republish here in case others are interested. I find the issue of nostalgia to be quite interesting.
Michael… I’m currently finishing up a paper looking at the causes of various “techno-panics” over time. I try to group together a variety of theories and possible explanations, one of which is labeled “Hyper-Nostalgia, Pessimistic Bias & Soft Ludditism.” I don’t go into anywhere near the detail you do here, but I did unearth a number of interesting things while conducting research.
Have you ever come across the book On Longing: Narratives of the Miniature, the Gigantic, the Souvenir, the Collection, by the poet Susan Stewart? She notes that what is ironic about nostalgia is that it is rooted in something typically unknown by the proponent. Consequently, she argues that nostalgia represents “a sadness without an object, a sadness which creates a longing that of necessity is inauthentic because it does not take part in lived experience. Rather, it remains behind and before that experience.” Too often, Stewart observes, “nostalgia wears a distinctly utopian face” and thus becomes a “social disease.”
That’s probably a bit extreme, but it does help explain why some intellectuals, social critics, and policymakers occasionally demonize new media, technologies, or forms of culture. If one is suffering from a rather extreme version of what Michael Shermer refers to as “rosy retrospection bias” (The Believing Brain, 2011), or “the tendency to remember past events as being more positive than they actually were,” then it would hardly be surprising that one would adopt attitudes and policies that disfavor the new and different.
My thanks to both Maria H. Andersen and Michael Sacasas for their thoughtful responses to my recent Forbes essay on “10 Things Our Kids Will Never Worry About Thanks to the Information Revolution.” They both go point by point through my Top 10 list and offer an alternative way of looking at each of the trends I identify. What their responses share in common is a general unease with the hyper-optimism of my Forbes piece. That’s understandable. In my work on technological “optimism” and “pessimism” — and yes, I admit those labels are overly simplistic — I typically try to strike a sensible balance between Pollyannaism and hyper-pessimism as it pertains to the impact of technological change on our culture and economy. I have called this middle-ground position “pragmatic optimism.” In my Forbes essay, however, I was in full-blown Pollyanna mode. That doesn’t mean I don’t generally feel very positive about the changes I itemized in that essay; rather, I just didn’t have the space in a 1,000-word column to identify the tradeoffs inherent in each trend. Thus, Andersen and Sacasas are rightfully pushing back against my lack of balance.
But there is a problem with their slightly pessimistic pushback, too. To better explain my own position and respond to Andersen and Sacasas, let me return to the story we hear again and again in discussions about technological change: the well-known allegorical tale from Plato’s Phaedrus about the dangers of the written word. In the tale, the god Theuth comes to King Thamus and boasts of how Theuth’s invention of writing would improve the wisdom and memory of the masses relative to the oral tradition of learning. King Thamus shot back, “the discoverer of an art is not the best judge of the good or harm which will accrue to those who practice it.” King Thamus then passed judgment himself about the impact of writing on society, saying he feared that the people “will receive a quantity of information without proper instruction, and in consequence be thought very knowledgeable when they are for the most part quite ignorant.”
After recounting Plato’s allegory in my essay, “Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society,” I noted how this same tension has played out in every subsequent debate about the impact of a new technology on culture, values, morals, language, learning, and so on. It is a never-ending cycle.
In my ongoing work on technopanics, I’ve frequently noted how special interests create phantom fears and use “threat inflation” in an attempt to win attention and public contracts. In my next book, I have an entire chapter devoted to explaining how “fear sells” and I note how often companies and organizations incite fear to advance their own ends. Cybersecurity and child safety debates are littered with examples.
In their recent paper, “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy,” my Mercatus Center colleagues Jerry Brito and Tate Watkins argued that “a cyber-industrial complex is emerging, much like the military-industrial complex of the Cold War.” As Stefan Savage, a professor in the Department of Computer Science and Engineering at the University of California, San Diego, told The Economist magazine, the cybersecurity industry sometimes plays “fast and loose” with the numbers because it has an interest in “telling people that the sky is falling.” In a similar vein, many child safety advocacy organizations use technopanics to pressure policymakers to fund initiatives they create. [Sometimes I can get a bit snarky about this.]
[NOTE: The following is a template for how to script congressional testimony when invited to speak about online safety issues.]
Mr. Chairman and members of the Committee, thank you for inviting me here today to testify about the most important issue to me and everyone in this room: Our children.
There is nothing I care more about than the future of our children. Like Whitney Houston, “I believe the children are our future.”
Mr. Chairman, I remember with fondness the day my little Johnny and Jannie came into this world. They were my little miracles. Gifts from God, I say. At the moment of birth, my wife… oh, well, I could tell you all about it someday but suffice it to say it was a beautiful scene, with the exception of all the amniotic fluid and blood everywhere. I wept for days.
Today my kids are (mention ages of each) and they are the cutest little angels on God’s green Earth. (NOTE: At this point it would be useful for you to hold up a picture of your kids, preferably with them cuddling with cute stuffed animals, a kitten, or petting a pony as in the example below. Alternatively, use a picture taken at a major attraction located in the Chairman’s congressional district.)
Mark Thompson has a new essay up over at Time on “Cyber War Worrywarts” in which he argues that in debates about cybersecurity, “the ratio of scaremongers to calm logic [is] currently about a 2-to-1 edge in favor of the Jules Verne crowd.” He’s right. In fact, I used my latest Forbes essay to document some of the panicky rhetoric and examples of “threat inflation” we currently see at work in debates over cybersecurity policy. “Threat inflation” refers to the artificial escalation of dangers or harms to society or the economy, and doom-and-gloom rhetoric is certainly on the rise in this arena.
I begin my essay by noting how “It has become virtually impossible to read an article about cybersecurity policy, or sit through any congressional hearing on the issue, without hearing prophecies of doom about an impending ‘Digital Pearl Harbor,’ a ‘cyber Katrina,’ or even a ‘cyber 9/11.’” Meanwhile, Gen. Michael Hayden, who led the National Security Agency and Central Intelligence Agency under President George W. Bush, recently argued that a “digital Blackwater” may be needed to combat the threat of cyberterrorism.
These rhetorical claims are troubling to me for several reasons. I build on the concerns raised originally in an important Mercatus Center paper by my colleagues Jerry Brito and Tate Watkins, which warns of the dangers of threat inflation in policy debates and the corresponding rise of the “cybersecurity industrial complex.” In my Forbes essay, I note that:
I enjoyed this Wall Street Journal essay by Daniel H. Wilson on “The Terrifying Truth About New Technology.” It touches on many of the themes I’ve discussed here in my essays on techno-panics, fears about information overload, and the broader battle throughout history between technology optimists and pessimists about the impact of new technologies on culture, life, and learning. Wilson correctly notes that:
The fear of the never-ending onslaught of gizmos and gadgets is nothing new. The radio, the telephone, Facebook — each of these inventions changed the world. Each of them scared the heck out of an older generation. And each of them was invented by people who were in their 20s.
Young people adapt quickly to the most absurd things. Consider the social network Foursquare, in which people not only willingly broadcast their location to the world but earn goofy virtual badges for doing so. My first impulse was to ignore Foursquare—for the rest of my life, if I have to.
And that’s the problem. As we get older, the process of adaptation slows way down. Unfortunately, we depend on alternating waves of assimilation and accommodation to adapt to a constantly changing world. For [developmental psychologist Jean] Piaget, this balance between what’s in the mind and what’s in the environment is called equilibrium. It’s pretty obvious when equilibrium breaks down. For example, my grandmother has phone numbers taped to her cellphone. Having grown up with the Rolodex (a collection of numbers stored next to the phone), she doesn’t quite grasp the concept of putting the numbers in the phone.
Why are we so nostalgic about the technology we grew up with? Old people say things like: “This new technology is stupid. I liked (new, digital) technology X better when it was called (old, analog) technology Y. Why, back in my day….” Which leads inexorably to, “I just don’t get it.”
There’s a simple explanation for this phenomenon: “adventure window.” At a certain age, that which is familiar and feels safe becomes more important to you than that which is new, different, and exciting. Think of it as “set-in-your-ways syndrome.”
Facebook announced yesterday that it had finished most of the global roll-out of its photo tag suggestion feature, which began in the U.S. last December. Now ZDNet reports that European privacy regulators are already planning a probe of this. Emil Protalinski writes:
“Tags of people on pictures should only happen based on people’s prior consent and it can’t be activated by default,” Gerard Lommel, a Luxembourg member of the so-called Article 29 Data Protection Working Party, told BusinessWeek. Such automatic tagging “can bear a lot of risks for users” and the group of European data protection officials will “clarify to Facebook that this can’t happen like this.”
No doubt our friends at the Extra-Paternalist Internet Cops (EPIC) will jump into the fray with another of their many complaints to the FTC, dripping with outrage that Facebook has “opted us into” this feature. But what’s the big deal, really? Emil explains how things work:
When you upload new photos, Facebook uses software similar to that found in many photo editing tools to match your new photos to other photos you’re tagged in. Similar photos are grouped together and, whenever possible, Facebook suggests the name(s) of your friend(s) in the photos. In other words, the square that magically finds faces in a photo now suggests names of your Facebook friends to streamline the tagging process, especially with the same friends in multiple uploaded photos.
Lifehacker explains how easy it is for Facebook users to opt out of having their faces automatically suggested to their friends for tagging (as Facebook did in its own announcement):
- Head to your Privacy Settings and click on Customize Settings.
- Scroll down to the “Suggest Photos of Me to Friends” setting and hit “Edit Settings”.
- In the drop-down on the right, hit “Disable”.
See the screenshots here. So, in short: The feature that’s upsetting the privacy regulationistas is a feature that saves us time and effort in tagging our friends in photos we upload—unless our friends have opted out of having their photos auto-suggested.
John Naughton, a professor at the Open University in the U.K. and a columnist for the U.K. Guardian, has a new essay out entitled “Only a Fool or Nicolas Sarkozy Would Go to War with Facebook.” I enjoyed it because it touches upon two interrelated concepts that I’ve spent years writing about: “moral panic” and the “third-person effect hypothesis” (although Naughton doesn’t discuss the latter by name in his piece). To recap, let’s define those terms:
“Moral Panic” / “Techno-Panic“: Christopher Ferguson, a professor at Texas A&M’s Department of Behavioral, Applied Sciences and Criminal Justice, offers the following definition: “A moral panic occurs when a segment of society believes that the behavior or moral choices of others within that society poses a significant risk to the society as a whole.” By extension, a “techno-panic” is simply a moral panic that centers around societal fears about a specific contemporary technology (or technological activity) instead of merely the content flowing over that technology or medium.
“Third-Person Effect Hypothesis“: First formulated by psychologist W. Phillips Davison in 1983, “this hypothesis predicts that people will tend to overestimate the influence that mass communications have on the attitudes and behavior of others. More specifically, individuals who are members of an audience that is exposed to a persuasive communication (whether or not this communication is intended to be persuasive) will expect the communication to have a greater effect on others than on themselves.” While originally formulated as an explanation for how people convinced themselves “media bias” existed where none was present, the third-person-effect hypothesis has provided an explanation for other phenomena and forms of regulation, especially content censorship. Indeed, one of the most intriguing aspects of censorship efforts historically is that many censorship advocates desire regulation to protect others, not themselves, from what they perceive to be persuasive or harmful content. That is, many people imagine themselves immune from the supposedly ill effects of “objectionable” material, or even just persuasive communications or viewpoints they do not agree with, but they claim it will have a corrupting influence on others.
All my past essays about moral panics and third-person effect hypothesis can be found here. These theories are also frequently on display in the work of some of the “Internet pessimists” I have written about here, as well as in many bills and regulatory proposals floated by lawmakers. Which brings us back to the Naughton essay.