I’ve spent a lot of time here over the years trying to identify the factors that fuel moral panics and “technopanics.” (Here’s a compendium of the dozens of essays I’ve written on this topic.) I brought all this thinking together in a big law review article (“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”) and then also in my new booklet, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.”

One factor I identify as contributing to panics is the fact that “bad news sells.” As I noted in the book, “Many media outlets and sensationalist authors sometimes use fear-based tactics to gain influence or sell books. Fear mongering and prophecies of doom are always effective media tactics; alarmism helps break through all the noise and get heard.”

In line with that, I highly recommend you check out this excellent new op-ed by John Stossel of Fox Business Network on “Good News vs. ‘Pessimism Porn’.” Stossel correctly notes that “the media win by selling pessimism porn.” He says:

Are you worried about the future? It’s hard not to be. If you watch the news, you mostly see violence, disasters, danger. Some in my business call it “fear porn” or “pessimism porn.” People like the stuff; it makes them feel alive and informed.

Of course, it’s our job to tell you about problems. If a plane crashes — or disappears — that’s news. The fact that millions of planes arrive safely is a miracle, but it’s not news. So we soak in disasters — and warnings about the next one: bird flu, global warming, potential terrorism. I won Emmys hyping risks but stopped winning them when I wised up and started reporting on the overhyping of risks. My colleagues didn’t like that as much.

He goes on to note that, even though all the data clearly show that humanity’s lot is improving, the press relentlessly pushes “pessimism porn.”

Last December, it was my pleasure to take part in a great event, “The Disruptive Competition Policy Forum,” sponsored by Project DisCo (or The Disruptive Competition Project). It featured several excellent panels and keynotes, and the video of the panel I was on has just been posted here; I have also embedded it below. In my remarks, I discussed:

  • benefit-cost analysis in digital privacy debates (building on this law review article);
  • the contrast between Europe and America’s approach to data & privacy issues (referencing this testimony of mine);
  • the problem of “technopanics” in information policy debates (building on this law review article);
  • the difficulty of information control efforts in various tech policy debates (which I wrote about in this law review article and these two blog posts: 1, 2);
  • the possibility of less-restrictive approaches to privacy & security concerns (which I have written about here, as well as in those other law review articles);
  • the rise of the Internet of Things and the unique challenges it creates (see this and this as well as my new book); and,
  • the possibility of a splintering of the Internet or the rise of “federated Internets.”

The panel was expertly moderated by Ross Schulman, Public Policy & Regulatory Counsel for CCIA, and also included remarks from John Boswell, SVP & Chief Legal Officer at SAS, and Josh Galper, Chief Policy Officer and General Counsel of Personal, Inc. (By the way, you should check out some of the cool things Personal is doing in this space to help consumers. Very innovative stuff.) The video lasts one hour. Here it is:

I am pleased to announce the release of my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” It’s a short manifesto (just under 100 pages) that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. I have two goals with this book.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or existing laws and traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today.

Last night, I appeared on a short segment on the PBS News Hour discussing, “What’s the future of privacy in a big data world?” I was also joined by Jules Polonetsky, executive director of the Future of Privacy Forum. If you’re interested, here’s the video. Transcript is here. Finally, down below the fold, I’ve listed a few law review articles and other essays of mine on this same subject.

When Google announced it was acquiring digital thermostat company Nest yesterday, it set off another round of privacy and security-related technopanic talk on Twitter and elsewhere. Fear and loathing seemed to be the order of the day. It seems that each new product launch or business announcement in the “Internet of Things” space is destined to set off another round of Chicken Little hand-wringing. We are typically told that the digital sky will soon fall on our collective heads unless we act preemptively to somehow head off some sort of impending privacy or security apocalypse.

Meanwhile, however, a whole heck of a lot of people are demanding more and more of these technologies, and American entrepreneurs are already engaged in heated competition with European and Asian rivals to be at the forefront of the next round of Internet innovation to satisfy those consumer demands. So, how is this going to play out?

This gets to what is becoming the defining policy issue of our time, not just for the Internet but for technology policy more generally: To what extent should the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? We can think of this as “the permission question,” and it is creating a massive rift between those who desire more preemptive, precautionary safeguards for a variety of reasons (safety, security, privacy, copyright, etc.) and those of us who continue to believe that permissionless innovation should be the guiding ethos of our age. The chasm between these two worldviews is only going to deepen in coming years as the pace of innovation around new technologies (the Internet of Things, wearable tech, driverless cars, 3D printing, commercial drones, etc.) continues to accelerate.

Sarah Kessler of Fast Company was kind enough to call me last night and ask for some general comments about Google buying Nest, and she also sought out the comments of Marc Rotenberg of EPIC about privacy in the Internet of Things era more generally. Our comments provide a useful example of the divide between these two worldviews and foreshadow the debates to come.

Tomorrow, the Federal Trade Commission (FTC) will host an all-day workshop entitled, “Internet of Things: Privacy and Security in a Connected World.” [Detailed agenda here.] According to the FTC: “The workshop will focus on privacy and security issues related to increased connectivity for consumers, both in the home (including home automation, smart home appliances and connected devices), and when consumers are on the move (including health and fitness devices, personal devices, and cars).”

Where is the FTC heading on this front? This Politico story by Erin Mershon from last week offers some possible ideas. Yet it remains unclear whether this is just another inquiry into an exciting set of new technologies or whether it is, as I worried in my recent comments to the FTC on this matter, “the beginning of a regulatory regime for a new set of information technologies that are still in their infancy.”

First, for those not familiar with the “Internet of Things,” this short new report from Daniel Castro & Jordan Misra of the Center for Data Innovation offers a good definition:

The “Internet of Things” refers to the concept that the Internet is no longer just a global network for people to communicate with one another using computers, but it is also a platform for devices to communicate electronically with the world around them. The result is a world that is alive with information as data flows from one device to another and is shared and reused for a multitude of purposes. Harnessing the potential of all of this data for economic and social good will be one of the primary challenges and opportunities of the coming decades.

The report goes on to offer a wide range of examples of new products and services that could fulfill this promise.

What I find somewhat worrying about the FTC’s sudden interest in the Internet of Things is that it opens the door for some regulatory-minded critics to encourage preemptive controls on this exciting new wave of digital-age innovation, based almost entirely on hypothetical worst-case scenarios they have conjured up.

Much of my recent research and writing has focused on the contrast between “permissionless innovation” (the notion that innovation should generally be allowed by default) and its antithesis, the “precautionary principle” (the idea that new innovations should be discouraged or even disallowed until their developers can prove that they won’t cause any harms). I have discussed this dichotomy in three recent law review articles, a couple of major agency filings, and several blog posts. Those essays are listed at the end of this post.

In this essay, I want to discuss a recent speech by Federal Trade Commission (FTC) Chairwoman Edith Ramirez and show how precautionary principle thinking is increasingly creeping into modern information technology policy discussions, prompted by the various privacy concerns surrounding “big data” and the “Internet of Things” among other information innovations and digital developments.

First, let me recap the core argument I make in my recent articles and filings.

In a recent essay here, “On the Line between Technology Ethics vs. Technology Policy,” I made the argument that “We cannot possibly plan for all the ‘bad butterfly-effects’ that might occur, and attempts to do so will result in significant sacrifices in terms of social and economic liberty.” It was a response to a problem I see at work in many tech policy debates today: With increasing regularity, scholars, activists, and policymakers are conjuring up a seemingly endless parade of horribles that will befall humanity unless “steps are taken” to preemptively head off all the hypothetical harms they can imagine. (This week’s latest examples involve the two hottest technopanic topics du jour: the Internet of Things and commercial delivery drones. Fear and loathing, and plenty of “threat inflation,” are on vivid display.)

I’ve written about this phenomenon at even greater length in my recent law review article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” as well as in two lengthy blog posts asking the questions, “Who Really Believes in ‘Permissionless Innovation’?” and “What Does It Mean to ‘Have a Conversation’ about a New Technology?” The key point I try to get across in those essays is that letting such “precautionary principle” thinking guide policy poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity. If public policy is guided at every turn by the precautionary mindset, then innovation becomes impossible because of fear of the unknown; hypothetical worst-case scenarios trump all other considerations. Social learning and economic opportunities become far less likely under such a regime. In practical terms, it means fewer services, lower-quality goods, higher prices, diminished economic growth, and a decline in the overall standard of living.

Indeed, if we live in constant fear of the future and become paralyzed by every boogeyman scenario that our creative little heads can conjure up, then we’re bound to end up looking as silly as this classic 2005 parody from The Onion, “Everything That Can Go Wrong Listed.”

What works well as an ethical directive might not work equally well as a policy prescription. Stated differently, what one ought to do in certain situations should not always be synonymous with what one must do by force of law.

I’m going to relate this lesson to tech policy debates in a moment, but let’s first think of an example of how this lesson applies more generally. Consider the Ten Commandments. Some of them make excellent ethical guidelines (especially the stuff about not coveting thy neighbor’s house, wife, or possessions). But most of us would agree that, in a free and tolerant society, only two of the Ten Commandments make good law: Thou shalt not kill and Thou shalt not steal.

In other words, not every sin should be a crime. Perhaps some should be, but most should not. Taking this out of the realm of religion and into the world of moral philosophy, we can apply the lesson more generally as: Not every wise ethical principle makes for wise public policy.

Last month, it was my great pleasure to serve as a “provocateur” at the IAPP’s (Int’l Assoc. of Privacy Professionals) annual “Navigate” conference. The event brought together a diverse audience and set of speakers from across the globe to discuss how to deal with the various privacy concerns associated with current and emerging technologies.

My remarks focused on a theme I have developed here for years: There are no simple, silver-bullet solutions to complex problems such as online safety, security, and privacy. Instead, only a “layered” approach incorporating many different solutions–education, media literacy, digital citizenship, evolving social norms, self-regulation, and targeted enforcement of existing legal standards–can really help us solve these problems. Even then, new challenges will present themselves as technology continues to evolve and evade traditional controls, solutions, or norms. It’s a never-ending game, and that’s why education must be our first-order solution. It better prepares us for an uncertain future. (I explained this approach in far more detail in this law review article.)

Anyway, if you’re interested in an 11-minute video of me saying all that, here ya go. Also, down below I have listed several of the recent essays, papers, and law review articles I have done on this issue.

