Technopanics & the Precautionary Principle

On May 3rd, I’m excited to be participating in a discussion with Yale University bioethicist Wendell Wallach at the Microsoft Innovation & Policy Center in Washington, DC. (RSVP here.) Wallach and I will be discussing issues we write about in our new books, both of which focus on possible governance models for emerging technologies and the question of how much preemptive control society should exercise over new innovations.

Wallach’s latest book is entitled, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. And, as I’ve noted here recently, the greatly expanded second edition of my latest book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, has just been released.

Of all the books of technological criticism or skepticism that I’ve read in recent years—and I have read stacks of them!—A Dangerous Master is by far the most thoughtful and interesting. I have grown accustomed to major works of technological criticism being caustic, angry affairs. Most of them are just dripping with dystopian dread and a sense of utter exasperation and outright disgust at the pace of modern technological change.

Although he is certainly concerned about a wide variety of modern technologies—drones, robotics, nanotech, and more—Wallach isn’t a purveyor of the politics of panic. There are some moments in the book when he resorts to some hyperbolic rhetoric, such as when he frets about an impending “techstorm” and the potential, as the book’s title suggests, for technology to become a “dangerous master” of humanity. For the most part, however, his approach is deeper and more dispassionate than what is found in the leading tracts of other modern techno-critics.


One of my favorite themes, and not just in the field of tech policy, is the “Unintended Consequences of Well-Intentioned Regulations.” I believe that all laws and regulations have dynamic effects and that to fully appreciate the true impact of any particular public policy, you must closely investigate the potential opportunity costs and unintended consequences associated with it. All too often, laws and regulations are hastily put on the books with the very best of intentions in mind, only to later be shown to produce the opposite of what was intended.

Today’s case in point comes from a Wall Street Journal article by Rachel Bachman, and it involves how the growing wave of cycling helmet laws is having a net negative impact on public health because such laws discourage ridership in the aggregate. Those potential riders are then either (a) simply less active overall or (b) driving their cars to get where they need to go. And both of those outcomes are, ultimately, riskier than cycling without a helmet. For that reason, Bachman reports, cycling advocates “are pushing back against mandatory bike-helmet laws in the U.S. and elsewhere. They say mandatory helmet laws, particularly for adults, make cycling less convenient and seem less safe, thus hindering the larger public-health gains of more people riding bikes.” Supporting evidence comes from a 2012 paper in the journal Risk Analysis by Piet de Jong, a professor in the department of applied finance and actuarial studies at Sydney’s Macquarie University. His paper included an empirical model showing how mandatory bike-helmet laws “have a net negative health impact.”
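The structure of that trade-off is easy to see in a back-of-the-envelope calculation. The following sketch is purely illustrative: the function and every parameter value are hypothetical stand-ins, not figures from de Jong’s actual model, but they capture the logic of why a helmet mandate can be a net health loser once deterred riders are counted.

```python
# Toy sketch of de Jong-style net health accounting for a mandatory
# helmet law. All names and numbers below are hypothetical, chosen
# only to illustrate the trade-off, not taken from the 2012 paper.

def net_health_impact(riders_before, deterrence_rate,
                      head_injury_harm_averted_per_rider,
                      exercise_benefit_per_rider,
                      substitute_risk_per_deterred_rider):
    """Return net health impact (positive = law helps, negative = hurts)."""
    riders_after = riders_before * (1 - deterrence_rate)
    deterred = riders_before - riders_after

    # Benefit: helmets reduce head-injury harm for those who keep riding.
    benefit = riders_after * head_injury_harm_averted_per_rider

    # Cost: deterred riders lose the health benefit of cycling and may
    # take on new risks (e.g., driving instead of riding).
    cost = deterred * (exercise_benefit_per_rider
                       + substitute_risk_per_deterred_rider)
    return benefit - cost

# Even a modest 10% deterrence effect lets the forgone exercise benefit
# swamp the injury reduction -- the paper's "net negative" result.
print(net_health_impact(riders_before=100_000, deterrence_rate=0.10,
                        head_injury_harm_averted_per_rider=0.5,
                        exercise_benefit_per_rider=8.0,
                        substitute_risk_per_deterred_rider=1.0))
# prints -45000.0
```

The point of the exercise is that a static analysis (helmets per remaining rider) always looks positive; only a dynamic analysis that models behavioral response to the regulation can surface the net loss.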

This strikes me as one of the very best examples of how to do dynamic benefit-cost analysis and show the full range of societal impacts associated with well-intentioned regulations. And it reminds me of the playground example I use in several of my papers: laws and liability threats discouraged tall playground climbing structures in the ’80s and ’90s.

Since the release of my book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, it has been my pleasure to be invited to speak to dozens of groups about the future of technology policy debates. In the process, I have developed and continuously refined a slide show entitled, “‘Permissionless Innovation’ & the Clash of Visions over Emerging Technologies.” After delivering this talk twice more last week, I figured I would post the latest slide deck I’m using for the presentation. It’s embedded below, or it can be found at the link above.

It was my pleasure this week to be invited to deliver some comments at an event hosted by the Information Technology and Innovation Foundation (ITIF) to coincide with the release of their latest study, “The Privacy Panic Cycle: A Guide to Public Fears About New Technologies.” The goal of the new ITIF report, which was co-authored by Daniel Castro and Alan McQuinn, is to highlight the dangers associated with “the cycle of panic that occurs when privacy advocates make outsized claims about the privacy risks associated with new technologies. Those claims then filter through the news media to policymakers and the public, causing frenzies of consternation before cooler heads prevail, people come to understand and appreciate innovative new products and services, and everyone moves on.” (p. 1)

As Castro and McQuinn describe it, the privacy panic cycle “charts how perceived privacy fears about a technology grow rapidly at the beginning, but eventually decline over time.” They divide this cycle into four phases: Trusting Beginnings, Rising Panic, Deflating Fears, and Moving On. Here’s how they depict it in an image:

[Image: The Privacy Panic Cycle]


I’ve been thinking about the “right to try” movement a lot lately. It refers to the growing movement (especially at the state level here in the U.S.) to allow individuals to experiment with alternative medical treatments, therapies, and devices that are restricted or prohibited in some fashion (typically by the Food and Drug Administration). I think there are compelling ethical reasons for allowing citizens to determine their own course of treatment in terms of what they ingest into their bodies or what medical devices they use, especially when they are facing the possibility of death and have exhausted all other options.

But I also favor a more general “right to try” that allows citizens to make their own health decisions in other circumstances. Such a general freedom entails some risks, of course, but the better way to deal with those potential downsides is to educate citizens about the trade-offs associated with various treatments and devices, not to forbid them from seeking them out at all.

The Costs of Control

But this debate isn’t just about ethics. There’s also the question of the costs associated with regulatory control. Practically speaking, with each passing day it becomes harder and harder for governments to control unapproved medical devices, drugs, and therapies. That, in turn, significantly raises the costs of enforcement and makes one wonder exactly how far the FDA or other regulators will go to stop or slow the advent of new technologies.

I have written about this “cost of control” problem in various law review articles as well as in my little Permissionless Innovation book, pointing out that, when enforcement challenges and costs reach a certain threshold, the case for preemptive control grows far weaker because of (1) the massive resources that regulators would have to pour into the task of crafting a workable enforcement regime; and/or (2) the massive loss of liberty that devising such solutions would entail for society more generally. With the rise of the Internet of Things, wearable devices, mobile medical apps, and other networked health and fitness technologies, these issues are going to become increasingly ripe for academic and policy consideration.

“Why hasn’t Europe fostered the kind of innovation that has spawned hugely successful technology companies?” asks James B. Stewart in an important new column for the New York Times (“A Fearless Culture Fuels U.S. Tech Giants“).

That’s a great question, and one that I have tried to answer in a series of recent essays. (See, for example, “Europe’s Choice on Innovation” and “Embracing a Culture of Permissionless Innovation.”) What I have suggested in those essays is that the starkly different outcomes on either side of the Atlantic in terms of recent economic growth and innovation can primarily be explained by cultural attitudes toward risk-taking and failure. “For innovation and growth to blossom, entrepreneurs need a clear green light from policymakers that signals a general acceptance of risk-taking—especially risk-taking that challenges existing business models and traditional ways of doing things,” I have argued. And the most powerful proof of this is to examine the amazing natural experiment that has played out on either side of the Atlantic over the past two decades with the Internet and the digital economy.

For example, an annual Booz & Company report on the world’s most innovative companies revealed that 9 of the top 10 most innovative companies are based in the U.S. and that most of them are involved in computing and digital technology. None of them are based in Europe, however. Another recent survey revealed that the world’s 15 most valuable Internet companies (based on market capitalizations) have a combined market value of nearly $2.5 trillion, but none of them are European while 11 of them are U.S. firms. Again, it is America’s tech innovators that dominate that list.

Many European officials and business leaders are waking up to this grim reality and are wondering how to reverse this situation. In his Times essay, Stewart quotes Danish economist Jacob Kirkegaard of the Peterson Institute for International Economics, who notes that Europeans “all want a Silicon Valley. . . . But none of them can match the scale and focus on the new and truly innovative technologies you have in the United States. Europe and the rest of the world are playing catch-up, to the great frustration of policy makers there.”

OK, but why is that?

On Sunday night, 60 Minutes aired a feature with the ominous title, “Nobody’s Safe on the Internet,” that focused on connected-car hacking and Internet of Things (IoT) device security. It was followed yesterday morning by the release of a new report from the office of Senator Edward J. Markey (D-Mass.) called Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk, which focused on connected-car security and privacy issues. Employing more than a bit of techno-panic flair, these reports basically suggest that we’re all doomed.

On 60 Minutes, we meet former game developer turned Department of Defense “cyber warrior” Dan (“call me DARPA Dan”) Kaufman, and learn his fears for the future: “Today, all the devices that are on the Internet [and] the ‘Internet of Things’ are fundamentally insecure. There is no real security going on. Connected homes could be hacked and taken over.”

60 Minutes reporter Lesley Stahl, for her part, is aghast. “So if somebody got into my refrigerator,” she ventures, “through the internet, then they would be able to get into everything, right?” Replies DARPA Dan, “Yeah, that’s the fear.” Prankish hackers could make your milk go bad, or hack into your garage door opener, or even your car.

This segues to a humorous segment wherein Stahl takes a networked car for a spin. DARPA Dan and his multiple research teams have been hard at work remotely programming this vehicle for years. A “hacker” on DARPA Dan’s team proceeds to torment poor Lesley with automatic windshield wiping, rude and random beeps, and other hijinks. “Oh my word!” exclaims Stahl.

Claire Cain Miller of The New York Times posted an interesting story yesterday noting how, “Technology Has Made Life Different, but Not Necessarily More Stressful.” Her essay builds on a new study by researchers at the Pew Research Center and Rutgers University on “Social Media and the Cost of Caring.” Miller’s essay and this new Pew/Rutgers study indirectly make a point that I am always discussing in my own work, but that is often ignored or downplayed by many technological critics, namely: We humans have repeatedly proven quite good at adapting to technological change, even when it entails some heartburn along the way.

The major takeaway of the Pew/Rutgers study was that, “social media users are not any more likely to feel stress than others, but there is a subgroup of social media users who are more aware of stressful events in their friends’ lives and this subgroup of social media users does feel more stress.” Commenting on the study, Miller of the Times notes:

Fear of technology is nothing new. Telephones, watches and televisions were similarly believed to interrupt people’s lives and pressure them to be more productive. In some ways they did, but the benefits offset the stressors. New technology is making our lives different, but not necessarily more stressful than they would have been otherwise. “It’s yet another example of how we overestimate the effect these technologies are having in our lives,” said Keith Hampton, a sociologist at Rutgers and an author of the study. . . . Just as the telephone made it easier to maintain in-person relationships but neither replaced nor ruined them, this recent research suggests that digital technology can become a tool to augment the relationships humans already have.

I found this of great interest because I have written about how humans assimilate new technologies into their lives and become more resilient in the process as they learn various coping techniques.

I’ve spent much of the past year studying the potential public policy ramifications associated with the rise of the Internet of Things (IoT). As I was preparing some notes for my Jan. 6th panel discussion on “Privacy and the IoT: Navigating Policy Issues” at the 2015 CES show, I went back and collected all my writing on IoT issues so that I would have everything in one place. Thus, down below I have listed most of what I’ve done over the past year or so. Most of this writing focuses on the privacy and security implications of the Internet of Things, and wearable technologies in particular.

I plan to stay on top of these issues in 2015 and beyond because, as I noted when I spoke on a previous CES panel on these issues, the Internet of Things finds itself at the center of what we might think of as a perfect storm of public policy concerns: privacy, safety, security, intellectual property, economic and labor disruptions, automation concerns, wireless spectrum issues, technical standards, and more. When a new technology raises one or two of these policy concerns, innovators in those sectors can expect some interest and inquiries from lawmakers or regulators. But when a new technology potentially touches all of these issues, innovators in that space can expect an avalanche of attention and a potential world of regulatory trouble. Moreover, it sets the stage for a grand “clash of visions” about the future of IoT technologies that will continue to intensify in coming months and years.

That’s why I’ll be monitoring developments closely in this field going forward. For now, here’s what I’ve done on this issue as I prepare to head out to Las Vegas for another CES extravaganza that promises to showcase so many exciting IoT technologies.

As 2014 draws to a close, we take a look back at the most-read posts from the past year at The Technology Liberation Front. Thank you for reading, and enjoy.