The Growing Conflict of Visions over the Internet of Things & Privacy

January 14, 2014

When Google announced yesterday that it was acquiring digital thermostat company Nest, it set off another round of privacy- and security-related technopanic talk on Twitter and elsewhere. Fear and loathing seemed to be the order of the day. It seems that each new product launch or business announcement in the “Internet of Things” space is destined to set off another round of Chicken Little hand-wringing. We are typically told that the digital sky will soon fall on our collective heads unless we act preemptively to somehow head off some sort of pending privacy or security apocalypse.

Meanwhile, however, a whole heck of a lot of people are demanding more and more of these technologies, and American entrepreneurs are already engaged in heated competition with European and Asian rivals to be at the forefront of the next round of Internet innovation to satisfy those consumer demands. So, how is this going to play out?

This gets to what is becoming the defining policy issue of our time, not just for the Internet but for technology policy more generally: To what extent should the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? We can think of this as “the permission question,” and it is creating a massive rift between those who desire more preemptive, precautionary safeguards for a variety of reasons (safety, security, privacy, copyright, etc.) and those of us who continue to believe that permissionless innovation should be the guiding ethos of our age. The chasm between these two worldviews is only going to deepen in coming years as the pace of innovation around new technologies (the Internet of Things, wearable tech, driverless cars, 3D printing, commercial drones, etc.) continues to accelerate.

Sarah Kessler of Fast Company was kind enough to call me last night and ask for some general comments about Google buying Nest and she also sought out the comments of Marc Rotenberg of EPIC about privacy in the Internet of Things era more generally. Our comments provide a useful example of the divide between these two worldviews and foreshadow debates to come:

With an estimated 50 billion connected objects coming online by 2020, some see good reason to put policies in place that regulate the new categories of data they will collect about the people who use those products. “The basic problem with the Internet of Things, unless privacy safeguards are established up front, is that users will lose control over the data they generate,” Marc Rotenberg, the president of the Electronic Privacy Information Center, told Fast Company in an email. Others see the emerging category as a perfect reason to block omnibus attempts to regulate user data. “If we spend all of our time living in fear of hypothetical worst-case scenarios, then the best-case scenarios will never come about,” says Adam Thierer, a Senior Research Fellow at George Mason University’s Mercatus Center. “That’s the nature of how innovation works. You have to allow for risks and experimentation, and even accidents and failures, if you want to get progress.”

Last week, I wrote about this conflict of visions in my dispatch from the CES show, and this topic is also the focus of my forthcoming eBook, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” To reiterate what I already said, my book will describe the future of the Internet of Things and all technology policy as a grand battle between the “precautionary principle” and “permissionless innovation.” The “precautionary principle” refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions. The other worldview, “permissionless innovation,” refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, if they develop at all, can be addressed later.

While those adhering to the precautionary principle mindset tend to favor “top-down” legalistic approaches to solving those potential problems that might crop up, those of us who favor the permissionless innovation approach favor “bottom-up” solutions that evolve over time but do not interrupt the ongoing experimentation and innovation that consumers demand. What does a “bottom-up” approach mean in practice? Education and empowerment, social pressure, societal norms, voluntary self-regulation, and targeted enforcement of existing legal norms (especially through the common law) are almost always superior to top-down, command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I” (i.e., permissioned) nature.

We really should not underestimate the power of norms and public pressure to “regulate” in this regard, perhaps even better than law, which tends to be too slow-moving to make much of a difference. In my book I spend a great deal of time talking about how other technological innovations have been shaped by social norms, public pressure, and press attention. The same will be true for the Internet of Things and the various new technologies I discuss in my book. People will gradually adapt to the new technological realities and integrate these new devices and services into their lives over time.

Perhaps, then, it will be the case that if Google does something particularly bone-headed with Nest, a public backlash will ensue. Or maybe some consumers will just reject Nest and look for other options, which is apparently what Rotenberg is doing according to the Fast Company article. Of course, as I noted in concluding the interview, others may act quite differently and accept Nest and other new Internet of Things technologies, even if there are some privacy or security downsides. As I told Sarah Kessler, while I was visiting the consumer electronics show last week, I heard it was freezing back here in DC. If I had had a Nest in my house, perhaps Google Now could have alerted me to the dangerously low temps in my house and suggested that I raise the temperature remotely before my pipes froze. As I noted to Kessler:

“Would that have been creepy?” he says. “To me it would have been helpful. So for everything that people regard as a negative, I can usually find a positive. And if there’s that balance there, then it should be left to individuals to decide for themselves how to decide that balance.”

Finally, since I often get accused of being some sort of nihilist in these debates, I want to make it clear that ethics should influence all these discussions, but I prefer that we not impose ethics in a heavy-handed, inflexible way through preemptive, proscriptive regulatory controls. It makes more sense to wait and see how things play out before regulating to address harms, once we figure out which ones are real. (See the second and third essays listed below for more on ethics and technological innovation.) But we absolutely need to be engaging in robust societal discussions about digital ethics, digital citizenship, privacy and security by design, and sensible online etiquette. I’ve spent a lifetime writing about the power of that approach in the context of online child safety and I think it is equally applicable for privacy and security-related matters. In particular, we need to talk to our kids and our future technologists and innovators about smarter digital habits that respect the safety, security, and privacy of others. Those conversations can help us chart a more sensible path forward without sacrificing the many benefits that accompany the ongoing technological revolution we are blessed to be experiencing today.


Additional Reading:

  • Arps

    Adam, while I am traditionally sympathetic to your views on privacy regulation, I’m not sure your post follows:

    1. You criticize Google-Nest technopanic but curiously don’t provide any examples. The post smells like a straw man — those who don’t like the Google-Nest deal are handmaidens to more government regulation.

2a. It’s totally consistent to be (a) worried about new privacy threats (b) without wanting government regulation. In many cases, technophobia is a way of clearing information — a way to warn customers that sharing data with companies often isn’t worth it. Boycotts and consumer reports may be “market-phobic” but are also consistent with (and required by) free market views.

    2b. In this case, I suspect people are worried because Google has had a very suspect approach to privacy controls and now has a direct pathway to our homes.

    2c. Your tone will convince few. I know you’ve recognized the distinction in the past, but your current post (among others) suggests that if we love capitalism, we should love what capitalists are doing. In practice, that turns people off to capitalism.

3. The Snowden revelations seriously undermine the libertarian view based on the distinction between public and private surveillance. We know now that the government often relies on private companies to collect citizens’ data. If the Xbox 360 can continuously peer into your living room, we need to assume the NSA can usurp the device and do so as well. Therefore, libertarian technopanic is rational. Apart from Snowden, courts are saying there’s no reasonable expectation of privacy in much of the data that companies are collecting. Therefore, if companies are expanding surveillance, so too is the government when it hacks, strong-arms, or subpoenas them.

Somebody on TLF needs to address #3, because I think it’s a game-changer. I’d be interested to see any libertarian rebuttals to it.

  • stop the cyborgs

Why do you characterise the actions of democracies as top down and the actions of large corporate dictatorships as bottom up?
