When Google announced it was acquiring digital thermostat company Nest yesterday, it set off another round of privacy and security-related technopanic talk on Twitter and elsewhere. Fear and loathing seemed to be the order of the day. It seems that each new product launch or business announcement in the “Internet of Things” space is destined to trigger another round of Chicken Little hand-wringing. We are typically told that the digital sky will soon fall on our collective heads unless we act preemptively to somehow head off some sort of pending privacy or security apocalypse.
Meanwhile, however, a whole heck of a lot of people are demanding more and more of these technologies, and American entrepreneurs are already engaged in heated competition with European and Asian rivals to be at the forefront of the next round of Internet innovation to satisfy those consumer demands. So, how is this going to play out?
This gets to what is becoming the defining policy issue of our time, not just for the Internet but for technology policy more generally: To what extent should the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? We can think of this as “the permission question,” and it is creating a massive rift between those who desire more preemptive, precautionary safeguards for a variety of reasons (safety, security, privacy, copyright, etc.) and those of us who continue to believe that permissionless innovation should be the guiding ethos of our age. The chasm between these two worldviews is only going to deepen in coming years as the pace of innovation around new technologies (the Internet of Things, wearable tech, driverless cars, 3D printing, commercial drones, etc.) continues to accelerate.
Sarah Kessler of Fast Company was kind enough to call me last night and ask for some general comments about Google buying Nest, and she also sought out the comments of Marc Rotenberg of EPIC about privacy in the Internet of Things era more generally. Our comments provide a useful example of the divide between these two worldviews and foreshadow debates to come.
With each booth I pass and presentation I listen to at the 2014 International Consumer Electronics Show (CES), it becomes increasingly evident that the “Internet of Things” era has arrived. In just a few short years, the Internet of Things (IoT) has gone from industry buzzword to marketplace reality. Countless new IoT devices are on display throughout the halls of the Las Vegas Convention Center this week, including various wearable technologies, smart appliances, remote monitoring services, autonomous vehicles, and much more.
This isn’t vaporware; these are devices or services that are already on the market or will launch shortly. Some will fail, of course, just as many other earlier technologies on display at past CES shows didn’t pan out. But many of these IoT technologies will succeed, driven by growing consumer demand for highly personalized, ubiquitous, and instantaneous services.
But will policymakers let the Internet of Things revolution continue, or will they stop it dead in its tracks? Interestingly, not too many people here in Vegas seem all that worried about the latter outcome. Indeed, what I find most striking about the conversation at CES this week versus the one about IoT that has been taking place in Washington over the past year is that there is a large and growing disconnect between consumers and policymakers about what the Internet of Things means for the future.
When every device has a sensor, a chip, and some sort of networking capability, amazing opportunities become available to consumers. And that’s what has them so excited and ready to embrace these new technologies. But those same capabilities are exactly what raise the blood pressure of many policymakers and policy activists who fear the safety, security, or privacy-related problems that might creep up in a world filled with such technologies.
But at least so far, most consumers don’t seem to share those worries.
James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, discusses the future of Artificial Intelligence (AI). Barrat takes a look at how to create friendly AI with human characteristics, which other countries are developing AI, and what we could expect with the arrival of the Singularity. He also touches on the evolution of AI and how companies like Google and IBM and government entities like DARPA and the NSA are developing artificial general intelligence devices right now.
Washington Post columnist Robert J. Samuelson published an astonishing essay today entitled, “Beware the Internet and the Danger of Cyberattacks.” In the print edition of today’s Post, the essay actually carries a different title: “Is the Internet Worth It?” Samuelson’s answer is clear: It isn’t. He begins his breathless attack on the Internet by proclaiming:
If I could, I would repeal the Internet. It is the technological marvel of the age, but it is not — as most people imagine — a symbol of progress. Just the opposite. We would be better off without it. I grant its astonishing capabilities: the instant access to vast amounts of information, the pleasures of YouTube and iTunes, the convenience of GPS and much more. But the Internet’s benefits are relatively modest compared with previous transformative technologies, and it brings with it a terrifying danger: cyberwar.
And then, after walking through a couple of worst-case hypothetical scenarios, he concludes the piece by saying:
the Internet’s social impact is shallow. Imagine life without it. Would the loss of e-mail, Facebook or Wikipedia inflict fundamental change? Now imagine life without some earlier breakthroughs: electricity, cars, antibiotics. Life would be radically different. The Internet’s virtues are overstated, its vices understated. It’s a mixed blessing — and the mix may be moving against us.
What I found most troubling about this is that Samuelson has serious intellectual chops and usually sweats the details in his analysis of other issues. He understands economic and social trade-offs and usually does a nice job weighing the facts on the ground instead of engaging in the sort of shallow navel-gazing and anecdotal reasoning that many other weekly newspaper columnists engage in on a regular basis.
But that’s not what he does here. His essay comes across as a poorly researched, angry-old-man-shouting-at-the-sky sort of rant. There’s no serious cost-benefit analysis at work here; just the banal assertion that a new technology has created new vulnerabilities. Really, that’s the extent of the logic at work here. Samuelson could have just as well substituted the automobile, airplanes, or any other modern technology for the Internet and drawn the same conclusion: It opens the door to new vulnerabilities (especially national security vulnerabilities) and, therefore, we would be better off without it in our lives.
Today, Jerry Brito, Adam Thierer and I filed comments on the FAA’s proposed privacy rules for “test sites” for the integration of commercial drones into domestic airspace. I’ve been excited about this development ever since I learned that Congress had ordered the FAA to complete the integration by September 2015. Airspace is a vastly underutilized resource, and new technologies are just now becoming available that will enable us to make the most of it.
In our comments, we argue that airspace, like the Internet, could be a revolutionary platform for innovation:
Vint Cerf, one of the “fathers of the Internet,” credits “permissionless innovation” for the economic benefits that the Internet has generated. As an open platform, the Internet allows entrepreneurs to try new business models and offer new services without seeking the approval of regulators beforehand.
Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators.
And in Wired today, I argue that preemptive privacy regulation is unnecessary and unwise:
Regulation at this juncture requires our over-speculating about which types of privacy violations might arise. Since many of these harms may never materialize, pre-emptive regulation is likely to overprotect privacy at the expense of innovation.
Frankly, it wouldn’t even work. Imagine if we had tried to comprehensively regulate online privacy before allowing commercial use of the internet. We wouldn’t have even known how to. We wouldn’t have had the benefit of understanding how online commerce works, nor could we have anticipated the rise of social networking and related phenomena.
Marvin Ammori, a fellow at the New America Foundation and author of the new book On Internet Freedom, explains his view of how the First Amendment applies to the Internet through the lens of constitutional law and real world case studies.
According to Ammori, Internet freedom is a foundational issue for democracy, equivalent to the right to vote or freedom of speech. In fact, he says, the First Amendment can be used as a design principle for how we think about the challenges we face as Internet technology increasingly becomes a part of our lives.
Ammori’s belief in a positive right to speech—that everyone should have access to the most important speech tools in society and be able to speak with and listen to any other speaker without having to seek permission—translates to a belief that the Internet should be made available to everybody, without restrictions aside from those placed on offline speech.
Ammori goes on to explain why he thinks SOPA threatened to infringe upon free speech while net neutrality protects it, suggesting that allowing ISPs to control bandwidth usage is tantamount to forcing Internet users to become passive consumers of information, rather than creators and content-spreaders.
Mention the word “drone” to the average American today and the mental image it will conjure is likely to be of a flying robot weapon being wielded by a practically unaccountable executive. That’s why Sen. Rand Paul’s filibuster to draw attention to the administration’s opaque targeting process was important. I’m afraid, though, that Americans will end up seeing drones only in this negative light. In reality, the thousands of drones that will populate our skies before the end of the decade will be more like this one:
Over at Reason.com today I try to draw the distinction between killbots and TacoCopters, and I make the case that we can’t let our legitimate fears of police surveillance and unaccountable assassinations keep us from the benefits of commercial drones.
Requiring that police get a warrant before engaging in surveillance is a no-brainer. But there is a danger that fear of governmental abuse of drones might result in the public demanding—or at least politicians hearing them ask for—precautionary restrictions on personal and commercial uses as well. For example, a bill being considered in New Hampshire would make all aerial photography illegal. And a bill recently introduced in the U.S. House of Representatives would make it a crime to use a private drone to photograph someone “in a manner that is highly offensive to a reasonable person … engaging in a personal or familial activity under circumstances in which the individual had a reasonable expectation of privacy”—a somewhat convoluted standard.
Restrictions on private drones may indeed be necessary some day, as the impending explosion of drone activity will no doubt disrupt our current social patterns. But before deciding on these restrictions, shouldn’t legislators and regulators wait until more than a tiny fraction of the thousands of domestic drones the FAA estimates will be active this decade are actually in the air?
If officials don’t wait, they are bound to set the wrong rules since they will have no real data and only their imaginations to go on. It’s quite possible that existing privacy and liability laws will adequately handle most future conflicts. It’s also likely social norms will evolve and adapt to a world replete with robots.
Let’s talk about “permissionless innovation.” We all believe in it, right? Or do we? What does it really mean? How far are we willing to take it? What are its consequences? What is its opposite? How should we balance them?
What got me thinking about these questions was a recent essay over at The Umlaut by my Mercatus Center colleague Eli Dourado entitled, “‘Permissionless Innovation’ Offline as Well as On.” He opened by describing the notion of permissionless innovation as follows:
In Internet policy circles, one is frequently lectured about the wonders of “permissionless innovation,” that the Internet is a global platform on which college dropouts can try new, unorthodox methods without the need to secure authorization from anyone, and that this freedom to experiment has resulted in the flourishing of innovative online services that we have observed over the last decade.
Eli goes on to ask “why it is that permissionless innovation should be restricted to the Internet. Can’t we have this kind of dynamism in the real world as well?”
That’s a great question, but let’s ponder an even more fundamental one: Does anyone really believe in the ideal of “permissionless innovation”? Is there anyone out there who makes a consistent case for permissionless innovation across the technological landscape, or is it the case that a fair degree of selective morality is at work here? That is, people love the idea of “permissionless innovation” until they find reasons to hate it — namely, when it somehow conflicts with certain values they hold dear.
When the smoke cleared and I found myself half caught-up on sleep, the information and sensory overload that was CES 2013 had ended.
There was a kind of split-personality to how I approached the event this year. Monday through Wednesday was spent in conference tracks, most of all the excellent Innovation Policy Summit put together by the Consumer Electronics Association. (Kudos again to Gary Shapiro, Michael Petricone and their team of logistics judo masters.)
The Summit has become an important annual event bringing together legislators, regulators, industry and advocates to help solidify the technology policy agenda for the coming year and, in this case, a new Congress.
I spent Thursday and Friday on the show floor, looking in particular for technologies that satisfy what I coined The Law of Disruption: social, political, and economic systems change incrementally, but technology changes exponentially.
The precautionary principle generally states that new technologies should be restricted or heavily regulated until they are proven absolutely safe. In other words, out of an abundance of caution, the precautionary principle holds that it is “better to be safe than sorry,” regardless of the costs or consequences. The problem with that, as Kevin Kelly reminded us in his 2010 book, What Technology Wants, is that because “every good produces harm somewhere… by the strict logic of an absolute Precautionary Principle no technologies would be permitted.” The precautionary principle is, in essence, the arch-enemy of progress and innovation. Progress becomes impossible when experimentation and trade-offs are considered unacceptable.