Today, Jerry Brito, Adam Thierer and I filed comments on the FAA’s proposed privacy rules for “test sites” for the integration of commercial drones into domestic airspace. I’ve been excited about this development ever since I learned that Congress had ordered the FAA to complete the integration by September 2015. Airspace is a vastly underutilized resource, and new technologies are just now becoming available that will enable us to make the most of it.
In our comments, we argue that airspace, like the Internet, could be a revolutionary platform for innovation:
Vint Cerf, one of the “fathers of the Internet,” credits “permissionless innovation” for the economic benefits that the Internet has generated. As an open platform, the Internet allows entrepreneurs to try new business models and offer new services without seeking the approval of regulators beforehand.
Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators.
And in Wired today, I argue that preemptive privacy regulation is unnecessary and unwise:
Regulation at this juncture requires our over-speculating about which types of privacy violations might arise. Since many of these harms may never materialize, pre-emptive regulation is likely to overprotect privacy at the expense of innovation.
Frankly, it wouldn’t even work. Imagine if we had tried to comprehensively regulate online privacy before allowing commercial use of the internet. We wouldn’t have even known how to. We wouldn’t have had the benefit of understanding how online commerce works, nor could we have anticipated the rise of social networking and related phenomena.
Marvin Ammori, a fellow at the New America Foundation and author of the new book On Internet Freedom, explains his view of how the First Amendment applies to the Internet through the lens of constitutional law and real-world case studies.
According to Ammori, Internet freedom is a foundational issue for democracy, equivalent to the right to vote or freedom of speech. In fact, he says, the First Amendment can be used as a design principle for how we think about the challenges we face as Internet technology increasingly becomes a part of our lives.
Ammori’s belief in a positive right to speech—that everyone should have access to the most important speech tools in society and be able to speak with and listen to any other speaker without having to seek permission—translates to a belief that the Internet should be made available to everybody, without restrictions aside from those placed on offline speech.
Ammori goes on to explain why he thinks SOPA threatened to infringe upon free speech while net neutrality protects it, suggesting that allowing ISPs to control bandwidth usage is tantamount to forcing internet users to become passive consumers of information, rather than creators and content-spreaders.
Mention the word “drone” to the average American today and the mental image it will conjure is likely to be of a flying robot weapon being wielded by a practically unaccountable executive. That’s why Sen. Rand Paul’s filibuster to draw attention to the administration’s opaque targeting process was important. I’m afraid, though, that Americans will end up seeing drones only in this negative light. In reality, the thousands of drones that will populate our skies before the end of the decade will be more like this one:
Over at Reason.com today I try to draw the distinction between killbots and TacoCopters, and I make the case we can’t let our legitimate fears of police surveillance and unaccountable assassinations keep us from the benefits of commercial drones.
Requiring that police get a warrant before engaging in surveillance is a no-brainer. But there is a danger that fear of governmental abuse of drones might result in the public demanding—or at least politicians hearing them ask for—precautionary restrictions on personal and commercial uses as well. For example, a bill being considered in New Hampshire would make all aerial photography illegal. And a bill recently introduced in the U.S. House of Representatives would make it a crime to use a private drone to photograph someone “in a manner that is highly offensive to a reasonable person … engaging in a personal or familial activity under circumstances in which the individual had a reasonable expectation of privacy”—a somewhat convoluted standard.
Restrictions on private drones may indeed be necessary some day, as the impending explosion of drone activity will no doubt disrupt our current social patterns. But before deciding on these restrictions, shouldn’t legislators and regulators wait until more than a tiny fraction of the thousands of domestic drones the FAA estimates will be active this decade are actually flying around?
If officials don’t wait, they are bound to set the wrong rules since they will have no real data and only their imaginations to go on. It’s quite possible that existing privacy and liability laws will adequately handle most future conflicts. It’s also likely social norms will evolve and adapt to a world replete with robots.
Let’s talk about “permissionless innovation.” We all believe in it, right? Or do we? What does it really mean? How far are we willing to take it? What are its consequences? What is its opposite? How should we balance them?
What got me thinking about these questions was a recent essay over at The Umlaut by my Mercatus Center colleague Eli Dourado entitled, “‘Permissionless Innovation’ Offline as Well as On.” He opened by describing the notion of permissionless innovation as follows:
In Internet policy circles, one is frequently lectured about the wonders of “permissionless innovation,” that the Internet is a global platform on which college dropouts can try new, unorthodox methods without the need to secure authorization from anyone, and that this freedom to experiment has resulted in the flourishing of innovative online services that we have observed over the last decade.
Eli goes on to ask “why it is that permissionless innovation should be restricted to the Internet. Can’t we have this kind of dynamism in the real world as well?”
That’s a great question, but let’s ponder an even more fundamental one: Does anyone really believe in the ideal of “permissionless innovation”? Is there anyone out there who makes a consistent case for permissionless innovation across the technological landscape, or is it the case that a fair degree of selective morality is at work here? That is, people love the idea of “permissionless innovation” until they find reasons to hate it — namely, when it somehow conflicts with certain values they hold dear.
When the smoke cleared and I found myself half caught-up on sleep, the information and sensory overload that was CES 2013 had ended.
There was a kind of split-personality to how I approached the event this year. Monday through Wednesday was spent in conference tracks, most of all the excellent Innovation Policy Summit put together by the Consumer Electronics Association. (Kudos again to Gary Shapiro, Michael Petricone and their team of logistics judo masters.)
The Summit has become an important annual event bringing together legislators, regulators, industry and advocates to help solidify the technology policy agenda for the coming year and, in this case, a new Congress.
I spent Thursday and Friday on the show floor, looking in particular for technologies that satisfy what I coined The Law of Disruption: social, political, and economic systems change incrementally, but technology changes exponentially.
The precautionary principle generally states that new technologies should be restricted or heavily regulated until they are proven absolutely safe. In other words, out of an abundance of caution, the precautionary principle holds that it is “better to be safe than sorry,” regardless of the costs or consequences. The problem with that, as Kevin Kelly reminded us in his 2010 book, What Technology Wants, is that because “every good produces harm somewhere… by the strict logic of an absolute Precautionary Principle no technologies would be permitted.” The precautionary principle is, in essence, the arch-enemy of progress and innovation. Progress becomes impossible when experimentation and trade-offs are considered unacceptable.
Psychologists Daniel Simons and Christopher Chabris had an interesting editorial in The Wall Street Journal this weekend asking, “Do Our Gadgets Really Threaten Planes?” They conducted an online survey of 492 American adults who have flown in the past year and found that “40% said they did not turn their phones off completely during takeoff and landing on their most recent flight; more than 7% left their phones on, with the Wi-Fi and cellular communications functions active. And 2% pulled a full Baldwin, actively using their phones when they weren’t supposed to.”
Despite the widespread prevalence of such law-breaking activity, planes aren’t falling from the sky, and yet the Federal Aviation Administration continues to enforce the rule prohibiting the use of digital gadgets at certain times during flight. “Why has the regulation remained in force for so long despite the lack of solid evidence to support it?” Simons and Chabris ask. They note:
Human minds are notoriously overzealous “cause detectors.” When two events occur close in time, and one plausibly might have caused the other, we tend to assume it did. There is no reason to doubt the anecdotes told by airline personnel about glitches that have occurred on flights when they also have discovered someone illicitly using a device. But when thinking about these anecdotes, we don’t consider that glitches also occur in the absence of illicit gadget use. More important, we don’t consider how often gadgets have been in use when flights have been completed without a hitch. Our survey strongly suggests that there are multiple gadget violators on almost every flight.
That’s all certainly true, but what actually motivated this ban — and has ensured its continuation despite a lack of evidence it is needed to diminish technological risk — is the precautionary principle. As the authors correctly note:
Yesterday on TechCrunch, Josh Constine posted an interesting essay about how some in the press were “Selling Digital Fear” on the privacy front. His specific target was The Wall Street Journal, which has been running an ongoing investigation of online privacy issues with a particular focus on online apps. Much of the reporting in their “What They Know” series has been valuable in that it has helped shine light on some data collection practices and privacy concerns that deserve more scrutiny. But as Constine notes, sometimes the articles in the WSJ series lack sufficient context, fail to discuss trade-offs, or do not identify any concrete harm or risk to users. In other words, some of it is just simple fear-mongering. Constine argues:
Reality has yet to stop media outlets from yelling about privacy, and because the WSJ writers were on assignment, they wrote the “Selling You On Facebook” hit piece despite thin findings. These kind of articles can make mainstream users so worried about the worst-case scenario of what could happen to their data, they don’t see the value they get in exchange for it. “Selling You On Facebook” does bring up the important topic of how apps can utilize personal data granted to them by their users, but it overstates the risks. Yes, the business models of Facebook and the apps on its platform depend on your personal information, but so do the services they provide. That means each user needs to decide what information to grant to who, and Facebook has spent years making the terms of this value exchange as clear as possible.
“While sensationalizing the dangers of online privacy sure drives page views and ad revenue,” Constine also noted, “it also impedes innovation and harms the business of honest software developers.” These trade-offs are important because, to the extent policymakers get more interested in pursuing privacy regulations based on these fears, they could force higher prices or less innovation upon us with very little benefit in exchange.
I want to highly recommend everyone watch this interesting new talk by danah boyd on “Culture of Fear + Attention Economy = ?!?!” In her talk, danah discusses “how fear gets people into a frenzy” or panic about new technologies and new forms of culture. “The culture of fear is the idea that fear can be employed by marketers, politicians, the media, and the public to really regulate the public… such that they can be controlled,” she argues. “Fear isn’t simply the product of natural forces. It can systematically be generated to entice, motivate, or suppress. It can be leveraged as a political tool and those in power have long used fear for precisely these goals.” I discuss many of these issues in my new 80-page white paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.”
danah points out that new media is often leveraged to generate fear and so we should not be surprised when the Internet and digital technologies are used in much the same way. She also correctly notes that our cluttered, cacophonous information age might also be causing an escalation of fear-based tactics. “The more there are stimuli competing for your attention, the more likely it is that fear is going to be the thing that will drive your attention” to the things that some want you to notice or worry about.
I spent some time in my technopanics paper discussing this point in Section III.C (“Bad News Sells: The Role of the Media, Advocates, and the Listener”).