The future of emerging technology policy will increasingly be influenced by the interplay of three interrelated trends: “innovation arbitrage,” “technological civil disobedience,” and “spontaneous private deregulation.” Those terms can be briefly defined as follows:

  • “Innovation arbitrage” refers to the idea that innovators can, and will with increasing regularity, move to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity. Just as capital now fluidly moves around the globe seeking out more friendly regulatory treatment, the same is increasingly true for innovations. And this will also play out domestically as innovators seek to play state and local governments off each other in search of some sort of competitive advantage.
  • “Technological civil disobedience” represents the refusal of innovators (individuals, groups, or even corporations) or consumers to obey technology-specific laws or regulations because they find them offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant. New technological devices and platforms are making it easier than ever for the public to openly defy (or perhaps just ignore) rules that limit their freedom to create or use modern technologies.
  • “Spontaneous private deregulation” can be thought of as the de facto rather than de jure elimination of traditional laws and regulations, owing to a combination of rapid technological change as well as the potential threat of innovation arbitrage and technological civil disobedience. In other words, many laws and regulations aren’t being formally removed from the books, but they are being made largely irrelevant by some combination of those factors. “Benign or otherwise, spontaneous deregulation is happening increasingly rapidly and in ever more industries,” noted Benjamin Edelman and Damien Geradin in a Harvard Business Review article on the phenomenon.[1]

I have previously documented examples of these trends in action for technology sectors as varied as drones, driverless cars, genetic testing, Bitcoin, and the sharing economy. (For example, on the theme of global innovation arbitrage, see my various earlier essays on the subject. And on the growth of technological civil disobedience, see “DOT’s Driverless Cars Guidance: Will ‘Agency Threats’ Rule the Future?” and “Quick Thoughts on FAA’s Proposed Drone Registration System.” I also discuss some of these issues in the second edition of my Permissionless Innovation book.)

In this essay, I want to briefly highlight how, over the course of just the past month, a single company has offered us a powerful example of how both global innovation arbitrage and technological civil disobedience—or at least the threat thereof—might become a more prevalent feature of discussions about the governance of emerging technologies. And, in the process, that could lead to at least the partial spontaneous deregulation of certain sectors or technologies. Finally, I will discuss how this might affect technological governance more generally and accelerate the movement toward so-called “soft law” governance mechanisms as an alternative to traditional regulatory approaches.

Because Title II allows the FCC to determine what content and media Internet access providers must transmit on their own private networks, the First Amendment has constantly dogged the FCC’s “net neutrality” proceedings. If the Supreme Court agrees to take up an appeal from the DC Circuit Court of Appeals, which rejected a First Amendment challenge this summer, it will likely be because of Title II’s First Amendment deficiencies.

Title II has always been about handicapping ISPs qua speakers and preventing ISPs from offering curated Internet content. As former FCC Commissioner Michael Copps said, absent the Title II rules, “a big cable company could block access to an investigative report about its less-than-stellar customer service.” Tim Wu told members of Congress that net neutrality was intended to prevent ISPs from favoring, say, particular news sources or sports teams.

But just as a cable company chooses to offer some channels and not others, and a search engine chooses to promote some pages and not others, choosing to offer a curated Internet to, say, children, religious families, or sports fans involves editorial decisions. As communications scholar Stuart Benjamin said about Title II’s problem, under current precedent, ISPs “can say they want to engage in substantive editing, and that’s enough for First Amendment purposes.”

Title II – Bringing Broadcast Regulation to the Internet

Title II regulation of the Internet is frequently compared to the Fairness Doctrine, which activists used for decades to drive conservatives out of broadcast radio and TV. As a pro-net neutrality media professor explained in The Atlantic last year, the motivation for the Fairness Doctrine and Title II Internet regulation is the same: to “rescue a potentially democratic medium from commercial capture.” This is why there is almost perfect overlap between the organizations and advocates who support the Fairness Doctrine and those who lobbied for Title II regulation of the Internet.

The FCC appears to be dragging the TV industry, which is increasingly app- and Internet-based, into years of rulemakings, unnecessary standards development and oversight, and drawn-out lawsuits. The FCC hasn’t made a final decision, but the general outline is pretty clear. The FCC wants to use a 20-year-old piece of corporate welfare, calculated to help a now-dead electronics retailer, as authority to regulate today’s TV apps and their licensing terms. Perhaps the agency will succeed in expanding its authority over set-top boxes and TV apps. But as TV is being revolutionized by the Internet and the legacy providers are trying to stay ahead of the new players (Netflix, Amazon, Layer 3), regulating TV apps and boxes will likely impede the competitive process and distract the FCC from more pressing matters, like spectrum and infrastructure.

Today, the U.S. Department of Transportation released its eagerly awaited “Federal Automated Vehicles Policy.” There’s a lot to like about the guidance document, beginning with the agency’s genuine embrace of the potential for highly automated vehicles (HAVs) to revolutionize this sector and save thousands of lives annually in the process.

It is important we get HAV policy right, the DOT notes, because “35,092 people died on U.S. roadways in 2015 alone” and “94 percent of crashes can be tied to a human choice or error.” (p. 5) HAVs could help us reverse that trend and save thousands of lives and billions in economic costs annually. The agency also documents many other benefits associated with HAVs, such as increasing personal mobility, reducing traffic and pollution, and cutting infrastructure costs.

I will not attempt here to comment on every specific recommendation or guideline suggested in the new DOT guidance document. I could nit-pick about some of the specific recommended guidelines, but I think many of them are quite reasonable, whether they are related to safety, security, privacy, or state regulatory issues. Other issues still need to be addressed, and CEI’s Marc Scribner does a nice job documenting some of them in his response to the new guidelines.

Instead of discussing those specific issues today, I want to ask a more fundamental and far-reaching question, one I have been writing about in recent papers and essays: Is this guidance or regulation? And what does the use of informal guidance mechanisms like these signal for the future of technological governance more generally?

On Tuesday, UN Secretary-General Ban Ki-moon delivered an address to the UN Security Council “on the Non-Proliferation of Weapons of Mass Destruction.” He made many of the same arguments he and his predecessors have articulated before regarding the need for the Security Council “to develop further initiatives to bring about a world free of weapons of mass destruction.” In particular, he was focused on the great harm that could come about from the use of chemical, biological and nuclear weapons. “Vicious non-state actors that target civilians for carnage are actively seeking chemical, biological and nuclear weapons,” the Secretary-General noted. A stepped-up disarmament agenda is needed, he argued, “to prevent the human, environmental and existential destruction these weapons can cause . . . by eradicating them once and for all.”

The UN has created several multilateral mechanisms to pursue those objectives, including the Nuclear Non-Proliferation Treaty, the Chemical Weapons Convention, and the Biological Weapons Convention. Progress on these fronts has always been slow and limited, however. The Secretary-General observed that nuclear non-proliferation efforts have recently “descended into fractious deadlock,” but the effectiveness of those and similar UN-led efforts has long been challenged by the dual realities of (1) rapid ongoing technological change that has made WMDs more accessible than ever, and (2) a general lack of teeth in UN treaties and accords to do much to slow those advances, especially among non-signatories.

Despite those challenges, the Secretary-General is right to remain vigilant about the horrors of chemical, biological and nuclear attacks. But what was interesting about this address is that he went on to discuss his concerns about a rising class of emerging technologies that we usually don’t hear mentioned in the same breath as those traditional “weapons of mass destruction.”

Just three days ago I penned another installment in my ongoing series about the growing phenomenon of “global innovation arbitrage” — or the idea that “innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.” And now it’s already time for another entry in the series!

My previous column focused on driverless car innovation moving overseas, and earlier installments discussed genetic testing, drones, and the sharing economy. Now another drone-related example has come to my attention, this time from New Zealand. According to the New Zealand Herald:

Aerial pizza delivery may sound futuristic but Domino’s has been given the green light to test New Zealand pizza delivery via drones. The fast food chain has partnered with drone business Flirtey to launch the first commercial drone delivery service in the world, starting later this year.

Importantly, according to the story, “If it is successful the company plans to extend the delivery method to six other markets – Australia, Belgium, France, The Netherlands, Japan and Germany.” That’s right: America is not on the list. In other words, a popular American pizza delivery chain is looking overseas to find the freedom to experiment with new delivery methods. And the reason is the seemingly endless bureaucratic foot-dragging by federal regulators at the FAA.

In previous essays here I have discussed the rise of “global innovation arbitrage” for genetic testing, drones, and the sharing economy. I argued that: “Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.” I’ve been working on a longer paper about this with Samuel Hammond, and in doing research on the issue, we keep finding interesting examples of this phenomenon.

The latest example comes from a terrific new essay (“Humans: Unsafe at Any Speed”) about driverless car technology by Wall Street Journal technology columnist L. Gordon Crovitz. He cites some important recent efforts by Ford and Google and notes that they and other innovators will need to be given more flexible regulatory treatment if we want these life-saving technologies on the road as soon as possible. “The prospect of mass-producing cars without steering wheels or pedals means U.S. regulators will either allow these innovations on American roads or cede to Europe and Asia the testing grounds for self-driving technologies,” Crovitz observes. “By investing in autonomous vehicles, Ford and Google are presuming regulators will have to allow the new technologies, which are developing faster even than optimists imagined when Google started working on self-driving cars in 2009.”

I came across an article last week in the AV Club that caught my eye. The title is: “The Telecommunications Act of 1996 gave us shitty cell service, expensive cable.” The Telecom Act is the largest update to the regulatory framework set up in the 1934 Communications Act. The basic thrust of the Act was to update the telephone laws because the AT&T long-distance monopoly had been broken up for a decade. The AV Club is not a policy publication but it does feature serious reporting on media. This analysis of the Telecom Act and its effects, however, omits or obfuscates important information about dynamics in media since the 1990s.

The AV Club article offers an illustrative collection of left-of-center critiques of the Telecom Act. Much as with Glass-Steagall repeal or Citizens United, many on the left apparently cite the Telecom Act as a kind of shorthand for deregulatory ideology run amok. And as with Glass-Steagall repeal and Citizens United, most of the critics fundamentally misstate the effects and purposes of the law. Inexplicably, the AV Club article relies heavily on a Common Cause white paper from 2005. Now, Common Cause typically does careful work, but the paper is hopelessly outdated today. Eleven years ago Netflix was a small DVD-by-mail service. There was no 4G LTE (2010), no iPhone or Google Android (2007), and no Pandora, IPTV, or a dozen other technologies and services that have revolutionized communications and media. None of the competitive churn since 2005, outlined below, is even hinted at in the AV Club piece. The actual data undermine the dire diagnoses about the state of communications and media from the various critics cited in the piece.

One would think that if there is any aspect of Internet policy that libertarians could agree on, it would be that the government should not be in control of basic Internet infrastructure. So why are TechFreedom and a few other so-called “liberty” groups making a big fuss about the plan to complete the privatization of ICANN? The IANA transition, as it has become known, would set the domain name system root, IP addressing, and Internet protocol parameter registries free of direct governmental control, and make those aspects of the Internet transnational and self-governing.

Yet the same groups that have informed us that net neutrality is the end of Internet freedom, because it would have a government agency indirectly regulating discriminatory practices by private sector ISPs, are now trying to tell us that retaining direct U.S. government regulation of the content of the domain name system root, and indirect control of the domain name industry and IP addressing via a contract with ICANN, is essential to the maintenance of global Internet freedom. It’s insane.

One mundane explanation is that TechFreedom, which is known for responding eagerly to anyone offering them a check, has found some funding source that doesn’t like the IANA transition and has, in the spirit of a true political entrepreneur, taken up the challenge of trying to twist, turn and spin freedom rhetoric into some rationalization for opposing the transition. But that doesn’t explain the opposition of Senator Cruz and other conservatives who feign a concern for Internet freedom. No, I think this split represents something bigger. At bottom, it’s a debate about the role of nation-states in Internet governance and the state’s role in preserving freedom.

In this regard it would be good to review my May 2016 blog post at the Internet Governance Project, which smashes the myths being asserted about the US government’s role in ICANN. In it, I show that NTIA’s control of ICANN has never been used to protect Internet freedom, but has been used multiple times to limit or attack it. I show that US control of the DNS root was never put into place to “protect Internet freedom,” but was established for other reasons, and that the US explicitly rejected putting a free expression clause in ICANN’s constitution. I show that the new ICANN Articles of Incorporation created as part of the transition contain good mission limitations and protections against content regulation by ICANN. Finally, I argue that in the real world of international relations (as opposed to the unilateralist fantasies of conservative nationalists) the privileged US role is a magnet for other governments, inviting them to push for control, rather than a bulwark against it.

Another libertarian tech policy analyst, Eli Dourado, has also argued that going ahead with the IANA transition is a ‘no-brainer.’

Assistant Secretary of Commerce Larry Strickling’s speech at the US Internet Governance Forum last month goes through the FUD being advanced by TechFreedom and the nationalist Republicans one by one. Among other points, he contends that if the U.S. tries to retain control, Internet infrastructure will become increasingly politicized as rival states, such as China, Russia and Iran, argue for a sovereignty-based model and try to get Internet infrastructure into the hands of intergovernmental organizations:

Privatizing the domain name system has been a goal of Democratic and Republican administrations since 1997. Prior to our 2014 announcement to complete the privatization, some governments used NTIA’s continued stewardship of the IANA functions to justify their demands that the United Nations, the International Telecommunication Union or some other body of governments take control over the domain name system. Failing to follow through on the transition or unilaterally extending the contract will only embolden authoritarian regimes to intensify their advocacy for government-led or intergovernmental management of the Internet via the United Nations.

The TechFreedom “coalition letter” raises no new arguments or issues – it is a nakedly political appeal for Congress to intervene to stop the transition, based mainly on partisan hatred of the Obama administration. But I think this debate is highly significant nevertheless. It’s not about rational policy argumentation; it’s about the diverging political identity of people who say they are pro-freedom.

What is really happening here is a rift between the nationalist conservatism represented by the Heritage Foundation and the nativists in the Tea Party, on the one hand, and true free market libertarians, on the other. The root of this difference is a radically different conception of the role of the nation-state in the modern world. Real libertarians see national borders as, at best, necessary administrative evils and, at worst, unjustifiable obstacles to society and commerce. A truly classical liberal ethic is founded on individual rights and a commitment to free and open markets and free political institutions everywhere, and is thus universalist and globalist in outlook. These libertarians see the economy and society as increasingly globalized, and understand that the institution of the state has to evolve in new directions if basic liberal and democratic values are to be institutionalized in that environment.

The nationalist Republican conservatives, on the other hand, want to strengthen the state. They are hemmed in by a patriotic and exceptionalist view of its role. Insofar as they are motivated by liberal impulses at all – and of course many parts of their political base are not – those impulses rest on a conception of freedom situated entirely in national-level institutions. As such, it implies walling the world off or, worse, dominating the world as a pre-eminent nation-state. The rise of Trump and the ease with which he took over the Republican Party ought to be a signal to real libertarians that the party is no longer viable as a lesser-of-two-evils home for true liberals. The base of the Republican Party, the coalition of constituencies and worldviews of which it is composed, is splitting into two camps with irreconcilable differences over fundamental issues. Good riddance to the nationalists, I say. This split poses a tremendous opportunity for libertarians to finally free themselves of the social conservatives, nationalist militarists, nativists, and theocrats that have dragged them down in the GOP.

“The quickest way to find out who your enemies are is to try doing something new.” Thus begins Innovation and Its Enemies, an ambitious new book by Calestous Juma that will go down as one of the decade’s most important works on innovation policy.

Juma, who is affiliated with the Harvard Kennedy School’s Belfer Center for Science and International Affairs, has written a book that is rich in history and insights about the social and economic forces and factors that have, again and again, led various groups and individuals to oppose technological change. Juma’s extensive research documents how “technological controversies often arise from tensions between the need to innovate and the pressure to maintain continuity, social order, and stability” (p. 5) and how this tension is “one of today’s biggest policy challenges.” (p. 8)

What Juma does better than any other technology policy scholar to date is identify how these tensions develop out of deep-seated psychological biases that eventually come to affect attitudes about innovations among individuals, groups, corporations, and governments. “Public perceptions about the benefits and risks of new technologies cannot be fully understood without paying attention to intuitive aspects of human psychology,” he correctly observes. (p. 24)