Earlier this week I posted an essay entitled, “Global Innovation Arbitrage: Commercial Drones & Sharing Economy Edition,” in which I noted how:
Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.
That essay focused on how actions by U.S. policymakers and regulatory agencies threatened to disincentivize homegrown innovation in the commercial drone and sharing economy sectors. But there are many other troubling examples of how America risks losing its competitive advantage in sectors where we should be global leaders as innovators look offshore. We can think of this as “global innovation arbitrage,” as venture capitalist Marc Andreessen has aptly explained:
Think of it as a sort of “global arbitrage” around permissionless innovation — the freedom to create new technologies without having to ask the powers that be for their blessing. Entrepreneurs can take advantage of the difference between opportunities in different regions, where innovation in a particular domain of interest may be restricted in one region, allowed and encouraged in another, or completely legal in still another.
One of the more vivid recent examples of global innovation arbitrage involves 23andMe, which sells mail-order DNA-testing kits that allow people to learn more about their genetic history and predisposition to various diseases.
What sort of public policy vision should govern the Internet of Things? I’ve spent a lot of time thinking about that question in essays here over the past year, as well as in a new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology early next year.
But I recently heard three policymakers articulate their recommended vision for the Internet of Things (IoT) and I found their approach so inspiring that I wanted to discuss it here in the hopes that it will become the foundation for future policy in this arena.
Last Thursday, it was my pleasure to attend a Center for Data Innovation (CDI) event on “How Can Policymakers Help Build the Internet of Things?” As the title implied, the goal of the event was to discuss how to achieve the vision of a more fully-connected world and, more specifically, how public policymakers can help facilitate that objective. It was a terrific event with many excellent panel discussions and keynote addresses.
Two of those keynotes were delivered by Senators Deb Fischer (R-Neb.) and Kelly Ayotte (R-N.H.). Below I will offer some highlights from their remarks and then relate them to the vision set forth by Federal Trade Commission (FTC) Commissioner Maureen K. Ohlhausen in some of her recent speeches. I will conclude by discussing how the Ayotte-Fischer-Ohlhausen vision can be seen as the logical extension of the Clinton Administration’s excellent 1997 Framework for Global Electronic Commerce, which proposed a similar policy paradigm for the Internet more generally. This shows how crafting policy for the IoT can and should be a nonpartisan affair.
Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity. I was reminded of that fact today while reading two different reports about commercial drones and the sharing economy and the global competition to attract investment on both fronts. First, on commercial drone policy, a new Wall Street Journal article notes that:
Amazon.com Inc., which recently began testing delivery drones in the U.K., is warning American officials it plans to move even more of its drone research abroad if it doesn’t get permission to test-fly in the U.S. soon. The statement is the latest sign that the burgeoning drone industry is shifting overseas in response to the Federal Aviation Administration’s cautious approach to regulating unmanned aircraft.
According to the Journal reporters, Amazon has sent a letter to the FAA warning that, “Without the ability to test outdoors in the United States soon, we will have no choice but to divert even more of our [drone] research and development resources abroad.” And another report in the U.K. Telegraph notes that other countries are ready and willing to open their skies to the same innovation that the FAA is thwarting in America. Both the UK and Australia have been more welcoming to drone innovators recently. Here’s a report from an Australian newspaper about Google drone services testing there. (For more details, see this excellent piece by Alan McQuinn, a research assistant with the Information Technology and Innovation Foundation: “Commercial Drone Companies Fly Away from FAA Regulations, Go Abroad.”) None of this should be a surprise, as I’ve noted in recent essays and filings. With the FAA adopting such a highly precautionary regulatory approach, innovation has been actively disincentivized. America runs the risk of driving still more private drone innovation offshore in coming months since all signs are that the FAA intends to drag its feet on this front as long as it can, even though Congress has told the agency to take steps to integrate these technologies into national airspace.
Yesterday, the Article 29 Data Protection Working Party issued a press release providing more detailed guidance on how it would like to see Europe’s so-called “right to be forgotten” implemented and extended. The most important takeaway from the document was that, as Reuters reported, “European privacy regulators want Internet search engines such as Google and Microsoft’s Bing to scrub results globally.” Moreover, as The Register reported, the press release made it clear that “Europe’s data protection watchdogs say there’s no need for Google to notify webmasters when it de-lists a page under the so-called “right to be forgotten” ruling.” (Here’s excellent additional coverage from Bloomberg: “Google.com Said to Face EU Right-to-Be-Forgotten Rules“). These actions make it clear that European privacy regulators hope to expand the horizons of the right to be forgotten in a very significant way.
The folks over at Marketplace radio asked me to spend a few minutes with them today discussing the downsides of this proposal. Here’s the quick summary of what I told them:
In my previous essay, I discussed a new white paper by my colleague Robert Graboyes, Fortress and Frontier in American Health Care, which examines the future of medical innovation. Graboyes uses the “fortress vs. frontier” dichotomy to help explain the different “visions” that shape how public policy debates about technological innovation in the health care arena often play out. It’s a terrific study that I highly recommend for all the reasons I stated in my previous post.
As I was reading Bob’s new report, I realized that his approach shared much in common with a couple of other recent innovation policy paradigms I have discussed here before from Virginia Postrel (“Stasis” vs. “Dynamism”), Robert D. Atkinson (“Preservationists” vs. “Modernizers”), and myself (“Precautionary Principle” vs. “Permissionless Innovation”). In this essay, I will briefly relate Bob’s approach to those other three innovation policy paradigms and then note a deficiency with our common approaches. I’ll conclude by briefly discussing another interesting framework from science writer Joel Garreau.
Evan Selinger, a super-sharp philosopher of technology up at the Rochester Institute of Technology, is always alerting me to interesting new essays and articles and this week he brought another important piece to my attention. It’s a short new article by Arturo Casadevall, Don Howard, and Michael J. Imperiale, entitled, “The Apocalypse as a Rhetorical Device in the Influenza Virus Gain-of-Function Debate.” The essay touches on something near and dear to my own heart: the misuse of rhetoric in debates over the risk trade-offs associated with new technology and inventions. Casadevall, Howard, and Imperiale seek to “focus on the rhetorical devices used in the debate [over infectious disease experiments] with the hope that an analysis of how the arguments are being framed can help the discussion.”
They note that “humans are notoriously poor at assessing future benefits and risks” and that this makes many people susceptible to rhetorical ploys based on the artificial inflation of risks. Their particular focus in this essay is the debate over so-called “gain-of-function” (GOF) experiments involving influenza virus, but what they have to say here about how rhetoric is being misused in that field is equally applicable to many other fields of science and the policy debates surrounding various issues. The last two paragraphs of their essay are masterful and deserve everyone’s attention:
Last week, I participated in a program co-sponsored by the Progressive Policy Institute, the Lisbon Council, and the Georgetown Center for Business and Public Policy on “Growing the Transatlantic Digital Economy.”
The complete program, including keynote remarks from EU VP Neelie Kroes and U.S. Under Secretary of State Catherine A. Novelli, is available below.
My remarks reviewed worrying signs of old-style interventionist trade practices creeping into the digital economy in new guises, and urged governments to stay the course (or correct it) on leaving the Internet ecosystem largely to its own organic forms of regulation and market correctives:
According to this article by Julian Hattem in The Hill (“Lawmakers warn in-flight calls could lead to fights“), 77 congressional lawmakers have sent a letter to the heads of four federal agencies warning them not to allow people to have in-flight cellphone conversations on the grounds that it “could lead to heated arguments among passengers that distract officials’ attention and make planes less safe.” The lawmakers say “arguments in an aircraft cabin already start over mundane issues, like seat selection and overhead bin space, and the volume and pervasiveness of voice communications would only serve to exacerbate and escalate these disputes.” They’re also concerned that it may distract passengers from important in-flight announcements.
Well, I think I speak for a lot of other travelers when I say I find the idea of gabby passengers — whether on a phone or just among themselves — insanely annoying. For those of us who value peace and quiet and find airline travel to be among the most loathsome of experiences to begin with, it might be tempting to sympathize with this letter and just say, “Sure, go ahead and make this a federal problem and solve this for us with an outright ban.”
But isn’t there a case to be made here for differentiation and choice over yet another one-size-fits-all mandate? Why must we have federal lawmakers or bureaucrats dictating that every flight be the same? I don’t get that. After all, enough of us would be opposed to in-flight calls that we would likely pressure airlines to not offer many of them. But perhaps a few flights or routes might be “business traveler”-oriented and offer this option to those who want it. Or perhaps some airlines would restrict calling to certain areas of the cabin, or limit when the calls could occur.
Today, Ryan Hagemann and I filed comments with the Federal Aviation Administration (FAA) in its proceeding on the “Interpretation of the Special Rule for Model Aircraft.” This may sound like a somewhat arcane topic but it is related to the ongoing policy debate over the integration of unmanned aircraft systems (UASs)—more commonly referred to as drones—into the National Airspace System. As part of the FAA Modernization and Reform Act of 2012, Congress required the FAA to come up with a plan by September 2015 to accomplish that goal. As part of that effort, the FAA is currently accepting comments on its enforcement authority over model aircraft. Because the distinction between “drones” and “model aircraft” is blurring rapidly, the outcome of this proceeding could influence the outcome of the broader debate about drone policy in the United States.
In our comment to the agency, Hagemann and I discuss the need for the agency to conduct a thorough review of the benefits and costs associated with this rule. We argue this is essential because airspace is poised to become a major platform for innovation if the agency strikes the right balance between safety and innovation. To achieve that goal, we stress the need for flexibility and humility in interpreting older standards, such as “line of sight” restrictions, as well as increasingly archaic “noncommercial” vs. “commercial” distinctions or “hobbyists” vs. “professional” designations.
We also highlight the growing tension between the agency’s current regulatory approach and the First Amendment rights of the public to engage in peaceful, information-gathering activities using these technologies. (Importantly, on that point, we attached to our comments a new Mercatus Center working paper by Cynthia Love, Sean T. Lawson, and Avery Holton entitled, “News from Above: First Amendment Implications of the Federal Aviation Administration Ban on Commercial Drones.” See my coverage of the paper here.)
Finally, Hagemann and I close by noting the important role that voluntary self-regulation and codes of conduct already play in governing proper use of these technologies. We also argue that other “bottom-up” remedies are available and should be used before the agency imposes additional restrictions on this dynamic, rapidly evolving space.
You can download the complete comment on the Mercatus Center website here. (Note: The Mercatus Center filed comments with the FAA earlier about the prompt integration of drones into the nation’s airspace. You can read those comments here.)
If there are two general principles that unify my recent work on technology policy and innovation issues, they would be as follows. To the maximum extent possible:
- We should avoid preemptive and precautionary-based regulatory regimes for new innovation. Instead, our policy default should be innovation allowed (or “permissionless innovation”) and innovators should be considered “innocent until proven guilty” (unless, that is, a thorough benefit-cost analysis has been conducted that documents the clear need for immediate preemptive restraints).
- We should avoid rigid, “top-down” technology-specific or sector-specific regulatory regimes and/or regulatory agencies and instead opt for a broader array of more flexible, “bottom-up” solutions (education, empowerment, social norms, self-regulation, public pressure, etc.) as well as reliance on existing legal systems and standards (torts, product liability, contracts, property rights, etc.).
I was very interested, therefore, to come across two new essays that make opposing arguments and proposals. The first is this recent Slate oped by John Frank Weaver, “We Need to Pass Legislation on Artificial Intelligence Early and Often.” The second is Ryan Calo’s new Brookings Institution white paper, “The Case for a Federal Robotics Commission.”
Weaver argues that new robot technology “is going to develop fast, almost certainly faster than we can legislate it. That’s why we need to get ahead of it now.” In order to preemptively address concerns about new technologies such as driverless cars or commercial drones, “we need to legislate early and often,” Weaver says. Stated differently, Weaver is proposing “precautionary principle”-based regulation of these technologies. The precautionary principle generally refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.
Calo argues that we need “the establishment of a new federal agency to deal with the novel experiences and harms robotics enables” since there exist “distinct but related challenges that would benefit from being examined and treated together.” These issues, he says, “require special expertise to understand and may require investment and coordination to thrive.”
I’ll address both Weaver’s and Calo’s proposals in turn.