Writing last week in The Wall Street Journal, Matt Moffett noted how many European countries continue to struggle with chronic unemployment and general economic malaise. (“New Entrepreneurs Find Pain in Spain”) It’s a dismal but highly instructive tale about how much policy incentives matter when it comes to innovation and job creation, especially the sort of entrepreneurial activity from small start-ups that is so essential for economic growth. Here’s the key takeaway:

Scarce capital, dense bureaucracy, a culture deeply averse to risk and a cratered consumer market all suppress startups in Europe. The Global Entrepreneurship Monitor, a survey of startup activity, found the percentage of the adult population involved in early stage entrepreneurial activity last year was just 5% in Germany, 4.6% in France and 3.4% in Italy. That compares with 12.7% in the U.S. Even once they are established, European businesses are, on average, smaller and slower growing than those in the U.S.  The problems of entrepreneurs are one reason Europe’s economy continues to struggle after six years of crisis. The European Union this month cut its growth forecasts for the region for this year and next, citing weaker than expected performance in the eurozone’s biggest economies, Germany, France and Italy. This week, the Organization for Economic Cooperation and Development delivered its own pessimistic appraisal, with chief economist Catherine Mann saying, “The eurozone is the locus of the weakness in the global economy.”

[…]
Europe’s unemployment crisis may be eroding a deeply ingrained fear of failure that is a bigger impediment to entrepreneurship on the Continent than in other regions, according to academic surveys. “Fear of failure is less of an issue because the whole country is a failure, and most of us are out of business or have a hard time paying our bills,” said Nick Drandakis of Athens, who in 2011 founded Taxibeat, an app that provides passenger ratings on taxi drivers.

Continue reading →

Yesterday, the Article 29 Data Protection Working Party issued a press release providing more detailed guidance on how it would like to see Europe’s so-called “right to be forgotten” implemented and extended. The most important takeaway from the document was that, as Reuters reported, “European privacy regulators want Internet search engines such as Google and Microsoft’s Bing to scrub results globally.” Moreover, as The Register reported, the press release made clear that “Europe’s data protection watchdogs say there’s no need for Google to notify webmasters when it de-lists a page under the so-called ‘right to be forgotten’ ruling.” (Here’s excellent additional coverage from Bloomberg: “Google.com Said to Face EU Right-to-Be-Forgotten Rules.”) Taken together, these actions show that European privacy regulators hope to expand the reach of the right to be forgotten in a very significant way.

The folks over at Marketplace radio asked me to spend a few minutes with them today discussing the downsides of this proposal. Here’s the quick summary of what I told them: Continue reading →

This Thanksgiving holiday season, an estimated 39 million people plan on traveling by car. Sadly, according to the National Safety Council, some 418 Americans may lose their lives on the roads over the next few days, and car crashes may cause more than 44,000 additional injuries.

In a new op-ed for the Orange County Register, Ryan Hagemann and I argue that many of these accidents and fatalities could be averted if more “intelligent” vehicles were on the road. That’s why it is so important that policymakers clear away roadblocks to intelligent vehicle technology (including driverless cars) as quickly as possible. The benefits would be absolutely enormous.

Read our op-ed, and for more details check out our recent Mercatus Center white paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.”

The Mercatus Center at George Mason University has just released my latest working paper, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation.” The “Internet of Things” (IoT) generally refers to “smart” devices that are connected to both the Internet and other devices. Wearable technologies are IoT devices that are worn somewhere on the body and which gather data about us for various purposes. These technologies promise to usher in the next wave of Internet-enabled services and data-driven innovation. Basically, the Internet will be “baked in” to almost everything that consumers own and come into contact with.

Some critics are worried about the privacy and security implications of the Internet of Things and wearable technology, however, and are proposing regulation to address these concerns. In my new 93-page article, I explain why preemptive, top-down regulation would derail the many life-enriching innovations that could come from these new IoT technologies. Building on a recent book of mine, I argue that “permissionless innovation,” which allows new technology to flourish and develop in a relatively unabated fashion, is the superior approach to the Internet of Things.

As I note in the paper and my earlier book, if we spend all our time living in fear of the worst-case scenarios, and basing public policies on them, then best-case scenarios can never come about. As the old saying goes: nothing ventured, nothing gained. Precautionary principle-based regulation paralyzes progress and must be avoided. We need to find constructive, “bottom-up” solutions to the privacy and security risks accompanying these new IoT technologies rather than top-down controls that would limit the development of life-enriching IoT innovations. Continue reading →

Supporters of Title II reclassification for broadband Internet access services point to the fact that some wireless services have been governed by a subset of Title II provisions since 1993.  No one is complaining about that.  So what, then, is the basis for opposition to similar regulatory treatment for broadband?

Austin Schlick, the former FCC general counsel, outlined the so-called “Third Way” legal framework for broadband in a 2010 memo that proposed Title II reclassification along with forbearance from all but six of Title II’s 48 provisions.  He noted that “this third way is a proven success for wireless communications.”  This is the model that President Obama is backing.  Title II reclassification “doesn’t have to be a big deal,” Harold Feld reminds us, since the wireless industry seems to be doing okay despite the fact that mobile phone service has been classified as a Title II service since 1993.

To be clear, only mobile voice services are subject to Title II, since the FCC classified broadband Internet access over wireless networks as an “information” service (and thus entirely exempt from Title II) in March 2007.

Sec. 6002(c) of the Omnibus Budget Reconciliation Act of 1993 (Public Law 103-66) modified Sec. 332 of the Communications Act so commercial mobile services would be treated “as a common carrier … except for such provisions of title II as the Commission may specify by regulation as inapplicable…”

The FCC commendably did forbear.  Former Chairman Reed E. Hundt would later boast in his memoir that the commission “totally deregulated the wireless industry.” He added that this was possible thanks to a Democratic Congress and former Vice President Al Gore’s tie-breaking Senate vote. Continue reading →

Last week marked the conclusion of the ITU’s Plenipotentiary Conference, the quadrennial gathering at which ITU member states revise the treaty that establishes the Union and conduct other high-level business. I had the privilege of serving as a member of the US delegation, as I did for the WCIT, and of seeing the negotiations firsthand. This year’s Plenipot was far less contentious than the WCIT was two years ago. For other summaries of the conference, let me recommend the accounts by Samantha Dickinson, Danielle Kehl, and Amb. Danny Sepulveda. Rather than recap their posts or the entire conference, I just wanted to add a couple of additional observations.

We mostly won on transparent access to documents

Through my involvement with WCITLeaks, I have closely followed the issue of access to ITU documents, both before and during the Plenipot. My assessment is that we mostly won.

Going forward, most inputs to and outputs from ITU conferences and assemblies will be available to the public on the ITU website. This does not include a) working documents, b) documents related to other meetings, such as Council Working Groups and Study Groups, and c) other non-meeting documents that should also be available to the public.

However, in February an ITU Council Working Group will meet to develop what is likely to be a more extensive document access policy. In May, the full Council will meet to provisionally approve an access policy. And in 2018, the next Plenipot will make a final decision on that provisional policy.

There are no guarantees, and we will need to closely monitor the outcomes in February and May to see what policy is adopted—but if it is a good one, I would be prepared to shut down WCITLeaks as it would become redundant. If the policy is inadequate, however, WCITLeaks will continue to operate until the policy improves.

I was gratified that WCITLeaks continued to play a constructive role in the discussion. For example, the Arab States’ proposal on ITU document access cited us, observing “that there are some websites on the Internet which are publishing illegally to the public ITU documents that are restricted only to Member States.” In addition, I am told that at the CEPT coordination meeting, WCITLeaks was thanked for giving the issue of transparency at the ITU a shot in the arm.

A number of governments were strong proponents of transparency at the ITU, but I think special thanks are due to Sweden, who championed the issue on behalf of Europe. I was very grateful for their leadership.

The collapse of the WCIT was an input into a harmonious Plenipot

We got through the Plenipot without a single vote (other than officer elections)! That’s great news—it’s always better when the ITU can come to agreement without forcing some member states to go along.

I think it’s important to recognize the considerable extent to which this consensus agreement was driven by events at the WCIT in 2012. At the WCIT, when the US (and others) objected and said that we could not agree to certain provisions, other countries thought we were bluffing. They decided to call our bluff by engineering a vote, and we wisely decided not to sign the treaty, along with 54 other countries.

In Busan this month, when we said that we could not agree to certain outcomes, nobody thought we were bluffing. Our willingness to walk away at the WCIT gave us added credibility in negotiations at the Plenipot. While I also believe that good diplomacy helped secure a good outcome at the Plenipot, the occasional willingness to walk the ITU off a cliff comes in handy. We should keep this in mind for future negotiations—making credible promises and sticking to them pays dividends down the road.

The big question of the conference is in what form the India proposal will re-emerge

At the Plenipot, India offered a sweeping proposal to fundamentally change the routing architecture of the Internet so that a) IP addresses would be allocated by country, like telephone numbers, with a country prefix and b) domestic Internet traffic would never be routed out of the country.

This proposal was obviously very impractical. It is unlikely, in any case, that the ITU has the expertise or the budget to undertake such a vast reengineering of the Internet. But the idea would also be very damaging from the perspective of individual liberty—it would make nation-states, even more than they are now, mediators of human communication.

I was very proud that the United States not only made the practical case against the Indian proposal, it made a principled one. Amb. Sepulveda made a very strong statement indicating that the United States does not share India’s goals as expressed in this proposal, and that we would not be a part of it. This statement, along with those of other countries and subsequent negotiations, effectively killed the Indian proposal at the Plenipot.

The big question is in what form this proposal will re-emerge. The idea of remaking the Internet along national lines is unlikely to go away, and we will need to continue monitoring ITU study groups to ensure that this extremely damaging proposal does not rear its head again.

In my previous essay, I discussed a new white paper by my colleague Robert Graboyes, Fortress and Frontier in American Health Care, which examines the future of medical innovation. Graboyes uses the “Fortress vs. Frontier” dichotomy to explain the competing “visions” at work in public policy debates over technological innovation in health care. It’s a terrific study that I highly recommend for all the reasons I stated in my previous post.

As I was reading Bob’s new report, I realized that his approach has much in common with a couple of other recent innovation policy paradigms I have discussed here before, from Virginia Postrel (“Stasis” vs. “Dynamism”), Robert D. Atkinson (“Preservationists” vs. “Modernizers”), and myself (“Precautionary Principle” vs. “Permissionless Innovation”). In this essay, I will briefly relate Bob’s approach to those other three innovation policy paradigms and then note a deficiency in our common approaches. I’ll conclude by briefly discussing another interesting framework from science writer Joel Garreau. Continue reading →

I want to bring to everyone’s attention an important new white paper by Dr. Robert Graboyes, a colleague of mine at the Mercatus Center at George Mason University who specializes in the economics of health care. His new 67-page study, Fortress and Frontier in American Health Care, seeks to move away from the tired old dichotomies that drive health care policy discussions: Left versus Right, Democrat versus Republican, federal versus state, public versus private, and so on. Instead, Graboyes seeks to reframe the debate over the future of health care innovation in terms of “Fortress versus Frontier” and to highlight what lessons we can learn from the Internet and the Information Revolution when considering health care policy.

What does Graboyes mean by “Fortress and Frontier”? Here’s how he explains this conflict of visions:

The Fortress is an institutional environment that aims to obviate risk and protect established producers (insiders) against competition from newcomers (outsiders). The Frontier, in contrast, tolerates risk and allows outsiders to compete against established insiders. . . .  The Fortress-Frontier divide does not correspond neatly with the more familiar partisan or ideological divides. Framing health care policy issues in this way opens the door for a more productive national health care discussion and for unconventional policy alliances. (p. 4)

He elaborates in more detail later in the paper: Continue reading →

How Much Tax?


As I and others have recently noted, if the Federal Communications Commission were to reclassify broadband Internet access as a “telecommunications” service, broadband would automatically become subject to the federal Universal Service tax—currently 16.1%, or more than twice the highest state sales tax (California’s, at 7.5%), according to the Tax Foundation.

Erik Telford, writing in The Detroit News, has reached a similar conclusion.

U.S. wireline broadband revenue rose to $43 billion in 2012 from $41 billion in 2011, according to one estimate. “Total U.S. mobile data revenue hit $90 billion in 2013 and is expected to rise above $100 billion this year,” according to another estimate.  Assuming that the wireline and wireless broadband industries as a whole earn approximately $150 billion this year, the current 16.1% Universal Service Contribution Factor would generate over $24 billion in new revenue for government programs administered by the FCC if broadband were defined as a telecommunications service.

The Census Bureau reports that there were approximately 90 million households with Internet use at home in 2012. Wireline broadband providers would have to collect approximately $89 from each one of those households in order to satisfy a 16.1% tax liability on earnings of $50 billion. There were over 117 million smartphone users over the age of 15 in 2011, according to the Census Bureau. Smartphones would account for the bulk of mobile data revenue. Mobile broadband providers would have to collect approximately $137 from each of those smartphone users to shoulder a tax liability of 16.1% on earnings of $100 billion. Continue reading →
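For readers who want to double-check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. It simply re-runs the calculations above using the post’s own assumptions (the 16.1% contribution factor, roughly $150 billion in combined broadband revenue, $50 billion in wireline and $100 billion in mobile earnings, 90 million connected households, and 117 million smartphone users); the variable names are mine, and none of these figures are official FCC projections.

```python
# Back-of-the-envelope check of the Universal Service arithmetic cited above.
# All inputs are the post's own estimates, not official FCC figures.

USF_RATE = 0.161  # 16.1% Universal Service Contribution Factor

# Industry-wide: ~$150 billion in combined wireline and mobile broadband revenue
total_revenue = 150e9
print(f"Total USF liability:  ${total_revenue * USF_RATE / 1e9:.1f} billion")  # ~$24.2 billion

# Wireline: 16.1% of ~$50 billion, spread across ~90 million online households
wireline_liability = 50e9 * USF_RATE
print(f"Per household:        ${wireline_liability / 90e6:.2f}")  # ~$89

# Mobile: 16.1% of ~$100 billion, spread across ~117 million smartphone users
mobile_liability = 100e9 * USF_RATE
print(f"Per smartphone user:  ${mobile_liability / 117e6:.2f}")  # ~$137.6
```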

Evan Selinger, a super-sharp philosopher of technology up at the Rochester Institute of Technology, is always alerting me to interesting new essays and articles, and this week he brought another important piece to my attention. It’s a short new article by Arturo Casadevall, Don Howard, and Michael J. Imperiale entitled “The Apocalypse as a Rhetorical Device in the Influenza Virus Gain-of-Function Debate.” The essay touches on something near and dear to my own heart: the misuse of rhetoric in debates over the risk trade-offs associated with new technologies and inventions. Casadevall, Howard, and Imperiale seek to “focus on the rhetorical devices used in the debate [over infectious disease experiments] with the hope that an analysis of how the arguments are being framed can help the discussion.”

They note that “humans are notoriously poor at assessing future benefits and risks” and that this makes many people susceptible to rhetorical ploys based on the artificial inflation of risks. Their particular focus in this essay is the debate over so-called “gain-of-function” (GOF) experiments involving the influenza virus, but what they say about how rhetoric is being misused in that field applies equally to many other fields of science and the policy debates surrounding them. The last two paragraphs of their essay are masterful and deserve everyone’s attention: Continue reading →