The Wall Street Journal reported yesterday that the White House is crafting a plan for $1 trillion in infrastructure investment. I was intrigued to learn that President Trump “inquired about the possibility of auctioning the broadcast spectrum to wireless carriers” to help fund the programs. Spectrum sales are the rare win-win-win: they stimulate infrastructure investment (cell towers, fiber networks, devices), provide new wireless services and lower prices to consumers, and generate billions in revenue for the federal government.

Broadcast TV spectrum is a good place to look for revenue, but the White House should also look at federal agencies, which possess about ten times the spectrum that broadcasters hold.

Large portions of spectrum are underused or misallocated because of decades of command-and-control policies. Auctioning spectrum for flexible uses, on the other hand, is a free-market policy that is often lucrative for the federal government. Since 1993, when Congress authorized spectrum auctions, wireless carriers and tech companies have spent somewhere around $120 billion for about 430 MHz of flexible-use spectrum, and the lion’s share of revenue was deposited in the US Treasury.

A few weeks ago, the FCC completed the $19 billion sale of broadcast TV spectrum, the so-called incentive auction. Despite underwhelming many telecom experts, this was the third largest US spectrum auction ever in terms of revenue and will transfer a respectable 70 MHz from restricted (broadcast TV) use to flexible use.

The remaining broadcast TV spectrum that President Trump is interested in totals about 210 MHz. But even more spectrum is under the President’s nose.

As Obama’s Council of Advisors on Science and Technology pointed out in 2012, federal agencies possess around 2,000 MHz of “beachfront” (sub-3.7 GHz) spectrum. I charted various spectrum uses in a December 2016 Mercatus policy brief.

This government spectrum is very valuable if portions can be cleared of federal users. Federal spectrum was part of the frequencies the FCC auctioned in 2006 and 2015, and the slivers of federal spectrum (around 70 MHz of the federal total) sold for around $27 billion combined.
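
For a rough sense of the arithmetic behind these figures, the sketch below simply divides the auction revenues cited above by the amount of spectrum sold to get an implied value per nationwide megahertz. It is a back-of-envelope illustration using only the round numbers quoted in this post, not a formal valuation.

```python
# Back-of-envelope only: implied value per nationwide MHz, using the round
# figures cited above. Not a formal spectrum valuation.

def value_per_mhz(revenue_billions: float, mhz: float) -> float:
    """Implied revenue per nationwide MHz, in billions of dollars."""
    return revenue_billions / mhz

# All flexible-use auctions since 1993: roughly $120B for roughly 430 MHz.
print(f"Auctions since 1993: ~${value_per_mhz(120, 430):.2f}B per MHz")

# Federal slivers auctioned in 2006 and 2015: roughly $27B for roughly 70 MHz.
print(f"Federal slivers:     ~${value_per_mhz(27, 70):.2f}B per MHz")
```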

The Department of Commerce has been analyzing which federal spectrum bands could be used commercially, and the Mobile Now Act, a pending bill in Congress, proposes more sales of federal spectrum. These policies have moved slowly (and the vague language about unlicensed spectrum in the Mobile Now bill has problems), but the Trump administration has a chance to expedite spectrum reallocation and sell more federal spectrum to commercial users.

If Congress and the President wanted to prevent intrusive regulation of the Internet, how would they do it? They know that silence on the issue wouldn’t protect Internet services. As Congress learned in the 1960s and 1970s with cable TV, congressional silence, to the FCC, looks like permission to enact a far-reaching regulatory regime.

In the 1990s, Congress knew the FCC would be tempted to regulate the Internet and Internet services, and that silence would be read as an invitation to do so. Congress and President Clinton therefore passed a 1996 law, Section 230 of the Communications Decency Act, which stated:

It is the policy of the United States…to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.

But this statement raised the possibility that the FCC would regulate Internet access providers and would claim (as FCC defenders do today) they were not regulating “the Internet,” only access providers. To preempt such sophistry, Congress added that the “interactive computer services” shielded from regulation include:

specifically a service or system that provides access to the Internet….

Congress proved prescient. For over a decade, as the FCC’s traditional areas of regulation waned in importance, advocates and FCC officials have sought to regulate Internet access providers and the Internet. After two failed attempts to regulate providers and enforce net neutrality norms, the FCC decided to regulate Internet access providers under Title II, the same provisions that govern telephone and telegraph providers. Section 230 featured prominently in the dissents of Commissioners Pai and O’Rielly, who both noted that the Open Internet Order was a simple rejection of the plain words of Congress. Nevertheless, two judges on the DC Circuit Court of Appeals blessed the Open Internet Order in 2016.

If “unfettered by Federal or State regulation” means anything, doesn’t it mean that the FCC cannot use Title II, its most stringent regulatory regime, to regulate Internet access providers? Is there any combination of words Congress could draft that would protect Internet access providers and Internet services from Title II?

There is a pending appeal challenging the Open Internet Order before the DC Circuit, and after that comes a possible appeal to the Supreme Court. The Supreme Court, in particular, might be receptive to the common-sense argument that “unfettered by Federal or State regulation” is hazy around the edges but cannot mean FCC regulation of ISPs’ content, services, protocols, network topology, and business models.

I understand the sentiment that a net neutrality compromise is urgently needed to save the Internet from Title II. But until the Open Internet Order appeals have concluded, I think it’s premature to compromise and grant the FCC permanent authority to regulate the Internet with vague standards (e.g., no one knows what “reasonable throttling” means). A successful appeal could mean a third and final court loss for net neutrality purists, thereby restoring Section 230’s free-market protections for the Internet. Until the Supreme Court denies cert or agrees with the FCC that up is down, black is white, and agencies can ignore clear statutes, I’m not persuaded that Congress should nullify its own deregulatory language of Section 230 with a net neutrality compromise.

The proposed Mobile Now Act signals that Congress is prioritizing spectrum policy, and there are some useful reforms in the bill. However, the bill encourages unlicensed spectrum allocations in ways that I believe will create major problems down the road.

Congress and the FCC need to proceed much more carefully before allocating more unlicensed spectrum. The FCC’s 2008 decision, for instance, to allow unlicensed devices in the “TV white spaces” has been disappointing. As some economists recently noted, “[s]imply stated, the FCC’s TV white space policy to date has been a flop.” Unlicensed spectrum policy is also generating costly fights (see WiFi v. LTE-U, Bluetooth v. TLPS, LightSquared v. GPS) as device makers and carriers lobby over who gains regulatory protection and how to divide this valuable resource that the FCC parcels out for free.

The unlicensed spectrum provisions in the Mobile Now Act may force the FCC to referee innumerable fights over who has access to unlicensed spectrum. Section 18 of the Mobile Now bill encourages unlicensed spectrum allocations. It says the FCC must

make available on an unlicensed basis radio frequency bands sufficient to meet demand for unlicensed wireless broadband operations if doing so is…reasonable…and…in the public interest.

Note that we have language about supply and demand here. But unlicensed spectrum is free to anyone using an approved device (that is, nearly everyone in the US). Quantity demanded will always outstrip quantity supplied when a valuable asset (like spectrum or real estate) is handed out at a price of zero. Removing a valuable asset from the price system invites large allocation distortions.
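
To make that point concrete, here is a minimal, stylized sketch with made-up demand numbers (not real spectrum data): when a fixed supply is priced at zero, quantity demanded swamps the quantity available, so something other than price has to ration the resource.

```python
# Stylized illustration only; the demand parameters are invented.
# Assume a simple linear demand curve for spectrum: quantity = A - B * price.

A, B = 1000.0, 2.0   # hypothetical demand-curve parameters (MHz demanded)
SUPPLY = 100.0       # fixed amount of spectrum available (MHz)

def quantity_demanded(price: float) -> float:
    return max(A - B * price, 0.0)

# At a price of zero, demand far exceeds the fixed supply.
print(f"Demand at p=0: {quantity_demanded(0.0):.0f} MHz vs. supply of {SUPPLY:.0f} MHz")

# A market-clearing price, by contrast, rations the band to its highest-value uses.
clearing_price = (A - SUPPLY) / B
print(f"Market-clearing price: {clearing_price:.0f}")
```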

Any policy originating from Congress or the FCC to satisfy “demand” for unlicensed spectrum biases the agency towards parceling out an excessive amount of unlicensed spectrum. 

The problems from unlicensed spectrum allocation could be mitigated if the FCC decided, as part of a “public interest” conclusion, to estimate the opportunity cost of any unlicensed spectrum it allocates. That way, the government would have a rough idea of the market value of the unlicensed spectrum being given away. There have been several auctions and there is an active secondary market for spectrum, so estimates are achievable, and the UK has required the calculation of the opportunity cost of spectrum for over a decade.

With these estimates, it would be more difficult, but still possible, for the FCC to defend giving away spectrum for free. Economist Coleman Bazelon, for instance, estimates that the incremental value of a nationwide megahertz of licensed spectrum is more than 10x that of an equivalent unlicensed allocation. Significantly, unlike licensed spectrum, allocations of unlicensed bands are largely irreversible.
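
A minimal sketch of how such an opportunity-cost estimate might be computed, assuming a benchmark dollars-per-MHz figure drawn from past auctions or secondary-market sales. The benchmark value and band size below are hypothetical placeholders, and the 10x ratio is only the rough Bazelon-style estimate mentioned above.

```python
# Hypothetical sketch of an opportunity-cost estimate for an unlicensed allocation.
# The benchmark value and band size are placeholders, not FCC or Bazelon figures.

LICENSED_VALUE_PER_MHZ_B = 0.4       # assumed benchmark: $0.4B per nationwide MHz
LICENSED_TO_UNLICENSED_RATIO = 10.0  # rough ratio cited above (licensed > 10x unlicensed)

def opportunity_cost(mhz: float) -> float:
    """Forgone auction value, in $B, of giving a band away unlicensed."""
    return mhz * LICENSED_VALUE_PER_MHZ_B

def implied_unlicensed_value(mhz: float) -> float:
    """Rough implied value of the same band in unlicensed use, per the 10x ratio."""
    return opportunity_cost(mhz) / LICENSED_TO_UNLICENSED_RATIO

band = 50.0  # hypothetical 50 MHz unlicensed allocation
print(f"Forgone auction value:    ~${opportunity_cost(band):.1f}B")
print(f"Implied unlicensed value: ~${implied_unlicensed_value(band):.1f}B")
```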

People can quibble with the estimates but it is unclear that unlicensed use is the best use of additional spectrum. In any case, hopefully the FCC will attempt to bring some economic rigor to public interest determinations.

Is the incentive auction a disappointment? For consumers, it is not. At least, not yet.

Scott Wallsten at the Technology Policy Institute has a good rundown. My thoughts below:

By my count, this was the eighth major auction of commercial, flexible-use spectrum since auctions were authorized in 1993. On the most important question–how much spectrum was repurposed from restricted uses to flexible, licensed uses?–this auction stacks up pretty well.

At 70 MHz, this was the third largest auction in terms of total spectrum repurposed, trailing the mid-1990s PCS auction (120 MHz) and 2006 AWS-1 auction (90 MHz).

On the next most important question–how quickly will new services be deployed?–the jury is still out. Historically, repurposing spectrum like this takes six to twelve years. Depending on how you classify it, this proceeding commenced in 2010 (when the FCC proposed the incentive auction) or 2012 (when Congress authorized the auction). With the auction over, broadcasters have over three years to clear out of the spectrum, but some believe it will take longer. Right now, it looks like the process will take seven to eleven years total–not great, but pretty typical.

Some people are disappointed with this auction, however, particularly in the broadcasting industry and in the FCC and Congress, where many expected higher auction revenues.

High revenue gets nice headlines but is far less important than the amount of spectrum repurposed. It’s an underreported story, but close to 290 MHz of spectrum, nearly 45% of all liberalized, licensed spectrum, was de-zoned by the FCC, not auctioned. De-zoning generates zero auction revenue, but consumers see substantial benefits from it even though the government does not directly profit. I recently wrote a policy brief about the benefits of de-zoning spectrum.

In any case, in terms of revenue, this auction was not a failure. At around $17 billion, it’s third out of eight, trailing the 2008 700 MHz band auction (about $21 billion in 2015 dollars) and the massive haul from the 2015 AWS-3 auction (about $42 billion).

At close, broadcasters will receive $10 billion for the 70 MHz of available licensed spectrum. Some broadcasters consider it a failure, just as a home seller is disappointed when her home sells below list price. The broadcasters initially requested $86 billion for 100 MHz of available spectrum. When the carriers’ bids didn’t match that price, some broadcasters pulled out and the remaining broadcasters lowered their price.

Were there better ways of repurposing broadcast spectrum? Broadcasters have a point that the complexity of the auction might have reduced buyer and seller participation (which means lower bids and fewer deals). As Wallsten notes, an overlay auction (like AWS-1) or simply de-zoning the spectrum might have been better (faster) alternatives. But it goes too far to deem this auction a failure (at least until we know how long the broadcaster repack takes).

Today marks the 10th anniversary of the launch of the Apple iPhone. With all the headlines being written today about how the device changed the world forever, it is easy to forget that before its launch, plenty of experts scoffed at the idea that Steve Jobs and Apple had any chance of successfully breaking into the seemingly mature mobile phone market.

After all, those were the days when BlackBerry, Palm, Motorola, and Microsoft were on everyone’s minds. Perhaps, then, it wasn’t so surprising to hear predictions like these leading up to and following the launch of the iPhone:

  • In December 2006, Palm CEO Ed Colligan summarily dismissed the idea that a traditional personal computing company could compete in the smartphone business. “We’ve learned and struggled for a few years here figuring out how to make a decent phone,” he said. “PC guys are not going to just figure this out. They’re not going to just walk in.”
  • In January 2007, Microsoft CEO Steve Ballmer laughed off the prospect of an expensive smartphone without a keyboard having a chance in the marketplace as follows: “Five hundred dollars? Fully subsidized? With a plan? I said that’s the most expensive phone in the world and it doesn’t appeal to business customers because it doesn’t have a keyboard, which makes it not a very good e-mail machine.”
  • In March 2007, computing industry pundit John C. Dvorak argued that “Apple should pull the plug on the iPhone” since “There is no likelihood that Apple can be successful in a business this competitive.” Dvorak believed the mobile handset business was already locked up by the era’s major players. “This is not an emerging business. In fact it’s gone so far that it’s in the process of consolidation with probably two players dominating everything, Nokia Corp. and Motorola Inc.”

Continue reading →

The future of emerging technology policy will be influenced increasingly by the interplay of three interrelated trends: “innovation arbitrage,” “technological civil disobedience,” and “spontaneous private deregulation.” Those terms can be briefly defined as follows:

  • “Innovation arbitrage” refers to the idea that innovators can, and will with increasing regularity, move to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity. Just as capital now fluidly moves around the globe seeking out more friendly regulatory treatment, the same is increasingly true for innovations. And this will also play out domestically as innovators seek to play state and local governments off each other in search of some sort of competitive advantage.
  • “Technological civil disobedience” represents the refusal of innovators (individuals, groups, or even corporations) or consumers to obey technology-specific laws or regulations because they find them offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant. New technological devices and platforms are making it easier than ever for the public to openly defy (or perhaps just ignore) rules that limit their freedom to create or use modern technologies.
  • “Spontaneous private deregulation” can be thought of as the de facto rather than de jure elimination of traditional laws and regulations, owing to a combination of rapid technological change and the potential threat of innovation arbitrage and technological civil disobedience. In other words, many laws and regulations aren’t being formally removed from the books, but they are being made largely irrelevant by some combination of those factors. “Benign or otherwise, spontaneous deregulation is happening increasingly rapidly and in ever more industries,” noted Benjamin Edelman and Damien Geradin in a Harvard Business Review article on the phenomenon.[1]

I have previously documented examples of these trends in action for technology sectors as varied as drones, driverless cars, genetic testing, Bitcoin, and the sharing economy. (For example, on the theme of global innovation arbitrage, see all these various essays. And on the growth of technological civil disobedience, see, “DOT’s Driverless Cars Guidance: Will ‘Agency Threats’ Rule the Future?” and “Quick Thoughts on FAA’s Proposed Drone Registration System.” I also discuss some of these issues in the second edition of my Permissionless Innovation book.)

In this essay, I want to briefly highlight how, over the course of just the past month, a single company has offered us a powerful example of how both global innovation arbitrage and technological civil disobedience—or at least the threat thereof—might become a more prevalent feature of discussions about the governance of emerging technologies. And, in the process, that could lead to at least the partial spontaneous deregulation of certain sectors or technologies. Finally, I will discuss how this might affect technological governance more generally and accelerate the movement toward so-called “soft law” governance mechanisms as an alternative to traditional regulatory approaches. Continue reading →

Title II allows the FCC to determine what content and media Internet access providers must transmit on their own private networks, so the First Amendment has constantly dogged the FCC’s “net neutrality” proceedings. If the Supreme Court agrees to take up an appeal from the DC Circuit Court of Appeals, which rejected a First Amendment challenge this summer, it will likely be because of Title II’s First Amendment deficiencies.

Title II has always been about handicapping ISPs qua speakers and preventing ISPs from offering curated Internet content. As former FCC commissioner Copps said, absent the Title II rules, “a big cable company could block access to an investigative report about its less-than-stellar customer service.” Tim Wu told members of Congress that net neutrality was intended to prevent ISPs from favoring, say, particular news sources or sports teams.

But just as a cable company chooses to offer some channels and not others, and a search engine chooses to promote some pages and not others, choosing to offer a curated Internet to, say, children, religious families, or sports fans involves editorial decisions. As communications scholar Stuart Benjamin said about Title II’s problem, under current precedent, ISPs “can say they want to engage in substantive editing, and that’s enough for First Amendment purposes.”

Title II – Bringing Broadcast Regulation to the Internet

Title II regulation of the Internet is frequently compared to the Fairness Doctrine, which activists used for decades to drive conservatives out of broadcast radio and TV. As a pro-net neutrality media professor explained in The Atlantic last year, the motivation for the Fairness Doctrine and Title II Internet regulation are the same: to “rescue a potentially democratic medium from commercial capture.” This is why there is almost perfect overlap between the organizations and advocates who support the Fairness Doctrine and those who lobbied for Title II regulation of the Internet. Continue reading →

The FCC appears to be dragging the TV industry, which is increasingly app- and Internet-based, into years of rulemakings, unnecessary standards development and oversight, and drawn-out lawsuits. The FCC hasn’t made a final decision, but the general outline is pretty clear. The FCC wants to use a 20-year-old piece of corporate welfare, calculated to help a now-dead electronics retailer, as authority to regulate today’s TV apps and their licensing terms. Perhaps it will succeed in expanding its authority over set-top boxes and TV apps. But as TV is being revolutionized by the Internet and legacy providers are trying to stay ahead of the new players (Netflix, Amazon, Layer 3), regulating TV apps and boxes will likely impede the competitive process and distract the FCC from more pressing matters, like spectrum and infrastructure. Continue reading →

Today, the U.S. Department of Transportation released its eagerly-awaited “Federal Automated Vehicles Policy.” There’s a lot to like about the guidance document, beginning with the agency’s genuine embrace of the potential for highly automated vehicles (HAVs) to revolutionize this sector and save thousands of lives annually in the process.

It is important we get HAV policy right, the DOT notes, because, “35,092 people died on U.S. roadways in 2015 alone” and “94 percent of crashes can be tied to a human choice or error.” (p. 5) HAVs could help us reverse that trend and save thousands of lives and billions in economic costs annually. The agency also documents many other benefits associated with HAVs, such as increasing personal mobility, reducing traffic and pollution, and cutting infrastructure costs.

I will not attempt here to comment on every specific recommendation or guideline suggested in the new DOT guidance document. I could nit-pick about some of the specific recommended guidelines, but I think many of the guidelines are quite reasonable, whether they are related to safety, security, privacy, or state regulatory issues. Other issues still need to be addressed, and CEI’s Marc Scribner does a nice job documenting some of them in his response to the new guidelines.

Instead of discussing those specific issues today, I want to ask a more fundamental and far-reaching question which I have been writing about in recent papers and essays: Is this guidance or regulation? And what does the use of informal guidance mechanisms like these signal for the future of technological governance more generally? Continue reading →

On Tuesday, UN Secretary-General Ban Ki-Moon delivered an address to the UN Security Council “on the Non-Proliferation of Weapons of Mass Destruction.” He made many of the same arguments he and his predecessors have articulated before regarding the need for the Security Council “to develop further initiatives to bring about a world free of weapons of mass destruction.” In particular, he was focused on the great harm that could come about from the use of chemical, biological and nuclear weapons. “Vicious non-state actors that target civilians for carnage are actively seeking chemical, biological and nuclear weapons,” the Secretary-General noted. A stepped-up disarmament agenda is needed, he argued, “to prevent the human, environmental and existential destruction these weapons can cause . . . by eradicating them once and for all.”

The UN has created several multilateral mechanisms to pursue those objectives, including the Nuclear Non-Proliferation Treaty, the Chemical Weapons Convention, and the Biological Weapons Convention. Progress on these fronts has always been slow and limited, however. The Secretary-General observed that nuclear non-proliferation efforts have recently “descended into fractious deadlock,” but the effectiveness of those and similar UN-led efforts has long been challenged by the dual realities of (1) rapid ongoing technological change that has made WMDs more ubiquitous than ever, plus (2) a general lack of teeth in UN treaties and accords to do much to slow those advances, especially among non-signatories.

Despite those challenges, the Secretary-General is right to remain vigilant about the horrors of chemical, biological and nuclear attacks. But what was interesting about this address is that the Secretary-General continued on to discuss his concerns about a rising class of emerging technologies, which we usually don’t hear mentioned in the same breath as those traditional “weapons of mass destruction”: Continue reading →