[originally posted on Medium]

Today is the anniversary of the day the machines took over.

Exactly twenty years ago today, on May 11, 1997, the great chess grandmaster Garry Kasparov became the first chess world champion to lose a match to a supercomputer. His battle with IBM’s “Deep Blue” was a highly publicized media spectacle, and when he lost Game 6 of his match against the machine, it shocked the world.

At the time, Kasparov was bitter about the loss and even expressed suspicions about how Deep Blue’s team of human programmers and chess consultants might have tipped the match in favor of machine over man. Although he still wonders about how things went down behind the scenes during the match, Kasparov is no longer as sore as he once was about losing to Deep Blue. Instead, Kasparov has built on his experience that fateful week in 1997 and learned how he and others can benefit from it.

The result of this evolution in his thinking is Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, a book which serves as a paean to human resiliency and our collective ability as a species to adapt in the face of technological disruption, no matter how turbulent.

Kasparov’s book serves as the perfect antidote to the prevailing gloom-and-doom narrative in modern writing about artificial intelligence (AI) and smart machines. His message is one of hope and rational optimism about a future in which we won’t be racing against the machines but rather running alongside them and benefiting in the process.

Overcoming the Technopanic Mentality

There is certainly no shortage of books and articles being written today about AI, robotics, and intelligent machines. The tone of most of these tracts is extraordinarily pessimistic. Each page is usually dripping with dystopian dread and decrying a future in which humanity is essentially doomed.

As I noted in a recent essay about “The Growing AI Technopanic,” after reading through most of these books and articles, one is left to believe that in the future: “Either nefarious-minded robots enslave us or kill us, or AI systems treacherously trick us, or at a minimum turn our brains to mush.” These pessimistic perspectives are clearly on display within the realm of fiction, where every sci-fi book, movie, or TV show depicts humanity as certain losers in the proverbial “race” against machines. But such lugubrious lamentations are equally prevalent within the pages of many non-fiction books, academic papers, editorials, and journalistic articles.

Given the predominantly panicky narrative surrounding the age of smart machines, Kasparov’s Deep Thinking serves as a welcome breath of fresh air. The aim of his book is to find ways of “doing a smarter job of humans and machines working together” to improve well-being.

By Jordan Reimschisel & Adam Thierer

[Originally published on Medium on May 2, 2017.]

Americans have schizophrenic opinions about artificial intelligence (AI) technologies. Ask the average American what they think of AI and they will often respond with a combination of fear, loathing, and dread. Yet, the very same AI applications they claim to be so anxious about are already benefiting their lives in profound ways.

Last week, we posted complementary essays about the growing “technopanic” over artificial intelligence and the potential for that panic to undermine many important life-enriching medical innovations or healthcare-related applications. We were inspired to write those essays after reading the results of a recent poll conducted by Morning Consult, which suggested that the public was very uncomfortable with AI technologies. “A large majority of both Republicans and Democrats believe there should be national and international regulations on artificial intelligence,” the poll found. Of the 2,200 American adults surveyed, “73 percent of Democrats said there should be U.S. regulations on artificial intelligence, as did 74 percent of Republicans and 65 percent of independents.”

We noted that there were reasons to question the significance of those results in light of the binary way in which the questions were asked. Nonetheless, there are clearly some serious concerns among the public about AI and robotics. You can see those concerns when you dig deeper into the poll results for specific questions and find respondents saying that they are “somewhat” to “very uncomfortable” about a wide range of specific AI applications.

Yet Americans are already deriving significant benefits from each of the AI applications they claim to be so uncomfortable with.

There is reporting suggesting that the Trump FCC may move to eliminate the FCC’s complex Title II regulations for the Internet and restore the FTC’s ability to police anticompetitive and deceptive practices online. This is obviously welcome news. These reports also suggest that FCC Chairman Pai and the FTC will require ISPs to add open Internet principles to their terms of service, that is, no unreasonable blocking or throttling of content and no paid priority. These principles have always been imprecise because federal law allows ISPs to block objectionable content if they wish (like pornography or violent websites) and because ISPs have a First Amendment right to curate their services.

Whatever the exact wording, there shouldn’t be a per se ban on paid priority. Whatever policy develops should limit anticompetitive paid priority, not all paid priority. Paid prioritization is simply a form of consideration payment, which is economists’ term for when upstream producers pay downstream retailers or distributors for special treatment. There is an economics literature on consideration payments, and they are an accepted business practice in many other industries. Further, consideration payments often benefit small providers and niche customers. Some small and large companies with interactive IP services might be willing to pay for end-to-end service reliability.

The Open Internet Order’s paid priority ban has always been shortsighted because it attempts to preserve the Internet as it existed circa 2002. It resembles the FCC’s unfounded insistence for decades that subscription TV (i.e., how the vast majority of Americans consume TV today) was against “the public interest.” Like the defunct subscription TV ban, the paid priority ban is an economics-free policy that will hinder new services.

Despite what late-night talk show hosts might say, “fast lanes” on the Internet are here and will continue. “Fast lanes” have always been permitted because, as Obama’s US CTO Aneesh Chopra noted, some emerging IP services need special treatment. Priority transmission was built into Internet protocols years ago, and the OIO doesn’t ban data prioritization; it bans broadband Internet access service (BIAS) providers from charging “edge providers” a fee for priority.

The notion that there’s a level playing field online needing preservation is a fantasy. Non-real-time services like Netflix streaming, YouTube, Facebook pages, and major websites can mostly be “cached” on servers scattered around the US. Major web companies have their own form of paid prioritization–they spend millions annually, including large payments to ISPs, on transit agreements, CDNs, and interconnection in order to avoid congested Internet links.

The problem with a blanket paid priority ban is that it biases the evolution of the Internet in favor of these cache-able services and against real-time or interactive services like teleconferencing, live TV, and gaming. Caching doesn’t work for these services because there’s nothing to cache beforehand. 

When would paid prioritization make sense? Most likely a specialized service for dedicated users that requires end-to-end reliability. 

I’ll use a plausible example to illustrate the benefits of consideration payments online–a telepresence service for deaf people. As Martin Geddes described, a decade ago the government in Wales developed such a service. The service architects discovered that a well-functioning service required quality characteristics that ISPs did not supply. ISPs and video chat apps like Skype optimize their networks, video codecs, and services for non-deaf people (i.e., most customers) and prioritize consistent audio quality over video quality. While that’s useful for most people, deaf people need basically the opposite optimization because they need to perceive subtle hand and finger motions. The typical app that prioritizes audio, not video, doesn’t work for them.

But high-def real-time video quality requires upstream and downstream capacity reservation and end-to-end reliability. This is not cheap to provide. An ISP, in this illustration, has three options–charge the telepresence provider, charge deaf customers a premium, or spread the costs across all customers. The paid priority ban means ISPs can only charge customers for increased costs. This paid priority ban unnecessarily limits the potential for such services since there may be companies or nonprofits willing to subsidize such a service.

It’s a specialized example but illustrates the idiosyncratic technical requirements needed for many real-time services. In fact, real-time services are the next big challenge in the Internet’s evolution. As streaming media expert Dan Rayburn noted, “traditional one-way live streaming is being disrupted by the demand for interactive engagement.”  Large and small edge companies are increasingly looking for low-latency video solutions. Today, a typical “live” event is broadcast online to viewers with a 15- to 45-second delay. This latency limits or kills the potential for interactive online streaming services like online talk shows, pet cams, online auctions, videogaming, and online classrooms.

If the FTC takes back oversight of ISPs and the Internet it should, as with any industry, permit any business practice that complies with competition law and consumer protection law. The agency should disregard the unfounded belief that consideration payments online (“paid priority”) are always harmful.

Written with Christopher Koopman and Brent Skorup (originally published on Medium on 4/10/17)

Innovation isn’t just about the latest gee-whiz gizmos and gadgets. That’s all nice, but something far more profound is at stake: Innovation is the single most important determinant of long-term human well-being. There exists widespread consensus among historians, economists, political scientists and other scholars that technological innovation is the linchpin of expanded economic growth, opportunity, choice, mobility, and human flourishing more generally. It is the ongoing search for new and better ways of doing things that drives human learning and prosperity in every sense — economic, social, and cultural.

As the Industrial Revolution revealed, leaps in economic and human growth cannot be planned. They arise from societies that reward risk takers and legal systems that accommodate change. Our ability to achieve progress is directly proportional to our willingness to embrace and benefit from technological innovation, and it is a direct result of getting public policies right.

The United States is uniquely positioned to lead the world into the next era of global technological advancement and wealth creation. That’s why we and our colleagues at the Technology Policy Program at the Mercatus Center at George Mason University devote so much time and energy to defending the importance of innovation and countering threats to it. Unfortunately, those threats continue to multiply as fast as new technologies emerge.

Federal Communications Commission (FCC) Chairman Ajit Pai today announced plans to expand the role of economic analysis at the FCC in a speech at the Hudson Institute. This is an eminently sensible idea that other regulatory agencies (both independent and executive branch) could learn from.

Pai first made the case that when the FCC listened to its economists in the past, it unlocked billions of dollars of value for consumers. The most prominent example was the switch from hearings to auctions in order to allocate spectrum licenses. He perceptively noted that the biggest effect of auctions was the massive improvement in consumer welfare, not just the more than $100 billion raised for the Treasury. Other examples of the FCC using the best ideas of its economists include:

  • Use of reverse auctions to allocate universal service funds to reduce costs.
  • Incentive auctions that reward broadcasters for transferring licenses to other uses – an idea initially proposed in a 2002 working paper by Evan Kwerel and John Williams at the FCC.
  • The move from rate of return to price cap regulation for long distance carriers.

More recently, Pai argued, the FCC has failed to use economics effectively. He identified four key problems:

  1. Economics is not systematically employed in policy decisions and is often employed late in the process. The FCC has no guiding principles for the conduct and use of economic analysis.
  2. Economists work in silos. They are divided up among bureaus. Economists should be able to work together on a wide variety of issues, as they do in the Federal Trade Commission’s Bureau of Economics, the Department of Justice Antitrust Division’s economic analysis unit, and the Securities and Exchange Commission’s Division of Economic and Risk Analysis.
  3. Benefit-cost analysis is not conducted well or often, and the FCC does not take Regulatory Flexibility Act analysis (which assesses effects of regulations on small entities) seriously. The FCC should use Office of Management and Budget guidance as its guide to doing good analysis, but OMB’s 2016 draft report on the benefits and costs of federal regulations shows that the FCC has estimated neither benefits nor costs of any of its major regulations issued in the past 10 years. Yet executive orders from multiple administrations demonstrate that “Serious cost-benefit analysis is a bipartisan tradition.”
  4. Poor use of data. The FCC probably collects a lot of data that’s unnecessary, at a paperwork cost of $800 million per year, not including opportunity costs to the private sector. But even useful data are not utilized well. For example, a few years ago the FCC stopped trying to determine whether the wireless market is effectively competitive even though it collects lots of data on the wireless market.

To remedy these problems, Pai announced an initiative to establish an Office of Economics and Data that would house the FCC’s economists and data analysts. An internal working group will be established to collect input within the FCC and from the public. He hopes to have the new office up and running by the end of the year. The purpose of this change is to give economists early input into the rulemaking process, better manage the FCC’s data resources, and conduct strategic research to help find solutions to “the next set of difficult issues.”

Can this initiative significantly improve the quality and use of economic analysis at the FCC?

There’s evidence that independent regulatory agencies are capable of making some decent improvements in their economic analysis when they are sufficiently motivated to do so. For example, the Securities and Exchange Commission’s authorizing statute contains language that requires benefit-cost analysis of regulations when the commission seeks to determine whether they are in the public interest. Between 2005 and 2011, the SEC lost several major court cases due to inadequate economic analysis.

In 2012, the commission’s general counsel and chief economist issued new economic analysis guidance that pledged to assess regulations according to the principal criteria identified in executive orders, guidance from the Office of Management and Budget, and independent research. In a recent study, I found that the economic analysis accompanying a sample of major SEC regulations issued after this guidance was measurably better than the analysis accompanying regulations issued prior to the new guidance. The SEC improved on each of the aspects of economic analysis it identified as critical: assessment of the need for the regulation, assessment of the baseline outcomes that will likely occur in the absence of new regulation, identification of alternatives, and assessment of the benefits and costs of alternatives.

Unlike the SEC, the FCC faces no statutory benefit-cost analysis requirement for its regulations. Unlike the executive branch agencies, the FCC is under no executive order requiring economic analysis of regulations. Unlike the Federal Trade Commission in the early 1980s, the FCC faces little congressional pressure for abolition.

But Congress is considering legislation that would require all regulatory agencies to conduct economic analysis of major regulations and subject that analysis to limited judicial review. Proponents of executive branch regulatory review have always contended that the president has legal authority to extend the executive orders on regulatory impact analysis to cover independent agencies, and perhaps President Trump is audacious enough to try this. Thus, it appears Chairman Pai is trying to get the FCC out ahead of the curve.

Congress passed joint resolutions to rescind FCC online privacy regulations this week, which President Trump is expected to sign. Ignore the hyperbole. Lawmakers are simply attempting to maintain the state of Internet privacy law that’s existed for 20-plus years.

Since the Internet was commercialized in the 1990s, the Federal Trade Commission has used its authority to prevent “unfair or deceptive acts or practices” to prevent privacy abuses by Web companies and ISPs. In 2015, that changed. The Obama FCC classified “broadband Internet access service” as a common carrier service, thereby blocking the FTC’s authority to determine which ISP privacy policies and practices are acceptable.

Privacy advocates failed to convince the Obama FTC that de-identified browsing history is “sensitive” data. (The FTC has treated SSNs, medical information, financial information, precise location, etc. as “sensitive” for years, and companies must handle these differently.) The FCC was the next best thing, and in 2016 advocates convinced that agency to say that browsing history is “sensitive data”–but only when ISPs have it.

This has contributed to a regulatory mess for consumers and tech companies. Technological convergence is here. Regulatory convergence is not.

Consider a plausible scenario. I start watching an NFL game via Twitter on my tablet on Starbucks’ wifi. I head home at halftime and watch the game from my cable TV provider, Comcast. Then I climb into bed and watch overtime on my smartphone via NFL Mobile from Verizon.

One TV program, three privacy regimes. FTC guidelines cover me at Starbucks. Privacy rules from Title VI of the Communications Act cover my TV viewing. The brand-new FCC broadband privacy rules cover my NFL Mobile viewing and late-night browsing.

Other absurdities result from the FCC’s decision to regulate Internet privacy. For instance, if you bought your child a mobile plan with web filtering, she’s protected by FTC privacy standards, while your mobile plan is governed by FCC rules. Google Fiber customers are covered by FTC policies when they use Google Search but FCC policies when they use Yelp.

This Swiss-cheese approach to classifying services means that regulatory obligations fall haphazardly across services and technologies. It’s confusing to consumers and to companies, who need to write privacy policies based on artificial FCC distinctions that consumers disregard.

The House and Senate resolutions rescind the FCC “notice and choice” rules, which is the first step to restoring FTC authority. (In the meantime, the FCC will implement FTC-like policies.)

Considering that these notice and choice rules have not even gone into effect, the rehearsed outrage from advocates demands explanation: The theatrics this week are not really about congressional repeal of the (inoperative) privacy rules. Two years ago the FCC decided to regulate the Internet in order to shape Internet services and content. The leading advocates are outraged because FCC control of the Internet is slipping away. Hopefully Congress and the FCC will eliminate the rest of the Title II baggage this year.

US telecommunications laws are in need of updates. US law states that “the Internet and other interactive computer services” should be “unfettered by Federal or State regulation,” but regulators are increasingly imposing old laws and regulations onto new media and Internet services. Further, Federal Communications Commission actions often duplicate or displace general competition laws. Absent congressional action, old telecom laws will continue to delay and obstruct new services. A new Mercatus paper by Roslyn Layton and Joe Kane shows how governments can modernize telecom agencies and laws.

Legacy Laws

US telecom laws are codified in Title 47 of the US Code and enforced mostly by the FCC. That the first eight sections of US telecommunications law are devoted to the telegraph, the killer app of 1850, illustrates congressional inaction on repealing obsolete regulations.

In the last decade, therefore, several media, Internet, and telecom companies inadvertently stumbled into Communications Act quagmires. An Internet streaming company, for instance, was bankrupted for upending the TV status quo established by the FCC in the 1960s; FCC precedents mean broadcasters can be credibly threatened with license revocation for airing a documentary critical of a presidential candidate; and the thousands of Internet service providers across the US are subjected to laws designed to constrain the 1930s AT&T long-distance phone monopoly.

US telecom and tech laws, in other words, are a shining example of American “kludgeocracy”–a regime of prescriptive and dated laws whose complexity benefits special interests and harms innovators. These anti-consumer results led progressive Harvard professor Lawrence Lessig to conclude in 2008 that “it’s time to demolish the FCC.” While Lessig’s proposal goes too far, Congress should listen to the voices on the right and left urging them to sweep away the regulations of the past and rationalize telecom law for the 21st century.

Modern Telecom Policy in Denmark

An interesting new Mercatus working paper explains how Denmark took up that challenge. The paper, “Alternative Approaches to Broadband Policy: Lessons on Deregulation from Denmark,” is by Denmark-based scholar Roslyn Layton, who served on President Trump’s transition team for telecom policy, and Joe Kane, a master’s student in the GMU econ department.

The “Nordic model” is often caricatured by American conservatives (and progressives like Bernie Sanders) as socialist control of industry. But as AEI’s James Pethokoukis and others point out, it’s time both sides updated their 1970s talking points. “[W]hen it comes to regulatory efficiency and business freedom,” Tyler Cowen recently noted, “Denmark has a considerably higher [Heritage Foundation] score than does the U.S.”

Layton and Kane explore Denmark’s relatively free-market telecom policies. They explain how Denmark modernized its telecom laws over time as technology and competition evolved. Critically, the center-left government eliminated Denmark’s telecom regulator in 2011 in light of the “convergence” of services to the Internet. Scholars noted:

Nobody seemed to care much—except for the staff who needed to move to other authorities and a few people especially interested in IT and telecom regulation.

Even-handed, light telecom regulation performs pretty well. Denmark, along with South Korea, leads the world in terms of broadband access. The country also has a modest universal service program that depends primarily on the market. Further, similar to other Nordic countries, Denmark permitted a voluntary forum, including consumer groups, ISPs, and Google, to determine best practices and resolve “net neutrality” controversies.

Contrast Denmark’s tech-neutral, consumer-focused approach with recent proceedings in the United States. One of the Obama FCC’s major projects was attempting to regulate how TV streaming apps functioned–despite the fact that TV has never been more abundant and competitive. Countless hours of staff time and industry time were wasted (Trump’s election killed the effort) because advocates saw the opportunity to regulate the streaming market with a law intended to help Circuit City (RIP) sell a few more devices in 1996. The biggest waste of government resources has been the “net neutrality” fight, which stems from prior FCC attempts to apply 1930s telecom laws to 1960s computer systems. Old rules haphazardly imposed on new technologies create a compliance mindset in our tech and telecom industries. Worse, these unwinnable fights over legal minutiae prevent FCC staff from working on issues where they can help consumers.

Americans deserve better telecom laws, but the inscrutability of FCC actions means consumers don’t know what to ask for. Layton and Kane show that alternative frameworks are available. They highlight Denmark’s political and cultural differences from the US. Nevertheless, Denmark’s telecom reforms and pro-consumer policies deserve study and emulation. The Danes have shown how tech-neutral, consumer-focused policies can not only expand broadband access but also reduce government duplication and overreach.

The Wall Street Journal reported yesterday that the White House is crafting a plan for $1 trillion in infrastructure investment. I was intrigued to learn that President Trump “inquired about the possibility of auctioning the broadcast spectrum to wireless carriers” to help fund the programs. Spectrum sales are the rare win-win-win: they stimulate infrastructure investment (cell towers, fiber networks, devices), provide new wireless services and lower prices to consumers, and generate billions in revenue for the federal government.

Broadcast TV spectrum is a good place to look for revenue, but the White House should also look at federal agencies, which possess about ten times what broadcasters hold.

Large portions of spectrum are underused or misallocated because of decades of command-and-control policies. Auctioning spectrum for flexible uses, on the other hand, is a free-market policy that is often lucrative for the federal government. Since 1993, when Congress authorized spectrum auctions, wireless carriers and tech companies have spent somewhere around $120 billion for about 430 MHz of flexible-use spectrum, and the lion’s share of revenue was deposited in the US Treasury.

A few weeks ago, the FCC completed the $19 billion sale of broadcast TV spectrum, the so-called incentive auction. Despite underwhelming many telecom experts, this was the third largest US spectrum auction ever in terms of revenue and will transfer a respectable 70 MHz from restricted (broadcast TV) use to flexible use.

The remaining broadcast TV spectrum that President Trump is interested in totals about 210 MHz. But even more spectrum is under the President’s nose.

As Obama’s Council of Advisors on Science and Technology pointed out in 2012, federal agencies possess around 2,000 MHz of “beachfront” (sub-3.7 GHz) spectrum. I charted various spectrum uses in a December 2016 Mercatus policy brief.

This government spectrum is very valuable if portions can be cleared of federal users. Federal spectrum was part of the frequencies the FCC auctioned in 2006 and 2015, and the slivers of federal spectrum (around 70 MHz of the federal total) sold for around $27 billion combined.
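
As a rough back-of-the-envelope check on these figures, it helps to compute the implied revenue per nationwide megahertz. The sketch below uses only the round totals cited in this post; industry analysts normally quote $/MHz-pop, so treat the outputs as illustrative orders of magnitude rather than valuations:

```python
# Back-of-the-envelope revenue per nationwide MHz, using only the round
# figures cited in this post. Illustrative only; the standard industry
# metric is $/MHz-pop, not $/MHz.
auctions = {
    "Flexible-use auctions since 1993": (120e9, 430),  # ~$120B for ~430 MHz
    "2017 broadcast incentive auction": (19e9, 70),    # ~$19B for ~70 MHz
    "Federal slivers sold 2006 & 2015": (27e9, 70),    # ~$27B for ~70 MHz
}

for name, (revenue, mhz) in auctions.items():
    print(f"{name}: ${revenue / mhz / 1e6:,.0f} million per nationwide MHz")

# Inventory mentioned above, for scale (nationwide MHz, not dollars):
print("Remaining broadcast TV spectrum: ~210 MHz")
print("Federal 'beachfront' (sub-3.7 GHz) spectrum: ~2,000 MHz")
```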

The Department of Commerce has been analyzing which federal spectrum bands could be used commercially and the Mobile Now Act, a pending bill in Congress, proposes more sales of federal spectrum. These policies have moved slowly (and the vague language about unlicensed spectrum in the Mobile Now bill has problems) but the Trump administration has a chance to expedite spectrum reallocation processes and sell more federal spectrum to commercial users.

If Congress and the President wanted to prevent intrusive regulation of the Internet, how would they do it? They know that silence on the issue wouldn’t protect Internet services. As Congress learned in the 1960s and 1970s with cable TV, congressional silence, to the FCC, looks like permission to enact a far-reaching regulatory regime.

In the 1990s, Congress knew the FCC would be tempted to regulate the Internet and Internet services and that silence would be seen as an invitation to regulate the Internet. Congress and President Clinton therefore passed a 1996 law, Section 230 of the Communications Decency Act, which stated:

It is the policy of the United States…to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.

But this statement raised the possibility that the FCC would regulate Internet access providers and would claim (as FCC defenders do today) that it was not regulating “the Internet,” only access providers. To preempt such sophistry, Congress added that the “interactive computer services” shielded from regulation include:

specifically a service or system that provides access to the Internet….

Congress proved prescient. For over a decade, as the FCC’s traditional areas of regulation waned in importance, advocates and FCC officials have sought to regulate Internet access providers and the Internet. After two failed attempts to regulate providers and enforce net neutrality norms, the FCC decided to regulate Internet access providers with Title II, the same provisions regulating telephone and telegraph providers. Section 230 featured prominently in the dissents of commissioners Pai and O’Rielly, who both noted that the Open Internet Order was a simple rejection of the plain words of Congress. Nevertheless, two judges on the DC Circuit Court of Appeals blessed the Open Internet Order and its regulations in 2016.

If “unfettered by Federal or State regulation” means anything, doesn’t it mean that the FCC cannot use Title II, its most stringent regulatory regime, to regulate Internet access providers? Is there any combination of words Congress could draft that would protect Internet access providers and Internet services from Title II?

There is a pending appeal challenging the Open Internet Order before the DC Circuit, and after that, an appeal to the Supreme Court. The Supreme Court, in particular, might be receptive to a common-sense argument that “unfettered by Federal or State regulation” is hazy around the edges but cannot mean regulation of ISPs’ content, services, protocols, network topology, and business models.

I understand the sentiment that a net neutrality compromise is urgently needed to save the Internet from Title II. But until the Open Internet Order appeals have concluded, I think it’s premature to compromise and grant the FCC permanent authority to regulate the Internet with vague standards (e.g., no one knows what “reasonable throttling” means). A successful appeal could mean a third and final court loss for net neutrality purists, thereby restoring Section 230’s free-market protections for the Internet. Until the Supreme Court denies cert or agrees with the FCC that up is down, black is white, and agencies can ignore clear statutes, I’m not persuaded that Congress should nullify its own deregulatory language of Section 230 with a net neutrality compromise.

The proposed Mobile Now Act signals that spectrum policy is being prioritized by Congress, and there are some useful reforms in the bill. However, the bill encourages unlicensed spectrum allocations in ways that I believe will create major problems down the road.

Congress and the FCC need to proceed much more carefully before allocating more unlicensed spectrum. The FCC’s 2008 decision, for instance, to allow unlicensed devices in the “TV white spaces” has been disappointing. As some economists recently noted, “[s]imply stated, the FCC’s TV white space policy to date has been a flop.” Unlicensed spectrum policy is also generating costly fights (see WiFi v. LTE-U, Bluetooth v. TLPS, LightSquared v. GPS) as device makers and carriers lobby about who gains regulatory protection and how to divide this valuable resource that the FCC parcels out for free.

The unlicensed spectrum provisions in the Mobile Now Act may force the FCC to referee innumerable fights over who has access to unlicensed spectrum. Section 18 of the Mobile Now bill encourages unlicensed spectrum. It says the FCC must

make available on an unlicensed basis radio frequency bands sufficient to meet demand for unlicensed wireless broadband operations if doing so is…reasonable…and…in the public interest.

Note that we have language about supply and demand here. But unlicensed spectrum is free to anyone using an approved device (that is, nearly everyone in the US). Quantity demanded will always outstrip quantity supplied when a valuable asset (like spectrum or real estate) is handed out at a price of zero. Removing a valuable asset from the price system is likely to create large allocation distortions.
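
To make the rationing point concrete, here is a minimal sketch with a hypothetical linear demand curve; every number in it is invented for illustration, not an estimate of actual spectrum demand. At a price of zero, quantity demanded hits its maximum, so any fixed unlicensed allocation smaller than that maximum leaves excess demand that must be rationed by something other than price (device rules, band plans, lobbying):

```python
# Hypothetical linear demand for unlicensed spectrum: Qd(P) = a - b * P.
# All parameters are invented for illustration.
a = 500.0  # MHz demanded when the price is zero
b = 2.0    # MHz by which demand falls per $1M-per-MHz increase in price
S = 100.0  # MHz the FCC allocates as unlicensed (fixed supply)

def quantity_demanded(price):
    """Quantity demanded (MHz) at a given price ($M per MHz)."""
    return max(a - b * price, 0.0)

excess_demand_at_zero = quantity_demanded(0.0) - S
market_clearing_price = (a - S) / b  # price at which Qd(P) = S

print(f"Demanded at a zero price: {quantity_demanded(0.0):.0f} MHz")
print(f"Excess demand at a zero price: {excess_demand_at_zero:.0f} MHz")
print(f"Price that would clear {S:.0f} MHz: ${market_clearing_price:.0f}M per MHz")
```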

Any policy originating from Congress or the FCC to satisfy “demand” for unlicensed spectrum biases the agency towards parceling out an excessive amount of unlicensed spectrum. 

The problems from unlicensed spectrum allocation could be mitigated if the FCC decided, as part of a “public interest” conclusion, to estimate the opportunity cost of any unlicensed spectrum allocated. That way, the government will have a rough idea of the market value of unlicensed spectrum being given away. There have been several auctions and there is an active secondary market for spectrum so estimates are achievable, and the UK has required the calculation of the opportunity cost of spectrum for over a decade.

With these estimates, it will be more difficult but still possible for the FCC to defend giving away spectrum for free. Economist Coleman Bazelon, for instance, estimates that the incremental value of a nationwide megahertz of licensed spectrum is more than 10x the equivalent unlicensed spectrum allocation. Significantly, unlike licensed spectrum, allocations of unlicensed bands are largely irreversible.
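
Here is a rough sketch of what such an opportunity-cost estimate might look like, taking Bazelon’s 10x ratio at face value and plugging in an assumed licensed value per nationwide megahertz in the ballpark implied by past auction totals; the dollar figure is my assumption for illustration, not Bazelon’s number:

```python
# Illustrative opportunity cost of allocating spectrum as unlicensed.
# The licensed value per MHz is an assumed figure roughly in line with
# past auction totals; the 10x ratio is Bazelon's estimate cited above.
licensed_value_per_mhz = 280e6       # assumed: ~$280M per nationwide MHz
licensed_to_unlicensed_ratio = 10    # licensed worth >10x unlicensed (Bazelon)

unlicensed_value_per_mhz = licensed_value_per_mhz / licensed_to_unlicensed_ratio
opportunity_cost_per_mhz = licensed_value_per_mhz - unlicensed_value_per_mhz

block_mhz = 100  # size of a hypothetical unlicensed allocation
print(f"Foregone value per MHz: ${opportunity_cost_per_mhz / 1e6:,.0f} million")
print(f"Foregone value for {block_mhz} MHz: "
      f"${opportunity_cost_per_mhz * block_mhz / 1e9:,.1f} billion")
```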

People can quibble with the estimates but it is unclear that unlicensed use is the best use of additional spectrum. In any case, hopefully the FCC will attempt to bring some economic rigor to public interest determinations.