“You don’t gank the noobs,” my friend’s brother explained to me, growing angrier as he watched a high-level player repeatedly stalk and then cut down my feeble, low-level night elf cleric in the massively multiplayer online roleplaying game World of Warcraft. He logged on to the server as his “main,” a high-level gnome mage, and went in search of my killer, carrying out two-dimensional justice. What he meant by his exclamation was that players have developed a social norm banning the “ganking,” or killing, of low-level “noobs” just starting out in the game. He reinforced that norm by punishing the overzealous player with premature annihilation.

Ganking noobs is an example of undesirable social behavior in a virtual space on par with cutting people off in traffic or budging people in line. Punishments for these behaviors take a variety of forms, from honking, to verbal confrontation, to virtual manslaughter. Virtual reality social spaces, defined as fully artificial digital environments, are the newest medium for social interaction. Increased agency and a sense of physical presence within a VR social world like VRChat allow users to more intensely experience both positive and negative situations, thus reopening the discussion of how best to govern these spaces.


Internet regulation advocates lost their fight at the FCC, which voted in December 2017 to rescind the 2015 Open Internet Order. Regulation advocates have now taken their “net neutrality” regulations to the states.

Some state officials–via procurement contracts, executive order, or legislation–are attempting to monitor and regulate traffic management techniques and Internet service provider business models in the name of net neutrality. No one, apparently, told these officials that government-mandated net neutrality principles are dead in the US.

As the litigation over the 2015 rules showed, our national laissez-faire policy towards the Internet and our First Amendment gut any attempt to enforce net neutrality. Recall that the 1996 amendments to the Communications Act announce a clear national policy about the Internet.

Autonomous cars have been discussed rather thoroughly recently, and at this point it seems a question of when and how rather than if they will become standard. But as this issue starts to settle, new questions about the application of autonomous technology to other types of transportation are becoming ripe for policy debates. While a great deal of attention has focused on the potential to revolutionize the trucking and shipping industries, not as much attention has been paid to how automation may help improve both intercity and intracity bus travel, or other public and private transit like trains. The recent requests for comment from the Federal Transit Administration show that policymakers are starting to consider these other modes of transit in preparing their next recommendations for autonomous vehicles. Here are 5 issues that will need to be considered for an autonomous transit system.


Last week the FCC commissioners voted to restructure the agency and create an Office of Economics and Analytics. Hopefully the new Office will give some rigor to the “public interest standard” that guides most FCC decisions. It’s important that the FCC formally inject economics into public interest determinations, perhaps much like the Australian telecom regulator’s “total welfare standard,” which is basically a social welfare calculation plus consideration of “broader social impacts.”
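To make the idea concrete, a total welfare calculation of the kind described above nets consumer and producer surplus against costs and broader social impacts. The sketch below is a toy illustration with entirely hypothetical figures; it is not drawn from any actual FCC or Australian regulatory analysis.

```python
# Toy total-welfare calculation for a hypothetical spectrum policy.
# All figures are invented for illustration only.

def total_welfare(consumer_surplus, producer_surplus,
                  broader_social_impacts, administrative_costs):
    """Net welfare: surpluses plus social impacts, minus costs."""
    return (consumer_surplus + producer_surplus
            + broader_social_impacts - administrative_costs)

# Hypothetical policy: auctioning a new spectrum band (figures in $M).
welfare = total_welfare(
    consumer_surplus=120.0,        # lower prices, better coverage
    producer_surplus=80.0,         # carrier profits from new services
    broader_social_impacts=-15.0,  # e.g., transition costs for incumbents
    administrative_costs=25.0,     # running the auction and enforcement
)
print(welfare)  # 160.0
```

A standard like this forces each claimed “broader social impact” to be stated as an explicit, contestable number rather than buried in boilerplate, which is the rigor the vague public interest standard lacks.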

In contrast, the existing “standard” has several components and subcomponents (some of them contradictory) depending on the circumstances; that is, it’s no standard at all. As the first general counsel of the Federal Radio Commission, Louis Caldwell, said of the public interest standard, it means

as little as any phrase that the drafters of the Act could have used and still comply with the constitutional requirement that there be some standard to guide the administrative wisdom of the licensing authority.

Unfortunately, this means public interest determinations are largely shielded from serious court scrutiny. As Judge Posner said of the standard in Schurz Communications v. FCC,

So nebulous a mandate invests the Commission with an enormous discretion and correspondingly limits the practical scope of responsible judicial review.

Posner colorfully characterized FCC public interest analysis in that case:

The Commission’s majority opinion … is long, but much of it consists of boilerplate, the recitation of the multitudinous parties’ multifarious contentions, and self-congratulatory rhetoric about how careful and thoughtful and measured and balanced the majority has been in evaluating those contentions and carrying out its responsibilities. Stripped of verbiage, the opinion, like a Persian cat with its fur shaved, is alarmingly pale and thin.

Every party who does significant work before the FCC has agreed with Judge Posner’s sentiments at one time or another.

Which brings us to the Office of Economics and Analytics. Cost-benefit analysis has its limits, but economic rigor is increasingly important as the FCC turns its attention away from media regulation and towards spectrum assignment and broadband subsidies.

The worst excesses of FCC regulation are in the past where, for instance, one broadcaster’s staff in 1989 “was required to review 14,000 pages of records to compile information for one [FCC] interrogatory alone out of 299.” Or when, say, FCC staff had to sift through and consider 60,000 TV and radio “fairness” complaints in 1970. These regulatory excesses were corrected by economists (namely, Ronald Coase’s recommendation that spectrum licenses be auctioned, rather than given away for free by the FCC after a broadcast “beauty contest” hearing), but history shows that FCC proceedings spiral out of control without the agency intending it.

Since Congress gave such a nebulous standard, the FCC is always at risk of regressing. Look no further than the FCC’s meaningless “Internet conduct standard” from its 2015 Open Internet Order. This “net neutrality” regulation is a throwback to the bad old days, an unpredictable conduct standard that–like the Fairness Doctrine–would constantly draw the FCC into social policy activism and distract companies with interminable FCC investigations and unknowable compliance requirements.

In the OIO’s mercifully short life, we saw glimpses of the disputes that would’ve distracted the agency and regulated companies. For instance, prominent net neutrality supporters had wildly different views about whether a common practice, “zero rating” of IP content, by T-Mobile violated the Internet conduct standard. Chairman Tom Wheeler initially called it “highly innovative and highly competitive” while Harvard professor Susan Crawford said it was “dangerous” and “malignant” and should be outlawed “immediately.” The nearly year-long FCC investigations into zero rating and the equivocal report sent a clear, chilling message to ISPs and app companies: 20 years of permissionless innovation for the Internet was long enough. Submit your new technologies and business plans to us or face the consequences.

Fortunately, by rescinding the 2015 Order and creating the new economics Office, Chairman Pai and his Republican colleagues are improving the outlook for the development of the Internet. Hopefully the Office will make social welfare calculations a critical part of the public interest standard.

We hear a lot these days about “technological moonshots.” It’s an interesting phrase because the meanings of both words in it are often left undefined. I won’t belabor the point about how people define–or, rather, fail to define–“technology” when they use it. I’ve already spent a lot of time writing about that problem. See, for example, this constantly updated essay here about “Defining ‘Technology.'” It’s a compendium I began curating years ago that collects what dozens of others have had to say on the matter. I’m always struck by how many different definitions I keep unearthing.

The term “moonshots” has a similar problem. The first meaning is the literal one that hearkens back to President Kennedy’s famous 1962 “we choose to go to the moon” speech. That use of the term implies large government programs and agencies, centralized control, and top-down planning with a very specific political objective in mind. Increasingly, however, the term “moonshot” is used more generally, as I note in this new Mercatus essay about “Making the World Safe for More Moonshots.” My Mercatus Center colleague Donald Boudreaux has referred to moonshots as “radical but feasible solutions to important problems,” and Mike Cushing of Enterprise Innovation defines a moonshot as an “innovation that achieves the previously unthinkable.” I like that more generic use of the term and think it could be used appropriately when discussing the big innovations many of us hope to see in fields as diverse as quantum computing, genetic editing, AI and autonomous systems, supersonic transport, and much more. I still have some reservations about the term, but I think it’s definitely a better term than “disruptive innovation,” which is also used differently by various scholars and pundits.


There was a bold, bizarre proposal published by Axios yesterday that includes leaked documents from a “senior National Security Council official” for accelerating 5G deployment in the US. “5G” refers to the latest generation of wireless technologies, whose evolving specifications are being standardized by global telecommunications companies as we speak. The proposal highlights some reasonable concerns–the need for secure networks, the deleterious slowness in getting wireless infrastructure permits from thousands of municipalities and counties–but recommends an unreasonable solution–a government-operated, nationwide wireless network.

The proposal to nationalize some 5G equipment and network components needs to be nipped in the bud. It relies on the dated notion that centralized government management outperforms “wasteful competition.” It’s infeasible and would severely damage the US telecom and Internet sector, one of the brightest spots in the US economy. The plan will likely go nowhere, but the fact that it’s being circulated by administration officials is alarming.

First, a little context. In 1927, the US nationalized all radiofrequency spectrum, and for decades the government rationed out dribbles of spectrum for commercial use (though much has improved since liberalization in the 1990s). To this day all spectrum is nationalized and wireless companies operate at sufferance. What this new document proposes is to make a poor situation worse.

In particular, the presentation proposes to re-nationalize 500 MHz of spectrum (the 3.7 GHz to 4.2 GHz band, which contains mostly satellite and government incumbents) and build wireless equipment and infrastructure across the country to transmit on this band. The federal government would act as a wholesaler to the commercial networks (AT&T, Verizon, T-Mobile, Sprint, etc.), who would sell retail wireless plans to consumers and businesses.

The justification for nationalizing a portion of 5G networks has a national security component and an economic component: prevent Chinese spying and beat China in the “5G race.”

The announced goals are simultaneously broad and narrow, and in severe tension with each other.

The plan is broad in that it contemplates nationalizing part of the 5G equipment and network. However, it’s narrow in that it would nationalize only a portion of the 5G network (3.7 GHz to 4.2 GHz) and not other portions (like 600 MHz and 28 GHz). This undermines the national security purpose (assuming it’s even feasible to protect the nationalized portion) since 5G networks interconnect. It’d be like having government checkpoints on Interstate 95 but leaving all other interstates checkpoint-free.

Further, the document’s author misunderstands the evolutionary nature of 5G networks. For a while, 5G will be an overlay on the existing 4G LTE network, not a brand-new parallel network, as the NSC document assumes. 5G equipment will be installed on 4G LTE infrastructure in neighborhoods where capacity is strained. As Sherif Hanna, director of the 5G team at Qualcomm, noted on Twitter, in fact, “the first version of the 5G [standard]…by definition requires an existing 4G radio and core network.”

The most implausible idea in the document is that a nationwide 5G network could be deployed in the next few years. Environmental and historic preservation review in a single city can take longer than that. (AT&T has battled NIMBYs and local government in San Francisco for a decade, for instance, to install a few hundred utility boxes on the public right-of-way.) The federal government deploying and maintaining hundreds of thousands of 5G installations within two years, from scratch, is a pipe dream. And how to pay for it? The “Financing” section in the document says nothing about how the federal government will find the tens of billions of dollars needed for nationwide deployment of a government 5G network.

The plan to nationalize a portion of 5G wireless networks and deploy nationwide is unwise and unrealistic. It would permanently damage the US broadband industry, it would antagonize city and state officials, it would raise serious privacy and First Amendment concerns, and it would require billions of new tax dollars to deploy. The released plan would also fail to ensure the network security it purports to protect. US telecom companies are lining up to pay the government for spectrum and to invest private dollars to build world-class 5G networks. If the federal government wants to accelerate 5G deployment, it should sell more spectrum and redirect existing government funding towards roadside infrastructure. Network security is a difficult problem but nationalizing networks is overkill.

Already, four out of five [update: all five] FCC commissioners have come out strongly against this plan. Someone reading the NSC proposal would get the impression that the US is sitting still while China is racing ahead on 5G. The US has unique challenges but wireless broadband deployment is probably the FCC’s highest priority. The Commission is aware of the permitting problems and formed the Broadband Deployment Advisory Committee in part for that very purpose (I’m a member). The agency, in cooperation with the Department of Commerce, is also busy looking for more spectrum to release for 5G.

Recode is reporting that White House officials are already distancing the White House from the proposal. Hopefully they will publicly reject the plan soon.

Co-authored with Adam Thierer

Why would progressives abandon the most successful progressive technology policy ever formulated?

In a recent piece in The Washington Spectator, Marc Rotenberg and Larry Irving have some harsh words for progressives’ supposed starry-eyed treatment of Internet firms and the Clinton Administration policies that helped give rise to the modern digital economy. They argue that the Internet has failed to live up to its promise in part because “[p]rogressive leaders moved away from progressive values on tech issues, and now we live with the consequences.”

But if the modern Internet we know today is truly the result of progressives’ self-repudiation, then we owe them and the Clinton Administration a debt of gratitude, not a lecture.

Unfortunately, Rotenberg and Irving take a different perspective. They criticize progressives for standing aside while “a new mantra of ‘multistakeholder engagement’” replaced traditional regulatory governance structures, unleashing a Pandora’s Box of “self-regulatory processes” that failed to keep the private sector accountable to the public.

Rotenberg and Irving are also upset that the First Amendment rights of Internet companies have received stronger support following the implementation of Section 230 of the Communications Decency Act, which was enacted by Congress in 1996 and signed into law by President Clinton as part of the Telecommunications Act of 1996.

All of this could have been avoided, they argue, if the Clinton Administration had instead embraced the creation of a National Information Infrastructure (NII) to govern the Internet. As part of its 1993 proposed “Agenda for Action,” the Clinton White House toyed with the idea that “[d]evelopment of the NII can help unleash an information revolution that will change forever the way people live, work, and interact with each other,” citing specific examples of how it would: empower people to “live almost anywhere they wanted, without foregoing opportunities for useful and fulfilling employment”; make education “available to all students, without regard to geography, distance, resources, or disability”; and permit healthcare and other social needs to be delivered “on-line, without waiting in line, when and where you needed them.” Luckily, all these things came to pass precisely because the Clinton Administration went a different route, ignoring the heavy-handed regulatory approach offered by early tech policy wonks and opting instead to embrace a different governance framework: The Framework for Global Electronic Commerce.

The 1997 Framework outlined a succinct, market-oriented vision for the Internet and the emerging digital economy. It envisioned a model of cyberspace governance that relied on multistakeholder collaboration and ongoing voluntary negotiations and agreements to find consensus on the new challenges of the information age. Policy was to be formulated in an organic, bottom-up, and fluid fashion. This was a stark and welcome break from the failed top-down technocratic regulatory regimes of the analog era, which had long held back innovation and choice in traditional communications and media sectors.

“Where governmental involvement is needed,” The Framework advised, “its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce.” The result was one of the most amazing explosions in innovation our nation and, indeed, the entire world had ever witnessed. It was precisely the flexibility of multistakeholder governance—as well as the strong support for the free flow of speech and commerce—that unleashed this tsunami of technological progress.  

It’s strange, then, that Rotenberg and Irving decry the era of “multistakeholder engagement” that the Clinton Administration Framework presaged, especially because they included similar provisions in their own frameworks. For example, in “A Public-Interest Vision of the National Information Infrastructure,” the authors specifically called for “democratic policy-making” in the governance of the emerging Internet, arguing that “[t]he public should be fully involved in policy-making for the information infrastructure.” They go even further by citing the value of “participatory design,” which emphasized iterative experimentation and information feedback loops (learning by doing) in the process of designing network standards and systems. These “[n]ew approaches,” Rotenberg and Irving argue, “combine the centralized and decentralized models, obtaining the benefits of each while avoiding their deficiencies.” Embracing “[b]oth participatory design and the experimental approach to standardization,” they concluded, would “achieve the benefits of democratic input to design and policy-making without sacrificing the technical advantages of consistency and elegance of design.”

On this point, Rotenberg and Irving are correct. Unfortunately, it seems their valuation of such processes does not apply to the regulatory structures overseeing these technologies. This is despite the “Agenda for Action” explicitly calling for the NII to “complement … the efforts of the private sector” by “work[ing] in close partnership with business, labor, academia, the public, Congress, and state and local government.” What’s more “multistakeholder” than that?

For all their lamentations of the multistakeholder process, Rotenberg and Irving engaged in that very process in the 1990s. Their proposals had their shot at convincing the Clinton Administration that a national regulatory agency governing the Internet was necessary to usher in the digital age. And in one of those ironic twists of history, they failed to get their agency, but nevertheless bore witness to the emergence of a free and open Internet where innovation and progress still flourish.

We shouldn’t lose sight of this miraculous achievement and the public policies that made it all possible. There’s nothing “progressive” about rolling back the clock in the way Rotenberg and Irving recommend. Instead, America should double-down on the Clinton Administration’s vision for innovation policy by embracing permissionless innovation, collaborative multistakeholderism, and strong support for freedom of speech as the cornerstones of public policy toward other emerging technologies and sectors.

The FCC released a proposed Order today that would create an Office of Economics and Analytics. Last April, Chairman Pai proposed this data-centric office. There are about a dozen bureaus and offices within the FCC and this proposed change in the FCC’s organizational structure would consolidate a few offices and many FCC economists and experts into a single office.

This is welcome news. Several years ago when I was in law school, I was a legal clerk for the FCC Wireless Bureau and for the FCC Office of General Counsel. During that ten-month stint, I was surprised at the number of economists at the FCC, all of whom were excellent. I assisted several of them closely (and helped organize what one FCC official dubbed, unofficially, “The Economists’ Cage Match” for outside experts sparring over the competitive effects of the proposed AT&T-T-Mobile merger). However, my impression even during my limited time at the FCC was well-stated by Chairman Pai in April:

[E]conomists are not systematically incorporated into policy work at the FCC. Instead, their expertise is typically applied in an ad hoc fashion, often late in the process. There is no consistent approach to their use.

And since the economists are sprinkled about the agency, their work is often “siloed” within their respective bureaus. Economics as an afterthought in telecom is not good for the development of US tech industries, nor for consumers.

As Geoffrey Manne and Allen Gibby said recently, “the future of telecom regulation is antitrust,” and the creation of the OEA is a good step in line with global trends. Many nations–like the Netherlands, Denmark, Spain, Japan, South Korea, and New Zealand–are restructuring legacy telecom regulators. The days of public and private telecom monopolies and discrete, separate communications, computer, and media industries (and thus bureaus) are past. Convergence, driven by IP networks and deregulation, has created these trends and resulted in sometimes dramatic restructuring of agencies.

In Denmark, for instance, as Roslyn Layton and Joe Kane have written, national parties and regulators took inspiration from the deregulatory plans of the Clinton FCC. The Social Democrats, the Radical Left, the Left, the Conservative People’s Party, the Socialist People’s Party, and the Center Democrats agreed in 1999:

The 1990s were focused on breaking down old monopoly; now it is important to make the frameworks for telecom, IT, radio, TV meld together—convergence. We believe that new technologies will create competition.

It is important to ensure that regulation does not create a barrier for the possibility of new converged products; for example, telecom operators should be able to offer content if they so choose. It is also important to ensure digital signature capability, digital payment, consumer protection, and digital rights. Regulation must be technologically neutral, and technology choices are to be handled by the market. The goal is to move away from sector-specific regulation toward competition-oriented regulation. We would prefer to handle telecom with competition laws, but some special regulation may be needed in certain cases—for example, regulation for access to copper and universal service.

This agreement was followed up by the quiet shuttering of NITA, the Danish telecom agency, in 2011.

Bringing economic rigor to the FCC’s notoriously vague “public interest” standard seemed to be occurring (slowly) during the Clinton and Bush administrations. However, during the Obama years, this progress was derailed, largely by the net neutrality silliness, which not only distracted US regulators from actual problems like rural broadband expansion but also reinvigorated the media-access movement, whose followers believe the FCC should have a major role in shaping US culture, media, and technologies.

Fortunately, those days are in the rearview mirror. The proposed creation of the OEA represents another pivot toward the likely future of US telecom regulation: a focus on consumer welfare, competition, and data-driven policy.

Technology policy has made major inroads into a growing number of fields in recent years, including health care, labor, and transportation, and we at the Technology Liberation Front have brought a free-market lens to these issues for over a decade. As is our annual tradition, below are the most popular posts* from the past year, as well as key excerpts.

Enjoy, and Happy New Year.

Reason magazine recently published my review of Franklin Foer’s new book, World Without Mind: The Existential Threat of Big Tech. My review begins as follows:

If you want to sell a book about tech policy these days, there’s an easy formula to follow.

First you need a villain. Google and Facebook should suffice, but if you can throw in Apple, Amazon, or Twitter, that’s even better. Paint their CEOs as either James Bond baddies bent on world domination or naive do-gooders obsessed with the quixotic promise of innovation.

Finally, come up with a juicy Chicken Little title. Maybe something like World Without Mind: The Existential Threat of Big Tech. Wait—that one’s taken. It’s the title of Franklin Foer’s latest book, which follows this familiar techno-panic template almost perfectly.

The book doesn’t break a lot of new ground; it serves up the same old technopanicky tales of gloom-and-doom that many others have said will befall us unless something is done to save us. But Foer’s unique contribution is to unify many diverse strands of modern tech criticism in one tome, and then amp up the volume of panic about it all. Hence, the “existential” threat in the book’s title. I bet you didn’t know the End Times were so near!

Read the rest of my review over at Reason. And, if you care to read some of my other essays on technopanics through the ages, here’s a compendium of them.