The outrage over the FCC’s attempt to write new open Internet rules has caught many by surprise, and probably Chairman Wheeler as well. The rumored possibility of the FCC authorizing broadband “fast lanes” has drawn most of the complaints and animus. Gus Hurwitz points out that the FCC’s actions this week have nothing to do with fast lanes, and Larry Downes reminds us that this week’s rules don’t authorize anything. There’s a tremendous amount of misinformation because few understand how administrative law works. Yet many net neutrality proponents fear the worst from the proposed rules because Wheeler takes the consensus position that broadband provision is a two-sided market and that prioritized traffic could be pro-consumer.

Fast lanes have been permitted by the FCC for years, and they can benefit consumers. Some broadband services–like video and voice over Internet protocol (VoIP)–need to be transmitted faster or with better quality than static webpages, email, and file syncs. Don’t take my word for it. The 2010 Open Internet NPRM, which led to the recently struck-down rules, stated:

As rapid innovation in Internet-related services continues, we recognize that there are and will continue to be Internet-Protocol-based offerings (including voice and subscription video services, and certain business services provided to enterprise customers), often provided over the same networks used for broadband Internet access service, that have not been classified by the Commission. We use the term “managed” or “specialized” services to describe these types of offerings. The existence of these services may provide consumer benefits, including greater competition among voice and subscription video providers, and may lead to increased deployment of broadband networks.

I have no special knowledge about what ISPs will or won’t do. Even under minimal regulation, I wouldn’t predict the widespread development of prioritized traffic in the short term. I suspect the carriers haven’t looked too closely at additional services because the threat of net neutrality regulation has hung over them for a decade. But some of net neutrality proponents’ talking points (like insinuating or predicting that ISPs will block political speech they disagree with) are not based in reality.

We run a serious risk of derailing research and development into broadband services if the FCC is cowed by uninformed and extreme net neutrality views. As Adam eloquently said, “Living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about.” Many net neutrality proponents would like to smear all priority traffic as unjust and exploitative. This is unfortunate and a bit ironic because one of the most transformative communications developments, cable VoIP, is a prioritized IP service.

There are other IP services that are economically feasible only if jitter and latency are minimized and speeds are kept high. Prioritized traffic takes several forms, but it could enhance these services:

VoIP. This prioritized service has actually been around for several years and has completely revolutionized the phone industry. Something unthinkable for decades–facilities-based local telephone service–became commonplace in the last few years and undermined much of the careful industrial planning in the 1996 Telecom Act. If you subscribe to voice service from your cable provider, you are benefiting from fast lane treatment. Your “phone” service is carried over your broadband cable, segregated from your television and Internet streams. Smaller ISPs could conceivably make their phone service more attractive by pairing up with a Skype- or Vonage-type voice provider, and there are other possibilities that make local phone service more competitive.

Cloud-hosted virtual desktops. This is not a new idea, but it’s possible to have most or all of your computing done in a secure cloud, not on your PC, via a prioritized data stream. With a virtual desktop, your laptop or desktop PC functions mainly as a dumb portal. No more annoying software updates. Fewer security risks. IT and security departments everywhere would rejoice. Google Chromebooks are a stripped-down version of this but truly functional virtual desktops would be valued by corporations, reporters, or government agencies that don’t want sensitive data saved on a bunch of laptops in their organization that they can’t constantly monitor. Virtual desktops could also transform the device market, putting the focus on a great cloud and (priority) broadband service and less on the power and speed of the device. Unfortunately, at present, virtual desktops are not in widespread use because even small lag frustrates users.

TV. The future of TV is IP-based and the distinction between “TV” and “the Internet” is increasingly blurring, with Netflix leading the way. In a fast lane future, you could imagine ISPs launching pared-down TV bundles–say, Netflix, HBO Go, and some sports channels–over a broadband connection. Most ISPs wouldn’t do it, but an over-the-top package might interest smaller ISPs who find acquiring TV content and bundling their own cable packages time-consuming and expensive.

Gaming. Computer gamers hate jitter and latency. (My experience with a roommate who had unprintable outbursts when Diablo III or World of Warcraft lagged is not uncommon.) Game lag means you die quite frequently because of your data connection and this depresses your interest in a game. There might be gaming companies out there who would like to partner with ISPs and other network operators to ensure smooth gameplay. Priority gaming services could also lead the way to more realistic, beautiful, and graphics-intensive games.

Teleconferencing, telemedicine, teleteaching, etc. Any real-time, video-based service could reach a critical mass of subscribers and become economical with priority treatment. Lag absolutely kills consumer interest in these video-based applications. With priority treatment for applications like telemedicine, remote services could become attractive to enough people that ISPs could offer stand-alone broadband products built around them.

This is just a sampling of the possible consumer benefits of pay-for-priority IP services that we may sacrifice in the name of strict neutrality enforcement. There are other services we can’t even conceive of yet that would never develop. Generally, net neutrality proponents don’t acknowledge these possible benefits and are trying to poison the well against all priority deals, including many of these services.

Most troubling, net neutrality turns the regulatory process on its head. Rather than identify a market failure and then take steps to correct the failure, the FCC may prevent commercial agreements that would be unobjectionable in nearly any other industry. The FCC has many experts who are familiar with the possible benefits of broadband fast lanes, which is why the FCC has consistently blessed priority treatment in some circumstances.

Unfortunately, the orchestrated reaction in recent weeks might leave us with onerous rules that delay new broadband services or foreclose them entirely. Hopefully, in the ensuing months, reason will win out and FCC staff will be persuaded by competitive analysis and possible innovations, not t-shirt slogans.

This article was written by Adam Thierer, Jerry Brito, and Eli Dourado.

For the three of us, like most others in the field today, covering “technology policy” in Washington has traditionally been synonymous with covering communications and information technology issues, even though “tech policy” has actually always included policy relevant to a much wider array of goods, services, professions, and industries.

That’s changing, however. Day by day, the world of “technology policy” is evolving and expanding to incorporate much, much more. The same forces that have powered the information age revolution are now transforming countless other fields and laying waste to older sectors, technologies, and business models in the process. As Marc Andreessen noted in a widely-read 2011 essay, “Why Software Is Eating The World”:

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.

Why is this happening now? Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.

More specifically, many of the underlying drivers of the digital revolution—massive increases in processing power, exploding storage capacity, steady miniaturization of computing, ubiquitous communications and networking capabilities, the digitization of all data, and increasing decentralization and disintermediation—are beginning to have a profound impact beyond the confines of cyberspace.


Allowing broadband providers to impose tolls on Internet companies represents a “grave” threat to the Internet, or so wrote several Internet giants and their allies in a letter to the Federal Communications Commission this past week.

The reality is that broadband networks are very expensive to build and maintain.  Broadband companies have invested approximately $250 billion in U.S. wired and wireless broadband networks—and have doubled average delivered broadband speeds—just since President Obama took office in early 2009.  Nevertheless, some critics claim that American broadband is still too slow and expensive.

The current broadband pricing model is designed to recover the entire cost of maintaining and improving the network from consumers.  Internet companies get free access to broadband subscribers.

Although the broadband companies are not poised to experiment with different pricing models at this time, the Internet giants and their allies are mobilizing against the hypothetical possibility that they might in the future. But this is not the gravest threat to the Internet.

Today is a big day in Congress for the cable and satellite (MVPD) industry’s war on broadcast television stations. The House Judiciary Committee is holding a hearing on the compulsory licenses for broadcast television programming in the Copyright Act, and the House Energy and Commerce Committee is voting on a bill to reauthorize “STELA” (the compulsory copyright license for the retransmission of distant broadcast signals by satellite operators). The STELA license is set to expire at the end of the year unless Congress reauthorizes it, and MVPDs see the potential for Congressional action as an opportunity for broadcast television to meet its Waterloo. They desire a decisive end to the compulsory copyright licenses, the retransmission consent provision in the Communications Act, and the FCC’s broadcast exclusivity rules — which would also be the end of local television stations.

The MVPD industry’s ostensible motivations for going to war are retransmission consent fees and television “blackouts”, but the real motive is advertising revenue.

The compulsory copyright licenses prevent MVPDs from inserting their own ads into broadcast programming streams, and the retransmission consent provision and broadcast exclusivity agreements prevent them from negotiating directly with the broadcast networks for a portion of their available advertising time. If these provisions were eliminated, MVPDs could negotiate directly with broadcast networks for access to their television programming and appropriate TV station advertising revenue for themselves.

Few people have been more tireless in their defense of the notion of “permissionless innovation” than Wall Street Journal columnist L. Gordon Crovitz. In his weekly “Information Age” column for the Journal (which appears each Monday), Crovitz has consistently sounded the alarm regarding new threats to Internet freedom, technological freedom, and individual liberties. It was, therefore, a great honor for me to wake up Monday morning and read his latest post, “The End of the Permissionless Web,” which discussed my new book “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.”

“The first generation of the Internet did not go well for regulators,” Crovitz begins his column. “Despite early proposals to register websites and require government approval for business practices, the Internet in the U.S. developed largely without bureaucratic control and became an unstoppable engine of innovation and economic growth.” Unfortunately, he correctly notes:

Regulators don’t plan to make the same mistake with the next generation of innovations. Bureaucrats and prosecutors are moving in to undermine services that use the Internet in new ways to offer everything from getting a taxi to using self-driving cars to finding a place to stay.

This is exactly why I penned my little manifesto. As Crovitz notes in his essay, new regulatory threats to both existing and emerging technologies are popping up almost daily. He highlights current battles over Uber, Airbnb, 23andMe, commercial drones, and more. And his previous columns have discussed many other efforts to “permission” innovation and force heavy-handed, top-down regulatory schemes on fast-paced and rapidly evolving sectors and technologies.

Adam and I recently published a Mercatus research paper titled Video Marketplace Regulation: A Primer on the History of Television Regulation And Current Legislative Proposals, now available on SSRN. I presented the paper at a Silicon Flatirons academic conference last week.

We wrote the paper for a policy audience and students who want succinct information and history about the complex world of television regulation. Television programming is delivered to consumers in several ways, including via cable, satellite, broadcast, IPTV (like Verizon FiOS), and, increasingly, over-the-top broadband services (like Netflix and Amazon Instant Video). Despite their obvious similarities–transmitting movies and shows to a screen–each distribution platform is regulated differently.

The television industry is in the news frequently because of problems exacerbated by the disparate regulatory treatment. The Time Warner Cable-CBS dispute last fall (and TWC’s ensuing loss of customers), the Aereo lawsuit, and the Comcast-TWC proposed merger were each caused at least indirectly by some of the ill-conceived and antiquated TV regulations we describe. Further, TV regulation is a “thicket of regulations,” as the Copyright Office has said, which benefits industry insiders at the expense of most everyone else.

We contend that overregulation of television resulted primarily because past FCCs, and Congress to a lesser extent, wanted to promote several social objectives through a nationwide system of local broadcasters:

1) Localism;
2) Universal service;
3) Free (that is, ad-based) television; and
4) Competition.

These objectives can’t be accomplished simultaneously without substantial regulatory mandates. Further, these social goals may even contradict each other in some respects.

For decades, public policies constrained TV competitors to accomplish those goals. We recommend instead a reliance on markets and consumer choice through comprehensive reform of television laws, including repeal of compulsory copyright laws, must-carry, retransmission consent, and media concentration rules.

At the very least, our historical review of TV regulations provides an illustrative case study of how regulations accumulate haphazardly over time, demand additional “correction,” and damage dynamic industries. Unfortunately, Congress and the FCC focused on attaining particular competitive outcomes through industrial policy. Our paper makes the case for market-based competition and for regulations that put consumer choice at the forefront.

Last week, the Mercatus Center at George Mason University published the new book by Tom W. Bell, Intellectual Privilege: Copyright, Common Law, and the Common Good, which Eugene Volokh calls “A fascinating, highly readable, and original look at copyright[.]” Richard Epstein says that Bell’s book “makes a distinctive contribution to a field in which fundamental political theory too often takes a back seat to more overt utilitarian calculations.” Some key takeaways from the book:

  • If copyright were really property, like a house or cell phone, most Americans would belong in jail. That nobody seriously thinks infringement should be fully enforced demonstrates that copyright is not property and that copyright policy is broken.
  • Under the Founders’ Copyright, as set forth in the 1790 Copyright Act, works could be protected for a maximum of 28 years. Under present law, they can be extended to 120 years. The massive growth of intellectual privilege serves big corporate publishers to the detriment of individual authors and artists.
  • By discriminating against unoriginal speech, copyright sharply limits our freedoms of expression.
  • We should return to the wisdom of the Founders and regard copyrights as special privileges narrowly crafted to serve the common good.

This week, on Wednesday, May 7, at noon, the Cato Institute will hold a book forum featuring Bell, and comments by Christopher Newman, Assistant Professor, George Mason University School of Law. It’s going to be a terrific event and you should come. Please make sure to RSVP.

The FCC is set to vote later this month on rules for the incentive auction of spectrum licenses in the broadcast television band. These licenses would ordinarily be won by the highest bidders, but not in this auction. The FCC plans to ensure that Sprint and T-Mobile win licenses in the incentive auction even if they aren’t willing to pay the highest price, because it believes that Sprint and T-Mobile will expand their networks to cover rural areas if it sells them licenses at a substantial discount.

This theory is fundamentally flawed. Sprint and T-Mobile won’t substantially expand their footprints into rural areas even if the FCC were to give them spectrum licenses for free. There simply isn’t enough additional revenue potential in rural areas to justify covering them with four or more networks no matter what spectrum is used or how much it costs. It is far more likely that Sprint and T-Mobile will focus their efforts on more profitable urban areas while continuing to rely on FCC roaming rights to use networks built by other carriers in rural areas.

My friend Tim Lee has an article at Vox that argues that interconnection is the new frontier on which the battle for the future of the Internet is being waged. I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless.

How the Internet used to work

The Internet is a network of networks. Your ISP is a network. It connects to other ISPs and exchanges traffic with them. Because these connections are roughly equally valuable to both partners, they often take the form of “settlement-free peering,” in which networks exchange traffic on an unpriced basis.

Not every ISP connects directly to every other ISP. For example, a local ISP in California probably doesn’t connect directly to a local ISP in New York. If you’re an ISP that wants to be sure your customers can reach every other network on the Internet, you have to purchase “transit” service from a bigger or more specialized ISP. Transit carries your data along what used to be called “the backbone” of the Internet. Transit providers that exchange roughly equally valued traffic with other networks have settlement-free peering arrangements of their own.
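The structure described above can be sketched as a toy graph (all network names are hypothetical): two local ISPs with no direct link still reach each other through a shared transit provider.

```python
# Toy model of "a network of networks". Each tuple is an interconnection
# link; names are purely illustrative.
links = {
    ("CalLocalISP", "TransitCo"),  # the California ISP buys transit from TransitCo
    ("NYLocalISP", "TransitCo"),   # the New York ISP also connects to TransitCo
}

def reachable(a, b, links):
    """Breadth-first search over interconnection links (traffic flows both ways)."""
    graph = {}
    for x, y in links:
        graph.setdefault(x, set()).add(y)
        graph.setdefault(y, set()).add(x)
    seen, queue = {a}, [a]
    while queue:
        node = queue.pop(0)
        if node == b:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# The two local ISPs have no direct link, yet each can reach the other
# through the transit provider.
print(reachable("CalLocalISP", "NYLocalISP", links))  # True
```

This is a deliberately minimal sketch; real interconnection is governed by BGP routing policy, not simple graph reachability.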

How the Internet works now

A few things have changed in the last several years. One major change is that most major ISPs have very large, geographically-dispersed networks. For example, Comcast serves customers in 40 states, and other networks can peer with them in 18 different locations across the US. These 18 locations are connected to each other through very fast cables that Comcast owns. In other words, Comcast is not just a residential ISP anymore. They are part of what used to be called “the backbone,” although it no longer makes sense to call it that since there are so many big pipes that cross the country and so much traffic is transmitted directly through ISP interconnection.

Another thing that has changed is that content providers are increasingly delivering a lot of a) traffic-intensive and b) time-sensitive content across the Internet. This has created the incentive to use what are known as content-delivery networks (CDNs). CDNs are specialized ISPs that locate servers right on the edge of all terminating ISPs’ networks. There are a lot of CDNs—here is one list.

By locating on the edge of each consumer ISP, CDNs are able to deliver content to end users with very low latency and at very fast speeds. For this service, they charge money to their customers. However, they also have to pay consumer ISPs for access to their networks, because the traffic flow is all going in one direction and otherwise CDNs would be making money by using up resources on the consumer ISP’s network.

CDNs’ payments to consumer ISPs are also a matter of equity among the ISP’s customers. Let’s suppose that Vox hires Amazon CloudFront to serve traffic to Comcast customers (they do). If the 50 percent of Comcast customers who want to read Vox suddenly started using up so many network resources that Comcast and CloudFront needed to upgrade their connection, who should pay for the upgrade? The naïve answer is that Comcast should, because that is what customers are paying it for. But the efficient answer is that the 50 percent who want to access Vox should pay for it, and the 50 percent who don’t should not. When Comcast charges CloudFront for access to its network, CloudFront passes those costs along to Vox, and Vox passes them along to customers in the form of advertising, the resource costs of using the network are paid by those who use it and not by those who don’t.
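A toy calculation (all numbers hypothetical) makes the equity point concrete: spreading an upgrade cost across all subscribers charges non-readers for capacity they never use, while billing through the CDN puts the cost only on the readers who generate the traffic.

```python
# Hypothetical numbers: who bears a $1,000 interconnection upgrade under
# the two cost-allocation schemes described above.
upgrade_cost = 1_000.0
customers = 100
vox_readers = 50  # the 50 percent who want to read Vox

# Scheme 1: the consumer ISP pays and spreads the cost over all subscribers,
# readers and non-readers alike.
cost_per_customer = upgrade_cost / customers       # 10.0 for everyone

# Scheme 2: the CDN pays the ISP, passes the cost to Vox, and Vox recovers
# it (via advertising) only from the readers who actually use the capacity.
cost_per_reader = upgrade_cost / vox_readers       # 20.0 per reader, 0 for non-readers

print(cost_per_customer, cost_per_reader)  # 10.0 20.0
```

Under the second scheme, non-readers pay nothing, which is the efficiency argument the paragraph above is making.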

What happened with the Netflix/Comcast dust-up?

Netflix used multiple CDNs to serve its content to subscribers. For example, it used a CDN provided by Cogent to serve content to Comcast customers. Cogent ran out of capacity and refused to upgrade its link to Comcast. As a result, some of Comcast’s customers experienced a decline in quality of Netflix streaming. However, Comcast customers who accessed Netflix with an Apple TV, which is served by CDNs from Level 3 and Limelight, never had any problems. Cogent has had peering disputes in the past with many other networks.

To solve the congestion problem, Netflix and Comcast negotiated a direct interconnection. Instead of Netflix paying Cogent and Cogent paying Comcast, Netflix is now paying Comcast directly. They signed a multi-year deal that is reported to reduce Netflix’s costs relative to what they would have paid through Cogent. Essentially, Netflix is vertically integrating into the CDN business. This makes sense. High-quality CDN service is essential to Netflix’s business; they can’t afford to experience the kind of incident that Cogent caused with Comcast. When a service is strategically important to your business, it’s often a good idea to vertically integrate.

It should be noted that what Comcast and Netflix negotiated was not a “fast lane”—Comcast is prohibited from offering prioritized traffic as a condition of its merger with NBC/Universal.

What about Comcast’s market power?

I think that one of Tim’s hangups is that Comcast has a lot of local market power. There are lots of barriers to creating a competing local ISP in Comcast’s territories. Doesn’t this mean that Comcast will abuse its market power and try to gouge CDNs?

Let’s suppose that Comcast is a pure monopolist in a two-sided market. It’s already extracting the maximum amount of rent that it can on the consumer side. Now it turns to the upstream market and tries to extract rent. The problem with this is that it can only extract rents from upstream content producers insofar as it lowers the value of the rent it can collect from consumers. If customers have to pay higher Netflix bills, then they will be less willing to pay Comcast. The fact that the market is two-sided does not significantly increase the amount of monopoly rent that Comcast can collect.
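A toy model (all numbers hypothetical) of the argument: whatever fee the monopolist charges the content side comes straight out of what it can charge consumers, leaving its total rent unchanged.

```python
# Hypothetical two-sided-market numbers illustrating the rent argument above.
content_value = 30.0  # what a subscriber values Netflix-over-broadband at, in total
netflix_cost = 10.0   # Netflix's own costs, recovered in its subscription price

def comcast_total_rent(interconnection_fee):
    # Netflix passes the fee through to its subscription price.
    netflix_price = netflix_cost + interconnection_fee
    # The most Comcast can charge the consumer is the surplus left over
    # after the consumer pays Netflix.
    consumer_price = content_value - netflix_price
    return consumer_price + interconnection_fee

# Whatever fee Comcast charges upstream, its total take is unchanged:
# the fee raises Netflix's price and lowers what consumers will pay Comcast.
print(comcast_total_rent(0.0), comcast_total_rent(5.0))  # 20.0 20.0
```

The model is a sketch under strong assumptions (full pass-through, a single consumer type), but it captures why two-sidedness alone doesn’t expand the monopoly rent.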

Interconnection fees that are being paid to Comcast (and virtually all other major ISPs) have virtually nothing to do with Comcast’s market power and everything to do with the fact that the Internet has changed, both in structure and content. This is simply how the Internet works. I use CloudFront, the same CDN that Vox uses, to serve even a small site like my Bitcoin Volatility Index. CloudFront negotiates payments to Comcast and other ISPs on my and Vox’s behalf. There is nothing unseemly about Netflix making similar payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).

For more reading material on the Netflix/Comcast arrangement, I recommend Dan Rayburn’s posts here, here, and here. Interconnection is a very technical subject, and someone with very specialized expertise like Dan is invaluable in understanding this issue.

I spend a lot of time reading books and essays about technology; more specifically, books and essays about technology history and criticism. Yet, I am often struck by how few of the authors of these works even bother defining what they mean by “technology.” I find that frustrating because, if you are going to make an attempt to either study or critique a particular technology or technological practice or development, then you probably should take the time to tell us how broadly or narrowly you are defining the term “technology” or “technological process.”

Of course, it’s not easy. “In fact, technology is a word we use all of the time, and ordinarily it seems to work well enough as a shorthand, catch-all sort of word,” notes the always-insightful Michael Sacasas in his essay “Traditions of Technological Criticism.” “That same sometimes useful quality, however, makes it inadequate and counter-productive in situations that call for more precise terminology,” he says.

Quite right, and for a more detailed and critical discussion of how earlier scholars, historians, and intellectuals have defined or thought about the term “technology,” you’ll want to check out Michael’s other recent essay, “What Are We Talking About When We Talk About Technology?” which preceded the one cited above. We don’t always agree on things — in fact, I am quite certain that most of my comparatively amateurish work must make his blood boil at times! — but you won’t find a more thoughtful technology scholar alive today than Michael Sacasas. If you’re serious about studying technology history and criticism, you should follow his blog and check out his book, The Tourist and The Pilgrim: Essays on Life and Technology in the Digital Age, which is a collection of some of his finest essays.

Anyway, for what it’s worth, I figured I would create this post to list some of the more interesting definitions of “technology” that I have uncovered in my own research. I suspect I will add to it in coming months and years, so please feel free to suggest other additions since I would like this to be a useful resource to others.