Broadband & Neutrality Regulation

In a [recent post](http://www.forbes.com/sites/timothylee/2012/09/08/the-weird-economics-of-utility-networks/), Tim Lee does a good job of explaining why facilities-based competition in broadband is difficult. He writes,

>As Verizon is discovering with its FiOS project, it’s much harder to turn a profit installing the second local loop; both because fewer than 50 percent of customers are likely to take the service, and because competition pushes down margins. And it’s almost impossible to turn a profit providing a third local loop, because fewer than a third of customers are likely to sign up, and even more competition means even thinner margins.

Tim thus concludes that

>the kind of “facilities-based” competition we’re seeing in Kansas City, in which companies build redundant networks that will sit idle most of the time, is extremely wasteful. In a market where every household has n broadband options (each with its own fiber network), only 1/n local loops will be in use at any given time. The larger n is, the more resources are wasted on redundant infrastructure.

I don’t understand that conclusion. You would imagine that redundant infrastructure would be built only if it is profitable to its builder. Tim is right that we probably should not expect more than a few competitors, but I don’t see how more than one pipe is necessarily wasteful. If laying down a second set of pipes is profitable, shouldn’t we welcome the competition? The real question is whether that second pipe is profitable without government subsidy.
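Tim’s 1/n arithmetic is easy to make concrete. In the toy model below (every number is an invented illustration, not real deployment data), each loop costs the same to build no matter how many subscribers it wins, demand splits 1/n, and margins shrink with n. Under these assumptions a second loop can clear its costs while a third cannot:

```python
# Toy model of the 1/n overbuilding argument. All figures (households,
# costs, margins) are illustrative assumptions, not real deployment data.
HOUSEHOLDS = 10_000
LOOP_COST = 7_000_000        # fixed cost of passing every home; amortized below
BASE_MARGIN = 600            # annual per-subscriber margin for a monopolist

def annual_profit(n: int) -> float:
    """Expected annual profit per provider when n networks overbuild."""
    subscribers = HOUSEHOLDS / n       # demand splits roughly 1/n
    margin = BASE_MARGIN / n           # competition thins margins too
    return subscribers * margin - LOOP_COST / 10   # loop amortized over 10 years

for n in (1, 2, 3):
    print(f"{n} competitor(s): {annual_profit(n):,.0f} per year")
```

Under these made-up numbers the second entrant still profits while the third loses money, which is my point: whether a second pipe pays for itself without subsidy is an empirical question about costs and take rates, not something the 1/n observation settles on its own.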

That brings me to a larger point: I think what Tim is missing is what makes Google Fiber so unique. Tim assumes that all competitors in broadband will make their profits from the subscription fees they collect from subscribers. As we all know, that’s not [how Google tends to operate](http://elidourado.com/blog/theory-of-google/). Google’s primary business model is advertising, and that’s likely [where it expects its return to come from](http://community.nasdaq.com/News/2012-08/google-seeking-more-ad-impressions-with-fast-fiber.aspx?storyid=162788). One of Google Fiber’s price points is [free](http://www.techdirt.com/blog/innovation/articles/20120726/11200919842/google-fiber-is-official-free-broadband-up-to-5-mbps-pay-symmetrical-1-gbps.shtml), so we might expect greater adoption of the service. That’s disruptive innovation that could sustainably increase competition and bring down prices for consumers, without a government subsidy.

Kansas City sadly gave Google all sorts of subsidies, like free power and rackspace for its servers, as [Tim has pointed out](http://arstechnica.com/tech-policy/2012/09/how-kansas-city-taxpayers-support-google-fiber/), but it also cut serious red tape. For example, there is no build-out requirement for Google Fiber, a fact [now bemoaned](http://www.wired.com/business/2012/09/google-fiber-digital-divide/) by digital divide activists. Such requirements, I would argue, are the [true cause](http://news.cnet.com/How-to-squelch-growth-of-the-high-speed-Net/2010-1034_3-6106690.html) of the unused and wasteful overbuilding that Tim laments.

So what matters more? The in-kind subsidies or the freedom to build only where it’s profitable? I think that’s the empirical question we’re really arguing about. It’s not a foregone conclusion of broadband economics that [there can be only one](http://www.youtube.com/watch?v=4AoOa-Fz2kw). And do we want to limit competition in part of a municipality in order to achieve equity for the whole? That’s another question over which “original recipe” and bleeding-heart libertarians may have a difference of opinion.

How does the FCC justify taking action without an adequate evidentiary basis? By relying on a series of fallacies to provide an aura of evidence without actually having any. That’s a problem for an agency that wants to be seen as fact-based and data-driven. Fallacies are like zeros: No matter how many you have, you still have nothing.


Yesterday the Federal Communications Commission (FCC), our government’s communications industry experts, issued an order that would flunk an introductory college course in logic. Despite issuing multiple data requests, the FCC told the DC Circuit Court of Appeals in October 2011 that it “lacked a sufficient evidentiary record” to document claims that its “pricing flexibility rules” governing special access were flawed. The FCC’s evidentiary record hasn’t improved, but it suspended its pricing flexibility rules on a so-called “interim” basis anyway while it tries to figure out how to obtain the data it needs to do a transparent, data-based analysis.

Google’s first lesson for building affordable, one Gbps fiber networks with private capital is crystal clear: If government wants private companies to build ultra high-speed networks, it should start by waiving regulations, fees, and bureaucracy.

Executive Summary

For three years now the Obama Administration and the Federal Communications Commission (FCC) have been pushing for national broadband connectivity as a way to strengthen our economy, spur innovation, and create new jobs across the country. They know that America requires more private investment to achieve their vision. But, despite their good intentions, their policies haven’t encouraged substantial private investment in communications infrastructure. That’s why the launch of Google Fiber is so critical to policymakers who are seeking to promote investment in next generation networks.

The Google Fiber deployment offers policymakers a rare opportunity to examine policies that successfully spurred new investment in America’s broadband infrastructure. Google’s intent was to “learn how to bring faster and better broadband access to more people.” Over the two years it spent planning, developing, and building its ultra high-speed fiber network, Google learned a number of valuable lessons for broadband deployment – lessons that policymakers can apply across America to meet our national broadband goals.

To my surprise, however, the policy response to the Google Fiber launch has been tepid. After reviewing Google’s deployment plans, I expected to hear the usual chorus of Rage Against the ISP from Public Knowledge, Free Press, and others from the left-of-center, so-called “public interest” community (PIC) who seek regulation of the Internet as a public utility. Instead, they responded to the launch with deafening silence.

Maybe they were stunned into silence. Google’s deployment is a real-world rejection of the public interest community’s regulatory agenda more powerful than any hypothetical. Google is building fiber in Kansas City because its officials were willing to waive regulatory barriers to entry that have discouraged broadband deployments in other cities. Google’s first lesson for building affordable, one Gbps fiber networks with private capital is crystal clear: If government wants private companies to build ultra high-speed networks, it should start by waiving regulations, fees, and bureaucracy.

On Forbes today, I have a long article on the progress being made to build gigabit Internet testbeds in the U.S., particularly by Gig.U.

Gig.U is a consortium of research universities and their surrounding communities created a year ago by Blair Levin, an Aspen Institute Fellow and, previously, the principal architect of the FCC’s National Broadband Plan.  Its goal is to work with private companies to build ultra high-speed broadband networks with sustainable business models.

Gig.U, along with Google Fiber’s Kansas City project and the White House’s recently announced US Ignite project, springs from similar origins, and all three share similar goals.  Their general belief is that by building ultra high-speed broadband in selected communities, consumers, developers, network operators, and investors will get a clear sense of the true value of Internet speeds that are 100 times as fast as those available today through high-speed cable-based networks – and will then go build a lot more of them.

Google Fiber, for example, announced last week that it would be offering fully symmetrical 1 Gbps connections in Kansas City, perhaps as soon as next year.  (By comparison, my home broadband service from Xfinity is 10 Mbps download and considerably slower going up.)

US Ignite is encouraging public-private partnerships to build demonstration applications that could take advantage of next generation networks and near-universal adoption.  It is also looking at the most obvious regulatory impediments at the federal level that make fiber deployments unnecessarily complicated, painfully slow, and unduly expensive.

I think these projects are encouraging signs of native entrepreneurship focused on solving a worrisome problem:  the U.S. is nearing a dangerous stalemate in its communications infrastructure.  We have the technology and scale necessary to replace much of our legacy wireline phone networks with native IP broadband.  Right now, ultra high-speed broadband is technically possible by running fiber to the home.  Indeed, Verizon’s FiOS network currently delivers 300 Mbps broadband and is available to some 15 million homes.


On CNET today, I’ve posted a long critique of the recent report by the President’s Council of Advisors on Science and Technology (PCAST) urging the White House to reverse course on a two-year old order to free up more spectrum for mobile users.

In 2010, soon after the FCC’s National Broadband Plan raised alarms about the need for more spectrum for an explosion in mobile broadband use, President Obama issued a Memorandum ordering federal agencies to free up as much as 500 MHz of radio frequencies currently assigned to them.

After a great deal of dawdling, the National Telecommunications and Information Administration (NTIA), which oversees spectrum assignments within the federal government, issued a report earlier this year that seemed to offer progress: 95 MHz of very attractive spectrum could in fact be cleared in the ten years called for by the White House.

But reading between the lines, it was clear that the 20 agencies involved in the plan had no serious intention of cooperating. Their cost estimates for relocation (which were simply reported by NTIA without any indication of how they’d been arrived at or even whether NTIA had been given any details) appeared to be based on an amount that would make any move economically impossible.

As budget deficits have increased, public investment in our nation’s infrastructure has declined. In just the last four years, the “United States has fallen sharply in the World Economic Forum’s ranking of national infrastructure systems,” from 6th in 2007-2008 to 16th in 2011-2012. Our roads, bridges, rail networks, and ports are all straining to handle demand, but due to budget concerns, lawmakers have little interest in increased funding.

I’ve argued (here and here, for instance) against worrying too much about the monopolization of Internet access. Broadband is pretty clearly an industry with increasing returns to scale, and when returns to scale are pronounced enough, the result is a natural monopoly. Regulatory solutions to natural monopoly problems do not generally yield clear welfare gains, and broadband in particular is a case where many of the problems associated with monopolization are ameliorated by price discrimination.
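The increasing-returns claim is ultimately just arithmetic about fixed costs. A minimal sketch, with invented numbers, of why average cost per subscriber keeps falling as a single network grows:

```python
# Why high fixed costs produce increasing returns to scale.
# Both figures are invented for illustration, not real network costs.
FIXED_COST = 100_000_000   # build the network once, regardless of take-up
MARGINAL_COST = 100        # annual cost of serving one additional subscriber

def avg_cost(subscribers: int) -> float:
    """Average annual cost per subscriber."""
    return FIXED_COST / subscribers + MARGINAL_COST

# A single network serving a whole city undercuts two networks splitting it:
city = 200_000
print(avg_cost(city))        # 600.0 per subscriber
print(avg_cost(city // 2))   # 1100.0 per subscriber
```

Because the biggest network always has the lowest average cost, it can price below any smaller rival’s break-even point, which is the mechanism behind the natural-monopoly tendency described above.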

Nevertheless, I accept that most people are not persuaded by this logic. Let me try a different tack, explaining what I would expect to see if profit-centered monopolists were really as bad for consumers as their critics claim.

The answer can be summed up in one word: mutuals. Mutual companies are not especially common in today’s economy, but they are worth pondering at some length. Mutuals are firms whose customers, by virtue of their ongoing patronage, are also their owners. A mutual company generally has no other shareholders to please, and it does not typically distribute dividends. Instead, any profit it makes is passed back to its customers in the form of lower prices in the future.


Is competition really a problem in the tech industry? That was the question the folks over at WebProNews asked me to come on their show and discuss this week. I offer my thoughts in the following 15-minute clip. Also, down below I have embedded a few of my recent relevant essays on this topic, a few of which I mentioned during the show.

It’s come to this. After more than a decade of policies aimed at reducing the telephone companies’ share of the landline broadband market, the feds now want to thwart a key wireless deal on the remote chance it might result in a major phone company exiting the wireline market completely.

The Department of Justice is holding up the $3.9 billion deal that would transfer a block of unused wireless spectrum from a consortium of four cable companies to Verizon Wireless, an arm of Verizon, the country’s largest phone company.

The rationale, reports The Washington Post’s Cecilia Kang, is that DoJ is concerned the deal, which also includes a wireless co-marketing agreement with Comcast, Cox, Time Warner, and Bright House Networks, the companies that jointly own the spectrum in question, would lead Verizon to neglect its FiOS fiber-to-the-home service.

There’s no evidence that this might happen, but the fact that DoJ put it on the table demonstrates the problems inherent in government attempts to regulate competition.


Count me among those who are rolling their eyes as the Department of Justice initiates an investigation into whether cable companies are using data caps to strong-arm so-called “over-the-top” on-demand video providers like Netflix, Walmart’s Vudu, Amazon.com, and YouTube.

The Wall Street Journal reported last week that DoJ investigators “are taking a particularly close look at the data caps that pay-TV providers like Comcast and AT&T Inc. have used to deal with surging video traffic on the Internet. The companies say the limits are needed to stop heavy users from overwhelming their networks.”

Internet video providers like Netflix have expressed concern that the limits are aimed at stopping consumers from dropping cable television and switching to online video providers. They also worry that cable companies will give priority to their own online video offerings on their networks to stop subscribers from leaving.

Here are five reasons why the current anticompetitive Sturm und Drang is an absurd waste of time and might end up doing more harm than good.
