Executive Summary
For three years now the Obama Administration and the Federal Communications Commission (FCC) have been pushing for national broadband connectivity as a way to strengthen our economy, spur innovation, and create new jobs across the country. They know that America requires more private investment to achieve their vision. But, despite their good intentions, their policies haven’t encouraged substantial private investment in communications infrastructure. That’s why the launch of Google Fiber is so critical to policymakers who are seeking to promote investment in next generation networks.
The Google Fiber deployment offers policymakers a rare opportunity to examine policies that successfully spurred new investment in America’s broadband infrastructure. Google’s intent was to “learn how to bring faster and better broadband access to more people.” Over the two years it spent planning, developing, and building its ultra high-speed fiber network, Google learned a number of valuable lessons about broadband deployment – lessons that policymakers can apply across America to meet our national broadband goals.
To my surprise, however, the policy response to the Google Fiber launch has been tepid. After reviewing Google’s deployment plans, I expected to hear the usual chorus of Rage Against the ISP from Public Knowledge, Free Press, and others from the left-of-center, so-called “public interest” community (PIC) who seek regulation of the Internet as a public utility. Instead, they responded to the launch with deafening silence.
Maybe they were stunned into silence. Google’s deployment is a real-world rejection of the public interest community’s regulatory agenda, more powerful than any hypothetical. Google is building fiber in Kansas City because city officials there were willing to waive regulatory barriers to entry that have discouraged broadband deployments in other cities. Google’s first lesson for building affordable, one Gbps fiber networks with private capital is crystal clear: If government wants private companies to build ultra high-speed networks, it should start by waiving regulations, fees, and bureaucracy. Continue reading →
On Forbes today, I have a long article on the progress being made to build gigabit Internet testbeds in the U.S., particularly by Gig.U.
Gig.U is a consortium of research universities and their surrounding communities created a year ago by Blair Levin, an Aspen Institute Fellow and, until recently, the principal architect of the FCC’s National Broadband Plan. Its goal is to work with private companies to build ultra high-speed broadband networks with sustainable business models.
Gig.U, Google Fiber’s Kansas City project, and the White House’s recently announced US Ignite project spring from similar origins and have similar goals. The general belief behind all three is that by building ultra high-speed broadband in selected communities, consumers, developers, network operators, and investors will get a clear sense of the true value of Internet speeds that are 100 times as fast as those available today through high-speed cable-based networks – and then go build a lot more of them.
Google Fiber, for example, announced last week that it would be offering fully-symmetrical 1 Gbps connections in Kansas City, perhaps as soon as next year. (By comparison, my home broadband service from Xfinity is 10 Mbps download and considerably slower going up.)
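To make that roughly 100x gap concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not anything from Google’s announcement) that converts the two nominal rates into download times for a hypothetical 5 GB file. It ignores protocol overhead, Wi-Fi bottlenecks, and congestion, so the numbers are purely illustrative.

```python
# Rough download-time comparison at nominal link speeds.
# Illustrative only: real throughput is lower due to protocol overhead and congestion.

def download_time_seconds(file_size_gb: float, link_speed_mbps: float) -> float:
    """Seconds to transfer a file of file_size_gb gigabytes at link_speed_mbps megabits per second."""
    file_size_megabits = file_size_gb * 8 * 1000  # 1 GB ~ 8,000 megabits (decimal units)
    return file_size_megabits / link_speed_mbps

FILE_SIZE_GB = 5.0  # hypothetical example: a large HD video file

for label, mbps in [("10 Mbps cable tier", 10), ("1 Gbps fiber", 1000)]:
    minutes = download_time_seconds(FILE_SIZE_GB, mbps) / 60
    print(f"{label}: {minutes:.1f} minutes")

# Expected output:
#   10 Mbps cable tier: 66.7 minutes
#   1 Gbps fiber: 0.7 minutes
```

The point is simply that a transfer taking more than an hour at a typical cable tier takes well under a minute at gigabit speeds – the kind of difference testbed advocates believe developers need to experience before gigabit-class applications will emerge.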
US Ignite is encouraging public-private partnerships to build demonstration applications that could take advantage of next generation networks and near-universal adoption. It is also looking at the most obvious regulatory impediments at the federal level that make fiber deployments unnecessarily complicated, painfully slow, and unduly expensive.
I think these projects are encouraging signs of native entrepreneurship focused on solving a worrisome problem: the U.S. is nearing a dangerous stalemate in its communications infrastructure. We have the technology and scale necessary to replace much of our legacy wireline phone networks with native IP broadband. Right now, ultra high-speed broadband is technically possible by running fiber to the home. Indeed, Verizon’s FiOS network currently delivers 300 Mbps broadband and is available to some 15 million homes.
Continue reading →
On CNET today, I’ve posted a long critique of the recent report by the President’s Council of Advisors on Science and Technology (PCAST) urging the White House to reverse course on a two-year-old order to free up more spectrum for mobile users.
In 2010, soon after the FCC’s National Broadband Plan raised alarms about the need for more spectrum to accommodate an explosion in mobile broadband use, President Obama issued a Memorandum ordering federal agencies to free up as much as 500 MHz of radio frequencies currently assigned to them.

After a great deal of dawdling, the National Telecommunications and Information Administration, which oversees spectrum assignments within the federal government, issued a report earlier this year that seemed to offer progress: 95 MHz of very attractive spectrum could in fact be cleared within the ten years called for by the White House.
But reading between the lines, it was clear that the 20 agencies involved in the plan had no serious intention of cooperating. Their cost estimates for relocation (which were simply reported by NTIA without any indication of how they’d been arrived at, or even whether NTIA had been given any details) appeared to be set at a level that would make any move economically impossible. Continue reading →
I’ve argued (here and here, for instance) against worrying too much about the monopolization of Internet access. Broadband is pretty clearly an industry with increasing returns to scale, and when returns to scale are strong enough, the result is natural monopoly. There are no clear welfare gains from regulatory solutions to natural monopoly problems generally, and broadband in particular is a case where many of the problems associated with monopolization are ameliorated by price discrimination.
Nevertheless, I accept that most people are not persuaded by this logic. Let me try a different tack, explaining what I would expect to see if profit-centered monopolists were really as bad for consumers as their critics claim.
The answer can be summed up in one word: mutuals. Mutual companies are not especially common in today’s economy, but they are worth pondering at some length. Mutuals are firms in which customers, by virtue of their ongoing patronage of the firm, are also its owners. A mutual company generally has no other shareholders to please, and it does not typically distribute dividends. Instead, if it makes a profit, it distributes that profit to its customers in the form of lower future prices.
Continue reading →
It’s come to this. After more than a decade of policies aimed at reducing the telephone companies’ share of the landline broadband market, the feds now want to thwart a key wireless deal on the remote chance it might result in a major phone company exiting the wireline market completely.
The Department of Justice is holding up the $3.9 billion deal that would transfer a block of unused wireless spectrum from a consortium of four cable companies to Verizon Wireless, an arm of Verizon, the country’s largest phone company.
The rationale, reports The Washington Post’s Cecilia Kang, is that DoJ is concerned the deal, which would also involve a wireless co-marketing agreement with Comcast, Cox, Time Warner, and Bright House Networks, the companies that jointly own the spectrum in question, would lead Verizon to neglect its FiOS fiber-to-the-home service.
There’s no evidence that this might happen, but the fact that DoJ put it on the table demonstrates the problems inherent in government attempts to regulate competition.
Continue reading →
I suppose there’s something to be said for the fact that two days into DirecTV’s shutdown of 17 Viacom programming channels (26 if you count the HD feeds) no congressman, senator or FCC chairman has come forth demanding that DirecTV reinstate them to protect consumers’ “right” to watch SpongeBob SquarePants.
Yes, it’s another one of those dust-ups between studios and cable/satellite companies over the cost of carrying programming. Two weeks ago, DirecTV competitor Dish Network dropped AMC, IFC and WE TV. As with AMC and Dish, Viacom wants a bigger payment—in this case 30 percent more—from DirecTV to carry its channel line-up, which includes Comedy Central, MTV and Nickelodeon. DirecTV balked, wanting to keep its own prices down. Hence, as of yesterday, those channels are not available pending a resolution.
As I have said in the past, Washington should let both these disputes play out. For starters, despite some consumer complaints, demographics might be in DirecTV’s favor. True, Viacom has some popular channels with popular shows. But they all skew to younger age groups that are turning to their tablets and smartphones for viewing entertainment. At the same time, satellite TV service likely skews toward homeowners, a slightly older demographic. It could be that DirecTV’s research and the math show that dropping Viacom will not cost it too many subscribers.
Continue reading →
One of the most egregious examples of special interest pleading before the Federal Communications Commission and now possibly before Congress involves the pricing of “special access,” a private line service that high-volume customers purchase from telecommunications providers such as AT&T and Verizon. Sprint, for example, purchases these services to connect its cell towers.
Sprint has been seeking government-mandated discounts in the prices charged by AT&T, Verizon, and other incumbent local exchange carriers for years. Although Sprint has failed to make a remotely plausible case for re-regulation, fuzzy-headed policymakers are considering using taxpayers’ money in an attempt to gather potentially useless data on Sprint’s behalf.
Sprint is trying to undo a regulatory policy adopted by the FCC during the Clinton era. The commission ordered pricing flexibility for special access in 1999 as a result of massive investment in fiber optic networks. Price caps, the commission explained, were designed to act as a “transitional regulatory scheme until actual competition makes price cap regulation unnecessary.” The commission rejected proposals to grant pricing flexibility in geographic areas smaller than Metropolitan Statistical Areas, noting that
because regulation is not an exact science, we cannot time the grant of regulatory relief to coincide precisely with the advent of competitive alternatives for access to each individual end user. We conclude that the costs of delaying regulatory relief outweigh the potential costs of granting it before [interexchange carriers] have a competitive alternative for each and every end user. The Commission has determined on several occasions that retaining regulations longer than necessary is contrary to the public interest. Almost 20 years ago, the Commission determined that regulation imposes costs on common carriers and the public, and that a regulation should be eliminated when its costs outweigh its benefits. (footnotes omitted.)
Continue reading →
California is recognized as a world leader in Internet technologies and services. It is home to companies like Apple, Google, and Cisco, whose innovations are driving economic recovery in California and Internet innovation around the world. The success of these and many other California technology companies has been driven by the decentralized and largely unregulated Internet, which provides them with the ability to market their products and services globally.
California’s success is also the source of its biggest threat. The economic growth, individual empowerment, and entrepreneurialism driven by Internet innovation in California have made it the envy of the world. As a result, local and international governments are increasingly proposing new regulations that would favor their own companies – and cripple California’s economy. A current example is the upcoming World Conference on International Telecommunications, which will consider proposals to impose price regulations on the Internet through an agency of the United Nations. Continue reading →
On Wednesday morning, the U.S. House of Representatives Energy & Commerce Subcommittee on Communications and Technology will hold a hearing on “The Future of Video.”
As we Tech Liberators have long argued on these pages (1, 2, 3, 4, 5, 6, 7), government’s hands have been all over the video market since its inception, primarily in the form of the FCC’s rulemaking and enforcement enabled by the Communications Act. While the 1996 Telecommunications Act scrapped some obsolete video regulations, volumes of outdated rules remain law, and the FCC wields vast and largely unchecked authority to regulate video providers of all shapes and sizes. Wednesday’s hearing offers members an excellent opportunity to question each and every law that enables governmental intervention in—and restricts liberty in—the television market.
It’s high time for Congress to free up America’s video marketplace and unleash the forces of innovation. Internet entrepreneurs should be free to experiment with novel approaches to creating, distributing, and monetizing video content without fear of FCC regulatory intervention. At the same time, established media businesses—including cable operators, satellite providers, telecom companies, broadcast networks and affiliates, and studios—should compete on a level playing field, free from both federal mandates and special regulatory treatment.
The Committee should closely examine the Communications and Copyright Acts, and rewrite or repeal outright provisions of law that inhibit a free video marketplace. Adam Thierer has chronicled many such laws. The Committee should, among other reforms, consider:
Here’s to the success of Sen. Jim DeMint, Rep. Steve Scalise, and other members of Congress who are working to achieve real reform and ensure that the future of video is bounded only by the dreams of entrepreneurs.
Thanks to TLFers Jerry Brito and Eli Dourado, and the anonymous individual who leaked a key planning document for the International Telecommunication Union’s World Conference on International Telecommunications (WCIT) on Jerry and Eli’s inspired WCITLeaks.org site, we now have a clearer view of what a handful of regimes hope to accomplish at WCIT, scheduled for December in Dubai, U.A.E.
Although there is some danger of oversimplification, essentially a number of member states in the ITU, an arm of the United Nations, are pushing for an international treaty that would give their governments a much more powerful role in the architecture of the Internet and the economics of cross-border interconnection. Dispensing with the fancy words, it represents a desperate, last-ditch effort by several authoritarian nations to regain control of their national telecommunications infrastructure and operations.
A little history may help. Until the 1990s, the U.S. was the only country where telephone companies were owned by private investors. Even then, from AT&T and GTE on down, they were government-sanctioned monopolies. Just about everywhere else, including western democracies such as the U.K., France, and Germany, the phone company was a state-owned monopoly. Its president generally reported to the Minister of Telecommunications.
Since most phone companies were large state agencies, the ITU, as a UN organization, could wield a lot of clout in terms of telecom standards, policy, and governance–and indeed it did for much of the last half of the 20th century. That changed, for nations as much as for the ITU, with the advent of privatization and the introduction of wireless technology. In a policy change that connects directly to the issues at stake here, just about every country in the world embarked on full or partial telecom privatization and, moreover, allowed at least one private company to build wireless telecom infrastructure. Because ITU membership was reserved for governments, not enterprises, the ITU’s political influence as a global standards and policy agency has since diminished greatly.

Add to that the concurrent emergence of the Internet, which changed the fundamental architecture and cost of public communications from a capital-intensive hierarchical mechanism to inexpensive peer-to-peer connections, and the stage was set for today’s environment, in which every smartphone owner is a reporter and videographer. Telecommunications, once part of the commanding heights of government control, was decentralized down to street level.
Continue reading →