Telecom & Cable Regulation

Just when you thought the FCC’s investigation of the wireless industry couldn’t get any stranger, TechCrunch reports that the Commission has sent letters to AT&T, Apple, and Google inquiring about Apple’s recent decision to reject the Google Voice app from the iPhone App Store (as Berin discussed yesterday).

It’s been over two years since the original iPhone was launched, but it seems the FCC still doesn’t get it: the iPhone is very clearly a closed platform — a prototypical walled garden — and Apple has the final say on what applications users can install. When you buy an iPhone, you’re not simply buying a piece of hardware, but actually a package deal that includes software, hardware, and a wireless contract. Is this anti-consumer? 26 million consumers don’t think so. The iPhone 3GS, the latest version of the phone, is selling so fast that Apple’s CFO says they can’t make enough to meet demand!

Of course, the iPhone model isn’t for everyone. I, for one, don’t own one because I’m an obsessive tinkerer and prefer a phone that’s as open as possible. But not everyone shares my preferences. As mentioned above, over 26 million iPhones have been sold since June 2007, so openness clearly isn’t make-or-break for a lot of consumers. Who knows, maybe some people actually trust Apple and like the comfort of knowing that every app they can get comes with a seal of approval from Cupertino.

The FCC’s letter to Apple demands an explanation for why Google Voice was rejected. If Apple’s explanation doesn’t satisfy the FCC’s criteria — which, by the way, are entirely unclear — then what? Will the FCC force Apple to accept Google Voice? Say what you will about Apple’s app store track record, but the prospect of federal regulators having the final word on which applications smartphone owners can install hardly seems pro-consumer. The FCC can’t even figure out how to run its own website!

In some ways, the iPhone has perhaps been too successful for its own good. It’s so popular that many consumers seem to no longer view it as just another product but instead as an item to which they are entitled. Thus, bureaucrats and Congresscritters in search of political points are making a big fuss over the fact that the iPhone isn’t everything to everyone. Why can’t it be wide open? Why isn’t it available on every carrier nationwide? Why is it so expensive to purchase without a service contract?

Continue reading →

The FCC has less than seven months to complete and submit to Congress a “National Broadband Plan For Our Future.” Last week, CEI filed reply comments with the FCC on the broadband plan. One of our arguments was that network neutrality rules amount to price controls. ArsTechnica quoted our comments in a recent article and expressed skepticism toward our contention about neutrality mandates:

“In particular, [neutrality rules] require ISPs to offer content providers a price of zero, and to differentiate prices to consumers only in certain limited ways,” says CEI’s filing. “The disastrous consequences of price controls are all too familiar. And while neutrality may currently align with industry best practices, that fact limits the possible benefits just as much as the possible harm.”

Content providers pay for bandwidth on the competitive market, so it’s not clear what the line about “a price of zero” refers to (that money is passed along to other ISPs along the network path through the mechanism of “peering and transit”). But it is clear what groups like CEI want from a broadband plan: nothing at all.

Ars is correct in pointing out that pricing based on usage is already commonplace in the form of the well-established system of peering arrangements and transit pricing. But pricing needn’t be based solely on usage; it could also be based on priority levels or quality of service tiers. Such pricing schemes remain in a nascent stage, yet many of them would be prohibited or restricted by neutrality rules. This is because neutrality rules by definition set the price of many kinds of data prioritization at zero. Thus, even if an effective mechanism for differentiating between data streams at the network level were to gain traction, it would be subject to regulatory burdens if neutrality were to be enshrined into law.

Continue reading →

Howard Stern swore off free broadcast radio in 2004 in part because of federally mandated decency rules. The self-anointed “king of all media” may have stepped off the throne in doing so. Them’s the breaks in the competitive media marketplace, contorted as it is by government speech controls.

Some would argue that a new king of all media is seeking the mantle of power now that the Obama administration is ensconced and friendly majorities hold the House and Senate. The new pretender is the federal government.

And some would argue that the Free Press “Changing Media Summit” held yesterday here in Washington laid the groundwork for a new federal takeover of media and communications.

That person is not me. But I am concerned by the enthusiasm of many groups in Washington to “improve” media (by their reckoning) with government intervention.

Free Press issued a report yesterday entitled Dismantling Digital Deregulation. Even the title is a lot to swallow – Have communications and media been deregulated in any meaningful sense? (The title itself prioritizes alliteration over logic – evidence of what may come within.)

Opening the conference, Josh Silver, executive director of Free Press, harkened back to Thomas Jefferson – well and good – but public subsidies for printers and a government-run postal system are the models for his hopes for U.S. government policy to come.

It’s helpful to note what policies found their way into Jefferson’s constitution as absolutes and what were merely permissive. The absolute is found in Amendment I: “Congress shall make no law . . . abridging the freedom of speech, or of the press . . . .”

Among the permissive is the Article I power “to establish Post Offices and post Roads.” There’s no mandate to do it and the scope and extent of any law is subject to Congress’ discretion, just like the power to create patents and copyrights which immediately follows.

I won’t label Free Press and all their efforts a collectivist plot and dismiss it as such – there are some issues on which we probably have common cause – but a crisper expression of “dismantling deregulation” is “re-regulation.”

It’s a very friendly environment for a government takeover of modern-day printing presses: Internet service providers, cable companies, phone companies, broadcasters, and so on.

Over at the Verizon Policy Blog, Link Hoewing has a sharp piece up entitled, “Of Business Models and Innovation.” He makes a point that I have often stressed in my debates with Zittrain and Lessig, namely, that the whole “open vs. closed” debate is typically greatly overstated or misunderstood.   Hoewing correctly argues that:

The point is not that open or managed models are always better or worse.  The point is that there is no one “right” model for promoting innovation.  There are examples of managed and open business models that have been both good for innovation and bad for it. There are also examples of managed and open models that have both succeeded and failed.  The point is in a competitive market to let companies develop business models they believe will serve consumers best and see how things play out.

Exactly right.  Moreover, the really important point here is that there exists a diverse spectrum of innovative digital alternatives from which to choose. Along the “open vs. closed” spectrum, the range of digital technologies and business models continues to grow and grow in both directions.  Do you want wide-open, tinker-friendly devices, sites, or software? You got it. Do you want a more closed, simple, and safe online experience?  You can have that, too.  And there are plenty of choices in between.

This is called progress!

Better not be offering incentives!

As I previously reported, the DC Circuit recently upheld a decision by the FCC to forbid customer retention practices used by Verizon to incentivize its customers to stay with the carrier rather than leaving for a VOIP provider. In the earlier post, I analyzed the bad economics of the FCC’s ban. In this post, as promised, I go into greater detail on the court’s decision affirming the FCC.

The latest issue of the Center for Internet and Society’s publication, Packets, has arrived and with it my summary of the case. The Packets piece provides a more neutral (but detailed) summary of the DC Circuit’s decision, without much analysis.

The big question before the court was whether what the FCC did was really pursuant to the Telecommunications Act, which forbids a telco “that receives or obtains proprietary information from another carrier for purposes of providing any telecommunications service” from using the information for a marketing purpose. If not, then essentially the FCC just went AWOL; instead of enforcing the law, as it is supposed to, it simply made its own law.

Indeed, that is exactly what happened here. The natural reading of the language, as the court admits, is contrary to the FCC’s ruling. To use an example employed by the court, when one says “Joe received information from Mary for purposes of drafting a brief,” the court reasoned, “it is overwhelmingly likely that the speaker expects Joe to do the drafting.” But Verizon is getting the information from other telcos not in order to provide their customers with phone service, but to cut off service. It is the competitors who are using the information to provide phone service. Mary is drafting the brief, so the statute doesn’t apply! The court never fully explains why it refuses to limit the statutory language to its natural meaning – saying only that one could grammatically read it the other way.

Continue reading →

This is the third in a series of articles about Internet technologies. The first article was about web cookies. The second article explained the network neutrality debate. This article explains network management systems. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed.

There has been lots of talk on blogs recently about Cox Communications’ network management trial. Some see this as another nail in Network Neutrality’s coffin, while many users are just hoping for anything that will make their network connection faster.

As I explained previously, the Network Neutrality debate is best understood as a debate about how to best manage traffic on the Internet.

Those who advocate for network neutrality are actually advocating for legislation that would set strict rules for how ISPs manage traffic. They essentially want to re-classify ISPs as common carriers. Those on the other side of the debate believe that the government is unable to set rules for something that changes as rapidly as the Internet. They want ISPs to have complete freedom to experiment with different business models and believe that anything that approaches real discrimination will be swiftly dealt with by market forces.

But what both sides seem to ignore is that traffic must be managed. Even if every connection and router on the Internet is built to carry ten times the expected capacity, there will be occasional outages. It is foolish to believe that routers will never become overburdened–they already do. Current routers already have a system for prioritizing packets when they get overburdened; they just drop all packets received after their buffers are full. This system is fair, but it’s not optimized.

The network neutrality debate needs to shift to a debate on what should be prioritized and how. One way packets can be prioritized is by the type of data they’re carrying. Applications that require low latency would be prioritized and those that don’t require low latency would not be prioritized.
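The contrast between today’s tail-drop behavior and type-based prioritization can be sketched in a few lines of Python. This is a hypothetical illustration only: the traffic classes, their priority ordering, and the class names are assumptions for the sake of the example, not any actual router’s or ISP’s scheme.

```python
import heapq

class TailDropQueue:
    """Today's common behavior: accept packets until the buffer is full,
    then silently drop everything that arrives afterward."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            return False  # buffer full: the packet is dropped
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # First-in, first-out: fair, but blind to what the packet carries
        return self.buffer.pop(0) if self.buffer else None


class PriorityQueue:
    """An alternative: forward latency-sensitive traffic (e.g. voice)
    ahead of bulk transfers. The classes below are illustrative."""

    PRIORITY = {"voip": 0, "video": 1, "web": 2, "bulk": 3}

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []
        self.counter = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, packet):
        if len(self.heap) >= self.capacity:
            return False
        priority = self.PRIORITY[packet["type"]]
        heapq.heappush(self.heap, (priority, self.counter, packet))
        self.counter += 1
        return True

    def dequeue(self):
        return heapq.heappop(self.heap)[2] if self.heap else None


# A congested moment: a bulk packet arrives just before a voice packet.
q = PriorityQueue(capacity=8)
q.enqueue({"type": "bulk", "id": 1})
q.enqueue({"type": "voip", "id": 2})
first_out = q.dequeue()  # the low-latency voice packet jumps the queue
```

Under tail drop, the voice packet would simply wait its turn behind the bulk transfer; under the prioritized scheme it is forwarded first, which is the kind of differentiation that strict neutrality rules would constrain.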

Continue reading →

During my summer internship at CEI, a couple of us interns discussed the book Cato’s Robert Levy published last May, The Dirty Dozen: How Twelve Supreme Court Cases Radically Expanded Government and Eroded Freedom. We looked at Levy’s list of the worst decisions and sent each other lists of our own. Now that I’m taking ConLaw, I feel as though the time has come to post my lists of the twelve worst and the twelve best Supreme Court decisions of all time. It is by no means an exhaustive list. My inclusion of different cases than Levy does not indicate that I disagree with his assessment that those decisions are terrible – just maybe not as bad as the ones I select.

The Worst:

  1. The Slaughter-House Cases (1873). The very worst decision ever made by the US Supreme Court. Eviscerated the 14th Amendment only five years after its adoption. It is best known for reading the Privileges or Immunities Clause, which was supposed to be (and could have been) a vehicle for both incorporation and unenumerated rights, out of the Constitution. But it also wrote out the Due Process Clause and the Equal Protection Clause, though those two clauses eventually crawled back into existence, to a degree.
  2. Katzenbach v. McClung (1964). It was tough to decide which to include of the various cases reading the Commerce Clause expansively enough to permit Congress to pass any law it desires, thus destroying the premise of the federal government as one of defined and limited powers. But McClung seems to be the most expansive in both its result and its holding.

Continue reading →

Over the summer, I blogged about an FCC decision to ban Verizon’s practice of offering incentives to departing customers to get them to stay. Yesterday, the DC Circuit upheld that bad decision. When a customer of Verizon’s phone service decides to leave for a VOIP company, Verizon gets a notice that the number is being ported. When Verizon got notified that the customer was trying to leave, the company would offer her incentives such as “discounts and American Express reward cards” to stay.

This worked well for the customers, who got discounts if they stayed. It also worked well for Verizon, for whom it costs much more to find a replacement customer than to keep the current one. And it was really the best way to do so. If Verizon had given the incentives any time a customer threatened to leave, but didn’t start the process of doing so, then customers would just bluff to get the incentives. Verizon instead looked for a costly signal from the customer. And if Verizon had waited until after the port was already completed, it would cost the customer, Verizon, and the new carrier a lot of effort to switch back.

But the FCC banned Verizon’s efforts and yesterday the DC Circuit affirmed the Commission. I will follow with more details, once my summary of the case comes out in the March issue of Packets, the Center for Internet and Society’s publication summarizing important new internet cases. But for now, I should just note that the court hinted that the FCC’s reading of the statute it relied upon was a bit counterintuitive, but was compelled by Chevron v. NRDC to give the administrative agency great deference in its bad reading of the law. The court even noted that Verizon offered uncontroverted evidence “that continuation of its marketing program would generate $75–79 million in benefits for telephone customers over a five-year period.” Further, the court rejected Verizon’s First Amendment challenge, because the lower standard for commercial speech compelled the conclusion that Verizon’s sound marketing efforts didn’t deserve protection.

These precedents need to be revoked, or the growing administrative state will keep swallowing up more and more of our most important freedoms while preventing sensible and beneficial policies.

Wide of the Mark

by on February 3, 2009 · 9 comments

Wall Street Journal columnist Gordon Crovitz writes that

In Japan, wireless technology works so well that teenagers draft novels on their cellphones. People in Hong Kong take it for granted that they can check their BlackBerrys from underground in the city’s subway cars. Even in France, consumers have more choices for broadband service than in the U.S.

The Internet may have been developed in the U.S., but the country now ranks 15th in the world for broadband penetration. For those who do have access to broadband, the average speed is a crawl, moving bits at a speed roughly one-tenth that of top-ranked Japan. This means a movie that can be downloaded in a couple of seconds in Japan takes half an hour in the U.S. The BMW 7 series comes equipped with Internet access in Germany, but not in the U.S.

Then he adds that the economic stimulus package before Congress will not fix the real reason the U.S. now ranks 15th in the world for broadband penetration because

nothing in the legislation would address the key reason that the U.S. lags so far behind other countries. This is that there is an effective broadband duopoly in the U.S., with most communities able to choose only between one cable company and one telecom carrier. It’s this lack of competition, blessed by national, state and local politicians, that keeps prices up and services down.

A couple of observations come to mind.

One is that the U.S. has the most successful wireless market in the world. Cellphone calls are significantly less expensive on a per minute basis in the U.S. (6 cents per minute) than in France (17 cents) or Japan (26 cents), according to the FCC’s latest analysis of wireless competition (Table 16). U.S. mobile subscribers continue to lead the world in average voice usage by a wide margin.

The explanation for why fourth generation wireless technology is further along in Japan than it is here would have to include the fact that the Japanese government years ago decided to make leadership in 4G wireless technology a national priority and invested heavily with taxpayer money (see, e.g., this).

This is a form of industrial policy, which involves picking winners and losers, and it is how the Japanese do things. Back in the late 1980s or early 1990s the Japanese government decided Japan needed to be the world-leader in high-definition television, which prompted some in our own government to argue we couldn’t afford to let that happen, so we needed a public-private partnership and a national high-definition television transition plan (which some now want to postpone).

The good news is that AT&T, Clearwire and Verizon Wireless have all successfully acquired spectrum for the rollout of their own 4G services over the next couple years without government subsidies and oversight.

Continue reading →

The next several days feature a variety of upcoming events, both on broadband stimulus legislation, and on some of the broader issues associated with the Internet and its architecture.

On Friday, January 30, the Technology Policy Institute features a debate, “Broadband, Economic Growth, and the Financial Crisis: Informing the Stimulus Package,”  from 12 noon – 2 p.m., at the Rayburn House Office Building, Room B369.

Moderated by my friend Scott Wallsten, senior fellow and vice president for research at the Technology Policy Institute, the event features James Assey, Executive Vice President for the National Cable & Telecommunications Association; Robert Crandall, Senior Fellow in Economic Studies, The Brookings Institution; Chris King, Principal/Senior Telecom Services Analyst, Stifel Nicolaus Telecom Equity Research; and Shane Greenstein, Elinor and Wendell Hobbs Professor of Management and Strategy at the Kellogg School of Management, Northwestern University.

The language promoting the event notes, “How best to include broadband in an economic stimulus package depends, in part, on understanding two critical issues: how broadband affects economic growth, and how the credit crisis has affected broadband investment.  In particular, one might favor aggressive government intervention if broadband stimulates growth and investment is now lagging.  Alternatively, money might be better spent elsewhere if the effects on growth are smaller than commonly believed or private investment is continuing despite the crisis.”

And then, on Tuesday, MIT Professor David Clark, one of the pioneers of the Internet and a distinguished scientist whose work on “end-to-end” connectivity is widely cited as the architectural blueprint of the Internet, looks to the future. Focusing on the dynamics of advanced communications – the role of social networking, problems of security and broadband access, and the industrial implications of network virtualization and overlays – Clark will tackle the new forces shifting regulation and market structure.

David Clark is Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Laboratory. In the forefront of Internet development since the early 1970s, Dr. Clark was Chief Protocol Architect in 1981-1989, and then chaired the Internet Activities Board. A past chairman of the Computer Science and Telecommunications Board of the National Academies, Dr. Clark is co-director of the MIT Communications Futures Program.

I’m no longer affiliated with the Information Economy Project at George Mason University, but I urge all interested in the architecture of the Internet to register and attend. More information about the lecture, and about the Information Economy Project, is available at http://iep.gmu.edu/davidclark.

It will take place at the George Mason University School of Law, Room 120, 3301 Fairfax Drive, Arlington, VA 22201 (Orange Line: Virginia Square-GMU Metro), on Tuesday, February 3, from 4 – 5:30 p.m., with a reception to follow. The event is free and open to the public, but reservations are requested. To reserve a spot, please e-mail iep.gmu@gmail.com.