Broadband & Neutrality Regulation

JZ

Well, I didn’t get a chance to say quite enough for this to qualify as much of a “debate,” but I was brought in roughly a half hour into this WBUR (Boston NPR affiliate) radio show featuring Jonathan Zittrain, author of the recently released The Future of the Internet–And How to Stop It. Jonathan was kind enough to suggest to the producers that I might make a good respondent to push back a bit against the thesis set forth in his new book.

Jonathan starts about 6 minutes into the show and they bring me in around 29 minutes in. Although I only got about 10 minutes to push back, I thought the show’s host Tom Ashbrook did an excellent job raising many of the same questions I do in my 3-part review (Part 1, 2, 3) of Jonathan’s provocative book.

In the show, I stress the same basic points I made in those reviews: (1) he seems to be overstating things quite a bit when he says the old “generative” Internet is “dying”; and in doing so, (2) he creates a false choice of possible futures from which we must choose. What I mean by false choice is that Jonathan doesn’t seem to believe a hybrid future is possible or desirable. I see no reason why we can’t have the best of both worlds: a world with plenty of tethered appliances, but also plenty of generativity and openness.

If you’re interested, listen in.

Broadband Reports ran an opinion piece by Karl last week discussing rumors that Comcast will soon adopt a 250GB-per-month cap, with overage fees for excessive consumption.

As the piece points out, implementing overage fees runs the risk of giving FiOS (and, to a lesser extent, U-Verse) an even bigger edge over cable broadband. AT&T and Verizon, because of their last-mile network architectures, are less susceptible to congestion caused by heavy users than Comcast, with its shared cable network. As a result, AT&T and Verizon have gotten by without terminating heavy users or even charging them extra.
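
To see why the shared architecture matters, here’s a back-of-the-envelope sketch; the capacity and line-speed numbers are illustrative assumptions, not any carrier’s actual specs:

```python
# Back-of-the-envelope sketch: shared cable node vs. dedicated last-mile lines.
# All numbers are illustrative assumptions, not actual specs for any network.

NODE_CAPACITY_MBPS = 160   # hypothetical downstream capacity of one shared cable node
DEDICATED_LINE_MBPS = 15   # hypothetical dedicated DSL/fiber line per home

def shared_per_user(saturating_users: int) -> float:
    """Bandwidth left for each user when heavy users saturate the shared node."""
    return NODE_CAPACITY_MBPS / max(saturating_users, 1)

def dedicated_per_user(saturating_users: int) -> float:
    """A dedicated line's speed doesn't depend on what the neighbors are doing."""
    return DEDICATED_LINE_MBPS

for heavy in (5, 20, 80):
    print(f"{heavy:2d} heavy users -> shared cable: {shared_per_user(heavy):5.1f} Mbps each; "
          f"dedicated line: {dedicated_per_user(heavy):.1f} Mbps each")
```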

Yet right after Karl finishes explaining how overage fees will change the competitive landscape, he starts ranting about the prospect of “investor pressure constantly forcing caps downward and overage fees upward.”

Competitive pressures make this scenario a remote possibility, especially as content portals serving massive files like Apple TV and Xbox Marketplace gain mainstream appeal. If Comcast wants to deflect criticism from other ISPs over bandwidth limits, any cap must be high enough to ensure very few customers even approach it. Arguably, 250GB a month is enough to satiate even power users, at least for a couple more years.
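
Some quick arithmetic puts 250GB in perspective; the per-activity bitrates below are my rough guesses at 2008-era services, not published figures:

```python
# Quick arithmetic on a 250GB monthly cap. The per-activity bitrates are
# rough assumptions about 2008-era services, not published figures.

CAP_GB = 250
MBPS_TO_GB_PER_HOUR = 3600 / 8 / 1024  # megabits/sec -> gigabytes/hour

activities = {
    "standard-def streaming (~1.5 Mbps)": 1.5,
    "HD video download (~5 Mbps)": 5.0,
}

for name, mbps in activities.items():
    gb_per_hour = mbps * MBPS_TO_GB_PER_HOUR
    hours = CAP_GB / gb_per_hour
    print(f"{name}: ~{gb_per_hour:.2f} GB/hour -> "
          f"~{hours:.0f} hours/month (~{hours / 30:.1f} hours/day) before hitting the cap")
```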

ISPs are competing fiercely to attract subscribers, so providers regularly make hay out of trivial product differences such as the “ugly cabinets” that AT&T sometimes installs when upgrading a neighborhood’s DSL speeds. Imagine the ads Verizon will run if Comcast starts charging customers for heavy use—“With Comcast, you never know when you’ll be hit with an enormous monthly bill if your kids go on a YouTube frenzy or your computer is overtaken by hackers. Here in FiOS land, rest assured there are no extra fees, no matter how much you download.” It’s not hard to see this message resonating with customers, especially those living in households with multiple Web-savvy residents.


The National Cable & Telecommunications Association blog ran a series of posts back in February about the OECD study. There seem to be three basic criticisms. First, businesses in the US have a higher proportion of “special access” lines than the other countries ranked, and these are not counted in the statistics, while businesses with ordinary DSL lines are counted. Second, the OECD statistics measure “connections per 100 inhabitants” rather than the proportion of households with an Internet connection. The result is to penalize the US, which has a larger-than-average household size (all of whom can share a single Internet connection), while giving an edge to countries with smaller household sizes. Finally, the report relies on advertised speeds and prices, which the NCTA suggests exaggerates Japan’s lead relative to a metric based on the speeds actually available in that country. Obviously, the NCTA has an agenda to promote, so it’s worth taking the criticisms with a grain of salt, but they’re interesting in any event. Thanks to reader Wyatt Ditzler for the link.
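
The household-size point is easy to illustrate with a toy calculation; the household sizes and penetration rate below are round numbers assumed purely for illustration:

```python
# Toy illustration of the NCTA's second criticism: "connections per 100
# inhabitants" penalizes countries with larger households even when the same
# share of households is connected. All numbers here are assumptions chosen
# purely for illustration.

def connections_per_100_inhabitants(avg_household_size: float,
                                    household_penetration: float) -> float:
    """One subscription per connected household, shared by every member."""
    households_per_100_people = 100 / avg_household_size
    return households_per_100_people * household_penetration

PENETRATION = 0.60  # assume 60% of households subscribe in both countries

for label, size in [("larger households (US-like, 2.6 people)", 2.6),
                    ("smaller households (EU-like, 2.1 people)", 2.1)]:
    score = connections_per_100_inhabitants(size, PENETRATION)
    print(f"{label}: {score:.1f} connections per 100 inhabitants")
```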

OECD vs. SpeedTest

May 5, 2008

Nate Anderson points to a new report on broadband around the world that I’m looking forward to reading. I have to say I’m skeptical of this sort of thing, though:

Critics of the current US approach to spurring broadband deployment and adoption point out that the country has been falling on most broadband metrics throughout the decade. One of the most reliable, that issued by the OECD, shows the US falling from 4th place in 2001 to 15th place in 2007. While this ranking in particular has come under criticism from staunchly pro-market groups, the ITIF’s analysis shows that these numbers are the most accurate we have. According to an ITIF analysis of various OECD surveys, the US is in 15th place worldwide and it lags numerous other countries in price, speed, and availability—a trifecta of lost opportunities. With an average broadband speed of 4.9Mbps, the US is being Chariots of Fire-d by South Korea (49.5Mbps), Japan (63.6Mbps), Finland (21.7Mbps), Sweden (16.8Mbps), and France (17.6Mbps), among others. Not only that, but the price paid per megabit in the US ($2.83) is substantially higher than in those countries, all of which come in at less than $0.50 per megabit.

Now, this site is a tool for measuring the speed of your broadband connection, and it purports to have data from around the world. I have no idea how reliable their methodology is generally, or how good their testing equipment is around the world, but I’ve used it in several different places in the US and it at least seems reliable around here. According to their measurements, the US has an average broadband speed of 5.3 Mbps, roughly what the OECD study said. But the numbers for the other countries cited are wildly different: Japan is 13 Mbps, Sweden is 8.7 Mbps, South Korea is 6.1 Mbps, and France is 5.5 Mbps. If these numbers are right, the US is behind Sweden and Japan, and slightly behind South Korea and France, but we’re not nearly as far behind the curve as the OECD reports would suggest.
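
For what it’s worth, here are the two sets of figures side by side, with each country’s speed expressed as a multiple of the US number under each source:

```python
# The two sets of average-speed figures quoted above, side by side, with each
# country's speed expressed as a multiple of the US number under each source.

oecd = {"US": 4.9, "Japan": 63.6, "South Korea": 49.5,
        "Sweden": 16.8, "France": 17.6}          # Mbps, per the OECD/ITIF quote
speedtest = {"US": 5.3, "Japan": 13.0, "South Korea": 6.1,
             "Sweden": 8.7, "France": 5.5}       # Mbps, per SpeedTest

print(f"{'country':<12}{'OECD':>8}{'SpeedTest':>11}{'OECD gap':>10}{'ST gap':>8}")
for country in oecd:
    print(f"{country:<12}{oecd[country]:>8.1f}{speedtest[country]:>11.1f}"
          f"{oecd[country] / oecd['US']:>9.1f}x{speedtest[country] / speedtest['US']:>7.1f}x")
```

By the OECD’s figures Japan is some 13 times faster than the US; by SpeedTest’s, the gap is closer to 2.5x.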

And then there’s this:

The ITIF warns against simply implementing the policies that have worked for other countries, however, and it notes that a good percentage of the difference can be chalked up to non-policy factors like density. For instance, more than half of all South Koreans lived in apartment buildings that are much easier to wire with fiber connections than are the sprawling American suburbs.

Now, I haven’t examined SpeedTest’s methodology, so they might have all sorts of problems that make their results suspect. But it’s at least one data point suggesting that the OECD data might be flawed. And I think the very fact that there seems to be only one widely cited ranking out there ought to make us somewhat suspicious of its findings. Scott Wallsten had bad things to say about the OECD numbers on our podcast. Is there other work out there analyzing the quality of the OECD rankings?

I was planning to leave the Lessig/Sydnor thing alone because I feel like we’ve beat it to death, but Tom’s really pissing me off. For those who haven’t been following the now-voluminous comments (and I don’t blame you), Mike Masnick recently wrote the following:

[Lessig] wasn’t praising communism in the slightest — but pointing out how regulatory regimes in the US can impact someone’s day-to-day life quite strongly, while for certain aspects of life in Vietnam those similar regulations do not impact them. That doesn’t mean communism is good or that life is great in Vietnam. In fact, Lessig pointed out that neither point is true. But he was pointing out what the factual situation was concerning certain aspects of day-to-day life. You don’t dispute those points — you can’t, because they’re true. You merely take those statements and pretend they’re an endorsement of communism. It’s not even remotely a defense of communism. It’s showing the problems with US regulations, something I would think you would endorse.

And Tom responds:

I must distance myself from Mike’s claim that the admittedly deregulatory effect of terrorizing civilians “is something I would think you would endorse.”

And I had to pick my jaw up off the floor.

In case English isn’t your first language, let me dissect this a little bit. Scholars have a basic obligation to represent their opponents’ words accurately. If you put a phrase in quotes, you have an obligation to make the quoted phrase a faithful representation of what the person being quoted actually said. That obligation counts double if you precede the quote with a phrase like “Mike’s claim” that unambiguously attributes the entire sentence to the person you’re criticizing. And in particular, if you quote half of a sentence, say the verb and direct object, you have an obligation not to change the subject to something totally different. If I write “Ice cream is great,” it would be dishonest for you to write “I must distance myself from Tim’s claim that the Holocaust ‘is great.'” Yes, I literally wrote the phrase “is great,” but the subject of that phrase wasn’t “the Holocaust,” and implying that it was is just as dishonest as writing “Tim claimed ‘the Holocaust is great.'”

What Tom did here is identical. In Mike’s comment, the subject of the phrase “something I would think you would endorse” is “showing the problems with US regulations.” Tom’s response plainly implies that the subject of that phrase was “the admittedly deregulatory effect of terrorizing civilians.” This, of course, is a totally different proposition, and something Mike never said. Yet Tom has the audacity to precede the sentence with “Mike’s claim,” plainly attributing the whole sentence to Mike.

This is, quite simply, a lie. And a stupid, transparent lie at that. I’m really confused about what Tom thinks he’s accomplishing. Surely he doesn’t believe the readership of TLF is so dumb that we’ll be persuaded by these kinds of grade-school rhetorical sleights of hand.

Update: Now that I’ve posted this, it occurs to me that I’ll probably see a post on IPCentral in a few minutes with the headline “Lessig supporter endorses the Holocaust.”

Larry Lessig, Demagogue?

April 30, 2008

Tom Sydnor and Richard Bennett have both made a big deal of the fact that Larry Lessig is purportedly a demagogue. Richard, for example, says:

It’s an error to consider Lessig a serious scholar with serious views about serious issues. He’s a performer/demagogue who will latch onto any issue that he can use to promote the Lessig brand. At the Stanford FCC hearing, he portrayed capitalism as a law of the jungle, in pictures of tigers eating prey. What intellectual critique is appropriate to refute that point of view, a picture of George Soros writing a fat check to Free Press so they can bus partisans to the hearing?

Now as it happens, I watched Lessig’s Stanford presentation, so I know what Richard is referring to here. And while this characterization is not wrong, exactly, it’s certainly not a fair summary of Lessig’s point. Here’s what he actually said:

If we had right policy, I don’t think that we would be talking about questions of trust. I don’t think the Department of Justice after the IBM case was talking about whether we trust IBM, or trust Microsoft, or trust Google. We don’t talk about trusting a company just like you don’t talk about trusting a tiger, even though the brand management for tigers has very cute images that they try to sell you on how beautiful and wonderful the tiger is. If you looked at that picture and you thought to yourself the great thing for my child to do would be to play with that tiger you’d be a fool because a tiger has a nature. The nature is not one you trust with your child. And likewise, a company has a nature, and thank god it does. Its nature is to produce economic value and wealth for its shareholders. We don’t trust it to follow good public policy. We trust it to follow that objective. Public policy is designed to make it profitable for them to behave in a way that serves the objectives of public policy, in this case the objective of an open, neutral network. It makes it more profitable for them to behave than to misbehave.


More Broadband Progress

April 24, 2008

Comcast recently unveiled a 50 Mbps broadband connection for $150/month, and has promised to have it available nationwide (and will most likely bring the price down somewhat) by 2010. Verizon is in the process of rolling out a new fiber infrastructure that will allow it to offer a similar deal (you can also get an 8 Mbps connection for $50/month). All of this makes Qwest look like a relative laggard, with its announcement that it will soon be offering a 20 Mbps service for $100/month, and 12 Mbps for $50/month. And AT&T brings up the rear with plans for “only” 10 Mbps service.

One sometimes sees hand-wringing about the anemic state of the American broadband market. To the extent that other countries are doing even better (a debatable contention), we should certainly be looking for ways to make the broadband market more competitive. No doubt things would be progressing even faster if there were more players in the market. But the stronger claim you sometimes see, that there’s something deeply dysfunctional about the progress of the US broadband industry, is positively silly.

If Comcast and Verizon deliver on their promises to roll out 50 Mbit service by the end of the decade, and if prices follow their historical pattern of dropping over time, consumers in their service footprints will have seen the average speed of a $50 Internet connection increase by three orders of magnitude in about 15 years. And it’s been a reasonably steady pace of growth: in round numbers, $50 would buy you a 56kbps connection in 1998, a 512kbps connection in 2001, a 5Mbps connection in 2006, and (I hope) a 50 Mbps connection sometime early next decade. Things have consistently improved by a factor of 10 every 4-6 years.
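
As a quick sanity check on that claim, here’s the growth rate implied by those round-number price points (reading “early next decade” as 2010, which is my assumption):

```python
# Sanity-checking the "factor of 10 every 4-6 years" claim against the round
# numbers in the paragraph above. Reading "early next decade" as 2010 is an
# assumption; the 2010 figure is the hoped-for one.

milestones = [(1998, 0.056), (2001, 0.512), (2006, 5.0), (2010, 50.0)]  # (year, Mbps at ~$50)

for (y1, s1), (y2, s2) in zip(milestones, milestones[1:]):
    years = y2 - y1
    annual_growth = (s2 / s1) ** (1 / years) - 1
    print(f"{y1}->{y2}: {s2 / s1:4.1f}x in {years} years (~{annual_growth:.0%} per year)")
```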

It’s interesting to look back at the broadband argument we were having a couple of years ago. In mid-2006, TLF reader Luis Villa reported that he and his folks were still seeing typical broadband speeds of 2 Mbps. Maybe he can chime in to let us know whether that number has gone up at all. Personally, my apartment in St. Louis gets around 5 Mbps for about $30/month, and the house where I’m currently staying in DC has a 17 Mbps Comcast connection.

One of the things that makes this hard to judge is that broadband speeds and prices don’t tend to improve continuously and evenly around the country. Rather, carriers take turns leap-frogging one another, with each upgrade accompanied by a temporary price increase. So it can be tricky to judge the average rate of improvement by looking at just one market, because any single market may seem to stagnate for several years at a time. But if one looks at the country as a whole, and focuses on time horizons closer to a decade, I think it’s undeniable that things are improving at a fairly rapid pace.

Don Marti offers a good example of the point of my last post:

The Smoot-Hawley Tariff was law. Many of the “code is law” design choices are similar: they can cause big effects without necessarily implementing the lawmaker’s intent.

I couldn’t have put it better myself. And this brings to mind another important point about “law” that I think is missed in Lessig’s “code is law” formulation: the real law Lessig is analogizing computer code to often isn’t exactly omnipotent. The War on Drugs is law, but there’s still lots of drug use going on. Strict gun control is law in some big cities, but there’s still plenty of gun violence. The McCain-Feingold soft money ban is law, but there’s still plenty of corruption in Washington. The copyright system is law, but there’s still plenty of illicit file sharing going on. And in all of these cases, the laws have had significant effects that were very different from their authors’ intent.

And indeed, if we’re going to analogize ISPs to legislators, it’s important to remember that legislators have several key advantages over ISPs. In the first place, actual legislators can have you thrown in jail, whereas the worst an ISP can do to you is disconnect your service. Second, governments have effectively unlimited sums of money to waste on futile efforts like the drug war; ISPs, in contrast, are profit-seeking institutions, so there are limits on the amount of idiocy they can undertake before financial considerations force them to stop. Finally, the law has at least some moral suasion behind it: the fact that something is illegal at least causes people to mostly hide that they’re doing it, whereas people evading the “law” of an ISP could plausibly claim the moral high ground.

Yet despite all these advantages, our government hasn’t come close to stopping people from using drugs, sharing files, hiring prostitutes, owning guns, corrupting public officials, and so forth. Not because the laws for those purposes were badly designed, or because the people implementing them haven’t tried hard enough, but because there are fundamental limits to government’s ability to control the behavior of citizens in a free society.

I think something similar is true of a large-scale network like the Internet. Incompetent network owners, as the “governments” of their corners of the Internet, can certainly cause a lot of havoc, just like real governments. But their ability to control what their customers do is constrained for many of the same reasons that governments’ ability to stop drug use or file sharing is constrained. And that means that network owners that try to restrict their customers’ use of their network are as likely to shoot themselves in the foot as they are to enhance their bottom lines.

Writing my critique of Larry Lessig’s Stanford lecture brought to mind an important ambiguity in Lessig’s oft-repeated slogan that code is law. I think the phrase actually has two distinct meanings that are often conflated but actually have quite different implications. One is, I think, indisputably true, whereas the other one is at best greatly overstated.

It’s certainly true that code is law in the sense that technological architectures place important constraints on the actions of their users. A TCP/IP network allows me to do different things than Verizon’s mobile network or the global ATM network. Wikipedia is the way it is largely because the software on which it runs constrains users in certain ways and empowers them in others. CALEA likely has given law enforcement agencies surveillance powers they couldn’t have had otherwise. To the extent that this was Lessig’s point, he’s obviously right.

However, when Lessig says “code is law,” he often seems to be making a significantly stronger claim about the power of code as law: not simply that code constrains us in some ways, but that the authors of code have a great deal of control over the exact nature of the constraints their technology will place on people. On this view, virtually any outcome the code designer wants to achieve, within reason, can be achieved by a sufficiently clever and determined programmer.

This stronger formulation strikes me as obviously wrong, for at least two reasons. First, the set of tools available to the code-writer is often rather limited. Barring major breakthroughs in AI technology, many concepts and categories that are common sense to human beings cannot easily be translated into code. Rules like “block messages critical of President Bush” or “don’t run applications that undermine our business model” can’t easily be expressed in hardware or software. There are a variety of heuristics that can approximate these results, but human beings will almost always be able to circumvent them.
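
To make that concrete, here’s a deliberately naive sketch of such a heuristic: a keyword pattern standing in for “block messages critical of President Bush” (the pattern is purely hypothetical), along with the trivial evasions and false positives it invites:

```python
import re

# A deliberately naive sketch of the kind of heuristic described above: a
# keyword pattern standing in for "block messages critical of President
# Bush." The pattern is purely hypothetical, chosen to show how such rules
# both under-block and over-block.

CRITICAL_OF_BUSH = re.compile(r"\bbush\b.*\b(incompetent|failure)\b", re.IGNORECASE)

def blocked(message: str) -> bool:
    return bool(CRITICAL_OF_BUSH.search(message))

print(blocked("Bush has been an incompetent president"))     # True: the rule fires
print(blocked("B.u.s.h has been an inc0mpetent president"))  # False: trivially evaded
print(blocked("The current administration is a failure"))    # False: no keyword match
print(blocked("My bush beans were a failure this year"))     # True: false positive
```

Each loophole a smarter filter closes invites the next workaround, which is why such heuristics approximate the rule rather than implement it.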

The deeper challenge is a Hayekian point about spontaneous order: with a sufficiently complex and powerful technological platform, it often will not be possible to even predict, much less control, how the technology will be used in practice. Complex technologies often exhibit emergent properties, with the whole exhibiting behaviors that “emerge” from the complex interplay of much simpler constituent parts. It would have been hard for anyone to predict, for example, that the simple rules of wiki software could form the basis for a million-entry encyclopedia. Indeed, it’s pretty clear that Wikipedia was possible only because the site’s creators gave up any semblance of centralized control and allowed spontaneous order to work its magic.

A similar point applies to the Internet, and to the network owners that nominally control it. They certainly have some levers they can push to change certain micro-level characteristics of their networks, just as Jimmy Wales could make changes to the code of Wikipedia. But there’s no reason to think that either Wikipedia or Comcast would have any way to either predict or control what effects any given micro-level change to their platforms would have on the macro-level behavior of the whole system. They’re both large, complex systems whose millions of participants are each pursuing their own idiosyncratic objectives, and they’re likely to react and interact in surprising ways.

Now, the fact that Comcast can’t predict or control what effects its tinkering with its network might have does not, of course, mean that it won’t try. But it does mean that we should be skeptical of just-so stories about telcos turning the Internet into a well-manicured walled garden. When you create complex, open systems, they tend to take on a life of their own, and once you’ve ceded centralized authority, it can be very difficult to take back. Code may be law, but I think it’s a much more limited kind of law than a lot of people seem to think.

Over at Techdirt, I’ve got a series of posts that highlight some of the major arguments in my forthcoming paper on network neutrality. I’m particularly pleased with the latest installment of the series:

He claims that “owners have the power to change [the Internet’s architecture], using it as a tool, not to facilitate competition but to weaken competition.” Do they? He doesn’t spend any time explaining how networks would do this, or what kind of architectural changes he has in mind. But he does give an example that I think is quite illuminating, although not quite in the way he had in mind. Lessig imagines a world of proprietary power outlets, in which the electricity grid determines the make and model of an appliance before deciding whether to supply it with power. So your power company might charge you one price for a Sony TV, another price for a Hitachi TV, and it might refuse to work at all with an RCA TV. Lessig is certainly right that that would be a bad way for the electricity grid to work, and it would certainly be a headache for everybody if things had been set up that way from the beginning. But the really interesting question is what a power company would have to do if it wanted to switch an existing electricity grid over to a discriminatory model. Because the AT&Ts and Comcasts of the world wouldn’t be starting from scratch; they’d be changing an existing, open network.

I focus on an aspect of the debate that I’ve seen receive almost no attention elsewhere: assuming that network discrimination remains legal, how much power could network owners actually exert over the use of their networks? There’s an assumption on both sides that ownership of a network automatically translates to comprehensive control over how it’s used. But I have yet to see anyone give a plausible explanation of how a last mile provider would get from the open network they’ve got now to the tightly-controlled one that network neutrality advocates fear. It’s always simply asserted, as Lessig does here, and then taken as a given for the rest of the argument. It’s a weakness in the pro-regulation argument that I wish more critics of regulation would highlight.

I go on to discuss the difficulties of real-world architectural transitions. Read the whole thing here.