I’ve finally finished a draft of the network neutrality paper I’ve been blogging about for the last few months. One of the things I learned after the publication of my previous Cato paper is that, especially when you’re writing about a technical subject, you’ll inevitably make some errors that will only be caught when a non-trivial number of other people have the chance to read it. So if any TLF readers are willing to review a pre-release draft of the paper and give me their comments, I would appreciate the feedback. Please email me at leex1008@umn.edu and I’ll send you a copy. Thanks!
I would be remiss if I didn’t mention the launch of the End Software Patents coalition, headed by Ben Klemens. Ars has a good write-up. Software patents are among the biggest threats to innovation in the software industry. And indeed, they’ve become a nuisance far beyond Silicon Valley, because every company uses software and is therefore likely violating software patents. It’s good to see the movement for ending them continue to grow.
Larry Lessig is a gifted writer, and he does a good job of finding and telling stories that illustrate the points he’s trying to make. I found Free Culture extremely compelling for just this reason: he illustrates a fairly subtle but pervasive problem by finding representative examples and weaving them into a broader narrative. He demonstrates that copyright law has become so far-reaching that people’s freedom is restricted in a thousand subtle ways by its over-broad application.
He takes a similar approach in Code, but the effort falls flat for me. Here, too, he gives a bunch of examples in which “code is law”: the owners of technological systems are able to exert fine-grained control over their users’ online activities. But the systems he describes to illustrate this principle have something important in common: they are proprietary platforms whose users have chosen to use them voluntarily. He talks about virtual reality, MOOs and MUDs, AOL, and various web-based fora. He does a good job of explaining that the different rules of these virtual spaces—their “architecture”—have a profound impact on their character. The rules governing an online space interact in complex ways with its participants to produce a wide variety of online spaces with distinct characters.
Lessig seems to want to generalize from these individual communities to the “community” of the Internet as a whole. He wants to say that if code is law on individual online communications platforms, then code must be law on the Internet in general. But this doesn’t follow at all. The online fora that Lessig describes are very different from the Internet in general. The Internet is a neutral, impersonal platform that supports a huge variety of different applications and content. The Internet as a whole is not a community in any meaningful sense. So it doesn’t make sense to generalize from individual online communities, which are often specifically organized to facilitate control, to the Internet in general, which was designed with the explicit goal of decentralizing control to the endpoints.
Also, the cohesiveness and relative ease of control one finds in individual online communities exist precisely because users join any given online service voluntarily. Users face pressure to abide by the generally accepted rules of the community, and users who feel a given community’s rules aren’t a good fit will generally switch to a new one rather than make trouble. In other words, code is law in individual Internet communities precisely because there exists a broader Internet in which code is not law. When an ISP tries to control its users’ online activities, by contrast, users react very differently. As we’ve seen in the case of the Comcast kerfuffle, users do not react in a docile fashion to ISPs that attempt to control their online behavior. And at best, such efforts produce only limited and short-term control.
I’m re-reading Larry Lessig’s Code and Other Laws of Cyberspace. I last read it about four years ago, long enough that I’d forgotten a lot of the specific claims Lessig made. One prediction that clearly has not come to pass is that we would develop a “general architecture of trust” that would “permit the authentication of a digital certificate that verifies facts about you—your identity, citizenship, sex, age, or the authority you hold.” Lessig thought that “online commerce will not fully develop until such an architecture is established,” and that way back in 1999, we could “see enough to be confident that it is already developing.”
Needless to say, this never happened, and it now looks unlikely that it ever will happen. The closest we came was Microsoft’s Passport, which was pretty much a flop. We have instead evolved a system in which people have dozens of lightweight online identities for the different websites they visit, many of which involve little more than setting a cookie on one’s browser. The kind of universal, monolithic ID system that would allow any website to quickly and transparently learn who you are seems much less likely today than it apparently seemed to Lessig in 1999.
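To make the “little more than setting a cookie” point concrete, here is a minimal illustrative sketch in Python (the key handling and function names are made up, not any particular site’s code) of such a lightweight per-site identity: an opaque signed token that tells the site only that this is the same visitor as before, and nothing about who you “really” are.

```python
import hashlib
import hmac
import secrets

# Hypothetical per-site signing key; each site has its own, so the "identity"
# it creates is meaningless anywhere else.
SITE_SECRET = secrets.token_bytes(32)

def issue_session_cookie(user_id: str) -> str:
    """Return a 'user_id.signature' value suitable for a Set-Cookie header."""
    sig = hmac.new(SITE_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_session_cookie(cookie: str) -> str | None:
    """Return the user id if the signature checks out, otherwise None."""
    user_id, _, sig = cookie.rpartition(".")
    expected = hmac.new(SITE_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None

cookie = issue_session_cookie("reader42")
assert verify_session_cookie(cookie) == "reader42"
```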
Of course, this would have been obvious to Lessig if he’d had the chance to read Jim Harper’s Identity Crisis. Jim explained that the security of an identifier is a function not only of the sophistication of its security techniques, but also of the payoff for breaking it. A single, monolithic identifier is a bad idea because it becomes an irresistible target for the bad guys. It’s also insufficiently flexible: security rules that are robust enough for online banking are going to be overkill for casual web surfing. What I want, instead, is a range of identifiers of varying levels of security, tailored to the sensitivity of the systems to which they control access.
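A toy illustration of that idea, with invented tiers and credential names (my own gloss, not something from Harper’s book): the strength of the credential scales with the sensitivity of what it protects, rather than one monolithic ID doing everything.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    CASUAL = 1      # commenting on a blog
    PERSONAL = 2    # webmail, a social networking profile
    FINANCIAL = 3   # online banking

# Invented policy table: stronger (and costlier) credentials only where the
# payoff for an attacker, and the harm from a breach, justify them.
REQUIRED_CREDENTIALS = {
    Sensitivity.CASUAL: ["session cookie"],
    Sensitivity.PERSONAL: ["password"],
    Sensitivity.FINANCIAL: ["password", "one-time code"],
}

def credentials_required(level: Sensitivity) -> list[str]:
    return REQUIRED_CREDENTIALS[level]

print(credentials_required(Sensitivity.CASUAL))     # ['session cookie']
print(credentials_required(Sensitivity.FINANCIAL))  # ['password', 'one-time code']
```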
Online security isn’t mostly about technology at all. For example, the most important safeguard against online credit card fraud isn’t SSL. It’s the fact that someone trying to buy stuff with a stolen credit card has to give a delivery location, which the police can use to apprehend him. Our goal isn’t and shouldn’t be maximal security in every transaction. Rather, it’s to increase security until the marginal cost of additional security begins to outweigh the resulting reduction in fraud. If the size of a transaction is reasonably low, and most people are honest, quite minimalist security precautions may be sufficient to safeguard it. That appears to be what’s happened so far, and Lessig’s prediction to the contrary is starting to look rather dated.
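To make the marginal argument concrete, here is a toy calculation with entirely invented figures: a merchant keeps adding security measures only so long as the next measure prevents more fraud than it costs.

```python
# Entirely invented annual figures, for illustration only.
measures = [
    # (measure, cost, fraud losses it would prevent)
    ("require a delivery address", 1_000, 50_000),
    ("encrypt the checkout page", 5_000, 20_000),
    ("manually review every order", 40_000, 10_000),
    ("issue hardware tokens to all customers", 90_000, 2_000),
]

adopted = []
for name, cost, fraud_prevented in measures:
    if fraud_prevented > cost:   # marginal benefit still exceeds marginal cost
        adopted.append(name)
    else:
        break                    # further security costs more than it saves

print(adopted)  # ['require a delivery address', 'encrypt the checkout page']
```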
So I’ve finished reading the Frischmann paper. I think it makes some interesting theoretical observations about the importance of open access to certain infrastructure resources. But I think the network neutrality section of the paper is weakened by a lack of specificity about what’s at stake in the network neutrality debate. He appears to take for granted that the major ISPs are able and likely to transform the Internet into a proprietary network in the not-too-distant future. Indeed, he seems to regard this point as so self-evident that he frames it as a simple political choice between open and closed networks.
But I think it’s far from obvious that anyone has the power to transform the Internet into a closed network. I can count the serious reports of network neutrality violations on my fingers, and no ISP has come anywhere close to transforming its network into a proprietary network like AOL circa 1994. Larry Lessig raised the alarm about that threat a decade ago, yet if anything, things have gotten more, not less, open in the last decade. We have seen an explosion of mostly non-commercial, participatory Internet technologies like Wikipedia, Flickr, blogs, YouTube, RSS, XMPP, and so forth. We have seen major technology companies, especially Google, throw their weight behind the Internet’s open architecture. I’m as happy as the next geek to criticize Comcast’s interference with BitTorrent, but that policy has neither been particularly successful in preventing BitTorrent use, nor been emulated by other ISPs, nor proven to be a harbinger of the imminent AOL-ization of Comcast’s network.
Before we can talk about whether a proprietary Internet is desirable (personally, I think it isn’t), I think we have to first figure out what kinds of changes are plausible. I have yet to see anyone tell a coherent, detailed story about what AT&T or Verizon could do to achieve the results that Lessig, Benkler, Wu, and Frischmann are so worried about. Frischmann, like most advocates of network neutrality regulation, seems to simply assume that ISPs have this power and moves on to the question of whether it’s desirable. But in doing so, such advocates are likely to produce regulation that’s not only unnecessary but quite possibly incoherent. If you don’t quite know what you’re trying to prevent, it’s awfully hard to know which regulations are necessary to prevent it.
I’ve been reading Brett Frischmann’s “An Economic Theory of Infrastructure and Commons Management”, which develops a general theory of infrastructure management and then applies this theory to (among other things) the network neutrality debate. I’ll likely have more to say about it once I’ve finished digesting it, but I wanted to note one offhand comment that I found interesting. On page 924, Frischmann writes:
A list of familiar examples [of “infrastructure”] includes: (1) transportation systems, such as highway systems, railways, airline systems, and ports; (2) communications systems, such as telephone networks and postal services; (3) governance systems, such as court systems; and (4) basic public services and facilities, such as schools, sewers, and water systems.
Which of these things isn’t like the others? Let me first quote his definition of “infrastructure resources”:
(1) The resource may be consumed nonrivalrously;
(2) Social demand for the resource is driven primarily by downstream productive activity that requires the resource as an input; and
(3) The resource may be used as an input into a wide range of goods and services, including private goods, public goods, and nonmarket goods.
As far as I can see, schools don’t fit any of these criteria. While there are certainly some under-attended schools whose use could be called non-rivalrous in some sense, there is typically a trade-off between the number of students in a classroom and the quality of instruction. And I suppose you could say that, in general, education is a pre-condition for higher future earnings, but if we interpret the definition that broadly, almost everything is “infrastructure.” People couldn’t be productive citizens without food, so does that make farmers, supermarkets, and restaurants “infrastructure”?
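One rough way to see the mismatch is to restate the three criteria as a checklist and apply it to the schools example. The encoding below is mine, not Frischmann’s, and the verdicts on each criterion are the ones argued for above (all of them contestable).

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    nonrivalrous: bool              # criterion 1
    demand_mainly_downstream: bool  # criterion 2
    wide_range_of_outputs: bool     # criterion 3

def is_infrastructure(r: Resource) -> bool:
    # The quoted definition joins the three criteria with "and",
    # so all three must hold.
    return r.nonrivalrous and r.demand_mainly_downstream and r.wide_range_of_outputs

highway = Resource("highway system", True, True, True)
school = Resource("school", False, False, False)  # per the argument above

print(is_infrastructure(highway))  # True
print(is_infrastructure(school))   # False
```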
I agree with most of what Tom says here but I have to quibble with his final paragraph:
Oh, and one more thing: it’s probably worth noting that one of the greatest technical (and economic) triumphs in recent memory — TCP/IP and the suite of other protocols and standards that powers the internet — was designed by having a bunch of really smart engineers get together, execute an RFC process and then issue an ISO standard more or less by fiat. This is not to say that markets can’t help us arrive at good solutions — cable vs. DSL vs. FiOS is a good example of such a market working (or would be if the regulatory picture weren’t so complicated). But it ought to be acknowledged that markets are not always an optimal tool for making technical decisions. In fact there are now pseudo-centralized organizations that take responsibility for many of the technical standards that power our world, and nearly all engineers agree that we’re vastly better off for it.
The history of the Internet is something I’ve been researching lately, so this is a question I know something about. And I think Tom’s only telling part of the story.
I have to at least partly agree with Tom about the HD-DVD vs. Blu-ray battle, and about libertarians’ tendency to sometimes over-sell the outcomes of market processes. Libertarians have been known to go overboard in insisting that the outcome of a market process is superior to other potential outcomes in every conceivable respect, even when it clearly isn’t. To take the obvious example, as a Mac user, I’m not prepared to concede that, simply because Microsoft has a larger share of the operating system market than Apple, Windows must be a better operating system than Mac OS.
In particular, network effects and path dependence are real phenomena. For example, the x86 architecture really is a kind of Frankenstein’s monster that has limped along for decades despite, not because of, its underlying technical design. Yet it has become the undisputed champion of desktop computing because of network effects—there is now vastly more desktop software written for it than any other architecture, and so that’s what the market demands. From the perspective of an electrical engineer, this is kind of a pity. Newer architectures like the Alpha have a variety of advantages, and if we could somehow wave a magic wand and cause everyone to suddenly have an installed base of Alpha software instead of x86 software, it would almost certainly be an improvement.
This may not be of interest to anyone but me, but on the theory that it’s better to post them all and let Google sort them out, here are the books I read for my forthcoming paper on network neutrality.