Writing my critique of Larry Lessig’s Stanford lecture brought to mind an important ambiguity in Lessig’s oft-repeated slogan that code is law. I think the phrase has two distinct meanings that are often conflated but have quite different implications. One is, I think, indisputably true, whereas the other is at best greatly overstated.
It’s certainly true that code is law in the sense that technological architectures place important constraints on the actions of their users. A TCP/IP network allows me to do different things than Verizon’s mobile market or the global ATM network. Wikipedia is the way it is largely because the software on which it runs constrains users in certain ways and empowers them in others. CALEA has likely given law enforcement agencies surveillance powers they couldn’t have had otherwise. To the extent that this was Lessig’s point, he’s obviously right.
However, when Lessig says “code is law,” he often seems to be making a significantly stronger claim about the power of code as law: not simply that code constrains us in some ways, but that the authors of code have a great deal of control over the exact nature of the constraints their technology will place on people. On this view, virtually any outcome the designer wants to achieve, within reason, can be achieved if the code is written by a sufficiently clever and determined programmer.
This stronger formulation strikes me as obviously wrong, for at least two reasons. First, the set of tools available to the code-writer is often rather limited. Barring major breakthroughs in AI technology, many concepts and categories that are common sense to human beings cannot easily be translated into code. Rules like “block messages critical of President Bush” or “don’t run applications that undermine our business model” can’t easily be expressed in hardware or software. There are a variety of heuristics that can approximate these results, but human beings will almost always be able to circumvent them.
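To make that concrete, here is a minimal sketch (my illustration, not Lessig’s, with made-up messages) of the kind of keyword heuristic a censor might actually deploy for a rule like “block messages critical of President Bush,” and how easily it fails in both directions:

```python
# A naive stand-in for the rule "block messages critical of
# President Bush": flag any message that pairs his name with a
# negative word. (Illustrative only; real filters are fancier,
# but face the same basic gap between the rule and its proxy.)

NEGATIVE_WORDS = {"terrible", "failure", "incompetent", "worst"}

def should_block(message: str) -> bool:
    words = set(message.lower().split())
    return "bush" in words and bool(words & NEGATIVE_WORDS)

print(should_block("Bush is a terrible president"))    # True: caught
print(should_block("B-u-s-h is a t3rrible president")) # False: trivially evaded
print(should_block("My rose bush looks terrible"))     # True: false positive
```

Smarter filters raise the cost of evasion, but the gap between the human-level rule and its mechanical approximation never fully closes.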
The deeper challenge is a Hayekian point about spontaneous order: with a sufficiently complex and powerful technological platform, it often will not be possible even to predict, much less control, how the technology will be used in practice. Complex technologies often exhibit emergent properties: behaviors of the whole that “emerge” from the interplay of much simpler constituent parts. It would have been hard for anyone to predict, for example, that the simple rules of wiki software could form the basis for a million-entry encyclopedia. Indeed, it’s pretty clear that Wikipedia was possible only because the site’s creators gave up any semblance of centralized control and allowed spontaneous order to work its magic.
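Conway’s Game of Life (my example, not one from the lecture) is perhaps the cleanest toy demonstration of emergence: a one-line rulebook, applied blindly, produces patterns that appear nowhere in the rules themselves:

```python
from collections import Counter

Cell = tuple[int, int]

def step(live: set[Cell]) -> set[Cell]:
    """One generation of Conway's Game of Life on an infinite grid."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn if it has 3 live neighbors, or 2
    # and is already alive. That is the entire rulebook.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider": five cells that the rules above propel diagonally
# across the grid forever. Nothing in step() mentions gliders;
# the behavior emerges from the interplay of simple parts.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted one cell diagonally
```

If five cells and three rules can surprise their programmer, a platform with millions of users certainly will.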
A similar point applies to the Internet, and to the network owners who nominally control it. They certainly have levers they can pull to change certain micro-level characteristics of their networks, just as Jimmy Wales could make changes to the code of Wikipedia. But there’s no reason to think that either Wikipedia or Comcast has any way to predict, let alone control, what effects a given micro-level change to its platform will have on the macro-level behavior of the whole system. Both are large, complex systems whose millions of participants are each pursuing their own idiosyncratic objectives, and those participants are likely to react and interact in surprising ways.
Now, the fact that Comcast can’t predict or control what effects its tinkering with its network might have does not, of course, mean that it won’t try. But it does mean that we should be skeptical of just-so stories about telcos turning the Internet into a well-manicured walled garden. When you create complex, open systems, they tend to take on a life of their own, and once you’ve ceded centralized authority, it can be very difficult to take back. Code may be law, but I think it’s a much more limited kind of law than a lot of people seem to think.