What We’re Reading

Building on this week’s Cato Unbound online debate over the impact of Lawrence Lessig’s Code ten years after its release, Tim Lee has posted a terrific essay over at the Freedom to Tinker blog, “Sizing Up ‘Code’ with 20/20 Hindsight.”  Tim concludes:

It seems to me that the Internet is rather less malleable than Lessig imagined a decade ago. We would have gotten more or less the Internet we got regardless of what Congress or the FCC did over the last decade. And therefore, Lessig’s urgent call to action — his argument that we must act in 1999 to ensure that we have the kind of Internet we want in 2009 — was misguided. In general, it works pretty well to wait until new technologies emerge and then debate whether to regulate them after the fact, rather than trying to regulate preemptively to shape the kinds of technologies that are developed.

As I wrote a few months back, I think Jonathan Zittrain’s The Future of the Internet and How to Stop It makes the same kind of mistake Lessig made a decade ago: overestimating regulators’ ability to shape the evolution of new technologies and underestimating the robustness of open platforms. The evolution of technology is mostly shaped by engineering and economic constraints. Government policies can sometimes force new technologies underground, but regulators rarely have the kind of fine-grained control they would need to promote “generative” technologies over sterile ones, any more than they could have stopped the emergence of cookies or DPI if they’d made different policy choices a decade ago.

I agree wholeheartedly, of course, and this is the point I was trying to make in my first essay in the Cato debate when I argued:

Lessig’s lugubrious predictions proved largely unwarranted. Code has not become the great regulator of markets or enslaver of man; it has been a liberator of both. Indeed, the story of the past digital decade has been the exact opposite of the one Lessig envisioned in Code. Cyberspace has proven far more difficult to “control” or regulate than any of us ever imagined. More importantly, the volume and pace of technological innovation we have witnessed over the past decade has been nothing short of stunning.

Anyway, read Tim’s entire essay.

I’ve posted another response in the Cato Unbound online debate over the impact of Lawrence Lessig’s Code and Other Laws of Cyberspace upon the book’s 10th anniversary.  You will recall that I went fairly hard on Prof. Lessig in my essay, “Code, Pessimism, and the Illusion of ‘Perfect Control,’” and Lessig responded with a counter-punch that went after me for it.  I respond in a new essay about “Our Conflict of Cyber-Visions.” In the piece, I address Lessig’s assertion that I just didn’t understand the central teachings of Code, as well as his reluctance to accept the “cyber-collectivism” label that I affixed to his book and life’s work.  Again, please hop over to Cato Unbound for my complete response.

But one thing from the essay that I thought worth reproducing here is my effort to better define the key principles that separate the cyber-libertarian and cyber-collectivist schools of thinking.  I argue that it comes down to this:

The cyber-libertarian believes that “code failures” are ultimately better addressed by voluntary, spontaneous, bottom-up, marketplace responses than by coerced, top-down, governmental solutions. Moreover, the decisive advantage of the market-driven approach to correcting code failure comes down to the rapidity and nimbleness of those response(s).

Of course, another key difference relates to how quickly one jumps to the conclusion that “code failures” are actually occurring at all. I argue:

Continue reading →

The week-long Cato Unbound online debate about the 10th anniversary of Lawrence Lessig’s Code and Other Laws of Cyberspace continues today with Prof. Lessig’s response to Declan McCullagh’s opening essay, “What Larry Didn’t Get,” Jonathan Zittrain’s follow-up essay, and my essay on, “Code, Pessimism, and the Illusion of ‘Perfect Control.’”  Needless to say, Prof. Lessig isn’t too happy with my response. You should jump over to the Cato site to read the entire thing, but here are a couple of excerpts and my response.

To my suggestion that there is a qualitative difference between law and code, Prof. Lessig says:

I’ve argued that things aren’t quite as simple as some libertarians would suggest. That there’s not just bad law. There’s bad code. That we don’t need to worry just about Mussolini. We also need to worry about DRM or the code AT&T deploys to help the government spy upon users. That public threats to liberty can be complemented by private threats to liberty. And that the libertarian must be focused on both.  […]

Of course, law is law. Who could be oblivious to that? And who would need a book to explain it?  But the fact that “law is law” does not imply that it has a “much greater impact in shaping markets and human behavior.” Sometimes it does — especially when that “law” is delivered by a B1 bomber. But ask the RIAA whether it is law or code that is having a “greater impact in shaping markets” for music. Or ask the makers of Second Life whether the citizens of that space find themselves more constrained by the commercial code of their geo-jurisdiction or by the fact that the software code of Second Life doesn’t permit you simply to walk away (so to speak) with another person’s scepter. Whether and when law is more effective than code is an empirical matter — something to be studied, and considered, not dismissed by banalities spruced up with italics.

Well, I beg the professor’s pardon for excessive use of italics.  [I won’t ask for an apology for his misspelling of my last name in his piece!] Regardless, it’s obvious that we’ll just never see eye-to-eye on the crucial distinction between law and code. Again, as I stated in my essay: “With code, escape is possible. Law, by contrast, tends to lock in and limit; spontaneous evolution is supplanted by the stagnation of top-down, one-size-fits-all regulatory schemes.”

Lessig largely dismisses much of this with that last line above, suggesting that we just need to keep studying the matter to determine the right mix of what works best. To be clear, while I’m all for studying the impact of law vs. code as “an empirical matter,” that in turn raises the question of how we define effectiveness or success. I suspect that the professor and I would have a “values clash” over some rather important first principles in that regard. This is, of course, a conflict of visions that we see throughout the history of philosophy: a conflict between those who put the individual and the individual’s rights at the core of any ethical political system versus those who would place the rights of “the community,” “the public,” or some other amorphous grouping(s) at the center of everything. It’s a classic libertarian vs. communitarian / collectivist debate.

Continue reading →

Ted Dziuba has penned a humorous and sharp-tongued piece for The Register about last week’s Adblock vs. NoScript fiasco.  For those of you who aren’t Firefox junkies, a nasty public spat broke out between the makers of these two very popular Firefox browser extensions (they are the #1 and #3 most popular downloads, respectively).  To make a long and complicated story much shorter: NoScript didn’t like Adblock placing its sites on Adblock’s blacklist, so NoScript’s developer fought back by tinkering with his own code to evade the block.  Adblock’s developers responded by further tinkering with their code to circumvent the circumvention!  And then, as they say, words were exchanged.

Thus, a war of words and code took place.  In the end, however, it had a (generally) happy ending, with NoScript backing down and apologizing. Regardless, Mr. Dziuba doesn’t like the way things played out:

The real cause of this dispute is something I like to call Nerd Law.  Nerd Law is some policy that can only be enforced by a piece of code, a public standard, or terms of service. For example, under no circumstances will a police officer throw you to the ground and introduce you to his friend the Tazer if you crawl a website and disrespect the robots.txt file.

The only way to adjudicate Nerd Law is to write about a transgression on your blog and hope that it gets to the front page of Digg. Nerd Law is the result of the pathological introversion software engineers carry around with them, being too afraid of confrontation after that one time in high school when you stood up to a jock and ended up getting your ass kicked.
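Dziuba’s robots.txt example is a good illustration of what he means by Nerd Law: the file is just a public statement of policy, and compliance lives entirely in the crawler’s own code. A minimal sketch in Python’s standard library shows the mechanics (the bot name and URLs here are purely illustrative):

```python
from urllib import robotparser

# robots.txt is "Nerd Law": nothing compels a crawler to honor it, so a
# well-behaved crawler must check it voluntarily before fetching a page.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The polite crawler consults the policy; the rude one simply doesn't.
print(rp.can_fetch("MyBot", "https://example.com/public/page.html"))    # True
print(rp.can_fetch("MyBot", "https://example.com/private/secret.html")) # False
```

Nothing in this code (or anywhere else) is enforced against a crawler that skips the `can_fetch` check, which is exactly the gap Dziuba says gets adjudicated on blogs instead of in courts.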

Dziuba goes on to suggest that “If you actually talk to people, network, and make agreements, you’ll find that most are reasonable” and, therefore, this confrontation and resulting public fight could have been avoided. They “could have come to a mutually-agreeable solution,” he says.

But no. Sadly, software engineers will do what they were raised to do. And while it may be a really big hullabaloo to a very small subset of people who Twitter and blog their every thought as if anybody cared, to the rest of us, it just reaffirms our knowledge that it’s easy to exploit your average introvert.  After all, what’s he gonna do? Blog about it?

OK, so maybe the developers could have come to some sort of an agreement if they had opened direct channels of communication or, better yet, if someone at the Mozilla Foundation could have intervened early on and mediated the dispute.  At the end of the day, however, that did not happen and a public “Nerd War” ensued.  But I’d like to say a word in defense of Nerd Law and public fights about “a piece of code, a public standard, or terms of service.”

Continue reading →

The Cato Unbound online debate about the 10th anniversary of Lawrence Lessig’s Code and Other Laws of Cyberspace continues today with my response to Declan McCullagh’s opening essay, “What Larry Didn’t Get,” as well as Jonathan Zittrain’s follow-up.

In my response, “Code, Pessimism, and the Illusion of ‘Perfect Control,’” I begin by arguing that:

The problem with peddling tales of a pending techno-apocalypse is that, at some point, you may have to account for your prophecies — or false prophecies as the case may be. Hence, the problem for Lawrence Lessig ten years after the publication of his seminal book, Code and Other Laws of Cyberspace.

I go on to argue that:

Lessig’s lugubrious predictions proved largely unwarranted. Code has not become the great regulator of markets or enslaver of man; it has been a liberator of both. Indeed, the story of the past digital decade has been the exact opposite of the one Lessig envisioned in Code.

After providing several examples of just how wrong Lessig’s predictions were, I then ask:

[W]hy have Lessig’s predictions proven so off the mark? Lessig failed to appreciate that markets are evolutionary and dynamic, and when those markets are built upon code, the pace and nature of change becomes unrelenting and utterly unpredictable. With the exception of some of the problems identified above, a largely unfettered cyberspace has left digital denizens better off in terms of the information they can access as well as the goods and services from which they can choose. Oh, and did I mention it’s all pretty much free-of-charge? Say what you want about our cyber-existence, but you can’t argue with the price!

Continue reading →

As I mentioned on Monday, the folks over at Cato Unbound have put together an online debate about the impact of Lawrence Lessig’s Code and Other Laws of Cyberspace as it turns 10 this year.

The opening essay from Declan McCullagh, “What Larry Didn’t Get,” took Lessig to task for favoring rule by “technocratic philosopher kings” over the spontaneous invisible hand of code.   In Round 2 of the debate, Harvard’s Jonathan Zittrain comes to Lessig’s defense and suggests that the gap between Lessig and libertarians is not as wide as Declan suggests:

The debate between Larry and the libertarians is more subtle. Larry says: I’m with you on the aim — I want to maintain a free Internet, defined roughly as one in which bits can move between people without much scrutiny by the authorities or gatekeeping by private entities. Code’s argument was and is that this state of freedom isn’t self-perpetuating. Sooner or later government will wake up to the possibilities of regulation through code, and where it makes sense to regulate that way, we might give way — especially if it forestalls broader interventions.

Run over to Cato Unbound to read the rest.  My response will be going up next (on Friday) and then Prof. Lessig’s will be up next Monday.

Lawrence Lessig’s Code and Other Laws of Cyberspace turns 10 this year, and the folks over at Cato Unbound have put together an online debate about the book and its impact on cyberlaw, which I am honored to be taking part in.  The discussion begins today with a lead essay from Declan McCullagh of CNET News and then continues throughout the week with responses from Harvard’s Jonathan Zittrain, myself, and then Prof. Lessig himself.

Declan’s lead essay, “What Larry Didn’t Get,” starts things off with a bang:

[Lessig] prefers what probably could be called technocratic philosopher kings, of the breed that Plato’s The Republic said would be “best able to guard the laws and institutions of our State — let them be our guardians.” These technocrats would be entrusted with making wise decisions on our behalf, because, according to Lessig, “politics is that process by which we collectively decide how we should live.”

Declan goes on to cite a litany of high-profile legislative and regulatory failures that have unfolded over the past decade, calling into question the wisdom of Prof. Lessig’s approach.  Declan continues:

One response might be that the right philosopher-kings have not yet been elevated to the right thrones.  But assuming perfection on the part of political systems (especially when sketching plans to expand their influence) is less than compelling.  The field of public choice theory has described many forms of government failure, and there’s no obvious reason to exempt Internet regulation from its insights about rent-seeking and regulatory capture.

Sounds like it could be a heated discussion!  Jonathan Zittrain is up next with an essay due to be posted on Wednesday and then my response will follow on Friday.  Prof. Lessig’s response will go up a week from today.  I look forward to this exchange and the responses it generates.  I encourage readers to head over to the Cato Unbound site and check out the essays as they appear.  I’ll post reminders here as the installments go live on the Cato site.

During my summer internship at CEI, a couple of us interns discussed the book Cato’s Robert Levy published last May, The Dirty Dozen: How Twelve Supreme Court Cases Radically Expanded Government and Eroded Freedom. We looked at Levy’s list of the worst decisions and sent each other lists of our own. Now that I’m taking ConLaw, I feel as though the time has come to post my lists of the twelve worst and the twelve best Supreme Court decisions of all time. These are by no means exhaustive lists. My inclusion of cases different from Levy’s does not indicate that I disagree with his assessment that those decisions are terrible – just maybe not as bad as the ones I select.

The Worst:

  1. The Slaughter-House Cases (1873). The very worst decision ever made by the US Supreme Court. Eviscerated the 14th Amendment only five years after its adoption. It is best known for reading the Privileges or Immunities Clause, which was supposed to be (and could have been) a vehicle for both incorporation and unenumerated rights, out of the Constitution. But it also wrote out the Due Process Clause and the Equal Protection Clause, though those two clauses eventually crawled back into existence, to a degree.
  2. Katzenbach v. McClung (1964). It was tough to decide which to include of the various cases that read the Commerce Clause expansively enough to permit Congress to pass any law it desires, thus destroying the premise of the federal government as one of defined and limited powers. But McClung seems to be the most expansive in both its result and its holding.

Continue reading →


Bureaucrash has just posted a new round of libertarian lolcats. Many involve tech policy. Check them out if you’re in the mood for some feline-and-political-commentary-based hilarity!

On this episode of “Tech Policy Weekly,” we’re launching a new format called “Tech Book Corner” that will feature occasional conversations with the authors of important new books about technology policy and the other issues that we debate frequently at the Tech Liberation Front blog.

On this debut episode of Book Corner, we are joined by John Palfrey, a professor of law at Harvard University and the co-director of the Berkman Center for Internet & Society at Harvard. Along with his Berkman Center colleague Urs Gasser, Prof. Palfrey has recently co-authored Born Digital: Understanding the First Generation of Digital Natives, which was published last summer by Basic Books and about which you can find more information at www.borndigitalbook.com. [Incidentally, I reviewed Born Digital here last October, and I also named it one of the most important technology policy books of 2008.]


In our discussion, Prof. Palfrey explains who exactly counts as a “digital native” and tells us why he decided to write a book about them. He discusses why he believes that there has been some overreaction by older generations to fears about this Digital Generation and he argues that we need “to separate what we need to worry about from what’s not so scary” and “what we ought to resist from what we ought to embrace.” He then outlines how we should think about these issues and concerns going forward, and he stresses the importance of “balancing caution with encouragement” as we do so. Finally, he then applies that framework to three specific issues: privacy, child safety, and copyright.

It’s an interesting conversation and you can begin listening to it immediately by downloading the MP3 file here or by just clicking the play button below!

[display_podcast]