Philosophy & Cyber-Libertarianism

This is the second of two essays making “The Case for Internet Optimism.” This essay was included in the book, The Next Digital Decade: Essays on the Future of the Internet (2011), which was edited by Berin Szoka and Adam Marcus of TechFreedom. In my previous essay, which I discussed here yesterday, I examined the first variant of Internet pessimism: “Net Skeptics,” who are pessimistic about the Internet improving the lot of mankind. In this second essay, I take on a very different breed of Net pessimists:  “Net Lovers” who, though they embrace the Net and digital technologies, argue that they are “dying” due to a lack of sufficient care or collective oversight.  In particular, they fear that the “open” Internet and “generative” digital systems are giving way to closed, proprietary systems, typically run by villainous corporations out to erect walled gardens and quash our digital liberties.  Thus, they are pessimistic about the long-term survival of the Internet that we currently know and love.

Leading exponents of this theory include noted cyberlaw scholars Lawrence Lessig, Jonathan Zittrain, and Tim Wu.  I argue that these scholars tend to significantly overstate the severity of this problem (the supposed decline of openness or generativity, that is) and seem to have very little faith in the ability of such systems to win out in a free market. Moreover, there’s nothing wrong with a hybrid world in which some “closed” devices and platforms remain (or even thrive) alongside “open” ones. Importantly, “openness” is a highly subjective term, and a constantly evolving one.  And many “open” systems or devices are not as perfectly open as these advocates suggest.

Finally, I argue that it is more likely that the “openness” these advocates champion will devolve into expanded government control of cyberspace and digital systems than that unregulated systems will become subject to “perfect control” by the private sector, as they fear.  Indeed, the implicit message in the work of all these hyper-pessimistic critics is that markets must be steered in a more sensible direction by technocratic philosopher kings (although the details of their blueprint for digital salvation are often scarce).  Thus, I conclude that the dour, depressing “the-Net-is-about-to-die” fear that seems to fuel this worldview is almost completely unfounded and should be rejected before serious damage is done to the evolutionary Internet through misguided government action.

I’ve embedded the entire essay down below in a Scribd reader, but it can also be found on TechFreedom’s Next Digital Decade book website and on SSRN.


Here’s the first of two essays I’ve recently penned making “The Case for Internet Optimism.” This essay was included in the book, The Next Digital Decade: Essays on the Future of the Internet (2011), which was edited by Berin Szoka and Adam Marcus of TechFreedom.  In these essays, I identify two schools of Internet pessimism: (1) “Net Skeptics,” who are pessimistic about the Internet improving the lot of mankind; and (2) “Net Lovers,” who appreciate the benefits the Net brings society but who fear those benefits are disappearing, or that the Net or openness are dying.  (Regular readers of this blog will be familiar with these themes since I sketched them out in previous essays here such as, “Are You an Internet Optimist or Pessimist?” and “Two Schools of Internet Pessimism.”) The second essay is here.

This essay focuses on the first variant of Internet pessimism, which is rooted in general skepticism about the supposed benefits of cyberspace, digital technologies, and information abundance. The proponents of this pessimistic view often wax nostalgic about some supposed “good ol’ days” when life was much better (although they can’t seem to agree when those were). At a minimum, they want us to slow down and think twice about life in the Information Age and how it’s personally affecting each of us.  Occasionally, however, this pessimism borders on neo-Ludditism, with some proponents recommending steps to curtail what they feel is the destructive impact of the Net or digital technologies on culture or the economy.  I identify the leading exponents of this view of Internet pessimism and their major works. I trace their technological pessimism back to Plato but argue that their pessimism is largely unwarranted. Humans are more resilient than pessimists care to admit, and we learn how to adapt to technological change and assimilate new tools into our lives over time. Moreover, were we really better off in the scarcity era, when we were collectively suffering from information poverty?  Generally speaking, despite the challenges it presents society, information abundance is a better dilemma to be facing than information poverty.  Nonetheless, I argue, we should not underestimate or belittle the disruptive impacts associated with the Information Revolution.  But we need to find ways to better cope with turbulent change in a dynamist fashion instead of attempting to roll back the clock on progress or recapture “the good ol’ days,” which actually weren’t all that good.

Down below, I have embedded the entire chapter in a Scribd reader, but the essay can also be found on the TechFreedom website for the book as well as on SSRN.  I have also included two updated tables that appeared in my old “optimists vs. pessimists” essay.  The first lists some of the leading Internet optimists and pessimists and their books. The second table outlines some of the major lines of disagreement between these two camps, dividing those disagreements into (1) cultural / social beliefs and (2) economic / business beliefs.


TechFreedom launched last week with a half-day symposium dedicated to our first publication, The Next Digital Decade: Essays on the Future of the Internet, including a fireside chat with FCC Commissioner Robert McDowell, three panels, and a conversation about TechFreedom and its mission. Santa Clara Law Professor Eric Goldman, who has three essays in the book, provides a detailed write-up of the discussion on his blog.

Read a summary of the book here, or our Manifesto for TechFreedom. You can watch or download video from the event below (download links are at the bottom).

Fireside Chat: FCC Cmr. Robert McDowell & CNET’s Declan McCullagh


CLU seeks systemic perfection in Tron: Legacy

Note: The following post contains spoilers pertaining to the plot and theme of the film Tron: Legacy.

Near the end of Tron: Legacy, the character CLU (short for Codified Likeness Utility), on the verge of releasing his army of re-purposed computer programs into the brick-and-mortar world to destroy humanity, confronts Kevin Flynn, his creator-turned-nemesis, with a plaintive, “I did everything you asked.”  Flynn, older and wiser than the character we met in 1982’s Tron, his techno-idealism tempered by the realization that to save humanity he must destroy both his physical and virtual self, wistfully answers, “I know.”

It’s a rather poignant scene that punctuates the film’s unique take on technology and humanity. Traditionally in the movies, when technology turns evil, it does so with a will of its own. The Matrix and Terminator films are just two examples. Tron: Legacy, however, upends the idea. CLU, sure enough, turns on his human creator, but not out of rebellion; rather, he acts to carry out his human-engineered programming.

You see, Flynn programmed CLU to create the “perfect system.” In the film, Flynn explains that, as a younger man, he thought he could design a technology-based solution that would end war, illness, poverty, and hunger and, in a nutshell, make humanity better. But when the Grid—the computer environment Flynn nurtured—actually does something spontaneous, spawning a new life form, so-called isomorphic programs (isos for short), CLU destroys them. While this act of cybernetic genocide horrifies Flynn, from CLU’s perspective it was nothing but a logical response. The isos, as free and independent entities that did not respond to his command and control, introduced an element of randomness and uncertainty into the Grid that CLU could not abide. They were an obstacle to the systemic perfection he was programmed to create and therefore had to be eliminated.


Please join us on January 19, 2011 in Washington, DC for the launch of The Next Digital Decade: Essays on the Future of the Internet, a collection of 31 essays from 26 leading cyber thought leaders, including Tim Wu, Hal Varian, the Hon. Alex Kozinski, Stewart Baker, Jonathan Zittrain, Milton Mueller, Eric Goldman, and Yochai Benkler—as well as the TLF’s own Adam Thierer, Larry Downes and Geoff Manne.

This event will feature panel discussions of several of the book’s organizing questions:

  • Internet Optimism, Pessimism & the Future of Online Culture
  • Internet Exceptionalism & Intermediary Deputization
  • Who Will Govern the Net in 2020?

The January 19 event will run from 12:30pm to 5:30pm immediately following the State of the Net conference in the same location: the Columbia B room at the Hyatt Regency (400 New Jersey Ave NW, Washington DC). The event will begin with lunch and end with a cocktail reception between 5:30pm and 7:00pm. Admission is free but space is limited so RSVP now!

Registered attendees will receive a free copy of the book, which can be read online or downloaded as a PDF, or purchased in hardcover. Free eBook versions are coming soon. To learn more about the book, check out the foreword and introduction, or the table of contents.

Visit NextDigitalDecade.com for details or follow us on Twitter or Facebook for updates!

I really enjoyed this editorial in today’s Wall Street Journal by sci-fi novelist Orson Scott Card, author of Ender’s Game, among many other books.  Card engages in some interesting soul searching about the impact of the Net and digital technology on our lives, economy, and culture.  He concludes his essay by noting that:

We’re still the same human beings we always were. Consumers still act like consumers; people still search for love and friendship. But the Internet has freed us from the boundaries of distance and many of the risks of embarrassment in social interactions. This re-sorted geography has brought its own pitfalls and forced us to create new rules of etiquette.

But just as I have no desire to give up cars, trains and planes to return to the hay-eating, vet-needing, poop-generating, one-horsepower horse, I don’t want to go back to pre-Google research, pre-Amazon shopping, pre-blog newsmedia, or the loneliness of villages limited by geography.

Quite right.  Card is expressing the sort of “pragmatic optimism” I’ve written about here before in my essays about the ongoing battle between Internet optimists and pessimists.  I’ve tried to articulate a sort of middle ground position in this debate that embraces the amazing technological changes at work in today’s Information Age but does so with a healthy dose of humility and appreciation for the disruptive impact and pace of that change. As I’ve noted before, we need to think about how to mitigate the negative impacts associated with technological change without adopting the paranoid tone or Luddite-ish recommendations of the pessimists.  Read Card’s entire essay to get a better feel for how we can begin to think in that way.

Over a year ago Adam Thierer and Berin Szoka penned [an essay](http://techliberation.com/2009/08/12/cyber-libertarianism-the-case-for-real-internet-freedom/) seeking to define the contours of cyber-libertarianism, and they drew a contrast with the digital commons movement, part of what they called “cyber-collectivism.” They were criticized, however, for not drawing a similar contrast to “cyber-conservatism.” The reason they didn’t do this, Adam [explained](http://techliberation.com/2009/08/12/cyber-libertarianism-the-case-for-real-internet-freedom/#comment-14730877), was because they didn’t “think there really is a coherent ‘cyber-conservative’ movement out there the same way we see a rising ‘Digital Commons’ movement.” I think the reaction to Cablegate might be allowing us to see the outlines of cyber-conservatism a bit better.

The most vocal and strident reaction against Wikileaks has come from folks we can identify as neocons. Aside from demanding that the U.S. hunt down Julian Assange, Charles Krauthammer [wrote](http://www.washingtonpost.com/wp-dyn/content/article/2010/12/02/AR2010120204561.html), “Putting U.S. secrets on the Internet, a medium of universal dissemination new in human history, requires a reconceptualization of sabotage and espionage — and the laws to punish and prevent them.” Meanwhile Marc Thiessen, ignoring the distributed nature of WikiLeaks, [called](http://www.washingtonpost.com/wp-dyn/content/article/2010/12/06/AR2010120603074.html) for the U.S. to “rally a coalition of the willing to defeat WikiLeaks by shutting down its servers and cutting off its finances.” And William Kristol, for his part, [asked rhetorically](http://www.weeklystandard.com/blogs/whack-wikileaks_520462.html), “Why can’t we disrupt and destroy WikiLeaks in both cyberspace and physical space, to the extent possible? Why can’t we warn others of repercussions from assisting this criminal enterprise hostile to the United States?”

I won’t say there’s a fully developed theory of internet policy in these statements, but you can definitely see a rejection of an unregulated internet, not to mention of internet exceptionalism. Information control in the name of security, they seem to argue, is more than justified. And despite his technical [cluelessness](http://www.techdirt.com/articles/20101208/10133512187/how-political-pundits-get-confused-when-they-dont-understand-that-wikileaks-is-distributed.shtml), Marc Thiessen does grasp that pressuring internet intermediaries, like Amazon and PayPal, is an important way to control information.

Milton Mueller, a professor at Syracuse University’s School of Information Studies, is a familiar figure to anyone who follows Internet governance issues.  He has established himself as a leading Net governance guru thanks to his extensive academic record in this field with books like Ruling the Root: Internet Governance and the Taming of Cyberspace (2002) and his work with The Internet Governance Project and the Global Internet Governance Academic Network.  Mueller’s latest book, Networks and States: The Global Politics of Internet Governance, continues his exploration of the forces shaping Internet policy across the globe.

The de Tocqueville of Cyberspace

What Mueller is doing – better than anyone else, in my opinion – is becoming the early chronicler of the unfolding Internet governance scene.  He meticulously reports on, and then deconstructs, ongoing governance developments along the cyber-frontier.  He is, in effect, a sort of de Tocqueville for cyberspace: an outsider looking in and asking questions about what makes this new world tick.  Fifty years from now, when historians look back on the opening era of Internet governance squabbles, Milton Mueller’s work will be among the first things they consult.

Mueller’s goal in Networks and States is two-fold and has both an empirical and normative element.  First, he aims to extend his exploration of the actors and forces affecting Internet governance debates and then develop a framework and taxonomy to better map and understand these forces and actors. He does a wonderful job on that front, even though many Net governance issues (especially those related to domain name system issues and ICANN) can be incredibly boring.  Mueller finds a way to make them far more interesting, especially by helping to familiarize the reader with the personalities and organizations that increasingly dominate these debates and the issues and principles that drive their actions or activism.

Mueller’s second goal in Networks and States is to breathe new life into the old cyber-libertarian philosophy that was more prevalent during the Net’s founding era but has lost favor today.  I plan to discuss this second goal in more detail here because Mueller has done something quite important in Networks and States: He has issued a call to arms to those who care about classical liberalism telling us, in effect, to get off our duffs and get serious about the fight for Internet freedom.

What is a Tech Libertarian?

November 16, 2010

THE MASTER SWITCH was written to be readable and hopefully entertaining.  But its real goal is to urge readers to examine our relationship with all forms of centralized power.  There are deep contradictions between a fully centralized society and a free one; indeed, I am not sure the two can co-exist.  Its message to libertarians is this: if human freedom is truly the value that matters most, we need to pay attention to the size of the institutions that govern the most important human functions.

As this suggests, while my book is designed to be relatively easy reading, at deeper levels it is a meditation on human freedom.  And while this may be unkind, I will respond to Thierer’s review to show how I think that contemporary libertarianism has begun to lose its way and betray its own creed.  Instead of a philosophy of freedom, it is at risk of becoming a theory of villainization, where every single wrong must be traced, somehow, to “government”; to say otherwise is to betray the movement.  To my mind that’s not true libertarianism in the tradition of John Stuart Mill, but just another theology of blame.

(N.B. I am grateful to Adam for inviting me to post a response.  While we disagree in profound ways, I am flattered by his engagement with the book and his readiness to give me space to respond.)

Let’s get to the basics.  I define a libertarian as someone who, at the deepest level, prioritizes freedom over other values.  He is willing to forgo a substantive outcome he might prefer in exchange for a system that gives him freedom.  The classic example, of course, is the speech system: many of us might prefer that certain people shut up forever (pick your favorite), but we nonetheless still support a system of free speech.

Anyone deeply interested in freedom as a value should, by implication, be interested in any non-chosen limits on that freedom, no matter what the source.

An important anniversary just passed with little more notice than an email newsletter about the report that played a pivotal role in causing the courts to strike down the 1998 Child Online Protection Act (COPA) as an unconstitutional restriction on the speech of adults and website operators. (COPA required all commercial distributors of “material harmful to minors” to restrict their sites from access by minors, such as by requiring a credit card for age verification.)

The Congressional Internet Caucus Advisory Committee is pleased to report that even after 10 years of its release the COPA Commission’s final report to Congress is still being downloaded at an astounding rate – between 700 and 1,000 copies a month. Users from all over the world are downloading the report from the COPA Commission, a congressionally appointed panel mandated by the Child Online Protection Act. The primary purpose of the Commission was to “identify technological or other methods that will help reduce access by minors to material that is harmful to minors on the Internet.” The Commission released its final report to Congress on Friday, October 20, 2000.

As a public service the Congressional Internet Caucus Advisory Committee agreed to virtually host the deliberations of the COPA Commission on the Web site COPACommission.org. The final posting to the site was the actual COPA Commission final report making it available for download. In the subsequent 10 years it is estimated that close to 150,000 copies of the report have been downloaded.

The COPA Report played a critical role in fending off efforts to regulate the Internet in the name of “protecting our children,” and marked a shift toward focusing on what, in First Amendment caselaw, are called “less restrictive” alternatives to regulation. This summary of the report’s recommendations bears repeating:

After consideration of the record, the Commission concludes that the most effective current means of protecting children from content on the Internet harmful to minors include: aggressive efforts toward public education, consumer empowerment, increased resources for enforcement of existing laws, and greater use of existing technologies. Witness after witness testified that protection of children online requires more education, more technologies, heightened public awareness of existing technologies and better enforcement of existing laws.