Philosophy & Cyber-Libertarianism

I have always struggled with the work of media theorist Marshall McLuhan. I find it equal parts confusing and compelling; persuasive at times and utterly perplexing elsewhere. I just can’t wrap my head around him, and yet I can’t stop coming back to him.

Today would have been his 100th birthday. He died in 1980, but he’s just as towering a figure today as he was during his own lifetime. His work is eerily prescient and speaks to us as if written yesterday instead of decades ago. Take, for example, McLuhan’s mind-blowing 1969 interview with Playboy. [PDF] The verve is awe-inspiring, but much of the substance is simply impenetrable. Regardless, it serves as perhaps the best introduction to McLuhan’s work. I strongly encourage you to read the entire thing. The questions posed by interviewer Eric Norden are brilliant and bring out the best in McLuhan.

I was re-reading the interview while working on a chapter for my next book on Internet optimism and pessimism, a topic I’ve spent a great deal of time pondering here in the past. Toward the end of the interview, McLuhan is asked by Norden to respond to some of his critics. McLuhan responds in typically brilliant, colorful fashion: Continue reading →

Copyrights and patents differ from tangible property in fundamental ways. Economically speaking, copyrights and patents are not rivalrous in consumption; whereas all the world can sing the same beautiful song, for instance, only one person can swallow a cool gulp of iced tea. Legally speaking, copyrights and patents exist only thanks to the express terms of the U.S. Constitution and various statutory enactments. In contrast, we enjoy tangible property thanks to common law, customary practices, and nature itself. Even birds recognize property rights in nests. They do not, however, copyright their songs.

Those represent but some of the reasons I have argued that we should call copyright an intellectual privilege, reserving property for things that deserve the label. Another, related reason: Calling copyright property risks eroding that valuable service mark.

Property as a service mark, like FedEx or Hooters? Yes. Thanks to long use, property has come to represent a distinct set of legal relations, including hard and fast rules relating to exclusion, use, alienation, and so forth. Copyright embodies those characteristics imperfectly, if at all. To call it intellectual property risks confusing consumers of legal services—citizens, attorneys, academics, judges, and lawmakers—about the nature of copyright. Worse yet, it confuses them about the nature of property. The property service mark suffers not merely dilution from copyright’s infringing use, but tarnishment, too.

As proof of how copyright threatens to erode property, consider Ben Depoorter, Fair Trespass, 111 Colum. L. Rev. 1090 (2011). From the abstract:

Trespass law is commonly presented as a relatively straightforward doctrine that protects landowners against intrusions by opportunistic trespassers. . . . This Essay . . . develops a new doctrinal framework for determining the limits of a property owner’s right to exclude. Adopting the doctrine of fair use from copyright law, the Essay introduces the concept of “fair trespass” to property law doctrine. When deciding trespass disputes, courts should evaluate the following factors: (1) the nature and character of the trespass; (2) the nature of the protected property; (3) the amount and substantiality of the trespass; and (4) the impact of the trespass on the owner’s property interest. . . . [T]his novel doctrine more carefully weighs the interests of society in access against the interests of property owners in exclusion.

Although I do not agree with every aspect of Prof. Depoorter’s doctrinal analysis, he correctly observes that trespass law includes some fuzzy bits. Nor do I complain about his overall form of argument. It is not a tack I would take, but it was near-inevitable that some legal scholar would eventually argue back from copyright to claim that real property, too, should fall prey to a multi-factor, fact-intensive “fair use” defense. I merely take this opportunity to remind fellow friends of liberty that they can expect more of the same—and more erosion of the property service mark—if they fail to recognize copyrights and patents as no more than intellectual privileges.

[Crossposted at Agoraphilia, Technology Liberation Front, and Intellectual Privilege.]

John Perry Barlow famously said that in cyberspace, the First Amendment is just a local ordinance.  That’s still true, of course, and worth remembering.  But at least today there is good news in the shire.  The local ordinance still applies with full force, if only locally.

As I write in CNET this evening (see “Video Games Given Full First Amendment Protection”), the U.S. Supreme Court issued a strong and clear opinion today nullifying California’s 2005 law prohibiting the sale or rental to minors of what the state deemed “violent video games.” Continue reading →

I enjoyed this Wall Street Journal essay by Daniel H. Wilson on “The Terrifying Truth About New Technology.”  It touches on many of the themes I’ve discussed here in my essays on techno-panics, fears about information overload, and the broader battle throughout history between technology optimists and pessimists regarding the impact of new technologies on culture, life, and learning. Wilson correctly notes that:

The fear of the never-ending onslaught of gizmos and gadgets is nothing new. The radio, the telephone, Facebook — each of these inventions changed the world. Each of them scared the heck out of an older generation. And each of them was invented by people who were in their 20s.

He continues:

Young people adapt quickly to the most absurd things. Consider the social network Foursquare, in which people not only willingly broadcast their location to the world but earn goofy virtual badges for doing so. My first impulse was to ignore Foursquare—for the rest of my life, if I have to.

And that’s the problem. As we get older, the process of adaptation slows way down. Unfortunately, we depend on alternating waves of assimilation and accommodation to adapt to a constantly changing world. For [developmental psychologist Jean] Piaget, this balance between what’s in the mind and what’s in the environment is called equilibrium. It’s pretty obvious when equilibrium breaks down. For example, my grandmother has phone numbers taped to her cellphone. Having grown up with the Rolodex (a collection of numbers stored next to the phone), she doesn’t quite grasp the concept of putting the numbers in the phone.

Why are we so nostalgic about the technology we grew up with? Old people say things like: “This new technology is stupid. I liked (new, digital) technology X better when it was called (old, analog) technology Y. Why, back in my day….” Which leads inexorably to, “I just don’t get it.”

There’s a simple explanation for this phenomenon: “adventure window.” At a certain age, that which is familiar and feels safe becomes more important to you than that which is new, different, and exciting. Think of it as “set-in-your-ways syndrome.”

Continue reading →

John Naughton, a professor at the Open University in the U.K. and a columnist for the U.K. Guardian, has a new essay out entitled “Only a Fool or Nicolas Sarkozy Would Go to War with Facebook.” I enjoyed it because it touches upon two interrelated concepts that I’ve spent years writing about: “moral panic” and the “third-person effect hypothesis” (although Naughton doesn’t discuss the latter by name in his piece). To recap, let’s define those terms:

“Moral Panic” / “Techno-Panic”: Christopher Ferguson, a professor at Texas A&M’s Department of Behavioral, Applied Sciences and Criminal Justice, offers the following definition: “A moral panic occurs when a segment of society believes that the behavior or moral choices of others within that society poses a significant risk to the society as a whole.” By extension, a “techno-panic” is simply a moral panic that centers around societal fears about a specific contemporary technology (or technological activity) instead of merely the content flowing over that technology or medium.

“Third-Person Effect Hypothesis”: First formulated by psychologist W. Phillips Davison in 1983, “this hypothesis predicts that people will tend to overestimate the influence that mass communications have on the attitudes and behavior of others. More specifically, individuals who are members of an audience that is exposed to a persuasive communication (whether or not this communication is intended to be persuasive) will expect the communication to have a greater effect on others than on themselves.” While originally formulated as an explanation for how people convinced themselves “media bias” existed where none was present, the third-person-effect hypothesis has provided an explanation for other phenomena and forms of regulation, especially content censorship. Indeed, one of the most intriguing aspects of censorship efforts historically is that many censorship advocates desire regulation to protect others, not themselves, from what they perceive to be persuasive or harmful content. That is, many people imagine themselves immune from the supposedly ill effects of “objectionable” material, or even just persuasive communications or viewpoints they do not agree with, but they claim it will have a corrupting influence on others.

All my past essays about moral panics and third-person effect hypothesis can be found here. These theories are also frequently on display in the work of some of the “Internet pessimists” I have written about here, as well as in many bills and regulatory proposals floated by lawmakers. Which brings us back to the Naughton essay.

Continue reading →

In my work critiquing the Lessig-Zittrain-Wu school of thinking, which fears the decline and fall of online “openness” and digital “generativity,” I have argued that, while there is no such thing as perfect “openness,” things are actually getting more open and generative all the time. All that really counts from my perspective is that we are witnessing healthy innovation across the generativity continuum.

Will some devices and platforms continue to be “closed”? Sure. Think Apple and cable set-top boxes. But (a) there’s a ton of innovation taking place on top of those supposedly “closed” platforms and (b) there are other options consumers can exercise if they don’t like those content/information delivery methods. [See this chapter from the Next Digital Decade book for my fuller critique.]

And, even if one adopts a rigid Zittrainian view of openness and generativity, each day seems to bring more good news. From that perspective it’s hard to find a better headline than this one: “Smartphone Makers Bow to Demands for More Openness.” That’s from Ars Technica today and it refers to the fact that smartphone giant HTC just announced it would no longer attempt to lock the bootloader on its smartphones, meaning geeks like me can root and hack our devices to our hearts’ content. As the Ars story notes:

Continue reading →

One of my favorite topics lately has been the challenges faced by information control regimes. Jerry Brito and I are writing a big paper on this issue right now. Part of the story we tell is that the sheer scale and volume of modern information flows is becoming so overwhelming that it raises practical questions about just how effective any info control regime can be. [See our recent essays on the topic: 1, 2, 3, 4, 5.] As we continue our research, we’ve been attempting to unearth some good metrics and factoids to help tell this story. It’s challenging because there aren’t many consistent data sets depicting online data growth over time, and some of the best anecdotes from key digital companies are only released sporadically. Anyway, I’d love to hear from others about good metrics and data sets that we should be examining. In the meantime, here are a few fun facts I’ve unearthed in my research so far. Please let me know if more recent data is available. [Note: Last updated 7/18/11]

  • Facebook: users submit around 650,000 comments on the 100 million pieces of content served up every minute on its site.[1]  People on Facebook install 20 million applications every day.[2]
  • YouTube: every minute, 48 hours of video are uploaded.  According to Peter Kafka of The Wall Street Journal, “That’s up 37 percent in the last six months, and 100 percent in the last year. YouTube says the increase comes in part because it’s easier than ever to upload stuff, and in part because YouTube has started embracing lengthy live streaming sessions. YouTube users are now watching more than 3 billion videos a day. That’s up 50 percent from the last year, which is also a huge leap, though the growth rate has declined a bit: Last year, views doubled from a billion a day to two billion in six months.”[3]
  • eBay is now the world’s largest online marketplace with more than 90 million active users globally and $60 billion in transactions annually, or $2,000 every second.[4]
  • Google: 34,000 searches per second (2 million per minute; 121 million per hour; 3 billion per day; 88 billion per month).[5]
  • Twitter already has 300 million users producing 140 million Tweets a day, which adds up to a billion Tweets every 8 days[6] (about 1,600 Tweets per second). “On the first day Twitter was made available to the public, 224 tweets were sent. Today, that number of updates are posted at least 10 times a second.”[7]
  • Apple: more than 10 billion apps have been downloaded from its App Store by customers in over 77 countries.[8] According to Chris Burns of SlashGear, “Currently it appears that another thousand apps are downloaded every 9 seconds in the Android Marketplace while every 3 seconds another 1,000 apps are downloaded in the App Store.”
  • Yelp: as of July 2011 the site hosted over 18 million user reviews.[9]
  • Wikipedia: Every six weeks, there are 10 million edits made to Wikipedia.[10]
  • “Humankind shared 65 exabytes of information in 2007, the equivalent of every person in the world sending out the contents of six newspapers every day.”[11]
  • Researchers at the San Diego Supercomputer Center at the University of California, San Diego, estimate that, in 2008, the world’s 27 million business servers processed 9.57 zettabytes, or 9,570,000,000,000,000,000,000 bytes of information.  This is “the digital equivalent of a 5.6-billion-mile-high stack of books from Earth to Neptune and back to Earth, repeated about 20 times a year.” The study also estimated that enterprise server workloads are doubling about every two years, “which means that by 2024 the world’s enterprise servers will annually process the digital equivalent of a stack of books extending more than 4.37 light-years to Alpha Centauri, our closest neighboring star system in the Milky Way Galaxy.”[12]
  • According to Dave Evans, Cisco’s chief futurist and chief technologist for the Cisco Internet Business Solutions Group, about 5 exabytes of unique information were created in 2008. That’s 1 billion DVDs. Fast forward three years and we are creating 1.2 zettabytes, with one zettabyte equal to 1,024 exabytes. “This is the same as every person on Earth tweeting for 100 years, or 125 million years of your favorite one-hour TV show,” says Evans. Our love of high-definition video accounts for much of the increase. By Cisco’s count, 91% of Internet data in 2015 will be video.[13]
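The scaling arithmetic behind a few of these figures is easy to check. Here’s a minimal sketch, using only the constants quoted in the list above and the binary convention (1 zettabyte = 1,024 exabytes) that Evans uses; it is illustrative arithmetic, not an independent data source:

```python
# Sanity-check the per-second and per-day scaling behind the figures above.
SECONDS_PER_DAY = 60 * 60 * 24  # 86,400

# Google: 34,000 searches per second, scaled up.
google_per_sec = 34_000
google_per_minute = google_per_sec * 60              # 2,040,000 -> "2 million per minute"
google_per_day = google_per_sec * SECONDS_PER_DAY    # 2,937,600,000 -> "3 billion per day"

# Twitter: 140 million Tweets a day, expressed per second.
tweets_per_day = 140_000_000
tweets_per_sec = tweets_per_day / SECONDS_PER_DAY    # ~1,620 -> "about 1,600 per second"

# Storage units, binary convention: 1 EB = 2**60 bytes, 1 ZB = 1,024 EB.
EXABYTE = 2**60
ZETTABYTE = 1024 * EXABYTE

print(f"Google searches per day: {google_per_day:,}")
print(f"Tweets per second: {tweets_per_sec:,.0f}")
print(f"1.2 ZB expressed in EB: {1.2 * ZETTABYTE / EXABYTE:,.1f}")
```

Running the numbers this way shows why the rounded figures line up: 34,000 searches per second works out to roughly 2.94 billion a day (quoted as “3 billion”), and 140 million Tweets a day is about 1,620 per second.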


[1]     Ken Deeter, “Live Commenting: Behind the Scenes,” Facebook.com, February 7, 2011, http://www.facebook.com/note.php?note_id=496077348919.

[4]     eBay, “Who We Are,” http://www.ebayinc.com/who.

[5]     Matt McGee, “By The Numbers: Twitter Vs. Facebook Vs. Google Buzz,” SearchEngineLand, February 23, 2010, http://searchengineland.com/by-the-numbers-twitter-vs-facebook-vs-google-buzz-36709.

[7]     Nicholas Jackson, “Infographic: A Look at Twitter’s Explosive Five-Year History,” The Atlantic, July 18, 2011, http://www.theatlantic.com/technology/archive/2011/07/infographic-a-look-at-twitters-explosive-five-year-history/242070.

[9]     “10 Things You Should Know about Yelp,” Yelp.com, http://www.yelp.com/about [accessed July 18, 2011].

[10]   “Wikipedia: Edit Growth Measured in Time between Every 10,000,000th Edit,” http://en.wikipedia.org/wiki/User:Katalaveno/TBE.

[11]   Martin Hilbert and Priscila López, “The World’s Technological Capacity to Store, Communicate, and Compute Information,” Science, February 10, 2011, http://annenberg.usc.edu/News%20and%20Events/News/110210Hilbert.aspx.

[12]   Rex Graham, “Business Information Consumption: 9,570,000,000,000,000,000,000 Bytes per Year,” UC San Diego News Center, April 6, 2011, http://ucsdnews.ucsd.edu/newsrel/general/04-05BusinessInformation.asp.

[13]   Julie Bort, “10 Technologies That Will Change the World in the Next 10 Years,” Network World, July 15, 2011, http://m.networkworld.com/news/2011/071511-cisco-futurist.html?page=1.

Hanno F. Kaiser, a U.S. and EU antitrust lawyer and partner with Latham & Watkins LLP, has just released an important essay on a topic I have devoted much time to here over the years: the debate over the relative advantages of “open” vs. “closed” technological systems and the Lessig-Zittrain-Wu school of thinking about these issues.

Kaiser’s essay is entitled “Are Closed Systems an Antitrust Problem?” and it appears in the latest edition of Competition Policy International. This essay is not to be missed. Kaiser’s terrific paper helps us better understand and debunk many of the myths and misperceptions that continue to riddle this debate. Here’s Kaiser’s key insight:

At bottom, the bad reputation of closed systems or walled gardens in the “open versus closed” debate is quite undeserved. Walled gardens generally benefit their environments—both in the real world and the digital realm. The primary purpose of a garden wall, after all, is to shelter plants from wind and frost, not to keep intruders out. In the protected space of the garden, flowers can grow that would not otherwise survive in the wild. Walled gardens thus deliberately create a microcosm that is different from the surrounding ecosystem. Therefore, as long as the garden does not take over the entire ecosystem, walled gardens increase, not reduce, overall diversity. From a competition policy perspective, enjoying the fruits of a walled garden is generally not a guilty pleasure.

Therefore, “as a policy matter, ‘open’ is not necessarily better than ‘closed’,” Kaiser argues, and elaborates as follows: Continue reading →

I’ve spent a great deal of time here defending “techno-optimism” or “Internet optimism” against various attacks through the years, so I was interested to see Cory Doctorow, a novelist and Net activist, take on the issue in a new essay at Locus Online. I summarized my own views on this issue in two recent book chapters. Both chapters appear in The Next Digital Decade and are labeled “The Case for Internet Optimism.” Part 1 is subtitled “Saving the Net From Its Detractors” and Part 2 is subtitled “Saving the Net From Its Supporters.” More on my own thoughts in a moment. But let’s begin with Doctorow’s conception of the term.

Doctorow defines “techno-optimism” as follows:

In order to be an activist, you have to be… pessimistic enough to believe that things will get worse if left unchecked, optimistic enough to believe that if you take action, the worst can be prevented. […]

Techno-optimism is an ideology that embodies the pessimism and the optimism above: the concern that technology could be used to make the world worse, the hope that it can be steered to make the world better.

What this definition suggests is that Doctorow has a very clear vision of what constitutes “good” vs. “bad” technology or technological developments. He turns to that dichotomy next as he seeks to essentially marry “techno-optimism” to a devotion to the free/open software movement and a rejection of “proprietary technology”: Continue reading →

When it comes to information control, everybody has a pet issue and everyone will be disappointed when law can’t resolve it. I was reminded of this truism while reading a provocative blog post yesterday by computer scientist Ben Adida entitled “(Your) Information Wants to be Free.” Adida’s essay touches upon an issue I have been writing about here a lot lately: the complexity of information control — especially in the context of individual privacy. [See my essays on “Privacy as an Information Control Regime: The Challenges Ahead,” “And so the IP & Porn Wars Give Way to the Privacy & Cybersecurity Wars,” and this recent FTC filing.]

In his essay, Adida observes that:

In 1984, Stewart Brand famously said that information wants to be free. John Perry Barlow reiterated it in the early 90s, and added “Information Replicates into the Cracks of Possibility.” When this idea was applied to online music sharing, it was cool in a “fight the man!” kind of way. Unfortunately, information replication doesn’t discriminate: your personal data, credit cards and medical problems alike, also want to be free. Keeping it secret is really, really hard.

Quite right. We’ve been debating the complexities of information control in the Internet policy arena for the last 20 years and I think we can all now safely conclude that information control is hugely challenging regardless of the sort of information in question. As I’ll note below, that doesn’t mean control is impossible, but the relative difficulty of slowing or stopping information flows of all varieties has increased exponentially in recent years.

But Adida’s more interesting point is the one about the selective morality at play in debates over information control. That is, people generally expect or favor information freedom in some arenas, but then get pretty upset when they can’t crack down on information flows elsewhere. Indeed, some people can get downright religious about the whole “information-wants-to-be-free” thing in some cases and then, without missing a beat, turn around and talk like information totalitarians in the next breath. Continue reading →