November 2008

A few months ago, I penned a mega book review about the growing divide between “Internet optimists and pessimists.” I noted that the Internet optimists — people like Chris Anderson, Clay Shirky, Yochai Benkler, Kevin Kelly, and others — believe that the Internet is generally changing our culture, economy, and society for the better. They believe the Net has empowered and liberated the masses, sparked unparalleled human creativity and communication, provided greater personalization and customization of media content, and created greater diversity of thought and a more deliberative democracy. By contrast, the Internet pessimists — including Nick Carr, Andrew Keen, Lee Siegel, and others — argue that the Internet is destroying popular culture and professional media, that it calls “truth” and “authority” into question by over-glamorizing amateurism and user-generated content, and that increased personalization damages deliberative democracy by encouraging homogenization, closed-mindedness, and online echo chambers. Needless to say, it’s a very heated debate!

I am currently working on a greatly expanded version of my “Net optimists vs. pessimists” essay for a magazine, in which I will draw out more of these distinctions and weigh the arguments made by both camps. I plan to conclude that article by arguing that the optimists generally have the better of the argument, but that the pessimists make some fair points about the downsides of the Net’s radical disintermediation of culture and the economy.

So this got me thinking that I needed to come up with some sort of label for my middle-of-the-road position, as well as a statement of my personal beliefs. As far as labels go, I guess I would call myself a “pragmatic optimist,” since I generally side with the optimists in most of these debates, but not without some occasional reservations. Specifically, I don’t always subscribe to the Pollyanna-ish, rose-colored view of the world that some optimists seem to adopt. But the outright Chicken Little-like Luddism of some Internet pessimists is even more over-the-top at times. Anyway, what follows is my “Pragmatic (Internet) Optimist’s Creed,” which better explains my views. (Again, read my old essay first for some context about the relevant battle lines in this intellectual war.)

Continue reading →

What’s the right way to allocate the airwaves? For years and years and years, the governing policy of federal communications was that the electromagnetic spectrum was too “scarce” to be left to the devices of the marketplace. As I wrote in a piece occasioned by the rise of indecency enforcement, this reasoning has always lacked substance:

Congress began regulating broadcasters in 1927 on the grounds of scarcity. In return for free and exclusive use of a given wavelength, broadcasters agreed to serve the “public interest, convenience, and necessity” — or at least to do what Congress and the FCC ordered. One element of this agreement was a ban on obscene, indecent and profane language.

This scarcity theory has always lacked substance. Nobel Prize-winning economist Ronald Coase’s reputation is based, in part, on a notable paper he wrote in 1959 that criticized the rationale behind the FCC’s command and control regime of licensing broadcasters. “It is a commonplace of economics that almost all resources in the economic system (and not simply radio and television frequencies) are limited in amount and scarce, in that people would like to use more than exists,” Coase argued in his seminal essay.

From “Shouldn’t FCC Rules Over Indecency Just Grow Up? Reflections on Free Speech and Converging Media”

The FCC eventually came to realize that it could endow electromagnetic frequencies with property-rights-like characteristics. In 1993, under Bill Clinton and a Democratic Congress, the United States finally moved to such a system — at least for the frequencies used by cell-phone operators. As in so many other ways, broadcasters have remained immune to historical trends.

This backdrop is important for understanding our current moment in wireless policy. Tomorrow, Wednesday, November 12, at 4 p.m., those near Washington can gain insight into how other nations have approached radio frequency regulation. The Information Economy Project at the George Mason University School of Law (disclosure: I hold a part-time position there as Assistant Director) will host its next “Big Ideas About Information Lecture,” featuring an address by Dr. William Webb, a top policymaker at OFCOM, the U.K. telecommunications regulator.

OFCOM’s ambitious liberalization strategy, announced in 2004, permits the large majority of valuable frequencies to be used freely by competitive licensees, offering an exciting and informative experiment in public policy.  Dr. Webb’s lecture, “Spectrum Reform: A U.K. Regulator’s Perspective,” will offer a timely progress report for the American audience.

Continue reading →

My friend Louisa Gilder’s brand new book The Age of Entanglement: When Quantum Physics Was Reborn arrived in the mail from Amazon today.

Matt Ridley, author of Genome, says:

Louisa Gilder disentangles the story of entanglement with such narrative panache, such poetic verve, and such metaphysical precision that for a moment I almost thought I understood quantum mechanics.

The cover art alone is spectacular. Can’t wait to crack it open tonight.

Me around the Web


Over at Ars Technica, the final installment of my story on self-driving cars is up. This one focuses on the political and regulatory aspects of self-driving technologies. In particular, I offer three suggestions for the inevitable self-driving regulatory regime:

Three principles should govern the regulation of self-driving cars. First, it’s important to ensure that regulation be a complement to, rather than a substitute for, liability for accidents. Private firms will always have more information than government regulators about the safety of their products, and so the primary mechanism for ensuring car safety will always be manufacturers’ desires to avoid liability. Tort law gives carmakers an important, independent incentive to make safer cars. So while there may be good arguments for limiting liability, it would be a mistake to excuse regulated auto manufacturers from tort liability entirely.

Second, regulators should let industry take the lead in developing the basic software architecture of self-driving technologies. The last couple of decades have given us many examples of high-tech industries converging on well-designed technical standards. It should be sufficient for regulators to examine these standards after they have been developed, rather than trying to impose government-designed standards on the industry.

Finally, regulators need to bear in mind that too much regulation can be just as dangerous as too little. If self-driving cars will save lives, then delaying their introduction can kill just as many people as approving a dangerous car can. Therefore, it’s important that regulators focus narrowly on safety and that they don’t impose unrealistically high standards. If self-driving software can be shown to be at least as safe as the average human driver, it should be allowed on the road.

Meanwhile, Josephine Wolff at the Daily Princetonian was kind enough to quote me in an article about self-driving technologies. For the record, I was exaggerating a bit when I said “The only reasons there are pilots is because people feel safer with pilots.” Most aspects of flying can be done on autopilot, but I’m not sure we’re at the point where you could literally turn on the autopilot, close the cockpit door, and let the plane take you to the destination.

And if any TLF readers are in the Princeton area, I hope you’ll come to my talk on the future of self-driving technology, which will be a week from Thursday.

Finally, over at Techdirt, I’ve got the final installment of my series (1 2 3 4) on network neutrality regulation. I’ve got a new Cato Policy Analysis coming out later this week that will expand on many of the themes of those posts. Stay tuned.

There’s much to discuss as Obama shapes his administration (more on this at OpenMarket.org), but arguably one of the most important unanswered questions is who Obama will pick to staff the Federal Communications Commission.

CNET reports that Henry Rivera, a lawyer and former FCC Commissioner, has been selected to head the transition team tasked with reshaping the FCC. This selection gives us a glimpse of what the FCC’s agenda will look like under Obama, and it’s quite troubling.

Rivera has embraced a media “reform” agenda aimed at promoting minority ownership of broadcast media outlets. A couple of weeks ago, Rivera sent a letter to the FCC backing rules, originally conceived by the Media Access Project, that would create a new class of stations available only to “small and distressed businesses” (SDBs). These S-Class stations would be authorized to sublease digital spectrum and formulate must-carry programming, with the caveat that only half of the content could be “commercial.” To avoid the constitutional issues surrounding racial quotas, eligibility for SDB classification would be based on economic status rather than the racial composition of would-be station owners.

The S-Class proposal, like other media reform proposals, falsely assumes that current owners of media outlets are failing to meet the demands of their audience for a diverse range of content. The proposal also ignores the fact that consumers already enjoy an abundance of voices from all viewpoints, as we’ve discussed extensively here on TLF.

Continue reading →

The conventional Beltway wisdom would be that net neutrality legislation should have a real chance now, with the election of President-elect Barack Obama and strengthened Democratic majorities in the Senate and House.

But two recent developments make the case for net neutrality regulation less compelling.

Free Airwaves

The Federal Communications Commission has approved the operation of unlicensed wireless devices in broadcast television spectrum on a secondary basis at locations where that spectrum is open, i.e., the television “white spaces.” In other words, a vast amount of spectrum will soon be available to provide broadband data and other services, and the spectrum will be free.

George Mason University Professor Thomas W. Hazlett notes that

[S]ome 250 million mobile subscribers in the US paid about $140 billion to make 2 trillion minutes’ worth of phone calls in 2007, accessing just 190MHz of radio spectrum. The digital TV band, in contrast, is allocated some 294MHz—and it’s more productive bandwidth. Tapping into this mother lode would unleash powerful waves of rivalry and innovation.

Continue reading →

With the publication of Understanding Privacy (Harvard University Press 2008), George Washington University Law School professor Daniel J. Solove has firmly established himself as one of America’s leading intellectuals in the field of information policy and cyberlaw.  Solove was already a force to be reckoned with in this field thanks to important books like The Future of Reputation: Gossip, Rumor, and Privacy on the Internet (Yale University Press 2007), The Digital Person: Technology and Privacy in the Information Age (NYU Press 2004), and his treatise on Information Privacy Law with Paul M. Schwartz of the Berkeley School of Law (Aspen Publishing, 2d ed. 2006).  But with Understanding Privacy, Solove has elevated himself to that rarefied air of “people worth watching” in the cyberlaw field: an intellectual, like Lawrence Lessig or Jonathan Zittrain, whose every publication becomes something of an event to which all eyes in the field turn upon release.

As with those other intellectuals, however, my respect for Solove’s stature should not be confused with agreement with his positions.  In fact, my disagreements with Lessig and Zittrain are frequently on display here, and we have been critical of Solove in the past as well. [Here’s Jim Harper’s review of Solove’s last book, with which I am in wholehearted agreement.]  In a similar vein, although I greatly appreciate what Prof. Solove attempts to accomplish in Understanding Privacy — and I am sure it will change the way we conceptualize and debate privacy policy in the future — I found his approach and conclusions highly problematic.

Continue reading →

Good editorial in the Boston Globe today about “The Dangers of Internet Censorship” by Harry Lewis, a professor of computer science at Harvard and a fellow at Harvard’s Berkman Center for Internet and Society. Lewis argues that:

Determining which ideas are “harmful” is not the government’s job. Parents should judge what information their children should see – and should expect that older children will, as they always have, find ways around restrictive rules.

Worth reading the whole thing. Incidentally, Harry Lewis is the co-author of an interesting new book I am reading right now, Blown to Bits: Your Life, Liberty, and Happiness After the Digital Explosion. I’m going to try to review it here eventually.

Great post over on the Tor blog about how “anonymity on the Internet is not going away.” This is a subject I care about deeply. Here, for example, is an essay I wrote about mandatory age verification and the threat it poses to online anonymity.  I love this paragraph from the Tor essay, and agree with it wholeheartedly:

Anonymity is a defense against the tyranny of the majority. There are many, many valid uses of anonymity tools, such as Tor. The belief that anonymous tools exist only for the edges of societies is narrow-minded. The tools exist and are used by all. Much like the Internet, the tools can be used for good or bad. The negative uses of such tools typically generate huge headlines, but not the positive uses. Raising the profile of the positive uses of anonymity tools, such as Tor, is one of our challenges.

Amen, brother.

“Take Up the Flame”


The fight for freedom has seen brighter days, I grant. I think it will see still brighter days yet, though, if we can encourage another generation to join the cause. Towards that end, I wrote a song, “Take Up the Flame.”

As the song’s credits indicate, I’ve dedicated the song to my old friend and mentor, Walter E. Grinder—one of the many people who inspired me to take up “the flame.” I originally planned to debut the song at a conference organized by the West Coast chapter of Students for Liberty, to be held at Stanford University in mid-November. I figured that Walter, who lives nearby, could hear the tune in person. That meeting got cancelled, alas. Not to be deterred, though, I’m now distributing the song virtually.

The song’s credits also indicate that I’ve released it under a Creative Commons Attribution-Noncommercial 3.0 Unported License, and made the lyrics and chords freely available for downloading. It would delight me if somewhere, someday, “Take Up the Flame” helped to raise the spirits of young folks rallying for the Good Fight. (Although I don’t imagine anyone will find much reason to license the song commercially, I’ve also stipulated that any such licensee must agree to tithe a portion of the proceeds—10% of income, traditionally—to the Institute for Humane Studies, an organization that has long taught students about liberty.) Sing it loudly and proudly, friends of freedom!

[Crossposted at Agoraphilia and Technology Liberation Front.]