This catfight between Ron Rosenbaum of Slate and Jeff Jarvis of Buzz Machine about the future of journalism in the Internet Age is quite a heated affair. But what I found most interesting about it is that it reflects one element of the Net “optimist — pessimist” divide that I have been writing about here recently. Specifically, it touches on the divide over whether the Internet and digital technologies are reshaping the media marketplace and the field of journalism for better or for worse.

Rosenbaum is playing the pessimist role here and asking some sharp questions about the advice being dished out by “Web futurists” and “new-media gurus” as it relates to reversing the decline of the journalism profession. Rosenbaum says that the problem with Jarvis is that:

he’s become increasingly heartless about the reporters, writers, and other “content providers” who have been put out on the street by the changes in the industry. Not only does he blame the victims, he denies them the right to consider themselves victims. They deserve their miserable fate — and if they don’t know it, he’ll tell them why at great length. Sometimes it sounds as if he’s virtually dancing on their graves.


My new network neutrality paper has prompted a cogent criticism from Steve Schultze at Harvard’s Berkman Center. Since Steve has helpfully broken his critique up into two parts, I’ll deal with them in turn. Here’s his first point:

The gating question is whether or not the elements of the Interstate Commerce Commission that led to the inefficiencies that Lee claims are at all related to the non-discriminatory language that he claims connect the two. If and only if the answer is “yes,” then a responsible analysis would consider whether or not the markets are relatively analogous, whether or not the administrative agencies tend toward the same failures, and whether the costs of regulation truly outweigh the benefits. In short, it is not enough to simply assert that net neutrality smells like the ICA, therefore it is doomed to fail.

I think this rather badly misunderstands the thrust of my argument with respect to the ICC (and the CAB and FCC). I’m absolutely not claiming that enacting network neutrality regulations will lead to exactly the same policy failures that befell the ICC. This would be a silly thing to argue, especially since (we hope) policymakers learn from their mistakes and take steps to avoid the precise mistakes they made in the past.

So my point is not that “net neutrality smells like the ICA, therefore it is doomed to fail.” Let me re-state my point this way: imagine putting yourself in the shoes of an average member of Congress in 1887. You’re worried about the monopolistic behavior of the railroads, and you’re about to vote on legislation that will require rates to be reasonable, non-discriminatory, and so forth. You would be extremely surprised to learn that the commission whose creation you just supported would wind up working primarily to limit competition and line the pockets of incumbent railroads. That’s not what the legislation said it would do, and it’s not what you intended to accomplish, but it’s what actually happened.

Now imagine it’s 2009, and you’re a member of Congress. You’re worried about the monopolistic behavior of the telcos, and you’re about to vote on legislation that will require their routing policies to be reasonable, non-discriminatory, and so forth. My point is simply that there’s a significant probability that the practical effect of that legislation will be very different from what you or its authors intended, and that history tells us the regulatory process has a systematic bias in favor of well-organized incumbents and against poorly organized consumers. So if you’re going to give a regulatory agency more power, you’d better be absolutely certain you know what you’re doing, because any mistakes are likely to benefit industry incumbents at the expense of consumers.

What specific problems will we have? Well, it’s hard to say. That’s why it’s called “unintended consequences.” If we could predict exactly how legislation would be applied, the argument for regulation would be a lot stronger. My point is that precisely because it’s hard to predict how regulation will be applied, and because industry incumbents have more influence than the rest of us, we shouldn’t be too cavalier about giving regulators more power.

With that caveat in mind, I do point to some aspects of popular network neutrality proposals that could lead to problems. Most importantly, I have yet to see anyone produce a clear and unambiguous definition of network neutrality. Indeed, network neutrality advocates disagree among themselves about such issues as prioritization and DNS servers. Legal ambiguity creates a variety of problems: it leaves broad discretion in the hands of regulators, and it makes it harder for private-sector actors to determine what the law requires of them.

But to demand that I predict exactly what problems network neutrality legislation will cause is to miss the point. One of the biggest reasons we should be reluctant to enact network neutrality regulation is that legislation often has unintended consequences. Now, obviously that doesn’t mean that regulation is never a good idea, but it does mean that we should regard regulation as a last resort for dealing with clear problems we can’t solve in other ways. It’s not a good way to deal with the kind of highly speculative threats that are the bread and butter of network neutrality activists.

Tim Lee’s long-anticipated Cato Institute Policy Analysis was released today.

The Durable Internet: Preserving Network Neutrality without Regulation is a must-read for people on both sides of the debate over network neutrality regulation.

What I like best about this paper is how Tim avoids joining one “team” or the other. He even-handedly gives each side its due – each side is right about some things, after all – and calls out the specific instances where he thinks each is wrong.

Tim makes the case for treating the “end-to-end principle” as an important part of the Internet’s fundamental design. He disagrees with the people who argue for a network with “smarter” innards, and he believes that neutrality advocates seek the best engineering for the network. But they are wrong to believe that the network is fragile or susceptible to control: the Internet’s end-to-end architecture is durable, even if it is not an absolute.

Tim has history lessons for those who believe that regulatory control of network management will have salutary effects. Time and time again, regulatory agencies have fallen into the service of the industries they regulate.

“In 1970,” Tim tells us, “a report released by a Ralph Nader group described the [Interstate Commerce Commission] as ‘primarily a forum at which transportation interests divide up the national transportation market.'” Such is the likely fate of the Internet were management of it given to regulators at the FCC and their lobbyist friends at Verizon, AT&T, Comcast, and so on.

This paper has something for everyone, and will be a reference work as the network neutrality discussion continues. Highly recommended: The Durable Internet: Preserving Network Neutrality without Regulation.

ZDNet ran a story last week reporting that security guru Bruce Schneier slams the US-VISIT program, which collects biometrics from people entering the country, saying that it has “zero benefit.”

I respect and like Bruce – he will be a participant in a major counterterrorism strategy conference we are having at the Cato Institute in January – but I have to voice my disagreement with him on this score. My belief is that border biometrics have an extremely small benefit – a benefit that rounds to zero, and one that is more than cancelled out by the costs. But not zero.


NebuAd Lawsuit


I don’t have an opinion about the specific legal issues involved, but I think the general approach of this lawsuit against NebuAd is the right one. Consumers have a reasonable expectation of privacy when they sign up for Internet service. As it happens, I was a Charter customer for the last three years, and I don’t remember them disclosing that they would be sharing the contents of my Internet communications with a third party for advertising purposes.

A few months ago, I penned a mega book review about the growing divide between “Internet optimists and pessimists.” I noted that the Internet optimists — people like Chris Anderson, Clay Shirky, Yochai Benkler, Kevin Kelly, and others — believe that the Internet is generally changing our culture, economy, and society for the better. They believe the Net has empowered and liberated the masses, sparked unparalleled human creativity and communication, provided greater personalization and customization of media content, and created greater diversity of thought and a more deliberative democracy. By contrast, the Internet pessimists — including Nick Carr, Andrew Keen, Lee Siegel, and others — argue that the Internet is destroying popular culture and professional media, that it calls “truth” and “authority” into question by over-glamorizing amateurism and user-generated content, and that increased personalization is damaging deliberative democracy by leading to homogenization, closed-mindedness, and an online echo chamber. Needless to say, it’s a very heated debate!

I am currently working on a greatly expanded version of my “Net optimists vs. pessimists” essay for a magazine, in which I will draw out more of these distinctions and weigh the arguments made by those in both camps. I plan on concluding that article by arguing that the optimists generally have the better of the argument, but that the pessimists make some fair points about the downsides of the Net’s radical disintermediation of culture and the economy.

So this got me thinking that I needed to come up with some sort of label for my middle-of-the-road position, as well as a statement of my personal beliefs. As far as labels go, I guess I would call myself a “pragmatic optimist,” since I generally side with the optimists in most of these debates, but not without some occasional reservations. Specifically, I don’t always subscribe to the Pollyanna-ish, rose-colored view of the world that some optimists seem to adopt. But the outright Chicken Little-like Ludditism of some Internet pessimists is even more over-the-top at times. Anyway, what follows is my “Pragmatic (Internet) Optimist’s Creed,” which better explains my views. (Again, read my old essay first for some context about the relevant battle lines in this intellectual war.)


What’s the right way to allocate the airwaves? For years and years and years, the governing policy of federal communications was that the electromagnetic spectrum was too “scarce” to be left to the devices of the marketplace. This kind of reasoning has always lacked substance. As I wrote in a piece occasioned by the rise of indecency enforcement:

Congress began regulating broadcasters in 1927 on the grounds of scarcity. In return for free and exclusive use of a given wavelength, broadcasters agreed to serve the “public interest, convenience, and necessity” — or at least to do what Congress and the FCC ordered. One element of this agreement was a ban on obscene, indecent and profane language.

This scarcity theory has always lacked substance. Nobel Prize-winning economist Ronald Coase’s reputation is based, in part, on a notable paper he wrote in 1959 that criticized the rationale behind the FCC’s command and control regime of licensing broadcasters. “It is a commonplace of economics that almost all resources in the economic system (and not simply radio and television frequencies) are limited in amount and scarce, in that people would like to use more than exists,” Coase argued in his seminal essay.

From “Shouldn’t FCC Rules Over Indecency Just Grow Up? Reflections on Free Speech and Converging Media”

The FCC eventually came to realize that it could endow electromagnetic frequencies with property-rights-like characteristics. In 1993, under Bill Clinton and a Democratic Congress, the United States finally moved to such a system — at least for the frequencies used by cell-phone operators. As in so many other ways, broadcasters have remained immune from historical trends.

This backdrop is important for understanding our current moment in wireless policy. Tomorrow, Wednesday, November 12, at 4 p.m., those near Washington will be able to gain insight into how other nations have approached radio frequency regulation. The Information Economy Project at the George Mason University School of Law (disclosure: I’m the Assistant Director of the Information Economy Project, a part-time position) will host its next “Big Ideas About Information Lecture,” featuring an address by Dr. William Webb, a top policymaker at OFCOM, the U.K. telecommunications regulator.

OFCOM’s ambitious liberalization strategy, announced in 2004, permits the large majority of valuable frequencies to be used freely by competitive licensees, offering an exciting and informative experiment in public policy.  Dr. Webb’s lecture, “Spectrum Reform: A U.K. Regulator’s Perspective,” will offer a timely progress report for the American audience.


My friend Louisa Gilder’s brand new book The Age of Entanglement: When Quantum Physics Was Reborn arrived in the mail from Amazon today.

Matt Ridley, author of Genome, says:

Louisa Gilder disentangles the story of entanglement with such narrative panache, such poetic verve, and such metaphysical precision that for a moment I almost thought I understood quantum mechanics.

The cover art alone is spectacular. Can’t wait to crack it open tonight.

Me around the Web


Over at Ars Technica, the final installment of my story on self-driving cars is up. This one focuses on the political and regulatory aspects of self-driving technologies. In particular, I offer three suggestions for the inevitable self-driving regulatory regime:

Three principles should govern the regulation of self-driving cars. First, it’s important to ensure that regulation be a complement to, rather than a substitute for, liability for accidents. Private firms will always have more information than government regulators about the safety of their products, and so the primary mechanism for ensuring car safety will always be manufacturers’ desires to avoid liability. Tort law gives carmakers an important, independent incentive to make safer cars. So while there may be good arguments for limiting liability, it would be a mistake to excuse regulated auto manufacturers from tort liability entirely.

Second, regulators should let industry take the lead in developing the basic software architecture of self-driving technologies. The last couple of decades have given us many examples of high-tech industries converging on well-designed technical standards. It should be sufficient for regulators to examine these standards after they have been developed, rather than trying to impose government-designed standards on the industry.

Finally, regulators need to bear in mind that too much regulation can be just as dangerous as too little. If self-driving cars will save lives, then delaying their introduction can kill just as many people as approving a dangerous car can. Therefore, it’s important that regulators focus narrowly on safety and that they don’t impose unrealistically high standards. If self-driving software can be shown to be at least as safe as the average human driver, it should be allowed on the road.

Meanwhile, Josephine Wolff at the Daily Princetonian was kind enough to quote me in an article about self-driving technologies. For the record, I was exaggerating a bit when I said “The only reasons there are pilots is because people feel safer with pilots.” Most aspects of flying can be done on autopilot, but I’m not sure we’re at the point where you could literally turn on the autopilot, close the cockpit door, and let the plane take you to your destination.

And if any TLF readers are in the Princeton area, I hope you’ll come to my talk on the future of self-driving technology, which will be a week from Thursday.

Finally, over at Techdirt, I’ve got the final installment of my series (1 2 3 4) on network neutrality regulation. I’ve got a new Cato Policy Analysis coming out later this week that will expand on many of the themes of those posts. Stay tuned.

There’s much to discuss as Obama shapes his administration (more on this at OpenMarket.org), but arguably one of the most important unanswered questions is who Obama will pick to staff the Federal Communications Commission.

CNET reports that Henry Rivera, a lawyer and former FCC Commissioner, has been selected to head the transition team tasked with reshaping the FCC. This selection gives us a glimpse of what the FCC’s agenda will look like under Obama, and it’s quite troubling.

Rivera has embraced a media “reform” agenda aimed at promoting minority ownership of broadcast media outlets. A couple of weeks ago, Rivera sent a letter to the FCC backing rules originally conceived by the Media Access Project that would create a new class of stations to which only “small and distressed businesses” (SDBs) could belong. These S-Class stations would be authorized to sublease digital spectrum and formulate must-carry programming, with the caveat that only half of the content could be “commercial.” To avoid the constitutional issues surrounding racial quotas, eligibility for SDB classification would be based on economic status rather than the racial composition of would-be station owners.

The S-Class proposal, like other media reform proposals, falsely assumes that current owners of media outlets are failing to meet the demands of their audience for a diverse range of content. The proposal also ignores the fact that consumers already enjoy an abundance of voices from all viewpoints, as we’ve discussed extensively here on TLF.
