November 2005

After months of delay, President Bush announced two appointments to the FCC yesterday–Republican Deborah Tate, currently director of the Tennessee Regulatory Authority, and Democrat Michael Copps, who will be reappointed to his current seat. Neither was a surprise. What should raise eyebrows, however, is the missing name–White House tech staffer Richard Russell, who had been widely expected to be tapped for a third seat. His absence–reportedly due to objections by Sen. Ted “Bridge to Nowhere” Stevens–is cause for concern.

Tate will fill the empty seat left by departed FCC chair Michael Powell, giving the Republicans a 3-2 edge at the agency. This is good news. The current 2-2 split has stalled the commission, and given the two Democratic members veto power over decision making. Case in point: the recent conditional approval of the AT&T and MCI acquisitions.

Copps’ reappointment is more disturbing. Like a modern-day Will Rogers, Copps seems never to have met a regulation he doesn’t like. He is an avid critic of free markets, and (except perhaps on indecency issues) seems to oppose the Bush agenda across the board. Yet he gets Bush’s nod for the seat because of a recently invented “tradition” of letting Democratic Senate leaders choose Democratic members of the FCC. By bowing to this practice, Bush is sacrificing not just his presidential prerogatives, but good telecom policy.

Yesterday’s surprise however was the FCC dog that did not bark–Richard Russell. Russell is a highly-regarded associate director at the White House Office of Science and Technology Policy, and had been widely assumed to be in line for the FCC seat being vacated by Commissioner Kathleen Abernathy. So what happened? Apparently, he was nixed by Commerce Committee chair Ted Stevens. Stevens–who most recently has been in the news for spending billions on empty bridges in Alaska and subsidies for old TVs–reportedly did not think Russell was adequately supportive of rural telephone service subsidies.

The details–as they tend to be in such situations–are unclear. Russell may or may not still be in the running. And if he is out, who will replace him? And who will choose? Having given away the right to choose the two Democratic members, the White House can scarcely afford to cede the rest of its appointment power. Certainly at least some appointments should be saved for people who actually support the president’s agenda. Or–perhaps I’m dreaming here–actually support free markets.

Stay tuned.

Addressing Google Print Concerns

November 9, 2005

Unlike Larry Lessig, I’m pleased to have James DeLong on our side in the Google Print debate.

I’m especially pleased that Lessig and DeLong agree on the fundamental principle that should be used in evaluating the issue:

Causby was entitled only to the decline in his property value, not to a share of the gains from the air age. Truly, if there is a principle here, that should be it. The baseline is the value of the property BEFORE the new technology. Does the new technology reduce THAT value? Put differently, would authors and publishers be worse off with Google Print than they were before Google Print?

This is something Jerry and I have argued about in the past. I think Lessig and DeLong are correct: the appropriate standard is whether the new use harms the market for the product that existed before the technology was created, not whether there is any profit potential in licensing the new technology. The publishers are, in effect, claiming that any new value created using their content belongs to them. I think that claim is contradicted by the principles of America’s copyright system, and by past precedents on technology-related developments.

I’d also like to offer some suggestions on how two of DeLong’s concerns could be dealt with:

Continue reading →

Each quarter, the Federal Communications Commission (FCC) releases a report documenting the number of complaints the agency receives. The numbers it gathers for “indecency”-related complaints are drawing the most attention. Indeed, these numbers are mentioned frequently in news reports and are cited by many lawmakers as the driving force behind federal efforts to crack down on unseemly broadcast content.

But what do we know about these numbers and how they are gathered? Like most people, I’ve always just taken it for granted that most government statistics are accurate and can be trusted. I know there are flaws in some statistics-gathering efforts (consider inflation or productivity numbers), but at least the government is doing its best to accurately gauge those trends. And, so, I figured the same was true of FCC indecency data.

Sadly, however, that doesn’t appear to be the case. Indeed, as my new paper “Examining the FCC’s Complaint-Driven Broadcast Indecency Enforcement Process” shows, the FCC now measures indecency complaints differently than all other types of complaints and does so in a way that artificially inflates indecency tallies relative to other types of complaints.

Continue reading →

Thierer v. Von Lohmann

November 9, 2005

Adam and Fred duke it out over on the PFF blog, and I have to admit that my previous criticism was a bit hasty. I skimmed Fred’s report, but didn’t read the conclusion very carefully. He proposes a plausible alternative business model for the labels wherein consumers pay a flat fee for the right to make unlimited peer-to-peer downloads of copyrighted music. This is, at a minimum, a serious proposal worth discussing, and I can see some appeal in it. Notably, it does give people a way to “go legit” without being forced to put up with irritating and pointless DRM restrictions.

I do, however, think Adam continues to have a good point: what do you do about the people who don’t join this scheme? Fred seems to think that the number of such people will be trivial, so enforcing the rules against them won’t be very difficult. I’m not so sure. Even $5/month is a non-trivial amount of money to some people, and lots of people are lazy. I think a lot of people might continue to use P2P without paying the fee, and you’d be left in the same situation you’re in now: no way to enforce the rules except to sue them.

Secondly, is $5/month going to be enough to come anywhere close to replacing music industry revenues? The music industry currently takes in about $13 billion. It would save some money by not having to ship plastic discs around, so let’s round that down to $10 billion. To replace that revenue with $5/month subscriptions, the industry would need roughly 167 million subscribers ($10 billion divided by $60 per subscriber per year). Is that reasonable? I honestly don’t know. If you combine the populations of the U.S., the EU, and Japan, there certainly are enough people in the industrialized world to support such a scheme, but I think the music industry would find it extremely difficult to corral enough of them into signing up.

After all, if I were a consumer in Fred’s future, what I would do is pay my subscription, download all the music I wanted in a month, and then cancel the subscription the following month. I might do that every 6 months or so. That would mean instead of adding $60/year to the industry’s coffers, I’d be adding about $10. To break even at that price you’d need about a billion subscribers, which is probably impossible, at least until China and India join the ranks of the wealthy nations.
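For what it’s worth, the arithmetic is easy to check. Here is a minimal back-of-the-envelope sketch in Python using the assumptions above (a $10 billion revenue target and a $5 monthly fee); the constants are the post’s own illustrative numbers, not industry data:

```python
# Back-of-the-envelope check of the subscriber math above.
# Both constants are the post's assumptions, not industry figures.

TARGET_REVENUE = 10_000_000_000  # ~$13B current revenue, rounded down to $10B
MONTHLY_FEE = 5                  # proposed flat fee per subscriber

def subscribers_needed(paid_months_per_year: int) -> float:
    """Subscribers required to hit the revenue target, given how many
    months per year the average subscriber actually pays."""
    return TARGET_REVENUE / (MONTHLY_FEE * paid_months_per_year)

print(f"{subscribers_needed(12):,.0f}")  # pays year-round: ~167 million
print(f"{subscribers_needed(2):,.0f}")   # pays two months a year: ~1 billion
```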

Moreover, this kind of scheme seems like it would be extraordinarily difficult to enforce. If legitimate P2P services require users to log in with an RIAA-approved password, what’s to stop a dozen friends from sharing the same password? If the P2P services don’t require logging in, how will anyone figure out which users are legit?

The New York Times reports on the latest way that the DMCA is stifling technological innovation:

The Internet, in theory, can offer a selection of video programming that even the most advanced cable systems cannot match, and technology is helping improve the often grainy quality of online video.

But the cable and satellite companies are becoming concerned about providers of video programming using the Internet to reach customers directly.

Larry Kramer, the president of CBS Digital Media, has explicitly called the network’s Internet video strategy a “cable bypass.”

Yuanzhe Cai, the director of broadband research at Parks Associates, said: “We are seeing a lot of experimentation in terms of video programming through the Internet, and a lot of people are going to want to sit back and watch it on their TV. The big hurdle now is the digital rights issues of the studios and content owners.”

TiVo is caught in the middle. Its current digital recorder is capable of viewing programming from the Internet. Indeed, it recently did a test that allowed its users to download movies offered by the Independent Film Channel. “There is more video content that is coming down the broadband pipes,” said Tom Rogers, TiVo’s chief executive, referring to high-speed connections. He argued that TiVo’s technology could be important in helping providers that put programs on the Internet to gain a wider audience.

So what’s the problem?

Continue reading →

DRM Delusions

November 7, 2005

Over at the PFF blog, co-blogger Solveig Singleton makes some points about the connection between fair use and DRM technology. Some are fair, others I disagree with, but I think this one is particularly worth commenting on:

DRM does respond to demand. Take interoperability, for example. This is important to consumers. Thus the market began with many types of not-particularly-interoperable DRM. But now there are all kinds of interoperability ventures going on for all types of media. It’s unlikely the market will converge to one… but it is converging.

Lots of people are talking about DRM interoperability. But so far, no interoperable scheme has been widely deployed. There’s a good reason for this: genuinely interoperable DRM is a contradiction in terms.

Why? It’s difficult to explain in non-technical language, but I think the fundamental reason is this: the restrictions of a DRM scheme are enforced by devices, not files. That means that every single device that accesses DRMed content must be tightly controlled to ensure it doesn’t become a conduit for unauthorized access to the copyrighted materials.
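To make that concrete, here’s a deliberately toy sketch (the class names and the copying flag are invented for illustration and don’t correspond to any real DRM system): the file itself is just encrypted bytes plus some usage flags, and it is the player that decides whether to honor them.

```python
# Toy illustration only: why DRM enforcement lives in the device, not the file.
# All names here are invented for the example.

from dataclasses import dataclass

@dataclass
class DrmFile:
    encrypted_bytes: bytes  # the file is just ciphertext plus flags;
    allow_copying: bool     # it cannot enforce anything on its own

class CompliantPlayer:
    """A licensed device: it holds the decryption key and promises to
    obey the usage flags. The scheme's security rests on that promise."""

    def __init__(self, key: bytes):
        self._key = key

    def play(self, f: DrmFile) -> bytes:
        return self._decrypt(f.encrypted_bytes)

    def copy(self, f: DrmFile) -> bytes:
        if not f.allow_copying:
            raise PermissionError("copying disabled by the rights holder")
        return f.encrypted_bytes

    def _decrypt(self, data: bytes) -> bytes:
        # stand-in for real decryption
        return bytes(b ^ self._key[0] for b in data)

# A rogue player with the same key can simply skip the allow_copying check,
# which is why every device in the scheme must be vetted and kept under control.
```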

Therefore, a truly interoperable DRM system–one in which anyone is free to participate–isn’t just a difficult technical challenge. It’s a flat contradiction in terms. A DRM scheme’s security is only as strong as its weakest device. All software has bugs, and every new device gives a hacker a fresh opportunity to examine the scheme and find flaws. Moreover, as flaws are found (and they always will be), the DRM scheme must be constantly upgraded to fix them. Those upgrades must be done in a synchronized fashion; otherwise an upgrade to one device might break compatibility with the others. Coordinating those updates becomes harder as the number of licensees grows.

As a result, the specifications for the DRM scheme must remain secret, and every compatible device must be approved by the owner of the DRM scheme before it’s allowed on the market. You can have “interoperability” in the very limited sense that Microsoft’s DRM scheme is interoperable: multiple companies all share Microsoft’s DRM format and so their files can be shared. But that works because the participating companies are all Microsoft licensees, and Microsoft tightly controls who is allowed to participate and what kinds of devices they’re allowed to make.

Real interoperability, as it has existed in the technology industry, is quite different. Modern PC hardware is a good example. The processor, the memory, the hard drive, the motherboard, the graphics card, and plenty of other parts are all built to publicly available specifications. For each part, there are multiple vendors (Intel and AMD for processors, Seagate and Western Digital for hard drives, ATI and nVidia for graphics cards, etc.) competing for the business of computer builders. Any new company that knows how to build a part better or cheaper can build it without asking anyone’s permission. No one–not Microsoft, IBM, Intel, or anyone else–has the power to exclude anyone from the PC industry or dictate what features a new PC device can have.

The “interoperability” that DRM builders are talking about isn’t like that at all. DRM interoperability is a closed system, with only those vendors who’ve gotten the permission of the DRM maker allowed to participate. If someone wants to do something that the DRM maker isn’t interested in, that’s just too bad.

Why does this matter? The PC industry has been so astonishingly innovative precisely because there wasn’t a central authority approving every device before it went on the market. Innovation often happens precisely when people mix and match technologies from different vendors in ways unforeseen by any of them. And it’s vital that new firms be allowed to enter the market, even if their products threaten the market position of entrenched firms.

Moreover, hobbyists and open-source programmers are completely locked out of DRM schemes. Hobbyists can’t be given access to the secret specifications of DRM systems because there’s no way to prevent them from sharing those specifications with others, or to inspect their devices to make sure they implement the DRM scheme correctly. Open source projects are locked out because, by definition, the operation of an open source application cannot be secret, and anyone could modify an open source application to disable the DRM restrictions. The first successful personal computer (the Apple II) was built by a hobbyist. And the most popular web server (Apache) and the #2 and #3 web browsers (Mozilla/Netscape/Firefox and Safari/Konqueror) are built on open source foundations. If you exclude hobbyists and open source programmers from the DRM marketplace, you’re forgoing a lot of potential innovation.

DRM vendors (and before that, copy protection vendors) have a long history of making promises they couldn’t deliver. Every DRM scheme ever made has been broken. Yet they continue to promise that the next scheme will work better. By the same token, DRM vendors are promising a bright future where all DRM schemes will work seamlessly with each other. But that will never happen, because open DRM, like unbreakable DRM, is a contradiction in terms.

No One “Runs” the Internet

November 7, 2005

I’ve got a new article up at Brainwash about the confused state of the debate over Internet governance. Most pundits seem to assume that ICANN has vastly more power than it really does. ICANN’s authority is largely dependent on the support of the Internet community. It can’t kick people off the Internet, censor them, or invade their privacy.

What it does do is perform an important coordinating function that helps the worldwide Internet community reach consensus on technical questions. It’s vital that that process not be unduly politicized. The UN has neither the technical expertise nor the institutional self-restraint to exercise that role effectively.

Still, it would have a certain amount of poetic justice. The UN has spent the last 60 years pretending to keep order in the realm of international politics while, in reality, nations mostly ignored it and did as they pleased. It would be fitting if the UN attempted to assert its authority over the Internet, only to find that people ignored it there as well.

Ars Technica has a great article on the 15th anniversary of the World Wide Web:

In an article published to coincide with the Web’s 15th anniversary, James Boyle, law professor and co-founder of the Center for the Study of the Public Domain, points out that the web developed in a unique fashion, due to conditions unlikely to be repeated today. The idea of hypertext was not invented by Berners-Lee. Vannevar Bush proposed a hypertext-like linking system as early as 1945. A working model was built by the team led by Douglas Engelbart, the inventor of the mouse, in 1968. Computer activist Ted Nelson proposed a much more advanced form of the World Wide Web, called Xanadu, in his seminal work Computer Lib. Even Apple created a non-networked version of hypertext called Hypercard in 1987.

The main difference with Berners-Lee’s creation was that it was based on open standards, such as the TCP/IP networking protocol, and that anyone could create content for the World Wide Web with tools no more complex than a text editor. While most people remember the Web taking off with the initial release of a browser from the commercial company Netscape, the original WWW grew mainly out of academia, where source code was traded freely in the interest of promoting learning. The “View Source” feature, available in all browsers today, grew out of this environment.

What also isn’t commonly remembered today is that major commercial interests were trying their best to promote proprietary, closed-off online networks that the Web eventually replaced. Many of these networks ran on mainframes and had stiff hourly access charges, like CompuServe and GEnie. A graphical version, Prodigy, was run by IBM and Sears, and enjoyed some success with a flat-rate access model. The most popular service was America On-Line, which still exists today, albeit as a shadow of its former self. The days where television programs would advertise their “AOL keyword” are rapidly vanishing; today, almost everyone simply gives a URL instead. Ultimately, the larger range of sites provided by the World Wide Web won out over the more restricted and content-controlled private services.

Exactly right. As I’ve said before, I think that the current crop of proprietary music services (iTunes, Napster, Rhapsody, et al.) will look as anachronistic in 15 years as CompuServe and Prodigy do today. Open standards produce vibrant technology ecosystems. Proprietary ones produce sterile, stagnant technology platforms. It’s only a matter of time before some music publisher figures out that releasing music in an open format would be a dramatic competitive advantage over the crippled versions being offered by Apple and company.

On the other hand, I think the article’s final paragraph misses the boat:

The open WWW itself is coming under increasing legal threats. The furor caused by phone companies over technology like Voice-over-IP has developed into proposed legal action, threatening the fundamental principle of Network Neutrality (the idea that the network does not care what bits you send over it) that caused the World Wide Web to surge in popularity in the first place. Will powerful corporate interests end up undoing every digital freedom that has been won? Or will the momentum of the WWW carry its open foundations into a new age?

Broadband ISPs aren’t likely to succeed in shutting down VoIP, as most face at least one competitor, and it would be quite easy for VoIP software to evade attempts by ISPs to squash it. What is a threat to the openness of the Internet is FCC regulation in the name of “net neutrality.” Putting the FCC in charge of telling ISPs how they may or may not run their networks is the first step toward politicizing network policies. Although initially the FCC might use that power to do some worthwhile things, in the long run it is likely to be captured by special interests and do things the Arsians wouldn’t like in the least.

More Google Confusion

November 6, 2005

More misrepresentation of Google Print, this time from Paul Aiken, executive director of the Authors Guild:

“This is the way it’s supposed to work: to give consumers access to books and have revenues flow back to publishers and authors,” Mr. Aiken said. “Conceptually, something similar might be possible for the Google program.”

The clear implication is that, in contrast, under Google Print revenues do not “flow back to publishers and authors.” But that’s nonsense. The Google Print program already splits all advertising revenue with publishers.

Any publisher can go to Google’s web site and sign up to participate in the Google Print publisher program. As the company explains clearly on its web site, participating in the program allows publishers to “attract new readers and boost book sales, earn new revenue from Google contextual ads, and interact more closely with your customers through direct ‘Buy this Book’ links back to your website.” On the other hand, if a publisher chooses not to participate in the program, Google doesn’t display any ads, which means there isn’t any revenue to share.

So the idea of revenue sharing isn’t an abstract possibility for Google Print. It’s the way the program already works. Have any of Google’s critics bothered to read the company’s web site?

The EFF has a new study out surveying the results of more than two years of RIAA lawsuits against file-sharers. I’m ordinarily sympathetic to the EFF’s arguments, but in this case, I agree with Adam:

OK Fred, then what exactly IS the answer to the P2P dilemma? Because you don’t favor individual lawsuits, you don’t favor P2P liability, or much of anything else. This is what infuriates me most about the Lessig-ites; they give lip service to the P2P problem but then lambaste each and every legal solution proposed. In my opinion, if you can’t even support the lawsuits against individual users, then you essentially don’t believe in ANY sort of copyright enforcement.

People who don’t like the RIAA’s litigious agenda need to come up with a workable alternative. Too many people on the anti-RIAA side like to criticize every attempt to enforce current copyright laws without suggesting alternative enforcement mechanisms, and without proposing an alternative legal regime. I’m not comfortable with simply shrugging at widespread piracy and telling the RIAA to lower their prices and stop whining.

I do, however, have two caveats. First, I think the EFF’s report does highlight some abuses. Getting sued by a deep-pocketed corporation is an extremely intimidating experience, and it’s probably true that some of the RIAA’s targets were wrongly accused. So we should all be thinking about the legal balance that’s created between the RIAA and accused file-sharers. It might be that a legal regime designed to go after commercial pirates is too heavy-handed to deal with individual file sharers.

Secondly, I think the EFF might be right on the empirical question: that in the long run, these kinds of lawsuits aren’t going to prevent widespread use of P2P software. That doesn’t make piracy OK, and it doesn’t mean the RIAA should stop suing people, but it does mean that they should be thinking hard about what they’ll do if, a decade and 100,000 lawsuits from now, they find that peer-to-peer software is more popular than ever. It might be that there just isn’t any way to stop piracy short of shutting down the Internet. If that’s true, then at some point laws are going to have to change to reflect that reality. It would clearly be a bad idea to have a law that’s universally ignored. But I have no particular insights about what the new legal regime ought to look like.