Cato Unbound is the Cato Institute’s new quasi-blog Web magazine. Their recently released January issue is right up TLF’s alley. The topic this month is “Internet Liberation: Alive or Dead?” It leads off with a “step-outside-the-box-burn-the-box-grow-tea-with-the-ashes-and-read-the-tea-leaves” big-think essay by virtual reality pioneer Jaron Lanier. Today, they posted a crisp litany of criticisms from open source guru Eric Raymond. Replies from Glenn Reynolds, John Perry Barlow, and David Gelernter will roll out over the next week. Recommended reading!

More on Google Video

January 11, 2006

What’s really frustrating about Google’s decision to include DRM in its new video software is how unnecessary it was. Look at the content they’re offering: basketball games, reruns of “I Love Lucy” and “The Brady Bunch,” and music videos. This isn’t content for which there’s a strong aftermarket at present. People don’t, as a general rule, cruise eDonkey looking for copies of last year’s basketball game. Music videos are mostly played for free on cable. Shows like “I Love Lucy” have been available on Nick at Nite for decades.

Moreover, these video files are too big to be easily and casually swapped among friends. It seems unlikely that very many people would download a 200 MB basketball game and then try to email it to their friends. Sure, a few would probably upload them to P2P networks, but much of this content was on P2P networks already, and it’s only a matter of time before someone hacks Google’s DRM and puts this content up anyway.

So exactly how would it have hurt CBS’s or the NBA’s bottom line to offer some of this content in a DRM-free format? My guess is: not one bit. They’d have lost a few sales to people who share the content with friends, but those would likely have been a small fraction of overall sales. On the other hand, they’d increase customer satisfaction by giving users the freedom to make legal uses of their content. Without the impediment of DRM, Google could easily implement seamless transfer of videos to iPods and other portable devices. Geeks like me would feel more comfortable knowing we could watch the videos with third-party video players.

Perhaps most importantly, in a year or two, when consumers start to realize that they’re being locked into proprietary formats by Apple, Microsoft, and their ilk, it would be a fantastic marketing opportunity to be able to point out that these DRM-free videos, in contrast, don’t lock customers into one company’s proprietary products.

Unfortunately, Google instead took the easy way out and acquiesced to Hollywood’s demands. Google, Hollywood, and consumers will all be worse off for it.

The End is Near! (for CDs)

January 11, 2006

Ars Technica crunches the numbers and finds that the end of the CD era is fast approaching:

Sticking with twelve tracks to an album, 16 million full albums equal 192 million tracks, or 35 percent of all paid downloads. Also, remember that album sales figures lump the downloads in with the regular CD sales, and 2.6 percent of all album sales were really downloads, just not on a piecemeal basis. With all of these numbers in hand, we can calculate the total market share downloads enjoy today, and it’s a healthy 7.3 percent. If we can assume that the ratio of full-album to single-track downloads stays relatively stable over time, the online market share was 2.3 percent in 2004. The 149 percent sales growth sounds good all right, but it’s nothing compared to a 220 percent market share gain from one year to the next.

So online downloading’s share of the overall music market roughly tripled between 2004 and 2005. If growth continues at that pace, online download revenue should outpace physical CDs sometime in 2008, and CDs are likely to be an inconsequential fraction of the overall music market by the end of the decade.
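
To make the back-of-the-envelope arithmetic concrete, here is a minimal sketch in Python. The 7.3 percent share and the 149 percent annual sales growth come from the Ars Technica figures quoted above; the assumption that the overall music market stays roughly flat is mine, purely for illustration:

    # Rough projection of downloads' share of a flat overall music market.
    # Starting share and growth rate are taken from the figures quoted above;
    # the flat-market assumption is a simplification, not a prediction.
    share = 0.073            # downloads' share of the market in 2005
    growth = 2.49            # 149% annual sales growth => sales multiply by 2.49

    year = 2005
    while share < 0.5:       # "outpace CDs" ~= more than half the market
        year += 1
        share *= growth
        print(year, f"{min(share, 1.0):.0%}")

Run as written, this crosses the halfway mark in 2008 (roughly 18 percent in 2006 and 45 percent in 2007), which lands in the same ballpark as the estimate above.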

The record industry would do well to think hard about how this tidal wave will affect their business strategies. In particular, they should be careful not to hand technology companies like Apple a dominant position in their industry using a misguided DRM strategy. If they think negotiating with Steve Jobs was tough last year, when he was generating 5% of their revenue, it will be much worse next year when it’s 25%!

I’m with Mike:

Google’s copy protection scheme sounds just as bad as we feared. It is their very own, and it will limit what you can do with the video significantly. You can’t transfer the video to mobile devices. It doesn’t work on a Mac. And, you can only view the video when you’re online, as the copy protection obviously is calling home first (which, of course, opens up the potential of security holes).

On the flip side, Google will (I’m sure) quickly point out that their DRM offers more “flexibility” than others, in that you don’t have to use it, and if you do, you have choices about how restrictive it is. In other words, Google is basically going to say that they built the locks, but it’s up to the content provider to be evil with those locks. As part of this whole offering of letting anyone sell videos through their system, they’re also offering more payment options so that (unlike iTunes) content providers can choose how much things cost, and even allows some variability (for example, Charlie Rose will offer free streaming for a day after his shows air, and then unencumbered downloads for $0.99 after that). Google takes a 30% cut of any sale. It’s nice that they’re giving content providers some choice, but it’s still quite worrisome that there’s now yet another incompatible copy protection scheme that will be making the rounds. This isn’t good for anyone and shrinks the overall market. Google may think that it was “necessary” to simply give content providers the option to hang themselves with bad copy protection, but it’s a cop out position. Google, at this point, should have a strong enough market position to let content providers know that there’s a better way to offer content without copy protection–and if content providers are too scared, that’s their problem. Eventually they would come around when they saw success stories without copy protection.

So now if I want to build a personal library of legally purchased digital video, I have to decide whether to buy it in iTunes, Windows Media, or Google formats. Whichever company I choose, I can only play my videos on that company’s proprietary products. If I choose iTunes, then I can’t play them on a PSP, and I’m subject to the whims of Steve Jobs. If I choose Google or Windows Media, I can’t play videos on my Mac or my iPod. None of them will work with Linux. And if I ever decide I’m fed up with Apple, Microsoft, or Google, and want to switch to a different platform, that’s tough: I can’t take my videos with me.

This is progress?

I’m going to respectfully disagree with Sonia Arrison, and agree with Mike over at Techdirt: voting machine manufacturers should be required to make their source code–all of it–available for public inspection, and a mandatory paper trail (in which each vote is recorded on paper as it’s made) is a good idea as well.

The issue seems pretty simple to me: by far the most important concern when it comes to voting machines is transparency. I want to be sure that my vote will be counted fairly. The more people who inspect the machine’s code to make sure it works as it should, the more confidence I’m going to have in the process.

It’s not clear to me what the counterargument is. The source code for a voting machine isn’t particularly valuable. More to the point, publicly available code doesn’t have to mean open source in the traditional sense: the source can easily be licensed for public inspection, but not for use in competing products.

Moreover, it’s not like we need to worry about stifling innovation in the voting machine industry–voting machines perform a relatively simple task, and I don’t particularly want them to innovate, if “innovation” involves rapid technological change. Stability, predictability, and security are far more important considerations, and those requirements don’t change very much over time. It seems unlikely that the quality of our voting machines will suffer if voting machine companies are required to disclose their source code.

As for the paper trail, the key point is that as many people as possible should have confidence in the integrity of the voting process. While advanced cryptographic methods might be more reliable in a theoretical sense, their very complexity makes it difficult for ordinary voters (most of whom are not computer experts) to have confidence in them. Moreover, there’s always a possibility that the technological wizards will overlook something–that some key component will fail, leading to votes being lost or to confusion over the final vote total. If that happens, the existence of paper records, freely available for public inspection, will be crucial to establishing the credibility of the voting process.

I’m actually sympathetic to the view that switching to electronic voting machines at all is a bad idea, at least for the near future. Although there have been problems with punch-card ballots, and those should be replaced, the latest optical-scan voting systems seem to work just fine, and I don’t see any compelling reason to change them. Why fix what’s not broken, especially when the integrity of our elections is at stake? Why not stick with small-scale trials of e-voting for the next decade, to make sure the kinks have been worked out?

But if we are going to adopt electronic machines on a wide scale, we should do so cautiously and transparently. Publicly available source code and a mandatory paper trail are sensible steps in that direction.

Here is yet another example of software patent abuse. A patent trolling firm called Rates Technology is suing Google alleging patent infringement by its Google Talk program.

I haven’t found a copy of the lawsuit, but according to The Register, one of the two patents at issue is Patent Number 5,519,769, “Method and system for updating a call rating database”:

The advantages and features of the present invention now allows the database that stores billing rate parameters in a rate table for call rating devices to be updated. The call rating device is connected at a predetermined time and date via a data transfer line to a rate provider having billing rate parameters for a plurality of calling stations. Indicia identifying the call rating device and the date and time of the last update of the billing rate parameters is transmitted over the data transfer line to the rate provider. The rate provider verifies that the billing rate parameters should be updated, and it transmits to the call rating device the updated billing rate parameters when the rate provider determines that an update is required.

It goes on in this vein for paragraphs and paragraphs. Skimming the entire patent, I don’t really understand how this could be considered an “invention.” Obviously, if you wanted to find the lowest-cost route for a particular call, you would poll each possible service provider seeking their rates, and then store their answers in a database, which would be updated periodically. If any idea is “obvious,” that surely is.
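
To make the point concrete, here is a minimal sketch of the scheme the patent describes, as I read it: poll each provider for its rates, cache them in a local table, refresh the table periodically, and pick the cheapest route. Everything specific below–the provider names, the fetch_rates placeholder, the one-hour refresh interval–is hypothetical illustration of mine, not anything drawn from the patent or from Google Talk:

    import time

    REFRESH_SECONDS = 3600            # example refresh interval, not from the patent

    def fetch_rates(provider):
        """Placeholder for asking one provider for its per-minute rates.
        A real system would make a network call; this returns made-up
        numbers so the sketch is runnable."""
        sample = {
            "carrier_a": {"US": 0.05, "UK": 0.09},
            "carrier_b": {"US": 0.04, "UK": 0.11},
        }
        return sample[provider]

    class RateTable:
        def __init__(self, providers):
            self.providers = providers
            self.rates = {}           # provider -> {destination: price per minute}
            self.last_update = 0.0

        def refresh_if_stale(self):
            # Re-poll every provider once the cached table is too old (or empty).
            if not self.rates or time.time() - self.last_update > REFRESH_SECONDS:
                for p in self.providers:
                    self.rates[p] = fetch_rates(p)
                self.last_update = time.time()

        def cheapest(self, destination):
            """Return (provider, price) for the lowest-cost route to a destination."""
            self.refresh_if_stale()
            return min(
                ((p, table[destination]) for p, table in self.rates.items()
                 if destination in table),
                key=lambda pair: pair[1],
            )

    table = RateTable(["carrier_a", "carrier_b"])
    print(table.cheapest("US"))       # -> ('carrier_b', 0.04)

That is more or less the whole idea, which is why it is hard to see where the nonobvious step is supposed to be.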

I’m also bewildered about how Google Talk could be considered to have infringed this patent, unless anyone who routes calls over a switched network is considered to be an infringer. If that’s what’s going on, then this is a pretty good example of why a “no software patents” rule would be good policy: whatever the merits of this patent as applied to telecommunications hardware, it’s pretty clearly an impediment to innovation when applied to a software-only product like Google Talk.

Update: Here is the complaint. It doesn’t appear to give any details about how Google Talk infringes the patents.

Richard Epstein has a new essay on the DMCRA, Rep. Boucher’s DMCA-reform legislation:

But means as well as ends matter in the constant struggle to deal with copyright piracy. In looking at the structural problem, the key question is just how much noninfringing use is there relative to the torrent of illegal copying. In answering this question, it’s not appropriate to look at the issue of interoperability, because that has already been dealt with first by the DMCA and second by the standard end user licenses. So it is not likely that there is much fair use to worry about.

Once the first of these two provisions is in place, then someone can circumvent the device for the appropriate purpose. But unfortunately H.R. 1201 does not say one word about how the circumvention in question will be limited just to those cases. Nor does it indicate what penalties will be given to individuals who first circumvent for fair use and then proceed, as is likely to be the norm, to circumvent for all other purposes. So if equipment can be sold for good purposes, then it can be used for bad ones, and the DMCA has lost its teeth. It is not too much to say that this stealth provision, which is never referred to in the findings of the act could work a comprehensive repeal of the DMCA. Much too much is lost, and very little is gained.

He’s wrong about interoperability: although the DMCA does purport to carve out an exemption for interoperability, that exception is of virtually no help in practice. The reasons are a bit complex, and I deal with them extensively in my forthcoming Cato Policy Analysis, so I won’t rehash them here. Suffice it to say that despite the reverse-engineering exception, the DMCA effectively makes it illegal to interoperate with DRMed products, and that’s a very bad thing.

I think the professor is being a bit too clever with his mock surprise at the “stealth provision” repealing the DMCA. I was under the impression that everyone understood that was its purpose. Indeed, I’ve heard that the primary reason the labeling provisions were included was so that the bill could be considered in the Commerce Committee, chaired by a sympathetic Rep. Barton, rather than the Judiciary Committee, which is less friendly to DMCA reform. If the Judiciary Committee were more sympathetic, Boucher would doubtless be happy to introduce just the DMCA-reform portions of his legislation there. There’s certainly nothing “stealth” about the bill, given that commentators routinely cite Boucher’s bill as effectively repealing the DMCA’s anti-circumvention provisions.

He does, however, have an interesting point about the section that codifies the Betamax rule:


Declan McCullagh and Anne Broache at CNET today report on federal agencies tracking web visitors against the rules. It’s not surprising, but it is disturbing. If government wants to increase surveillance in America and argues that we should trust them to follow the rules, then this example puts a huge dent in their argument. If a company made a similar mistake, they’d be facing a fine, but what will happen to the bureaucrats who ignore their public duties?

This is cross-posted from Sonia Arrison’s blog.

Replace Howard Stern? Like him or not, that’s a tall order for CBS Radio (formerly Infinity Broadcasting). In fact, the company chose two people to replace the famously potty-mouthed and popular radio host (who now operates from Sirius satellite radio, safe from the FCC’s prying ears). On the East Coast, listeners will hear David Lee Roth; on the West Coast, Adam Carolla–known to cable television viewers for hosting “The Man Show.”

So where do you go to promote your show if you are replacing Howard Stern? To National Public Radio, of course. This morning Carolla did just that, shaking up the normally ever-so-proper and serious “Morning Edition” show.

Toward the end of the interview, host Renée Montagne asked Adam whether he’d been given a talk on FCC indecency rules. He had, of course–Infinity’s lawyers aren’t crazy. But Carolla then quickly pointed out that this stuff wasn’t new to him. Everyone asks him, he said, whether he’s just going to go out and start throwing the “f-bomb.” His answer: “I’ve been on TV for ten years, what the [bleep] do you think I’m gonna do?”

It was probably the first time in decades the Morning Edition crew had to use their bleep button. Stern may be gone, but things may still be interesting for CBS and the FCC. Stay tuned.

I started work at the Cato Institute at the beginning of 1997, and here it is 2006. As I write, I know that very few of the reforms that I and other free-marketers advocate have ever been enacted. Some bad legislation has been prevented (opt-in!); some unconstitutional legislation has been voided; the FCC has continued to move towards something more like real property rights in spectrum at an absurdly incremental pace. But universal service has not been abolished or even replaced with targeted subsidies or auctions. Indecency rules continue to be used to harass broadcasters. A few predicted that the Net would make censorship impossible, or that cyberspace would become its own sovereign nation. Yet China censors the Net with mixed success; Yahoo and other companies must cooperate or get out.

In spite of this, I am full of hope for the future…
