January 2006

A la carte regulation continues to be a hot topic these days and I’ve recently penned two new editorials on the subject.

First, I have a piece in today’s National Review Online with the rather boring title “A La Carte Cable Concerns.” It’s a response to this piece by Cesar Conda in which he made the case for conservatives to support a la carte regulation.

Second, Ray Gifford and I penned an editorial on a la carte for the Rocky Mountain News last week as a counter-point to an essay by Brent Bozell and Gene Kimmelman. Our editorial can be found here.

For more background on the a la carte debate, read my PFF paper on the “Moral and Philosophical Aspects of the Debate over A La Carte Regulation” and this essay on “A ‘Voluntary’ Charade: The ‘Family-Friendly Tier’ Case Study.”

Yet More Telecom Competition

by on January 17, 2006

Here’s another example of the ever-increasing competition in the telecom industry: BusinessWeek reports that Rupert Murdoch is considering investing a billion dollars to transform the DirecTV satellite network, which currently allows only one-way transmission of high-bandwidth content, into a full-fledged broadband network offering voice, video, and data service. That would put it squarely in competition with the cable industry and the Baby Bells, both of which are moving toward that same type of “triple play” broadband service.

If Murdoch follows through with this, and if the Baby Bells roll out fiber-optic networks as planned, that will mean that most homes will have at least three options each for voice, video, and data services.

Actually, there are more choices than that: there are already a half-dozen mobile phone companies competing for voice business. For video, consumers have the option of broadcast TV, which now features crystal-clear picture due to digital transmission. And Internet giants like Google, AOL, and Apple seem to be announcing new Internet video options every month. And for data, there are dedicated lines available at the high end, and dial-up modem access at the low end of the market, not to mention a growing number of WiFi hotspots. In short, the typical consumer has a dizzying array of choices for all of his telecom needs.

Why exactly are we still regulating these industries as though they’re natural monopolies?

Is Patrick Ross trying to make his own side look bad?

The conference heard a compelling perspective on software patents from Guenther Schmalz, director of IP for Europe for SAP, the largest European software maker and the third-largest software maker in the world. SAP burst on the scene in the early 1970s, like Microsoft, moving into areas of software that IBM had no interest in commoditizing. For years SAP grew and grew and had no patents. Now they’re ramping up a major patent office with a few dozen attorneys around the world. Why? “Times have changed,” Schmalz said. In the early days, SAP’s competitors were way behind, but now competitors are everywhere. Patents are the only way for SAP to ensure returns on its development investment, he said, adding that copyright is no solution, as the actual writing of code only makes up about 20% of the development of software.

“Those who drive innovation need patents,” he said. “Those who don’t, imitate.”

To put this a little bit differently, in the past, SAP was an innovative company that was able to stay ahead of the competition by virtue of their superior technology. However, now that they’re a fat, lazy incumbent, they’re discovering the joys of using patent law as a club against their more innovative competitors.

The idea that creators have the right to recoup all of the value they create is a common idea in intellectual property debates. Larry Lessig quotes NYU law professor Rochelle Dreyfuss as dubbing this the “if value, then right” theory of intellectual property. It’s an idea that sounds intuitively plausible, but with a little bit of reflection, it becomes obvious how pernicious it is.

After all, capitalism is founded on the ideal of vigorous competition. When some entrepreneur invents a new product category or business model–say, the fast-food restaurant–he invariably attracts a swarm of competitors. Competition forces down the prices in this newly-created industry, preventing the innovator from capturing the full value of his or her invention. Ray Kroc’s invention of the fast food restaurant, for example, created a whole lot of value, and most of that value went to consumers, not to Kroc. He wasn’t able to use the law to exclude competitors and “ensure returns” on his innovation. To the contrary, under our economic system, he was required to continue innovating if he wanted to turn a profit.

It’s quite possible that Kroc felt resentful toward these upstart “imitators” who copied his business model without (he would claim) adding value themselves. But that’s the way the business world works. Competition is cutthroat, and businesses that stop innovating shrivel up and die. That’s as it should be, and consumers benefit tremendously from such creative destruction.

By the same token, SAP probably resents all these pesky little companies that are barging in on market opportunities that they pioneered. Tough. If they want to stay at the top of their industry, they need to continue innovating. They have no right to expect the government to “ensure returns” on their investments. That’s the way it goes in most other industries, and it’s the way the software industry ought to work as well.

Telecom Reform: Just Say No

by on January 17, 2006 · 2 comments

Over at Brainwash, I argue that the best outcome libertarians can hope for from the telecom reform debate is probably for Congress to do nothing.

The question Congress ought to be asking is: “why are we regulating these industries at all?” Telecommunications regulations are traditionally justified as a way to limit “natural monopolies,” but there don’t seem to be any monopolies left. A sensible telecom reform would repeal the anachronistic regulations that draw increasingly meaningless distinctions among voice, video and data services.

Instead, Congress is headed in precisely the opposite direction. In September, the House released a draft of proposed telecom legislation that would create three new categories of service: Broadband Internet Transmission Service (BITS), Broadband Video Service (BVS), and Voice Over Internet Protocol Service (VoIP). Each of these services would be subject to its own set of arcane regulations, which are largely focused on shoehorning 21st-century services into a 20th-century regulatory framework.

Obviously, it would be great if Congress saw the light and passed genuine deregulation. But I think the odds of that are very, very low. And as I argue in the essay, the 1996 Telecom Act is now so out of touch with reality that it amounts to de facto deregulation, as the FCC just doesn’t have the authority to regulate the latest technologies. Let’s hope it stays that way.

Can You Copyright Viagra?

by on January 12, 2006

James DeLong analogizes pharmaceutical and software patents:

The general principle is that if governments remove the incentives for invention and innovation, then (surprise!) it does not happen. This applies to software and other creative products as well as pharma.

This is a plausible point until you realize that software is already protected by copyright law. And for a variety of reasons I’ve argued before, software patents have a lot more flaws than software copyright. I have yet to see anyone explain why software patents are necessary, given that copyright law appears to give plenty of protection for precisely the same investments.

Cato Unbound is the Cato Institute’s new quasi-blog Web magazine. Their recently released January issue is right up TLF’s alley. The topic this month is “Internet Liberation: Alive or Dead?” It leads off with a “step-outside-the-box-burn-the-box-grow-tea-with-the-ashes-and-read-the-tea-leaves” big-think essay by virtual reality pioneer Jaron Lanier. Today, they posted a crisp litany of criticisms from open source guru Eric Raymond. Replies from Glenn Reynolds, John Perry Barlow, and David Gelernter will roll out over the next week. Recommended reading!

More on Google Video

by on January 11, 2006

What’s really frustrating about Google’s decision to include DRM in its new video software is how unnecessary it was. Look at the content they’re offering: basketball games, reruns of “I Love Lucy” and “The Brady Bunch,” and music videos. This isn’t content for which there’s a strong aftermarket at present. People don’t, as a general rule, cruise eDonkey looking for copies of last year’s basketball game. Music videos are mostly played for free on cable. Shows like “I Love Lucy” have been available on Nick at Night for decades.

Moreover, these video files are too big to be easily and casually swapped among friends. It seems unlikely that very many people would download a 200 MB basketball game and then try to email it to their friends. Sure, a few would probably upload them to P2P networks, but much of this content was on P2P networks already, and it’s only a matter of time before someone hacks Google’s DRM and puts this content up anyway.

So exactly how would it have hurt CBS’s or the NBA’s bottom line to offer some of this content in a DRM-free format? My guess is: not one bit. They’d have lost a few sales to people who share the content with friends, but this would likely have been a small fraction of the overall sales. On the other hand, they’d increase customer satisfaction by allowing users the freedom to make legal uses of their content. Without the impediment of DRM, Google could easily implement seamless transfer of videos to iPods and other portable devices. Geeks like me would feel more comfortable knowing we could watch the videos with third-party video players.

Perhaps most importantly, in a year or two, when consumers start to realize that they’re being locked into proprietary formats by Apple, Microsoft, and their ilk, it would be a fantastic marketing opportunity for Google to point out that its videos, in contrast, don’t lock customers into any one company’s proprietary products.

Unfortunately, Google instead took the easy way out and acquiesced to Hollywood’s demands. Google, Hollywood, and consumers will all be worse off for it.

The End is Near! (for CDs)

by on January 11, 2006 · 6 comments

Ars Technica crunches the numbers and finds that the end of the CD era is fast approaching:

Sticking with twelve tracks to an album, 16 million full albums equal 192 million tracks, or 35 percent of all paid downloads. Also, remember that album sales figures lump the downloads in with the regular CD sales, and 2.6 percent of all album sales were really downloads, just not on a piecemeal basis. With all of these numbers in hand, we can calculate the total market share downloads enjoy today, and it’s a healthy 7.3 percent. If we can assume that the ratio of full-album to single-track downloads stays relatively stable over time, the online market share was 2.3 percent in 2004. The 149 percent sales growth sounds good all right, but it’s nothing compared to a 220 percent market share gain from one year to the next.

So online downloading’s share of the overall music market tripled between 2004 and 2005. If growth continues at that pace, online download revenue should outpace physical CDs sometime in 2008, and CDs are likely to be an inconsequential fraction of the overall music market by the end of the decade.
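That extrapolation is easy to reproduce. Here’s a back-of-the-envelope sketch in Python (my own, not from the Ars piece), taking the quoted 2.3% and 7.3% shares and assuming the year-over-year multiplier stays constant (which it almost certainly won’t, since growth slows as the base grows):

```python
# Toy extrapolation of the Ars Technica figures quoted above.
# Assumption (mine): the 2004-to-2005 growth multiplier holds in
# future years. In reality growth tapers as the base gets bigger.

share_2004 = 2.3   # downloads' percent of total music revenue, 2004
share_2005 = 7.3   # same figure for 2005
multiplier = share_2005 / share_2004  # roughly a 3.2x gain per year

year, share = 2005, share_2005
while share <= 50.0:  # downloads "outpace" CDs once they pass 50%
    year += 1
    share = min(share * multiplier, 100.0)
    print(f"{year}: ~{share:.0f}% of music revenue from downloads")

print(f"Naive crossover year: {year}")
```

Constant compounding puts the crossover in 2007; any slowdown in growth pushes it out toward the 2008 estimate above.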

The record industry would do well to think hard about how this tidal wave will affect their business strategies. In particular, they should be careful not to hand technology companies like Apple a dominant position in their industry through a misguided DRM strategy. If they think negotiating with Steve Jobs was tough last year, when he was generating 5% of their revenue, it will be much worse next year when it’s 25%!

I’m with Mike:

Google’s copy protection scheme sounds just as bad as we feared. It is their very own, and it will limit what you can do with the video significantly. You can’t transfer the video to mobile devices. It doesn’t work on a Mac. And, you can only view the video when you’re online, as the copy protection obviously is calling home first (which, of course, opens up the potential of security holes).

On the flip side, Google will (I’m sure) quickly point out that their DRM offers more “flexibility” than others, in that you don’t have to use it, and if you do, you have choices about how restrictive it is. In other words, Google is basically going to say that they built the locks, but it’s up to the content provider to be evil with those locks. As part of this whole offering of letting anyone sell videos through their system, they’re also offering more payment options so that (unlike iTunes) content providers can choose how much things cost, and even allows some variability (for example, Charlie Rose will offer free streaming for a day after his shows air, and then unencumbered downloads for $0.99 after that). Google takes a 30% cut of any sale. It’s nice that they’re giving content providers some choice, but it’s still quite worrisome that there’s now yet another incompatible copy protection scheme that will be making the rounds. This isn’t good for anyone and shrinks the overall market. Google may think that it was “necessary” to simply give content providers the option to hang themselves with bad copy protection, but it’s a cop out position. Google, at this point, should have a strong enough market position to let content providers know that there’s a better way to offer content without copy protection–and if content providers are too scared, that’s their problem. Eventually they would come around when they saw success stories without copy protection.

So now if I want to build a personal library of legally-purchased digital video, I have to decide whether to buy them in iTunes, Windows Media, or Google formats. Whichever company I choose, I can only play my videos on that company’s proprietary products. If I choose iTunes, then I can’t play it on a PSP, and I’m subject to the whims of Steve Jobs. If I choose Google or Windows Media, I can’t play videos on my Mac or my iPod. None of them will work with Linux. And if I ever decide I’m fed up with Apple, Microsoft, or Google, and want to switch to a different platform, that’s tough: I can’t take my videos with me.

This is progress?

I’m going to respectfully disagree with Sonia Arrison, and agree with Mike over at Techdirt: voting machine manufacturers should be required to make their source code–all of it–available for public inspection, and a mandatory paper trail (in which each vote is recorded on paper as it’s made) is a good idea as well.

The issue seems pretty simple to me: by far the most important concern when it comes to voting machines is transparency. I want to be sure that my vote will be counted fairly. The more people who inspect the machine’s code to make sure it works as it should, the more confidence I’m going to have in the process.

It’s not clear to me what the counterargument is. The source code for a voting machine isn’t particularly valuable. More to the point, publicly available code doesn’t have to mean open source in the traditional sense: the source can easily be licensed for public inspection, but not for use in competing products.

Moreover, it’s not like we need to worry about stifling innovation in the voting machine industry–voting machines perform a relatively simple task, and I don’t particularly want them to innovate, if “innovation” involves rapid technological change. Stability, predictability, and security are far more important considerations, and those requirements don’t change very much over time. It seems unlikely that the quality of our voting machines will suffer if voting machine companies are required to disclose their source code.

As for the paper trail, the key point is that as many people as possible need to have confidence in the integrity of the voting process. While advanced cryptographic methods might be more reliable in a theoretical sense, their very complexity makes it difficult for ordinary voters (most of whom are not computer experts) to have confidence in them. Moreover, there’s always a possibility that the technological wizards will overlook something–that some key component will fail, leading to votes being lost or to confusion over the final vote total. If that happens, the existence of paper records, freely available for public inspection, will be crucial to establishing the credibility of the voting process.

I’m actually sympathetic to the view that switching to electronic voting machines at all is a bad idea, at least for the near future. Although there have been problems with punch-card ballots, and those should be replaced, the latest optical-scan voting systems seem to work just fine, and I don’t see any compelling reason to change them. Why fix what’s not broken, especially when the integrity of our elections is at stake? Why not stick with small-scale trials of e-voting for the next decade, to make sure the kinks have been worked out?

But if we are going to adopt electronic machines on a wide scale, we should do so cautiously and transparently. Publicly available source code and a mandatory paper trail are sensible steps in that direction.