September 2005

Yesterday, economist Daniel English and I filed comments at the FCC in the cable ownership caps proceeding (MM Docket No. 92-264).

In the filing, Daniel and I argue that because of the array of new distribution platforms and the resulting increase in outlets and competition, cable ownership caps are no longer needed. More broadly, we show how the old regulatory model for analyzing media ownership issues is obsolete in light of rapid market evolution and ongoing technological innovation. Any future market power issues that might crop up in the media sector can be better handled by relying on the nation’s antitrust laws, we argue.

In our new media model, we show how media content is no longer tied to specific distribution platforms and receiving devices because of the transition to digital formats and transmission methods. The combination of ongoing digital convergence and an increase in market innovation and entry makes it nearly impossible for any media company to restrict the flow of programming in the market. Simply stated, cable ownership regulations are specifically geared towards an outdated media environment and no longer apply to today’s video marketplace.

Again, the paper can be found here, and you might also be interested in this earlier filing that we did in the Comcast-Time Warner-Adelphia merger proceeding. Daniel and I also recently penned a short paper entitled “Testing ‘Media Monopoly’ Claims: A Look at What Markets Say” in which we evaluate the market performance of five large media stocks (Time Warner, News Corp., Clear Channel, Comcast, and Viacom) over the past five years and show that they have lost 52 percent of their market value (in terms of market capitalization). Moreover, we chart the performance of the entire Dow Jones U.S. Broadcasting & Entertainment Index and show that it is down almost 45 percent from where it stood in 2000. This is another indication that the media industry is not one big monopoly but is instead subject to intense competitive rivalry, new entry, and substitutes.

Finally, I’ll just put in another plug for my recent book on these issues, “Media Myths: Making Sense of the Debate over Media Ownership” as well as an important paper by media economist Bruce Owen that we recently published, “Confusing Success with Access: ‘Correctly’ Measuring Concentration of Ownership and Control in Mass Media and Online Services.”

This is three weeks old, but I missed it when it happened, so better late than never:

Google has put its digital library project on hold until November, citing concerns from copyright holders and publishers.

The search company unveiled Google Print in October 2004 as a way for publishers to make their books accessible over the Internet. The company also quickly introduced a comparable program to allow libraries to scan in their collections, index them, and make them web-accessible as well.

The rest of the article has a bunch of details on the concessions Google is making to the publishers to keep them happy. Maybe I’m missing something, but it looks to me like Google is bending over backwards to accommodate the publishers, and the publishers are stiff-arming them. As I argued back in July, I think Google has a perfectly plausible case that Google Print is a fair use under copyright law, and that they have every right to do what they’re doing without consulting the publishers and without giving them a cut of the ad revenue. Yet even after Google has offered to share the revenue with the publishers and to let them easily opt out of the program, the publishers have refused to budge.

I think it’s hard to overstate how big a deal this is. We’re all used to using search engines to search content on the web. It would be amazing if the same functionality existed for every book in the library. Until now, the barrier has been technology. But now that such a search engine is becoming technologically feasible, it looks like it’s going to be thwarted by the lawyers.

Why shouldn’t Google just negotiate with the publishers and get their permission? If you want to see how that will end up, just look at Nexis and Factiva. Nexis is an extremely useful and powerful search engine, but it’s expensive, it’s proprietary, and it’s not especially user-friendly. And if you want to search for articles that appeared in the Wall Street Journal, you can’t, because Dow Jones decided it wanted to create its own proprietary search service. It’s a royal pain in the ass, and it will probably never change, because anyone who tried to create a full-text search of Wall Street Journal content (even if they made you buy the actual articles from the WSJ web page) would get sued out of existence.

We’re at a crossroads. If Google (or someone else) stands up to the publishing industry and wins in court, we’ll end up with a future in which book searches are like web searches. If Google caves, and no one else dares get into a lawsuit with the publishers, then book searching is going to look like Nexis and Factiva: proprietary, expensive, and fractured among mutually exclusive services.

The saddest part, I think, is that hardly anyone is even paying attention. If Google Print gets strangled by the publishing industry, 99% of consumers will never even realize what they’ve lost.

I never cease to be amazed at the staying power of “Moore’s Law.” Moore’s Law, of course, states that the number of transistors on a chip doubles roughly every 18 months to two years. This “law” continues to hold, and most experts believe it will keep holding for at least two more decades.
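To make the compounding concrete, here is a quick back-of-the-envelope sketch in Python. The starting point (the Intel 4004’s roughly 2,300 transistors in 1971) and the flat two-year doubling period are my own illustrative assumptions, not figures from this post:

```python
# Back-of-the-envelope illustration of Moore's Law compounding.
# Assumptions (mine, for illustration only): start from the Intel 4004's
# roughly 2,300 transistors in 1971 and double every two years.

START_YEAR = 1971
START_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2  # the post cites "every 18 months to two years"

def projected_transistors(year: int) -> float:
    """Transistor count implied by steady doubling since START_YEAR."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1971, 1985, 1995, 2005, 2025):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Seventeen doublings between 1971 and 2005 already work out to roughly a 130,000-fold increase, which is why the numbers get so eye-popping so quickly.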

As a result, computers today can be found just about everywhere, even in places we don’t realize. Intel’s website, for example, notes that “A musical birthday card costing a few U.S. dollars today has more computing power than the fastest mainframes of a few decades ago.” That’s right, that silly singing birthday card you bought for your grandma last month probably had more computing power than the machines that sent man to the moon in the 1960s. Crazy, isn’t it?

And don’t forget about “Kryder’s Law,” which argues that Moore’s Law holds for computer storage capacity as well. Indeed, according to a recent report in Scientific American, the density of information on hard drives has been growing at an even faster rate than semiconductor power. The article notes that:

“Since the introduction of the disk drive in 1956, the density of information it can record has swelled from a paltry 2,000 bits to 100 billion bits (gigabits), all crowded in the small space of a square inch. That represents a 50-million-fold increase. Not even Moore’s silicon chips can boast that kind of progress.”

Again, just amazing, don’t you think? OK, maybe you don’t get as easily turned on as I do by numbers such as these, but let me explain to you what makes this nerdy stuff so darn cool for even the average, non-techie consumer. What is really impressive about Moore’s Law and Kryder’s Law is that these laws have translated into absolutely stunning efficiency gains and price savings for every single American.
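For readers who like to check the math, here is a minimal Python sketch comparing the hard-drive figures quoted above against a simplified Moore’s Law projection. The density numbers come straight from the Scientific American quote; the flat two-year doubling period and the 1956–2005 comparison window are my own simplifying assumptions:

```python
# Sanity check of the areal-density figures quoted from Scientific American,
# compared against a simplified Moore's Law projection (two-year doublings
# over 1956-2005; that comparison window is an assumption for illustration).

import math

bits_1956 = 2_000              # bits per square inch on the 1956 disk drive
bits_today = 100_000_000_000   # 100 gigabits per square inch

growth = bits_today / bits_1956
print(f"Areal density growth: {growth:,.0f}x")           # 50,000,000x, as quoted

density_doublings = math.log2(growth)                     # ~25.6 doublings
moore_doublings = (2005 - 1956) / 2                       # 24.5 doublings
print(f"Storage doublings: ~{density_doublings:.1f}")
print(f"Two-year Moore's Law doublings over the same span: ~{moore_doublings:.1f}")
print(f"Implied chip growth: ~{2 ** moore_doublings:,.0f}x")  # ~24 million-fold
```

Run the numbers and storage comes out ahead: a 50-million-fold gain versus roughly a 24-million-fold gain from steady two-year doublings over the same half-century, which is exactly the point the article is making.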


Last week the California Public Utilities Commission supported a statement of policy in favor of “standalone DSL.” Standalone, or “naked” DSL is when DSL service is provided without local phone service. The PUC said it voted to support a policy of “consumer choice.” But when the a la carte preferences of regulators and some consumers conflict with the bundled prerogatives of technology providers and other consumers, which policy should win out?

Policymakers should resist the urge to force communications providers to unbundle their products. And while the California PUC’s policy statement has only the force of persuasion, not law, the premise underlying the statement is still wrong. Bundling is clearly a good thing for the vast majority of consumers. It’s not gouging. It’s not unreasonable tying. Instead, it’s just another example of the way communications products will be packaged in a world where telephone networks compete against cable and wireless networks.
