Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Who says there’s no broadband competition? While reading up on franchise reform, it occurred to me that I hadn’t tried the “threaten to switch and get a discount” tactic on my broadband provider, Charter. A 10-minute conversation with an extremely helpful gentleman cut my monthly bill nearly in half, from $52.99 to $29.99, in exchange for a one-year contract. And I only paid $52.99/month for one month, because for the first six months I was paying $26.99 under a 6-month introductory offer.

This was for “naked” broadband–I don’t watch TV and I’m cell phone-only. And it’s no doubt a result of AT&T’s aggressive $12.99 DSL offering. Cable Internet is considerably faster than DSL, so I’m happy to pay a moderately higher price for the faster speed. I remember paying over $50/month for a significantly slower DSL connection as recently as 2002.

Could things be even more competitive? Sure. But I think this illustrates that a duopoly is dramatically preferable to a monopoly. The best way to help the consumer is by cutting red tape for cable and telephone companies so they can continue undermining each other’s monopolies in voice, video, and data.

Oh, and if you aren’t currently getting some sort of discount from your broadband provider, give them a call and threaten to switch to the other team. Chances are you’ll get big savings on your monthly bill.

A Correction

by Tim Lee on April 7, 2006

Steve Wildstrom, who to my mind is the savviest tech columnist in the mainstream press, flags an error in my DMCA paper, on page 12:

Did banning DeCSS at least make it more difficult to pirate movies? There’s little reason to think so. The CSS system prevents playback of DVD movies, but it does nothing to prevent duplication of the scrambled data. A pirate can make a perfect copy of a scrambled DVD without ever cracking its encryption. No circumvention software is needed to download CSS-scrambled video, burn it to a DVD-R disc, and play it in any consumer DVD player.

Via email, Wildstrom writes:

This is inaccurate because of a secondary, little-known protection scheme. Writeable DVD media come in two types, designated A (for authoring) and G (for general). A CSS encoded DVD can only be copied bit-for-bit onto Type A media, and by means that I don’t understand, the industry has managed to maintain extremely tight controls on the distribution of Type A disks. That is why commercial DVD copying software like 321 Studios’ DVD X Copy (forced off the market by DMCA litigation) was not able to make exact copies of commercial DVDs. The same is true of the numerous non-commercial programs still available on the Internet. Because CSS is trivially broken, the unavailability of Type A media has probably done more to prevent amateur copying of DVDs than has CSS.

I did not know about this distinction, but I’m inclined to believe him, especially given that another fellow wrote to make the same point. So my apologies for the error.

I think it’s worth pointing out, however, that (at least based on my admittedly limited knowledge of the underlying technologies) this objection would apply only to burning DVDs, not to pressing them. Commercial pirates are far more likely to use the latter, so CSS isn’t going to prevent commercial piracy. I hope someone will correct me if I screwed that up as well.

Also, while I’m on the subject of Mr. Wildstrom, he’s got a great column up on the mess DRM is making of digital video.

Supporters of network neutrality regulation have been deploying a lot of apocalyptic rhetoric. For example, before yesterday’s Commerce committee vote on network neutrality regulation, Rep. Markey warned, “We’re about to break with the entire history of the Internet.” And in the same article, we learn that Rep. Eshoo thinks that “this walled garden approach that many network providers would like to create would fundamentally change the way the Internet works and undermine the power of the Net as a force of innovation and change.”

This is ridiculous. In the first place, the Internet is much bigger than the American broadband market, to say nothing of any one broadband ISP. Even if all of the major American telcos were to simultaneously cut American broadband users off from the Internet (which would obviously never happen), the rest of the world can perfectly well carry on operating the Internet without us, and we could pass network neutrality legislation at that point to force the telcos to re-connect us to the real Internet.

In the second place, it’s important to keep in mind the kind of network discrimination the telcos are likely to use. They’re not going to block users’ access to the Google website unless Google coughs up an access fee. That would be financial suicide. What they’re interested in doing is setting aside some of the new capacity they’re building to deliver their own services. For example, they might build a 25 Mbit fiber pipe into a consumer’s home, and reserve 20 Mbits of it for their own video applications. Now, I think that would suck. But it doesn’t “change the nature of the Internet,” any more than it changes the nature of the Internet when Comcast uses the bandwidth on its coax pipes to deliver video content to its cable subscribers. In this scenario, I’d still have 5 Mbits of “network neutral” access to the Internet. I could do everything on that pipe that I’m able to do today. So the issue isn’t about “the nature of the Internet.” It’s about whether Comcast has the right to decide how to use the infrastructure it deploys.

Finally, if anything, the apocalyptic scenarios run in the other direction. If Congress does nothing this year, they’ll have every opportunity to step in next year, or the year after, to stop any nightmare scenarios that might unfold. On the other hand, if network neutrality regulations are passed and they turn out to be a disaster, they’re unlikely to be repealed. Bad regulations are never repealed. Instead, they spawn endless litigation and “reform” proposals that are even more intrusive. Once we give the FCC authority to regulate the Internet, there’s no going back.

There’s no looming crisis here requiring Congressional intervention. If the pro-regulatory folks turn out to be right, we can always come back and have this debate again in a couple of years. But it’s extraordinarily premature to create a new regulatory framework for technologies that are barely off the ground.

Solveig Singleton has a great post over at the PFF blog setting the record straight on build-out requirements. She really should have posted the post here herself, but since she didn’t, I’ll do it for her:

This whole debate is saddening, and a little surreal. Here are some basic realities about build-out: Whether or not an area can be profitably built out has to do mainly with population density. Low-income areas tend to be high-density (at least in urban centers), and therefore historically the tendency has been that these areas are built out well before more sparsely populated suburban areas. Furthermore, lower-income areas have had a pretty healthy demand for tech services. The odd thing is that legislators such as Rep. Markey, who have been around the tech legislation scene for years, really should know this. The second fact is that build-out requirements have a rather sad history themselves: As economist Tom Hazlett has thoroughly documented, these requirements were rarely imposed on the first entrant into the cable market. Generally it was rare for such entrants to build out into the entire market immediately; it usually took a few years. The idea that first entrants have labored under such requirements is a myth, a myth fostered largely to present formidable obstacles to the entrance of a second competitor in the market.

The more I learn about the issue, the more amazed I become at how weak the arguments of franchise reform opponents are.

Apple’s Wednesday announcement of Boot Camp, a utility that allows users to run Windows on their Intel-based Macs, may be the final chapter in the decades-long commodification of the PC industry. “Wintel” PCs were commodified by the rise of “IBM clones” in the early 1980s, and the release of Pentium clones and Linux in the 1990s. By the mid-1990s, virtually every component in a Wintel PC was a commodity with vigorous intra-platform competition.

Apple began joining the commodity hardware party in earnest with the release of the iMac, which abandoned several Apple-only hardware components in favor of PC equivalents. Over the subsequent 8 years, they gradually phased out virtually all of their Mac-specific hardware, culminating in the adoption of Intel processors early this year. And this week they put to rest any notion that a Mac is anything but a glorified PC by giving users an easy way to install Windows on their Macs if they want to.

This is surprising because Steve Jobs is a control freak. When he rejoined Apple in 1997, he killed off the Macintosh clone program, which was beginning to allow third parties to build Mac-compatible computers. Five years ago, it would have been crazy-talk to predict that Jobs would soon transform Macs into glorified PCs with pretty cases.

What has happened, though, is that economies of scale have become such a powerful force that no one, not even a closed-platform zealot like Jobs, could resist them. In the last few years, Intel and AMD together have sold more than ten times as many chips as did the PowerPC manufacturers who supplied Apple. As a result, they could afford to spend ten times as much on R&D. No amount of ingenuity or superior processor architecture can make up for such a lopsided funding advantage.

In addition, I suspect the iPod experience has changed Jobs’s perspective. It’s hard to fathom today, but the iPod was originally conceived as a loss-leader to sell more Macs. Only after it became obvious they had a huge hit on their hands did they release a version that would work with Windows. And it took them even longer to release a Windows version of iTunes. Today, the iPod and iTunes are arguably more important to Apple’s future than the Mac is. Tying the iPod to the Mac held back its potential for success. By making it as widely compatible as possible, Apple allowed it to achieve much greater success.

Jobs may have realized that Mac hardware and the Mac OS may be holding each other back as well. There may very well be a lot of customers who love Apple’s superb industrial design but need to run Windows to get work done. There might also be people who would like to try out the Mac OS but don’t want to drop several hundred dollars on a new computer. By de-coupling the two–allowing Windows to run on a Mac and (I hope soon) allowing Mac OS X to run on PCs–Apple allows each to survive on its merits. Perhaps Mac OS X will grab significant market share away from Microsoft. Or maybe Macs will steal market share from HP and Dell.

Either way, the bottom line is that network effects are an irresistible force in the computer industry. No matter how innovative your product might be, it’s not likely to succeed if it’s only used by a small cadre of technological elitists. Bill Gates figured this out in the 1980s, and it made him the richest man in the world. Perhaps Steve Jobs is beginning to figure it out as well. Better late than never.

Paul Graham’s essays are usually brilliant, but I found this essay on software patents to be rather short of his usual standard. He actually asks two separate questions: First, given the state of the law, is it evil for companies to seek software patents? And second, is permitting patents on software good policy?

I agree with him on the first question–a technology company that doesn’t play the patent game opens itself up to the risk of extortion by patent trolls. So I don’t blame innovative companies like Microsoft for acquiring software patents in self-defense. And like Graham, I fault companies that attempt to use their software patents offensively against competitors, as Amazon has done.

However, on the merits of software patents as public policy, his defense strikes me as rather weak:

Continue reading →

California DReaMing

by Tim Lee on April 5, 2006

Eliot Van Buskirk grapples with the apparent self-contradiction of open-source DRM:

Another potential objection to Sun’s plan is that it sounds a lot like existing Microsoft or Apple DRM, in which secure content only plays on certified devices. But there’s one major difference in that area: The certification process would be run by a standards body, rather than by individual companies. I asked Jacobs to explain who would certify the players, and what would block the non-certified players from playing DReaM-protected content. “There will be an independent legal entity whose sole job it would be to take submissions of devices or players and do certification and testing of the device,” he said. He expects that group will be in place by the summer. Any manufacturer in the world would be able to add support for DReaM files at a negligible expense (remember, this is open source) and submit its device to the standards body for certification, similar to the way CSS worked with DVD players. Players and programs that aren’t certified cannot legally use the DReaM scheme to play protected content.

There seems to be a strange definition of “open source” at work here. In fact, it’s unlikely that any genuinely open source software would be able to receive certification–just as open source software has been unable to get a license from the DVD CCA–because anyone could modify the software post-certification in order to bypass the DRM scheme’s restrictions.
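The post-certification problem is easy to see in a toy sketch (the names below are hypothetical, not Sun’s actual DReaM API): in an open-source player, the compliance check is just ordinary code, and anyone with the source can edit it out before recompiling.

```python
# Toy sketch of why open-source DRM enforcement is fragile.
# "check_license" stands in for whatever compliance logic a
# certified player would ship with; all names are hypothetical.

def check_license(user_has_license: bool) -> bool:
    """The restriction the certification body requires."""
    return user_has_license

def play_protected_content(user_has_license: bool) -> str:
    if not check_license(user_has_license):
        return "playback refused"
    return "playing content"

# With the official check in place, unlicensed playback is refused:
assert play_protected_content(False) == "playback refused"

# But since the source is open, a user can replace the check before
# rebuilding the player -- the entire "crack" is this one line:
check_license = lambda user_has_license: True

assert play_protected_content(False) == "playing content"
```

Nothing about certification survives contact with a user who can rebuild the binary, which is why the DVD CCA has never licensed a genuinely open source player.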

But the more fundamental point is that the openness of the DRM system would be entirely dependent on the nature of the restrictions the “standards body” placed on the functionality of approved devices. (Actually, describing it as a “standards body” in the first place strikes me as an abuse of language–IEEE doesn’t require me to get its certification before I can build devices that implement an IEEE standard.) If the “standards body” imposes highly restrictive rules on the design and functionality of compliant devices, it won’t be any different than existing DRM schemes. The DVD CCA and Cable Labs are ostensibly independent certification organizations too. While it’s possible that the DReaM certification organization will allow greater diversity than existing DRM licensing organizations, there don’t appear to be any guarantees to that effect. And given that the DReaM licensing organization will likely be controlled by industry incumbents, it will most likely become a tool for incumbents to exclude competitors and limit functionality, just like existing DRM licensing bodies.

I would love to be proven wrong, but I’m not holding my breath.

I think we have a few geek readers, so I thought I’d take advantage of that to ask: can anyone suggest a good introduction to Python? I’m fluent in Perl and semi-proficient with C and Java, so a book geared toward procedurally-oriented programmers would be ideal. I’m particularly interested in mastering the Lispy aspects of the language, so a book that talks about its functional-programming attributes would be great.

I see that O’Reilly’s Python books have titles that mirror the canonical O’Reilly Perl books. Are they any good?
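For what it’s worth, the “Lispy aspects” I have in mind are easy to preview without a book. A quick illustrative tour of the functional features (this is just my own sketch, not an excerpt from any of the books in question):

```python
# A few of Python's functional ("Lispy") features: first-class
# functions, closures, lambdas, and list comprehensions.
from functools import reduce  # reduce was a builtin in older Pythons

squares = [x * x for x in range(5)]                    # list comprehension
evens = list(filter(lambda x: x % 2 == 0, range(10)))  # filter + lambda
total = reduce(lambda a, b: a + b, [1, 2, 3, 4])       # fold, Lisp-style

# Closures: functions are values that capture their environment.
def make_adder(n):
    def add(x):
        return x + n
    return add

add3 = make_adder(3)

print(squares)   # [0, 1, 4, 9, 16]
print(evens)     # [0, 2, 4, 6, 8]
print(total)     # 10
print(add3(4))   # 7
```

Coming from Perl, the closures and lambdas should feel familiar; the comprehensions are the genuinely new idiom.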

It warms my heart to see AT&T playing hardball with local governments that are trying to micromanage the rollout of its next-generation video services:

What’s not to love about a brand-spanking-new fiber deployment (even if it’s only to the node, and not the curb)? Consumers will get higher Internet speeds, better service over new infrastructure, plus more choice when it comes to television. If you’re a local government that is used to revenues from cable franchises, the fact that AT&T is not willing to enter into local franchise agreements to deliver its IPTV service is a serious drawback. The Chicago suburb of Roselle is firing back at AT&T over the issue, passing an ordinance that will require the telecom to halt work on Project Lightspeed for 180 days… AT&T is responding to Roselle’s action by essentially threatening to take its ball and go home. “Roselle passed an ordinance and our lawyers are looking at it,” said Mike Tye, AT&T Midwest vice president for legislative affairs. “We’re dismayed that Roselle halted a network upgrade to bring enhanced services to its citizens. But we have finite capital and will allocate it to communities that want us there,” Tye said.

Although statewide franchise reform is certainly a good idea, AT&T may very well have a stronger bargaining position than municipalities even without reform. There are doubtless thousands of Roselle residents who want what AT&T is selling. Roselle’s elected officials might want to keep in mind that AT&T sends a bill every month to the vast majority of Roselle voters. A series of “Dear Customer…” inserts in those bills explaining that their city council is preventing them from getting competitive TV service might change some minds. And if not, I’m sure there are plenty of other municipalities whose elected officials would be happy to have AT&T provide new and better service to their constituents.

In Your DReaMs

by Tim Lee on April 4, 2006

Sun’s DReaM project, billed as an open source DRM format, smells like something that was dreamed up by Sun’s management without sufficient input from its engineers. It seems to me that if we use the term “open” in its ordinary sense–i.e. a publicly available standard that anyone is free to implement–“open DRM” is a contradiction in terms. DRM depends on its implementation details being secret in order to prevent unauthorized parties from accessing the content. My guess is that Sun’s management is on a kick to make all of their products more “open” and figured that if we can have open operating systems and open processors, why not DRM?

This hunch was confirmed by their recently released overview of the project. Consider this passage, for example:

Historically, proprietary end-to-end architectures have relied upon obscurity to avoid being cracked. Such systems are based upon a false foundation of security promises. Such systems have been cracked and will continue to be breached. Additionally, the opaque nature of these systems has led to monolithic system architectures (by nature) that presume delivery by a single vendor, which inherently increases costs through the lack of interoperability and adds difficulty when attempting to substitute one supplier for another. DReaM promotes the view that open system architectures will present greater opportunities for review and discussion of technology choices so that shortcomings can be better evaluated and corrected (“review & repair” versus “hope & pray”) to provide the greatest protection possible.

This is an argument that you’ll commonly hear in defense of open encryption standards. And in that domain, it’s absolutely correct: today’s best encryption standards rely on only the encryption key being a secret. Everything else about the internal workings of the standard is publicly available. That allows security researchers to examine and correct any flaws discovered in the algorithm. The oldest and most widely-used encryption standards are the most likely to be secure, because they’ve received the most scrutiny, and so the odds of someone finding a flaw in the future are quite small.
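This design philosophy is often called Kerckhoffs’s principle, and it’s easy to see in miniature. In the sketch below, everything about the algorithm (HMAC-SHA256, via Python’s standard library) is public; security rests entirely on keeping one short key secret:

```python
# Kerckhoffs's principle in miniature: the algorithm (HMAC-SHA256)
# is fully published and heavily scrutinized; only the key is secret.
import hmac
import hashlib

secret_key = b"only-this-needs-to-stay-secret"
message = b"everything else about the scheme is public"

# Compute an authentication tag for the message.
tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, claimed_tag: str) -> bool:
    """Anyone holding the key can verify a tag; without the key,
    forging one requires breaking SHA256 itself, not discovering
    some hidden design detail."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claimed_tag)

assert verify(secret_key, message, tag)
assert not verify(b"wrong-key-guess", message, tag)
```

Note the contrast with DRM: here the secret key never has to be handed to the attacker, whereas a DRM scheme must ship its keys inside every player it wants to restrict.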

Whoever wrote this DReaM overview clearly took the standard argument for open crypto and applied it to DRM. He missed the fact that the problem that DRM is trying to solve is fundamentally different from the problem that traditional crypto is trying to solve.

Continue reading →