Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Look Ma, Faster Broadband!

by Timothy B. Lee on January 29, 2009 · 15 comments

In the summer of 2000, while I was in college, I moved into a big house with 6 other guys. DSL was just coming on the market, and we were big nerds, so we decided to splurge on fast Internet access. Back then, “fast Internet access” meant a blazing fast (Update: 512k) DSL connection. We had to pay the phone company about $65/month for the line. And we paid our Internet Service Provider $55/month for the connectivity and 8 static IP addresses (thanks to local loop unbundling these were separate services). For $120/month we got to live in the future, enjoying connectivity 10 times faster than the 56k modems that almost everyone had at the time.

Adjusting for inflation, $120 of 2000 money is about $140 of 2009 money. So I was interested to see that St. Louis, MO, where I lived until recently, is about to get 60 mbps Internet service courtesy of Charter, the local cable monopoly. Had I stayed in St. Louis for another year, I would have been able to get 120 times the bandwidth for the same inflation-adjusted cost as the broadband access I had less than a decade ago.
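Out of curiosity, here’s a minimal back-of-the-envelope sketch of that arithmetic in Python. The CPI figures are rough annual averages I’m assuming for illustration, so the inflation-adjusted number lands in the same ballpark as the figure above rather than exactly on it:

```python
# Back-of-the-envelope check of the figures above.
# CPI values are rough annual CPI-U averages, assumed for illustration.
CPI_2000 = 172.2
CPI_2009 = 214.5

# The circa-2000 setup: phone line plus ISP, in dollars per month.
cost_2000 = 65 + 55
cost_in_2009_dollars = cost_2000 * CPI_2009 / CPI_2000

dialup_kbps = 56          # a typical modem of the day
dsl_kbps = 512            # the DSL line described above
charter_kbps = 60 * 1000  # Charter's 60 mbps service

# Lands in the $140-$150 range, depending on which CPI series you plug in.
print(f"${cost_2000}/month in 2000 is about ${cost_in_2009_dollars:.0f}/month in 2009 dollars")
print(f"DSL vs. dial-up: ~{dsl_kbps / dialup_kbps:.0f}x")   # ~9x, i.e. roughly 10 times faster
print(f"Charter vs. DSL: ~{charter_kbps / dsl_kbps:.0f}x")  # ~117x, i.e. roughly 120 times faster
```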

It has almost become a cliché to lament the dismal state of America’s broadband market. There do seem to be countries that are doing better than we are, and we should certainly study what they’ve done and see if there are ideas we could adapt here in the States. But I also think a sense of perspective is important. I can’t get too upset about the possibility that in 2018 Americans might be limping along with 2 gbps broadband connections while the average Japanese family has a 20 gbps connection.

One of the most fundamental disagreements in the debate over software patents concerns the Supreme Court. Some software patent supporters like to cite Diamond v. Diehr as the decision that legalized software patents. Many others argue that the Supreme Court’s classic trilogy of software patent decisions from the 1970s and early 1980s just wasn’t clear enough to be of much use in the modern world.

In a new article for Ars Technica, I take a close look at these claims and talk to a couple of prominent patent scholars who make them. I find that like most Supreme Court decisions, the Benson, Flook, and Diehr decisions are hardly models of clarity. It’s possible to find passages in those decisions that could be cited in support of either side of the software patent debate.

However, I think it’s hard to escape the conclusion that, at a minimum, the Supreme Court intended software patents to be far more limited than they are today. For the last decade, the Federal Circuit has been allowing so-called Beauregard claims, which claim software printed on a machine-readable medium such as a CD. As I explain in the article, it’s hard to see a plausible interpretation of the Supreme Court’s precedents that would include these kinds of “pure” software patents. And it seems to me that the most reasonable interpretation of the high court’s decisions is the one Ben Klemens has articulated: that software by itself cannot be patented, and that “insignificant post-solution activity” (to quote the Diehr majority), such as displaying the results of a calculation on a computer screen, cannot transform an unpatentable algorithm into a patentable machine.

There’s a lot more detail in my article, so I hope you’ll check it out.

Don Marti has some choice words for Braden’s post on Scott McNealy and government open source contracting:

Let’s say that one of those Rent-to-Own stores that sells electronics under a confusing, one-sided contract got a big idea. Hey, we’re going to get a piece of the government market for LCD monitors!

Wait a minute, though. The government has a competitive bidding process for electronics, and no bureaucrat is going to commit to paying two grand for a $300 monitor. Even if you could bribe him, somebody is going to look at the books eventually.

Looks like life is tough for our fine-print-slinging rent-to-own sales weasel. But all is not lost. Next step: hire a fake-Libertarian rent-seeking lobbying operation out of Washington, D.C. Now you can re-cast the corporate welfare you want as having the freedom not to get your rightful corporate welfare, I mean property, taken away from you.

Put some Libertarian-sounding spin on the rent-to-own monitor plan, and now it’s: There’s a Regulatory Mandate to buy only Open Bid Monitors! Why can’t we have Fair Competition between the Open Bid model and our business model?

I don’t think the name-calling is necessary, but Don does raise some interesting points.

Patent Solipsism

by Timothy B. Lee on January 26, 2009 · 11 comments

I’ve been thinking a fair amount about software patents the last couple of weeks. I recently attended a Brookings Institution conference that focused pretty heavily on software patents, and since then I’ve interviewed several sharp patent scholars in preparation for a forthcoming article. In those conversations, I noticed the same cultural gulf I blogged about on Techdirt last week. You might say that on the subject of software patents, lawyers are from Mars and programmers are from Venus.

I think there’s a universal human tendency to overestimate the importance of whatever you happen to be an expert on. I know lots of geeks who believe everyone and their grandmother should use Ubuntu, vi, git, RAID, and so forth. A lot of economists believe that the rest of the social sciences would be better off if they all started using the methods of economists to do their jobs. When we develop human capital in some particular field, we tend to get a corresponding emotional investment in that field.

So when a programmer thinks about software patents, he’s interested in improving the software industry. Given how screwed up software patents are, a lot of us go straight for the most direct and elegant way to accomplish that objective: excluding software from patentability. In contrast, when a patent lawyer thinks about software patents, he’s interested in fixing the patent system. From that perspective, abolishing software patents looks like a horrible hack, because the underlying problems that caused software patents to be such a mess are probably responsible for problems in other industries too.

Boxee vs. the DMCA

by Timothy B. Lee on January 18, 2009 · 18 comments

I was very interested to read Berin’s post about the Boxee, a device I had not heard about until today. I’ve been asking for years why there are no good video jukebox products on the market, so I’m always interested to see new entrants.

If Wikipedia is to be believed, Boxee is a fork of the XBMC Media Center, which I first wrote about way back in 2006. The reason you may not have heard more about the XBMC Media Center is that it sits in an uncomfortable legal grey area. Thanks to the DMCA, one of its most important features—the ability to play and rip DVDs—is illegal. And there are probably other DMCA- and software-patent-related legal impediments to releasing a product like the XBMC. As a consequence, the major consumer electronics manufacturers have released relatively crippled set-top boxes that have not caught on with consumers.

Boxee’s Wikipedia page suggests that Boxee uses libdvdcss, a cousin of the DeCSS library that the courts ruled to be an illegal “circumvention device” back in 2001. And the DMCA holds that someone who “traffics” in a circumvention device “willfully and for purposes of commercial advantage or private financial gain” can be fined up to $500,000 and imprisoned for up to 5 years.

Now, the NYT article says that “Lawyers say that Boxee does not appear to be doing anything illegal,” although it doesn’t quote any actual lawyers, nor does it say which legal issues those lawyers examined. It’s possible that Boxee stripped out libdvdcss and replaced it with code that has been approved by the DVD founders. Moreover, it seems that Boxee’s strategy is to just build cool technologies and let the legal chips fall where they may:

Mr. Ronen said that like many start-ups, Boxee was definitely leaping without looking. “Don’t assume we have lawyers. That’s expensive,” he said.

This is a very risky strategy, both from a business perspective and for Ronen personally. But it’s also likely to pay off. If Ronen is able to get enough customers before the MPAA can be roused into taking legal action, he has a pretty good shot at winning the resulting PR war and forcing the MPAA to back down, even if the MPAA has the law on its side. And indeed, that may be the only way to break into this market, because if he plays by the rules he’ll never get the studios’ permission to build a set-top box they don’t control.

Fortunately, courts tend to be swayed by the perceived “legitimacy” of a technology’s designers. Remember, for example, that just 7 years after suing to keep MP3 players off the market, the recording industry insisted to the Supreme Court that everyone knew MP3 players were legal. There weren’t any changes to the law in the interim. Rather, MP3 players had become a familiar technology, and so judges intuitively “knew” that any interpretation of the law that ruled out MP3 players must be wrong. If Boxee can grow fast enough, and can cultivate a “good citizen” image, it may be able to persuade judges that any interpretation of the DMCA that precludes Boxee must be wrong.

The more fundamental point, of course, is that it’s ridiculous that Ronen has to worry about these legal issues in the first place. The copy protection technologies Ronen is circumventing haven’t stopped piracy; they’ve simply given Hollywood a legal club with which to bludgeon technology companies it doesn’t like. Had the DMCA not been on the books, we likely would have seen a proliferation of XBMC-like devices and software on the market several years ago.

It has been suggested that the American wireless market is a “textbook oligopoly” in which the four national carriers have little incentive to innovate or further reduce prices. I’m more sympathetic to this argument than some libertarians, but over at Techdirt Carlo offers some evidence that competition is alive and well in the wireless marketplace. For a while, the national carriers have offered unlimited voice and text messaging services for around $100/month. Carlo notes that a couple of regional carriers that focus on the low end of the market and have less comprehensive coverage maps have started offering unlimited connectivity for as little as $50/month. The latest development is that Sprint’s Boost Mobile unit is joining the $50/month flat rate club.

Jim Harper has made this point about the wired broadband market, but it deserves to be made here too: competition happens along multiple dimensions. Consumers make different trade-offs between price and quality, and so products with different feature sets and price points often compete directly with one another. There may be only four national carriers, and the regional carriers may not be able to offer service that the typical consumer finds comparable to the offerings of the national networks, but that doesn’t mean the regional carriers are irrelevant. Offering a bargain option at the low end of the market really does put pressure on the margins of the tiers above it. As long as there are some AT&T and Verizon customers who would be willing to put up with spotty coverage in exchange for a lower phone bill, AT&T and Verizon will have an incentive to cut their prices over time.

Of course, we could use more wireless competition. But we also shouldn’t lose sight of how much good the spectrum that’s already been auctioned off has done. It’s hard to create competitive telecom markets. For all of its flaws, the mobile industry is a real success story. And the solution to the flaws is to continue what we started 15 years ago: auctioning off more spectrum and creating real property rights in the airwaves.

Me in DC

by Timothy B. Lee on January 12, 2009 · 5 comments

If you’re in the DC area (and not at Cato’s important counter-terrorism conference that starts this morning) I hope you’ll consider attending two DC-area events I’ll be participating in. First, tomorrow I’ll be tag-teaming with fellow TLFer Jerry Brito to give a Hill Briefing on network neutrality. The talk will be designed for Hill staffers, but it’s open to the public and you’re encouraged to come and ask us softball questions.

Then on Wednesday, I’m going to be a panelist at a Brookings Institution conference on the limits of abstract patents. Also on the panels will be some of my favorite patent law scholars, including Ben Klemens (an organizer of the conference, whose excellent book I discussed here), Jim Bessen (whose book I reviewed here), Peter Menell (whose Regulation article I discussed here), and John Duffy (with whom I often disagree: I criticized him here but loved his work on appellate competition). It promises to be a great conference on an important topic.

Real Regulators

by Timothy B. Lee on December 24, 2008 · 32 comments

Don’t miss Jim Harper’s excellent post on the strange way people have responded to the failures of regulation on Wall Street. In a Meet the Press exchange, we learn that people reported Bernie Madoff’s suspicious books to the SEC, which chose not to do anything about it. And it was agreed around the table that the Madoff affair debunks “the idea that wealthy individuals and ‘sophisticated’ institutional investors don’t need the protection of government regulators.” “There’s no question we need a real regulator,” says CNBC’s Erin Burnett.

The problem is that we had a “real regulator.” Ponzi schemes and dishonest bookkeeping are already illegal. Had the SEC been so motivated, it had all the authority it needed to investigate Madoff’s books, discover the problems, and shut his firm down. In a rational world, this would be taken as a cautionary tale about the dangers of assuming that regulators will be vigilant, competent, or interested in defending the interests of the general public rather than those with political clout. Instead, we live in a bizarro world in which people believe that the SEC’s failure to do its job is an illustration of the need to give agencies like the SEC more power.

We of course see the same sort of confusion in debates over regulation of the technology sector. For example, the leading network neutrality proposals invariably wind up placing a significant amount of authority in the hands of the FCC to decide the exact definition of network neutrality and to resolve complex questions about what constitutes a network neutrality violation. Too many advocates of regulation seem to have never considered the possibility that the FCC bureaucrats in charge of making these decisions at any point in time might be lazy, incompetent, technically confused, or biased in favor of industry incumbents. That’s often what “real regulators” are like, and it’s important that when policy makers are crafting a regulatory scheme, they assume that some of the people administering the law will have these kinds of flaws, rather than imagining that the rules they write will be applied by infallible philosopher-kings.

More on News Spin-offs

by Timothy B. Lee on December 23, 2008 · 9 comments

A couple of quick follow-ups to my last post: first, a commenter points out that Career Builder is an example of a successful spin-off from a major media company. (Actually, from three media companies; I bet having ownership by multiple companies helped insulate the company from any one firm’s internal politics). So it appears that the spin-off model can work.

Second, the always-interesting Tom Lee points to the Washington Post’s online operation as an example of the spin-off model. This is a really interesting example because it’s closer to the core of the WaPo‘s business than Career Builder is to Gannett’s. And by all accounts, it was relatively successful. I’m pretty sure I’ve read multiple people comment that the Post is a local newspaper with a national website, which is precisely what you’d want a successful spin-off news organization to do.

The problem is that washingtonpost.com is nowhere close to being a free-standing organization. They get tremendous benefit from having access to content from the print Post, and while I haven’t looked at their business model in any detail, I’d be willing to bet that there’s massive cross-subsidy going on. That makes it a better website, but the problem is that it relieves the web side of the business of the need to come up with new, lower-cost methods for generating news. Which means that if and when the print side hits an iceberg, the online side won’t be able to stand on its own.

Having recently read The Innovator’s Dilemma, I think it’s worth pointing out that the discussion Ezra Klein and Matt Yglesias are having about the decline of newspapers is a classic illustration of the principles Clayton Christensen laid out a decade ago. Internet news is a classic disruptive technology. At its outset, it was simple, dirt cheap, and in many ways inferior to established journalism. But it improved over time, and once it began to rival traditional journalistic outfits in quality around the middle of this decade, the “dirt cheap” part of the equation began to dominate. When your competition can produce a roughly comparable product for a small fraction of the cost, your days are numbered.

But here’s the really important point that Christensen made that is often missed in these kinds of discussions: it’s often close to impossible for an organization built around an older technology to retool for a new, disruptive one, because its cost structure just doesn’t allow it. The New York Times is an expensive operation to run. It’s got writers, editors, typesetters, delivery trucks, an ad sales force, a big building, travel budgets, and so forth. In order to recoup those costs, it has to make a certain amount of revenue per unit of output. The institutional structure of the New York Times makes it almost impossible for it to produce news the way TPM Muckraker or Ars Technica do. The need to make payroll and cover the rent makes it almost mandatory for the paper to focus on its traditional core competencies, because even as those markets shrink they still offer better margins than the emerging businesses.

Matt’s suggestion of launching NYTList a decade ago illustrates the point well. It’s true that in the long run this probably would have made the Times more money. But in the short run it would have been a truly wrenching transition. At a time when other papers were enjoying fat margins from their classified business, the Times would have found more and more of its classified customers switching to the new version. It would have had to start laying off classified staff and trimming other parts of the budget to cover the lost revenue. And it would have been a huge gamble. It was far from obvious in 2000 that Craigslist would be as big as it has become. So yes, theoretically an enlightened NYT manager could have foreseen the growth of Craigslist and countered it. But in practice doing so would have required superhuman foresight and determination, and an extremely deferential board of directors.

Christensen’s conclusion is that the only way to avoid this grim fate is to spin off an independent subsidiary that can pursue new markets without worrying about fat profit margins or cannibalization of existing product lines. GM’s spin-off of Saturn in the 1980s is a good example of this model. Even so, this is an extremely difficult thing to pull off. It takes a CEO with the foresight to see what’s coming and the political capital within the firm to shield the spin-off from the parent company’s politics. I’m not aware of any high-profile newspaper firms that attempted this, but I’m not sure we can really blame the newspaper managers. Christensen was only able to find a handful of firms—in any industry—that managed it successfully, and the CEOs who did almost all said that it was one of the most difficult things they did as managers.

Companies are not big people. They change much more slowly than individuals do. And anyone suggesting that a firm should do things in a new way—even the guy at the top—is going to face strong pressure from traditionalists who want to keep doing things the old way. And in the short run, the traditionalists are almost always right: the old way of doing things is almost always more profitable in the short run. So although I think those who predicted the newspaper industry’s decline are entitled to a certain amount of smugness, I think it’s absolutely not fair to excoriate the managers who failed to move more decisively to address the problem. With the benefit of 20/20 hindsight, it’s easy to come up with scenarios that would have turned out better. But from an ex ante perspective, these trends were far from clear, and the people making the decisions were under tremendous pressure to preserve the status quo.