Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Over at Ars, I have an in-depth look at the White House’s email troubles. The administration is either spectacularly incompetent or going out of its way to avoid complying with the law:

When the Bush administration took office, it decided to replace the Lotus Notes-based e-mail system used under the Clinton Administration with Microsoft Outlook and Exchange. The transition broke compatibility with the old archiving system, and the White House IT shop did not immediately have a new one to put in its place.

Instead, the White House has instituted a comically primitive system called “journaling,” in which (to quote from a recent Congressional report) “a White House staffer or contractor would collect from a ‘journal’ e-mail folder in the Microsoft Exchange system copies of e-mails sent and received by White House employees.” These would be manually named and saved as “.pst” files on White House servers…

These deficiencies were repeatedly brought to the attention of White House systems administrators. In 2002 and 2003, they attempted to retrofit the old, Lotus Notes-based archiving system to work with the new Exchange-based email system. When this effort failed, they awarded a contract to Booz Allen Hamilton to design a new system, and to Unisys to implement it. According to McDevitt, the new system was set up and configured during 2005 and was “ready to go live” in August 2006. But the White House CIO, Theresa Payton, reportedly aborted the project in late 2006, citing perceived inadequacies with the system’s performance and its ability to segregate official presidential correspondence from political or personal materials. McDevitt resigned in protest soon afterwards.

Payton claims that the White House is working on yet another archiving system. But until it’s completed—and it’s now looking increasingly unlikely that it will be operational before the end of the administration—the White House will lack an automated system for complying with the requirements of federal law.

Julian didn’t like Tom Sydnor’s paper on Lessig either. In particular, he went back and looked up the sections in Code in which Lessig ostensibly expressed sympathy for Communism. Here’s the rest of the story:

We learn that Lessig wrote, in the first edition of his book Code, of his “impulse to sympathize” with those on the left who are “radically skeptical” about using property rights in personal information to protect privacy. We do not learn that the long quotation that follows is Lessig’s summary of an anti-market view of which he declares himself “not convinced.” (Lessig originally endorsed a property right in personal data; he has since altered his view, and now supports treating Web sites’ privacy policies as binding contracts.)

Sydnor similarly presents selective quotations from a passage in Code where Lessig describes his impression of life in communist Vietnam as surprisingly free and unregulated in certain respects. Lessig’s point is that despite a formal ideology of state omnipotence, the lack of an effective architecture of control leaves many ordinary Vietnamese relatively unfettered in their day-to-day interactions; institutional structure often determines reality more powerfully than official philosophy. Possibly Lessig is mistaken about modern Vietnamese life, but Sydnor, in what seems like a willful misreading, deploys the anecdote to depict Lessig as a disciple of Ho Chi Minh.

And of course yesterday Mike pointed out that Lessig’s point about property rights and DDT wasn’t as outrageous as Tom seemed to think. These examples strike me as a serious problem. One of the basic obligations of any scholar is to present one’s opponents’ quotes fairly and in context. If a scholar writes “I’m sympathetic to view X, but ultimately I find the arguments for it unconvincing,” it’s extremely misleading for someone to quote the first half of the sentence without mentioning the second half.

Likewise, Julian suggests that Tom’s summary of Fisher’s proposal leaves something to be desired.

Our friends at the Progress and Freedom Foundation have released a paper by PFF’s new copyright guru about Larry Lessig, Free Culture, and whether libertarians should take them seriously. Since the paper is framed as a response to my recent post on Lessig’s work, I suppose I should offer some thoughts on the subject.

I have to say that I found the paper disappointing. I’ve frequently said I wished more libertarians took Lessig’s ideas about copyright seriously, and so I’m generally happy to see libertarian organizations writing about Lessig’s work, even if they do so critically. But it seems to me that a basic principle of good scholarship is that you start with a good-faith interpretation of your opponent’s position and then proceed to explain the flaws in a fair-minded way. The goal isn’t to give your readers the worst possible impression of your opponent, but to help your readers better understand the opponent’s arguments even as you refute them. That doesn’t appear to be what Tom did. Rather, he appears to have read through Lessig’s rather substantial body of work (3 books and numerous papers) and cherry-picked the words, phrases, sentences, and paragraphs that, when taken in isolation, give the impression that Lessig is (as Tom puts it) a “name-calling demagogue.”

This makes it awfully hard to know where to begin in analyzing Tom’s arguments, such as they are. For example, consider the first paragraph after the introduction:

Disputes about whether Lessig “demonizes” property owners are easily resolved. He does so incessantly. Scholars are supposed to be disinterested, balanced and thoughtful. Lessig is a name-calling demagogue: In just one law-review article, he calls those who fail to agree with him sheep, cows, unimaginative, extreme, stupid, simplistic, blind, uncomprehending, oblivious, pathetic, resigned, unnoticing, unresisting, unquestioning, and confused—”most don’t really get it.”

Now, he does indeed use all of those words in “The Architecture of Innovation.” In some cases, they’re even applied to people he disagrees with. But they’re sprinkled through a 15-page paper, and to judge how demagogic they are, you really have to look at the full context to see who, exactly, he’s referring to with each of these words. To take just the first example—sheep—what Lessig actually says is that he frequently encounters a sheep-like stare from his audience when he asks the questions “what would a free resource give us that controlled resources don’t? What is the value of avoiding systems of control?” He’s clearly not calling everyone who disagrees with him sheep; he’s making a point—valid or not—about people’s failure to understand a set of questions that he thinks are important.

Headline of the Day


Hans Reiser is fscked: jury delivers guilty verdict

Ars certainly has a history of running edgy headlines, but this takes things to a new level.

Update: As PJ points out, fsck is a Unix file system utility, and Reiser did work on Linux file systems.

The New York Times casts its spotlight on the “Censored 11,” 11 racially charged cartoons from the middle of the last century that have been unavailable to the public for decades. But despite repeated attempts to take them down, they keep popping up online. You can see some of them here; the most notorious is “Coal Black and the De Sebben Dwarfs,” which, as you can imagine from the title, is pretty offensive.

Preventing people from watching them seems pretty silly to me. I wouldn’t want them in heavy rotation on the Cartoon Network, but people are entitled to know about their history, and I doubt letting people see them will cause anybody to be more racist. But this creates a dilemma for Disney and Warner Brothers. If they release them in an official capacity, they’re opening themselves up to a lot of negative publicity and highlighting a part of their past they’re probably not too proud of. This wouldn’t be a problem if we didn’t grant copyrights for such absurd lengths of time. If we still had the rules that were on the books when most of these cartoons were made (28 years, renewable once), then all films made before 1952 would now be in the public domain, which would encompass the vast majority of these cartoons. That would allow the studios to officially disavow any support for them while allowing people to view them.

It’s an interesting question whether putting these films on YouTube could constitute fair use. The fact that the entire work is being shown obviously cuts strongly against fair use on the third factor (the amount and substantiality of the portion used). However, the second and fourth factors would cut strongly in favor of fair use—there is no commercial market, and the work is of particular historical importance. As to the first factor, one could argue that the cultural climate in 2008 is so different from the climate in 1935 that the act of showing the film in 2008 has a fundamentally different “purpose and character” than when it was first shown, thereby rendering the simple act of showing the video, at least on a non-profit basis, transformative.

Update: OK, this one is even worse.

Every once in a while, a Slashdot post wanders out of the realm of the science/IT areas where the editors have the most expertise, and the results are often underwhelming. For example:

“The bill to ban genetic discrimination in employment or insurance coverage is moving forward. Is this the death knell of private insurance? I think private health insurance is pretty much incompatible with genetic testing (GT) for disease predisposition, if said testing turns out to be of any use whatsoever. The great strength of GT is that it will (as technology improves) take a lot of the uncertainty out of disease prediction. But that uncertainty is what insurance is based on. If discrimination is allowed, the person with the bad genes is out of luck because no one would insure them. However, if that isn’t allowed, the companies are in trouble. If I know I’m likely to get a certain condition, I’ll stock up on ‘insurance’ for it. The only solution I can see is single-payer universal coverage along the lines of the Canadian model, where everyone pays, and no one (insurer or patient) can game the system based on advance knowledge of the outcomes. Any other ideas? This bill has been in the works for a while.”

At the risk of committing the same sin and opining outside my own area of expertise, I’d say this is rather misguided. I should give the guy credit for understanding the basic point that insurance is about managing risk. If you’re 100 percent sure you’ll need a heart transplant in the near future, and you buy a policy that will pay for it, that’s not an “insurance policy.” It’s just a health care plan. An insurance policy is a tool for managing the risks of events that you don’t know will definitely happen.

Unfortunately, this anonymous reader takes that kernel of truth and uses it to draw sweeping conclusions that just don’t follow from it. Genetic tests hardly ever tell you precisely what diseases you’ll get and when you’ll get them; rather, they tell you about dispositions and tendencies. They say “your chance of getting heart disease is twice as high as normal” or “you’re likely to get Parkinson’s disease sometime in your 40s or 50s.”

If it were true that anyone with an elevated risk of health problems would be ineligible for health insurance, then you’d also expect that men under 30 would be ineligible for auto insurance. But of course, that’s not what happens. Insurance companies take the elevated risk into account in setting premiums. In a world with widespread genetic screening, the price of your insurance would take into account your genetic predispositions. Those who are blessed with good genes would pay lower premiums, while those with bad genes would pay higher premiums.
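The arithmetic behind risk-rated pricing is straightforward: the premium is roughly the expected claim cost plus a loading for overhead and profit. Here is a minimal Python sketch of that idea, with hypothetical numbers of my own rather than anything drawn from the Slashdot post or from actual actuarial practice; the risk multiplier stands in for whatever a genetic test might reveal.

# Toy illustration of risk-rated premiums (all numbers hypothetical).
# Premium is roughly expected annual claim cost * (1 + loading for overhead/profit).

def annual_premium(base_prob, avg_claim_cost, risk_multiplier=1.0, loading=0.15):
    """Price a one-year policy for a single insured.

    base_prob       -- baseline chance of a claim in a year (e.g. 0.02)
    avg_claim_cost  -- average cost of a claim if one occurs
    risk_multiplier -- e.g. 2.0 if a genetic test doubles the estimated risk
    loading         -- markup covering administration and profit
    """
    expected_cost = base_prob * risk_multiplier * avg_claim_cost
    return expected_cost * (1 + loading)

# Average genes vs. a test result that doubles the estimated risk:
print(annual_premium(0.02, 50_000, risk_multiplier=1.0))  # roughly 1150
print(annual_premium(0.02, 50_000, risk_multiplier=2.0))  # roughly 2300

In this sketch the higher-risk customer isn’t uninsurable; they simply pay more, just as a 22-year-old male does for auto coverage.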

Now, reasonable people can object that this is unfair. And there will likely be a small minority of individuals whose genes are so bad that they’ll be unable to pay the premium required to properly compensate the insurance company for the risk they’re taking. But if you’re inclined to have the state do something about this, it doesn’t by any means follow that the state needs to run the entire insurance/payment system. Rather, the state can take a variety of actions targeted at the losers of the genetic lottery while leaving the market free to work for the majority of individuals with average or below-average risks. This can take several forms. One would be premium subsidies at the front end: say, the state picks up a percentage of the premium for people with above-average premiums. Another would be to directly subsidize treatments for the most expensive-to-treat diseases, which would have the effect of reducing premiums for people with those diseases. Or you can (although I think we shouldn’t) continue in the direction we’ve been going, of imposing all sorts of implicit cross-subsidies in the health care market itself (such as the tax preferences for employer-provided group policies and rules requiring hospitals to treat patients regardless of their ability to pay).
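To see how a front-end subsidy could sit on top of risk-rated pricing without the state running the whole system, here is a continuation of the toy model above (again, purely hypothetical numbers of my own, not a concrete proposal): the state covers some share of whatever portion of an individual’s premium exceeds the average.

# Hypothetical front-end premium subsidy: the state pays some share of the
# amount by which a person's risk-rated premium exceeds the average premium.

def out_of_pocket_premium(premium, average_premium, subsidy_share=0.5):
    excess = max(0.0, premium - average_premium)
    return premium - subsidy_share * excess

# The higher-risk customer from the sketch above, against a ~$1,150 average:
print(out_of_pocket_premium(2300.0, 1150.0))  # 1725.0

The insurer still collects the full risk-rated premium, so its incentive to price risk accurately stays intact; the transfer to the losers of the genetic lottery happens in the open, on the budget, rather than through hidden cross-subsidies.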

This isn’t a health care blog, and I’m not a health care expert, so I won’t venture an opinion on which of those options, if any, is most desirable. But it’s a non-sequitur to assert that because we’ll be able to assess risk more accurately, private insurance will no longer be viable. The insurance industry is extremely good at pricing risk in other parts of the economy, and it would price risk in health care too if the government didn’t devote so much effort to preventing it from doing so. None of this adds up to a strong argument for a centrally-planned government health care system.

Megan McArdle suggests that the Patriot Act, while bad, is hardly a harbinger of a police state. She points out, correctly, that American history is full of violations of civil liberties, some of them—the suspension of habeas corpus during the Civil War, the internment of Japanese Americans during World War II—probably worse than anything the Bush administration has done. And if we’re looking at things from a narrowly legalistic perspective, it’s plainly not the case that we’re in a uniquely bad place. The courts were far less protective of civil liberties for most of the 20th century than they are today.

However, I think this telling of things misses some important context. Most importantly, it misses the point that the federal government has changed a great deal over the last century. In 1908, the United States was a balkanized country with a small, distant federal government. Few people had telephones and (obviously) nobody had a cell phone or an Internet connection. The federal government, for its part, didn’t have an NSA, a CIA, or an FBI, and even if they had existed they wouldn’t have had access to the kind of transportation, communications, and computational capacities that they have today. All of which is to say, if the federal government circa 1908 had wanted to build a panopticon police state, they wouldn’t have been able to do it, because they wouldn’t have had the technology or the manpower to do so.

So the first 150 years of American history just isn’t relevant when we’re talking about the rise of a modern police state. There’s a reason Russia and Germany got totalitarian police states in the middle of the 20th century; this was the first time modern transportation and communications technologies gave governments the ability to exert that kind of control. And while the United States was spared the rise of totalitarianism, the post-World War II period was still an extremely bad one from a civil liberties perspective. J. Edgar Hoover was an extremely bad man who wielded a great deal of power with essentially no accountability. The federal government spied on activists, journalists, politicians, and celebrities, and those with access to these surveillance tools used them to blackmail, manipulate, and ruin those they didn’t like.

Luckily, Watergate happened, and Congress passed a series of reforms that dramatically enhanced judicial and Congressional oversight of the executive branch. Things were a lot better in the early 1980s than they had been at any time since the early 20th century. Since then, though, we have seen a gradual erosion of those safeguards. I just put together a presentation on the subject, and it’s a long list: CALEA, roving wiretaps, national security letters, Carnivore and its successors, the Patriot Act, warrantless wiretapping, and probably other programs we don’t know about.

If this process continues unchecked, we will reach a point where the NSA has the technological capability to track a breathtaking amount of information about every American. And if the wrong people get hold of that infrastructure, they could do an enormous amount of damage. If the surveillance state is allowed to grow for another decade or two, civil liberties may well end up in worse shape than at any previous point in American history.

Will that happen? I’m optimistic that it won’t. I think we’ll be able to at least slow the process down and impose some additional oversight. But if the worst doesn’t come to pass, it will be precisely because thousands of Americans were alarmed enough about these developments to fight back against them. I would far rather overestimate the threat and be proven wrong than underestimate it and wake up one morning in a world where the 21st century’s J. Edgar Hoover has the power to blackmail anyone in America.

This is an absolutely devastating review of Ubuntu:

In recent years Linux has suffered a major set-back following the shock revelations from SCO Group, whose software had been stolen wholesale and incorporated into illegal distributions of Linux. For the past five years, the overwhelming effort of Linux developers has gone into removing SCO’s intellectual property from the Linux operating system – but as the threat of litigation has subsided many users are asking once-again if Linux is a serious contender?

…if you object to this communism, tough luck: The so-called “Completely Fair Scheduler” is incorporated into the “kernel” ( which is the Linux equivalent of the MS Dos layer in older versions of windows). This “feature” alone is enough to frighten serious users away from the upstart operating system… Windows users have no need of the “Completely Fair Scheduler” because we have modern scheduling software such as Microsoft Outlook (above). Outlook allows you to give your time to whoever you want, regardless of any socialist definitions of ‘fairness’.

I’ve traditionally been favorable toward Linux-based operating systems, but this puts them in a whole new light.

I’ve been noticing recently that wi-fi connections are flakier than they used to be. It seems to me that from about 2001 to 2005, it was almost unheard-of for my home wi-fi connection to suddenly drop out on me. In the last year or two, it has seemed like this is an increasingly common occurrence. For the last half hour or so, my Internet connection has been going out for 5 or 10 seconds at a time every few minutes. It’s not a huge problem, but it happens just often enough to be pretty annoying.

I can think of a number of possible explanations for this. One is that my current laptop, a MacBook I bought about a year ago, might have a lower-quality wireless card. Another is that I’m using wi-fi in more places where it might be hard to get good coverage. Or maybe I’m imagining things.

But it also seems possible that we’re starting to experience a tragedy of the wi-fi commons. I seem to recall (and Wikipedia confirms) that 2.4 GHz wi-fi effectively offers only three non-overlapping channels, and that the wi-fi protocol isn’t especially well designed to handle multiple networks using the same channel in close proximity. It has now become commonplace for me to whip out my laptop in an urban setting and see a dozen or more wi-fi networks, which suggests there’s got to be some serious contention going on for those channels.
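To put a rough number on that intuition, here is a toy back-of-the-envelope model (my own illustration, not a real 802.11 simulation): assume each nearby access point picks one of the three non-overlapping 2.4 GHz channels at random, and count how many co-channel neighbors a typical network ends up with.

# Toy model: N access points each pick one of 3 non-overlapping channels at
# random; how many other networks does each one share its channel with?

import random

def avg_cochannel_neighbors(n_networks, n_channels=3, trials=5_000):
    total = 0.0
    for _ in range(trials):
        channels = [random.randrange(n_channels) for _ in range(n_networks)]
        # For each network, count the other networks on the same channel.
        total += sum(channels.count(c) - 1 for c in channels) / n_networks
    return total / trials

for n in (3, 6, 12, 24):
    print(n, "networks:", round(avg_cochannel_neighbors(n), 1), "co-channel neighbors on average")

With a dozen networks in range, each one is sharing its channel with three or four others on average, and because 802.11 stations on the same channel defer to one another before transmitting, that sharing shows up as lower throughput and occasional stalls.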

If I’m right (and I might be wildly off base), I’m not sure where the analysis goes from there, either from a technical perspective or a policy one. One knee-jerk libertarian answer is to suggest this is an argument against allocating a lot of spectrum to be used as a commons, because a commons tends to be over-used and there’s no one in a position to resolve this kind of contention. On the other hand, maybe people are working on better protocols for negotiating this kind of contention and achieving a fair sharing of bandwidth without these problems. Or perhaps—at least for wi-fi—it would be possible to allocate enough spectrum that there would be room to go around even in dense urban areas.

Dingel points to a paper (non-paywalled draft here) exploring the historical connection between the free trade movement and the movement for worldwide copyright harmonization:

Free traders failed repeatedly for sixty years after the end of the Civil War to reduce the average tariff to its immediate prewar level. They failed despite making a case that, by comparison with the one made for free trade today, was compelling. Specifically, the principles of free labor engendered an antimonopoly argument for trade. Free trade, its advocates argued, would eliminate the special privileges granted to producers in specific industries, most notably cotton goods, iron, and steel. It would promote competition, lower prices, and raise consumers’ real incomes…

Carey attempted to turn the tables on the free traders: he argued that free trade promoted monopoly, and protection mitigated it. His conviction was sincere—but that particular part of his argument was unpersuasive, and relatively few of his followers bothered to repeat it. He was much more persuasive in arguing that international copyright promoted monopoly. In the face of the latter argument, the proponents of free trade and international copyright were put on the defensive…

One wonders whether the tireless advocacy of international copyright by free traders like Bryant—who framed the cause as one inextricably related to free trade—hindered the advancement of their principal cause. The long-awaited sweeping tariff reductions were deferred until 1913. Might the wait have been shorter if the antimonopoly credentials of the free-trade advocates had not been called into question?

This is a fascinating question. One of the things I find really interesting about the 19th century political debate is that the opposing political coalitions were more sensibly aligned, perhaps because people had a slightly clearer sense of what was at stake. My impression (which may be wrong in its details) is that the free traders tended to be liberals and economic populists. They clearly understood that protectionism brought about a transfer of wealth from relatively poor consumers to relatively wealthy business interests. The opposing coalition was made up of business interests and xenophobes making fundamentally mercantilist arguments about economic nationalism.

Today’s free trade debate is much weirder, because enough businesses want to export things that significant parts of the business community are for freer trade. On the other hand, the liberals who fancy themselves defenders of relatively poor consumers find themselves in bed with predatory industries like sugar and steel that have been using trade barriers to gouge consumers. And the “trade” debate has increasingly come to be focused on issues that don’t actually have much to do with trade, whether it’s labor and environmental “standards,” copyright and patent requirements, worker retraining programs, cross-border subsidies, etc.

I suspect part of what’s happening is that in the United States, at least, consumers are so rich that they really don’t notice the remaining costs of protectionism. A T-shirt at Target might cost $10 instead of the $8 it would cost if there were no trade barriers with China, but this is such a tiny fraction of the average American’s budget that they don’t really care. Likewise, if the domestic price of rice or flour were to double, a significant number of Americans wouldn’t even notice. In contrast, in the 19th century, Americans were still poor enough that a 10 or 20 percent increase in the price of basic staples might be the difference between being able to afford meat once a week and having to skip meals once in a while to make ends meet. We may now be rich enough that we can afford to be politically clueless.