Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Luis points to an interesting paper on the fragility of intrinsic motivation in volunteer efforts. Luis explains:

The paper also has some more detailed observations that come out of the experimental work; among them that voluntary cooperation is fragile; group composition matters (i.e., groups with more conditional cooperators will be healthier); and that ‘belief management’ matters; i.e., if people think that they are in a group with more conditional cooperators, that group will be more robust. None of these will come as a huge surprise to anyone who has been involved with volunteer communities, but still interesting to see it experimentally confirmed.

I’ve always suspected that something like this is the case, and that it explains in part why the GPL is so successful, since it uses copyright to force cooperation and penalize defection, and (importantly) makes a clear public statement that that is the case, which serves a signaling function (everyone in the community knows these are the ground rules) and a filtering function (people who aren’t interested in collaborating don’t join as much as they join other groups.)

I think this is the key explanation for the outrage over the MS-Novell deal a couple of years back. By signing on to the GPL, Novell had signaled that it intended to honor the free software community’s principle of reciprocity. Then, it signed an agreement with Microsoft that looked like an attempt to skirt the GPL in a way that gave Novell an unfair advantage over other members of the Linux ecosystem. People who weren’t steeped in the ethos of the free software community saw it as a simple business deal, and objections to it as some kind of knee-jerk reaction to profit-making. They didn’t realize the extent to which the community is made up of “conditional cooperators” whose participation is contingent on everyone else in the community following the rules. When Novell “defected” from the community’s expectations, the rest of the community felt a need to ostracize it to ensure that no one else would be tempted to similarly defect.

Luis also linked to this old post of his which has more interesting citations on intrinsic motivations.

Malcolm Gladwell has an engaging write-up of Intellectual Ventures, a kind of reductio ad absurdum of the patent system:

In August of 2003, I.V. held its first invention session, and it was a revelation. “Afterward, Nathan kept saying, ‘There are so many inventions,’ ” Wood recalled. “He thought if we came up with a half-dozen good ideas it would be great, and we came up with somewhere between fifty and a hundred. I said to him, ‘But you had eight people in that room who are seasoned inventors. Weren’t you expecting a multiplier effect?’ And he said, ‘Yeah, but it was more than multiplicity.’ Not even Nathan had any idea of what it was going to be like.”

The original expectation was that I.V. would file a hundred patents a year. Currently, it’s filing five hundred a year. It has a backlog of three thousand ideas. Wood said that he once attended a two-day invention session presided over by Jung, and after the first day the group went out to dinner. “So Edward took his people out, plus me,” Wood said. “And the eight of us sat down at a table and the attorney said, ‘Do you mind if I record the evening?’ And we all said no, of course not. We sat there. It was a long dinner. I thought we were lightly chewing the rag. But the next day the attorney comes up with eight single-spaced pages flagging thirty-six different inventions from dinner. Dinner.”

As Mike points out, the blindingly obvious conclusion from this is that patents are way, way too easy to get. If a room full of smart people—even absolutely brilliant people—can come up with 36 “inventions” in one evening, the logical conclusion is that “inventions” are not rare or hard to produce, and that therefore there’s no good public policy reason to offer monopolies to people who invent them. After all, the classic theory of patent law says just the opposite: that inventions are so difficult and expensive to produce that we wouldn’t get them at all without patent protection. That’s clearly not true of the “inventions” IV is developing, which means that if IV does get patents on them, the patent system is seriously flawed.

The National Cable & Telecommunications Association blog did a series of posts back in February about the OECD study. There seem to be three basic criticisms. First, businesses in the US have a higher proportion of “special access” lines than other countries ranked, and these are not counted in the statistics, while businesses with normal DSL lines are counted. Second, the OECD statistics are focused on “connections per 100 inhabitants” rather than the proportion of households with an Internet connection. The result is to penalize the US, which has a larger-than-average household size (all of whom can share a single Internet connection), while giving an edge to countries with smaller household sizes. Finally, the report relies on advertised speeds and prices, which the NCTA suggests exaggerate Japan’s lead relative to a metric that focuses on the speeds actually available in that country. Obviously, the NCTA has an agenda to promote, so it’s worth taking the criticisms with a grain of salt, but they’re interesting in any event. Thanks to reader Wyatt Ditzler for the link.
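The household-size criticism is easy to see with a toy calculation. The numbers below are made up purely for illustration (they're not from the OECD report): two countries with identical household broadband penetration can score very differently on a per-inhabitant metric.

```python
# Illustrative only: how average household size skews a per-inhabitant
# broadband metric. Both hypothetical countries have 80% of households
# connected, but the larger-household country looks worse per capita.
countries = {
    "A (avg household 2.6)": {"household_size": 2.6, "penetration": 0.80},
    "B (avg household 2.0)": {"household_size": 2.0, "penetration": 0.80},
}

for name, c in countries.items():
    # Connections per 100 inhabitants = connected households / people * 100
    per_100 = c["penetration"] / c["household_size"] * 100
    print(f"{name}: {per_100:.1f} connections per 100 inhabitants")
```

Country A comes out at about 30.8 connections per 100 inhabitants and country B at 40.0, even though exactly the same share of households in each is online.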

Slashdot recently linked to this comparison of the cost of Windows in Brazil and the US. This brings to mind a point I think I’ve seen Mike make: beyond the general point that libertarians should celebrate free software because it’s an example of non-coercive production of public goods, libertarians also have reasons to like free software because it’s more resistant to the coercive power of the state. When software is produced by a commercial company and sold in the marketplace, it’s relatively easy for the state to tax and regulate it. Commercial companies tend to be reflexively law-abiding, and they can afford the lawyers necessary to collect taxes or comply with complex regulatory schemes.

In contrast, free software will prove strongly resistant to state interference. Because virtually everyone associated with a free software project is a volunteer, the state cannot easily compel them to participate in tax and regulatory schemes. Any attempt to tax or regulate such projects is likely to be met with passive resistance: people will stop contributing entirely rather than waste time dealing with the government.

Free software thus has the salutary effect of depriving the state of tax revenue. But even better, free software is likely to prove extremely resistant to state efforts to build privacy-violating features into software systems. CALEA requires telecom infrastructure to include hooks for eavesdropping by government officials, but it will prove extremely difficult to get similar hooks added to free software. No one is likely to volunteer to add such a “feature”, and even if the state added it itself, it wouldn’t have any realistic way to force people to use its version.

OECD vs. SpeedTest

by Tim Lee on May 5, 2008 · 8 comments

Nate Anderson points to a new report on broadband around the world that I’m looking forward to reading. I have to say I’m skeptical of this sort of thing, though:

Critics of the current US approach to spurring broadband deployment and adoption point out that the country has been falling on most broadband metrics throughout the decade. One of the most reliable, that issued by the OECD, shows the US falling from 4th place in 2001 to 15th place in 2007. While this ranking in particular has come under criticism from staunchly pro-market groups, the ITIF’s analysis shows that these numbers are the most accurate we have. According to an ITIF analysis of various OECD surveys, the US is in 15th place worldwide and it lags numerous other countries in price, speed, and availability—a trifecta of lost opportunities.

With an average broadband speed of 4.9Mbps, the US is being Chariots of Fire-d by South Korea (49.5Mbps), Japan (63.6Mbps), Finland (21.7Mbps), Sweden (16.8Mbps), and France (17.6Mbps), among others. Not only that, but the price paid per megabyte in the US ($2.83) is substantially higher than those countries, all of which come in at less than $0.50 per megabyte.

Now, SpeedTest is a tool for measuring the speed of your broadband connection, and it purports to have data from around the world. I have no idea how reliable their methodology is generally, or how good their testing equipment is around the world, but I’ve used it in several different places in the US and it at least seems reliable around here. According to their measurements, the US has an average broadband speed of 5.3 Mbps, roughly in line with what the OECD study said. But the numbers for the other countries cited are wildly different: Japan is 13 Mbps, Sweden is 8.7 Mbps, South Korea is 6.1 Mbps, and France is 5.5 Mbps. If these numbers are right, the US is behind Sweden and Japan, and slightly behind South Korea and France, but we’re not nearly as far behind the curve as the OECD reports would suggest.
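To put the two data sets side by side, here's a quick sketch using only the figures quoted in this post, computing each country's lead over the US under the OECD/ITIF numbers versus the SpeedTest numbers:

```python
# Average broadband speeds in Mbps as quoted in this post:
# (OECD/ITIF figure, SpeedTest measurement)
speeds = {
    "US":          (4.9,  5.3),
    "Japan":       (63.6, 13.0),
    "South Korea": (49.5, 6.1),
    "Sweden":      (16.8, 8.7),
    "France":      (17.6, 5.5),
}

us_oecd, us_speedtest = speeds["US"]
for country, (oecd, st) in speeds.items():
    # Each country's multiple of the US average under each data set
    print(f"{country}: {oecd / us_oecd:.1f}x US (OECD), "
          f"{st / us_speedtest:.1f}x US (SpeedTest)")
```

The contrast is stark: by the OECD figures Japan is about 13 times faster than the US, but by SpeedTest's measurements it's only about 2.5 times faster, and South Korea shrinks from roughly 10x to roughly 1.2x.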

And then there’s this:

The ITIF warns against simply implementing the policies that have worked for other countries, however, and it notes that a good percentage of the difference can be chalked up to non-policy factors like density. For instance, more than half of all South Koreans lived in apartment buildings that are much easier to wire with fiber connections than are the sprawling American suburbs.

Now, I haven’t examined SpeedTest’s methodology, so they might have all sorts of problems that make their results suspect. But it’s at least one data point suggesting that the OECD data might be flawed. And I think the very fact that there seems to be only one widely cited ranking out there ought to make us somewhat suspicious of its findings. Scott Wallsten had bad things to say about the OECD numbers on our podcast. Is there other work out there analyzing the quality of the OECD rankings?

Ideology

by Tim Lee on May 2, 2008 · 40 comments

True or false: “Openness” is the dominant ideology of Silicon Valley. Discuss.

Update: I should clarify that I mean “openness” in the technical sense of open standards, open platforms, open networks, open source, etc.

One of the more positive consequences of the whole Sydnor/Lessig debate is that it’s enticed the always-interesting David Friedman to weigh in on tech policy issues, giving me the opportunity to quote him in his entirety:

I’ve been trying for years to persuade Larry to admit to libertarian leanings. I’m not sure from comments here whether he actually did it while I wasn’t looking or is merely being accused of it. My interpretation of his attitude, long ago, was libertarian instincts hindered by a leftist self-image.

Consider his basic argument, a book or two back, for treating the net as a commons–by which he actually meant a commodity, since he wasn’t proposing zero cost access. It was that if the people in the middle, the ones transmitting the bits, got a veto over what sorts of bits they transmitted, that would make innovation very hard, since there would be too many people whose agreement you needed before doing anything.

I think he acknowledged–certainly he knew–that the counter argument was that what was being transmitted varied in ways that were relevant to the cost of transmitting it–burst vs steady stream, material where very low lag was important (real time games, distance surgery) vs material where it wasn’t (downloading), etc. So requiring the same cost for everything, or even specifying the cost structure, meant that some people were free to impose external costs on others without their consent. His conclusion depended on the judgment that the inefficiencies due to permitting that were less important than the inefficiencies due to the high transaction costs of innovation with the alternative system.

What didn’t seem to occur to him was that he had just sketched the argument against zoning. There too, the individual’s decisions–what sort of house to build, whether to use his land for residence or commerce, and the like–can impose external costs on others. There too, requiring the permission of those others, whether directly or via variances in zoning, makes innovation hard. The same argument Larry was making for the net as a commodity, applied to land use, is an argument for strong individual property rights and against land use control. Once you take seriously the point that forcing people to take account of all effects of their acts on others means nobody ever gets to do anything, you undercut a lot of the arguments for a wide variety of government interventions.

Quite so. I’ve said before that network neutrality (the technical principle, not the proposed legal regime) is the division of labor. The end-to-end principle allows decentralized decision-making on the Internet in precisely the same way that the price mechanism allows decentralized decision-making in the broader economy: by giving people a simple, predictable interface to the rest of the world that isn’t dependent on the whim of any central decision-maker.

ISPs that try to implement discriminatory network policies create the same kinds of problems as government officials that enact regulations: they often cause unintended consequences (like blocking Lotus Notes) and they cause people to waste resources evading the restrictions (as with BitTorrent header encryption).

Now, I should hasten to add that as I’ve written before, the fact that neutral networks have good properties doesn’t mean that mandating them is good public policy. Because of course a network neutrality rule would itself have unintended consequences and lead people (in this case ISPs) to waste resources trying to evade the rules. But if we’re talking about network design, rather than government regulation, it seems to me that libertarians ought to look favorably on decentralized networks mediated by the end-to-end principle for all the same reasons we look favorably on decentralized economies mediated by the price mechanism.

I was planning to leave the Lessig/Sydnor thing alone because I feel like we’ve beat it to death, but Tom’s really pissing me off. For those who haven’t been following the now-voluminous comments (and I don’t blame you), Mike Masnick recently wrote the following:

[Lessig] wasn’t praising communism in the slightest — but pointing out how regulatory regimes in the US can impact someone’s day-to-day life quite strongly, while for certain aspects of life in Vietnam those similar regulations do not impact them. That doesn’t mean communism is good or that life is great in Vietnam. In fact, Lessig pointed out that neither point is true. But he was pointing out what the factual situation was concerning certain aspects of day-to-day life.

You don’t dispute those points — you can’t, because they’re true. You merely take those statements and pretend they’re an endorsement of communism. It’s not even remotely a defense of communism. It’s showing the problems with US regulations, something I would think you would endorse.

And Tom responds:

I must distance myself from Mike’s claim that the admittedly deregulatory effect of terrorizing civilians “is something I would think you would endorse.”

And I had to pick my jaw up off the floor.

In case English isn’t your first language, let me dissect this a little bit. Scholars have a basic obligation to represent their opponents’ words accurately. If you put a phrase in quotes, you have an obligation for the quoted phrase to be a faithful representation of what the person being quoted actually said. That obligation counts double if you precede the quote with a phrase like “Mike’s claim” that unambiguously attributes the entire sentence to the person you’re criticizing. And in particular, if you quote half of a sentence, say, the verb and direct object, you have an obligation not to change the subject to something totally different. If I write “Ice cream is great,” it would be dishonest for you to write “I must distance myself from Tim’s claim that the Holocaust ‘is great.'” Yes, I literally wrote the phrase “is great,” but the subject of that phrase wasn’t “the Holocaust,” and implying that it was is just as dishonest as writing “Tim claimed ‘the Holocaust is great.'”

What Tom did here is identical. In Mike’s comment, the subject of the phrase “something I would think you would endorse” is “showing the problems with US regulations.” Tom’s response plainly implies that the subject of the phrase “something I would think you would endorse” was “the admittedly deregulatory effect of terrorizing civilians.” This, of course, is a totally different proposition, and something that Mike never said. Yet Tom has the audacity to precede the sentence with “Mike’s claim,” plainly attributing the whole sentence to Mike.

This is, quite simply, a lie. And a stupid, transparent lie at that. I’m really confused about what Tom thinks he’s accomplishing. Surely he doesn’t believe the readership of TLF is so dumb that we’ll be persuaded by these kinds of grade-school rhetorical sleights of hand.

Update: Now that I’ve posted this, it occurs to me that I’ll probably see a post on IPCentral in a few minutes with the headline “Lessig supporter endorses the Holocaust.”

Over at Larry Lessig’s blog, David Friedman has a really interesting comment about libertarian attitudes toward patent and copyright law (I’m going to relax my usual rule about the phrase “IP” because of the way Friedman and Lessig have framed it):

You write: ” There is a divide in the libertarian camp about IP extremism.”

I think that understates the case. There has long been a divide among libertarians about IP itself. Some see it as the purest and most morally defensible form of property, on the grounds that it is produced by the human mind without using any unproduced resources, such as land, which one might have difficulty justifying the ownership of. Others see it as a clear violation of rights, on the grounds that if something such as a book belongs to me, I have the right to do with it as I will.

This is an accurate summary of the state of play among philosophically-minded libertarian generalists. Anybody who’s spent a lot of time in libertarian circles can almost recite the competing arguments in their sleep. Frankly, they start to seem kind of vacuous after a while.

This is most obvious in the anti-“IP” camp. If you believe that copyright and patent law are nothing but infringements on people’s natural rights, then you have a simple, compelling answer to every question in this area of law. You’re also going to be completely left out of the practical discussions of copyright and patent reform. Because if all copyright and patent monopolies are illegitimate, there’s no obvious way to tell which ones are the most illegitimate. Or to put it a different way, if you’re an “IP” abolitionist and you want to participate in contemporary policy debates, you need to have an additional set of principles that tells you which parts of the copyright and patent systems to reform first, and these principles are ultimately going to do more to drive your policy choices than the principled opposition to government monopolies in all of their forms.

Larry Lessig, Demagogue?

by Tim Lee on April 30, 2008 · 92 comments

Tom Sydnor and Richard Bennett have both made a big deal of the fact that Larry Lessig is purportedly a demagogue. Richard, for example, says:

It’s an error to consider Lessig a serious scholar with serious views about serious issues. He’s a performer/demagogue who will latch onto any issue that he can use to promote the Lessig brand.

At the Stanford FCC hearing, he portrayed capitalism as a law of the jungle, in pictures of tigers eating prey. What intellectual critique is appropriate to refute that point of view, a picture of George Soros writing a fat check to Free Press so they can bus partisans to the hearing?

Now as it happens, I watched Lessig’s Stanford presentation, so I know what Richard is referring to here. And while this characterization is not wrong, exactly, it’s certainly not a fair summary of Lessig’s point. Here’s what he actually said:

If we had right policy, I don’t think that we would be talking about questions of trust. I don’t think the Department of Justice after the IBM case was talking about whether we trust IBM, or trust Microsoft, or trust Google. We don’t talk about trusting a company just like you don’t talk about trusting a tiger, even though the brand management for tigers has very cute images that they try to sell you on how beautiful and wonderful the tiger is.

If you looked at that picture and you thought to yourself the great thing for my child to do would be to play with that tiger you’d be a fool because a tiger has a nature. The nature is not one you trust with your child. And likewise, a company has a nature, and thank god it does. Its nature is to produce economic value and wealth for its shareholders. We don’t trust it to follow good public policy. We trust it to follow that objective. Public policy is designed to make it profitable for them to behave in a way that serves the objectives of public policy, in this case the objective of an open, neutral network. It makes it more profitable for them to behave than to misbehave.
