Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


The Wall Street Journal says ($) that EMI and Apple will announce tomorrow that “significant amounts” of EMI’s catalog will be available on iTunes sans copy protection. Fantastic. If this proves true, they’ll have earned at least one new customer—me.

Mark Blafkin and I have been having an interesting and productive discussion in the comments to Braden’s post about GPLv3. Mark says:

The FSF and the GPL itself actively attempt to limit collaboration between proprietary and free software communities. As you’ll find in the article previously mentioned, Mr. Stallman says that it is better for GNU/Linux to not support video cards rather than include a proprietary binary.

In fact, the entire basis of the GPL is to frustrate cooperation between the immoral proprietary software guys and free software. The viral nature of the GPL (if you use code and integrate or build upon it, your code must become GPL) is designed to prevent that cooperation because it will lessen the freedom of the free software itself.


I’m starting a research project on network neutrality, and I’m hoping some of our smart readers can point me to stuff I ought to be reading. Below the fold I’ve got a brief summary of what I’m looking for. If you’ve ever studied the technical, economic, or political aspects of Internet routing policies, I would be eternally grateful if you could click through and give me your suggestions.


I really wish that the pro-regulatory people would stop scaring musicians with wildly implausible horror stories:

The Rock the Net campaign, made up mostly of musicians who are on smaller record labels or none at all, said they are fearful that if the so-called “Net neutrality” principle is abandoned their music may not be heard because they do not have the financial means to pay for preferential treatment.

Some said they do not want to pay. The Web, they said, has allowed many unknown musicians to put their music online, giving fans instant access to new music and giving bands greater marketing capabilities.

This is implausible on so many levels that I don’t even know where to begin. I’ve argued in the past that ISPs are unlikely to have the bargaining power to extract preferential access fees, that any fees are likely to be bundled with basic connectivity, and that ISPs have little or no control over what appears on a user’s screen.

But let’s say I’m wrong about all that and a dystopian future does materialize in which the Internet is limited to the websites of a handful of deep-pocketed corporations. Then independent artists are screwed, right?

Well, not really. How do artists reach fans now? A lot of them use sites like MySpace, Blogger, and YouTube. Sites, in other words, run by large corporations with deep pockets. Even in the exceedingly unlikely event that the Internet is somehow closed off to all but the largest corporations, it’s likely that Google and News Corp. will pay what’s necessary to ensure that their own properties continue to function.

So to buy the artists’ fears, you not only have to believe that the telcos will succeed in radically transforming the Internet at the logical layer, but also that they’ll be able to twist the arms of companies like Google that control the content layer into changing their sites to lock out independent artists. Not only does it seem exceedingly unlikely that they’d be able to do that, but it’s not even clear why they’d want to. If News Corp. is paying the appropriate bribe to give MySpace preferential access, why would Verizon care what kind of content MySpace is making available?

Another person who testified about H.R. 811 on Friday was disability access advocate Harold Snider. He makes some good points about how DREs improve the accessibility of elections for disabled voters, and he raises concerns that the requirement for a paper trail will delay the arrival of fully accessible voting. But then he veers off into hyperbole:

I am very proud of the fact that I was able to complete a Doctorate at Oxford University in 1974, where I studied 19th Century British History. I learned that in early 19th-century England, a group of people called Luddites attempted to destroy early industrial production machinery because they perceived it as a threat, and had no confidence in it. I believe that the same is true with those who favor H.R. 811. In the 21st Century there are still people who have no faith in modern technology and in its ability to deliver a secure electronic voting process.

This argument is extremely silly, and the supporters of DREs are only shooting themselves in the foot when they make it. The most vocal critics of DREs are computer geeks. Jon Stokes, for example, writes in-depth reviews of new computer chips for Ars Technica. The idea that computer science professors, free software enthusiasts, and the Electronic Frontier Foundation are luddites doesn’t pass the straight face test.

Multiple-language Ballots


I’ve been reading through last week’s testimony on the Holt bill, and I’m learning that one of the major concerns in designing an election system is ensuring accessibility for voters with limited English skills.

I’m normally pretty hostile to nativist English-only movements. If people want to speak Spanish, or Chinese, or Klingon in their private lives, that’s their business. And if a significant number of citizens are most fluent in a language other than English, I see nothing wrong with the government offering services in other languages. Just today my colleague Sarah Brodsky did an excellent post about a protectionist effort to require English fluency of commercial drivers in Missouri.

However, I still have trouble seeing a strong argument for adapting voting systems to accommodate non-native speakers. American politics, at least at the federal level, is overwhelmingly carried out in English. If your grasp of English is so weak that you have difficulty deciphering a ballot, chances are you’ll have an equally difficult time following contemporary political debates. And if you can’t follow the debate, you’re not likely to make very sensible choices at the ballot box.

I certainly don’t think the federal government should prohibit states from offering multi-lingual voting systems. But I also don’t think it makes sense to require states to accommodate non-English speakers. For states whose politics are carried out almost exclusively in English (which I believe is all of them outside of Florida and the Southwest), I think it’s perfectly reasonable for ballots to be exclusively in English.

A very sensible video editorial from Walt Mossberg:

I agree with Mossberg that we need “a law written from the perspective of the consumer and the internet, rather than strictly from the perspective of the copyright holders.” But I think Mossberg is lumping together two things that are better kept distinct: the DMCA’s anti-circumvention language and its notice-and-takedown provisions. As I’ve said repeatedly on this site, I think the former is bad news from almost every perspective and should be repealed. But I don’t think the latter is so terrible, and I haven’t seen anyone propose an alternative that I can get excited about. Clearly, if copyright is going to mean anything, Viacom has to have some cause of action when people upload non-trivial amounts of its copyrighted materials to YouTube. For all of their flaws, the notice-and-takedown provisions seem to strike a pretty good balance. I would be hesitant to start lobbying Congress to reconsider that part of the DMCA before we have a clear idea of what ought to replace it.
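
For readers who haven’t followed the mechanics, the balance I’m describing is easy to sketch. Here’s a toy model of the notice-and-takedown sequence; this is my own simplification for illustration, not code from the statute or from any real hosting system:

```python
# A toy state machine for the DMCA notice-and-takedown process.
# Purely illustrative: my own simplified model of the statute's flow.

from enum import Enum, auto

class Status(Enum):
    HOSTED = auto()          # content is up
    TAKEN_DOWN = auto()      # host removed it after a notice
    RESTORED = auto()        # put back after a counter-notice
    IN_LITIGATION = auto()   # copyright owner sued; the courts take over

def receive_notice(status: Status) -> Status:
    """Copyright holder sends a takedown notice; the host must act
    expeditiously to keep its safe harbor."""
    return Status.TAKEN_DOWN if status == Status.HOSTED else status

def receive_counter_notice(status: Status, owner_filed_suit: bool) -> Status:
    """Uploader disputes the notice. Unless the copyright owner files
    suit within 10-14 business days, the host restores the material."""
    if status != Status.TAKEN_DOWN:
        return status
    return Status.IN_LITIGATION if owner_filed_suit else Status.RESTORED

status = Status.HOSTED
status = receive_notice(status)                 # Viacom complains
status = receive_counter_notice(status, False)  # uploader pushes back, no suit
print(status)                                   # Status.RESTORED
```

The counter-notice step is what does the balancing: the copyright holder gets prompt removal, but an uploader who’s willing to stand behind his upload gets it restored unless the holder is willing to go to court.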

Mike Masnick warns that the future of VoIP is in jeopardy:

You have to wonder how many times fans of the patent system have to repeat the mantra that “patents encourage innovation” before they can actually believe it. There continues to be so much new evidence, on nearly a daily basis, of patents doing the exact opposite that it’s hard to believe the patent system retains as many supporters as it does. The latest is that a ton of patent holders are preparing to sue over various VoIP-related patents, following the news of Verizon’s big win over Vonage for VoIP patents. The problem, of course, is that tons of companies (some big, some small) all claim patents on various aspects of VoIP — creating the very definition of the “patent thicket.” That is, there are so many patents around the very concept of VoIP that no one company can actually afford to offer a VoIP service, since the cost to license all the patents is simply too prohibitive. Expect plenty more lawsuits in the near future as this all comes out in court. The big players will use their patents to keep out competition, and the small players will use the patents to try to create an NTP-style lottery ticket. The lawyers will all win — but consumers who just want to use VoIP will lose big time. What’s wrong with letting companies simply compete in the marketplace and letting the natural forces of competition encourage innovation? Instead, we get patent holders trying to hold back competition and hold back innovation.

I think critics of the patent system need to be careful about overstating our case. I don’t think software patents will destroy the VoIP industry. Rather, they will serve as a steady drag on it, raising the cost of doing business and forcing innovative upstarts to spend their money hiring lawyers rather than engineers. This will, in turn, tilt the playing field to Verizon’s benefit.
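
To put some numbers on what a “steady drag” looks like, consider royalty stacking. The figures below are entirely made up for illustration; the point is only the arithmetic of a thicket, where each individual royalty looks modest but they stack:

```python
# Back-of-the-envelope royalty stacking. All numbers are hypothetical,
# chosen only to show how a patent thicket taxes an upstart's margins.

revenue = 10_000_000   # annual VoIP revenue (hypothetical)
gross_margin = 0.25    # margin before any patent licenses (hypothetical)
patent_holders = 12    # firms claiming a piece of VoIP (hypothetical)
royalty_each = 0.02    # each demands 2% of revenue (hypothetical)

royalties = revenue * royalty_each * patent_holders
profit_before = revenue * gross_margin
profit_after = profit_before - royalties

print(f"Royalties owed:  ${royalties:,.0f}")      # $2,400,000
print(f"Profit before:   ${profit_before:,.0f}")  # $2,500,000
print(f"Profit after:    ${profit_after:,.0f}")   # $100,000
```

Each 2 percent royalty sounds reasonable in isolation; a dozen of them consume nearly all of the upstart’s profit. Meanwhile, an incumbent with its own patent portfolio can cross-license its way out of most of that tax. That’s the tilt.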

Most likely, small, entrepreneurial firms will be squeezed out, leaving a bifurcated market: large, deep-pocketed incumbents on the one hand, and decentralized open source projects and overseas firms on the other. Tech-savvy Americans will have no trouble finding and installing innovative VoIP solutions, but the vast bulk of Americans will have to use whatever Verizon and other patent-heavy firms choose to dish out. (It’s an interesting question what will happen to firms like Skype, Apple, AOL, and Google that offer “pure Internet” voice calling and have not, to date, made a significant dent in the telephone market.)

What this case makes crystal clear is that there’s no appreciable connection between innovating and getting patents. No one would argue that Verizon has been more innovative than Vonage in the VoIP market, yet because Verizon has spent more money filing for patents in recent years, Vonage is placed in the ridiculous position of paying Verizon for the privilege of using Verizon’s “inventions.”

Having smart readers is great! Check out the comments to my post on wireless commons, wherein TLF readers who actually know what they’re talking about elaborate on the strengths and weaknesses of unlicensed spectrum and mesh networks. For example:

In general, the concept of spectrum commons is intuitively appealing. Unlicensed spectrum has already proven its value with the proliferation of WiFi, and the spectrum commons approach dangles the possibility of extending the promise of unlicensed spectrum to a near utopian degree. This is always presented as a superior alternative to the sclerotic bureaucracy of the FCC making decisions on spectrum use. However, in the real world where people are actually building modems, radios and consumer devices, the regulatory context of the FCC provides more than just an economic model of how spectrum is used (i.e. spectrum as property with markets vs unlicensed or common spectrum). It also provides a technical context for engineers who design and build the technology. RF is pretty wacky stuff and although increasing computational power and antenna technologies are of critical importance and key enablers to new wireless architectures and protocols, they don’t eliminate the world of cavity filters, intermodulation distortion, adjacent channel interference, etc.

Ultimately, the either/or approach is problematic. Spectrum commons, like unlicensed spectrum before it, holds great promise, and regulatory bodies should embrace it by making spectrum available. But it’s also 10 or 20 years away from being ready for primetime. There’s a lot of usable radio spectrum. The real answer is to embrace and enable multiple approaches and philosophies of spectrum usage.
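
To put a bit of flesh on the “RF is pretty wacky stuff” point: one classic headache is third-order intermodulation. Run two strong carriers through any slightly nonlinear amplifier and you get spurious products at 2f1 - f2 and 2f2 - f1, which land right next to the original channels. A quick sketch (the frequencies here are arbitrary examples, not anything from a real band plan):

```python
# Third-order intermodulation: two carriers at f1 and f2 in a nonlinear
# amplifier produce spurious products at 2*f1 - f2 and 2*f2 - f1.

def third_order_imd(f1_mhz, f2_mhz):
    """Return the two third-order intermod product frequencies in MHz."""
    return (2 * f1_mhz - f2_mhz, 2 * f2_mhz - f1_mhz)

low, high = third_order_imd(100.0, 100.2)
print(low, high)  # 99.8 100.4 -- squarely in the adjacent channels
```

No amount of clever software makes those products go away; they’re born in the analog hardware, which is the commenter’s point about cavity filters and adjacent channel interference.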

More good stuff here.

Don Marti has an excellent analogy to help illustrate what’s wrong with software patents:

If Victor invents something, and I describe it in prose, I’m not infringing. If he invents something and I build it as hardware, I am. But if I do something in between hardware and prose—“software”—where do you draw the line of where he can sue me? If Dr. David S. Touretzky doesn’t know where you draw the line between “speech” and “device,” how should the courts know?

All of the arguments for software patents work just as well for prose patents. Just as a software patent covers the algorithm, not the code, a prose patent could cover the literary device, sequence of topics, or ideas used to produce some effect on the reader…

The debate over software patents isn’t just an attempt to set one arbitrary line between the patentable and the unpatentable. It’s about resisting the slide toward higher and higher transaction costs that happens when patents creep into places where they don’t make sense. We have algorithm patents but not prose patents because lawyers and judges use analogies and other prose inventions more than they use algorithms.

Quite so. I think the reason you see such violent and near-unanimous dislike for software patents among computer programmers is that it’s not an abstraction for them. For most people, software is just a magical icon that sits on their desktop and does stuff when they double-click on it. The question of whether software should be covered by patents is akin to debates over who owns the moon: intellectually interesting, but not really relevant to their day-to-day lives. But what computer programmers see is that widespread enforcement of software patents would mean that a significant portion of their professional lives would suddenly require regular consultation with lawyers. This pisses them off in precisely the same way—and for precisely the same reasons—that patents on plot devices, analogies, literary styles, and other prose concepts would piss off writers.
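
A concrete way to see the algorithm/code distinction Marti is drawing: here are two implementations of the same algorithm that share essentially no code. (I’ve used binary search purely for illustration; it isn’t actually patented.) A copyright on one implementation doesn’t touch the other, but a patent on the algorithm would read on both:

```python
# Two implementations of the same algorithm -- binary search -- that
# share no code. A patent covers the algorithm, so both would infringe;
# copyright covers the code, so neither copies the other.

def binary_search_iterative(items, target):
    """Loop-based binary search over a sorted list."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def binary_search_recursive(items, target, lo=0, hi=None):
    """The same algorithm, expressed recursively."""
    if hi is None:
        hi = len(items) - 1
    if lo > hi:
        return -1
    mid = (lo + hi) // 2
    if items[mid] == target:
        return mid
    if items[mid] < target:
        return binary_search_recursive(items, target, mid + 1, hi)
    return binary_search_recursive(items, target, lo, mid - 1)

data = [1, 3, 5, 7, 9, 11]
assert binary_search_iterative(data, 7) == binary_search_recursive(data, 7) == 3
```

That’s why the threat feels so immediate to programmers: no matter how original their code is, the patent attaches to the underlying technique, and every technique they use becomes a potential occasion to call a lawyer.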