March 2007

Mark Blafkin and I have been having an interesting and productive discussion in the comments to Braden’s post about the GPL v. 3. Mark says:

The FSF and the GPL itself actively attempt to limit collaboration between proprietary and free software communities. As you’ll find in the article previously mentioned, Mr. Stallman says that it is better for GNU/Linux to not support video cards rather than include a proprietary binary.

In fact, the entire basis of the GPL is to frustrate cooperation between the immoral proprietary software guys and free software. The viral nature of the GPL (if you use code and integrate or build upon it, your code must become GPL) is designed to prevent that cooperation because it will lessen the freedom of the free software itself.

Continue reading →

The Washington Post reports today on a couple of Virginia high school students who are suing anti-plagiarism service turnitin.com for copyright infringement. According to press accounts, the service is used by 6,000 schools, including Harvard and Georgetown. The way it works is that students turn in papers to their teachers by submitting them through Turnitin’s website. Turnitin then compares the submitted papers to a snapshot of the web, to databases of published articles, and to its own database of millions of other student papers. The problem is that the submitted papers are added to the company’s database of student papers without student permission. The plaintiffs in the case specifically marked their papers asking that they not be archived, but they were archived nonetheless. The students have a website at dontturnitin.com.
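For readers who want a concrete sense of what that comparison step involves, here is a minimal sketch of one common approach to this kind of matching: fingerprint each paper as a set of overlapping word n-grams ("shingles") and measure how much of a new submission overlaps with papers already in the archive. Turnitin hasn’t published its actual algorithm, so the approach, the class and function names, and the in-memory "archive" below are all my own illustrative assumptions, not the company’s design.

```python
# Hypothetical sketch of shingle-based plagiarism matching.
# Not Turnitin's actual algorithm; names and structure are illustrative only.
import hashlib
import re


def shingles(text: str, n: int = 5) -> set[str]:
    """Return hashed overlapping n-word shingles of the text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    grams = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return {hashlib.sha1(g.encode()).hexdigest() for g in grams}


class PlagiarismIndex:
    """Toy archive mapping paper IDs to their shingle fingerprints."""

    def __init__(self) -> None:
        self._archive: dict[str, set[str]] = {}

    def add(self, paper_id: str, text: str) -> None:
        # The step at issue in the lawsuit: the submitted paper is
        # retained in the database for future comparisons.
        self._archive[paper_id] = shingles(text)

    def check(self, text: str) -> dict[str, float]:
        """Return, for each archived paper, the fraction of the
        submission's shingles that also appear in that paper."""
        submitted = shingles(text)
        if not submitted:
            return {}
        return {
            paper_id: len(submitted & stored) / len(submitted)
            for paper_id, stored in self._archive.items()
        }


# Example: resubmitting an archived paragraph scores 1.0 against the original.
index = PlagiarismIndex()
index.add("paper-001", "The quick brown fox jumps over the lazy dog near the river bank today.")
print(index.check("The quick brown fox jumps over the lazy dog near the river bank today."))
```

Note the design tension this toy version makes obvious: the archive only becomes more useful if every submitted paper is retained for future comparisons, and that retention is precisely the copying the students object to.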

What’s striking to me is how similar this is to Google Book Search. It remains to be seen whether Turnitin will make a fair use defense, but their past statements suggest that they will. (Here is a PDF of a legal opinion that Turnitin commissioned.)

Google is copying books without the copyright owners’ consent and storing them in a searchable database, just as Turnitin does with student papers. Google copies the whole book, but argues it’s a fair use because they only display a “snippet” of the text in search results. Turnitin also copies the whole work and only displays snippets to teachers if there’s a plagiarism match. Both Google and Turnitin make commercial use of the works they copy and they both arguably serve educational purposes. And if Google’s use doesn’t affect the “potential market” for licensing books to be included in searchable databases, then Turnitin’s use certainly doesn’t affect the potential market for licensing papers to be included in a plagiarism database.

So, can these cases be distinguished? If not, are they both fair use? I’m still thinking about this one, and I’d like to hear your analysis.

I’m starting a research project on network neutrality, and I’m hoping some of our smart readers can point me to stuff I ought to be reading. Below the fold I’ve got a brief summary of what I’m looking for. If you’ve ever studied the technical, economic, or political aspects of Internet routing policies, I would be eternally grateful if you could click through and give me your suggestions.

Continue reading →


Tech Policy Weekly from the Technology Liberation Front is a weekly podcast about technology policy from TLF’s learned band of contributors. The show’s panelists this week are Jerry Brito, Drew Clark, Hance Haney, and Tim Lee. Topics include:

  • Patent reform looms large on the D.C. agenda
  • What does the FreeConference controversy have to do with net neutrality?
  • A new e-voting bill makes the rounds

There are several ways to listen to the TLF Podcast. You can press play on the player below to listen right now, or download the MP3 file. You can also subscribe to the podcast by clicking on the button for your preferred service. And do us a favor, Digg this podcast!

  • Subscribe to Tech Policy Weekly from TLF on Odeo.com
  • Subscribe to Tech Policy Weekly from TLF in iTunes
  • Add to Pageflakes
  • Subscribe in Bloglines

I thought this was interesting, and with permission I quote it in its entirety from ipcentral:

Having examined the latest draft of the Free Software Foundation’s General Public License version 3 (GPLv3) several times, and having looked over the Rationale document, I have come to a diagnosis.

If GPLv3 were a human being, one would say that it has delusions of grandeur. It thinks it is a law rather than a license.

Legally speaking, GPLv3 is a license, which is a form of contract. It specifies the terms on which the holder of copyrights or patents on software will permit others to make use of it. It is a bit of a special case because it is open to the world at large; anyone may use it, without payment, as long as they abide by its terms, which is unusual in contract law. However, there are doctrines of promissory estoppel and third party beneficiaries that take account of such things, and GPLv3 is firmly within the legal genre of contract.

But the GPLv3 was apparently drafted on the assumption that it is something quite different — that it is a regulation controlling a range of general behavior by software users, and that it is being promulgated by a governmental body with law-creating power.

The difference between a contract and a regulation is extremely important.

Continue reading →

I really wish that the pro-regulatory people would stop scaring musicians with wildly implausible horror stories:

The Rock the Net campaign, made up mostly of musicians who are on smaller record labels or none at all, said they are fearful that if the so-called “Net neutrality” principle is abandoned their music may not be heard because they do not have the financial means to pay for preferential treatment.

Some said they do not want to pay. The Web, they said, has allowed many unknown musicians to put their music online, giving fans instant access to new music and giving bands greater marketing capabilities.

This is implausible on so many levels that I don’t even know where to begin. I’ve argued in the past that ISPs are unlikely to have the bargaining power to extract preferential access fees, that any fees are likely to be bundled with basic connectivity, and that ISPs have little or no control over what appears on a user’s screen.

But let’s say I’m wrong about all that and a dystopian future does materialize in which the Internet is limited to the websites of a handful of deep-pocketed corporations. Then independent artists are screwed, right?

Well, not really. How do artists reach fans now? A lot of them use sites like MySpace, Blogger, and YouTube. Sites, in other words, run by large corporations with deep pockets. Even in the exceedingly unlikely event that the Internet is somehow closed off to all but the largest corporations, it’s likely that Google and News Corp. will pay what’s necessary to ensure that their own properties continue to function.

So to buy the artists’ fears, you not only have to believe that the telcos will succeed in radically transforming the Internet at the logical layer, but you also have to believe that they’ll be able to twist the arms of companies like Google that control the content layer into changing their sites to lock out local artists. Not only does it seem exceedingly unlikely that they’d be able to do that, but it’s not even clear why they’d want to. If News Corp is paying the appropriate bribe to give MySpace preferential access, why would Verizon care what kind of content MySpace is making available?

Another person who testified about HR 811 on Friday was disability access advocate Harold Snider. He makes some good points about how DREs improve the accessibility of elections to disabled voters, and raises concerns that the requirement for a paper trail will delay the arrival of fully accessible voting. But then he veers off into hyperbole:

I am very proud of the fact that I was able to complete a Doctorate at Oxford University in 1974, where I studied 19th Century British History. I learned that in early 19th-Century England, a group of people called Luddites attempted to destroy early industrial production machinery because they perceived it as a threat, and had no confidence in it. I believe that the same is true with those who favor H.R. 811. In the 21st Century there are still people who have no faith in modern technology and in its ability to deliver a secure electronic voting process.

This argument is extremely silly, and the supporters of DREs are only shooting themselves in the foot when they make it. The most vocal critics of DREs are computer geeks. Jon Stokes, for example, writes in-depth reviews of new computer chips for Ars Technica. The idea that computer science professors, free software enthusiasts, and the Electronic Frontier Foundation are luddites doesn’t pass the straight face test.

Multiple-language Ballots

March 29, 2007

I’ve been reading through last week’s testimony on the Holt bill, and I’m learning that one of the major concerns in designing an election system is ensuring accessibility for voters with limited English skills.

I’m normally pretty hostile to nativist English-only movements. If people want to speak Spanish, or Chinese, or Klingon in their private lives, that’s their business. And if a significant number of citizens are most fluent in a language other than English, I see nothing wrong with the government offering services in other languages. Just today my colleague Sarah Brodsky did an excellent post about a protectionist effort to require English fluency to be a commercial driver in Missouri.

However, I still have trouble seeing a strong argument for requiring voting systems to accommodate non-native speakers. American politics, at least at the federal level, is overwhelmingly carried out in English. If your grasp of English is so weak that you have difficulty deciphering a ballot, then chances are you’ll have an equally difficult time following contemporary political debates. And if you can’t follow the debate, you’re not likely to make very sensible choices in the voting booth.

I certainly don’t think the federal government should prohibit states from offering multi-lingual voting systems. But I also don’t think it makes sense to require states to accommodate non-English speakers. For states whose politics are carried out almost exclusively in English (which I believe is all of them outside of Florida and the Southwest), I think it’s perfectly reasonable for ballots to be exclusively in English.

Justin Levine claims to have predicted the Orwellian copyright dispute about Orwell’s works.

Continue reading →

GPL 3.0: v. (for Vendetta)

March 28, 2007

With the release of the most recent discussion draft today, one thing is immediately clear: this third version of the General Public License can be simply written “GPL v.” – where “v” stands not for “version” but for “vendetta.”

There’s little doubt that this GPL 3 draft is a vendetta against the patent non-assertion agreement we saw in the Microsoft and Novell deal. But it is also aimed at the use of technological protection measures like digital rights management. This may not upset the fundamentalists at the Free Software Foundation, but here’s something that I think will concern them: GPL code will become more isolated and less relevant in the technology marketplace.

Turning the Four Freedoms into the Ten Commandments

The GPL 3 draft is no longer just about protecting the four freedoms. Instead, it preaches about what can’t be done with software – thou shalt not use DRM, thou shalt not partner with proprietary software companies, etc. The draft contains provisions that block the use of anticircumvention technologies and patent non-assertion agreements. It’s the patent provision that attempts to strike a dagger at the heart of the collaboration between Microsoft and Novell.

Continue reading →