Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.
The Institute for Policy Innovation has an essay by Lee Hollaar on its website criticizing the fair use critique of the DMCA. The premise of the essay seems to be that DMCA critics haven’t been appropriately specific about which fair uses the DMCA restricts, and that many of the things DMCA critics call fair use are not, in fact, fair use under the law.
There are two problems with this line of argument. In the first place, Hollaar uses an absurdly narrow definition of fair use in order to argue that DRM systems don’t restrict it. For example:
Very few digital rights management systems prevent transformative fair use of a work, such as including quotes from a work in a criticism, comment, or news report.
It’s obviously true that DRM systems do not prevent you from watching a video and then typing up a transcript of what it says. In fact, it’s so obvious that I wonder if Hollaar’s being a bit obtuse. What DMCA critics are concerned about here is the ability to include video excerpts in their own creative works. And DRM schemes clearly do prevent them from doing that.
The other thing that occurs to me as I study Verizon’s patents is that patent law presents some huge problems from the standpoint of the rule of law. We libertarians frequently hammer home the importance of having laws that are clear and predictable. On network neutrality, for example, we point out that no one has been able to come up with language that unambiguously elucidates what is and isn’t allowed.
Yet every single patent is a miniature government regulation. If the FCC had issued regulations that looked like this, we libertarians (myself included) would be kicking and screaming about how unfair it is to expect people to comply with such vague requirements. But Vonage has had to stake the future of its company on correctly predicting how the courts will interpret phrases like:
software running on the central processing unit, causing the server to formulate and transmit a reply to a query for translation of a name specified in a second protocol received via the interface, wherein the software controls the central processing unit to include an address of a destination terminal device conforming to the first protocol associated with the name if the server receives the query for translation within a predetermined time window.
…and it goes on for pages and pages. That’s as bad as anything you’ll find in Snowe-Dorgan.
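Out of curiosity, I tried to translate that claim language into code. The sketch below is purely my own guess at what the words describe (the names, data, and thirty-second window are all invented), not a legal reading of the claim:

    import time

    # Hypothetical table mapping names in one protocol (e.g., SIP-style
    # addresses) to addresses in another (IP). The claim specifies none
    # of these details; this data is made up for illustration.
    TRANSLATION_TABLE = {
        "alice@example.com": "203.0.113.10",
        "bob@example.com": "203.0.113.22",
    }

    QUERY_WINDOW_SECONDS = 30  # the claim's "predetermined time window"

    def reply_to_translation_query(name, window_opened_at):
        """Formulate a reply to a query for translation of a name,
        including a destination address only if the query arrived
        within the predetermined time window."""
        if time.time() - window_opened_at > QUERY_WINDOW_SECONDS:
            return {"name": name, "address": None}  # window has expired
        return {"name": name, "address": TRANSLATION_TABLE.get(name)}

    print(reply_to_translation_query("alice@example.com", time.time()))

If a dozen lines of dictionary lookup can plausibly be read onto the claim, that rather underscores the point: nobody reading this language can know in advance where its boundaries lie.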
I’m doing a story on the Verizon-Vonage case, and the more I think about the patent system, the more trouble I’m having believing that anyone could seriously support the Federal Circuit’s current patent rules.
Verizon won its case on three patents, two of which were almost identical. So we’ve got this one, which seems to cover the concept of converting an IP address into a phone number. And then we’ve got this one, which seems to cover the concept of making a wireless phone call via the Internet.
I want to step back from the specifics of the case (the Federal Circuit may or may not reverse the ruling—although even if they do, it won’t help if Vonage has already declared bankruptcy) and ask what possible policy rationale there could be for granting patents like these. Why would we want to set up a system that in principle allows the first person who figures out how to hook the PSTN up to packet-switched networks to have a 20-year monopoly on that market?
Even if we had some insanely innovative guy who in, say, 1992, invented the first VoIP application, and even if at that point no one else had ever thought of sending voice calls over the Internet, I still don’t understand the policy rationale for banning anyone else from developing VoIP software until 2012. Even if it was wildly innovative, novel, and non-obvious in 1992, the sheer march of technology would have rendered it obvious long before 2012. Hell, today I suspect most competent CS grad students could develop a perfectly functional VoIP application in a matter of weeks using off-the-shelf programming tools. What’s been holding it back is a lack of infrastructure, not any mysteries about how to write the software.
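To make that concrete, here is a toy sketch, using nothing but Python’s standard library, of the core mechanics of VoIP: chop audio into small frames and ship them over UDP. This is my own illustration, not anyone’s product; real systems add codecs, jitter buffers, and signaling protocols like SIP, but none of that is deep magic:

    import socket
    import time
    import wave

    # Hypothetical peer address; in a real call this would be negotiated
    # by a signaling protocol such as SIP.
    PEER = ("198.51.100.5", 5004)
    FRAME_MS = 20  # send audio in 20 ms chunks, as RTP stacks typically do

    def send_call_audio(wav_path):
        """Packetize a WAV file and stream it over UDP."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        with wave.open(wav_path, "rb") as audio:
            frames_per_packet = audio.getframerate() * FRAME_MS // 1000
            while True:
                chunk = audio.readframes(frames_per_packet)
                if not chunk:
                    break
                sock.sendto(chunk, PEER)
                time.sleep(FRAME_MS / 1000)  # crude pacing; real stacks clock properly

    def receive_call_audio(port=5004, out_path="received.wav"):
        """Collect incoming packets and write them out as audio.
        Assumes both sides agreed on 8 kHz mono 16-bit PCM; runs
        until interrupted, which is fine for a sketch."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        with wave.open(out_path, "wb") as out:
            out.setnchannels(1)
            out.setsampwidth(2)
            out.setframerate(8000)
            while True:
                packet, _ = sock.recvfrom(4096)
                out.writeframes(packet)

The hard parts of shipping a real VoIP product are engineering polish and infrastructure, which is exactly the point.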
So somebody explain the argument to me. How is giving a single company a monopoly over an emerging Internet technology good for innovation, even when that company really is years ahead of its time?
My co-blogger Brian Moore notes another of these ridiculous “wi-fi theft” cases, this one in the UK. Brian’s take is spot-on:
I can’t think of a better example of a victimless crime. First, the “victims” were so unconcerned about people accessing their network that they didn’t bother to give it even the minimal security that most wireless access points will automatically prompt you to install. Secondly, absolutely nothing the “perpetrators” did harmed the “victims.” This is a positive externality: it’s like me listening to good music coming out of your house. We both benefit from your purchase. It’s better than bad, it’s good!
The only exception would be if the connection burglars were doing something that negatively impacted the owners of the network — such as downloading massive files that slowed their connection. But nothing in the article implies this — they merely saw him sitting in a car across the street with a laptop. If someone cracks your encryption and steals your credit card numbers, then yes, this is a crime. But that’s not what’s happening.
Here’s the crime they were charged with: “dishonestly obtaining electronic communications services with intent to avoid payment.” Imagine a similar crime — I purchase a newspaper and throw it out into the street after I’m done with it. Someone walks by and picks it up, and starts reading it. Then the police arrest him for “dishonestly obtaining paper communications services with intent to avoid payment.” What’s worse, in this example, the original owner of the newspaper has actually lost the ability to use it because there’s only one copy, even if it’s obvious he doesn’t care about it. “Stealing” wireless access doesn’t (normally) impact the owner’s ability to use it.
There is a loophole (that Stallman hasn’t found a way to close yet) in the GPL that allows distributors to ship proprietary binaries on the same CD as free software, but they can’t be part of the same program/system. The GPL is designed to make it as difficult as possible (and GPLv3 more so) to run both proprietary and free software at the same time.
Now, Merriam-Webster defines a “loophole” as “an ambiguity or omission in the text through which the intent of a statute, contract, or obligation may be evaded.” With that in mind, here is the relevant provision of GPL v2:
Mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.
Now, as the definition above makes clear, the term “loophole” describes an interpretation of a contract that is contrary to the intention of its drafters. If the ability to distribute free and proprietary software side-by-side on a CD were a “loophole,” it would be mighty hard to explain why the GPL’s drafters added a provision that explicitly permits such distribution.
But whether that was a loophole or not, at least Stallman is working hard to close it, right? Well, here’s the latest version of the GPL 3 draft:
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation’s users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
This is a bit wordier, but it seems to me that the intent is no less clear: the GPL specifically and deliberately permits distributors to “ship proprietary binaries on the same CD as free software.” Blafkin either doesn’t know what a loophole is, or didn’t bother to read and understand the GPL before criticizing it.
Ed Felten describes the latest phase of the cat-and-mouse game between the HD-DVD/Blu-ray cartel and hackers trying to crack their AACS encryption scheme:
To reduce the harm to law-abiding customers, the authority apparently required the affected programs to issue free online updates, where the updates contain new software along with new decryption keys. This way, customers who download the update will be able to keep playing discs, even though the software’s old keys won’t work any more.
The attackers’ response is obvious: they’ll try to analyze the new software and extract the new keys. If the software updates changed only the decryption keys, the attackers could just repeat their previous analysis exactly, to get the new keys. To prevent this, the updates will have to restructure the software significantly, in the hope that the attackers will have to start their analysis from scratch.
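A toy model may help make the key-revocation dance concrete. The real AACS design uses a far more elaborate broadcast-encryption scheme (a subset-difference tree of device keys), but the core idea, sketched here with a deliberately fake cipher and invented key material, is that each player has its own key, and new discs simply stop encrypting the content key to players known to be compromised:

    import hashlib

    def toy_encrypt(key, plaintext):
        """Toy XOR 'cipher' for illustration only -- not real crypto."""
        pad = hashlib.sha256(key).digest()
        return bytes(p ^ k for p, k in zip(plaintext, pad))

    toy_decrypt = toy_encrypt  # XOR is its own inverse

    # Each licensed player ships with its own device key (invented here).
    device_keys = {"player_A": b"device-key-A", "player_B": b"device-key-B"}
    revoked = {"player_B"}  # player_B's key leaked to attackers

    title_key = b"per-disc content key"

    # New discs carry the title key encrypted under every non-revoked
    # device key, so a revoked player simply finds no entry it can use.
    disc_header = {
        name: toy_encrypt(key, title_key)
        for name, key in device_keys.items()
        if name not in revoked
    }

    print(toy_decrypt(device_keys["player_A"], disc_header["player_A"]))
    print("player_B" in disc_header)  # False: revoked player is locked out

Restructuring the player software, as Felten describes, is the same move one level up: the goal is to make the attackers’ extraction recipe go stale, not just the particular keys it yielded.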
Over at Ars, I cover the European Commission’s new report advocating the creation of a new set of patent courts to unify Europe’s patent system:
The report argues that European competitiveness is hampered by its limited use of patents compared with Japan and the United States. It notes that Americans and Japanese both file more patents per capita than do Europeans, and speculates that Europe’s unwieldy patent system is holding back innovation.
However, the report pays little attention to the possibility that too much patenting could be an even bigger impediment to economic growth. The report devotes only one short paragraph to patent thickets, patent trolls, and other problems created by low-quality patents. It doesn’t offer any concrete recommendations for ensuring that the European patent system avoids the problems now being encountered in the United States.
Nor does the report address the risk that creating a separate patent court will lead to a judicial system that is too sympathetic to expanding the scope of patents. Some observers argue that’s precisely what happened in the United States. They note that since Congress created the Federal Circuit in 1982 to hear patent appeals, the scope of patents has been greatly expanded. Software and business method patents have been legalized, and the bar for obviousness has been dramatically lowered. Some speculate that the large number of former patent lawyers among the judges of the Federal Circuit is one reason for this shift: lawyers who have devoted their lives to patent law will naturally be sympathetic to arguments for expanding the scope of the patent system.
Ryan Paul at Ars has a write-up on the continuing revolt against the REAL ID Act among the states:
The New Hampshire House of Representatives voted last week to block implementation of the federal government’s controversial Real ID Act. Since New Hampshire Governor John Lynch does not intend to veto the Real ID rejection bill, it will become law if approved by the state senate. Characterized by New Hampshire Representative Sherman Packard as “the worst piece of blackmail to come out of the federal government,” the Real ID Act creates a set of uniform standards for state-issued ID cards, and mandates the construction of a centralized national database to store information on American citizens…
Idaho and Maine have already passed bills rejecting implementation of the Real ID Act, and similar proposed bills are being evaluated in South Carolina and Arkansas as well as New Hampshire. ACLU state legislative department director Charlie Mitchell says that this is just the beginning of a “tidal wave of rebellion against Real ID.” If enough state governments refuse to comply with the requirements of the Real ID Act, it is likely that Congress will have to reevaluate the entire plan. “Across the nation, local lawmakers from both parties are rejecting the federal government’s demand to undermine their constituents’ privacy and civil liberties with a massive unfunded mandate,” says Mitchell. “Congress must revisit the Real ID Act and fix this real mess.”
Indeed. Our own Jim Harper has been on the front lines in this fight, testifying before state legislatures and urging them to reject REAL ID. Perhaps his hard work is paying off.
I’m excited to report that, as you can see here, I’ve been named an adjunct scholar at the Cato Institute. I’m not moving back to DC, but once my replacement here at the Show-Me Institute starts on May 1, I’m going to be spending about half my time at home doing tech policy research for Cato. The remainder of my time will be spent on a variety of freelance work. Initially I’ll be doing some freelance work for Show-Me to ensure a smooth transition, but longer-term, I’m hoping to be able to focus full-time on tech policy work.
That means I should be moderately more prolific here at TLF. Also, keep an eye out for my contributions to Cato @ Liberty and the sadly-neglected-of-late TechKnowledge, two great publications you ought to be reading whether I’m contributing to them or not.