May 2006

For anyone who ever needed to understand what’s so good about derivative works, presenting: 10 Things I Hate About Commandments.

Like its sibling, Must Love Jaws, it fuses many different copyrighted works together into a hilarious, farcical cultural commentary.

It might be easy to assume that the use of each copyrighted work is a fair use because the entire piece is parody. But it’s certainly not a parody of each work it uses. I wonder whether this artist might catch a lawyer letter or a lawsuit pretty soon.

These works also illustrate why there’s some weakness to the argument that there can or should be hermetically sealed copyright-based entertainment and non-copyright based entertainment. If cultural referents from the copyright side can’t be used in the non-copyright side, the non-copyright side is diminished.

(ht: IPCentral)

UPDATE: I e-mailed the creator, who declined to comment. The conclusion I would draw is that he is not confident that these pieces do not violate copyright.

Ars points out how Skype plans to use, and has used, consumer demand to prevent being blocked by ISPs.

Skype’s battleplan is simple. If their user base is large enough, companies will think twice about tampering with Skype traffic. When Brazil’s biggest telecom pulled the plug on Skype, the outcry in the country was big enough that the decision was soon reversed. [The head of Skype’s European operations, James] Bilefield said, “The community has the power to change things.”

If consumers want unfiltered Internet access, they’ll get it. Regulators, go away.

TechLawJournal has carefully parsed the statements issued by Verizon and BellSouth denying participation in the NSA spying program. I’ll quote TLJ liberally here, with permission.

Regarding the BellSouth statement, TLJ notes that it took three working days and two weekend days to prepare a three-paragraph response. As to the substance:

BellSouth uses the phrases “customer calling information” and “customer calling records”. In contrast, the USA Today article uses the phrases “phone call records” and “domestic call records”. BellSouth associates the word “customer” with the word “record”. There is a difference between what USA Today wrote, and what BellSouth now denies.

BellSouth portrays the USA Today article as asserting that BellSouth provided customer identifying information combined with the customer’s call information. In fact, the USA Today article only asserts that BellSouth turned over call information. Moreover, the USA Today article points out the difference. It states that “Customers’ names, street addresses and other personal information are not being handed over as part of NSA’s domestic program”. The article added that “But the phone numbers the NSA collects can easily be cross-checked with other databases to obtain that information.”

Thus, the BellSouth statement denies something that USA Today did not assert, and leaves undenied that which USA Today did actually assert.

Of course, it is another question whether BellSouth, in writing its statement, understood there to be a difference between “customer calling records” and “phone call records”, and intended its statement to constitute a non-denial.

On Verizon’s May 16 statement:

Verizon’s six paragraph statement is longer than BellSouth’s, but employs the same approach. It restates the assertions of USA Today, with variations, and then denies its restatements.

Verizon uses the phrases “customers’ domestic calls”, “customer phone records”, and “customer records or call data”. Like BellSouth, it adds the word “customer”. USA Today wrote about “phone call records”, without the word “customer”.

Verizon does at one point deny that it provided “any call data”, but it then immediately follows this with the phrase “from those records”, which is a reference back to “customer phone records”. This leaves open the possibility that it provided “call data” that it retrieved from a database other than “customer phone records”.

This is helpful insight from a dogged, independent reporter. And subscription rates aren’t too expensive, either.

Today there are reports that a startup headed by former FCC Wireless Bureau chief John Muleta and @Home founder Milo Medin has asked the FCC to give it a spectrum license to offer a national wireless broadband service. No auction, just an assignment. According to Reuters, “Most wireless spectrum is auctioned to the highest bidder but M2Z has offered to pay the U.S. Treasury 5 percent of its gross revenues from the premium broadband service it plans to offer alongside free, but slower, Internet access.” You can read their filing here (PDF).

If this deal goes through, we will have officially learned nothing. The FCC Spectrum Policy Task Force Report found that “To increase opportunities for technologically innovative and economically efficient spectrum use, spectrum policy must evolve towards more flexible and market-oriented regulatory models.” But this would cut in just the opposite direction. Spectrum would be licensed for one particular use and wouldn’t be flexible. The five percent kickback to the U.S. Treasury is eerily reminiscent of the uncompetitive franchise fees that cable operators have paid to municipalities for a local monopoly. And what would this do to the natural development of a market in wireless broadband when every other competing network has to bid for spectrum at auction? M2Z was able to raise over $400 million in venture capital, so why can’t it put its money where its mouth is and buy the license?

Chicago law professor Doug Lichtman has a great new paper about the patent holdout problem:

A patent holder whose patent is made public only after the relevant technology has been widely adopted can demand not only a royalty that reflects the intrinsic value of that technology but also a royalty that reflects the value of each infringing firm’s technology-specific investments. This is the familiar patent holdout problem, and it particularly plagues the standard-setting process. Importantly, and the insight missed both in practice and in the literature today, the greater the number of patent holders in this holdout position, the less each can expect to earn from this tactic. That is, if fifteen patent holders can credibly threaten to shut down an infringer for six months while that firm redesigns its products and services, the value associated with avoiding six months of disruption must be split fifteen ways. If three hundred patent holders can credibly make that threat, the pro rata share drops by a factor of twenty. More patents means less money per patent holder. Less money, in turn, means less of an incentive for a firm to strategically delay in the hopes of being a patent holdout, and less of an incentive for an accidental patent holdout to actually bring suit.
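Lichtman’s pro-rata point is simple arithmetic, and a toy sketch makes the dilution concrete. (This is my illustration, not from the paper; the disruption value is made up.)

```python
# Toy illustration of the pro-rata holdout argument: the value of
# avoiding a shutdown is roughly fixed, so each credible patent
# holder's expected share shrinks as the number of holders grows.

def holdout_share(disruption_value, num_holders):
    """Expected payoff per holdout when the surplus is split evenly."""
    return disruption_value / num_holders

V = 150_000_000  # hypothetical value of avoiding six months of disruption

few = holdout_share(V, 15)    # 15 credible patent holders
many = holdout_share(V, 300)  # 300 credible patent holders

print(few / many)  # prints 20.0: the per-holder share drops 20x
```

Twenty times less money per holder means a correspondingly weaker incentive to play the holdout game at all, which is exactly the paper’s point.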

This might explain why standards like MPEG, which are buried in dozens of overlapping patents, haven’t been brought to their knees by litigation. There are probably a number of patent holders who could credibly threaten to shut down the world’s DVD players. However, the MPEG-LA can credibly refuse such extortionate demands, because it knows that the moment it allows one patent holder to extort more than a fair share, the floodgates would open to continued extortion.

But I also expect there’s some legal realism at work here. No judge is crazy enough to order the sale of all DVD devices be halted. Even if a particular patent holder theoretically has the right to obtain an injunction, in practice they are constrained by the fact that if they behave too outrageously, the judge in charge of the case will begin to rule against them. This further strengthens the hand of standards-setters in negotiations with would-be patent trolls.

What I found most interesting about the paper, though, is how frankly Lichtman–who I don’t think is a critic of patents generally or software patents in particular–lays out the flaws in the current patent system, especially as it relates to high-tech inventions with hundreds of components. He explains how difficult it is for honest technology creators to discover patents that might be relevant to the technology in development, how the system gives inventors the perverse incentive not to search for relevant patents to avoid treble damages, and how the holder of an undiscovered patent can lie in wait until other companies have made significant technology-specific investments and then extort large sums of money from those hapless companies.

Lichtman offers a creative explanation as to why this screwy system hasn’t done more damage than it has, but the broader question is: what can we do to fix the system so it’s not so broken in the first place?

His paper is (as he puts it) mercifully short, so I encourage you to check it out. He has invited comments over at the Chicago law blog.

Work at Cato

May 17, 2006

In case any of you are interested in careers in public policy, I wanted to draw your attention to several great entry-level job opportunities at the Cato Institute. I wanted to particularly note that my old job at Cato, the staff writer position, is open. You get to attend all of Cato’s events, and you get to work closely with David Boaz, Cato’s executive vice president, and the man who makes a lot of the day-to-day management decisions at Cato. In my opinion, it’s the best entry-level position at Cato. The primary qualifications are excellent writing skills and a deep familiarity with the libertarian philosophical tradition. Past staff writers have gone on to be successful policy analysts and journalists.

There are also three research assistant positions available: in the defense, welfare, and health care departments, respectively. If you’ve got a background and/or interest in any of those areas and would like to work for the nation’s premier libertarian think tank, consider sending them a resume. Finally, if you’d like to be the one who pitches op-eds by Cato scholars, there’s a position as manager of editorial services available. Top-notch editing and writing skills are a must in that position as well.

Yglesias on NSA Spying

May 17, 2006

Matthew Yglesias has a fantastic post about what’s wrong with data-mining programs like that apparently being deployed by the NSA:

The problem is that when you’re searching for a rare condition, like being a terrorist, even a very precise statistical tool is going to overwhelmingly give you false positives. Ordinarily, when people are doing statistical analyses they take 95 percent confidence to constitute a statistically meaningful result. But there are 200 million people in the NSA pool and only a handful of terrorists. How many? Let’s be generous and say there are 200 al-Qaeda sleeper agents in the USA. Then you apply a 95 percent accurate statistical filter to 200 million people. What you’re going to wind up with are 10 terrorists labeled non-terrorists, 190 terrorists labeled terrorists, and a whopping 10 million non-terrorists labeled terrorists.

That’s a process that works. You’ve reduced the size of your search pool by an order of magnitude. The program “works.” But what does it really accomplish?

In practice, nothing. The NSA can’t hand the FBI the names of 10 million Americans and ask them to investigate–that would be a silly waste of time.

Now what you can do is that if in addition to your secret, illegal, oversight-free call records database you’re also running a secret, illegal, oversight-free wiretapping operation is start listening to the content of everyone in the 10 million group’s conversations. Obviously, the manpower’s not going to exist to actually listen to all that, but maybe you have another data-mining algorithm that can run on the content. Say this one is also 95 percent accurate. That means 10 more terrorists will get away. And 7.5 million innocent people will be off the hook. But you’re still left with a pool of 2.5 million innocent people and only 180 terrorists left under suspicion.

What you would do with that information just isn’t clear to me. There’s still not enough manpower to do serious investigations into all those people. And it would be insanely abusive anyway to subject such a huge group to invasive investigations when over 99.9 percent of them are totally innocent. Trying to compile a list of “people with Arab-sounding names” would be about as effective as these two computer algorithms.

So you’re not likely to catch many terrorists with a program like that. What such a database would be useful for is harassment and blackmail. Want to know who’s been spilling White House gossip to the New York Times? All you need is the reporter’s phone number and you can dramatically narrow down the list of likely leakers. Want to find out if a political opponent has a mistress? Pull up a list of his phone calls over the previous 6 months and you’ll have a short list in a matter of minutes.

Matt concludes:

In a lot of ways, that’s the most troubling aspect of this. You have a program that would be much more effective for abusive uses than it would be for its ostensible purpose. The people ultimately in charge of the program have a well-earned reputation for dishonesty and a well-earned reputation for hardball politics. They’ve gone out of their way to make sure that the program operates in total secrecy and is subject to no meaningful oversight. Why on earth would you want a program like that?

Go read the whole thing.

Update: Obviously 5% of 10 million is 500,000, not 2.5 million. I don’t think that really affects his argument, though.
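Matt’s two-stage arithmetic can be checked in a few lines. This sketch is mine, not from the post, and uses the corrected second-stage figure from the update:

```python
# Reruns the base-rate arithmetic from the post: a 95%-accurate filter
# applied to a population where terrorists are vanishingly rare still
# yields an enormous number of false positives.

population = 200_000_000
terrorists = 200
accuracy = 0.95  # treated as both sensitivity and specificity, per the post

innocents = population - terrorists

# Stage 1: filter on call records
true_pos_1 = terrorists * accuracy         # ~190 terrorists flagged
false_neg_1 = terrorists - true_pos_1      # ~10 terrorists missed
false_pos_1 = innocents * (1 - accuracy)   # ~10 million innocents flagged

# Stage 2: content filter run only on the flagged pool
true_pos_2 = true_pos_1 * accuracy         # ~180 terrorists remain
false_pos_2 = false_pos_1 * (1 - accuracy) # ~500,000 innocents, not 2.5M

print(round(false_pos_1), round(false_pos_2))
```

Even with the correction, the pool of innocents under suspicion after two filtering passes outnumbers the terrorists by a factor of nearly three thousand, which is why the update doesn’t change the argument.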

Ask the average American where to go to get an identification card, and they will tell you, of course, to go to the local Department of Motor Vehicles. Across the country, DMVs are the dominant source of identification cards, with perhaps the State Department in second place because it issues passports. People who think about this carefully might realize that many corporations also issue identification cards.

So, with governments eclipsing all other issuers, who do you suppose Americans trust to issue identity credentials?

Banks.

A Ponemon Institute study, funded by Unisys, has found that banking institutions are most trusted to issue and manage identity credentials (graph, page 6). The least trusted organizations are police and law enforcement.

Banks were trusted on every continent, and tax authorities were distrusted on every continent. Police authorities are distrusted deeply in the United States and Latin America, but not as much in Asia and Europe. Curiously, the postal service is trusted very highly in the United States, while registering little reaction, positive or negative, on other continents.

To avert a national ID, “identity management” is the way to go: cards, tokens, and devices that share only the information required for transactions. Who should be issuing those things? Banks and other private entities.

Over at the Cato blog, Radley Balko reports that James Sensenbrenner has prepared legislation to require your ISP to maintain records of your online activities to assist law enforcement officials. For the children, of course:

In addition, Sensenbrenner’s legislation–expected to be announced as early as this week–also would create a federal felony targeted at bloggers, search engines, e-mail service providers and many other Web sites. It’s aimed at any site that might have “reason to believe” it facilitates access to child pornography–through hyperlinks or a discussion forum, for instance.

Speaking to the National Center for Missing and Exploited Children last month, Gonzales warned of the dangers of pedophiles using the Internet anonymously and called for new laws from Congress. “At the most basic level, the Internet is used as a tool for sending and receiving large amounts of child pornography on a relatively anonymous basis,” Gonzales said.

I’ll just say I don’t think that sounds like a good idea.

Geek Humor

May 17, 2006

Update: Hmmm… our template seems not to be wide enough to accommodate cartoons. Click it (or the permalink below) to read the whole thing.