October 2006

Every week, I look at a software patent that’s been in the news. You can see previous installments in the series here. But this week, Luis Villa has done most of my work for me:

IBM has generally been very good about supporting open source, and as steven says, they’ve been very up front about their motivations–they are doing it because they want to make money, and they think open source and open standards help them make money.

This consistency has extended to their opinions on patents–they have made it clear that they think the system is broken, but they have also made it clear that they think patents are a perfectly legitimate business tool, and that they want to fix the system so that they can continue to make money on patents…

So it shouldn’t be a surprise to anyone that IBM are using patents to go after Amazon. What surprised me, after skimming the patents, is that the patents they are using to go after Amazon are so broad. With the exception of one (which is so opaque I can’t figure out what exactly it is patenting) a cursory reading suggests that these are exactly the kinds of broad, obvious patents that everyone (even IBM) at least says on the surface that they hate. Maybe by demonstrating that they have what Tim Bray calls ‘the Internet Tollbooth’ they think they can precipitate real patent reform, but that seems unlikely; more likely they just want a cut of Amazon’s pile. Shame, really, but it shouldn’t be a surprise.

Companies have a fiduciary duty to their shareholders to maximize their profits, so it shouldn’t surprise us when companies do legal but shady things that enhance their bottom lines. However, it should make us ask why the patent system is giving companies the incentive to engage in such rent-seeking. It does nothing to promote “the progress of science and the useful arts” to give companies monopolies on ideas like “System for ordering items using an electronic catalogue” that are so obvious that it’s inevitable that dozens of companies would independently “invent” them.

Alcohol Liberation Front 2

October 27, 2006

In light of the rousing success of the first Alcohol Liberation Front, we’ve decided to reprise the event this coming Monday. We’ll be at RFD from 5:30-7 PM.

If you’re coming, you might want to email me at tlee -at- showmeinstitute.org so we know to keep an eye out for you. Although we probably won’t be that hard to find. James Gattuso will be especially easy to spot at the center of his throng of groupies.

The Other DMCA Provision

October 27, 2006

My DMCA paper focuses on the law’s most controversial section, the part that prohibits circumventing DRM schemes. When I was writing it, I briefly considered discussing its other provisions, most notably the “notice and takedown” provisions of §512. After all, EFF has a whole web site documenting the chilling effects of that provision. But although I think EFF has some legitimate gripes, I ultimately concluded that the anti-circumvention provision was far more problematic, and decided to focus my paper exclusively on that section.

Today Tim Wu has an interesting article in Slate arguing that we should be grateful we got §512, because if Hollywood had gotten its way, things would have been much worse:

This summer, Sen. Ted Stevens, R-Alaska, earned the bemused contempt of geeks everywhere when he described the Internet as “a series of tubes.” But back in 1995, Hollywood was insisting that the Internet be characterized as “a bookstore.” And a bookstore, unlike a series of tubes, breaks the law if it “carries” pirated novels. So too, Hollywood urged, Internet companies should be liable if they carry any illegal materials, whether the companies know it or not.

Had that view prevailed, there would probably be no YouTube today, and also no free blog sites, and maybe not even Google or Web 2.0. What venture capitalist would invest in a company already on the hook for everything its users might do? But, in one of the lesser-known turning points in Internet history, Hollywood never got its law. Its unstoppable lobbyists ran into an unmovable object: the Bell companies, who own those “tubes” over which the Internet runs. In the mid-1990s, fearing a future of liability, the Bells ordered their lobbyists to fight Hollywood’s reforms, leading to one of the greatest political struggles in copyright history. (This paper provides a history of this and other struggles.)

Hollywood employs legendary lobbyists, like Jack Valenti, but when they ran into the Bells, it was like Frazier meeting Foreman. The Bells quickly put holds on all the legislation the content industries wanted. Telecom lobbyists like Roy Neel, a close friend of Al Gore (and later Howard Dean’s campaign manager), went to Congress and began saying things like, the “copyright law threatens to put a damper on the expression of ideas on the Internet.”

Copyright law is at its worst when it’s unclear where the boundaries of liability lie, because then deep-pocketed, risk-averse companies will decline to take the risk of incurring large copyright liabilities. The “safe harbor” provision gives businesses clarity regarding what they need to do to avoid liability when it comes to user-generated content. And that, in turn, has allowed individuals to push the boundaries of copyright law and produce absolutely brilliant works of likely copyright infringement.

Washington Post technology columnist Mike Musgrove reminds us in his column today that the video game industry’s voluntary ratings system–administered by the Entertainment Software Rating Board (ESRB)–continues to come under fire in Washington and in the states. Musgrove notes that:

“Earlier this year, Sen. Sam Brownback (R-Kan.) was one of several lawmakers who introduced bills that would take the video game rating system away from the ESRB, but those bills never made it out of committee. Last week, at a summit on video games, youth and public policy, Rep. Betty McCollum (D-Minn.) trashed the game industry’s ratings system and called for a new, independent system. Brownback and McCollum agree that the current system–because it’s run by the game industry–can’t be trusted.”

This is nothing new, of course. I have written extensively about the politics of video game regulation and discussed how the video game ratings system has been criticized for a number of supposed shortcomings. Most recently, I wrote about Sen. Hillary Clinton (D-NY) and Sen. Joe Lieberman’s (D-CT) “Family Entertainment Protection Act” (FEPA, S. 2126), which would create a federal enforcement regime for video game sales and require ongoing regulatory scrutiny of industry ratings and practices. (Note: There was also a House version of the bill.)


With the holidays approaching, a new program providing greater access to airport concourses is underway. At select airports throughout the country, non-travelers can now enter and meet arriving loved ones, as was routine just a few years ago.

Everyone entering the concourse will still be subject to physical security checks, but the program permits travelers to pass through security and board planes without showing ID to transportation authorities, or while using a false or pseudonymous ID.

Has the Transportation Security Administration seen fit to restore convenience, privacy, and freedom to air travelers? Seen the light on identification-based security and relented on ID/boarding card checks? Well, no.

A PhD student in the Security Informatics program at Indiana University has created a generator that anyone can use to mock up their own boarding pass. He notes a number of different uses for it – among them, meeting your elderly grandparents at the gate, or evading the TSA’s no-fly list. So far, it’s only good for Northwest Airlines, but others would be equally easy to design.

Checking the ID and boarding pass is intended to communicate to personnel at the concourse checkpoint that a person has been run past the watch list and “no-fly” list. It provides a sort of second credential, linked by name to the ID of the person who has been reviewed. This spoof easily breaks that link. Fake a credential matching any ID you have, and you are in the concourse.
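To see how completely that link breaks, consider a minimal sketch of such a generator (my own Python illustration–every name, flight number, and field below is invented; this is the concept, not the student’s actual tool):

```python
# Hypothetical sketch: why self-printed boarding passes break the link
# between the watch-list screening and the ID checkpoint. All names,
# flight numbers, and fields are invented for illustration.
from datetime import date

TEMPLATE = """\
NORTHWEST AIRLINES -- BOARDING PASS
Passenger: {name}
Flight:    {flight}    Date: {date}
From:      {origin}    To:   {dest}
Seat:      {seat}      Zone: {zone}
"""

def make_boarding_pass(name, flight="NW 1234", origin="IND", dest="DCA",
                       seat="12A", zone="2"):
    """Render a plausible-looking pass for any name the holder chooses.

    The checkpoint compares this name to a photo ID, but the watch-list
    check happened at purchase time, under whatever name bought the
    ticket. Nothing on a self-printed pass ties the two together.
    """
    return TEMPLATE.format(name=name, flight=flight,
                           date=date.today().isoformat(),
                           origin=origin, dest=dest, seat=seat, zone=zone)

# Buy the ticket under a clean alias, then print a pass bearing the name
# on the ID you intend to show at the checkpoint:
print(make_boarding_pass("JOHN Q PUBLIC"))
```

A watch-listed traveler simply buys the ticket under a clean alias and prints a second pass bearing the name on whatever ID he plans to show.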

I wouldn’t recommend using this system without a careful check of the law – if you are allowed to see it. It’s probably illegal to access an airport concourse this way and the TSA would bring the full weight of its enforcement powers down on you if you were caught. Needless to say, making it illegal to evade security is what keeps the terrorists in line.

Hmm. Or maybe security procedures actually need to work.

And that’s the researcher’s point: Comparing a boarding pass to an identification document at the airport does little to prevent a watch-listed or no-fly-listed person from passing (except perhaps to inconvenience him a little more than everyone else). Indeed, identification-based security is swiss-cheesed with flaws.

The first problem is that you have to know who the bad guys are. If you don’t know who is bad, your ID-based security system can’t catch them. If you do know who is bad, you have to make sure that they aren’t using an alias. The cost of doing so may vary, but defrauding or corrupting identity systems is an option that will never be closed to wrongdoers. Making an identity system costly for bad guys to defeat also makes it costly for good people to use. Witness the REAL ID Act.

The linear response to the exposure of this flaw could be to “tighten up” the system – perhaps by discontinuing the use of self-printed boarding passes. The right response is to abandon the folly of identity-based security and use security methods that address tools and methods of attack directly.

There’s plenty on identity and identity-based security in my book Identity Crisis.

One of the important points made in Jon Stokes’s write-up of e-voting is how much easier it is to hide malicious code in a program than it is to find it. This was also a point that Avi Rubin made quite well in Brave New Ballot, where he describes a computer security course he taught in 2004:

I broke the class up into several small groups, and we divided the semester into thirds. In the first third, each group built an electronic voting machine that it demonstrated to the rest of the class. These machines were basically simple programs that allowed a user to make choices among several candidates in different races and that were required to keep an electronic audit log and produce the final tallies when the election was over. The groups then devoted the second third of the term to planting a back door in their voting machines–a mechanism by which a voter could cheat and change the vote totals and the audit logs so that the change would be undetectable. Each team had to turn in two versions of its system, one that worked properly and one that “cheated,” with all the code for both.

The groups spent the last third of the semester analyzing the machines and code from the other groups, looking for malicious code. The goal of the project was to determine whether people could hide code in a voting machine such that others of comparable skill could not find it, even with complete access to the whole development environment. Each group was assigned three machines from other groups–one good one, one bad one, and one chosen at random, but none of them identified as such. That was for the students to figure out by analyzing the code and running the machines. Admittedly, this setting was not much like that of a real manufacturer, in which there would be years to develop and hide malicious code in a code base that would be orders of magnitude larger and more complex than in our little mock-ups. Furthermore, the students had all just spent more than a month developing and hiding their own malicious code, so they had a good idea of what other groups might try. Conversely, in practice, auditors would have considerably more time to analyze and test potential code for problems. Still, I expected the results to be revealing, and I was not disappointed.

Many of the groups succeeded in building machines in which the hidden code was not detected. In addition, some of the groups succeeded in detecting malicious code, and did so in a way that in and of itself was enlightening. In one case, the student discovered the cheating almost by accident because the compiler used by the programmer was incompatible with the one used by the analyzing team. The experiment demonstrated, as we suspected it would, that hiding code is much easier than finding hidden code.

I think this is a big part of the reason that computer security experts tend to be so skeptical of claims that independent testing has “proven” that a company’s voting machine code was secure. Even if the “independent” firm were genuinely independent (which it usually isn’t), and even if it were to do a truly exhaustive security audit (which, judging from the Rubin and Felten reports, it usually doesn’t), it would still be unlikely to detect malicious code that was inserted and camouflaged by a relatively talented programmer.
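To get a sense of how small and innocuous-looking such a back door can be, here is a hypothetical sketch (my own Python illustration–not code from Rubin’s class or from any real voting machine). The tally routine below behaves perfectly under ordinary testing; the cheat arms itself only when a specific write-in string appears, so honest test ballots never expose it:

```python
# Hypothetical back door in a vote-tallying routine. Illustrative only:
# not from Rubin's course or any actual voting system.
from collections import Counter

def tally(ballots):
    """Count one vote per ballot and return per-candidate totals."""
    totals = Counter()
    armed = False
    for ballot in ballots:
        totals[ballot] += 1
        # Looks like routine input handling; actually arms the cheat
        # when a conspirator casts a specific write-in vote.
        if ballot.strip().lower() == "mickey mouse":
            armed = True
    if armed and totals["Alice"] > totals["Bob"]:
        # Shift just enough votes to flip the outcome; the total number
        # of ballots stays unchanged, so the tally still reconciles
        # with the poll books.
        shift = (totals["Alice"] - totals["Bob"]) // 2 + 1
        totals["Alice"] -= shift
        totals["Bob"] += shift
    return totals

print(tally(["Alice", "Bob", "Alice"]))
# Counter({'Alice': 2, 'Bob': 1}) -- honest result under normal testing
print(tally(["Alice", "Alice", "Bob", "mickey mouse"]))
# Counter({'Bob': 2, 'Alice': 1, 'mickey mouse': 1}) -- outcome flipped
```

An auditor running honest test ballots sees correct totals every time; only someone who already knows the trigger has any reason to cast the one ballot that exposes it.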

Ars on Vote Stealing

October 26, 2006

For years, Jon “Hannibal” Stokes has been writing incredibly detailed articles on CPU architecture. He’s particularly good at presenting a lot of in-depth technical information in a way that’s accessible to moderately tech-savvy people. I’m much more capable of pretending to understand CPU architectures after reading his articles.

Now he’s turned his attention to voting machines, and he does his usual thorough and clear job explaining “How to steal an election by hacking the vote”:

What if I told you that it would take only one person–one highly motivated, but only moderately skilled bad apple, with either authorized or unauthorized access to the right company’s internal computer network–to steal a statewide election? You might think I was crazy, or alarmist, or just talking about something that’s only a remote, highly theoretical possibility. You also probably would think I was being really over-the-top if I told you that, without sweeping and very costly changes to the American electoral process, this scenario is almost certain to play out at some point in the future in some county or state in America, and that after it happens not only will we not have a clue as to what has taken place, but if we do get suspicious there will be no way to prove anything. You certainly wouldn’t want to believe me, and I don’t blame you.

So what if I told you that one highly motivated and moderately skilled bad apple could cause hundreds of millions of dollars in damage to America’s private sector by unleashing a Windows virus from the safety of his parents’ basement, and that many of the victims in the attack would never know that they’d been compromised? Before the rise of the Internet, this scenario also might’ve been considered alarmist folly by most, but now we know that it’s all too real.


Today’s Cato podcast features yours truly discussing the DMCA. Anastasia was obviously a friendly interviewer, but I still found it challenging to boil the complexities of the issue down to something that could be readily understood in a 10-minute interview. We discuss the French protests from earlier this month, what the recently-passed French law did, and how the courts were handling reverse engineering cases before Congress enacted the DMCA.

Slater on Taste Sharing

October 26, 2006

Many months ago, Derek Slater pointed me to the paper he did while he was at the Berkman Center late last year. It’s been on my to-read list ever since, and I’ve finally gotten a chance to check it out.

The paper reports on the increasing popularity of what they call “taste-sharing” tools on the Internet. That would include peer-to-peer file-sharing sites, but it also includes collaborative filtering tools like Amazon’s “People who bought this book also bought…” feature, and Apple’s iTunes playlist sharing tools.
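Mechanically, the “people who bought X also bought Y” feature reduces to ranking items by how often they co-occur in purchase histories. Here’s a minimal sketch of that idea (my own Python illustration–not code from the paper, and far simpler than whatever Amazon actually runs):

```python
# Minimal item-to-item collaborative filtering: rank items by how often
# they appear in the same basket. Illustrative only; real recommenders
# also normalize for item popularity and scale to millions of baskets.
from collections import defaultdict
from itertools import combinations

# Each set is one customer's purchase history (invented examples).
histories = [
    {"Free Culture", "Code", "The Wealth of Networks"},
    {"Free Culture", "Code"},
    {"Code", "The Wealth of Networks"},
    {"Free Culture", "Brave New Ballot"},
]

# Count how often each pair of items shows up in the same basket.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def also_bought(item, top_n=3):
    """Items most often purchased alongside `item`, best first."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: -kv[1])
    return [title for title, _ in ranked[:top_n]]

print(also_bought("Free Culture"))
# ['Code', 'The Wealth of Networks', 'Brave New Ballot']
```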

Clearly, the trends Derek identified in this paper have continued into 2006, as evidenced by Microsoft’s decision to make music sharing one of the central selling points of its Zune media devices. I’ve also read that YouTube’s community features were an important factor in that site’s meteoric rise. Consumers clearly love being able to share their cultural tastes with others, and smart media companies will find ways to make it easier for their customers to recommend their products to others.

After a year of debate, neutrality regulation proponents have singularly failed (Salon.com notwithstanding) to get Congress to enact their proposals. This of course could change, especially if there’s a change in the control of Congress. But, should this front-door approach fail, it now seems proponents have a plan B: sneak regulation in as a condition of AT&T’s merger with Bell South.

The “It’s Our Net Coalition” asked the FCC to do just that in a petition filed with the agency yesterday. Specifically, the group asks the Commission to impose the net neutrality rules contained in the amendment by Senators Snowe and Dorgan now pending in the Senate.

This legislation has been the subject of, to put it mildly, considerable controversy. It hasn’t been voted on–in this Congress it would likely fail if it were. And similar proposals were repeatedly defeated in the House. But the “It’s Our Net Coalition” would save us all the inconveniences of this congressional debate, and simply have the FCC impose the Snowe-Dorgan rules (at least as to AT&T) on its own, without even the bother of a separate rulemaking proceeding.

The idea of imposing conditions on mergers isn’t new–or by itself controversial. But such conditions should be aimed at alleviating a reduction in competition caused by a merger. The Department of Justice, however, has already found that the merger is not likely to substantially reduce competition in any market. BellSouth and AT&T’s businesses simply don’t overlap much.

Strangely, the Coalition devotes only a single paragraph of its eight-page petition to the merger’s effect on competition, broadly asserting that the merger would solidify the market power of broadband firms. Most of the petition is instead devoted to rehashing general arguments for neutrality regulation.

Former FCC Commissioner Harold Furchtgott-Roth often complained about FCC merger reviews and the conditions imposed on approvals, calling the process “lawless, standardless, and endless.” He was right. Mergers should be approved or rejected based on their specific effect on competition. The process should not be used to impose regulation through the back door.