Occasional TLF co-blogger Solveig Singleton has some very sensible comments about the pending lawsuits against Sony BMG. I largely agree with her that the actual damages of Sony’s actions are pretty small, and that these class action lawsuits are more likely to enrich lawyers than compensate consumers. I still think the lawsuits should go forward, however, especially given that Sony has yet to pull its other spyware, MediaMax, from the shelves, despite well-documented problems.
The part of her argument that I found most interesting was this paragraph:
It isn’t the technical characteristics of something alone that determine its legal treatment (whether or not we should think of it as an “attack”), it is partly the intent of the actors. Set aside the intent issue for a second and look at the tech. Is it really always clear what is a “pure” hacker tool and what is not? Isn’t it likely that in future programmers might well continue to experiment with “hacker tools” to see if they can use principles in those tools for a useful purpose? Isn’t the argument that there is such a thing as a purely useless and bad tech usually made by advocates of tech bans? Are we saying that all software always has to be easily removable and detectable? By everyone? What about security software or content filters used by parents or schools or employers? Suppose experts could find and remove it but not beginners? Suppose a DRM system was hard to find or hard to remove, but didn’t create a security vulnerability to outsiders? Or suppose it did, but was easy to find and remove? There are a million possible permutations of technology here–hard to imagine the legal system coming up with a top-down rule that makes sense for all of them, especially at this early stage of the game. Markets adapting after the fact are much more flexible.
I wholeheartedly agree. And I’m curious how Ms. Singleton would apply this reasoning to the DMCA. After all, the DMCA is a “tech ban” on a class of devices, namely “circumvention devices” (which in practice means any device that interoperates with DRM’ed content without the permission of the DRM creator). It’s quite true that some “hacker tools” might be useful in software like parental controls. It’s equally true that some “circumvention tools” have legitimate uses. For example, as long as Hollywood refuses to create a DVD player for the Linux operating system, any software that plays DVDs on Linux is by definition a “circumvention device.” Likewise, any utility that converts songs from the iTunes Music Store format directly to the Windows Media format (so they can be played on WM-based MP3 players from Dell or Sony) is a “circumvention device.” I could give lots of other examples.
In short, the line between legitimate software and piracy tools isn’t clear-cut, and, to paraphrase Ms. Singleton, it’s hard to imagine Congress coming up with a top-down rule that makes sense for all of them. Which is why it was stupid for Congress to legislate such a rule in 1998. Markets adapting after the fact would, as she says, have been much more flexible.
So is there some distinction I’m missing? Or is Ms. Singleton a closet supporter of the DMCRA, which would repeal the “top-down rule” Congress imposed on this market in 1998 and allow market actors to experiment with the potentially beneficial uses of circumvention technology?