January 2007

Julian on NSA Surveillance

by on January 18, 2007

Julian thinks that the president’s announcement that he’ll suddenly start running his NSA wiretapping program by the book smells fishy:

But as Orin Kerr notes there’s a big honking ambiguity in this new oversight: Justice Department officials won’t clarify whether that means FISA will be ordering the familiar sort of case-by-case warrant based on individualized suspicion or some kind of blanket approval of the old TSP as a whole. Because if it’s the latter, that’s not oversight. That’s a writ of assistance. It’s hard to read this transcript and not come away with that conclusion, and equally hard for me to fathom how such a general clarification could somehow be perilous to national security. The only reason I hesitate is that it seems odd that a FISA judge would sign off on so dramatic a departure from the normal rules of the game.

Quite a bit about this doesn’t smell right, actually. Suppose we are talking about real, case-by-case oversight. We were supposed to believe that the ordinary FISA process was too slow and cumbersome to allow intelligence agencies to hunt terrorists effectively, and for some reason it wasn’t possible to remedy this by normal legislative means–say, by asking Congress to extend the 72-hour window within which agencies can conduct emergency taps before securing retroactive approval. As Mark Moller notes, that seems still more dubious in light of this new announcement: How much can actually have changed in the process without any legislative action? Why would it take five years to make those changes, requiring the creation of a separate program in the interim?

Excellent questions. Given that the administration refused to even disclose the existence of this program until the press got wind of it, and given that they’ve suddenly become interested in following the rules once there’s a Democratic Congress around to provide real oversight, it would be crazy to take the White House at its word as to what the new procedure is. Congress needs to demand a full, public disclosure of exactly how this new FISA approval process works so we can judge for ourselves if the White House is playing fast and loose with the law.

According to Julia Angwin of The Wall Street Journal, social networking giant MySpace.com will soon be offering parents free monitoring software to help them keep tabs on their child’s online activities.

“Parents who install the monitoring software on their home computers would be able to find out what name, age and location their children are using to represent themselves on MySpace. The software doesn’t enable parents to read their child’s e-mail or see the child’s profile page and children would be alerted that their information was being shared. The program would continue to send updates about changes in the child’s name, age and location, even when the child logs on from other computers.”

MySpace is in a difficult position right now, and I think this was a wise move. The company has been under intense pressure from lawmakers, especially state AGs, to take more steps to protect kids online. But it remains unclear whether this move will satisfy the AGs, since they are more interested in forcing MySpace to age-verify all its users against public databases and then raise the minimum age for using the site at all.

Last summer, I debated two of the AGs mentioned in the WSJ story–Connecticut Attorney General Richard Blumenthal and North Carolina Attorney General Roy Cooper–and explained why age verification is misguided and just won’t work anyway:


I’ve got a good friend who’s a DJ (as a hobbyist), and I asked him for his thoughts on the copyright SWAT team story. I thought his comments were worth quoting:

First, the CDs contain recordings of DJ mixes (the story refers to them as “mixtapes”). A DJ mix consists of someone playing records/CDs/DATs and manipulating the inputs so as to produce a continuous flow of music distinct from listening to each single sequentially. The manipulation may include scratching, EQing, sampling, drum machines, digital effects, and mash-ups. Therefore, a DJ mix is distinct from merely uploading/burning a folder of mp3s and distributing it. It’s a performance.

However, the performance is built upon copyrighted material from other artists. When a DJ buys a vinyl/CD/mp3 at a record store, he/she purchases the right of personal listening. Many records will say “Unauthorized public performance, broadcasting, and copying of this record prohibited” on the label. When DJs release professional mix CDs through a record label, they obtain legal permission from the copyright holders to include their tracks in the mix. Dance clubs pay annual fees to the two major artist organizations for public performance rights to cover DJs that play at their venue. Record shops that sell unauthorized mixtapes have been prosecuted for copyright violation, so most stores don’t sell them.


Every week (more or less), I look at a software patent that’s been in the news. You can see previous installments in the series here. There haven’t been any big patent disputes in the news the last couple of weeks, so this week we’ll look at a patent that’s at the center of a lawsuit that was filed last August by Altnet against Streamcast. You can read about the long and tangled history of the two companies in the link above.

Here is one of the patents at issue in the case. It covers “Data processing system using substantially unique identifiers to identify data items, whereby identical data items have the same identifiers.” Here’s a description of how the patent differs from prior art:

In all of the prior data processing systems the names or identifiers provided to identify data items (the data items being files, directories, records in the database, objects in object-oriented programming, locations in memory or on a physical device, or the like) are always defined relative to a specific context. For instance, the file identified by a particular file name can only be determined when the directory containing the file (the context) is known. The file identified by a pathname can be determined only when the file system (context) is known. Similarly, the addresses in a process address space, the keys in a database table, or domain names on a global computer network such as the Internet are meaningful only because they are specified relative to a context.
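The core idea behind the patent (content-addressable identifiers computed from the data itself, rather than names that only make sense relative to a directory, database, or address space) can be sketched in a few lines of Python. The patent doesn’t mandate any particular hash function; SHA-256 here is just an illustrative stand-in for a “substantially unique” identifier:

```python
import hashlib

def content_id(data: bytes) -> str:
    """Derive a context-independent identifier from the data itself.

    Identical contents always yield the same identifier, regardless of
    what filename, directory, or machine the data happens to live on.
    """
    return hashlib.sha256(data).hexdigest()

# Two "files" with different names but identical contents...
file_a = b"the quick brown fox"
file_b = b"the quick brown fox"

# ...get the same identifier, while different contents do not.
assert content_id(file_a) == content_id(file_b)
assert content_id(b"something else") != content_id(file_a)

# A minimal content-addressed store: the identifier itself is the
# lookup key, so no directory or pathname context is needed.
store = {content_id(file_a): file_a}
assert store[content_id(file_b)] == file_a
```

This is essentially the scheme now familiar from systems like git and BitTorrent, which is part of why the patent’s breadth has drawn so much attention.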


Lee Gomes of the Wall Street Journal has a fun piece in today’s paper about the amazing gains that have been made in the field of digital storage technology. He notes that we reached another major milestone in the computing business with the announcement of several terabyte-capacity disk drives from Hitachi, Seagate and others. (I saw some of these at CES this year. Very cool stuff.) The last time we reached a major storage milestone like this, he points out, was back in 1991 when we crossed the gigabyte threshold.

I’ll never forget how, when those first 1-gig drives came out, I thought to myself “Geez, who in the hell would ever need that much capacity?” What an idiot I was. Of course, I could not have envisioned the explosion of so much downloadable digital content, the rise of digital photography / camcorders, and the coming of storable HD video. I recently maxed out an old 100-gig hard drive on a PC at my house and started stacking external hard drives to store all my digital content. And my wife and I have been holding off on upgrading to an HD camcorder because we fear we don’t have enough storage space for all the home movies of the kids.

But hopefully that will now change for me. As Gomes points out, back when those old 1-gig drives were announced, they were priced in the $2,000 range. By contrast, the new 1-terabyte drives are hitting the market at just $400. This means that, on a cost-per-byte basis, the old 1-gig models were 5,000 times as expensive as the newer models.
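That 5,000x figure is easy to verify with a quick back-of-the-envelope calculation, using the round prices cited above:

```python
# Back-of-the-envelope check of the cost-per-byte comparison.
# Prices are the round figures cited in the post.
old_price, old_gb = 2000, 1       # ~1991: 1-gigabyte drive, ~$2,000
new_price, new_gb = 400, 1000     # 2007: 1-terabyte drive, ~$400

old_cost_per_gb = old_price / old_gb   # $2,000 per gigabyte
new_cost_per_gb = new_price / new_gb   # $0.40 per gigabyte

ratio = old_cost_per_gb / new_cost_per_gb
print(ratio)  # → 5000.0
```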

You gotta love capitalism!

Here we go.

Shall I spoil it with a lecture? Nah.

Radley Balko, who has tirelessly publicized the problems created by the promiscuous use of SWAT teams, reports that federal police in Atlanta have used a SWAT team to help the recording industry enforce copyright law. Even worse, the target wasn’t even a commercial piracy operation:

Last night, a federal SWAT team assisted the RIAA in a raid on the studio of Atlanta musician DJ Drama.

This local news report says the locally famous mixtape DJ is under investigation for piracy. But Drama’s supporters say the DJ is a mix artist, not a bootlegger. They say news footage of the raid shows RIAA officials boxing up only recordable CDs filled with mixes, not bootlegs of retail CDs (the local news reporter seems to conflate the two as well).

Assuming for a moment that RIAA and federal officials do indeed know the difference between a mash-up DJ and a bootleg operation, and that they did find evidence of actual piracy in the bust, there’s still the problem of why RIAA officials were participating in a police action, and why a SWAT team was used to raid a professional studio under investigation for a nonviolent, white-collar crime.

Quite so. It’s not like this is a fly-by-night operation selling CDs out of the back of a truck. This is clearly not the sort of problem that justifies dramatic police raids. If the RIAA thinks DJ Drama’s activities violate copyright law, they have plenty of civil law remedies available that don’t involve Gestapo tactics.

Also, check out the gratuitous smearing of the two as drug dealers and gangsters. A police officer comments that “In this case, we didn’t find drugs and weapons, but it’s not uncommon for us to find other sorts of contraband when we execute a search warrant.”

If they didn’t find drugs or weapons, why did this factoid merit a mention in the story?

Save Us from Fox News, FCC!

by on January 17, 2007 · 12 comments

Over at Techdirt, Carlo nails Dennis Kucinich’s proposal to bring back the so-called fairness doctrine:

One of the earliest lessons a lot of kids learn (though don’t necessarily accept) is that life isn’t fair, if for no other reason than what they think is fair is often wildly different than what their parents do. Now, once-failed and now long-shot presidential candidate Dennis Kucinich says he’ll be heading up a new House subcommittee on issues around the FCC, and that he might try to bring back the Fairness Doctrine. The Fairness Doctrine was an FCC rule, in force until 1987, that said broadcasters had a responsibility to discuss controversial issues of public importance, and to do so in a balanced manner that addressed differing points of view. While the goal of the doctrine might sound nice, the rule itself is a little troublesome, not least because it could be interpreted as violating the First Amendment (though the current FCC isn’t likely to care about that), but also because it holds broadcasters to a wholly subjective ideal. Who decides what’s fair? After all, one popular news network famously uses the tagline “fair and balanced”, when plenty of people feel it’s neither. The Fairness Doctrine also makes less and less sense in an age where the number of media outlets is proliferating. There’s no limit to the number of places that can provide news or opinion, and professionals and the public have more tools than ever at their disposal to tell their own stories and express their own viewpoints. To require certain media to provide an arbitrary level of “balance” makes less sense than encouraging people with disagreeing viewpoints to develop their own media outlets, whether it’s a blog, newsletter or even a cable TV channel. Kucinich says that “the media has become the servant of a very narrow corporate agenda”–but reinstituting the Fairness Doctrine would simply replace that corporate agenda with that of a political appointee, and that’s really not very fair.

It’s truly mind-boggling that someone could look at today’s media landscape, which is by almost any measure more diverse, vibrant, and competitive than at any point in the history of the world, and conclude that we need to turn back the clock to the 1970s, when a government bureaucrat sat in judgment of the “fairness” of each television outlet’s news and commentary.

It’s particularly irritating to see it come from the political left, because if there’s one thing the Bush administration has taught us about journalistic objectivity, it’s that a White House that’s willing to twist the truth can use the concept of “fairness” to browbeat journalists into putting its obfuscations on an equal footing with more credible observers. This just isn’t the sort of problem that a bureaucracy like the FCC can solve. It can only be solved by journalists who are willing to call a spade a spade, and opposition politicians who are willing to highlight their opponents’ dishonesty. Putting the FCC in charge of determining what’s “fair” is not only an affront to the First Amendment, but it’s not likely to work either.

I’m excited to report that the good folks at Ars Technica, probably the best source of in-depth technology news and analysis on the web, have asked me to contribute to their site. Ars will be familiar to regular TLF readers because we link to them all the time. If you aren’t already a regular reader, you should be. And not just because you’ll occasionally find my writing there.

My first contribution focuses on Alan Cox’s application for a patent on digital rights management technology:

It’s unlikely that Cox’s patent is part of a grand plan to rid the software industry of digital rights management technology. Rather, the patent application is probably part of Red Hat’s patent self-defense strategy. Microsoft has darkly hinted that Linux and other free software infringes on Microsoft’s patents. Red Hat is responding with defensive stockpiling, applying for about two dozen patents in the last two years. Most likely, it’s working to build a patent portfolio extensive enough that it will be able to retaliate should it become the target of patent litigation.

The fact that even Red Hat, a company publicly opposed to software patents and unlikely to assert them against anyone, feels the need to apply for dozens of patents suggests that there are serious problems with the American patent system. The resources Red Hat spends hiring lawyers to obtain patents it will most likely never use could be more productively spent hiring programmers and customer support personnel to do useful work.

Copying Innovation is Hard

by on January 16, 2007 · 46 comments

Mike Masnick offers another example related to last week’s discussion of whether patents are needed to protect software innovations:

Microsoft has long viewed Google as a serious competitor, and apparently Bill Gates and the folks in Redmond have been pulling out all the stops to compete with Google. In many cases, they’ve created products that seem as good, if not better, than Google’s versions. Yet, despite all of that, they’re losing traffic while Google gains it. Once again, it’s not just about the technology, but the perceived view people have of Google as compared to Microsoft. Microsoft just hasn’t been able to convince that many people that its search and mapping solutions are as good or better than Google’s. Despite the claim that there are “no switching costs” for users to go elsewhere, that’s not quite true. The perception that Google is better (and the feeling that it’s “good enough”) means that there’s no reason for people to look elsewhere, and a Microsoft offering would need to be not just better, but significantly better to attract attention. Alternatively, they can work on increasing their brand value as well, in the space of online services. In other words, there are plenty of things that go into being able to innovate and build a successful product–and simply copying someone else’s technology is often a small part of that (and usually not a particularly good strategy). Patent protection only protects that aspect of copying (business model patents are another issue completely), but if they’re supposed to encourage innovation, and the technology is only a small part of innovation, then the incentives are mis-aligned. The market can reward innovation without needing government monopolies and protectionist policies. The trick is to continually innovate, not just in the technology, but in the quality, the service and the brand as well.

Quite so. It needs to be stressed that the goal of patent law is to provide sufficient incentive to “promote the progress of science and the useful arts,” not to maximize the profits of innovators. Clearly, Google has been able to turn a tidy profit (to put it mildly) from its search engine without any significant recourse to patent law. Even after five years, one of the wealthiest companies on the planet has apparently not been able to produce a search engine that consumers perceive as being equivalent to Google’s offering. This suggests, I think, that a software innovator retains significant market advantages even after competitors have succeeded in cloning the major features of its product. And that, in turn, casts serious doubt on the notion that innovative products like the iPhone or Google wouldn’t exist but for the patent system.