Here's Harper on PFF on Net Neutrality in Regulation magazine.
My review of the Progress & Freedom Foundation book Net Neutrality or Net Neutering: Should Broadband Services be Regulated (which starts on page 5 of the PDF) takes a sort of "pox on both your houses" approach while concluding that the opponents of public utility regulation for broadband have the better argument.
Here's Farber and Katz (with Faulhaber and Yoo) in the Washington Post.
I would quote this Ars article, but really, a picture is worth a thousand words:

As the priceless tagline to the story puts it: “Obscenity complaints to the FCC jumped 40,000 percent between August and September 2006. Shame on you, September! Don’t you know that our children are watching?”
What’s happening here, obviously, isn’t that the major networks decided to start running porn during prime time in September. Rather, groups like the Parents Television Council put out an action alert describing the filthiest moments in television they can find, and their members (the vast majority of whom probably didn’t even watch the show) send form letters to the FCC.
Over at IPCentral, Jim DeLong quotes a lengthy critique of the SFLC brief in the Microsoft v. AT&T case. The critique was written by one Greg Aharonian. A lot of it is the kind of legal inside baseball that I’m not really qualified to comment on, but there’s one theme that runs throughout the critique that’s just flatly wrong:
The first lie of Moglen’s brief is a big lie of omission. Nowhere in his brief does there appear the word “hardware”. It is unethical to talk about the patentability of software without simultaneously talking about the patentability of hardware, especially in light of hardware/software codesign tools. And even using the word “hardware” is pointless unless you provide rigorous definitions of “hardware” and “software”. Moglen doesn’t. So when Moglen bases his software patent hatred on Benson:
“The holding of Benson is properly applicable to all software, because a computer program, no matter what its function, is nothing more or less than the representation of an algorithm.”
as well, he is arguing hardware patent hatred:
“The holding of Benson is properly applicable to all hardware, because a digital circuit, no matter what its function, is nothing more or less than the representation of an [Boolean] algorithm.”
This is silly. I would be very interested to see the Boolean algorithm that is equivalent to, say, an LCD panel. Some characteristics of hardware can be described as equivalent to software algorithms, but other aspects (such as the ability to display information to the user) cannot.
Julian thinks that the president’s announcement that he’ll suddenly start running his NSA wiretapping program by the book smells fishy:
But as Orin Kerr notes there’s a big honking ambiguity in this new oversight: Justice department officials won’t clarify whether that means FISA will be ordering the familiar sort of case-by-case warrant based on individualized suspicion or some kind of blanket approval of the old TSP as a whole. Because if it’s the latter, that’s not oversight. That’s writ of assistance. It’s hard to read this transcript and not come away with that conclusion, and equally hard for me to fathom how such a general clarification could somehow be perilous to national security. The only reason I hesitate is that it seems odd that a FISA judge would sign off on so dramatic a departure from the normal rules of the game.
Quite a bit about this doesn’t smell right, actually. Suppose we are talking about real, case-by-case oversight. We were supposed to believe that the ordinary FISA process was too slow and cumbersome to allow intelligence agencies to hunt terrorists effectively, and for some reason it wasn’t possible to remedy this by normal legislative means–say, by asking Congress to extend the 72-hour window within which agencies can conduct emergency taps before securing retroactive approval. As Mark Moller notes, that seems still more dubious in light of this new announcement: How much can actually have changed in the process without any legislative action? Why would it take five years to make those changes, requiring the creation of a separate program in the interim?
Excellent questions. Given that the administration refused to even disclose the existence of this program until the press got wind of it, and given that they’ve suddenly become interested in following the rules once there’s a Democratic Congress around to provide real oversight, it would be crazy to take the White House at its word as to what the new procedure is. Congress needs to demand a full, public disclosure of exactly how this new FISA approval process works so we can judge for ourselves if the White House is playing fast and loose with the law.
According to Julia Angwin of The Wall Street Journal, social networking giant MySpace.com will soon be offering parents free monitoring software to help them keep tabs on their children's online activities.
“Parents who install the monitoring software on their home computers would be able to find out what name, age and location their children are using to represent themselves on MySpace. The software doesn’t enable parents to read their child’s e-mail or see the child’s profile page and children would be alerted that their information was being shared. The program would continue to send updates about changes in the child’s name, age and location, even when the child logs on from other computers.”
MySpace is in a difficult position right now, and I think this was a wise move. The company has been under intense pressure from lawmakers, especially state AGs, to take more steps to protect kids online. But it remains unclear whether this move will satisfy the AGs, since they are more interested in forcing MySpace to age-verify all of its users against public databases and then raise the minimum age at which anyone can use the site at all.
Last summer, I debated two of the AGs mentioned in the WSJ story–Connecticut Attorney General Richard Blumenthal and North Carolina Attorney General Roy Cooper–and explained why age verification is misguided and just won’t work anyway:
I’ve got a good friend who’s a DJ (as a hobbyist), and I asked him for his thoughts on the copyright SWAT team story. I thought his comments were worth quoting:
First, the CDs contain recordings of DJ mixes (the story refers to them as “mixtapes”). A DJ mix consists of someone playing records/CDs/DATs and manipulating the inputs so as to produce a continuous flow of music distinct from listening to each single sequentially. The manipulation may include scratching, EQing, sampling, drum machines, digital effects, and mash-ups. Therefore, a DJ mix is distinct from merely uploading/burning a folder of mp3s and distributing it. It’s a performance.
However, the performance is built upon copyrighted material from other artists. When a DJ buys a vinyl/CD/mp3 at a record store, he/she purchases the right of personal listening. Many records will say “Unauthorized public performance, broadcasting, and copying of this record prohibited” on the label. When DJs release professional mix CDs through a record label, they obtain legal permission from the copyright holders to include their tracks in the mix. Dance clubs pay annual fees to the two major artist organizations for public performance rights to cover DJs that play at their venue. Record shops that sell unauthorized mixtapes have been prosecuted for copyright violation, so most stores don’t sell them.
Every week (more or less), I look at a software patent that’s been in the news. You can see previous installments in the series here. There haven’t been any big patent disputes in the news the last couple of weeks, so this week we’ll look at a patent that’s at the center of a lawsuit that was filed last August by Altnet against Streamcast. You can read about the long and tangled history of the two companies in the link above.
Here is one of the patents at issue in the case. It covers “Data processing system using substantially unique identifiers to identify data items, whereby identical data items have the same identifiers.” Here’s a description of how the patent differs from prior art:
In all of the prior data processing systems the names or identifiers provided to identify data items (the data items being files, directories, records in the database, objects in object-oriented programming, locations in memory or on a physical device, or the like) are always defined relative to a specific context. For instance, the file identified by a particular file name can only be determined when the directory containing the file (the context) is known. The file identified by a pathname can be determined only when the file system (context) is known. Similarly, the addresses in a process address space, the keys in a database table, or domain names on a global computer network such as the Internet are meaningful only because they are specified relative to a context.
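To make the distinction concrete, here's a minimal sketch of the general idea of a content-derived identifier. This is just an illustration of the concept, not the patent's claimed system: the identifier is computed from the data itself, so identical items get identical identifiers no matter what directory, pathname, or other context they live in.

```python
# Hypothetical illustration of content-derived identifiers (not the patented
# system itself): the ID depends only on the bytes, not on any context.
import hashlib

def content_id(data: bytes) -> str:
    """Return a substantially unique identifier computed from the data itself."""
    return hashlib.sha256(data).hexdigest()

a = content_id(b"the same bytes")
b = content_id(b"the same bytes")
c = content_id(b"different bytes")

print(a == b)  # True: identical data items, identical identifiers
print(a == c)  # False: different data items get (almost certainly) different identifiers
```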
Lee Gomes of the Wall Street Journal has a fun piece in today’s paper about the amazing gains that have been made in the field of digital storage technology. He notes that we have reached another major milestone in the computing business with the announcement of several terabyte-capacity disk drives from Hitachi, Seagate and others. (I saw some of these at CES this year. Very cool stuff.) The last time we reached a major storage milestone like this, he points out, was back in 1991, when we crossed the gigabyte threshold.
I’ll never forget thinking to myself when those first 1-gig drives came out, “Geez, who in the hell would ever need that much capacity?” What an idiot I was. Of course, I could not have envisioned the explosion of downloadable digital content, the rise of digital photography and camcorders, and the coming of storable HD video. I recently maxed out an old 100-gig hard drive on a PC at my house and started stacking external hard drives to store all my digital content. And my wife and I have been holding off on upgrading to an HD camcorder because we fear we don’t have enough storage space for all the home movies of the kids.
But hopefully that will now change for me. As Gomes points out, back when those old 1-gig drives were announced, they were priced in the $2,000 range. By contrast, the new 1-terabyte drives are hitting the market at just $400. This means that, on a cost-per-byte basis, the old 1-gig models were 5,000 times as expensive as the newer models.
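A quick back-of-the-envelope check of that ratio, using the round figures above:

```python
# Rough cost-per-gigabyte comparison (using the approximate prices cited above).
old_cost_per_gb = 2000 / 1       # ~$2,000 for a 1 GB drive circa 1991
new_cost_per_gb = 400 / 1000     # ~$400 for a 1,000 GB (1 TB) drive
print(old_cost_per_gb / new_cost_per_gb)  # -> 5000.0, i.e. 5,000x more per byte
```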
You gotta love capitalism!