February 2007

Time for a quick reality check. The Federal Communications Commission regulates older media sectors and communications technologies: broadcast radio, broadcast TV, telephones, satellites, etc. These sectors and technologies are growing increasingly competitive and face myriad new, unregulated rivals. What, then, is wrong with this picture?
[Chart: FCC budget growth]

Seriously, I just don’t get it. Why does the FCC’s budget keep growing without constraint? Why does it need $313 million and nearly 2,000 bureaucrats to regulate industries and technologies that could do just fine, thank you very much, without endless meddling from DC? It seems to me that all those unregulated rivals are doing just fine without the FCC serving as market nanny, so why not cut the flow of funds to the FCC for a while and see what happens?

This agency needs to be put on a serious diet. There’s just no excuse for this level of spending in an era when the market is growing more competitive. Check out the entire FCC 2008 budget here if you are interested in its profligate spending habits.

(And just imagine how much more the agency will be spending once Net neutrality regulations get on the books!)

Jobs on “Open DRM”

by on February 6, 2007 · 36 comments

You should really read Jobs’ essay in its entirety, but here’s one other passage from it that’s worth highlighting (this one immediately precedes the one I quote below):

The second alternative is for Apple to license its FairPlay DRM technology to current and future competitors with the goal of achieving interoperability between different company’s players and music stores. On the surface, this seems like a good idea since it might offer customers increased choice now and in the future. And Apple might benefit by charging a small licensing fee for its FairPlay DRM. However, when we look a bit deeper, problems begin to emerge. The most serious problem is that licensing a DRM involves disclosing some of its secrets to many people in many companies, and history tells us that inevitably these secrets will leak. The Internet has made such leaks far more damaging, since a single leak can be spread worldwide in less than a minute. Such leaks can rapidly result in software programs available as free downloads on the Internet which will disable the DRM protection so that formerly protected songs can be played on unauthorized players.

An equally serious problem is how to quickly repair the damage caused by such a leak. A successful repair will likely involve enhancing the music store software, the music jukebox software, and the software in the players with new secrets, then transferring this updated software into the tens (or hundreds) of millions of Macs, Windows PCs and players already in use. This must all be done quickly and in a very coordinated way. Such an undertaking is very difficult when just one company controls all of the pieces. It is near impossible if multiple companies control separate pieces of the puzzle, and all of them must quickly act in concert to repair the damage from a leak.

Apple has concluded that if it licenses FairPlay to others, it can no longer guarantee to protect the music it licenses from the big four music companies. Perhaps this same conclusion contributed to Microsoft’s recent decision to switch their emphasis from an “open” model of licensing their DRM to others to a “closed” model of offering a proprietary music store, proprietary jukebox software and proprietary players.

This echoes a point I’ve made before: there is no such thing as an open DRM standard. By definition, an open standard is available for anyone to look at. And by its nature, DRM requires that at least some details of a DRM standard be secret, which means that it cannot be open. “Open DRM” is simply a contradiction in terms.
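To see the contradiction concretely, here’s a toy sketch (an entirely hypothetical scheme, nothing like any real DRM system): if the standard is open, the spec necessarily tells every player how to recover the plaintext, and the exact same public knowledge lets anyone build a tool that strips the protection.

```python
# Toy illustration of why "open DRM" is self-defeating. The "protection"
# here is a trivial XOR cipher standing in for a real scheme; the point is
# that an open spec gives a stripper the same code path as a licensed player.

def drm_protect(song: bytes, key: bytes) -> bytes:
    """'Protect' a song per the (public) spec: XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(song))

def licensed_player(protected: bytes, key: bytes) -> bytes:
    """A compliant player: decrypts for playback, exactly as the spec says."""
    return drm_protect(protected, key)  # XOR is its own inverse

def drm_stripper(protected: bytes, key: bytes) -> bytes:
    """An 'unauthorized' tool built from the same open spec.
    It is literally the same operation as the licensed player."""
    return licensed_player(protected, key)

song = b"some audio bytes"
key = b"published-secret"  # in an *open* standard, obtaining this is documented
protected = drm_protect(song, key)
assert drm_stripper(protected, key) == song  # protection is gone
```

The only thing keeping a real scheme’s stripper from existing is that some step is kept secret, which is precisely what an open standard forbids.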

Jobs Blasts DRM

by on February 6, 2007

Wow.

Steve Jobs just posted an essay on DRM on Apple’s website. I started reading it expecting to hate it, until I got to this:

The third alternative is to abolish DRMs entirely. Imagine a world where every online store sells DRM-free music encoded in open licensable formats. In such a world, any player can play music purchased from any store, and any store can sell music which is playable on all players. This is clearly the best alternative for consumers, and Apple would embrace it in a heartbeat. If the big four music companies would license Apple their music without the requirement that it be protected with a DRM, we would switch to selling only DRM-free music on our iTunes store. Every iPod ever made will play this DRM-free music.

Why would the big four music companies agree to let Apple and others distribute their music without using DRM systems to protect it? The simplest answer is because DRMs haven’t worked, and may never work, to halt music piracy. Though the big four music companies require that all their music sold online be protected with DRMs, these same music companies continue to sell billions of CDs a year which contain completely unprotected music. That’s right! No DRM system was ever developed for the CD, so all the music distributed on CDs can be easily uploaded to the Internet, then (illegally) downloaded and played on any computer or player.

Continue reading →

One of the most interesting things in Cory Doctorow’s article was a link to this network neutrality write-up from last year in Salon. I thought this was fascinating:

There is fractious division among network engineers on whether prioritizing certain time-sensitive traffic would actually improve network performance. Introducing intelligence into the Internet also introduces complexity, and that can reduce how well the network works. Indeed, one of the main reasons scientists first espoused the end-to-end principle is to make networks efficient; it seemed obvious that analyzing each packet that passes over the Internet would add some computational demands to the system.

Gary Bachula, vice president for external affairs of Internet2, a nonprofit project by universities and corporations to build an extremely fast and large network, argues that managing online traffic just doesn’t work very well. At the February Senate hearing, he testified that when Internet2 began setting up its large network, called Abilene, “our engineers started with the assumption that we should find technical ways of prioritizing certain kinds of bits, such as streaming video, or video conferencing, in order to assure that they arrive without delay. As it developed, though, all of our research and practical experience supported the conclusion that it was far more cost effective to simply provide more bandwidth. With enough bandwidth in the network, there is no congestion and video bits do not need preferential treatment.”
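Bachula’s point can be sketched in a few lines (a deliberately simplified model, not real router code, with made-up packet labels): a prioritizing link has to classify every packet before forwarding, while an over-provisioned dumb pipe just forwards. With enough capacity, both deliver the same packets, and the FIFO link never paid the per-packet inspection cost.

```python
# Toy model of the trade-off: a "smart" prioritizing link inspects and
# reorders every packet; a "dumb" over-provisioned link forwards in order.
import heapq

packets = [("web", i) for i in range(8)] + [("video", i) for i in range(8, 12)]

def fifo_forward(packets):
    """Dumb pipe: forward in arrival order, zero per-packet inspection."""
    return list(packets)

def priority_forward(packets):
    """'Smart' pipe: classify each packet, then serve video first."""
    heap = []
    for seq, (kind, i) in enumerate(packets):
        rank = 0 if kind == "video" else 1  # per-packet inspection cost
        heapq.heappush(heap, (rank, seq, (kind, i)))
    out = []
    while heap:
        _, _, pkt = heapq.heappop(heap)
        out.append(pkt)
    return out

# Same packets arrive either way; the priority link paid an extra
# classification step per packet just to reorder them.
assert set(fifo_forward(packets)) == set(priority_forward(packets))
```

On a congested link the reordering matters; Bachula’s argument is that adding bandwidth removes the congestion, and with it the reason to inspect anything.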

Continue reading →

I missed it when it came out last summer but Cory Doctorow has a good article on network neutrality. Good, but a little strange. He makes a lot of good points, but then he ends up at a conclusion that doesn’t seem to follow from the points he made earlier in the article. On the one hand, he rightly points out that the end-to-end principle has been crucial to the success of the Internet to date, and that there’s reason to worry about telcos screwing around with it. On the other hand, he gives some great examples of the challenges regulators are likely to face:

The rules are going to have to do three incredibly tricky things:

1. Define network neutrality. This is harder than it sounds. If a Bell lets Akamai put one of its mirror servers in a central office, then Akamai’s customers can get a better quality of service to the Bell’s customers than those using an Akamai competitor. This is arguably a violation of net neutrality, but how do you solve it? It’s probably not practical to require the Bells to let all comers put local caches on their premises; there’s only so much rack space, after all.

Another tricky case: the University that provides a DSL service to its near-to-campus housing and configures its network to deliver guaranteed throughput to a courseware archive. It gets even stickier if the DSL and/or the courseware archive are supplied by commercial third parties. Poorly written net neutrality regulations could prevent universities from providing those services, which should be allowed.

Continue reading →

A note on the design.

by on February 5, 2007

As regular visitors will notice, we’ve refreshed the design of the site a bit. Nothing radical, but we hope the changes will make the blog easier to read and navigate. New features include author pages that list the posts of individual contributors, Google results in the search bar, and new Digg and Reddit buttons for your convenience–please feel free to use them!

We want to thank all of our readers for making this blog a success. If you enjoy the site, consider telling a friend or linking to it from your blog. ¡Viva la liberation!

Linux Genuine Advantage

by on February 2, 2007

This is hilarious. Be sure to check out the FAQ. And I’m especially amused that they went to the trouble of actually writing source code, which includes some funny comments.

“Hysterical Morons”

by on February 2, 2007 · 4 comments

Via Bruce Schneier, Wired has an analysis of the legality of the great Lite-Brite Terrorist Plot:

Intent, in essence, is the entire substance of the charge: if Beredovsky meant to cause a panic (somehow psychically being able to foresee the abject hysteria that would grip the officials of Boston in response to a picture of a cartoon character giving onlookers the finger), he’s guilty. If he didn’t–and it’s pretty obvious he didn’t–he’s innocent.

The word, though, that everyone keeps on throwing around is that his Mooninite Boxes were ‘hoaxes.’ What exactly does the state of Massachusetts mean when they claim a bunch of stray Lite-Brites were hoax devices?

Again, according to the law:

For the purposes of this section, the term “hoax device” shall mean any device that would cause a person reasonably to believe that such device is an infernal machine. For the purposes of this section, the term “infernal machine” shall mean any device for endangering life or doing unusual damage to property, or both, by fire or explosion, whether or not contrived to ignite or explode automatically. For the purposes of this section, the words “hoax substance” shall mean any substance that would cause a person reasonably to believe that such substance is a harmful chemical or biological agent, a poison, a harmful radioactive substance or any other substance for causing serious bodily injury, endangering life or doing unusual damage to property, or both.

The million dollar term here? “Reasonably believe.” Could a bunch of light-up boxes advertising a cartoon really be reasonably mistaken for an infernal device? I guess it depends what you mean by reasonably. In my book, someone being reasonable presumes they aren’t a hysterical moron, but I’m not really sure the state of Massachusetts shares my definition.

Lawrence Lessig has a new half-hour presentation on his blog where he outlines his opposition to the Copyright Office’s recommendations on orphan copyright works that were the basis for the proposed Orphan Works Act of 2006, and which were very similar to the proposal Bridget Dooling and I made. He also proposes his own alternative solution, which is much like the proposed Public Domain Enhancement Act he helped craft and which Bridget and I have critiqued. I find his new articulation to still be completely unworkable. Let me explain.

Continue reading →

I’ve written on this blog before about Cyren Call, Nextel founder Morgan O’Brien’s venture to create a national wireless broadband network for first responders. Its plan calls for 30 MHz of spectrum in the 700 MHz band that is slated for auction. A couple of months ago the FCC turned down Cyren Call’s petition, saying Congress’s instructions were quite clear and the Commission didn’t have the authority to refuse to auction the spectrum. Morgan O’Brien spoke at the symposium we held late last year and hinted that he was already working on getting Congress to approve his plan. (Video here.)

Well, today comes word that John McCain has signed on to the Cyren Call plan. This is especially newsworthy since the Senate will soon take a look at the recently passed House bill to implement the 9/11 Commission’s recommendations. As I explained earlier today, that bill addresses first responder communications, but doesn’t mention new spectrum for public safety. McCain said he plans to introduce legislation in the near future to assign the 30 MHz to the Public Safety Broadband Trust the Cyren Call plan calls for. I’m not convinced you need 30 MHz of spectrum to create a viable network, and so I’m not sure it’s time to remove spectrum from efficient allocation by auctions. Verizon hinted a while back that they could do it in just 12 MHz of the 24 already slated for public safety, and the FCC is currently taking comments in a proceeding to create just such a network in 12 MHz. Comments are due on Feb. 26. Note to Verizon: Now would be a fine time to make details of your plan public.

The other problem I see is that the Cyren Call/McCain plan would create one monopoly provider. The FCC plan has the same problem. If it can be done in 12 MHz, why not create two competing networks in the 24 MHz of spectrum already allocated for public safety?