June 2006

Daniel Markham takes me to task for being one of those “software patents are destroying the world!” types:

Imagine for a minute that I just got off a time machine from the year 5600. I know how to make truly intelligent machines, so I sit down and write a patent on how to make computer intelligence. Now at the heart of my patent will be arrays, indexes, memory cores–all of the usual computer stuff. It’s all just ones and zeros, folks. But obviously my patent has tremendous value to society.

This is a silly example, but since it IS possible to make an example where software patents make sense, the question isn’t whether they are useful or not, the question is how to tell the difference. That’s a big point that a lot of folks miss. Get rid of the bath water, keep the baby.

This is a silly example for a number of reasons, and not just the obvious ones. In the first place, it’s unlikely that somebody’s going to sit down at his computer and come up with a single breakthrough that makes computers instantly intelligent. More likely, there will be a long series of incremental improvements. Each advancement will give its creator a short-term advantage in the marketplace before another firm comes up with another incremental improvement that puts it ahead. This process of incremental improvement and imitation is the way the software industry has worked for decades.

Continue reading →

Sloppiness in TNR

June 16, 2006 · 22 comments

The New Republic seems to believe that the lack of network neutrality will somehow lead to the end of the blogosphere:

[The Internet] is where Americans can not only search for the best deal on a new digital camera, but also debate the country’s future. Unlike the telephone, it is a medium in which thousands, even millions, of people can participate in the same discussion at the same time. Unlike television, it is interactive. But it can’t function optimally if content is prioritized or filtered by telecom companies. Allowing companies to levy a toll on information providers is not just a blow to consumer choice–it’s a blow to democracy.

Andrew Kantor of USA Today (who, reader Raphy points out, recently had a change of heart on the issue) has a column that nicely rebuts this kind of silliness:

I’ve read quotes from bloggers saying their content wouldn’t be delivered as quickly as that from, say, USA TODAY–thus depriving people of information that isn’t from the mainstream media. And people speak of the “little guy” not being able to compete with monster corporations with monster bandwidth.

But that makes no sense. Small information providers like bloggers don’t connect directly to the Internet; they buy space on hosting sites, either maintaining their own or on a shared blogging site (e.g., Blogger.com). It’s those hosts that buy the bandwidth, and they often tout their connection speeds.

Think about it: Google owns Blogger. Do you think Blogger users are going to be deprived of bandwidth for lack of funds?

And Jim Lippard points out that TNR repeats the falsehood that network neutrality rules always applied to the Internet before the evil Bush administration stopped enforcing them.

I’m ordinarily a big fan of the New Republic‘s articles. It’s sad to see them repeating bogus MoveOn talking points.

If you thought the FCC’s regime of speech controls for broadcast television and radio was arbitrary and excessive, then just wait till we give similar authority to the Federal Trade Commission (FTC) to regulate video game content! That’s apparently what House Telecommunications Subcommittee Chairman Fred Upton (R-Mich.) plans to do.

According to Broadcasting & Cable magazine, he is preparing a bill to give the FTC greater authority to fine video game manufacturers if their games contain objectionable content. Upton, you will recall, was the sponsor of the recently passed Broadcast Decency Enforcement Act, which raised the fines the FCC could impose on broadcasters 10-fold. Apparently, he wants to give the FTC greater powers because he is angry about the agency’s recent decision in the “Grand Theft Auto” investigation. He said that the FTC’s action “wasn’t even a slap on the wrist” and that millions of dollars in fines should have been levied.

It’s just more bad news for the video game industry and fans of the First Amendment. (Here’s my recent paper summarizing some of the other threats the industry faces.)

On the First Amendment front, the big news coming out of Washington this week was that, well… your government still doesn’t really believe in the First Amendment! President Bush signed into law a massive increase in broadcast “indecency” penalties. The new law, the Broadcast Decency Enforcement Act, boosts the fines that the Federal Communications Commission can impose on television and radio broadcasters from a maximum of $32,500 to $325,000, a 10-fold increase.

No surprise here, of course. It’s an election year, and this sort of thing wins you brownie points with certain constituencies. While I don’t want to get into an extended legal analysis of why I think all this will eventually be struck down by the courts (see this essay for that discussion), I just want to point out, for the umpteenth time, the radically unfair and illogical nature of all this. Let’s just lay out the current state of affairs in terms of First Amendment protection in America:

Continue reading →

Shooting the Canaries

June 16, 2006

Mike Masnick makes a good point about “patent trolls”:

Rep. Lamar Smith… held hearings today to see if Congress could come up with a working definition of a patent troll. While it’s good to see Congress recognizing that patent hoarding can hold back innovation, defining just what a patent troll is doesn’t seem like it’s going to help. The issue isn’t whether or not anyone is a patent troll, but whether the patent system is being used to hold back innovation. Trying to define what a patent troll is will simply confuse the issue, and lead companies to focus on avoiding the specific definitions of a patent troll, while trying to accuse every one they get into a patent lawsuit with of meeting the regulatory definition of patent troll. A much more important issue would be to focus on making sure the patent system is actually encouraging innovation.

I suspect this reflects the distorted view you get when the legislative process is dominated by industry lobbyists. For the most part, big companies don’t mind over-broad patents so much. They have a lot of patents of their own, which they can use as barriers to entry against smaller competitors, while they sign cross-licensing agreements with other big companies to minimize litigation. The only problem comes when a small company dares to sue them. Then they’re pissed!

In a sense, patent trolls are canaries in the coal mine of our patent system. They’re a signal that certain parts of the patent system are becoming harmful to innovation. But instead of figuring out how to fix the patent system, Rep. Smith seems to think the solution is to shoot the canaries.

This week’s software patent is held by Skyline Software Systems, a “leading provider of network-based 3D Earth visualization software and service.” Naturally, Google Earth is one of its primary competitors. Google Earth was originally developed by Keyhole, which Google acquired in October 2004.

When Google acquired Keyhole, it inherited a legal spat with Skyline as well. Last week, the judge in the case declined to order Google Earth shut down pending the outcome of the litigation. But the case goes on.

According to CNet, the patent in question is this one. It describes:

A method of providing data blocks describing three-dimensional terrain to a renderer. The data blocks belong to a hierarchical structure which includes blocks at a plurality of different resolution levels. The method includes receiving from the renderer one or more coordinates in the terrain along with indication of a respective resolution level, providing the renderer with a first data block which includes data corresponding to the one or more coordinates, from a local memory, and downloading from a remote server one or more additional data blocks which include data corresponding to the one or more coordinates if the provided block from the local memory is not at the indicated resolution level.
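
Stripped of the patent-ese, the claim describes what a graphics programmer would recognize as a level-of-detail tile cache: hand the renderer whatever block you already have for the requested spot, and download a finer-resolution block from the server when the cached one is too coarse. Here’s a minimal sketch of that idea in Python; every name in it (tile_key, fetch_from_server, the (x, y, level) grid scheme) is my own hypothetical illustration, not anything taken from Skyline’s patent or Google Earth’s code.

```python
# A hypothetical level-of-detail tile cache, sketched to mirror the
# claim's two steps: (1) serve a block from local memory, (2) download
# from a remote server if that block isn't at the requested resolution.

local_cache = {}  # maps (tile_x, tile_y, level) -> terrain data block

def tile_key(x, y, level):
    """Quantize world coordinates to the tile grid at a resolution level.
    Higher level numbers mean coarser tiles covering larger areas."""
    size = 2 ** level
    return (x // size, y // size, level)

def fetch_from_server(key):
    """Stand-in for the remote download; a real client would issue an
    HTTP request here and parse actual terrain data."""
    block = {"level": key[2], "data": b"..."}  # placeholder payload
    local_cache[key] = block
    return block

def get_block(x, y, level, max_level=16):
    """Return a terrain block covering (x, y) for the renderer."""
    # Step 1: find the finest cached block covering these coordinates,
    # starting at the requested level and falling back to coarser ones.
    cached = None
    for lvl in range(level, max_level + 1):
        cached = local_cache.get(tile_key(x, y, lvl))
        if cached is not None:
            break
    # Step 2: if nothing is cached, or the cached block is coarser than
    # the renderer asked for, download the requested-resolution block.
    # (A real client would do this asynchronously while the renderer
    # draws the coarse block it already has.)
    if cached is None or cached["level"] != level:
        return fetch_from_server(tile_key(x, y, level))
    return cached
```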

Is this an obvious patent?

Continue reading →

As regular readers of TLF know, I’m not a big fan of software patents. The more I learn about them, the more I’m amazed at the sheer scope of the problem. Every month, thousands of new software patents are issued. And at any given time there are dozens of software patent lawsuits before the courts. A few of them get big headlines, but most of them are never reported outside of the tech press.

So I’ve decided to do my small part to publicize the scope of the problem: every week, on Friday, I’m going to feature and analyze a software patent. In most cases, they’ll be software patents that are the subject of current litigation. My purpose for each week’s post will be to answer two questions: Is this an obvious patent? And do patents like this promote innovation?

There’s enough software patent litigation out there that I don’t expect it to be that difficult to find a new case to highlight each week. But it would be a lot easier with help. So if you know of an example of an interesting software patent case–good or bad–shoot me an email at tlee -at- showmeinstitute.org and let me know about it.

In the future, this post will also serve as an index to the Software Patent of the Week series. Each week, I’ll add the latest software patent to this list, so that people can easily find the whole series.

Continue reading →

This week I traveled to Brussels and, along with my friends at the Internet Content Rating Association (ICRA), co-hosted an interesting roundtable discussion entitled “Mission Impossible: Protecting Children and Free Expression in Our New, Digital Content World.” The focus of the day’s discussion was the same as previous ICRA roundtables that I have participated in and written about here before: What steps can we take to shield children from potentially objectionable media content without repressing freedom of speech / expression?

These issues are the focus of much of my ongoing research at PFF, and you might also recall that I wrote about a major summit on similar issues that took place in Washington, D.C. last week. That event, which was co-hosted by the New America Foundation and the Kaiser Family Foundation, featured keynote addresses from Senator Hillary Clinton, among other important lawmakers and public policy experts.

This week’s Brussels roundtable featured a similarly impressive collection of interested parties from major European and American corporations and organizations, including: EU Commission officials, EuroISPA, NICAM (Netherlands Institute for the Classification of Audiovisual Media), Ofcom (UK communications / media regulatory agency), AOL Europe, ECO, MPAA Europe, Microsoft Germany, i-Sieve, Google Europe, Verizon, NASK, Cisco, Telefonica, the U.S. State Department, and several others.

Continue reading →

The Death of Private Media?

June 15, 2006

James Pinkerton predicts the rise of the “state owned mainstream media.” He argues that ever-increasing pressure on the margins of traditional media outlets like CNN and the New York Times will create a void that government-run media sources like the BBC, NPR, and Voice of America will fill:

This is the future of media: Some elements of the MSM will survive, probably. Bloggers will thrive, of course, but 99.9 percent of them are amateurs, without so much as one full-time employee. What will survive and thrive for sure, however, is the SOMSM. Every country with ambitions on the international stage will soon have its own state-supported media.

If war is too important to be left to generals, then news is too important to be left to reporters. Governments, including ours, have their own ideas, and they want to share them with us, the people–like it or not.

In addition, around the world, states will want to “help” their media. Not satisfied with what the free market is bringing about, politicians will offer to help out the invisible hand–help it, that is, with their own iron fist.

This strikes me as silly. Pinkerton is actually wrong about bloggers: the percentage of amateurs is even higher than 99.9 percent. But there are more than 40 million blogs in the world, so even if only a tiny fraction of them are professional, that still leaves plenty of room for high-quality reporting. Some bloggers (like me) are lucky enough to have jobs that allow blogging on the side. Others, such as Andrew Sullivan, have become successful enough to support themselves as full-time bloggers on ad revenue, speaking fees, and the like. Others, such as the writers at political magazines like Reason and The American Prospect, blog as part of their day jobs. And still other blogs, such as Slashdot, have become successful, ad-supported commercial news outlets with full-time staffs.

Continue reading →

Freezing the ‘Net

June 15, 2006 · 10 comments

TechDirt points to an excellent article on network neutrality:

Reality check: why doesn’t your landline phone do most of the things your cellphone does? It doesn’t have to worry about either battery life or size. The reason is that it’s attached to the traditional phone network on which innovation simply can’t happen. Telcos would like to make the Internet a similar innovation-free and profit-safe zone.

OK. This shouldn’t be allowed to happen. Proponents of net neutrality legislation say there oughtta be a law. But plenty of smart people–perhaps represented best by Martin Geddes–argue that a net neutrality law would be counterproductive. Turns out that neutrality itself is very hard to define. Should a neutral network be prohibited from blocking packets which attack the network itself? What about spam–does it have to be treated neutrally? What if someone invents a special purpose network good for connecting vending machines to something or other; does that network have to provide Google access in a non-discriminatory manner?

Once neutrality is defined by regulation and enforced by bureaucrats, the requirement itself could become an obstacle to innovation. Even more scary, given the skill of the telcos in manipulating congress (can you say “campaign contribution”?) and the FCC, could the neutrality requirement end up being enforced only against innovators? What if there were a five year wait for a “neutrality” permit before a new application could be deployed. Wouldn’t the telcos love that? Come to think of it, they have been pretty good lately at getting the FCC and the courts to throw obstacles in the way of VoIP.

The article goes on to argue that the real issue is the lack of competition in the broadband market. As some commenters on Tuesday’s post pointed out, there’s a lack of good data about exactly how many choices the average consumer has, but I think everyone can agree that more choice and competition would be better.

I also think it’s worth pointing out something about the traditional telephone network: it is precisely the network that Larry Lessig holds up as a model of beneficial “common carrier” regulation. I suspect a big part of the reason cell phones have become so much more capable than their tethered counterparts is that the FCC’s “common carrier” rules have slowed the Baby Bells in offering new products and services. Lessig argues, with some plausibility, that those regulatory requirements led to the fiercely competitive dial-up Internet market, but they also left the landline telephone market itself pretty stagnant. That doesn’t strike me as a good model for the Internet.