If you thought the FCC’s regime of speech controls for broadcast television and radio was arbitrary and excessive, then just wait till we give similar authority to the Federal Trade Commission (FTC) to regulate video game content! That’s apparently what House Telecommunications Subcommittee Chairman Fred Upton (R-Mich) plans to do.

According to Broadcasting & Cable magazine, he is preparing a bill to give the FTC greater authority to fine video game manufacturers whose games contain objectionable content. Upton, you will recall, was the sponsor of the recently passed Broadcast Decency Enforcement Act, which raised the fines that the FCC could impose on broadcasters 10-fold. Apparently, he wants to give the FTC greater powers because he is angry about the agency’s recent decision in the “Grand Theft Auto” investigation. He said that the FTC’s action “wasn’t even a slap on the wrist” and that millions of dollars of fines should have been levied.

It’s just more bad news for the video game industry and fans of the First Amendment. (Here’s my recent paper summarizing some of the other threats the industry faces.)

On the First Amendment front, the big news coming out of Washington this week was that, well… your government still doesn’t really believe in the First Amendment! President Bush signed into law a massive increase in broadcast “indecency” penalties. The new law, the Broadcast Decency Enforcement Act, boosts the fines that the Federal Communications Commission can impose on television and radio broadcasters from a previous maximum of $32,500 to $325,000, a 10-fold increase.

No surprise here, of course. It’s an election year and this sort of thing wins you brownie points with certain constituencies. While I don’t want to get into an extended legal analysis about why I think all this will eventually be struck down by the courts (see this essay for that discussion), I just want to point out, for the umpteenth time, the radically unfair and illogical nature of all this. Let’s just lay out the current state of affairs in terms of First Amendment protection in America:

Continue reading →

Shooting the Canaries

by on June 16, 2006

Mike Masnick makes a good point about “patent trolls”:

Rep. Lamar Smith… held hearings today to see if Congress could come up with a working definition of a patent troll. While it’s good to see Congress recognizing that patent hoarding can hold back innovation, defining just what a patent troll is doesn’t seem like it’s going to help. The issue isn’t whether or not anyone is a patent troll, but whether the patent system is being used to hold back innovation. Trying to define what a patent troll is will simply confuse the issue, and lead companies to focus on avoiding the specific definitions of a patent troll, while trying to accuse everyone they get into a patent lawsuit with of meeting the regulatory definition of patent troll. A much more important issue would be to focus on making sure the patent system is actually encouraging innovation.

I suspect this reflects the distorted view you get when the legislative process is dominated by industry lobbyists. For the most part, big companies don’t mind over-broad patents so much. They have a lot of patents of their own, which they can use as barriers to entry against smaller competitors, while they sign cross-licensing agreements with other big companies to minimize litigation. The only problem comes when a small company dares to sue them. Then they’re pissed!

In a sense, patent trolls are canaries in the coal mine of our patent system. They’re a signal that certain parts of the patent system are becoming harmful to innovation. But instead of figuring out how to fix the patent system, Rep. Smith seems to think the solution is to shoot the canaries.

This week’s software patent is held by Skyline Software Systems, a “leading provider of network-based 3D Earth visualization software and service.” Naturally, Google Earth is one of its primary competitors. Google Earth was originally developed by Keyhole, which Google acquired in October 2004.

When Google acquired Keyhole, it inherited a legal spat with Skyline as well. Last week, the judge in the case declined to order Google Earth shut down pending the outcome of the litigation. But the case goes on.

According to CNet, the patent in question is this one. It describes:

A method of providing data blocks describing three-dimensional terrain to a renderer. The data blocks belong to a hierarchical structure which includes blocks at a plurality of different resolution levels. The method includes receiving from the renderer one or more coordinates in the terrain along with indication of a respective resolution level, providing the renderer with a first data block which includes data corresponding to the one or more coordinates, from a local memory, and downloading from a remote server one or more additional data blocks which include data corresponding to the one or more coordinates if the provided block from the local memory is not at the indicated resolution level.

Is this an obvious patent?
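For context on that question, the claimed method is easy to restate in a few lines of code: return the best terrain block already in a local cache, and download any missing finer-resolution blocks from a remote server. The sketch below is one minimal reading of the claim; all class and function names are hypothetical and are not drawn from Skyline’s or Google’s actual software.

```python
# Hypothetical sketch of the claimed method: a hierarchical terrain cache
# that immediately returns whatever block it holds locally, then fetches
# finer-resolution blocks from a remote server on demand.

class TerrainCache:
    def __init__(self, fetch_remote):
        self._blocks = {}              # (x, y, level) -> block data
        self._fetch_remote = fetch_remote

    def add_block(self, x, y, level, data):
        self._blocks[(x, y, level)] = data

    def get_blocks(self, x, y, level):
        """Return the finest locally cached block for (x, y), plus any
        finer blocks downloaded from the remote server when the local
        copy is coarser than the requested resolution level."""
        # Find the finest cached block at or below the requested level.
        local = None
        for lvl in range(level, -1, -1):
            if (x, y, lvl) in self._blocks:
                local = (lvl, self._blocks[(x, y, lvl)])
                break
        results = [local] if local else []
        # If the cached block is coarser than requested, download the
        # missing finer levels from the remote server and cache them.
        have = local[0] if local else -1
        for lvl in range(have + 1, level + 1):
            data = self._fetch_remote(x, y, lvl)
            self._blocks[(x, y, lvl)] = data
            results.append((lvl, data))
        return results
```

A renderer would call `get_blocks(x, y, level)` each frame: it gets a coarse block from the local cache right away and progressively finer blocks as the downloads complete. Whether that idea was non-obvious to a graphics programmer at the time of filing is exactly the question.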

Continue reading →

As regular readers of TLF know, I’m not a big fan of software patents. The more I learn about them, the more I’m amazed at the sheer scope of the problem. Every month, thousands of new software patents are issued. And at any given time there are dozens of software patent lawsuits before the courts. A few of them get big headlines, but most of them are never reported outside of the tech press.

So I’ve decided to do my small part to publicize the scope of the problem: every week, on Friday, I’m going to feature and analyze a software patent. In most cases, they’ll be software patents that are the subject of current litigation. My purpose for each week’s post will be to answer two questions: is this an obvious patent? And do patents like this promote innovation?

There’s enough software patent litigation out there that I don’t expect it to be that difficult to find a new case to highlight each week. But it would be a lot easier with help. So if you know of an example of an interesting software patent case–good or bad–shoot me an email at tlee -at- showmeinstitute.org and let me know about it.

In the future, this post will also serve as an index to the Software Patent of the Week series. Each week, I’ll add the latest software patent to this list, so that people can easily find the whole series.

Continue reading →

This week I traveled to Brussels and, along with my friends at the Internet Content Rating Association (ICRA), co-hosted an interesting roundtable discussion entitled “Mission Impossible: Protecting Children and Free Expression in Our New, Digital Content World.” The focus of the day’s discussion was the same as previous ICRA roundtables that I have participated in and written about here before: What steps can we take to shield children from potentially objectionable media content without repressing freedom of speech / expression?

In addition to being the focus of much of my ongoing research at PFF, you might also recall that I wrote about a major summit on similar issues that took place in Washington, D.C. last week. That event, which was co-hosted by the New America Foundation and the Kaiser Family Foundation, featured keynote addresses from Senator Hillary Clinton among other important lawmakers and public policy experts.

This week’s Brussels roundtable featured a similarly impressive collection of interested parties from major European and American corporations and organizations, including: EU Commission officials, EuroISPA, NICAM (Netherlands Institute for the Classification of Audiovisual Media), Ofcom (UK communications / media regulatory agency), AOL Europe, ECO, MPAA Europe, Microsoft Germany, i-Sieve, Google Europe, Verizon, NASK, Cisco, Telefonica, the U.S. State Department, and several others.

Continue reading →

The Death of Private Media?

by on June 15, 2006

James Pinkerton predicts the rise of the “state-owned mainstream media.” He points out that ever-increasing pressures on the margins of traditional media outlets like CNN and the New York Times will create a void that will be filled with government-run media sources like the BBC, NPR, and Voice of America:

This is the future of media: Some elements of the MSM will survive, probably. Bloggers will thrive, of course, but 99.9 percent of them are amateurs, without so much as one full-time employee. What will survive and thrive for sure, however, is the SOMSM. Every country with ambitions on the international stage will soon have its own state-supported media.

If war is too important to be left to generals, then news is too important to be left to reporters. Governments, including ours, have their own ideas, and they want to share them with us, the people–like it or not.

In addition, around the world, states will want to “help” their media. Not satisfied with what the free market is bringing about, politicians will offer to help out the invisible hand–help it, that is, with their own iron fist.

This strikes me as silly. Pinkerton’s actually wrong about bloggers–the percentage of amateur bloggers is much higher than 99.9 percent. But then there are more than 40 million blogs in the world, so even if only a tiny fraction of them are professionals, that still leaves plenty of room for high-quality reporting. Some bloggers (like me) are lucky enough to have jobs that allow blogging on the side. Others, such as Andrew Sullivan, have become successful enough that they generate sufficient ad revenue, speaking fees, etc., to support themselves as full-time bloggers. Others, such as the writers of political magazines like Reason and The American Prospect, blog as part of their day jobs. And still other blogs, such as Slashdot, have become successful, ad-supported commercial news outlets with full-time staffs.

Continue reading →

Freezing the ‘Net

by on June 15, 2006 · 10 comments

TechDirt points to an excellent article on network neutrality:

Reality check: why doesn’t your landline phone do most of the things your cellphone does? It doesn’t have to worry about either battery life or size. The reason is that it’s attached to the traditional phone network on which innovation simply can’t happen. Telcos would like to make the Internet a similar innovation-free and profit-safe zone.

OK. This shouldn’t be allowed to happen. Proponents of net neutrality legislation say there oughtta be a law. But plenty of smart people–perhaps represented best by Martin Geddes–argue that a net neutrality law would be counterproductive. Turns out that neutrality itself is very hard to define. Should a neutral network be prohibited from blocking packets which attack the network itself? What about spam–does it have to be treated neutrally? What if someone invents a special purpose network good for connecting vending machines to something or other; does that network have to provide Google access in a non-discriminatory manner?

Once neutrality is defined by regulation and enforced by bureaucrats, the requirement itself could become an obstacle to innovation. Even more scary, given the skill of the telcos in manipulating Congress (can you say “campaign contribution”?) and the FCC, could the neutrality requirement end up being enforced only against innovators? What if there were a five-year wait for a “neutrality” permit before a new application could be deployed. Wouldn’t the telcos love that? Come to think of it, they have been pretty good lately at getting the FCC and the courts to throw obstacles in the way of VoIP.

The article goes on to argue that the real issue is the lack of competition in the broadband market. As some commenters to Tuesday’s post point out, there’s a lack of good data about exactly how many choices the average consumer has, but I think everyone can agree that more choice and competition would be better.

I also think it’s worth pointing out something about the traditional telephone network: it is precisely the system that Larry Lessig holds up as a model for beneficial “common carrier” regulation. I suspect that a big part of the reason that cell phones have become so much more capable than their tethered counterparts is that the FCC’s “common carrier” regulations have slowed the Baby Bells down in offering new products and services. Lessig argues, with some plausibility, that those regulatory requirements led to the fiercely competitive dial-up Internet market, but they also left the landline telephone market itself pretty stagnant. That doesn’t strike me as a good model for the Internet.

Today Adobe finally released its statement on the whole debacle with Microsoft regarding Microsoft’s inclusion of PDF support in upcoming versions of Office and the Vista operating system. The statement is not completely unintelligible gibberish (despite the “double talk” in my blog entry’s title). Indeed, the statement is a remarkable product of… CAPITALISM.

This concern over open standards may come across as an obscure, geek-infested issue, but at its core is good old-fashioned competition. Adobe vs. Microsoft has brought out the real incentives behind open standards. It’s not about the good, pure route toward a better society. It’s about money. Make no mistake about it, companies are willingly pushing open standards to governments for corporate leverage.

Here’s a relevant part of the statement:

Adobe is committed to open standards. Adobe publishes the complete PDF specification and makes it available for free, without restrictions, without royalties, to anyone who cares to use it. Because we license the PDF specification so openly, it has become a de facto standard, used by hundreds of independent software vendors worldwide. PDF is incorporated into a number of ISO standards, and Adobe encourages developers, independent software vendors and publishers to support and embrace it.

The above is Adobe’s pitch that it has created a successful product that it wants everybody to use–everybody except Microsoft, that is. As the statement continues:

Continue reading →

Next week, the FCC may revisit the issue of whether cable providers will be required to carry every channel of programming transmitted by over-the-air broadcasters. “Must-carry” itself is not a new idea–for years cable systems have been forced to carry broadcast signals over their networks. When broadcasters switch to digital transmission, however, each will be able to transmit multiple channels over the same bit of spectrum. So, should cable firms be required to carry each and every one of these channels? The FCC said “no” to such multicast must-carry rules a few years ago. But that was under Chairman Michael Powell. Current chairman Kevin Martin feels differently about multicast must-carry, and may now have the votes to reverse the prior decision. (More on the issue here.)

This week, he got support for this expanded regulation from an unlikely source: AT&T. AT&T, you may remember, has in recent months been exhaustively making the case against another set of rules–neutrality regulation. The federal government should keep its paws off private networks, the company (rightly) argued, warning that such rules would discourage needed investment in those networks. However, this week a spokesman said that, regarding must-carry, AT&T had no objection to federal paws. “We’re more than happy to put this programming on our network,” he said. “We support multicast must-carry.”

Continue reading →