April 2008

Yesterday bills were introduced in the House (PDF) and the Senate (PDF) addressing the orphan works copyright issue about which I’ve written many times before. Alex Curtis has a great write-up of the bills over at the Public Knowledge blog.

An orphan work is a work still under copyright whose owner cannot be located, so a potential re-user has no one to ask for permission to use or license the work. If you can’t find the owner, even after an exhaustive search, and use the work anyway, you risk the possibility that the owner will later come forward, sue you, and claim statutory damages of up to $150,000 per infringing use.

Both bills are largely based on the Copyright Office’s recommendations and not the unworkable Lessig proposal that had been previously introduced as the Public Domain Enhancement Act by Rep. Zoe Lofgren. The bills limit the remedies available to a copyright owner if an infringing party can show that they diligently searched for the owner before they used the work. (What constitutes a diligent search is specifically defined, which should address the concerns about the Smith bill expressed by visual and stock artists.)

Rather than statutory damages, the owner would simply be owed reasonable compensation for the infringing use—that is, what the infringer would have paid for the use if they had been able to negotiate. I think this is a fine solution because it gives all copyright holders an incentive to keep their registrations current and their works marked to the best of their abilities (i.e., what old-time formalities used to accomplish). I’m also happy to see that injunctive relief is limited.

Like the Smith bill, both of these new bills direct the Copyright Office to complete a study and produce a report on copyright small claims. There are many instances of copyright infringement that are too small to be litigated in federal district court—like a website that uses a copyrighted photo of mine it grabbed off Flickr. Professional photographers and other visual artists face this all the time, and there should be a way to address their concerns. One idea is to create a copyright small claims court, and it’s something I’d love to research and contribute to a Copyright Office proceeding. So if Congress has been thinking about this for a few years, what’s stopping the Copyright Office from taking on the project sua sponte?

Anyhow, stay tuned as these bills wind their way through committee and the IP maximalists weigh in.

Check out my write-up of the State Secrets Protection Act, which is Ted Kennedy’s answer to the Bush administration’s habit of answering every lawsuit with “we can’t litigate about that because it’s a state secret. We can’t tell you why it’s a state secret because that’s a state secret too.” It would create some clear ground rules regarding when the state secrets privilege can be invoked and how judges should deal with such assertions. I haven’t given this a great deal of thought, but from a quick read-through of the bill, it seems like a pretty worthwhile approach.

Salon’s technology writer Farhad Manjoo has some sensible comments about the hullabaloo we’re already hearing about the forthcoming “Grand Theft Auto 4”:

When I watched the game, I caught one sequence that would seem sure to prompt outrage — your character gets falling-down drunk and can, if he wants, steal and then drive a car. The scene is undeniably fun and funny. Admittedly, the humor is low-brow, more in the tradition of “Jackass” than of Oscar Wilde, but it’s still fun; like much else in the game, it’s the thrill of discovery, the sense of, “Whoa, I can’t believe I can do that!” Of course, that’ll be exactly the sentiment of the game’s detractors: Can you believe they’re letting children do that?! This has to be illegal!

Well, actually, nobody is letting kids play this game. It’s rated M, which means it’s for sale to people 17 or older. Kids will still get it, of course, just like they also get hold of R-rated movies and all kinds of perversities on the Web. But nobody — at least nobody sane — calls for movie houses to refuse to play R-rated movies just because kids might sneak in. It’s hard to see why the policy should be any different with video games.

That’s exactly right. Moreover, as I have pointed out countless times before, parents have more and better tools to control video game consumption by their children than any other form of media. And that’s especially the case considering the cost of video games! When a game costs $60 a pop, you gotta wonder how the kids are getting their hands on it. Are the parents just stuffing their kids’ pants full of cash and saying, “OK, Johnny, you go buy whatever you want now”? If so, they have only themselves to blame for failing to effectively use the “power of the purse” to their advantage.

Finally, let’s not forget that gritty, M-rated games like “Grand Theft Auto” are the exception to the rule, as I have proven here.

More Broadband Progress


Comcast recently unveiled a 50 Mbps broadband connection for $150/month, and has promised to have it available nationwide (and most likely to bring the price down somewhat) by 2010. Verizon is in the process of rolling out a new fiber infrastructure that will allow it to offer a similar deal (you can also get an 8 Mbps connection for $50/month). All of this makes Qwest look like a relative laggard, with its announcement that it will soon be offering a 20 Mbps service for $100/month, and 12 Mbps for $50/month. And AT&T brings up the rear with plans for “only” 10 Mbps service.

One sometimes sees hand-wringing about the anemic state of the American broadband market. To the extent that other countries are doing even better (a debatable contention), we should certainly be looking for ways to make the broadband market more competitive. No doubt things would be progressing even faster if there were more players in the market. But the stronger claim you sometimes see, that there’s something deeply dysfunctional about the progress of the US broadband industry, is positively silly.

If Comcast and Verizon deliver on their promises to roll out 50 Mbps service by the end of the decade, and if prices follow their historical pattern of dropping over time, consumers in their service footprints will have seen the average speed of a $50 Internet connection increase by three orders of magnitude in about 15 years. And it’s been a reasonably steady pace of growth: in round numbers, $50 would buy you a 56 kbps connection in 1998, a 512 kbps connection in 2001, a 5 Mbps connection in 2006, and (I hope) a 50 Mbps connection sometime early next decade. Things have consistently improved by a factor of 10 every 4-6 years.
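For what it’s worth, the arithmetic behind that claim checks out. Here’s a quick back-of-the-envelope sketch in Python, using the data points quoted above (the 2012 endpoint is my own assumption standing in for “sometime early next decade”):

    import math

    # Approximate speed (in kbps) that $50/month bought in a given year,
    # per the figures quoted above; 2012 is an assumed endpoint.
    speeds_kbps = {1998: 56, 2001: 512, 2006: 5000, 2012: 50000}

    years = 2012 - 1998                                   # 14 years
    total_growth = speeds_kbps[2012] / speeds_kbps[1998]  # ~893x, nearly 3 orders of magnitude
    annual_factor = total_growth ** (1 / years)           # ~1.62x per year
    years_per_10x = 1 / math.log10(annual_factor)         # ~4.7 years per factor of 10

    print(f"{total_growth:.0f}x over {years} years; ~{annual_factor:.2f}x per year; "
          f"a factor of 10 every {years_per_10x:.1f} years")

That works out to an order of magnitude roughly every five years, squarely within the 4-6 year range.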

It’s interesting to look back at the broadband argument we were having a couple of years ago. In mid-2006, TLF reader Luis Villa was reporting that he and his folks were still seeing typical broadband speeds of 2 Mbps. Maybe he can chime in to let us know if that number has gone up at all. Personally, my apartment in St. Louis gets around 5 Mbps for about $30/month, and the house where I’m currently staying in DC has a 17 Mbps Comcast connection.

One of the things that makes progress hard to judge is that broadband speeds and prices don’t tend to improve continuously and evenly around the country. Rather, carriers take turns leap-frogging one another, with each upgrade accompanied by a temporary price increase. So it can be tricky to judge the average rate of improvement by looking at just one market, because any one market may seem to stagnate for several years at a time. But if one looks at the country as a whole, and focuses on time horizons closer to a decade, I think it’s undeniable that things are improving at a fairly rapid pace.

Don Marti offers a good example of the point of my last post:

The Smoot-Hawley Tariff was law. Many of the “code is law” design choices are similar: they can cause big effects without necessarily implementing the lawmaker’s intent.

I couldn’t have put it better myself. And this brings to mind another important point about “law” that I think is missed in Lessig’s “code is law” formulation: the real “law” Lessig is analogizing computer code to often isn’t exactly omnipotent. The War on Drugs is law, but there’s still lots of drug use going on. Strict gun control is law in some big cities, but there’s still plenty of gun violence. The McCain-Feingold soft money ban is law, but there’s still plenty of corruption in Washington. The copyright system is law, but there’s still plenty of illicit file sharing going on. And in all of these cases, the laws have had significant effects that were very different from their authors’ intent.

And indeed, if we’re going to analogize ISPs to legislators, it’s important to remember that legislators have several key advantages over ISPs. In the first place, actual legislators can have you thrown in jail, whereas the worst an ISP can do to you is disconnect your service. Second, governments have effectively unlimited sums of money to waste on futile efforts like the drug war. In contrast, ISPs are profit-seeking institutions, so there are limits on the amount of idiocy they can undertake before financial considerations force them to stop. Finally, the law has at least some moral suasion behind it; the fact that something is illegal causes most people to hide that they’re doing it. In contrast, people evading the “law” of an ISP would be able to claim the moral high ground.

Yet despite all these advantages, our government hasn’t come close to stopping people from using drugs, sharing files, hiring prostitutes, owning guns, corrupting public officials, and so forth. That’s not because the relevant laws were badly designed, or because the people implementing them haven’t tried hard enough, but because there are fundamental limits on government’s ability to control the behavior of citizens in a free society.

I think something similar is true of a large-scale network like the Internet. Incompetent network owners, as the “governments” of their corners of the Internet, can certainly cause a lot of havoc, just like real governments. But their ability to control what their customers do is constrained for many of the same reasons that governments’ ability to stop drug use or file sharing is constrained. And that means that network owners that try to restrict their customers’ use of their network are as likely to shoot themselves in the foot as they are to enhance their bottom lines.

Writing my critique of Larry Lessig’s Stanford lecture brought to mind an important ambiguity in Lessig’s oft-repeated slogan that code is law. I think the phrase actually has two distinct meanings that are often conflated but actually have quite different implications. One is, I think, indisputably true, whereas the other one is at best greatly overstated.

It’s certainly true that code is law in the sense that technological architectures place important constraints on the actions of their users. A TCP/IP network allows me to do different things than Verizon’s mobile network or the global ATM network. Wikipedia is the way it is largely because the software on which it runs constrains users in certain ways and empowers them in others. CALEA has likely given law enforcement agencies surveillance powers they couldn’t have had otherwise. To the extent that this was Lessig’s point, he’s obviously right.

However, when Lessig says “code is law,” he often seems to be making a significantly stronger claim about the power of code as law: not simply that code constrains us in some ways, but that the authors of code have a great deal of control over the exact nature of the constraints their technology will place on people. On this view, virtually any outcome the designer wants to achieve, within reason, can be achieved if the code is written by a sufficiently clever and determined programmer.

This stronger formulation strikes me as obviously wrong, for at least two reasons. First, the set of tools available to the code-writer is often rather limited. Barring major breakthroughs in AI technology, many concepts and categories that are common sense to human beings cannot easily be translated into code. Rules like “block messages critical of President Bush” or “don’t run applications that undermine our business model” can’t easily be translated into hardware or software. There are a variety of heuristics that can approximate these results, but human beings will almost always be able to circumvent them.
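To make that concrete, here’s a hypothetical sketch (the rule, the phrase list, and the test messages are all my own invention for illustration) of a keyword heuristic approximating “block messages critical of President Bush,” along with two of the ways a human trivially defeats it:

    # A deliberately naive filter approximating the rule
    # "block messages critical of President Bush" with a keyword heuristic.
    CRITICAL_PHRASES = ["impeach bush", "bush lied", "bush is wrong"]

    def blocked(message: str) -> bool:
        """Flag a message if it contains any known critical phrase."""
        text = message.lower()
        return any(phrase in text for phrase in CRITICAL_PHRASES)

    print(blocked("Impeach Bush now!"))          # True: the keyword matches
    print(blocked("1mpeach B.u.s.h now!"))       # False: trivial obfuscation slips through
    print(blocked("The decider decided badly"))  # False: criticism with no keyword at all

Each evasion can be patched with yet another heuristic, but the concept “critical of the president” itself never makes it into the code, which is exactly why determined users stay a step ahead.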

The deeper challenge is a Hayekian point about spontaneous order: with a sufficiently complex and powerful technological platform, it often will not be possible to even predict, much less control, how the technology will be used in practice. Complex technologies often exhibit emergent properties, with the whole exhibiting behaviors that “emerge” from the complex interplay of much simpler constituent parts. It would have been hard for anyone to predict, for example, that the simple rules of wiki software could form the basis for a million-entry encyclopedia. Indeed, it’s pretty clear that Wikipedia was possible only because the site’s creators gave up any semblance of centralized control and allowed spontaneous order to work its magic.

A similar point applies to the Internet, and to the network owners that nominally control it. They certainly have some levers they can push to change certain micro-level characteristics of their networks, just as Jimmy Wales could make changes to the code of Wikipedia. But there’s no reason to think that either Wikipedia or Comcast has any way to predict or control what effects a given micro-level change to its platform would have on the macro-level behavior of the whole system. Both are large, complex systems whose millions of participants are each pursuing their own idiosyncratic objectives, and those participants are likely to react and interact in surprising ways.

Now, the fact that Comcast can’t predict or control what effects its tinkering with its network might have does not, of course, mean that it won’t try. But it does mean that we should be skeptical of just-so stories about telcos turning the Internet into a well-manicured walled garden. When you create complex, open systems, they tend to take on a life of their own, and once you’ve ceded centralized authority, it can be very difficult to take back. Code may be law, but I think it’s a much more limited kind of law than a lot of people seem to think.

Over at Techdirt, I’ve got a series of posts that highlight some of the major arguments in my forthcoming paper on network neutrality. I’m particularly pleased with the latest installment of the series:

He claims that “owners have the power to change [the Internet’s architecture], using it as a tool, not to facilitate competition but to weaken competition.” Do they? He doesn’t spend any time explaining how networks would do this, or what kind of architectural changes he has in mind. But he does give an example that I think is quite illuminating, although not quite in the way he had in mind.

Lessig imagines a world of proprietary power outlets, in which the electricity grid determines the make and model of an appliance before deciding whether to supply it with power. So your power company might charge you one price for a Sony TV, another price for a Hitachi TV, and it might refuse to work at all with an RCA TV. Lessig is certainly right that that would be a bad way for the electricity grid to work, and it would certainly be a headache for everybody if things had been set up that way from the beginning. But the really interesting question is what a power company would have to do if it wanted to switch an existing electricity grid over to a discriminatory model. Because the AT&Ts and Comcasts of the world wouldn’t be starting from scratch; they’d be changing an existing, open network.

I focus on an aspect of the debate that I’ve seen receive almost no attention elsewhere: assuming that network discrimination remains legal, how much power could network owners actually exert over the use of their networks? There’s an assumption on both sides that ownership of a network automatically translates into comprehensive control over how it’s used. But I have yet to see anyone give a plausible explanation of how a last-mile provider would get from the open network it has now to the tightly controlled one that network neutrality advocates fear. It’s always simply asserted, as Lessig does here, and then taken as a given for the rest of the argument. It’s a weakness in the pro-regulation argument that I wish more critics of regulation would highlight.

I go on to discuss the difficulties of real-world architectural transitions. Read the whole thing here.

Several state public utility commissioners are pleading with the Federal Communications Commission to preserve unnecessary, burdensome and anticompetitive accounting requirements that I have discussed here and here.

Sara Kyle, Tre Hargett and Ron Jones of the Tennessee Regulatory Authority say they review the data required of telephone companies, even if their review has little or nothing to do with the purpose for which the data was originally required:

This information is particularly useful in evaluating competition levels in Tennessee; further, such information may be necessary in fulfilling our Commission’s responsibilities should we decide that a state universal service fund is necessary.

The argument the FCC is essentially hearing is that without the data there would be less work for state regulators, which would diminish their power.

The state commissioners think they have a chance to persuade FCC commissioners Robert M. McDowell and Deborah Taylor Tate, along with one or both of the commission’s two Democrats, to reject the AT&T petition.

The question McDowell and Tate ought to be asking is whether it is the role of the feds to collect information primarily for the use of the states. The states can do that for themselves.


Recently I commented that the Federal Communications Commission has an opportunity to relieve AT&T of several unnecessary, burdensome and anticompetitive accounting requirements.

I noted that the data derived from the legacy accounting procedures simply isn’t used anymore to regulate revenue or set prices.  That’s true, by the way.

This week a group that calls itself the Ad Hoc Telecommunications Users Committee filed a letter (in which it didn’t identify its members) claiming:

As we explained at the debate, the data produced by the cost allocations at issue have been used by the Commission and private parties in the past (CALLS), are being used by the Commission and private parties in the present (272 Sunset Nonstructural Safeguards, Separations reform and the Special Access Rulemaking) and will in all likelihood be used by the Commission and private parties in the future (Special Access Rulemaking, Inter-Carrier Compensation Reform and monitoring the efficacy of the Price Caps formula).

What’s going on here?

Well, like I said, the commission doesn’t use the data to regulate revenue or set prices, but competitors apparently do use the data to argue that incumbent telephone companies can “afford” to charge lower wholesale prices.


Dr. Kourosh Dini is a Chicago-based adolescent and adult psychiatrist who has just published a new book entitled, Video Game Play and Addiction: A Guide for Parents. [You can learn more about him and his many talents and interests at his blog, “Mind, Music and Technology.”] Dini’s book arrives fresh on the heels of the fine book, “Grand Theft Childhood: The Surprising Truth About Violent Video Games and What Parents Can Do,” by Drs. Lawrence Kutner and Cheryl K. Olson. [See my review of that book here.]

Like Kutner and Olson’s book, Dini’s provides a refreshingly balanced and open-minded look at the impact of video games on our kids. One of the things I liked about it is how Dr. Dini tells us right up front that he has been a gamer his entire life and explains how that has helped him frame the issues he discusses in his book. “I have played games both online and off since I was about six years of age, and I have also been involved in child psychiatry, so I felt that I would be in a good position to discuss some inherent positives and negatives associated with playing games,” he says. Dini goes into greater detail about his gaming habits later in the book and makes it clear that he still enjoys games very much.

Some may find Dini’s gaming background less relevant than his academic credentials, but I think it is important, if for no other reason than it shows how more and more life-long gamers are attaining positions of prominence in various professions and writing about these issues with a sensible frame of reference that begins with their own personal experiences. For far too long now, nearly every book and article I have read about video games and their impact on society has at some point included a line like “I’ve never really played many games” or even “I don’t much care for video games,” and then–without missing a breath–the author or analyst goes on to tell us how eminently qualified they are to discuss the impact of video games on kids or culture. Whenever I read or hear things like that, I’m reminded of the famous line from an old TV commercial: “I’m not a doctor, but I play one on TV.” Seriously, why should we continue to listen to critics who denounce video games but have never picked up a controller in their lives? It’s really quite insulting. Would you take automotive advice from someone who has never tinkered with cars, but instead based their opinions merely on watching them pass by on the road? I think not.