This is an absolutely devastating review of Ubuntu:

In recent years Linux has suffered a major set-back following the shock revelations from SCO Group, whose software had been stolen wholesale and incorporated into illegal distributions of Linux. For the past five years, the overwhelming effort of Linux developers has gone into removing SCO’s intellectual property from the Linux operating system – but as the threat of litigation has subsided many users are asking once again if Linux is a serious contender? …if you object to this communism, tough luck: The so-called “Completely Fair Scheduler” is incorporated into the “kernel” (which is the Linux equivalent of the MS Dos layer in older versions of windows). This “feature” alone is enough to frighten serious users away from the upstart operating system… Windows users have no need of the “Completely Fair Scheduler” because we have modern scheduling software such as Microsoft Outlook (above). Outlook allows you to give your time to whoever you want, regardless of any socialist definitions of ‘fairness’.

I’ve traditionally been favorable toward Linux-based operating systems, but this puts them in a whole new light.

Last week a scad of stories from Reuters to News.com covered the growing push for a “Do Not Track” registry similar to the “Do Not Call” list that serves to protect US households from mid-dinner sales calls. While I understand the concerns expressed by folks like Marc Rotenberg of EPIC and Jeff Chester of the Center for Digital Democracy, who were both cited by Anne Broache in the News.com piece from last week, I think that asking the government to hold a master list of IPs and consumer names is a bad idea, or at least one that won’t do much to really protect consumers.

First, tracking people online is a bit different from calling folks in their homes. Telemarketing, while highly effective in terms of sales produced per dollar of marketing money spent, is still orders of magnitude more expensive than spamming or collecting data online without consent. Both of these activities are illegal today, but they still occur. They occur so often that spam-filtering technology contains some of the most advanced natural language recognition and parsing software ever created. Cory Doctorow has mused that the first artificial intelligences will emerge from spam and anti-spam computer arrays.

So this list wouldn’t be the magic wish that privacy advocates and legislators might dream it to be. It would cause law-abiding companies like Google, AOL, and Microsoft to stop collecting data, but so could privately developed and enforced systems.

Anne Broache notes that cookies are a bad solution for stopping data tracking because many anti-spyware programs delete them, since cookies are so often used for tracking in the first place. But why not just create a new variety of cookie? Call it a cake, a brownie, a cupcake–maybe even a muffin. Whatever you call it, just specify that a standards-compliant browser must provide a place for something cookie-like that opts consumers out of tracking schemes. This isn’t a technological problem at all; it’s just a matter of industry deciding to follow this course.
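To illustrate how simple the mechanics would be, here’s a minimal sketch of what honoring such an opt-out token might look like on the server side. The cookie name and the handler are hypothetical assumptions on my part, not any existing standard, and of course only law-abiding sites would bother running the check (which is exactly the limitation I describe below).

```python
# Hypothetical sketch: a server-side check for a standardized opt-out token.
# The cookie name "tracking_opt_out" and the handler shape are illustrative
# assumptions, not an existing standard.

from http.cookies import SimpleCookie


def should_track(cookie_header: str) -> bool:
    """Return False if the visitor carries the (hypothetical) opt-out cookie."""
    cookies = SimpleCookie()
    cookies.load(cookie_header or "")
    opt_out = cookies.get("tracking_opt_out")
    return not (opt_out and opt_out.value == "1")


def handle_request(cookie_header: str) -> dict:
    """Build response headers, attaching a tracking identifier only when allowed."""
    response_headers = {}
    if should_track(cookie_header):
        # Only law-abiding sites would honor this check, which is the
        # scheme's obvious limitation.
        response_headers["Set-Cookie"] = "visitor_id=abc123; Max-Age=31536000"
    return response_headers


if __name__ == "__main__":
    print(handle_request("tracking_opt_out=1"))   # {} -- no tracking cookie set
    print(handle_request("session=xyz"))          # tracking cookie is set
```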

My other concern is something that fellow TLFer and former CEI staffer James Gattuso pointed out in a 2003 piece regarding the “Do Not Call” list, namely that the government will likely exempt itself from the rules. In our post-9/11 world (whatever that means) we should expect government–the supposed protector of our rights–to make these sorts of moves. But you don’t have to trust my assertion; look no further than Declan McCullagh’s Wednesday post at News.com. The FBI is pushing hard for Internet companies to retain data so that the bureau can later sift through it. It’s doubtful that the government will place itself on a “Do Not Track” list if it believes it can gain useful intelligence by tracking people online.

So, by and large, this proposed registry seems unnecessary and ineffective. Industry can easily work out a way to allow consumers to opt-out and the two groups I’m most afraid of–the Russian Mob and the U.S. Government–won’t pay heed to any registry anyway.

Instead of wringing our hands over advertisers tracking what duvet covers we buy, can we turn our attention to what our freewheelin’ executive branch is trying to pull over on us? Seems to me they’re cooking up exemptions to more than just this registry–a few of my favorite Constitutional Amendments spring to mind.

I’ve been noticing recently that wi-fi connections are flakier than they used to be. It seems to me that from about 2001 to 2005, it was almost unheard-of for my home wi-fi connection to suddenly drop out on me. In the last year or two, it has seemed like this is an increasingly common occurrence. For the last half hour or so, my Internet connection has been going out for 5 or 10 seconds at a time every few minutes. It’s not a huge problem, but it happens just often enough to be pretty annoying.

I can think of a number of possible explanations for this. One might be that my current laptop, a MacBook I bought about a year ago, might have a lower-quality wireless card. Another might be that I’m using wi-fi in more places where it might be hard to get good coverage. Or maybe I’m imagining things.

But it also seems possible that we’re starting to experience a tragedy of the wi-fi commons. I seem to recall (and Wikipedia confirms) that 2.4 GHz wi-fi effectively has only three non-overlapping “channels” to choose from, and that the wi-fi protocol isn’t especially well-designed to deal with multiple networks using the same channel in close proximity. It has now become commonplace for me to whip out my laptop in an urban setting and see a dozen or more wi-fi networks, which suggests there’s got to be some serious contention going on for those channels.
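For a rough sense of the arithmetic, here’s a toy sketch, with completely made-up scan results, of how a dozen nearby networks pile onto the three usable 2.4 GHz channels:

```python
# Back-of-the-envelope sketch of 2.4 GHz contention. The scan results below
# are made up; in practice you'd pull them from your OS's wireless tools.

from collections import Counter

# 2.4 GHz channels overlap heavily; only 1, 6, and 11 are effectively
# non-overlapping in the US, so nearby networks get lumped into one of
# those three "lanes" (or straddle two of them).
NON_OVERLAPPING = (1, 6, 11)

observed_networks = {            # SSID -> channel (hypothetical scan)
    "linksys": 6, "NETGEAR": 6, "default": 1, "2WIRE123": 6,
    "coffee_shop": 11, "apt4b": 1, "dd-wrt": 6, "belkin54g": 11,
    "home": 6, "guest": 1, "attwifi": 11, "printer": 6,
}

def nearest_lane(channel: int) -> int:
    """Map any 2.4 GHz channel to the closest non-overlapping channel."""
    return min(NON_OVERLAPPING, key=lambda lane: abs(lane - channel))

contention = Counter(nearest_lane(ch) for ch in observed_networks.values())
for lane in NON_OVERLAPPING:
    print(f"channel {lane:2d}: {contention[lane]} networks sharing it")
# With a dozen visible networks and only three usable lanes, at least one
# channel inevitably ends up with four or more networks contending for airtime.
```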

If I’m right (and I might be wildly off base), I’m not sure where the analysis goes from there, either from a technical perspective or a policy one. One knee-jerk libertarian answer is to suggest that this is an argument against allocating a lot of spectrum to be used as a commons, because it tends to be over-used and there’s no one in a position to resolve this kind of contention. On the other hand, maybe people are working on better protocols for negotiating this kind of contention and achieving a fair sharing of bandwidth without these problems. Or perhaps—at least for wi-fi—it would be possible to allocate enough spectrum that there’d be plenty to go around even in dense urban areas.

Dingel points to a paper (non-paywalled draft here) exploring the historical connection between the free trade movement and the movement for worldwide copyright harmonization:

Free traders failed repeatedly for sixty years after the end of the Civil War to reduce the average tariff to its immediate prewar level. They failed despite making a case that, by comparison with the one made for free trade today, was compelling. Specifically, the principles of free labor engendered an antimonopoly argument for trade. Free trade, its advocates argued, would eliminate the special privileges granted to producers in specific industries, most notably cotton goods, iron, and steel. It would promote competition, lower prices, and raise consumers’ real incomes… Carey attempted to turn the tables on the free traders: he argued that free trade promoted monopoly, and protection mitigated it. His conviction was sincere—but that particular part of his argument was unpersuasive, and relatively few of his followers bothered to repeat it. He was much more persuasive in arguing that international copyright promoted monopoly. In the face of the latter argument, the proponents of free trade and international copyright were put on the defensive… One wonders whether the tireless advocacy of international copyright by free traders like Bryant—who framed the cause as one inextricably related to free trade—hindered the advancement of their principal cause. The long-awaited sweeping tariff reductions were deferred until 1913. Might the wait have been shorter if the antimonopoly credentials of the free-trade advocates had not been called into question?

This is a fascinating question. One of the things I find really interesting about the 19th century political debate is that the opposing political coalitions were more sensibly aligned, perhaps because people had a slightly clearer sense of what was at stake. My impression (which may be wrong in its details) is that the free traders tended to be liberals and economic populists. They clearly understood that protectionism brought about a transfer of wealth from relatively poor consumers to relatively wealthy business interests. The opposing coalition was made up of business interests and xenophobes making fundamentally mercantilist arguments about economic nationalism.

Today’s free trade debate is much weirder, because there are enough businesses who want to export things that significant parts of the business community are for freer trade. On the other hand, the liberals who fancy themselves defenders of relatively poor consumers find themselves in bed with predatory industries like sugar and steel that have been using trade barriers to gouge consumers. And the “trade” debate has increasingly come to be focused on issues that don’t actually have much to do with trade, whether it’s labor and environmental “standards,” copyright and patent requirements, worker retraining programs, cross-border subsidies, etc.

I suspect part of what’s happening is that in the United States, at least, consumers are so rich that they really don’t notice the remaining costs of protectionism. A T-shirt at Target might cost $10 instead of the $8 it would cost if there were no trade barriers with China, but this is such a tiny fraction of the average American’s budget that they don’t really care. Likewise, if the domestic price of rice or flour were to double, a significant number of Americans wouldn’t even notice. In contrast, in the 19th century, we were still poor enough that a 10 or 20 percent increase in the price of basic staples might be the difference between being able to afford meat once a week and having to skip meals once in a while to make ends meet. We may now be rich enough that we can afford to be politically clueless.

Yesterday bills were introduced in the House (PDF) and the Senate (PDF) addressing the orphan works copyright issue about which I’ve written many times before. Alex Curtis has a great write-up of the bills over at the Public Knowledge blog.

An orphan work is a work still under copyright whose owner cannot be located, which means a potential re-user has no one to ask for permission to use or license the work. If you can’t find the owner, even after an exhaustive search, and use the work anyway, you risk the possibility that the owner will later come forward, sue you, and claim statutory damages of up to $150,000 per infringing use.

Both bills are largely based on the Copyright Office’s recommendations and not the unworkable Lessig proposal that had been previously introduced as the Public Domain Enhancement Act by Rep. Zoe Lofgren. The bills limit the remedies available to a copyright owner if an infringing party can show that they diligently searched for the owner before they used the work. (What constitutes a diligent search is specifically defined, which should address the concerns about the Smith bill expressed by visual and stock artists.)

Rather than statutory damages, the owner would simply be owed reasonable compensation for the infringing use—that is, what the infringer would have paid for the use if they had been able to negotiate. I think this is a fine solution because it gives all copyright holders an incentive to keep their registrations current and their works marked to the best of their abilities (i.e., what old-time formalities used to accomplish). I’m also happy to see that injunctive relief is limited.

Like the Smith bill, both of these new bills direct the Copyright Office to complete a study and produce a report on copyright small claims. There are many instances of copyright infringement that are too small to be litigated in federal district court—like a website that uses a copyrighted photo of mine that it got off Flickr. Professional photographers and other visual artists face this all the time, and there should be a way to address their concerns. One idea is to create a copyright small claims court, and it’s something I’d love to research and contribute to a Copyright Office proceeding. So if Congress has been thinking about this for a few years, what’s stopping the Copyright Office from taking on the project sua sponte?

Anyhow, stay tuned as these bills wind their way through committee and the IP maximalists are engaged.

Check out my write-up of the State Secrets Protection Act, which is Ted Kennedy’s answer to the Bush administration’s habit of answering every lawsuit with “we can’t litigate about that because it’s a state secret. We can’t tell you why it’s a state secret because that’s a state secret too.” It would create some clear ground rules regarding when the state secrets privilege can be invoked and how judges should deal with such assertions. I haven’t given this a great deal of thought, but from a quick read-through of the bill, it seems like a pretty worthwhile approach.

Salon’s technology writer Farhad Manjoo has some sensible comments about the hullabaloo we’re already hearing about the forthcoming “Grand Theft Auto 4”:

When I watched the game, I caught one sequence that would seem sure to prompt outrage — your character gets falling-down drunk and can, if he wants, steal and then drive a car. The scene is undeniably fun and funny. Admittedly, the humor is low-brow, more in the tradition of “Jackass” than of Oscar Wilde, but it’s still fun; like much else in the game, it’s the thrill of discovery, the sense of, “Whoa, I can’t believe I can do that!” Of course, that’ll be exactly the sentiment of the game’s detractors: Can you believe they’re letting children do that?! This has to be illegal! Well, actually, nobody is letting kids play this game. It’s rated M, which means it’s for sale to people 17 or older. Kids will still get it, of course, just like they also get hold of R-rated movies and all kinds of perversities on the Web. But nobody — at least nobody sane — calls for movie houses to refuse to play R-rated movies just because kids might sneak in. It’s hard to see why the policy should be any different with video games.

That’s exactly right. Moreover, as I have pointed out countless times before, parents have more and better tools to control video game consumption by their children than for any other form of media. And that’s especially the case considering the cost of video games! When a game costs 60 bucks a pop, you gotta wonder how the kids are getting their hands on it. Are the parents just stuffing their kids’ pants full of cash and saying, “OK, Johnny, you go buy whatever you want now”? If so, they have only themselves to blame for failing to effectively use the ‘power of the purse‘ to their advantage.

Finally, let’s not forget that gritty, M-rated games like “Grand Theft Auto” are the exception to the rule, as I have proven here.

More Broadband Progress

Comcast recently unveiled a 50 Mbps broadband connection for $150/month, and has promised to have it available nationwide (and will most likely bring the price down somewhat) by 2010. Verizon is in the process of rolling out a new fiber infrastructure that will allow it to offer a similar deal (you can also get an 8 Mbps connection for $50/month). All of this makes Qwest look like a relative laggard, with its announcement that it will soon be offering 20 Mbps service for $100/month, and 12 Mbps for $50/month. And AT&T brings up the rear with plans for “only” 10 Mbps service.
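For what it’s worth, here’s the rough price-per-megabit arithmetic implied by those list prices (promotional pricing will obviously vary, and AT&T hasn’t announced a price I can plug in):

```python
# Rough price-per-Mbps comparison of the offerings mentioned above.
# Prices are the quoted list prices; actual promotional rates will differ.

offers = [
    ("Comcast", 50, 150),
    ("Verizon FiOS", 8, 50),
    ("Qwest", 20, 100),
    ("Qwest", 12, 50),
]

for carrier, mbps, dollars in offers:
    print(f"{carrier:12s} {mbps:3d} Mbps at ${dollars:3d}/mo "
          f"= ${dollars / mbps:.2f} per Mbps")
```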

One sometimes sees hand-wringing about the anemic state of the American broadband market. To the extent that other countries are doing even better (a debatable contention), we should certainly be looking for ways to make the broadband market more competitive. No doubt things would be progressing even faster if there were more players in the market. But the strong claim you sometimes see, that there’s something deeply dysfunctional about the progress of the US broadband industry, is positively silly.

If Comcast and Verizon deliver on their promises to roll out 50 Mbps service by the end of the decade, and if prices follow their historical pattern of dropping over time, consumers in their service footprints will have seen the average speed of a $50 Internet connection increase by three orders of magnitude in about 15 years. And it’s been a reasonably steady pace of growth: in round numbers, $50 would buy you a 56 kbps connection in 1998, a 512 kbps connection in 2001, a 5 Mbps connection in 2006, and (I hope) a 50 Mbps connection sometime early next decade. Things have consistently improved by a factor of 10 every 4-6 years.
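If you want to check my arithmetic, here’s the quick calculation using those round-number data points (with the hoped-for 50 Mbps endpoint penciled in at 2010):

```python
# Quick check on the "factor of 10 every 4-6 years" claim, using the
# rough data points in the paragraph above (speed a flat $50/month buys).

import math

data_points = [          # (year, kbps for ~$50/month)
    (1998, 56),
    (2001, 512),
    (2006, 5_000),
    (2010, 50_000),      # hoped-for 50 Mbps early next decade
]

(start_year, start_kbps), (end_year, end_kbps) = data_points[0], data_points[-1]
years = end_year - start_year
orders_of_magnitude = math.log10(end_kbps / start_kbps)

print(f"{end_kbps / start_kbps:.0f}x improvement over {years} years")
print(f"= {orders_of_magnitude:.1f} orders of magnitude")
print(f"= a factor of 10 roughly every {years / orders_of_magnitude:.1f} years")
```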

It’s interesting to look back at the broadband argument we were having a couple of years ago. In mid-2006, TLF reader Luis Villa was reporting that he and his folks were still seeing typical broadband speeds of 2 Mbps. Maybe he can chime in to let us know if that number has gone up at all. Personally, my apartment in St. Louis gets around 5 Mbps for about $30/month, and the house where I’m currently staying in DC has a 17 Mbps Comcast connection.

One of the things that makes it hard to judge is that broadband speeds and prices don’t tend to grow continuously and evenly around the country. Rather, carriers take turns leap-frogging one another, with each upgrade accompanied by a temporary price increase. So it can be tricky to judge the average rate of improvement by looking at just one market, because one market may seem to stagnate for several years at a time. But if one looks at the country as a whole, and focuses on time horizons closer to a decade, I think it’s undeniable that things are improving at a fairly rapid pace.

Don Marti offers a good example of the point of my last post:

The Smoot-Hawley Tariff was law. Many of the “code is law” design choices are similar: they can cause big effects without necessarily implementing the lawmaker’s intent.

I couldn’t have put it better myself. And this brings to mind another important point about “law” that I think is missed in Lessig’s “code is law” formulation: the real “law” Lessig is analogizing computer code to often isn’t exactly omnipotent. The War on Drugs is law, but there’s still lots of drug use going on. Strict gun control is law in some big cities, but there’s still plenty of gun violence. The McCain-Feingold soft money ban is law, but there’s still plenty of corruption in Washington. The copyright system is law, but there’s still plenty of illicit file sharing going on. And in all of these cases, the law has had significant effects that were very different from its authors’ intent.

And indeed, if we’re going to analogize ISPs to legislators, it’s important to remember that legislators have several key advantages over ISPs. In the first place, actual legislators can have you thrown in jail, whereas the worst an ISP can do to you is disconnect your service. Second, governments have effectively unlimited sums of money to waste on futile efforts like the drug war. In contrast, ISPs are profit-seeking institutions, so there are limits on the amount of idiocy they can undertake before financial considerations force them to stop. Finally, the law has at least some moral suasion behind it; the fact that something is illegal at least causes people to mostly hide the fact that they’re doing it. In contrast, people evading the “law” of an ISP would be able to claim the moral high ground.

Yet despite all these advantages, our government hasn’t come close to stopping people from using drugs, sharing files, hiring prostitutes, owning guns, corrupting public officials, and so forth. Not because the laws for those purposes were badly designed, or because the people implementing them haven’t tried hard enough, but because there are fundamental limits to government’s ability to control the behavior of citizens in a free society.

I think something similar is true of a large-scale network like the Internet. Incompetent network owners, as the “governments” of their corners of the Internet, can certainly cause a lot of havoc, just like real governments. But their ability to control what their customers do is constrained for many of the same reasons that governments’ ability to stop drug use or file sharing is constrained. And that means that network owners that try to restrict their customers’ use of their network are as likely to shoot themselves in the foot as they are to enhance their bottom lines.

Writing my critique of Larry Lessig’s Stanford lecture brought to mind an important ambiguity in Lessig’s oft-repeated slogan that code is law. I think the phrase actually has two distinct meanings that are often conflated but actually have quite different implications. One is, I think, indisputably true, whereas the other one is at best greatly overstated.

It’s certainly true that code is law in the sense that technological architectures place important constraints on the actions of their users. A TCP/IP network allows me to do different things than Verizon’s mobile network or the global ATM network. Wikipedia is the way it is largely because the software on which it runs constrains users in certain ways and empowers them in others. CALEA likely has given law enforcement agencies surveillance powers they couldn’t have had otherwise. To the extent that this was Lessig’s point, he’s obviously right.

However, when Lessig says “code is law,” he often seems to be making a significantly stronger claim about the power of code as law: not simply that code constrains us in some ways, but that the authors of code have a great deal of control over the exact nature of the constraints the technology they build will place on people, so that virtually any outcome the code designer wants, within reason, can be achieved by a sufficiently clever and determined programmer.

This stronger formulation strikes me as obviously wrong, for at least two reasons. First, the set of tools available to the code-writer is often rather limited. For one thing, barring major breakthroughs in AI technology, many concepts and categories that are common sense to human beings cannot easily be translated into code. Rules like “block messages critical of President Bush” or “don’t run applications that undermine our business model” can’t easily be translated into hardware or software. There are a variety of heuristics that can be used to approximate these results, but they’re almost always going to be possible for human beings to circumvent.
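To make that concrete, here’s a toy example of the kind of keyword heuristic a network owner might actually be able to write for a rule like “block messages critical of President Bush,” and how easily it fails in both directions. It’s entirely made up, not a description of any real filtering system:

```python
# Toy illustration of the gap between a policy ("block messages critical of
# the president") and the heuristics code can actually express. Entirely
# made up -- not a description of any real filtering system.

BLOCKLIST = {"bush", "impeach", "warrantless"}

def crude_filter(message: str) -> bool:
    """Return True if the message should be blocked (naive keyword match)."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & BLOCKLIST)

messages = [
    "Impeach Bush now!",                              # caught
    "1mpeach Bu$h now!",                              # trivial misspelling slips through
    "The current occupant of 1600 Penn. has to go",   # no keywords at all
    "I love president Bush",                          # blocked despite being supportive
]

for m in messages:
    print(f"{'BLOCKED' if crude_filter(m) else 'allowed':7s}  {m}")
```

The filter both misses obvious evasions and blocks speech it was never meant to touch, which is the general fate of trying to encode a human-level rule in software.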

The deeper challenge is a Hayekian point about spontaneous order: with a sufficiently complex and powerful technological platform, it often will not be possible to even predict, much less control, how the technology will be used in practice. Complex technologies often exhibit emergent properties, with the whole exhibiting behaviors that “emerge” from the complex interplay of much simpler constituent parts. It would have been hard for anyone to predict, for example, that the simple rules of wiki software could form the basis for a million-entry encyclopedia. Indeed, it’s pretty clear that Wikipedia was possible only because the site’s creators gave up any semblance of centralized control and allowed spontaneous order to work its magic.

A similar point applies to the Internet, and to the network owners that nominally control it. They certainly have some levers they can push to change certain micro-level characteristics of their networks, just as Jimmy Wales could make changes to the code of Wikipedia. But there’s no reason to think that either Wikipedia or Comcast would have any way to either predict or control what effects any given micro-level change to their platforms would have on the macro-level behavior of the whole system. They’re both large, complex systems whose millions of participants are each pursuing their own idiosyncratic objectives, and they’re likely to react and interact in surprising ways.

Now, the fact that Comcast can’t predict or control what effects its tinkering with its network might have does not, of course, mean that it won’t try. But it does mean that we should be skeptical of just-so stories about telcos turning the Internet into a well-manicured walled garden. When you create complex, open systems, they tend to take on a life of their own, and once you’ve ceded centralized authority, it can be very difficult to take back. Code may be law, but I think it’s a much more limited kind of law than a lot of people seem to think.