Google’s Chief Internet Evangelist Vint Cerf, one of the fathers of the Net, has a very thoughtful post up on the Google Public Policy Blog today asking “What’s a Reasonable Approach for Managing Broadband Networks?” He runs through a variety of theoretical approaches to network load management. There’s much there to ponder, but I just wanted to comment briefly on the very last thing he says in the piece:

Over the past few months, I have been talking with engineers at Comcast about some of these network management issues. I’ve been pleased so far with the tone and substance of these conversations, which have helped me to better understand the underlying motivation and rationale for the network management decisions facing Comcast, and the unique characteristics of cable broadband architecture. And as we said a few weeks ago, their commitment to a protocol-agnostic approach to network management is a step in the right direction.

I found this of great interest because for the last few months I have been wondering: (a) why isn’t there more of that sort of inter- and intra-industry dialogue going on, and (b) what could be done to encourage more of it? With the exception of those folks at the extreme fringe of the Net neutrality movement, most rational people involved in this debate accept the fact that there will be legitimate network management issues that industry must deal with from time to time. So, how can we get people in industry — from all quarters of it — to sit down at a negotiating table and hammer things out voluntarily before calling in the regulators to impose ham-handed, inflexible solutions? What we are talking about here is the need for a technical dispute resolution process that doesn’t involve the FCC.

There are apparently people who believe that it’s some kind of technological faux pas to type a website’s URL into the search bar. As Joe Weisenthal points out, this is complete nonsense. There are a number of good reasons to use the search bar even if you have a pretty good idea of a site’s URL.

Beyond the specific reasons Joe gives, there’s a more fundamental issue of cognitive economy. URLs have to be exact, and so remembering them takes a non-trivial amount of cognitive effort. If I want to remember the Institute for Humane Studies website, I have to remember that it’s theIHS, and that it’s a .org rather than a .com or a .net. But if I type “IHS” into Google, the Institute for Humane Studies is the third search result. If I type something a little more descriptive, like humane studies, it comes up as the first result. Search terms don’t have to be exact, and so they tend to be much easier to remember: type something in the general vicinity of what you’re looking for, and Google will find it for you.

The point isn’t that I couldn’t remember theihs.org. Rather, it’s that remembering the URLs of all the websites you visit is a waste of cognitive energy in exactly the same way that it would be a waste to remember IP addresses rather than domain names. Technically speaking, the IP address lookup would be faster, but the difference is so trivial that it’s swamped by the fact that the human brain isn’t as good at remembering 32-bit numbers as it is at remembering well-chosen domain names. By the same token, even if the search bar isn’t the “right” place to put URLs, it will, in practice and on average, be the quickest way for actual human beings to get to the sites they’re looking for.
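The IP-address analogy can be made concrete. Here’s a minimal Python sketch (purely illustrative; the dotted-quad address is an arbitrary valid example, not any particular site’s) showing that an IPv4 address really is just a 32-bit integer in human-unfriendly clothing:

```python
import socket
import struct

def ip_to_int(dotted: str) -> int:
    """Unpack a dotted-quad IPv4 address into the raw 32-bit integer it encodes."""
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

# A domain name is a memorable alias for a number nobody wants to memorize:
print(ip_to_int("93.184.216.34"))  # -> 1572395042
```

DNS exists precisely so people can trade an exact, brittle number for a memorable name; the search bar simply extends the same trade one level further, from exact names to approximate descriptions.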

This is an example of a general attitudinal problem that’s distressingly common among geeks. Geeks have a tendency to over-value the lower-level layers of the technology stack, based on the misguided belief that higher-level technologies are unnecessarily wasteful. Many geeks’ preference for text over graphics, command lines over GUIs, text editors over word processors, and so forth too often seems to be motivated by this kind of false economy. (To be clear, I’m not claiming that there aren’t good reasons for preferring command lines, text editors, etc., just that this particular reason is bogus.) What they miss is that human time and attention are almost always more scarce than the trivial amount of computing power they’re conserving by using the less complex technology. The two seconds it takes me to remember a website’s URL are worth a lot more than the tenth of a second it takes Google to respond to a search query.

Anyone interested in the long-running debate over how to balance online privacy with anonymity and free speech, whether Section 230’s broad immunity for Internet intermediaries should be revised, and whether we need new privacy legislation must read the important and enthralling NYT Magazine piece “The Trolls Among Us” by Mattathias Schwartz about the very real problem of Internet “trolls”–a term dating to the 1980s and defined as “someone who intentionally disrupts online communities.”

While all trolls “do it for the lulz” (“for kicks” in Web-speak), they range from the merely puckish to the truly “malwebolent.”  For some, trolling is essentially senseless web-harassment or “violence” (e.g., griefers), while for others it is intended to make a narrow point or even to advance a broader movement.  These purposeful trolls might be thought of as the Yippies of the Internet, whose generally harmless anti-war counter-cultural antics in the late 1960s were the subject of the star-crossed Vice President Spiro T. Agnew’s witticism:

And if the hippies and the yippies and the disrupters of the systems that Washington and Lincoln as presidents brought forth in this country will shut up and work within our free system of government, I will lower my voice.

But the more extreme of these “disrupters of systems” might also be compared to the plainly terroristic Weathermen or even the more familiar Al-Qaeda.  While Schwartz himself does not explicitly draw such comparisons, the scenario he paints of human cruelty is truly nightmarish:  After reading his article before heading to bed last night, I myself had Kafka-esque dreams about complete strangers invading my own privacy for no intelligible reason.  So I can certainly appreciate how terrifying Schwartz’s story will be to many readers, especially those less familiar with the Internet or simply less comfortable with the increasing readiness of so many younger Internet users to broadcast their lives online.

But Schwartz leaves unanswered two important questions.  The first he does not even ask:  Just how widespread is trolling? However real and tragic for its victims, without some sense of the scale of the problem it is difficult to answer the second question, which Schwartz raises but, wisely, does not presume to answer:  What should be done about it? The policy implications of Schwartz’s article might be summed up as follows:  Do we need new laws, or should we focus on some combination of enforcing existing laws, user education, and technological solutions?  While Schwartz focuses on trolling, the same questions can be asked about other forms of malwebolence–best exemplified by the high-profile Autoadmit.com online defamation case, which demonstrates the effectiveness of existing legal tools in dealing with such problems.


Before leaving for its August recess last week, Congress saw the introduction of its 10,000th bill. Meanwhile, not a single one of the twelve annual bills that direct the government’s spending priorities in 2009 has passed the Senate and only one has passed the House. Congress is neglecting its basic responsibility to manage the federal government, and is instead churning out new legislation about everything under the sun.

What does Congress occupy itself with? A commemorative postage stamp on the subject of inflammatory bowel disease. Improbable claims of health care for all Americans. And, of course, bringing home pork. Read about it on the WashingtonWatch.com blog.

I too am sad to see William Patry hanging up his spurs. I can sympathize with a lot of what he says. I too consider myself a copyright centrist and a defender of copyright’s traditions, and so I find it frustrating to be forced by recent trends to be constantly on the “anti-copyright” side of every argument. However, I don’t share Patry’s depression regarding recent trends in the copyright world, because while the legislative developments of the last 30 years have been an unbroken string of disasters, most other aspects of the copyright system have actually gone pretty well.

One ray of light is the courts, which continue to get more right than they get wrong. The courts have, for example, tended to uphold the first sale doctrine and fair use against concerted challenges from the copyright industries. Had Congress not passed the 1976 Copyright Act, the NET Act in 1997, and the DMCA and CTEA in 1998, my sense is that we’d actually have a pretty balanced copyright system. This suggests to me that restoring copyright sanity wouldn’t actually be that hard, if Congress were ever inclined to do so. To a large extent, it would simply have to repeal the bad legislation enacted during the 1990s.

I can think of two reasons my outlook might be more optimistic than Patry’s. One is that I’m younger than he is. I graduated from high school in 1998, which was almost certainly the low point for copyright policy on the Hill. While advocates of balanced copyright haven’t won any major legislative victories since then, they have blocked most of the bad ideas that have come down the pike. We killed Fritz Hollings’ godawful SSSCA, the broadcast flag, “analog hole” legislation, and so forth. Given the lopsided advantages of the copyright maximalists in terms of funding and lobbying muscle, holding our own isn’t bad.

I think another reason I might be less inclined to get depressed than Patry is that I’m not a copyright lawyer. One of the most important trends of the last couple of decades is a steady divergence between the letter of copyright law and people’s actual practice. Even as copyright law has gotten more draconian, it has also grown less powerful. More and more people are simply ignoring copyright law and doing as they please. A few of them get caught and face draconian penalties, but the vast majority ignore the law without any real consequences.

I imagine it is depressing for a copyright lawyer to see an ever-growing chasm between the letter of the law and people’s actual behavior. The copyright lobby’s extremism is steadily making copyright law less relevant and pushing more and more people to simply ignore it. That’s depressing for someone who loves copyright law, but I’m not sure it’s so terrible for the rest of us. I would, of course, prefer to have a reasonable set of copyright laws that most people would respect and obey. But I’m not sure it’s such a terrible thing when people react to unreasonable laws by ignoring them. Eventually, Congress will notice that there’s little correspondence between what people are doing and what the law says they ought to be doing, and they’ll change the laws accordingly. I’d prefer that happen sooner rather than later, but I have little doubt that it will happen, and I’m not going to lose sleep over it in the interim.

A couple of years ago I plugged Jerry Brito’s spectrum commons paper. What I said in that post is still true: it’s a great paper that highlights the central challenge of the commons approach. Specifically, a commons will typically require a controller, that controller will almost always be the government, and there’s therefore a danger of re-introducing all the maladies that have traditionally afflicted command-and-control regulation of spectrum.

I’m re-reading the paper after having read the FCC’s spectrum task force report, and while I still agree with the general thrust of Jerry’s paper, I think he overstates his case in a few places. In particular:

Only if spectrum is first allocated for flexible use, with few if any conditions on its use, can a commons or a property rights regime help overcome the inefficiencies of command-and-control spectrum management. For example, if spectrum is allocated for flexible use, a property rights regime will allow the owner of spectrum to put it to the most valuable use or sell it to someone who will. Similarly, if there are no restrictions on use, a commons will allow anyone to use the spectrum however she sees fit, thus overcoming command-and-control misallocation.

However, while title to spectrum could theoretically be auctioned off in fee simple with no strings attached, a government-created and -managed commons will always have its usage rules set through a command-and-control process. Users of a government commons might not be explicitly restricted in the applications they can deploy over the spectrum, but they will have to comply with the sharing rules that govern the commons. Sharing rules, which will be established through regulation, will in turn limit the types and number of applications that can be deployed.

I think the difficulty here is that just as Benkler and Lessig over-idealize the commons by ignoring the inevitable role for government in setting standards, so this over-idealizes the spectrum property regime. It’s not true that spectrum “could theoretically be auctioned off in fee simple with no strings attached.” The key thing to remember here is that electromagnetic waves don’t respect boundaries established by the legal system. There will always be a need for technical rules to prevent interference between adjacent rights holders. If you hold a spectrum right in a geographic territory adjacent to mine, the government is going to have to have some rules about how much of your transmissions can “leak” onto my property before it counts as a trespass.

I regret to report the end of William F. Patry’s Copyright Blog. Patry, author of a superb multi-volume treatise on copyright law and Google’s Senior Copyright Counsel, not only offered a feast of news and commentary for copyright geeks; he offered it up in style. Consider this, among the many sound reasons he cites for ending his blog:

Copyright law has abandoned its reason for being: to encourage learning and the creation of new works. Instead, its principal functions now are to preserve existing failed business models, to suppress new business models and technologies, and to obtain, if possible, enormous windfall profits from activity that not only causes no harm, but which is beneficial to copyright owners. Like Humpty-Dumpty, the copyright law we used to know can never be put back together again: multilateral and trade agreements have ensured that, and quite deliberately.

In short, Patry found blogging about copyright simply too depressing to keep up. I certainly understand that feeling, though I find righteous indignation a fair remedy for weary sadness. At any rate, I thank Patry for his long and selfless blogging, wish him happier diversions, and look forward to the day when we can discuss copyright’s reformation with smiling pride.

[Crossposted at Agoraphilia and Technology Liberation Front.]

As expected, the FCC has chosen Comcast as the target of its biggest net neutrality enforcement action to date.  I wonder whether the FCC has actually chosen a good set of facts to serve as the foundation for what may possibly be a broad new precedent (we won’t know how broad until the commission publishes the order), considering that the commission will likely be forced to defend it in court.  Like it or not, FCC decisions are required to have a “rational basis.”

FCC Chairman Kevin Martin suggests Comcast acted atrociously:

While Comcast claimed its intent was to manage congestion, the evidence told a different story:

  • Contrary to Comcast’s claims, they blocked customers who were using very little bandwidth simply because they were using a disfavored application;
  • Contrary to Comcast’s claims, they did not affect customers using an extraordinary amount of bandwidth even during periods of peak network congestion as long as he wasn’t using a disfavored application; 
  • Contrary to Comcast’s claims, they delayed and blocked customers using a disfavored application even when there was no network congestion;
  • Contrary to Comcast’s claims, the activity extended to regions much larger than where it claimed congestion occurred.

In short, they were not simply managing their network; they had arbitrarily picked an application and blocked their subscribers’ access to it.

Yet Commissioner Robert McDowell seems to claim that the evidence is insubstantial:

The truth is, the FCC does not know what Comcast did or did not do. The evidence in the record is thin and conflicting.  All we have to rely on are the apparently unsigned declarations of three individuals representing the complainant’s view, some press reports, and the conflicting declaration of a Comcast employee. The rest of the record consists purely of differing opinions and conjecture. [footnote omitted]


WASHINGTON, August 1 – The Federal Communications Commission’s enforcement action against Comcast can be seen either as a limited response to a company’s deceptive practices or as a sweeping new venture by the agency into regulating internet policy.

In ruling against Comcast on Friday, the agency ordered the company to “disclose the details of its discriminatory network management practices,” “submit a compliance plan” to end those practices by year-end, and “disclose to customers and the [FCC] the network management practices that will replace current practices.”

At issue in the decision was whether Comcast had engaged in “reasonable network management” practices when it delayed and effectively blocked access to users of BitTorrent, a peer-to-peer software program.

Although BitTorrent had already settled its complaints with Comcast, FCC Chairman Kevin Martin said that FCC action was necessary because the complaint had been brought by Free Press and Public Knowledge, two non-profit groups. The FCC did not impose a fine.

Martin said that he viewed the agency’s decision to punish the cable operator as a quasi-judicial matter: a “fact-intensive inquiry” against a specific company that it found to have “selectively block[ed]” peer-to-peer traffic.

[Continue reading “FCC Hammers Comcast For Deception and Unreasonable Internet Management“]

[A guest post from Tim Wu]

Well, it’s always fun to have two people you respect read your work, and such is the case with Tim and Adam, though to be honest I probably enjoyed Tim’s analysis a little more.

Adam’s reaction is too strong, and doesn’t really get at the main points in the op-ed. The main point was this: that bandwidth has become an essential input in an economy that depends heavily on moving information. For that reason we must gain a sensitivity to the issues of supply and demand surrounding it. If anyone disagrees with that, I’d love to hear why.

I use the comparison to gas and energy because we all know that when gas prices go up or down, large parts of the economy are affected, from tourism through, say, bowling alleys. What I am saying is that bandwidth may have a similar nature: if prices are high, it affects all of the information-related markets in interesting ways, from startup video services through Google. It is still early in the age of the internet economy, so this may be less obvious at this point.

If you agree with this, you must care about industry structure and government’s role in suppressing or helping competition in that market.

Meanwhile, while the OPEC example may be a tad dramatic, harping on the fact that OPEC is comprised of nation-states, as opposed to firms, is a mistake. From an economic perspective, why do we care whether it is a worldwide private conspiracy setting prices or a conspiracy of nation-states? The effect on prices is the same whether it’s four firms setting food prices (as in the 1990s Archer Daniels Midland price-fixing cases) or four foreign governments. It is harder to stop the governments, because they rarely respond to lawsuits, but the economic consequences, so long as the price-fixing conspiracy lasts, are no different.

A point made in the comments is also true: telecom tends to be in the realm of state-supported or regulated monopoly, so there is some confusion as to whether what we are talking about are really private actors in a pure sense. This is a point Hayek made quite well. If government helps create a monopoly, as it has in the cable and telephone markets, then being concerned about the consequences of that monopoly makes much sense.

Finally, I take much less issue with Tim Lee’s post. I’d just like to point out that I am also an advocate of greater propertization as well as more dedication to the commons; it’s the stuff in the middle I don’t care for. For example, as Tim knows, I would like to see the development of ways for people to own their own fiber connections (“Homes with Tails”). I also believe that, in broad spectrum reform, there should be more propertization of the airwaves. The only silly position, it seems to me, is to maintain on principle that either a commons or private property is of no use whatsoever.