October 2007

I felt like I was reading a story from the future when I read this lead from a news article about a Microsoft executive making the case that desktop software is still relevant:

A top Microsoft executive defended desktop application software, the
source of the company’s revenue for three decades, arguing on Tuesday
that even services-based companies such as Google still need it.

But then I just realized that I’m old, and time and the competitive software marketplace have moved quickly over the past few years.

Nevertheless, I’m so intrigued by all the new business models that are vying for both business customers and consumers like you and me that I’m currently writing a paper on them. My public policy bent is nuanced, but relevant: do new business models (not a single, specific technology, but a particular way of doing business, such as licensing, services, or ad-based models) need a regulatory helping hand to compete? I’m talking about interoperability mandates, spectrum auction rules, standards…you get the drift. Of course, I’m going to have to say that even if you can think of a reason for antitrust regulation, FCC intervention, etc., there are countervailing reasons against government regulation that are likely more compelling. Back to paper writing….

Threat Level offers some safety tips for laptop users accessing public hotspots:

“The most dangerous places to connect are airports, hotels, convention centers,” says Richard Rushing, Chief Security Officer for AirDefense, which does wireless security. “And most people use credit cards there.”

Oops. I am hooking up to the San Diego Convention Center’s wireless and paying for it with a credit card as he says this. Apparently lots of other people are too, because a snicker rings through the workshop here at ToorCon9.

By their nature, WiFi hotspots are insecure, he says, though they can be made more secure by using client isolation, which makes it harder to slide up and down the communications links from the server to the client and web.

“Client isolation should be turned on but we can still spoof the address or take the address backwards,” he says, noting that Macs are easily spoofed.

“Hot spots are really set up for the bad guys,” he says.

When Rushing looked at hotspot users, he found 30 percent have no firewalls and 3 percent have active malware they’re inadvertently introducing to the servers.

This is probably an issue I should have mentioned in my Times piece. It’s true that the risks of sharing your wireless connection are not zero: it does make it possible for other users on the network to scan your machine for vulnerabilities. However, the tips about public hotspots help to put that risk in perspective; your laptop is far more likely to encounter someone malicious in an airport or coffee shop, which is teeming with strangers, than in your home. So if you’re worried about the security risks of sharing your home wireless connection, you should be even more wary of using public access points. The nature of the security risks is identical, and the number of potential adversaries is much higher on a public hotspot.

If you’ve never experienced the World Wide Web, you need to read Daniel Solove’s The Future of Reputation: Gossip, Rumor, and Privacy on the Internet. But if you have used the Web, you’ll wonder about passages like this, rudiments that routinely crop up in the book:

When . . . bloggers find a post interesting, they will link to it. A “link” is a hyperlink, text that whisks you at a click to another webpage. The Web is interlaced with links, a giant latticework of connections between websites, where Internet traffic fires like synapses in a gigantic brain.

But forgiving these curiosities, the reader joins Solove on a whirl through some interesting problems created by the new medium of the Internet. Chiefly, personal information is persistent and amenable to copying. This means that slights and slanders can be magnified. Fairly or unfairly, the Internet can break people’s reputations.


I have a new paper out this week entitled “Unplugging Plug-and-Play Regulation” in which I discuss the ongoing dispute between cable operators and the consumer electronics industry over “digital cable ready” equipment and “plug-and-play” interactive applications. Basically, it’s a fight about how various features or services available on cable systems should work, including electronic programming guides (EPGs), video-on-demand (VOD), pay-per-view (PPV) services, and other interactive television (ITV) capabilities.

This fight is now before the Federal Communications Commission, where the Consumer Electronics Association (CEA) has asked the agency to mandate certain standards for those next-generation interactive video services. In my paper, I argue that regulation is unwise:

Ongoing marketplace experimentation and private negotiations represent the better way to establish technical standards. There is no need for the government to involve itself in a private standard-setting dispute between sophisticated, capable industries like consumer electronics and cable. And increased platform competition, not more government regulation of cable platforms, is the better way to ensure that innovation flourishes and consumers gain access to exciting new services.

To read the entire 7-page paper, click here.

Ed Felten isn’t impressed with Comcast’s traffic shaping techniques:

Comcast is using an unusual and nonstandard form of blocking. There are well-established mechanisms for dealing with traffic congestion on the Internet. Networks are supposed to respond to congestion by dropping packets; endpoint computers notice that their packets are being dropped and respond by slowing their transmissions, thus relieving the congestion. The idea sounds simple, but getting the details right, so that the endpoints slow down just enough but not too much, and the network responds quickly to changes in traffic level but doesn’t overreact, required some very clever, subtle engineering.

What Comcast is doing instead is to cut off connections by sending forged TCP Reset packets to the endpoints. Reset packets are supposed to be used by one endpoint to tell the other endpoint that an unexplained, unrecoverable error has occurred and therefore communication cannot continue. Comcast’s equipment (apparently made by a company called Sandvine) seems to send both endpoints a Reset packet, purporting to come from the other endpoint, which causes both endpoints to break the connection. Doing this is a violation of the TCP protocol, which has at least two ill effects: it bypasses TCP’s well-engineered mechanisms for handling congestion, and it erodes the usefulness of Reset packets as true indicators of error.
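
To make the mechanism Felten describes concrete, here is a minimal sketch, using the scapy packet library, of what an injected reset looks like. The addresses, ports, and sequence number are hypothetical placeholders, and this illustrates the general technique only, not Sandvine’s or Comcast’s actual equipment:

    # Illustration only: a forged TCP Reset aimed at an existing connection.
    # Endpoints, ports, and the sequence number are hypothetical placeholders.
    from scapy.all import IP, TCP, send

    # Suppose host A (10.0.0.5:51515) has a TCP connection open to host B (10.0.0.9:6881).
    # A middlebox that can observe the connection knows the live sequence numbers,
    # so it can inject a packet that each endpoint believes came from the other.
    rst_to_b = IP(src="10.0.0.5", dst="10.0.0.9") / TCP(
        sport=51515, dport=6881,
        flags="R",         # the Reset flag: "an unrecoverable error has occurred"
        seq=123456789,     # must fall within B's receive window to be accepted
    )
    send(rst_to_b, verbose=False)
    # A mirror-image packet "from" B to A would tear down A's side of the connection too.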

This brings to mind a question: as I understand it, TCP relies to some extent on clients being well-behaved and voluntarily backing off when faced with congestion problems. Is it possible that part of the reason that Comcast chose to target P2P applications specifically is that these aren’t “well-behaved” applications in this sense? Richard seems to be implying that this is the case. Is he right?
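
For readers wondering what “well-behaved” means here, the toy simulation below shows the additive-increase/multiplicative-decrease rule a standard TCP sender follows voluntarily: grow the sending window gradually while packets get through, and cut it sharply when a loss signals congestion. The numbers are illustrative, not drawn from any particular implementation:

    # A toy model of TCP's voluntary back-off (additive increase, multiplicative decrease).
    # The point is that the sender itself chooses to slow down when it sees loss;
    # nothing in the network forces it to.
    def simulate_aimd(rounds, loss_events):
        cwnd = 1.0                         # congestion window, in segments
        history = []
        for t in range(rounds):
            if t in loss_events:
                cwnd = max(1.0, cwnd / 2)  # multiplicative decrease on loss
            else:
                cwnd += 1.0                # additive increase per round trip
            history.append(cwnd)
        return history

    print(simulate_aimd(rounds=20, loss_events={8, 15}))
    # A client that simply ignored losses would keep ramping up and grab more than its
    # share of a congested link; that is the sense in which an application can fail to
    # be "well-behaved."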

While Comcast scrambles to explain itself, and those better versed in the technical issues debate the merits (see the comments) of what they surmise Comcast to be doing, I think it’s important to focus on another angle.

Look at the press and consumer reaction to the allegation that Comcast defied the public’s expectations. For example, Rob Pegoraro of the Washington Post has announced that he is investigating the issue for his column on Thursday, and has asked the public to help inform his thinking.

A mass of Comcast customers are weighing in, fairly or unfairly heaping a wide array of Internet woes on this ISP. And here’s a key quote from one commenter: “I got rid of comcast the second that Verizon FIOS was available in my neighborhood . . . .”

Are consumers helpless against the predation, real or imagined, of this ISP? No, they are not. The market forces playing out before us right now are bringing Comcast sharply to heel, and other ISPs too (they are watching with keen interest), never mind whether Comcast has done anything wrong from a technical or “neutrality” standpoint.

The challenge again is for proponents of broadband regulation to show how law, regulation, and a regulatory agency could do a better job than the collective brainpower and energy of the Internet community.

I recently completed a draft of Copyright as Intellectual Property Privilege, 58 Syracuse L. Rev. __ (2007) (forthcoming) (invited). Here’s an abstract:

We often call copyright a species of intellectual property, abbreviating it, “IP.” This brief paper suggests that we consider copyright as another sort of IP: an intellectual privilege. Though copyright doubtless has some property-like attributes, it more closely resembles a special statutory benefit than it does a right, general in nature and grounded in common law, deserving the title of “property.” To call copyright a “privilege” accurately reflects legal and popular usage, past and present. It moreover offers salutary policy results, protecting property’s good name and rebalancing the public choice pressures that drive copyright policy. We face a choice between two ways of thinking about, and talking about, copyright: as an intellectual property that authors and their assigns own, or as an intellectual privilege that they merely hold. Perhaps no label can fully capture the unique and protean nature of copyright. Recognizing it as a form of intellectual privilege would, however, help to keep copyright within its proper legal limits.


Brad Stone of the New York Times has a good post on the Bits Blog regarding the Comcast kerfuffle (Jim, why are we calling it that, again?). The gist:

It seems unlikely that Comcast has a secret agenda to shut down file-sharing applications and combat piracy on its network. But the company is clearly trying to have it both ways. It claims it is a neutral Internet service provider that treats all packets equally, not blocking or “shaping” its Internet traffic. Meanwhile it also positions itself as the champion of average Internet users whose speeds are being slowed by file-sharing.

The problem Comcast may now be facing is that in the absence of a plain explanation about what the company does to disadvantage certain applications in the name of managing traffic on its network, anecdotal reports and conspiracy theories fill the vacuum.

I have no doubt that Comcast’s practices stem from trying to provide a good-quality service for the majority of their customers. The problem their actions pose for those of us who advocate against unnecessary regulation, however, is that they’re not being completely clear about what they’re doing (although they’re trying).

For example, if the problem is one percent of users who tend to be bandwidth hogs, why not address the users instead of a protocol? AOL dial-up and T-Mobile wireless are able to meter customer use above a certain allotment without any negative privacy implications. It seems like Comcast does in fact target bandwidth hogs, although it doesn’t publish what the limit is. These sorts of unknowns stir up the conspiracy theories Stone mentions. That makes explaining to folks that there’s nothing nefarious here pretty tough.
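
As a rough sketch of that user-based alternative, metering requires nothing more than a byte counter per account; the allotment below is a made-up figure, not Comcast’s, AOL’s, or T-Mobile’s actual policy:

    # A hedged sketch of per-account usage metering: tally bytes per subscriber over a
    # billing window and flag anyone over an allotment. The threshold is hypothetical.
    from collections import defaultdict

    MONTHLY_ALLOTMENT_BYTES = 250 * 10**9      # e.g. 250 GB; an illustrative number

    usage = defaultdict(int)                   # account_id -> bytes this billing period

    def record_transfer(account_id, nbytes):
        usage[account_id] += nbytes

    def over_allotment():
        return [acct for acct, total in usage.items() if total > MONTHLY_ALLOTMENT_BYTES]

    # Note that nothing here looks inside packets or cares which protocol generated the
    # traffic; the meter only sees "how much," which is why the privacy objection to
    # user-based limits seems weak.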

If you’d like to get a flavor for the sort of impact that one (tenacious) citizen can have on making government data more transparent, check out this Google Tech Talk by one of my personal heroes, Carl Malamud. (I write about his exploits in my new paper on online transparency.) He talks about cajoling the ITU to put standards online, forcing the SEC to put its public information online, and his new project to live-stream and archive video of all congressional and agency hearings in Washington. He’s a real inspiration.

Comcast was kind enough to invite me to a conference call between one of their engineers and some think tank folks. They feel their policies have been mischaracterized in the press. While I found some of the information they shared helpful, I frankly don’t think they helped their case very much.

While he didn’t say so explicitly, the Comcast guy seemed to implicitly concede that the basic allegations are true. He emphasized that they were not blocking any traffic, but that in high-congestion situations they did “delay” peer-to-peer traffic to ease the load. Apparently the Lotus Notes thing was a bug that they’re working to fix. He refused to go into much detail about exactly how this “delay” was accomplished, but presumably if the AP’s story about TCP resets were inaccurate, he would have said so.

To be fair, most of the people on the call were lawyers or economists, not technologists, so it’s possible he just didn’t think anyone other than me would care about these details. Still, it seems like part of the point of having an engineer on the call would be to answer engineering-type questions. He also made a couple of points that I found a little patronizing. For example, he emphasized that most users wouldn’t even be able to detect the traffic-shaping activities they use without special equipment and training. Which is true, I guess, but rather beside the point.

If you haven’t read it yet, I recommend the discussion in response to Jerry’s post. I don’t know enough about the internals of cable modem protocols to know for sure who’s right, but Tom seems to me to make a good point when he says that forging reset packets is a wasteful and disruptive way to accomplish traffic shaping. The TCP/IP protocol stack is layered for a reason, and I can’t see any reason for routers to be mucking around at the TCP layer, when throttling can perfectly well be accomplished in a protocol-neutral manner at the IP layer.
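
To give a sense of what protocol-neutral throttling at the IP layer could look like, here is a minimal token-bucket sketch keyed only by source address. The rates are placeholders, and this is a simplified illustration rather than a description of how any real router implements it:

    # A minimal token-bucket rate limiter keyed by source IP address. It never looks at
    # TCP (or anything above it), so BitTorrent, web, and email traffic are treated alike.
    # Rates and burst sizes below are illustrative placeholders.
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_len):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_len:
                self.tokens -= packet_len
                return True      # forward the packet
            return False         # drop (or queue) it; TCP's own back-off does the rest

    buckets = {}                 # source IP -> TokenBucket

    def admit(src_ip, packet_len, rate=125_000, burst=500_000):
        bucket = buckets.setdefault(src_ip, TokenBucket(rate, burst))
        return bucket.allow(packet_len)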

Someone asked why Comcast didn’t throttle on a user-by-user basis rather than a protocol-by-protocol basis, and he said they were concerned with the privacy implications of that approach. That doesn’t make a lot of sense to me. Very few users are going to consider the number of bits they’ve transferred in a given time period to be confidential information.

We also asked about why there wasn’t more transparency about what throttling methods were being used and against which protocols. Apparently, Comcast feels that disclosing those sorts of details will make it easier for users to circumvent their throttling efforts. That doesn’t strike me as terribly persuasive; customers are entitled to know what they’re getting for their money, and people are going to figure it out sooner or later anyway. All secrecy accomplishes is to make them look bad when someone discovers it and reports it to the press.

With all that said, I’m not sure I see an obvious policy response. It seems to me that regardless of what the law says, there’s always going to be a certain amount of cat-and-mouse between ISPs and the heaviest network users. As Don Marti has pointed out, workarounds are easy to find. Add in a healthy dose of negative publicity, and it seems to me that while Comcast’s behavior is far from laudable, it’s far from obvious it’s a serious enough problem to justify giving the FCC the opportunity to second-guess every ISP’s routing policies.