Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Freedom of Speech

by Tim Lee on October 29, 2007 · 4 comments

John McCain cuts an ad:

Fox News sends him a nastygram

Here’s a way Hillary Clinton can earn some geek brownie points at effectively no cost:

As the networks who have promised to (effectively) deliver free presidential debates have shown (CNN, NBC, ABC), even when free, it is still worth it enough to at least some. And in a world with YouTubes and p2p technologies, some networks are plainly enough. If Fox demands control, presidential debates don’t need Fox.

It is time that the presidential candidates from both parties stand with Senator McCain and defend his right to use this clip to advance his presidential campaign. Not because it is “fair use” (whether or not it is), but because presidential debates are precisely the sort of things that ought to be free of the insanely complex regulation of speech we call copyright law.

Indeed, as the target of the attack, and as one who has been totally AWOL on this issue from the start, it would be most appropriate if this demand were to begin with Senator Clinton. Let her defend her colleague’s right to criticize her, by demanding that her party at least condition any presidential debate upon the freedom of candidates and citizens to speak.

On the other hand, it’s hard to imagine a more favorable test case for fair use of video, so I’m sort of hoping neither party backs down and we get a solid ruling that making short clips of prominent public policy discussions is a fair use.

We Didn’t Start the Viral

by Tim Lee on October 26, 2007 · 0 comments

And neither did Julian.

Over at Ars, I have a new article pointing out that there’s probably an inverse relationship between the number of people on the government’s various terrorist-suspect lists (the GAO just reported that there are now 750,000 people on the largest “watch” list) and the effectiveness of those lists. There can’t be anywhere close to three-quarters of a million terrorists in the world, so all a list that size accomplishes is to dilute law enforcement and intelligence resources and ensure that the real terrorists won’t get the required scrutiny.

I also argue that while there’s a pretty good argument for an international watch list, it’s awfully hard to justify using such a list domestically:

If government officials have concrete evidence that an American person is engaged in terrorist-related activities, then the government should be doing a lot more than putting that individual on a no-fly list. They should be actively investigating the individual, tapping his phone, reading his email, monitoring his financial transactions, and generally gathering the evidence required to either clear his name, deport him, or arrest him.

If, on the other hand, the government doesn’t have enough evidence of terrorist ties to justify starting an investigation against an individual, then it’s unreasonable, not to mention a waste of law enforcement resources, to ban him from flying on airplanes or subject him to heightened scrutiny every time he goes to an airport. The sheer number of people on the selectee list and the high rate of false positives may be one reason that screeners do a legendarily bad job finding simulated weapons in security tests. The resources now spent on screening tens of thousands of selectees—most of whom turn out to be false positives—would be far better spent on additional FBI agents to do in-depth investigations of people with actual terrorist ties.
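The dilution argument is really a base-rate problem, and a toy calculation makes it concrete. The 750,000 figure is the GAO’s; the number of genuine suspects is a pure assumption made up for illustration:

```python
# Toy base-rate calculation. Only the list size comes from the GAO
# report; the count of genuine suspects is an illustrative assumption.
list_size = 750_000
true_positives = 500  # assumed, for illustration only

# If screeners give every listed name equal scrutiny, the chance that
# any one "selectee" stop involves a genuine suspect is tiny.
hit_rate = true_positives / list_size
print(f"P(selectee is a real suspect) = {hit_rate:.4%}")  # 0.0667%

# Equivalently: screeners must process ~1,500 false positives for
# every genuine suspect on the list.
false_per_true = (list_size - true_positives) / true_positives
print(f"False positives per real suspect: {false_per_true:,.0f}")  # 1,499
```

Whatever the true number of suspects is, shrinking the list raises the hit rate and concentrates scrutiny where it belongs.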

This argument is, of course, cribbed from my colleague Jim Harper’s excellent book on ID cards and privacy.

Me on Wiretapping at Cato

by Tim Lee on October 26, 2007 · 0 comments

I’m a little slow on the draw, but I did a Cato Daily Podcast [MP3] on the wiretapping debate on Monday. Incidentally, as you can see here, Cato’s Daily Podcast is an excellent source for in-depth commentary on a wide range of public policy issues. I listen to it on my way to work and find it invaluable for keeping up with public policy debates outside of tech policy.

I’m not much of a Democratic activist, but I’ll take a swing at Kevin Drum’s question regarding the Democrats’ spinelessness with regard to civil liberties:

When we blogosphere types complain about this weak-kneed attitude, are we complaining because (a) we think the centrists are wrong; they could keep their seats in marginal districts even if they toed the progressive line on national security issues. Or (b) because we don’t care; they should do the right thing even if it means losing next November?

I’m not sure about “the progressive line on national security” in general, but with regard to FISA, I find it awfully hard to believe that telecom immunity is a losing issue for the Democrats. I find it awfully hard to imagine somebody’s Republican challenger running attack ads on the telco immunity issue. I mean, between this, FEMA, Halliburton, and the Blackwater fiasco, the Democrats will have a potent narrative about how the President has put cronyism above the interests of the country. If a Democratic politician can’t at least spin the telco immunity issue to a draw, it’s a miracle he got elected to Congress in the first place.

Now, of course the Republican candidate can still run generic “Rep. Smith hates the troops and loves the terrorists” ads. But as Max Cleland discovered in 2002, Republicans call Democrats soft on terrorism pretty much regardless of how they vote. So I think it’s better to have a clear, easily-explained position on the issue (and “telecom companies should obey the law” seems like a pretty clear position to me) than to curl up into a fetal position and vote with the president on everything related to terrorism in the hope that it will save them.

Spending your life in a defensive crouch simply ensures that the other team gets to define the terms of the debate. The way you win an argument like this is by going on offense. The DCCC should start running ads in swing districts touting the courage of Democratic incumbents in standing up to Pres. Bush and his cronies in the telecom industry. Tie this issue to Halliburton, Blackwater, and “Heck-of-a-job” Brownie’s handling of Katrina. Like those folks, AT&T has sold out your rights in exchange for lucrative government contracts. I guess you’d have to run an ad like that by a focus group before you’d know how effective it was, but surely something like that would work better than the current “cave in and hope they’re nice to us” strategy.

More good stuff from Ed Felten on the Comcast dispute:

Pretend that you’re the net neutrality czar, with authority to punish ISPs for harmful interference with neutrality, and you have to decide whether to punish Comcast. You’re suspicious of Comcast, because you can see their incentive to bolster their cable-TV monopoly power, and because their actions don’t look like a good match for the legitimate network management goals that they claim motivate their behavior. But networks are complicated, and there are many things you don’t know about what’s happening inside Comcast’s network, so you can’t be sure they’re just trying to undermine BitTorrent. And of course it’s possible that they have mixed motives, needing to manage their network but choosing a method that had the extra bonus feature of hurting BitTorrent. You can ask them to justify their actions, but you can expect to get a lawyerly, self-serving answer, and to expend great effort separating truth from spin in that answer.

Are you confident that you, as net neutrality czar, would make the right decision? Are you confident that your successor as net neutrality czar, who would be chosen by the usual political process, would also make the right decision?

Even without a regulatory czar, wheels are turning to punish Comcast for what they’ve done. Customers are unhappy and are putting pressure on Comcast. If they deceived their customers, they’ll face lawsuits. We don’t know yet how things will come out, but it seems likely Comcast will regret their actions, and especially their lack of transparency.

That final point is important. The alternative we face is not regulation or letting companies do whatever they want. The alternative is regulation vs. a variety of other mechanisms—bad PR, lawsuits, customer defections—that can punish Comcast for bad behavior.

But the market process, like the regulatory process, is a process, and processes take time. In an otherwise excellent piece on Comcast’s dubious explanations for its routing policies, my Ars colleague Eric Bangeman included a sub-heading reading “when the market can’t sort things out.” It seems to be true that the market hasn’t set things straight yet. But that shouldn’t surprise us. It’s been barely a week since the story broke in the mainstream media.

After all, imagine if the shoe were on the other foot: suppose we had passed Snowe-Dorgan last year and network neutrality were the law of the land. How would the FCC have reacted? Well, Snowe-Dorgan envisions a complaint process with a 90-day response period. So somebody would have had to have filed a complaint (it’s possible this could have been done in late August when the story first hit the tech press), and then the FCC would have needed to investigate and hand down a ruling. That ruling may or may not have gone against Comcast, and if it did go against Comcast it likely would have been challenged in court, delaying compliance by months if not years. So it’s hardly an indictment of the market process that it hasn’t magically made Comcast behave itself after barely a week of negative publicity. If Comcast emerges unscathed in a few months (i.e. few customer defections, no successful lawsuits, and no significant changes in policy) then the “market failure” narrative will be a lot more compelling.

Threat Level offers some safety tips for laptop users accessing public hotspots:

“The most dangerous places to connect are airports, hotels, convention centers,” say Richard Rushing, Chief Security Officer for AirDefense, which does wireless security. “And most people use credit cards there.”

Oops. I am hooking up to the San Diego Convention Center’s wireless and paying for it with a credit card as he says this. Apparently lots of other people are too because a snicker rings through the workshop here at ToorCon9.

By their nature, WiFi hotspots are insecure, he says, though they can be made more secure by using client isolation, which makes it harder to slide up and down the communications links from the server to the client and web.

“Client isolation should be turned on but we can still spoof the address or take the address backwards,” he says, noting that Macs are easily spoofed.

“Hot spots are really set up for the bad guys,” he says.

When Rushing looked at hotspot users, he found 30 percent have no firewalls and 3 percent have active malware they’re inadvertently introducing to the servers.

This is probably an issue I should have mentioned in my Times piece. It’s true that the risks of sharing your wireless connection are not zero: it does make it possible for other users on the network to scan your machine for vulnerabilities. However, the tips about public hotspots help to put that risk in perspective; your laptop is far more likely to encounter someone malicious in an airport or coffee shop, which is teeming with strangers, than in your home. So if you’re worried about the security risks of sharing your home wireless connection, you should be a lot more hesitant about using public access points. The security risks involved are identical in kind, and the number of potential adversaries is much higher on a public hotspot.

Ed Felten isn’t impressed with Comcast’s traffic shaping techniques:

Comcast is using an unusual and nonstandard form of blocking. There are well-established mechanisms for dealing with traffic congestion on the Internet. Networks are supposed to respond to congestion by dropping packets; endpoint computers notice that their packets are being dropped and respond by slowing their transmissions, thus relieving the congestion. The idea sounds simple, but getting the details right, so that the endpoints slow down just enough but not too much, and the network responds quickly to changes in traffic level but doesn’t overreact, required some very clever, subtle engineering.

What Comcast is doing instead is to cut off connections by sending forged TCP Reset packets to the endpoints. Reset packets are supposed to be used by one endpoint to tell the other endpoint that an unexplained, unrecoverable error has occurred and therefore communication cannot continue. Comcast’s equipment (apparently made by a company called Sandvine) seems to send both endpoints a Reset packet, purporting to come from the other endpoint, which causes both endpoints to break the connection. Doing this is a violation of the TCP protocol, which has at least two ill effects: it bypasses TCP’s well-engineered mechanisms for handling congestion, and it erodes the usefulness of Reset packets as true indicators of error.
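To make the forgery Felten describes concrete, here is a sketch (standard library only) of the 20-byte TCP header such a device would fabricate. The port numbers and sequence number are made up, and this deliberately omits the raw-socket and checksum machinery a working injector would need:

```python
import struct

def tcp_header(src_port, dst_port, seq, flags):
    """Pack a minimal 20-byte TCP header (checksum left at zero).

    Illustration only: this shows where the RST bit lives, not a
    working injector (sending it would also require a raw IP socket
    and a correct checksum over a pseudo-header).
    """
    ack = 0
    offset_reserved = 5 << 4  # data offset = 5 words, no options
    window = 0
    checksum = 0
    urgent = 0
    return struct.pack("!HHLLBBHHH", src_port, dst_port, seq, ack,
                       offset_reserved, flags, window, checksum, urgent)

RST = 0x04  # the reset bit in the TCP flags field

# A middlebox that observes a connection's ports and current sequence
# number can fabricate this header "from" either endpoint; the
# receiver has no way to distinguish it from a genuine reset.
forged = tcp_header(src_port=6881, dst_port=51413, seq=1000, flags=RST)
assert len(forged) == 20 and forged[13] == RST
```

The point of the sketch is how little information a forger needs: because nothing in a plain TCP segment authenticates its sender, any on-path device can speak “as” either endpoint.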

This brings to mind a question: as I understand it, TCP relies to some extent on clients being well-behaved and voluntarily backing off when faced with congestion problems. Is it possible that part of the reason that Comcast chose to target P2P applications specifically is that these aren’t “well-behaved” applications in this sense? Richard seems to be implying that this is the case. Is he right?
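For readers unfamiliar with what “well-behaved” means here, this is a minimal sketch of the additive-increase, multiplicative-decrease (AIMD) behavior a standard TCP sender follows; the round counts and loss pattern are arbitrary. Note that a BitTorrent client’s individual connections typically do back off like this; the complaint is usually that it opens many connections at once, so its aggregate share of a congested link stays large:

```python
# Toy AIMD simulation: a well-behaved TCP sender grows its congestion
# window by one segment per round trip and halves it when the network
# signals congestion by dropping a packet. Illustrative only, not a
# model of any particular client.

def aimd(rounds, loss_rounds, window=1.0):
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            window = max(window / 2, 1.0)  # multiplicative decrease
        else:
            window += 1.0                  # additive increase
        history.append(window)
    return history

# One loss in round 5: the window climbs to 6, halves to 3, climbs again.
print(aimd(10, loss_rounds={5}))  # [2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

The fairness question is then whether an application running many such loops in parallel counts as “well-behaved,” since each loop is, but the sum is not.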

Comcast was kind enough to invite me to a conference call between one of their engineers and some think tank folks. They feel their policies have been mischaracterized in the press. While I found some of the information they shared helpful, I frankly don’t think they helped their case very much.

While he didn’t say so explicitly, the Comcast guy seemed to implicitly concede that the basic allegations are true. He emphasized that they were not blocking any traffic, but that in high-congestion situations they did “delay” peer-to-peer traffic to ease the load. Apparently the Lotus Notes thing was a bug that they’re working to fix. He refused to go into much detail about exactly how this “delay” was accomplished, but presumably if the AP’s story about TCP resets were inaccurate, he would have said so.

To be fair, most of the people on the call were lawyers or economists, not technologists, so it’s possible he just didn’t think anyone other than me would care about these details. Still, it seems like part of the point of having an engineer on the call would be to answer engineering-type questions. He also made a couple of points that I found a little patronizing. For example, he emphasized that most users wouldn’t even be able to detect the traffic-shaping activities they use without special equipment and training. Which is true, I guess, but rather beside the point.

If you haven’t read it yet, I recommend the discussion in response to Jerry’s post. I don’t know enough about the internals of cable modem protocols to know for sure who’s right, but Tom seems to me to make a good point when he says that forging reset packets is a wasteful and disruptive way to accomplish traffic shaping. The TCP/IP protocol stack is layered for a reason, and I can’t see any reason for routers to be mucking around at the TCP layer, when throttling can perfectly well be accomplished in a protocol-neutral manner at the IP layer.
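As a sketch of what protocol-neutral, IP-layer throttling could look like, here is a toy token-bucket rate limiter. The rate and burst parameters are made up, and a real router would implement this in its forwarding path rather than in Python:

```python
import time

class TokenBucket:
    """Protocol-neutral rate limiting sketch: throttle any heavy flow
    at the IP layer by dropping (or queueing) packets that exceed a
    byte budget, without inspecting TCP state or forging packets.
    Parameters are illustrative."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        # Refill tokens in proportion to elapsed time, up to the cap.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True   # forward the packet
        return False      # drop it; TCP's own backoff does the rest

bucket = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=10_000)
print(bucket.allow(1500))  # True: within the burst allowance
```

Because the bucket only counts bytes per flow, it is blind to whether the traffic is BitTorrent, Lotus Notes, or anything else, which is precisely the property the forged-reset approach lacks.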

Someone asked why Comcast didn’t throttle on a user-by-user basis rather than a protocol-by-protocol basis, and he said they were concerned with the privacy implications of that approach. That doesn’t make a lot of sense to me. Very few users are going to consider the number of bits they’ve transferred in a given time period to be confidential information.

We also asked about why there wasn’t more transparency about what throttling methods were being used and against which protocols. Apparently, Comcast feels that disclosing those sorts of details will make it easier for users to circumvent their throttling efforts. That doesn’t strike me as terribly persuasive; customers are entitled to know what they’re getting for their money, and people are going to figure it out sooner or later anyway. All secrecy accomplishes is to make them look bad when someone discovers it and reports it to the press.

With all that said, I’m not sure I see an obvious policy response. It seems to me that regardless of what the law says, there’s always going to be a certain amount of cat-and-mouse between ISPs and the heaviest network users. As Don Marti has pointed out, workarounds are easy to find. Add in a healthy dose of negative publicity, and it seems to me that while Comcast’s behavior is far from laudable, it’s far from obvious it’s a serious enough problem to justify giving the FCC the opportunity to second-guess every ISP’s routing policies.

Me on Eminent Domain Abuse

by Tim Lee on October 18, 2007 · 0 comments

This isn’t about tech policy, but since it’s something I’ve spent a ton of time on in recent months, I thought some TLF readers might be interested in my new study on eminent domain abuse in Missouri. Also be sure to check out my spin-off article in the American that focuses on the ways that urban redevelopment projects harm small businesses and poor people.