Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Freeing the Journal

by Tim Lee on August 2, 2007

Relatedly, Ingram has an interesting post on whether Rupert Murdoch should make the Wall Street Journal’s website available for free:

I know that many newspapers have looked to the Journal as a model for what a paper can do online, because it is one of the few that has charged for its content from the very beginning and built what appears to be a successful business doing so. But does it make sense now? This Wall Street Journal story notes that Murdoch commissioned a study that looked at what going free would mean for the paper, and from that he concluded that while readership would grow by a factor of 10, advertising would likely only grow by a factor of five, and the loss of subscription revenue would effectively make the whole thing a wash. In other words, maybe it’s not worth it.

It’s not clear whether the study looked at the short run or the long run, but it seems to me that if the short-run financial outcome is anywhere close to a wash, then it’s stupid to keep the paywall, because the biggest harm a paywall does is to dramatically limit a site’s long-term growth potential. People who currently like the Journal enough to pay for it will likely keep doing so. But given the massive amount of information out there, most people just leaving college are likely to opt for one of the Journal’s free competitors.

Moreover, being free brings a wealth of ancillary benefits that don’t show up in the bottom line right away. As a blogger, I almost never link to stories behind paywalls because I can usually find a free version of the same story. My impression is that other bloggers tend to act the same way. So as the blogosphere becomes an increasingly important source of traffic, paywalls will become more of a liability. If Murdoch can eliminate the paywall and completely replace the lost revenue with ads, that seems like a no-brainer to me.
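The study’s “wash” arithmetic is worth making concrete. The figures below are purely hypothetical (the Journal’s real numbers weren’t disclosed), but they illustrate the break-even condition: if going free quintuples ad revenue while subscription revenue drops to zero, the move is revenue-neutral exactly when current subscription revenue is four times current ad revenue.

```python
# Hypothetical revenue figures (in millions of dollars) chosen only to
# illustrate the study's break-even condition; the Journal's actual
# numbers were not made public.
ad_revenue_paywalled = 25.0    # current online ad revenue
subscription_revenue = 100.0   # current subscription revenue (4x ads)

# The study's assumption: dropping the paywall multiplies ad revenue by 5
ad_revenue_free = 5 * ad_revenue_paywalled

revenue_with_paywall = ad_revenue_paywalled + subscription_revenue
revenue_free = ad_revenue_free  # subscriptions fall to zero

# When subscriptions are exactly 4x current ad revenue, the two are equal:
print(revenue_with_paywall, revenue_free)  # → 125.0 125.0
```

On these assumed numbers the move is a wash; if subscriptions are a smaller share of revenue than that, going free comes out ahead even before counting the ancillary benefits discussed below.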

Via Matthew Ingram (I’m catching up on RSS feeds), Jack Shafer makes an important observation about today’s prestige newspapers: their staffs are significantly larger than in the glory days of the 1970s. Although Shafer’s original piece overstated the case, the numbers are still significant: The New York Times has apparently grown from 500 reporters and editors to 750, while the Washington Post has grown from 340 to 550. In other words, each is about 50 percent larger than it was in the 1970s.

Shafer then quotes Post and Times officials explaining why news would suffer from a reduction in headcount to 1970s levels. Apparently fluff stories are bigger revenue drivers than they were in the 1970s, so the hard news headcount would have to be cut below 1970s levels to keep the paper profitable.

But I think this dramatically underestimates how much easier a reporter’s job is today than in the 1970s. There’s a wealth of original materials available online that makes fact-checking easier. There’s a massive distributed reporting system called the blogosphere that helps reporters dig up leads and provide instant feedback. There are people all over the place with cell phones capable of capturing photos and even video. There are sites like YouTube and Flickr that help aggregate and organize this wealth of material.

Obviously, there are still some stories where there’s no substitute for picking up the phone and calling sources, or for hopping on a plane to see a story first-hand. We still need some reporters to do that. But the job of a reporter these days is far more oriented toward synthesizing and summarizing material that’s already out there; often it is simply to translate sometimes-technical source documents into plain English.

Moreover, one of the points Shafer makes is that the Times and the Post relied far more on wire stories in the 1970s than they do today. There’s no reason to think there’d be much loss in story quality if reporters did more of this today. The Times could cut its staff covering technology and instead feature content from CNet or Wired (obviously, they’d want to feature some of CNet’s less geeky or esoteric content, but I’m sure CNet would be happy to produce some less-geeky stories to accommodate them). Many large web properties already do this, and there’s every reason to think this process could continue without significantly harming the quality of news coverage.

The next decade may bring wrenching adjustments for reporters used to secure positions at large newspapers, but there’s little reason to think that the quality of news will suffer as a result. Quite the contrary: thanks to the Internet, the average American has access to far more and better news than he did 20 years ago. Any diminution in the quality of newspaper reporting will be small compared with the benefits of being able to choose from dozens of high-quality news sites.

E-Voting in The Hill

by Tim Lee on August 2, 2007

In The Hill today, Lawrence Norden and I make the case that the Holt e-voting bill, while far from perfect, would be a step toward more secure elections.

DRM: Not Secure!

by Tim Lee on August 2, 2007

Alexander Wolfe points out that every DRM system known to man has been cracked. Slashdot seems to think this is news.

Ars reports that KSR v. Teleflex is beginning to have a real impact on the outcome of software patent litigation:

Friskit filed a patent infringement lawsuit against RealNetworks in 2003 that sought over $70 million in damages. In a ruling issued last week, Judge William W. Schwarzer granted Realnetworks’ motion for summary judgment, citing “Real’s clear and convincing evidence of obviousness.”

Judge Schwarzer cited the Supreme Court’s decision on KSR v. Teleflex in his opinion. “Two principles from the Supreme Court’s recent opinion in KSR Int’l Co. v. Teleflex Inc. guide the analysis of whether sufficient difference exists between the prior art and Friskit’s claims to render the patents nonobvious,” he wrote. The first of those is patents that rearrange old elements to create a new—but obvious—combination. The second comes from situations where a person of “ordinary skill” pursues known options, and the result is the product of “ordinary skill and common sense.”

“All of the individual features of Friskit’s patents which allow a user to easily search for and listen to streaming media existed in the prior art,” noted the judge, who went on to cite a number of media player…

Good for Judge Schwarzer. This bodes well for Vonage.

The New York Times has a story on voting reform that suggests an explanation for something that’s puzzled me for a while. One of the consistent patterns you’ll find in the e-voting debate is that state election officials tend to side with e-voting vendors rather than with security experts. This always struck me as a little bit puzzling, because the case against e-voting isn’t that hard to understand, and officials who work with these technologies every day, of all people, should be able to understand it.

One explanation is that once a state has chosen a particular voting technology, its officials get egg on their faces if they subsequently have to admit that the technology in question is a disaster. But some voting officials’ vehemence, especially as documented by Avi Rubin, seemed too strong to be explained purely as not wanting to admit your own mistakes.

Things make more sense if there’s a revolving door between state election officials and voting equipment vendors. You don’t even have to imagine explicit corruption. If many of your friends and former colleagues work for e-voting vendors, you’re more likely to believe them than some Ivory Tower security researcher you’ve never heard of.

I also think this is another reason that touch-screen voting machines are a bad idea—even with paper trails, audits, and the rest. Voting machine vendors have an incentive to make their products as complicated as possible so that they can charge the state more money for them. Making a touch-screen machine more secure means buying more hardware—fancier printers and diagnostic and auditing tools. On the other hand, making paper balloting more secure mostly means investing more in human inputs—hiring more election observers, giving election judges more training, conducting more hand recounts. Those aren’t things for which voting equipment vendors can charge a premium.

A voting machine with a paper trail is still a lot better than a voting machine without one, so I hope the Holt bill passes. But I would be much happier if Congress passed a law simply outlawing the use of touch-screen voting machines (perhaps with an exception for disabled voters). Such a bill would be a lot shorter and less intrusive, because it wouldn’t need all these extra provisions aimed at papering over the weaknesses of DRE+printer combinations.

On the one hand, I’m glad Kip Hawley took the time to answer some skeptical questions about the TSA’s security regime. On the other hand, I don’t find this remotely reassuring:

Bruce Schneier: You don’t have a responsibility to screen shoes; you have one to protect air travel from terrorism to the best of your ability. You’re picking and choosing. We know the Chechnyan terrorists who downed two Russian planes in 2004 got through security partly because different people carried the explosive and the detonator. Why doesn’t this count as a continued, active attack method?

I don’t want to even think about how much C4 I can strap to my legs and walk through your magnetometers. Or search the Internet for “BeerBelly.” It’s a device you can strap to your chest to smuggle beer into stadiums, but you can also use it to smuggle 40 ounces of dangerous liquid explosive onto planes. The magnetometer won’t detect it. Your secondary screening wandings won’t detect it. Why aren’t you making us all take our shirts off? Will you have to find a printout of the webpage in some terrorist safe house? Or will someone actually have to try it? If that doesn’t bother you, search the Internet for “cell phone gun.”

It’s “cover your ass” security. If someone tries to blow up a plane with a shoe or a liquid, you’ll take a lot of blame for not catching it. But if someone uses any of these other, equally known, attack methods, you’ll be blamed less because they’re less public.

Kip Hawley: Dead wrong! Our security strategy assumes an adaptive terrorist, and that looking backwards is not a reliable predictor of the next type of attack. Yes, we screen for shoe bombs and liquids, because it would be stupid not to directly address attack methods that we believe to be active. Overall, we are getting away from trying to predict what the object looks like and looking more for the other markers of a terrorist. (Don’t forget, we see two million people a day, so we know what normal looks like.) What he/she does; the way they behave. That way we don’t put all our eggs in the basket of catching them in the act. We can’t give them free rein to surveil or do dry-runs; we need to put up obstacles for them at every turn. Working backwards, what do you need to do to be successful in an attack? Find the decision points that show the difference between normal action and action needed for an attack. Our odds are better with this approach than by trying to take away methods, annoying object by annoying object. Bruce, as for blame, that’s nothing compared to what all of us would carry inside if we failed to prevent an attack.

This is totally unresponsive to Schneier’s question. What Schneier was looking for was some sort of coherent explanation for why shoes and bottles of liquids are a bigger threat than cell phones and fake bellies. Hawley didn’t have any such explanation, probably because there isn’t one. We’ve given the TSA an impossible job, and so they’ve responded with security theater. These “security measures” won’t stop a determined terrorist, but they might make travelers (at least those who don’t think about it too hard) feel better.

There’s lots more great (appalling) stuff where that blockquote came from, so click on through to part 1 and part 2.

Worst-case Scenario

by Tim Lee on July 31, 2007

Voting machine vendors are their own worst enemies:

The study, conducted by the university under a contract with Bowen’s office, examined machines sold by Diebold Election Systems, Hart InterCivic and Sequoia Voting Systems.

It concluded that they were difficult to use for voters with disabilities and that hackers could break into the systems and change vote results.

Machines made by a fourth company, Elections Systems & Software, were not included because the company was late in providing information that the secretary of state needed for the review, Bowen said.

Sequoia, in a statement read by systems sales executive Steven Bennett, called the UC review “an unrealistic, worst-case-scenario evaluation.”

Right. Because the way to tell if a system is secure is to focus on the best-case scenario.

I guess I shouldn’t be surprised. Voting machine vendors have a track record of releasing jaw-droppingly lame responses to criticisms of their products, so why not continue the pattern?

Cord makes some good points about the disadvantages of open networks, but I think it’s a mistake for libertarians to hang our opposition to government regulation of networks on the contention that closed networks are better than open ones. Although it’s always possible to find examples on either side, I think it’s pretty clear that, all else being equal, open networks tend to be better than closed networks.

There are two basic reasons for this. First, networks are subject to network effects—the property that the per-user value of a network grows with the number of people connected to the network. Two networks with a million people each will generally be less valuable than a single network with two million people. The reason TCP/IP won the networking wars is that it was designed from the ground up to connect heterogeneous networks, which meant that it enjoyed the most potent network effects.
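One rough formalization of this claim is Metcalfe’s law, which values a network of n users at roughly n², since the number of possible pairwise connections grows as n(n−1)/2. It’s only a heuristic, not a measured fact about any particular network, but it makes the comparison in the paragraph above concrete:

```python
def metcalfe_value(users: int) -> int:
    """Metcalfe's law heuristic: a network's value scales as the
    square of its user count, since possible pairwise connections
    grow as n*(n-1)/2, i.e. O(n^2)."""
    return users ** 2

# Two isolated networks of one million users each...
two_separate = metcalfe_value(1_000_000) + metcalfe_value(1_000_000)

# ...versus one interconnected network of two million users.
one_merged = metcalfe_value(2_000_000)

print(one_merged / two_separate)  # → 2.0: merging doubles total value
```

Under this heuristic, connecting two equal-sized networks always doubles their combined value, which is exactly the dynamic that favored TCP/IP’s heterogeneous-network design.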

Second, open networks have lower barriers to entry. Here, again, the Internet is the poster child. Anybody can create a new website, application, or service on the Internet without asking anyone’s permission. There’s a lot to disagree with in Tim Wu’s Wireless Carterfone paper, but one thing the paper does is eloquently demonstrate how different the situation is in the cell phone world. There are a lot of innovative mobile applications that would likely be created if it weren’t so costly and time-consuming to get the telcos’ permission to develop for their networks.

Continue reading →

Smart comments on the death of newspapers from Ezra Klein:

The heyday of newspapers had them operating amid a scarcity of information. The average citizen in Omaha, Tallahassee, or even Los Angeles simply couldn’t collect information from DC or Nairobi, couldn’t call up yesterday’s presidential speech, couldn’t choose from thousands of content sources and millions of blogs and dozens of cable news channels. Newspapers, due to their wide array of reporters, their investment-heavy text transmission infrastructure, and their near-monopolies in individual markets, added a ton of value in getting consumers information they couldn’t otherwise access. That’s changed.

Now information is abundant, even too abundant. What readers need is interpretation, filters, guides. The media — dare I say it? — needs to mediate. That’s where they can add the value. The basic stenography that was valuable in one age isn’t worthless in this one, but it’s simplistic, and not nearly enough.

Further, we’re not merely dealing with an era in which information has become overwhelmingly abundant, we’re caught in a moment when all sides have become exquisitely sophisticated at spinning it, at publicizing what they want heard, distorting what scares them, drowning out what hurts them, discrediting what attacks them. So not only is there too much for the average consumer to deal with, it’s not even clear what they should deal with, what’s honest, who can be trusted. This is dicier territory, of course, but I think those who fret over the newspaper’s capability to serve this guiding function give insufficient thought to how odd the concept of objective news coverage has always been, and how much more potential there was for abuse when there was nearly no in-market competition.

And Matt Yglesias:

Continue reading →