Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Former TLF blogger Tim Lee returns with this guest post. Find him most of the time at the Bottom-Up blog.

Thanks to Jim Harper for inviting me to return to TLF to offer some thoughts on the recent Adam Thierer vs. Tim Wu smackdown. I’ve recently finished reading The Master Switch, and I didn’t have my friend Adam’s viscerally negative reactions.

To be clear, on the policy questions raised by The Master Switch, Adam and I are largely on the same page. Wu exaggerates the extent to which traditional media has become more “closed” since 1980, he is too pessimistic about the future of the Internet, and the policy agenda he sketches in his final chapter is likely to do more harm than good. I plan to say more about these issues in future writings; for now I’d like to comment on the shape of the discussion that’s taken place so far here at TLF, and to point out what I think Adam is missing about The Master Switch.

Here’s the thing: my copy of the book is 319 pages long. Adam’s critique focuses almost entirely on the final third of the book (pages 205-319), in which Wu tells the history of the last 30 years and makes some tentative policy suggestions. If Wu had published pages 205-319 as a stand-alone monograph, I would have been cheering along with Adam’s response to it.

But what about the first 200-some pages of the book? A reader of Adam’s epic 6-part critique is mostly left in the dark about their contents. And that’s a shame, because in my view those pages contain not only the best parts of the book but also its most libertarian-friendly ones.

Those pages tell the history of the American communications industries—telephone, cinema, radio, television, and cable—between 1876 and 1980. Adam only discusses this history in one of his six posts. There, he characterizes Wu as blaming market forces for the monopolization of the telephone industry. That’s not how I read the chapter in question.

Going Solo

by Tim Lee on August 6, 2009

Adam Thierer recruited me to contribute to what became the Technology Liberation Front way back in August 2004, when I was fresh out of college and working as a writer at the Cato Institute. My first post was about DRM (I was against it). I remember going back and forth with Adam about whether there was really a demand for a libertarian tech-policy blog. I think the last five years have laid those questions to rest, as both our traffic and our list of contributors have grown steadily. The last year or so has gone especially well, as we’ve been joined by Ryan, Berin, and Alex, and Adam has inaugurated new features like his annual Best Tech Books series.

At the same time, I’ve been blessed with a steadily growing list of other blogging opportunities. I’m now nominally a regular contributor to at least six blogs. In practice, this has meant woefully neglecting all six of them. Meanwhile, I’ve had a number of people complain that it’s impossible to follow my writing, scattered as it is in so many places.

So I’ve decided that now is a good time to “go solo.” I’ve launched a new blog called “Bottom-Up,” and I’m going to be ending or scaling back my involvement with all the other blogs to which I nominally contribute. This will be my last post at TLF, and starting tomorrow the vast majority of my blogging activities will be found at the new site.

Some of what I’ll be talking about will be familiar to longtime TLF readers. I did a post today on the decline of newspapers, a topic I’ve weighed in on before. But I’ll also be covering some new ground. This post, for example, examines why Darwin’s theory of evolution remains so controversial after 150 years. I hope you’ll check it out, and if it looks interesting, please subscribe.

In closing, I want to thank my fellow TLFers, with whom I’ve fought the good fight over the last five years. I’m excited to see what they come up with in the next five years.

Viva la (Technology) Revolution!

My friend Megan McArdle has a sharp post on the causes of the newspaper’s imminent demise:

Journalism is not being brought low by excess supply of content; it’s being steadily eroded by insufficient demand for advertising pages. For most of history, most publications lost money, or at best broke even, on their subscription base, which just about paid for the cost of printing and distributing the papers. Advertising was what paid the bills. To be sure, some of that advertising is migrating to blogs and similar new media. But most of it is simply being siphoned out of journalism altogether. Craigslist ate the classified ads. eHarmony stole the personals. Google took those tiny ads for weird products. And Macy’s can email its own damn customers to announce a sale…

We’re not witnessing the breakup of a monopoly, in which more players make more modest incomes providing more stuff, and everyone flourishes (except the monopolist). We’re witnessing the death of a business model. And no one has figured out how to pay for hard news. Hard news stories take a great deal of time to write–more time than most amateurs can afford, which is why blogs tend to do opinion rather than journalism. Moreover, they are at least greatly improved when their authors are not worried about losing their jobs if what they write pisses off a local power broker.

I think there’s a lot to this: a key part of the newspaper’s business model was that economies of scale made it one of the very few efficient ways of distributing small pieces of printed information to a lot of people. So lots of different kinds of content—classified ads, personal ads, display ads, and various kinds of news reporting—got bundled together and sold as one package. The Internet makes it cheap to distribute information of all kinds, and so the newspaper is getting disaggregated. As a result, some of the cross-subsidies that supported the traditional newspaper are going away.

So the death of the classified is one important cause of newspapers’ worsening business model. But it’s also true that newspapers are “being brought low by excess supply of content.” The websites of mainstream media outlets run display ads, and these ads generate revenues. They don’t generate enough revenue to cover the costs of producing content, but that’s simply a function of supply and demand: if there were fewer online news sources, the ones that were left would be able to command higher rates. This is easy to see with the following thought experiment: imagine if the government granted the New York Times a monopoly in the news reporting business, so that no other media outlet were permitted to provide news online. Under those circumstances, the Times would be insanely profitable. They’d have tens of millions of daily readers and be able to charge outrageous amounts of money for their display ads.

Each traditional outlet that goes out of business makes the others a little more profitable. Eventually, the market will reach an equilibrium–if necessary, with dramatically fewer news outlets and higher revenues for each one. But there’s no “death of a business model” here. The newspapers have always given away content in order to sell ads. The news websites of the future will do the same thing. There just may be fewer of them than there were in the past.

The part I think Megan is ignoring is that while it’s often true that hard news stories take a “great deal of time to write,” the Internet has made the process much easier for many types of news. Most obviously, the laborious process of editing and typesetting stories on strict deadlines is being replaced by much more flexible editing using web-based content management systems. Many primary sources (court decisions, regulatory filings, government data) that once required a physical trip to obtain can now be downloaded off the web. Reporters also have access to a vast new universe of primary sources from user-generated media that simply didn’t exist in the past.

It’s possible that the absolute number of reporters doing “hard news” in the future will be lower than it was in the past. And certainly the next decade will be a tough one for print journalists. But there’s nothing fundamentally broken about the “give away content, sell ads” business model. And we’re not heading toward a dystopian future in which no one produces hard news.

Defending Free

by Tim Lee on July 1, 2009

There’s been a lot of criticism lately of Chris Anderson’s Free. Malcolm Gladwell didn’t like it. Matt Yglesias had a sharp and critical response, and here at TLF Cord offered a strongly negative take on the book.

I haven’t read Free myself yet, but I think I know Anderson’s argument well enough to see that the critics aren’t really engaging it. Two really important points seem to be getting missed.

First, when Anderson says “eventually the force of economic gravity will win,” he means eventually. So citing YouTube—a site that’s been in business for barely four years—doesn’t prove anything. Lots of Internet startups lost money for four years and then went on to make a killing.

Moreover, it seems to me that none of Anderson’s critics really address the heart of this argument. In any other competitive market, we know that competition pushes prices down toward their marginal cost. The PC industry, for example, is famous for its razor-thin margins. Given that the marginal cost of content is zero, basic economics would seem to tell us that—at least in highly competitive sectors like mainstream news—competition is going to push prices down to zero and keep them there.

Second, a lot of criticism seems to miss that “free” business models aren’t just about giving stuff away and hoping a miracle occurs. They’re about using free stuff to sell complementary goods. Obviously if YouTube just gives away a lot of free videos, as Matt suggests, that’s not going to make them any money. But their business model is to give away free videos and sell ads. This is a perfectly plausible business model that will almost certainly become viable in the next few years. YouTube’s primary costs are servers and bandwidth, both of which continue to fall in price at a prodigious rate. Advertising revenues have fallen somewhat in the last few months, but there’s every reason to expect them to pick up again when the recession is over. Therefore, the lines will cross in the not-too-distant future, and YouTube will become at least moderately profitable.

It’s important to note that Matt is absolutely right that these businesses may only be moderately profitable. As he says, this is precisely the point of free markets—forcing companies to compete against one another means better deals for us and lower profits for them. So if Anderson claims that “free-based” business models are going to be wildly profitable, he’s probably wrong. Many free-based business models will only be modestly profitable. But they’re not going to keep losing money forever.

Finally, Gladwell, Yglesias, and Blomquist all seem to miss the point about transaction costs: charging small amounts of money is expensive. It costs more than 10 cents to charge someone 10 cents. As a consequence, if the equilibrium price of your product is less than 10 cents, it’s stupid to charge for it because all the revenues will go to the credit card company. I think this is actually more important than the psychological effects Matt talks about. It’s not just that customers have an irrationally strong attachment to the concept of free. It’s that below a certain point the overhead involved in charging for stuff is too high to be worth the trouble.
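To make that overhead concrete, here’s a toy Python sketch of the break-even math. The fee structure (a fixed 30 cents plus 2.9 percent per transaction) is an illustrative assumption about typical card-processing costs, not any particular processor’s actual rates:

    # Illustrative only: why micro-charges don't work. Assumes a
    # card-processing fee of $0.30 fixed plus 2.9% of the transaction --
    # representative numbers, not a real processor's rate card.
    FIXED_FEE = 0.30
    PERCENT_FEE = 0.029

    def seller_revenue(price):
        """Net revenue after card-processing fees for a single charge."""
        return price - (FIXED_FEE + PERCENT_FEE * price)

    for price in (0.10, 0.25, 1.00, 5.00):
        print(f"charge ${price:.2f} -> keep ${seller_revenue(price):.2f}")

    # charge $0.10 -> keep $-0.20  (you lose money on every sale)
    # charge $0.25 -> keep $-0.06
    # charge $1.00 -> keep $0.67
    # charge $5.00 -> keep $4.56

At a dime, the seller loses money on every single transaction; the price has to climb well past the fixed fee before charging for the product beats giving it away.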

What’s especially weird about these arguments is that we’re surrounded by examples of Anderson’s thesis. The overwhelming majority of news and commentary on the web is available for free. Online services like search and email are free, and many of them are extremely profitable. Red Hat and MySQL (before it got bought by Sun) built extremely successful businesses around free software. For that matter, let’s forget the Internet and computers entirely. The 20th century radio and television industries, and parts of the newspaper industry, were built on “free”-based business models. It’s obviously true that companies can earn profits while giving away content. And basic economics tells us that we should expect companies that give their content away for free to gradually push out companies that don’t. So why is Anderson getting so much flak for pointing out an obvious and inescapable trend?

I wrote a piece about PACER last week, which Katherine Mangu-Ward at Reason was kind enough to link to from Hit and Run. In the comments to her post, a reader asked a reasonable question about the fees you pay to access PACER: “Are you buying the data or paying the court’s bandwidth costs?”

Now I’m usually pretty sympathetic to the idea that the people who benefit from government services should pay for those services through user fees. But there are two reasons this doesn’t apply in a case like this.

First, there’s the math. As it happens, I’m working on a project that will involve hosting large amounts of content, so I’ve been researching hosting costs. One of the most popular managed hosting services is Amazon’s EC2/S3. You can see the pricing for that system here. It provides a good point of comparison for PACER’s fees.

To make the math easy, let’s assume you’ve got a lawyer who downloads 20 documents per week, each of which is 10 pages long and 1 MB in size. Over the course of a year, this lawyer will download around 1000 documents, and he’ll be charged 1000 * 10 * $0.08 = $800 for those documents. (This is actually an underestimate, because the lawyer also has to pay for search results.) And because each document is about 1 MB in size, the total quantity of data transferred from PACER will be around 1 GB.

Now, if you click over to Amazon’s S3 pricing page, you’ll see that the going rate for a GB of data transfer in the private market is… 17 cents. In fairness, Amazon also charges for CPU time on the EC2 cluster, so if the courts actually built their system on EC2/S3, the marginal cost of a GB of data delivery might be more like 50 cents. But charging $800 is three orders of magnitude too much.
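For the curious, here’s the same back-of-the-envelope comparison as a short Python sketch. Every figure is the one used above (PACER’s 8-cents-per-page fee, 10 pages and roughly 1 MB per document, and a generous 50 cents per GB of marginal delivery cost); none of it is official rate-card data:

    # Back-of-the-envelope comparison of PACER fees vs. commodity hosting,
    # using the post's own figures rather than official pricing data.
    DOCS_PER_YEAR = 1000        # ~20 documents per week
    PAGES_PER_DOC = 10
    MB_PER_DOC = 1
    PACER_FEE_PER_PAGE = 0.08   # dollars per page
    COST_PER_GB = 0.50          # ~$0.17 S3 transfer plus estimated EC2 CPU time

    pacer_charge = DOCS_PER_YEAR * PAGES_PER_DOC * PACER_FEE_PER_PAGE
    hosting_cost = (DOCS_PER_YEAR * MB_PER_DOC / 1000) * COST_PER_GB

    print(f"PACER charges:         ${pacer_charge:,.2f}")          # $800.00
    print(f"Marginal hosting cost: ${hosting_cost:.2f}")           # $0.50
    print(f"Markup: roughly {pacer_charge / hosting_cost:,.0f}x")  # ~1,600x

A 1,600-fold markup is the “three orders of magnitude” in question.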

I’m taking a course here at Princeton on IT Policy, taught by my advisor, Ed Felten. It’s been an interesting experience. I think it’s safe to say that I’ve spent more time thinking about the topics than the median member of the class, and the class has been an opportunity to re-acquaint myself with how these issues look to smart people who aren’t immersed in them every day.

The course has a blog, where each class participant is asked to contribute one post per week. I’ve been impressed by the quality of a number of posts. Here is a good post by “Jen C.” about the Authors Guild’s implausible claim that text-to-speech software on the Kindle 2 infringes copyright by creating a derivative work. And here is a fascinating post by Sajid Mehmood about Jonathan Zittrain’s The Future of the Internet, which our own Adam Thierer reviewed here. Sajid points out that Zittrain’s prophecy is more likely to come true if it’s helped along by bad government regulations, an argument that I find persuasive (and not just because he quotes my DMCA paper).

One of the most interesting debates has been over the Google Book Search settlement. A couple of weeks ago, Sajid posted a tentative defense of the settlement, arguing that whatever its flaws, the Google Book Search settlement is a private agreement and the courts would be overstepping their authority to reject it. I responded with a pair of posts making the case that, thanks to the creative use of the class action mechanism, the settlement would have effects far beyond those that could be achieved by an ordinary private contract, and that the results of the settlement would be anticompetitive. Sajid responded with the reasonable point that the settlement will not be the only—or even necessarily the most important—barrier to entry in the book search engine market, and that it’s better to have one firm able to build a book search engine than zero.

We’ll all be blogging throughout the month of April, so I encourage you to check it out.

Like Berin, I tend to think a lot of anti-Google hysteria is over the top. But I think one place where some criticism is warranted is over the impending Google Book Search settlement. Reader Andrew W points us to his recent post on the settlement:

Google does not somehow become the exclusive copyright holder to orphan works. Other groups and companies are welcome to do the same thing and to also make money from it. And this particular monopoly is, contradictorily, limited and temporary. There will be well-funded competitors.

I’m not so sure. Thanks to the magic of the class action mechanism, the settlement will confer on Google a kind of legal immunity that cannot be obtained at any price through a purely private negotiation. It confers on Google immunity not only against suits brought by the actual members of the organizations that sued Google, but also against suits brought by anyone who doesn’t explicitly opt out. That means that Google will be free to mine the vast body of orphan works without fear of liability.

Any competitor that wants to get the same legal immunity Google is getting will have to take the same steps Google did: start scanning books without the publishers’ and authors’ permission, get sued by authors and publishers as a class, and then negotiate a settlement. The problem is that they’ll have no guarantee that the authors and publishers will play along. The authors and publishers may like the cozy cartel they’ve created, and so they may have no particular interest in organizing themselves into a class for the benefit of the new entrant. Moreover, because Google has established the precedent that “search rights” are something that need to be paid for, it’s going to be that much harder for competitors to make the (correct, in my view) argument that indexing books is fair use.

It seems to me that, in effect, Google has secured for itself a perpetual monopoly over the commercial exploitation of orphan works. Google’s a relatively good company, so I’d rather they have this monopoly than the other likely candidates. But I certainly think it’s a reason to be concerned.

Almost a year ago, I wrote about the newly-launched Seasteading Institute, which promises to break the cozy cartel of world governments by developing the technology required to found affordable autonomous communities on the open oceans. It’s an audacious plan, and I expressed some skepticism about whether it can be made to work. But the Institute, led by Patri Friedman, has made an impressive amount of progress in the last year. They’ve done preliminary engineering work on a small seastead design. They hosted a conference that was by all accounts well-attended. And they’ve generated an impressive amount of press coverage.

So I’m excited to see that Friedman will be giving a talk at Cato about his project on April 7. If I lived in DC, I’d definitely be there. I’m still not convinced Seasteading is the wave of the future, but I’m glad there are people giving it their best shot.

Yochai Benkler ponders the death of the newspaper:

Critics of online media raise concerns about the ease with which gossip and unsubstantiated claims can be propagated on the Net. However, on the Net we have all learned to read with a grain of salt between our teeth, like Russians drinking tea through a sugar cube. The traditional media, to the contrary, commanded respect and imposed authority. It was precisely this respect and authority that made The New York Times’ reporting on weapons of mass destruction in Iraq so instrumental in legitimating the lies that the Bush administration used to lead this country to war.

This is a fantastic insight, and indeed, it’s precisely the insight that we libertarians apply to the regulatory state. Just as decentralized media and a skeptical public are better than the cathedral style of news gathering, decentralized certification schemes and a skeptical public are better than a single, cathedral-style regulatory agency “guaranteeing” that businesses are serving consumers well. Most of the time the regulators will protect the public, just as most of the time newspapers get their stories right. The problem is that no institution is perfect, and the consequences of failure are much more serious if you’ve got a population that’s gotten used to blindly trusting the authority figure rather than exercising healthy skepticism. Regulatory agencies are single points of failure, and in a complex economy single points of failure are a recipe for disaster.

Will Wilkinson makes the related point that journalists are prone to journalistic capture that’s very much akin to the regulatory capture that plagues government agencies.

Worried that decentralized news-gathering sources won’t be able to do the job the monolithic newspapers are leaving behind? Jesse Walker has a great piece cataloging the many ways that stories can get from obscure city council meetings to popular attention.

TLF reader mwendy points me to this Eben Moglen paragraph, presumably as evidence of his anti-libertarian agenda:

“…Moreover, there are now many organizations around the world which have earned literally billions of dollars by taking advantage of anarchist production. They have brought their own state of economic dependency on anarchist production to such a high level, that they cannot actually continue operating their businesses without the anarchists’ products. They, therefore, now begin to serve as founders, mentors, and benefactors, for anarchism. They employ our programmers and pay them wages. They assist our programmers in gaining additional technical skill and applying that skill more broadly. They allow me to heavily fund a carefully constructed law firm in New York, to train only lawyers to represent only anarchists on only the payrolls of the big companies which produce the money to pay for the legal representation of anarchism. They have to do that. They need anarchism to be legally solid. They do not want it to fail. They want the anarchist legal institutions that we have created to become stronger over time, because now their businesses depend upon the success of anarchist production.

“In other words, we have reached a very important moment, a moment noticed some hundred years ago by my collaborators Marx and Engels. We have reached the moment at which the bourgeois power sources have turned the crank on invention to the point in which they are actually fueling their own downfall. They have created the necessary structures for their replacement and the forces which are speeding up that replacement are their own forces, which they are deliberately applying because the logic of capitalism compels them to use those new forces to make more money, even though in the long run it speeds the social transition which puts them out of business altogether. This is a very beautiful feeling…”

As I said before, Moglen is not the guy I’d pick to sell free software to libertarians. But I don’t think this passage is as outrageous as mwendy thinks. According to Wikipedia, anarchism “is a political philosophy encompassing theories and attitudes which consider the state, as compulsory government, to be unnecessary, harmful, and/or undesirable.” That certainly sounds like a laudable goal to me. I don’t personally think it’s possible to achieve a stateless society, but there are plenty of self-described anarchists who take fundamentally libertarian policy positions.