This week I will be attending two terrific conferences on Sec. 230 and Internet intermediary liability issues. On Thursday, the Stanford Technology Law Review hosts an all-day event on “Secondary and Intermediary Liability on the Internet” at the Stanford Law School. It includes three major panels on intermediary liability as it pertains to copyright, trademark, and privacy. On Friday, the amazing Eric Goldman and his colleagues at the Santa Clara Law School’s High Tech Law Institute host an all-star event on “47 U.S.C. § 230: a 15 Year Retrospective.” Berin Szoka and Jim Harper will also be attending both events (Harper is speaking at the Stanford event) and Larry Downes will be at the Santa Clara event. So if you also plan to attend, come say ‘Hi’ to us. We don’t bite! (We have, however, been known to snarl.)
In the meantime, below I’ve posted a few links to the many things we have said about Section 230 and online intermediary liability issues here on the TLF in the past, along with this graphic depicting some of the emerging threats to Sec. 230 from various proposals to “deputize the online middleman.” As we’ve noted here many times before, Sec. 230 is the “cornerstone of Internet freedom” that has allowed a “utopia of utopias” to develop online. It would be a shame if lawmakers rolled back its protections and opted for an onerous new legal/regulatory approach to handling online concerns. Generally speaking, education and empowerment should trump regulation and punishing liability.
[UPDATE: Josh links to a WSJ article reporting that EU antitrust enforcers raided several (unnamed) e-book publishers as part of an apparent antitrust investigation into the agency model and whether it is “improperly restrictive.” Whatever that means. Key grafs:
At issue for antitrust regulators is whether agency models are improperly restrictive. Europe, in particular, has strong anticollusion laws that limit the extent to which companies can agree on the prices consumers will eventually be charged.
Amazon, in particular, has vociferously opposed the agency practice, saying it would like to set prices as it sees fit. Publishers, by contrast, resist the notion of online retailers' deep discounting.
It is unclear whether the animating question is whether the publishers might have agreed to a particular pricing model, or to particular prices within that model. As a legal matter that distinction probably doesn’t matter at all; as an economic matter it would seem to be more complicated–to be explored further another day . . . .]
A year ago I wrote about the economics of the e-book publishing market in the context of the dispute between Amazon and some publishers (notably Macmillan) over pricing. At the time I suggested a few things about how the future might pan out (never a good idea . . . ):
And that’s really the twist. Amazon is not ready to be a platform in this business. The economic conditions are not yet right and it is clearly making a lot of money selling physical books directly to its users. The Kindle is not ubiquitous and demand for electronic versions of books is not very significant–and thus Amazon does not want to take on the full platform development and distribution risk. Where seller control over price usually entails a distribution of inventory risk away from suppliers and toward sellers, supplier control over price correspondingly distributes platform development risk toward sellers. Under the old system Amazon was able to encourage the distribution of the platform (the Kindle) through loss-leader pricing on e-books, ensuring that publishers shared somewhat in the costs of platform distribution (from selling correspondingly fewer physical books) and allowing Amazon to subsidize Kindle sales in a way that helped to encourage consumer familiarity with e-books. Under the new system it does not have that ability and can only subsidize Kindle use by reducing the price of Kindles–which impedes Amazon from engaging in effective price discrimination for the Kindle, does not tie the subsidy to increased use, and will make widespread distribution of the device more expensive and more risky for Amazon.
This “agency model,” if you recall, is one in which publishers, rather than Amazon, set the price for electronic versions of their books sold via Amazon and pay Amazon a percentage. The problem from Amazon’s point of view, as I mention in the quote above, is that without the ability to control the price of the books it sells, Amazon is limited essentially to fiddling with the price of the reader–the platform–itself in order to encourage more participation on the reader side of the market. But I surmised (again in the quote above) that fiddling with the price of the platform would be a far blunter and potentially costlier instrument than controlling the price of the books themselves, mainly because the latter correlates almost perfectly with usage and the former does not. In the end, Amazon may never recoup its losses because it will have accidentally subsidized Kindle purchases by people with little interest in actually using the devices (either because they’re sticking with paper or because Apple has leapfrogged the competition).
It appears, nevertheless, that Amazon has indeed been pursuing this pricing strategy. According to this post from Kevin Kelly,
John Walkenbach noticed that the price of the Kindle was falling at a consistent rate, lowering almost on a schedule. By June 2010, the rate was so unwavering that he could easily forecast the date at which the Kindle would be free: November 2011.
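Kelly’s point is, at bottom, a simple linear extrapolation: if the price falls at a steady rate, the zero-price date drops out of a straight-line fit. A minimal sketch of that kind of forecast is below; the dates and prices are hypothetical placeholders chosen to mimic a steady decline, not Walkenbach’s actual data.

```python
# Walkenbach-style forecast: fit a straight line to observed Kindle
# prices and solve for the date the fitted price reaches zero.
# NOTE: these (date, price) points are made up for illustration.
from datetime import date, timedelta

observations = [
    (date(2009, 10, 1), 259.0),
    (date(2010, 6, 1), 189.0),
    (date(2010, 8, 1), 139.0),
]

# Least-squares fit of price = a + b * (days since first observation)
t0 = observations[0][0]
xs = [(d - t0).days for d, _ in observations]
ys = [p for _, p in observations]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = ybar - b * xbar

# Fitted price hits zero when a + b*t = 0, i.e. t = -a / b
zero_day = t0 + timedelta(days=round(-a / b))
print(zero_day)  # with these made-up points, the line crosses zero in 2011
```

The same back-of-the-envelope logic underlies the “free Kindle by November 2011” prediction: only the slope and intercept of the actual observed prices differ.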
Nate Anderson of Ars Technica has posted an interview with Sen. Al Franken (D-MN) about Defining Internet “Freedom”. Neither Sen. Franken nor Mr. Anderson ever gets around to defining that term in their exchange, but the clear implication of the piece is that “freedom” means freedom for the government to plan more and for policymakers to more closely monitor and control the Internet economy. The clearest indication of this comes when Sen. Franken repeats the old saw that net neutrality regulation is “the First Amendment issue of our time.”
As a lover of liberty, I find this corruption of language and continued debasement of the term “freedom” extremely troubling. The thinking at work here reflects the ongoing effort by many cyber-progressives (or “cyber-collectivists,” as I prefer to call them) to redefine Internet freedom as liberation from the supposed tyranny of the marketplace and the corresponding empowerment of technocratic philosopher kings to guide us toward a more enlightened and noble state of affairs. We are asked to ignore our history lessons, which teach us that centralized planning all too often leads to massively inefficient outcomes, myriad unintended consequences, bureaucratic waste, and regulatory capture. Instead, we are asked to believe that high-tech entrepreneurs are the true threat to human progress and liberty. They are cast as nefarious villains, and their innovations, we are told, represent threats to our “freedom.” We even hear silly comparisons likening innovators like Apple to something out of George Orwell’s 1984.
On the podcast this week, Jim Harper, director of information policy studies at the Cato Institute, discusses identification systems. He talks about REAL ID, a national uniform ID law passed in 2005 that states have contested, and NSTIC, a more recent government proposal to create an online identification “ecosystem.” Harper discusses some of the hidden costs of establishing national identification systems and why doing so is not a proper role of government. He also comments on the reasoning behind national ID proposals and talks about practical, beneficial limits to transparency.