In one sense, Siva Vaidhyanathan’s new book, The Googlization of Everything (And Why We Should Worry), is exactly what you would expect: an anti-Google screed that predicts a veritable techno-apocalypse will befall us unless we do something to deal with this company that supposedly “rules like Caesar.” (p. xi) Employing the panic-inducing Chicken Little rhetoric apparently required to sell books these days, Vaidhyanathan tells us that “the stakes could not be higher,” (p. 7) because the “corporate lockdown of culture and technology” (p. xii) is imminent.

After lambasting the company in a breathless fury over the opening 15 pages of the book, Vaidhyanathan assures us that “nothing about this means that Google’s rule is as brutal and dictatorial as Caesar’s. Nor does it mean that we should plot an assassination,” he says. Well, that’s a relief! Yet he goes on to argue that Google is sufficiently dangerous that “we should influence—even regulate—search systems actively and intentionally, and thus take responsibility for how the Web delivers knowledge.” (p. xii) Why should we do that? Basically, Google is just too damn good at what it does. The company has the audacity to give consumers exactly what they want! “Faith in Google is dangerous because it increases our appetite for goods, services, information, amusement, distraction, and efficiency.” (p. 55) That is problematic, Vaidhyanathan says, because “providing immediate gratification draped in a cloak of corporate benevolence is bad faith.” (p. 55) But this raises the question: What limiting principle should be put in place to curb our appetites, and who or what should enforce it?

On the podcast this week, Siva Vaidhyanathan, professor of media studies at the University of Virginia, discusses his new book, The Googlization of Everything (And Why We Should Worry). Vaidhyanathan talks about why he thinks many people have “blind faith” in Google, why we should worry about it, and why he doesn’t think it’s likely that a genuine Google competitor will emerge. He also discusses potential roles of government, calling search neutrality a “nonstarter,” but proposing the idea of a commission to monitor online search. Finally, he talks about a “Human Knowledge Project,” an idea for a global digital library, and why a potential monopoly on information by such a project doesn’t worry him the way that Google does.


To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?

We’ve said it here before too many times to count: When it comes to the future of content and services — especially online or digitally-delivered content and services — there is no free lunch. Something has to pay for all that stuff, and increasingly that something is advertising. But not just any type of advertising — targeted advertising is the future. We see that again today with Skype’s announcement that it is rolling out an advertising scheme, as well as in this Wall Street Journal story (“TV’s Next Wave: Tuning In to You”) about how cable and satellite TV providers are ramping up their targeted advertising efforts.

No doubt, we’ll soon hear the same old complaints and fears trotted out about these developments. We’ll hear about how “annoying” such ads are or how “creepy” they are. Yet few will bother detailing what the actual harm is in being delivered more tailored or targeted commercial messages. After all, there’s actually a benefit to receiving ads that may be of more interest to us. Much traditional advertising was quite “spammy” in that it was sent to the mass market without a care in the world about who might see or hear it. But in a diverse society, it would be optimal if the ads you saw better reflected your actual interests and tastes. And that’s a primary motivation for why so many content and service providers are turning to ad targeting techniques. As Skype noted in its announcement today: “We may use non-personally identifiable demographic data (e.g. location, gender and age) to target ads, which helps ensure that you see relevant ads. For example, if you’re in the US, we don’t want to show you ads for a product that is only available in the UK.” Similarly, the Journal article highlights a variety of approaches that television providers are using to better tailor ads to their viewers.
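To see just how mechanical (and how coarse) this sort of targeting typically is, here is a minimal sketch of demographic ad filtering in Python. The ad inventory, field names, and user profile below are hypothetical, invented purely for illustration; the point is simply that matching on non-personal attributes like country and age is all it takes to keep a UK-only ad away from a U.S. user, per Skype’s example.

```python
# Minimal sketch of demographic ad targeting. The inventory, field names,
# and user profile are hypothetical -- not Skype's actual system.
ADS = [
    {"id": "uk-broadband", "countries": {"UK"}, "min_age": 18},
    {"id": "us-coffee",    "countries": {"US"}, "min_age": 18},
    {"id": "global-music", "countries": {"US", "UK"}, "min_age": 13},
]

def eligible_ads(user, ads=ADS):
    """Return the ads whose non-personal demographic criteria the user matches."""
    return [
        ad for ad in ads
        if user["country"] in ad["countries"] and user["age"] >= ad["min_age"]
    ]

# A 30-year-old in the US never sees the UK-only ad.
print([ad["id"] for ad in eligible_ads({"country": "US", "age": 30})])
# -> ['us-coffee', 'global-music']
```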

Some will still claim it’s too “creepy.” But I addressed that objection in my recent filing to the Federal Trade Commission on its new privacy green paper.

Twitter curmudgeon @derekahunter writes: “With all the medical advances of last 100 years, why hasn’t anyone created a cough drop that doesn’t taste like crap?” Dammit, he’s right! Why hasn’t the market for cold remedies produced a tasty cough drop? Put differently, the market for cold remedies has failed to produce a tasty cough drop. The market has failed. Market . . . failure.

We have now established the appropriateness of a regulatory solution for the taste problem in the field of cold remedies. Have we not? There is a market failure.

No, we haven’t.

“Market failure” is not what happens when a given market has failed so far to reach outcomes that a smart person would prefer. It occurs when the rules, signals, and sanctions in and around a given marketplace would cause preference- and profit-maximizing actors to reach a sub-optimal outcome. You can’t show that there’s a market failure by arguing that the current state of the actual market is non-ideal. You have to show that the rules around that marketplace lead to non-ideal outcomes. The bad taste of cough drops is not evidence of market failure.

The failure of property rights to account for environmental values leads to market failure. A coal-fired electric plant might belch smoke into the air, giving everyone downwind a bad day and a shorter life. If the company and its customers don’t have to pay the costs of that, they’re going to over-produce and over-consume electricity at the expense of the electric plant’s downwind neighbors. The result is sub-optimal allocation of goods, with one set of actors living high on the hog and another unhappily coughing and wheezing.
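A back-of-the-envelope example (with made-up numbers) shows the mechanics. Assume a simple linear demand for electricity, a constant private cost to the plant, and a per-unit smoke cost borne entirely by the neighbors. Because the plant and its customers ignore that last term, the market settles on more output than a full social accounting would justify:

```python
# Illustrative numbers only: linear demand for electricity and constant costs.
# Marginal benefit to consumers of the q-th unit: MB(q) = 100 - q
# Private marginal cost to the plant:             MC  = 20
# External cost (smoke) borne by neighbors:       EXT = 30 per unit
MC, EXT = 20, 30
mb = lambda q: 100 - q

# The market stops where marginal benefit meets the *private* cost...
q_market = next(q for q in range(101) if mb(q) <= MC)        # 80 units
# ...but total welfare stops rising where MB meets the *full* social cost.
q_social = next(q for q in range(101) if mb(q) <= MC + EXT)  # 50 units

print(q_market, q_social, q_market - q_social)  # 80 50 30
```

With these illustrative numbers, thirty units’ worth of electricity gets produced only because the smoke is free to the people making and buying it.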

Take an issue that’s closer to home for tech policy folk: People seem to underweight their privacy when they go online, promiscuously sharing anything and everything on Facebook, Twitter, and everyplace else. Marketers are hoovering up this data and using it to sell things to people. The data is at risk of being exposed to government snoops. People should be more attentive to privacy. They’re not thinking about long-term consequences. Isn’t this a market failure?

It’s not. It’s consumers’ preferences not matching up with the risks and concerns that people like me and my colleagues in the privacy community share. Consumers are preference-maximizing—but we don’t like their preferences! That is not a market failure. Our job is to educate people about the consequences of their online behavior, to change the public’s preferences. That’s a tough slog, but it’s the only way to get privacy in the context of maximizing consumer welfare.

If you still think there’s a market failure in this area—I readily admit that I’m on the far edge of my expertise with complex economic concepts like this—you haven’t finished making your case for regulation. You need to show that the rules, signals, and sanctions in and around the regulatory arena would produce a better outcome than the marketplace would. Be sure that you compare real market outcomes to real regulatory outcomes, not real market outcomes to ideal regulatory outcomes. Most arguments for privacy regulation simply fail to account for the behavior of the regulatory universe.

Adam has collected quotations on the subject of regulatory capture from many experts. I wrote a brief series of “real regulators” posts on the SEC and the Madoff scam a while back (1, 2, 3). And a recent article I’m fond of takes up a problem that many people assume only consumers suffer from, asking: “Are Regulators Rational?”

There’s no good-tasting cough drop because the set of drops that remedy coughing and the set of drops that taste good are mutually exclusive. Not because of market failure.

I’ve written posts today for both CNET and Forbes on legislation introduced yesterday by Senators Olympia Snowe and John Kerry that would require the FCC and NTIA to complete inventories of existing spectrum allocations.  These inventories were mandated by President Obama last June (after Congress failed to pass legislation), but got lost at the FCC in the net neutrality armageddon.

Everyone believes that without relatively quick action to make more spectrum available, the mobile Internet could seize up. Given the White House’s showcasing of wireless as a leading source of new jobs, investment, and improved living conditions for all Americans, both Congress and President Obama, along with the FCC and just about everyone else, know this is a crisis that must be avoided.

Indeed, the National Broadband Plan conservatively estimates that mobile users will need 300-500 MHz of new spectrum over the next 5-10 years.

Toll-free number allocation remains one of the last vestiges of telecom’s monopoly era. Unlike with Internet domain names, there is no organized way of requesting, registering, reserving, or purchasing 800, 888, 877, 866, or the newly available 855 numbers, the five prefixes that currently designate toll-free service. If you’re lucky or creative enough, you can visit any number of sites (just Google “855 toll free code”) and the number you request might be available. If not, you’re SOL.

That’s because the toll-free number regulation regime is cumbersome, opaque and bureaucratic. And while the FCC technically prohibits the warehousing, hoarding, transfer and sale of toll-free numbers, enforcement is difficult and inconsistent.

The North American Numbering Council, a federal advisory committee created to advise the FCC on numbering issues, will meet in Washington on March 9. On the agenda is whether to move forward with exploring market mechanisms that could be applied to toll-free number assignment.

It’s an idea worth pursuing. It is clear that some toll-free numbers have equity value, especially when they can bolster a brand identity or be easily remembered; 1-800-SOUTHWEST and 1-800-FLOWERS are two examples.

Yet right now, the toll-free numbering pool is a vast and unruly commons that recognizes no difference in value between a desirable mnemonic and a generic sequence of digits. Numbers are assigned on a first-come, first-served basis. End users can request a specific number, but they can get it only if it is available from the pool. Under the current rules, they cannot offer to buy the number from its current user. Nor can the user of 1-800-555-2665, which alphanumerically translates to both 1-800-555-BOOK and 1-800-555-COOK, put the number up for auction to see who will pay more, the bookstore or cooking school.
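To make the mnemonic point concrete, here is a small Python sketch of the standard telephone keypad mapping (the letter-to-digit mapping is the familiar one printed on any phone; the helper function itself is just illustrative). It shows why 1-800-555-BOOK and 1-800-555-COOK resolve to the identical string of digits, so only one party can hold the underlying number at a time:

```python
# Standard keypad letter groups; this is why two different mnemonics can
# claim the same underlying toll-free number.
KEYPAD = {"2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
          "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ"}
LETTER_TO_DIGIT = {letter: digit for digit, letters in KEYPAD.items()
                   for letter in letters}

def to_digits(vanity: str) -> str:
    """Translate a vanity string like '1-800-555-BOOK' into plain digits."""
    return "".join(LETTER_TO_DIGIT.get(ch, ch) for ch in vanity.upper()
                   if ch.isalnum())

print(to_digits("1-800-555-BOOK"))  # 18005552665
print(to_digits("1-800-555-COOK"))  # 18005552665 -- the very same number
```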


This week I will be attending two terrific conferences on Sec. 230 and Internet intermediary liability issues. On Thursday, the Stanford Technology Law Review hosts an all-day event on “Secondary and Intermediary Liability on the Internet” at the Stanford Law School. It includes three major panels on intermediary liability as it pertains to copyright, trademark, and privacy. On Friday, the amazing Eric Goldman and his colleagues at the Santa Clara Law School’s High Tech Law Institute host an all-star event on “47 U.S.C. § 230: a 15 Year Retrospective.” Berin Szoka and Jim Harper will also be attending both events (Harper is speaking at the Stanford event), and Larry Downes will be at the Santa Clara event. So if you also plan to attend, come say ‘Hi’ to us. We don’t bite! (We have, however, been known to snarl.)

In the meantime, down below I have posted a few links to the many things we have said about Section 230 and online intermediary liability issues here on the TLF in the past, as well as this graphic depicting some of the emerging threats to Sec. 230 from various proposals to “deputize the online middleman.” As we’ve noted here many times before, Sec. 230 is the “cornerstone of Internet freedom” that has allowed a “utopia of utopias” to develop online. It would be a shame if lawmakers rolled back its protections and opted for an onerous new legal/regulatory approach to handling online concerns. Generally speaking, education and empowerment should trump regulation and punishing liability.

[Graphic: “Deputization of the Middleman”]


[Cross-posted at Truth on the Market]

[UPDATE:  Josh links to a WSJ article telling us that EU antitrust enforcers raided several (unnamed) e-book publishers as part of an apparent antitrust investigation into the agency model and whether it is “improperly restrictive.”  Whatever that means.  Key grafs:

At issue for antitrust regulators is whether agency models are improperly restrictive. Europe, in particular, has strong anticollusion laws that limit the extent to which companies can agree on the prices consumers will eventually be charged. Amazon, in particular, has vociferously opposed the agency practice, saying it would like to set prices as it sees fit. Publishers, by contrast, resist the notion of online retailers’ deep discounting.

It is unclear whether the animating question is whether the publishers might have agreed to a particular pricing  model, or to particular prices within that model.  As a legal matter that distinction probably doesn’t matter at all; as an economic matter it would seem to be more complicated–to be explored further another day . . . .]

A year ago I wrote about the economics of the e-book publishing market in the context of the dispute between Amazon and some publishers (notably Macmillan) over pricing.  At the time I suggested a few things about how the future might pan out (never a good idea . . . ):

And that’s really the twist.  Amazon is not ready to be a platform in this business.  The economic conditions are not yet right and it is clearly making a lot of money selling physical books directly to its users.  The Kindle is not ubiquitous and demand for electronic versions of books is not very significant–and thus Amazon does not want to take on the full platform development and distribution risk.  Where seller control over price usually entails a distribution of inventory risk away from suppliers and toward sellers, supplier control over price correspondingly distributes platform development risk toward sellers.  Under the old system Amazon was able to encourage the distribution of the platform (the Kindle) through loss-leader pricing on e-books, ensuring that publishers shared somewhat in the costs of platform distribution (from selling correspondingly fewer physical books) and allowing Amazon to subsidize Kindle sales in a way that helped to encourage consumer familiarity with e-books.  Under the new system it does not have that ability and can only subsidize Kindle use by reducing the price of Kindles–which impedes Amazon from engaging in effective price discrimination for the Kindle, does not tie the subsidy to increased use, and will make widespread distribution of the device more expensive and more risky for Amazon.

This “agency model,” if you recall, is one where, essentially, publishers, rather than Amazon, determine the price for electronic versions of their books sold via Amazon and pay Amazon a percentage. The problem from Amazon’s point of view, as I mention in the quote above, is that without the ability to control the price of the books it sells, Amazon is limited essentially to fiddling with the price of the reader–the platform–itself in order to encourage more participation on the reader side of the market. But I surmised (again in the quote above) that fiddling with the price of the platform would be far more blunt and potentially costly than controlling the price of the books themselves, mainly because the latter correlates almost perfectly with usage and the former does not. In the end, Amazon may end up subsidizing lots of Kindle purchases from which it is never able to recoup its losses, because it accidentally subsidized purchases by people who have little interest in actually using the devices (either because they’re sticking with paper or because Apple has leapfrogged the competition).

It appears, nevertheless, that Amazon has indeed been pursuing this pricing strategy.  According to this post from Kevin Kelly,

John Walkenbach noticed that the price of the Kindle was falling at a consistent rate, lowering almost on a schedule. By June 2010, the rate was so unwavering that he could easily forecast the date at which the Kindle would be free: November 2011.
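The forecast Kelly describes is just a straight-line extrapolation. Here is a minimal sketch of that arithmetic; the price points are invented to mimic the steady decline he describes (they are not Walkenbach’s actual figures), so treat the output as illustrative only:

```python
# Fit a straight line to observed Kindle prices and solve for the date the
# fitted price hits zero. The prices below are hypothetical, chosen only to
# track the "consistent rate" of decline described in the post.
import numpy as np

months = np.array([0, 8, 16, 24, 31])        # months since the Nov. 2007 launch
prices = np.array([399, 329, 259, 199, 139])  # hypothetical list prices ($)

slope, intercept = np.polyfit(months, prices, 1)  # least-squares line
zero_month = -intercept / slope                   # where the line crosses $0
print(f"price ~ {intercept:.0f} + {slope:.2f}*month; hits $0 near month {zero_month:.0f}")
# With these made-up numbers, that is roughly month 48, i.e., late 2011.
```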


Nate Anderson of Ars Technica has posted an interview with Sen. Al Franken (D-MN) about Defining Internet “Freedom”. Neither Sen. Franken nor Mr. Anderson ever gets around to defining that term in their exchange, but the clear implication from the piece is that “freedom” means freedom for the government to plan more and for policymakers to more closely monitor and control the Internet economy. The clearest indication of this comes when Sen. Franken repeats the old saw that net neutrality regulation is “the First Amendment issue of our time.”

As a lover of liberty, I find this corruption of language and continued debasement of the term “freedom” to be extremely troubling. The thinking we see at work here reflects the ongoing effort by many cyber-progressives (or “cyber-collectivists,” as I prefer to call them) to redefine Internet freedom as liberation from the supposed tyranny of the marketplace and the corresponding empowerment of technocratic philosopher kings to guide us toward a more enlightened and noble state of affairs. We are asked to ignore our history lessons, which teach us that centralized planning and bureaucracy all too often lead to massively inefficient outcomes, myriad unintended consequences, bureaucratic waste, and regulatory capture. Instead, we are asked to believe that high-tech entrepreneurs are the true threat to human progress and liberty. They are cast as nefarious villains, and their innovations, we are told, represent threats to our “freedom.” We even hear silly comparisons likening innovators like Apple to something out of George Orwell’s 1984.

On the podcast this week, Jim Harper, director of information policy studies at the Cato Institute, discusses identification systems. He talks about REAL ID, a national uniform ID law passed in 2005 that states have contested, and NSTIC, a more recent government proposal to create an online identification “ecosystem.” Harper discusses some of the hidden costs of establishing national identification systems and why doing so is not a proper role of government. He also comments on the reasoning behind national ID proposals and talks about practical, beneficial limits to transparency.


To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?