July 2011

On CNET this morning, I argue that delay in approving FCC authority for voluntary incentive auctions is largely the fault of last year’s embarrassing net neutrality rulemaking.

While most of the public advocates and many of the industry participants have moved on to other proxy battles (which for most was all net neutrality ever was), Congress has remained steadfast in expressing its great displeasure with the Commission and how it conducted itself for most of 2010.

In the teeth of strong and often bipartisan opposition, the Commission granted itself new jurisdiction over broadband Internet on Christmas Eve last year. Understandably, many in Congress are outraged by Chairman Julius Genachowski’s chutzpah.

So now the equation is simple:  while the Open Internet rules remain on the books, Congress is unlikely to give the Chairman any new powers.

House Oversight Committee Chairman Darrell Issa has made the connection explicit, telling reporters in April that incentive auction authority will not come while net neutrality hangs in the air.  There’s plenty of indirect evidence as well.

Continue reading →

Should social networking sites be regulated as public utilities? That’s the question I take up in my latest Forbes column, “The Danger Of Making Facebook, LinkedIn, Google And Twitter Public Utilities.” I note the rising chatter in the blogosphere about the potential regulation of social networking sites, including Facebook and Twitter. In response, I argue:

public utilities are, by their very nature, non-innovative. Consumers are typically given access to a plain vanilla service at a “fair” rate, but without any incentive to earn a greater return, innovation suffers. Of course, social networking sites are already available to everyone for free! And they are constantly innovating. So, it’s unclear what the problem is here and how regulation would solve it.

I don’t doubt that social networking platforms have become an important part of the lives of a great many people, but that doesn’t mean they are “essential facilities” that should be treated like your local water company. These are highly dynamic networks and services built on code, not concrete. Most of them didn’t even exist 10 years ago. Regulating them would likely drain the entrepreneurial spirit from this sector, discourage new innovation and entry, and potentially raise prices for services that are mostly free of charge to consumers. Social norms, public pressure, and ongoing rivalry will improve existing services more than government regulation ever could.

Read my full essay for more.


On the podcast this week, Woodrow Hartzog, Assistant Professor at Samford University’s Cumberland School of Law and a scholar at Stanford’s Center for Internet and Society, discusses his new paper in Communications Law and Policy entitled, The New Price To Play: Are Passive Online Media Users Bound By Terms of Use? By simply browsing the internet, one can be obligated by a “terms of use” agreement displayed on a website. These agreements, according to Hartzog, aren’t always displayed where a user can immediately read them, and they often contain complicated legalese. Web users can be affected unfavorably by these agreements, particularly when it comes to copyright and privacy issues. Hartzog evaluates what the courts are doing about this and discusses the different factors that could determine the enforceability of these agreements, including the type of notice a user receives.

Related Links

To keep the conversation around this episode in one place, we’d like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?

[By Geoffrey Manne and Joshua Wright.  Cross-posted at TOTM]

Our search neutrality paper has received some recent attention. While the initial response from Gordon Crovitz in the Wall Street Journal was favorable, critics are now voicing their responses. Although we appreciate FairSearch’s attempt to engage with our paper’s central claims, its response is really little more than an extended non sequitur and fails to contribute meaningfully to the debate.

Unfortunately, FairSearch grossly misstates our arguments and, in the process, basic principles of antitrust law and economics. Accordingly, we offer a brief reply to correct a few of the most critical flaws, point out several quotes in our paper that FairSearch must have overlooked when characterizing our argument, and set straight FairSearch’s various economic and legal misunderstandings.

We want to begin by restating the simple claims that our paper does—and does not—make.

Our fundamental argument is that claims that search discrimination is anticompetitive are properly treated skeptically because: (1) discrimination (that is, presenting or ranking a search engine’s own or affiliated content more prevalently than its rivals’ in response to search queries) arises from vertical integration in the search engine market (i.e., Google responds to a query by providing not only “10 blue links” but also perhaps a map or video created by Google or previously organized on a Google-affiliated site, such as YouTube); (2) both economic theory and evidence demonstrate that such integration is generally pro-competitive; and (3) in Google’s particular market, evidence of intense competition and constant innovation abounds, while evidence of harm to consumers is entirely absent. In other words, it is much more likely than not that search discrimination is pro-competitive rather than anticompetitive, and doctrinal error cost concerns accordingly counsel great hesitation in any antitrust intervention, administrative or judicial. As we will discuss, these are claims that FairSearch’s lawyers are quite familiar with.

FairSearch, however, grossly mischaracterizes these basic points, asserting instead that we claim

“that even if Google does [manipulate its search results], this should be immune from antitrust enforcement due to the difficulty of identifying ‘bias’ and the risks of regulating benign conduct.”

This statement is either intentionally deceptive or betrays a shocking misunderstanding of our central claim for at least two reasons: (1) we never advocate for complete antitrust immunity, and (2) it trivializes the very real (and universally accepted) difficulty of distinguishing between pro- and anticompetitive conduct.

Continue reading →

Do-Not-Track is a bit like the word “inconceivable” in the movie The Princess Bride: I do not think it means what people think it means, either in how it is meant to work or in how likely it is to deliver poor results.

Take Mike Swift’s reporting for MercuryNews.com on a study showing that online advertising companies may continue to follow visitors’ Web activity even after those visitors have opted out of tracking.

“The preliminary research has sparked renewed calls from privacy groups and Congress for a ‘Do Not Track’ law to allow people to opt out of tracking, like the Do Not Call list that limits telemarketers,” he writes.

If this is true, it means that people want a Do Not Track law more now that they have learned it would be more difficult to enforce.

That doesn’t make sense … until you look at who Swift interviewed for the article: a Member of Congress who made her name as a privacy regulation hawk and some fiercely committed advocates of regulation. These people were not on the fence before the study, needless to say. (Anne Toth of Yahoo! provides the requisite ounce of balance, but she defends her company and does not address the merits or demerits of a Do-Not-Track law.)

Do-Not-Track is not inconceivable. But the study shows that its advocates are not conceiving of the complexities and drawbacks of a regulatory approach as compared with individually tailored blocking of unwanted tracking, something any Internet user can do right now using Tracking Protection Lists.
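To make the distinction concrete, here is a minimal sketch in Python of the two approaches. The domain names are hypothetical, and the blocklist stands in loosely for what a Tracking Protection List does rather than reproducing its actual file format: an opt-out or Do-Not-Track signal is just a header the ad server is free to ignore, while client-side blocking means the request to the tracker is never sent at all.

    # Illustrative sketch only -- hypothetical domains, not a real client.
    from urllib.parse import urlparse

    # A Tracking Protection List is, at its core, a set of domains the
    # browser refuses to contact. (A stand-in, not the real TPL format.)
    BLOCKLIST = {"tracker.example.com", "ads.example.net"}

    def fetch_with_opt_out(url):
        # Send the request anyway, attaching a Do-Not-Track style header.
        # Whether tracking actually stops depends entirely on the server
        # choosing to honor the signal; the browser cannot verify it.
        headers = {"DNT": "1"}  # a polite request, not an enforcement mechanism
        print(f"GET {url} with headers {headers}")

    def fetch_with_blocklist(url):
        # Refuse to contact blocked domains at all. No request means no
        # data for the tracker to log, whether or not it cooperates.
        host = urlparse(url).hostname
        if host in BLOCKLIST:
            print(f"BLOCKED {url} (matched local protection list)")
            return
        print(f"GET {url}")

    fetch_with_opt_out("http://tracker.example.com/pixel.gif")
    fetch_with_blocklist("http://tracker.example.com/pixel.gif")

The enforcement problem the study documents lives entirely in the first function: an opt-out only works if the tracker decides to behave, whereas blocking the tracker yourself does not depend on anyone’s cooperation.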

I have always struggled with the work of media theorist Marshall McLuhan. I find it to be equal parts confusing and compelling; it’s persuasive at times and then utterly perplexing elsewhere.  I just can’t wrap my head around him and yet I can’t stop coming back to him.

Today would have been his 100th birthday. He died in 1980, but he’s just as towering a figure today as he was during his own lifetime. His work is eerily prescient and speaks to us as if written yesterday instead of decades ago. Take, for example, McLuhan’s mind-blowing 1969 interview with Playboy. [PDF] The verve is awe-inspiring, but much of the substance is simply impenetrable. Regardless, it serves as perhaps the best introduction to McLuhan’s work. I strongly encourage you to read the entire thing. The questions posed by interviewer Eric Norden are brilliant and bring out the best in McLuhan.

I was re-reading the interview while working on a chapter for my next book on Internet optimism and pessimism, a topic I’ve spent a great deal of time pondering here in the past. Toward the end of the interview, McLuhan is asked by Norden to respond to some of his critics. McLuhan responds in typically brilliant, colorful fashion: Continue reading →

Daily news service TechLawJournal (subscription) reports that the U.S. District Court (DC) has granted summary judgment to the National Security Agency in EPIC v. NSA, a federal Freedom of Information Act (FOIA) case regarding the Electronic Privacy Information Center’s request for records regarding Google’s relationship with the NSA.

EPIC requested a wide array of records regarding interactions between Google and the NSA dealing with information security. Reports TLJ:

The NSA responded that it refused to confirm or deny whether it had a relationship with Google, citing Exemption 3 of FOIA (regarding records “specifically exempted from disclosure by statute”) and Section 6 of the National Security Agency Act of 1959 (which prohibits disclosure of information about the NSA).

The FOIA merits of EPIC’s suit are one thing. It’s another for Google to have an intimate relationship with a government agency this secretive.

This would be a good time to not be evil. Google should either sever ties with the NSA or be as transparent as (or more transparent than) federal law would require the NSA to be in the absence of any special protection against disclosure.

A month ago, Rep. Mary Bono Mack introduced a bill (and staff memo) “To protect consumers by requiring reasonable security policies and procedures to protect data containing personal information, and to provide for nationwide notice in the event of a security breach.” These are perhaps the two least objectionable areas for legislating “on privacy” and there’s much to be said for both concepts in principle. Less clear-cut is the bill’s data minimization requirement for the retention of personal information.

But as I finally get a chance to look at the bill on the eve of the July 20 Subcommittee markup, I note one potentially troubling procedural aspect: giving the FTC authority to redefine PII without the procedural safeguards that normally govern the FTC’s operations. The scope of this definition would be hugely important in the future, both because of the security, breach notification, and data minimization requirements attached to it, and because this definition would likely be replicated in future privacy legislation—and changes to this term in one area would likely follow in others. Continue reading →

On the podcast this week, Hal Singer, managing director at Navigant Economics and adjunct professor at Georgetown University’s McDonough School of Business, discusses his new paper on wireless competition, co-authored with Gerald Faulhaber of the University of Pennsylvania and Bob Hahn of Oxford. The FCC produces a yearly report on the competitive landscape of the wireless market, which serves as an overview for policy makers and analysts. The report found the wireless market competitive in years past; in the last two years, however, the FCC has been less willing to interpret the market as competitive. According to Singer, the FCC is relying on indirect evidence, which looks at how concentrated the market is, rather than direct evidence, which looks at falling prices, to make its assessment. Singer argues that, by failing to look at the direct evidence, the report comes to an erroneous conclusion about the real state of competition in wireless markets.
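The “indirect evidence” at issue is structural: concentration measures in the family of the Herfindahl-Hirschman Index (HHI). As a rough, illustrative sketch, the market shares below are invented for the example and are not taken from the FCC report or the Singer, Faulhaber, and Hahn paper:

    # Hypothetical market shares (percent) for four imaginary carriers --
    # illustrative numbers only, not figures from the FCC report.
    shares = [32, 30, 23, 15]

    # The Herfindahl-Hirschman Index is the sum of squared market shares.
    hhi = sum(s ** 2 for s in shares)
    print(hhi)  # 1024 + 900 + 529 + 225 = 2678

    # Under the 2010 DOJ/FTC Horizontal Merger Guidelines, an HHI above
    # 2500 is treated as "highly concentrated." Note that this structural
    # screen says nothing about whether prices are rising or falling --
    # which is the direct evidence Singer says the FCC should weigh.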

Related Links

  • Assessing Competition in U.S. Wireless Markets: Review of the FCC’s Competition Reports, by Singer et al
  • “FCC report dodges answers on wireless industry competition”, Washington Post
  • “FCC Mobile Competition Report Is One Green Light for AT&T/T-Mobile Deal”, Technology Liberation Front
  • To keep the conversation around this episode in one place, we’d like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?