February 2009

Not even an economy in shambles will sway the elevation to the Federal Trade Commission chairmanship of Jon Leibowitz, an interventionist-minded commissioner who, like all planners, knows better than others how markets should be structured.

In several important areas, his inclinations (judging from the cheers emanating from interest groups like PIRG and the Center for Digital Democracy) lean toward substituting political “discipline” for the discipline of competitive markets.

He supports “opt-in” requirements for behavioral advertising, which we’ve often argued is a bad idea for a number of reasons. We’ll come back to this later.

He supports antitrust intervention against firms like Intel (and watch out, Google), and favors destructive “conditions” on mergers. Nineteenth-century, smokestack-era antitrust, rather than withering away, now seems dedicated to exploiting and hobbling large-scale transactions in ways that create entities that would never emerge in free markets. Several recent mergers have produced such artificially constrained Frankensteins, or have suffered catastrophic delays. Thus “competition policy” (ha!) neuters the healthy competitive response those deals would otherwise provoke. (See my FCC comment on XM/Sirius in that regard.)

On “net neutrality,” the debate leaps beyond whether markets are adequate to discipline errant behavior; the nominee’s starting point is his doubt that even antitrust intervention is “adequate to the task,” with the implication that new laws may be in order.

Let’s just take net neutrality for now. There are plenty of reasons I think it’s an outrage to regulate price and access on networks and infrastructure; but for the moment, note that the entire concept rests upon numerous misperceptions or misrepresentations (deliberate ones, I often feel, in my less charitable moods) about competitive markets and capitalism. These include, but are not limited to, the following (adapted from an FCC filing I made):

Continue reading →

This is the third in a series of articles about Internet technologies. The first article was about web cookies. The second article explained the network neutrality debate. This article explains network management systems. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed.

There has been lots of talk on blogs recently about Cox Communications’ network management trial. Some see this as another nail in Network Neutrality’s coffin, while many users are just hoping for anything that will make their network connection faster.

As I explained previously, the Network Neutrality debate is best understood as a debate about how to best manage traffic on the Internet.

Those who advocate network neutrality are really calling for legislation that would set strict rules for how ISPs manage traffic; they essentially want to reclassify ISPs as common carriers. Those on the other side of the debate believe that the government cannot write workable rules for something that changes as rapidly as the Internet. They want ISPs to have complete freedom to experiment with different business models, and they believe that anything approaching real discrimination will be swiftly dealt with by market forces.

But what both sides seem to ignore is that traffic must be managed. Even if every connection and router on the Internet were built to carry ten times the expected capacity, there would still be occasional congestion. It is foolish to believe that routers will never become overburdened; they already do. Current routers already have a system for prioritizing packets when they are overburdened: they simply drop every packet that arrives after their buffers are full (an approach known as tail drop). This system is fair, but it is not optimized.
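For readers who like to see the mechanics, here is a minimal sketch of that behavior in Python. It is purely illustrative; real routers do this in hardware with far more machinery, and the class and method names here are my own assumptions, not any vendor’s API:

```python
from collections import deque

class TailDropBuffer:
    """A fixed-size FIFO queue that silently discards any packet
    arriving while the buffer is full -- classic tail drop."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1  # buffer full: the newest arrival is the one lost
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        # Packets leave strictly in arrival order, regardless of contents.
        return self.queue.popleft() if self.queue else None
```

Notice that the buffer never looks inside a packet: a voice packet and a file-transfer packet are treated identically, which is exactly the “fair but not optimized” property described above.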

The network neutrality debate needs to shift to a debate about what should be prioritized and how. One way to prioritize packets is by the type of data they carry: applications that require low latency (voice calls or online gaming, say) would be prioritized, while those that can tolerate delay (bulk file transfers, for instance) would not.
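To make the contrast with tail drop concrete, here is a minimal sketch of class-based prioritization. Real networks signal traffic classes with header markings such as DSCP; the class names and priority values below are assumptions for illustration only:

```python
import heapq

# Hypothetical latency classes; these names and values are assumptions.
PRIORITY = {"voip": 0, "gaming": 1, "web": 2, "bulk": 3}

class PriorityBuffer:
    """Dequeues lower-numbered (latency-sensitive) classes first,
    preserving FIFO order within each class via a counter tie-breaker."""

    def __init__(self):
        self.heap = []
        self.counter = 0

    def enqueue(self, packet, traffic_class):
        heapq.heappush(self.heap, (PRIORITY[traffic_class], self.counter, packet))
        self.counter += 1

    def dequeue(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

buf = PriorityBuffer()
buf.enqueue("file chunk", "bulk")
buf.enqueue("voice frame", "voip")
print(buf.dequeue())  # "voice frame": latency-sensitive traffic jumps ahead
```

One caveat worth noting: strict priority like this can starve low-priority traffic entirely when high-priority traffic is heavy, which is one reason the “how” question is just as contentious as the “what.”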

Continue reading →

According to a new connectivity scorecard created by Leonard Waverman of the London Business School, it’s not the raw size of connections that matters, er, it’s how we use our broadband that really matters. As a result, Americans are more “connected” than we think: we come out #1 (followed by Sweden and Denmark). The report differs from typical studies that rate the U.S. 11th or 16th (or whatever the latest number is) and generally give countries like Korea high marks for their broadband. Per the report’s FAQ:

The Connectivity Scorecard is an attempt to capture how “usefully connected” countries around the world really are. Like any Scorecard, ours is essentially a collection of different metrics, but our metrics encompass usage and skills as well as infrastructure. Further, we recognize that the primary driver of productivity and economic growth is the ability of businesses to use ICT effectively. Thus we give business – and those measures related to business infrastructure and usage – the weight that economic statistics suggest it should be given.

So take that, Korea!

This week, the Ninth Circuit Court of Appeals struck down a California video game statute as unconstitutional, holding that it violated both the First and Fourteenth Amendments to the federal Constitution. The California law, passed in October 2005 (A.B. 1179), would have blocked the sale of “violent” video games to those under 18 and required labels on all games; offending retailers could have been fined for failure to comply. The law was immediately challenged by the Video Software Dealers Association and the Entertainment Software Association, and in August 2007 a district court decision in Video Software Dealers Association v. Schwarzenegger [decision here] entered a permanent injunction against it. The Ninth Circuit heard the state’s challenge to the injunction last year and handed down its decision this week [decision here], holding the statute unconstitutional. The key passage:

We hold that the Act, as a presumptively invalid content based restriction on speech, is subject to strict scrutiny and not the “variable obscenity” standard from Ginsberg v. New York, 390 U.S. 629 (1968). Applying strict scrutiny, we hold that the Act violates rights protected by the First Amendment because the State has not demonstrated a compelling interest, has not tailored the restriction to its alleged compelling interest, and there exist less-restrictive means that would further the State’s expressed interests. Additionally, we hold that the Act’s labeling requirement is unconstitutionally compelled speech under the First Amendment because it does not require the disclosure of purely factual information; but compels the carrying of the State’s controversial opinion. Accordingly, we affirm the district court’s grant of summary judgment to Plaintiffs and its denial of the State’s cross-motion. Because we affirm the district court on these grounds, we do not reach two of Plaintiffs’ challenges to the Act: first, that the language of the Act is unconstitutionally vague, and, second, that the Act violates Plaintiffs’ rights under the Equal Protection Clause of the Fourteenth Amendment.

Continue reading →

It’s good to see Google and Microsoft playing nice (for once):

Microsoft has licensed the Exchange ActiveSync protocol to several other mobile communications players, including Apple. Horacio Gutierrez, a top Microsoft intellectual property and licensing executive, said in a statement that Google’s licensing of the patents related to the protocol “is a clear acknowledgement of the innovation taking place at Microsoft.”

He said it also exemplifies the company’s “openness to generally license our patents under fair and reasonable terms so long as licensees respect Microsoft intellectual property.”

Check out Google’s new service.

Here at TLF we often worry about government encroachment on the latest and greatest technologies. It seems that federal regulators want to control everything that has to do with our beloved and still largely free Internet: how data moves around, whether or not we can encrypt it, how long it is stored, who owns it, and how they can get their hands on it.

But even relatively low-tech means of communication are under attack too, or at least are rumored to be.

Lately there has been so much clamor over the Fairness Doctrine (an abandoned rule requiring broadcast radio and television stations to air contrasting views on controversial issues of public importance) that the Obama administration has stated publicly that the President is against reviving it.

Even so, the mascot of the anti-Fairness Doctrine crowd, Rush Limbaugh, has voiced his opinion in an op-ed in today’s Wall Street Journal.

Mr. Limbaugh’s position is obvious: he doesn’t like the Fairness Doctrine.  Not because he’s against fairness or thinks that liberal voices shouldn’t be heard, but because, as he puts it, “The dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.”

Continue reading →

And so begins another fight over data retention. As Declan summarizes:

Republican politicians on Thursday called for a sweeping new federal law that would require all Internet providers and operators of millions of Wi-Fi access points, even hotels, local coffee shops, and home users, to keep records about users for two years to aid police investigations. The legislation, which echoes a measure proposed by one of their Democratic colleagues three years ago, would impose unprecedented data retention requirements on a broad swath of Internet access providers and is certain to draw fire from businesses and privacy advocates. […] Two bills have been introduced so far — S.436 in the Senate and H.R.1076 in the House. Each of the companion bills is titled “Internet Stopping Adults Facilitating the Exploitation of Today’s Youth Act,” or Internet Safety Act.

Julian also has coverage over at Ars and quotes CDT’s Greg Nojeim, who says the data retention language is “invasive, risky, unnecessary, and likely to be ineffective.” I think that’s generally correct. Moreover, I find it ironic that at a time when so many in Congress seemingly want online providers to collect and retain LESS data about users, this bill proposes that ISPs be required to collect and retain MORE data. One wonders how those two legislative priorities will be reconciled!

Don’t get me wrong. It’s good that Congress is taking steps to address the scourge of child pornography, especially with stiffer sentences for offenders and greater resources for law enforcement officials. Extensive data retention mandates, however, would be unlikely to help much given the ease with which bad guys could circumvent them using alternative access points or proxies. Finally, retention mandates pose a threat to the privacy of average law-abiding citizens and impose expensive burdens on online intermediaries.

We’ve had more to say about data retention here at the TLF over the years. Here are a few things to read: Continue reading →

ICANN has just released a second draft of its Applicant Guidebook, which would guide the creation of new generic top-level domains (gTLDs) such as .BLOG, .NYC or .BMW. As ICANN itself declared (PDF), “New gTLDs will bring about the biggest change in the Internet since its inception nearly 40 years ago.” PFF Adjunct Fellow and former ICANN Board member Michael Palage addressed the key problems with ICANN’s original proposal in his paper ICANN’s “Go/No-Go” Decision Concerning New gTLDs (PDF & embedded below), released earlier this week.

ICANN deserves credit for its detailed analysis of the many comments on the original draft, which Mike summarized back in December. ICANN also deserves credit for addressing two strong concerns of the global Internet community in response to the first draft:

  • ICANN has removed its proposed 5% global domain name tax on all registry services, something Mike explains in greater detail in his “Go/No-Go” paper.
  • ICANN has commissioned a badly-needed economic study on the dynamics of the domain name system “in broad.” But such a study must address how the fees ICANN collects from specific user communities relate to the actual costs of the services ICANN provides. The study should also consider why gTLDs should continue to provide such a disproportionate percentage of ICANN’s funding—currently 90%—given increasing competition between gTLDs and ccTLDs (e.g., the increasing use of .CN in China instead of .COM).

These concerns are part of a broader debate:  Will ICANN abide by its mandate to justify its fees based on recovering the costs of services associated with those fees, or will ICANN be free to continue “leveraging its monopoly over an essential facility of the Internet (i.e., recommending additions to the Internet’s Root A Server) to charge whatever fees it wants?”  If, as Mike has discussed, ICANN walks away from its existing contractual relationship with the Department of Commerce and claims “fee simple absolute” ownership of the domain name system, who will enforce such a cost-recovery mandate?  

But ICANN simply “kicked the can down the road on the biggest concern”: how to minimize abusive domain name registrations (e.g., cybersquatting, typosquatting, phishing, etc.) and reduce their impact on consumers. Continue reading →

My new article on “FCC v. Fox and the Future of the First Amendment” has just been published in the February 2009 edition of Engage, the journal of the Federalist Society. Here’s how it begins:

On November 4th, 2008, the Supreme Court heard oral arguments in the potentially historic free speech case of Federal Communications Commission v. Fox Television Stations, Inc. This case, which originated in the Second Circuit Court of Appeals, deals with the FCC’s new policy for “fleeting expletives” on broadcast television. The FCC lost and appealed to the Supreme Court. By contrast, the so-called “Janet Jackson case” — CBS v. FCC — was heard in the Third Circuit Court of Appeals. The FCC also lost that case and has also petitioned the Supreme Court to review the lower court’s ruling.

These two cases reflect an old and odd tension in American media policy and First Amendment jurisprudence. Words and images presented over one medium (in this case, broadcast television) are regulated differently than when transmitted through any other media platform (such as newspapers, cable TV, DVDs, or the Internet). Various rationales have been put forward in support of this asymmetrical regulatory standard. Those rationales have always been weak, however. Worse yet, they have opened the door to an array of other regulatory shenanigans, such as the so-called Fairness Doctrine, and many other media marketplace restrictions.

Whatever sense this arrangement made in the past, technological and marketplace developments are now calling into question the wisdom and efficacy of the traditional broadcast industry regulatory paradigm. This article will explore both the old and new rationales for differential First Amendment treatment of broadcast television and radio operators and conclude that those rationales: (1) have never been justified, and (2) cannot, and should not, survive in our new era of media abundance and technological convergence.

I go on in the piece to make the case against those rationales and to call on the Supreme Court to use the Fox and CBS cases to end this historical First Amendment anomaly of treating broadcast platforms differently from all other media providers.

This article can be downloaded as a PDF here, or viewed below the fold in the Scribd reader.

Continue reading →

Up with people!


They can be so entertaining.