December 2012

The following is a guest post by Daniel Lyons, an assistant professor at Boston College Law School who specializes in the areas of property, telecommunications and administrative law.

While much of the broadband world anxiously awaits the DC Circuit’s net neutrality ruling, consumer groups have quietly begun laying the groundwork for their next big offensive, this time against usage-based broadband pricing. That movement took a significant step forward this week as the New America Foundation released a report criticizing data caps, and as Oregon Senator Ron Wyden introduced a bill that would require the Federal Communications Commission to regulate broadband prices.

But as this blog has noted before, these efforts are misguided. Usage-based pricing plans are not inherently anti-consumer or anticompetitive. Rather, they reflect different pricing strategies through which a broadband company may recover its costs from its customer base and fund future infrastructure investment. Usage-based pricing allows broadband providers to force heavier users to contribute more toward the fixed costs of building and maintaining a network. Senator Wyden’s proposal would deny providers this freedom, meaning that lighter users will likely pay more for broadband access and low-income consumers who cannot afford a costly unlimited broadband plan will be left on the wrong side of the digital divide.
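
The arithmetic behind that claim is simple. The sketch below is a hypothetical illustration, not data from any provider; the subscriber counts, fixed cost, and tier prices are invented solely to show how shifting more of a network's fixed cost onto heavy users lowers the price light users pay:

```python
# Hypothetical illustration of the cost-recovery point above; all numbers are invented.
# A network must recover a fixed monthly cost from its subscriber base. Under flat
# pricing everyone splits that cost equally; under tiered pricing heavy users pay
# more and light users pay less, while total recovery stays the same.

FIXED_COST = 10_000.0             # monthly fixed network cost to recover
LIGHT_USERS, HEAVY_USERS = 90, 10

flat_price = FIXED_COST / (LIGHT_USERS + HEAVY_USERS)

heavy_tier = 250.0                # hypothetical price for the heavy-usage tier
light_tier = (FIXED_COST - HEAVY_USERS * heavy_tier) / LIGHT_USERS

print(f"Flat pricing:   everyone pays ${flat_price:.2f}")        # $100.00
print(f"Tiered pricing: light users pay ${light_tier:.2f}, "
      f"heavy users pay ${heavy_tier:.2f}")                      # $83.33 and $250.00
```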

In a working paper I published with the Mercatus Center in October, I had already debunked the arguments that the New America Foundation relies upon to make its case. NAF suggests that broadband providers should be unconcerned about costs because gross margins on broadband service are high, and the marginal cost of data transport is relatively low and falling. This is largely true, but also largely irrelevant. For broadband providers, as in many other networked industries, the challenge is generating sufficient revenue to recover their fixed costs and fund future network investment. Broadband providers have invested over $300 billion in private capital in the past decade to build and upgrade the nation’s broadband networks. And because Internet traffic is expected to triple by 2016, analysts expect them to continue to invest $30–40 billion annually to expand and upgrade their networks.

Continue reading →

By Geoffrey Manne & Berin Szoka

As Democrats insist that income taxes on the 1% must go up in the name of fairness, one Democratic Senator wants to make sure that the 1% of heaviest Internet users pay the same price as the rest of us. It’s ironic how confused social justice gets when the Internet’s involved.

Senator Ron Wyden is beloved by defenders of Internet freedom, most notably for blocking the Protect IP bill—sister to the more infamous SOPA—in the Senate. He’s widely celebrated as one of the most tech-savvy members of Congress. But his latest bill, the “Data Cap Integrity Act,” is a bizarre, reverse-Robin Hood form of price control for broadband. It should offend those who defend Internet freedom just as much as SOPA did.

Wyden worries that “data caps” will discourage Internet use and allow “Internet providers to extract monopoly rents,” quoting a New York Times editorial from July that stirred up a tempest in a teapot. But his fears are straw men, based on four false premises.

First, US ISPs aren’t “capping” anyone’s broadband; they’re experimenting with usage-based pricing—service tiers. If you want more than the basic tier, your usage isn’t capped: you can always pay more for more bandwidth. But few users will actually exceed that basic tier. For example, Comcast’s basic tier, 300 GB/month, is so generous that 98.5% of users will not exceed it. That’s enough for 130 hours of HD video each month (two full-length movies a day) or between 300 and 1000 hours of standard (compressed) video streaming.
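
Those figures are easy to sanity-check. The back-of-envelope calculation below assumes illustrative data rates of roughly 2.3 GB per hour for HD streaming and 0.3 to 1.0 GB per hour for compressed standard-definition video; these are assumptions for the sketch, not Comcast's numbers:

```python
# Back-of-envelope check of the 300 GB/month figures cited above.
# The per-hour data rates are illustrative assumptions, not Comcast's numbers.

CAP_GB = 300  # monthly allotment of the basic tier

def hours_of_video(gb_per_hour: float, cap_gb: float = CAP_GB) -> float:
    """Hours of streaming that fit within the monthly tier at a given data rate."""
    return cap_gb / gb_per_hour

print(f"HD (~2.3 GB/hr):            {hours_of_video(2.3):.0f} hours/month")  # ~130
print(f"Compressed SD (~1.0 GB/hr): {hours_of_video(1.0):.0f} hours/month")  # ~300
print(f"Compressed SD (~0.3 GB/hr): {hours_of_video(0.3):.0f} hours/month")  # ~1000
```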

Second, Wyden sets up a false dichotomy: caps (or tiers, more accurately) are, he says, “appropriate if they are carefully constructed to manage network congestion,” but apparently the only alternative explanation for usage-based pricing is the extraction of monopoly rents. This simply isn’t the case, and propagating that fallacy risks chilling investment in network infrastructure. In fact, usage-based pricing allows networks to charge heavy users more, thereby recovering more costs and actually reducing prices for the majority of us who don’t need more bandwidth than the basic tier permits—and whose usage is effectively subsidized by those few who do. Unfortunately, Wyden’s bill wouldn’t allow pricing structures based on cost recovery—only network congestion. So, for example, an ISP might be allowed to price usage during times of peak congestion, but couldn’t simply offer a lower price for the basic tier to light users.

That’s nuts—from the perspective of social justice as well as basic economic rationality. Even as the FCC was issuing its famous Net Neutrality regulations, the agency rejected proposals to ban usage-based pricing, explaining:

prohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks.

It is unclear why Senator Wyden thinks the FCC—no friend of broadband “monopolists”—has this wrong. Continue reading →

Three rings for the broadcast-kings filling the sky,
Seven for the cable-lords in their head-end halls,
Nine for the telco-men doomed to die,
One for the White House to make its calls
On Capitol Hill where the powers lie,
One ring to rule them all, one ring to find them,
One ring to bring them all and without the Court bind them,
On Capitol Hill where the powers lie.

Myths resonate because they illustrate existential truths. In J.R.R. Tolkien’s mythical tale, The Lord of the Rings, the evil Lord Sauron imbued an otherwise very ordinary ring – the “One Ring” – with an extraordinary power: It could influence thought. When Sauron wore the One Ring, he could control the lords of the free peoples of Middle Earth through lesser “rings of power” he helped create. The extraordinary power of the One Ring was also its weakness: It eventually corrupted all who wore it, even those with good intentions. This duality is the central truth in Tolkien’s tale.

It is also central to current debates about freedom of expression and the Internet.

Since the invention of the printing press, those who control the means of mass communication have had the ability to influence thought. The printing press enabled the rapid and widespread circulation of ideas and information for the first time in history, including ideas that challenged the status quo (e.g., sedition and heresy). Governments viewed this new technology as a threat and responded by establishing control over the machinery of the printing press through state monopolies, press licenses, and special taxation.

“The right to think is the beginning of freedom, and speech must be protected from the government because speech is the beginning of thought.”

The Framers knew that freedom of expression is the foundation of freedom. They also recognized that governments could control thought by controlling the printing press, and included a clause in the First Amendment prohibiting government interference with the “freedom of the press.” Though this clause was aimed at the printing press, its protection is not limited to the mass communications media of the Eighteenth Century. The courts have held that the First Amendment encompasses new mass media technologies, including broadcast television and cable.

Several public interest groups, academics, and pundits across the political spectrum nevertheless argue that the latest mass communications technology – the Internet – does not merit protection from government interference on First Amendment grounds. They assert that neither the dissemination of speech by Internet service providers (ISPs) nor the results of Internet search engines (e.g., Google) are entitled to First Amendment protection. They fear that Internet companies will use the First Amendment to justify the exercise of editorial control over the free expression of their consumers.

Others (including the Competitive Enterprise Institute) argue that the First Amendment applies to both ISPs and search engines. They believe a government with unrestrained control over the means of mass communications has the incentive and the ability to use that power to control the thoughts of its people, which inevitably leads to authoritarianism. They point to Internet censorship by China, Syria, and other authoritarian governments as current proof of this principle.

Both sides in the Internet debate raise legitimate concerns. I suspect many consumers do not want ISPs and search engines to exercise unfettered control over the Internet. I suspect that just as many consumers do not want government to exercise unfettered control over the Internet either. How can we resolve these dual concerns?

The free peoples of Middle Earth struggled with a similar duality at the Council of Elrond, where they decided what should be done with the One Ring. “Why not use this ring?” wondered Boromir, a bold hero who had long fought the forces of Sauron and believed the ring could save his people. Aragorn, a cautious but no less valiant hero, abruptly answered that no one on the Council could safely wield it. When Elrond suggested that the ring must be destroyed, mutual distrust drove the Council to chaos. Order was restored only when Frodo, a hobbit with no armies to command and no physical power, volunteered for the dangerous task of destroying the ring.

The judicial branch is our Frodo. It has no armies to command and no physical power. It must rely on the willingness of others to abide by its decisions and their strength to enforce them. Like the peoples of Middle Earth who relied on Frodo, we rely on the courts to protect us from abuse of government power because the judicial branch is the least threatening to our liberty.

This is as true today as it was when the Constitution was signed. Changes in technology do not change the balance of power among our branches of government. As we have in the earlier eras of the printing press, broadcast television, and cable, we must trust the courts to apply the First Amendment to mass communications in the Internet era.

Providing ISPs and search engines with First Amendment rights would prevent dangerous and unnecessary government interference with the Internet while permitting the government to protect Internet consumers within Constitutional bounds. Although some advocates imply otherwise, application of the First Amendment to Internet companies would not preclude the government from regulating the Internet. The courts uphold regulations that limit freedom of expression so long as they are narrowly tailored to advance a compelling or substantial government interest.

We have always trusted the courts to balance the right to freedom of expression with other rights and governmental interests, and there is no reason to believe they cannot appropriately balance competing concerns involving the Internet. If the courts cannot be trusted with this task, no one can.

By Geoffrey Manne and Berin Szoka

A debate is brewing in Congress over whether to allow the Federal Trade Commission to sidestep decades of antitrust case law and economic theory to define, on its own, when competition becomes “unfair.” Unless Congress cancels the FTC’s blank check, uncertainty about the breadth of the agency’s power will chill innovation, especially in the tech sector. And sadly, there’s no reason to believe that such expansive power will serve consumers.

Last month, Senators and Congressmen of both parties sent a flurry of letters to the FTC warning against overstepping the authority Congress granted the agency in 1914 when it enacted Section 5 of the FTC Act. FTC Chairman Jon Leibowitz has long expressed a desire to stake out new antitrust authority under Section 5 over unfair methods of competition that would otherwise be legal under the Sherman and Clayton antitrust acts. He seems to have had Google in mind as a test case.

On Monday, Congressmen John Conyers and Mel Watt, the top two Democrats on the House Judiciary Committee, issued their own letter telling us not to worry about the larger principle at stake. The two insist that “concerns about the use of Section 5 are unfounded” because “[w]ell established legal principles set forth by the Supreme Court provide ample authority for the FTC to address potential competitive concerns in the relevant market, including search.” The second half of that sentence is certainly true: the FTC doesn’t need a “standalone” Section 5 case to protect consumers from real harms to competition. But that doesn’t mean the FTC won’t claim such authority—and, unfortunately, there’s little by way of “established legal principles” to stop the agency from overreaching. Continue reading →

Given the rate at which telephone companies are losing customers when they cannot raise prices as a regulatory matter, it is preposterous to continue presuming that they could raise prices as an economic matter.

Today, the United States Telecom Association (USTA) asked the Federal Communications Commission (FCC) to declare that incumbent telephone companies are no longer monopolies. Ten years ago, when most households had “plain old telephone service,” this request would have seemed preposterous. Today, when only one in three homes has a phone line, it is merely stating the obvious: Switched telephone service has no market power at all. Continue reading →

Wendell Wallach, lecturer at the Interdisciplinary Center for Bioethics at Yale University, co-author of “Moral Machines: Teaching Robots Right from Wrong,” and contributor to the new book, “Robot Ethics: The Ethical and Social Implications of Robotics,” discusses robot morality.

Though many of those interested in the ethical implications of artificial intelligence focus largely on the ethical implications of humanoid robots in the (potentially distant) future, Wallach’s studies look at moral decisions made by the technology we have now.

According to Wallach, contemporary robotic hardware and software bots routinely make decisions based upon criteria that might be weighted differently if decided by a human actor working on a case-by-case basis. The sensitivity these computers have to human factors is vital to ensuring they make ethically sound decisions.

In order to build a more ethically robust AI, Wallach and his peers work with those in the field to increase the sensitivity displayed by the machines making the routine calculations that affect our daily lives.


The number of major cyberlaw and information tech policy books being published annually continues to grow at an astonishing pace, so much so that I have lost the ability to read and review all of them. In past years, I put together end-of-year lists of important info-tech policy books (here are the lists for 2008, 2009, 2010, and 2011) and I was fairly confident I had read just about everything of importance that was out there (at least that was available in the U.S.). But last year that became a real struggle for me and this year it became an impossibility. A decade ago, there was merely a trickle of Internet policy books coming out each year. Then the trickle turned into a steady stream. Now it has turned into a flood. Thus, I’ve had to become far more selective about what is on my reading list. (This is also because the volume of journal articles about info-tech policy matters has increased exponentially at the same time.)

So, here’s what I’m going to do. I’m going to discuss what I regard to be the five most important titles of 2012, briefly summarize a half dozen others that I’ve read, and then I’m just going to list the rest of the books out there. I’ve read most of them but I have placed an asterisk next to the ones I haven’t.  Please let me know what titles I have missed so that I can add them to the list. (Incidentally, here’s my compendium of all the major tech policy books from the 2000s and here’s the running list of all my book reviews.)

Continue reading →

by Larry Downes and Geoffrey A. Manne

Now that the election is over, the Federal Communications Commission is returning to the important but painfully slow business of updating its spectrum management policies for the 21st century. That includes a process the agency started in September to formalize its dangerously unstructured role in reviewing mergers and other large transactions in the communications industry.

This followed growing concern about “mission creep” at the FCC, which, in deals such as those between Comcast and NBCUniversal, AT&T and T-Mobile USA, and Verizon Wireless and SpectrumCo, has repeatedly been caught with its thumb on the scales of what is supposed to be a balance between private markets and what the Communications Act refers to as the “public interest.” Continue reading →

Why do mobile carriers sell phones with a subscription? My roommate and I were debating this the other night. Most other popular electronic devices aren’t sold this way. Cable and satellite companies don’t sell televisions with their video service. ISPs don’t sell laptops and desktops with their Internet service. Bundling phones with mobile service subscriptions is pretty unique. (The only mass-market analogs I can think of are satellite radio and GPS service.)

Why might this be?   Continue reading →

Would you pay good money for accurate predictions about important events, such as election results or military campaigns? Not if the U.S. Commodity Futures Trading Commission (CFTC) has its way. It recently took enforcement action against overseas prediction markets run by InTrade and TEN. The alleged offense? Allowing Americans to trade on claims about future events.

The blunt version: If you want to put your money where your mouth is, the CFTC wants to shut you up.

A prediction market allows its participants to buy and sell claims payable upon the occurrence of some future event, such as an election or Supreme Court opinion. Because they align incentives with accuracy and tap the wisdom of crowds, prediction markets offer useful information about future events. InTrade, for instance, accurately called the recent U.S. presidential vote in all but one state.
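
For readers unfamiliar with the mechanics, a binary claim pays a fixed amount if the event occurs and nothing otherwise, so its trading price doubles as the crowd's implied probability, and traders with better information profit in expectation. The sketch below is purely illustrative; the payout size and quoted prices are hypothetical, not actual contract terms from any exchange:

```python
# Illustrative mechanics of a binary prediction-market claim (hypothetical numbers).

PAYOUT = 10.00  # a winning claim pays this fixed amount; a losing claim pays nothing

def implied_probability(price: float, payout: float = PAYOUT) -> float:
    """The market price, as a fraction of the payout, is the crowd's implied probability."""
    return price / payout

def expected_profit(price: float, believed_probability: float, payout: float = PAYOUT) -> float:
    """Expected gain from buying one claim if you think the market has the odds wrong."""
    return believed_probability * payout - price

print(implied_probability(6.50))    # 0.65: the market puts the odds at 65%
print(expected_profit(6.50, 0.80))  # 1.50: positive expected value if you believe the odds are 80%
```

That expected-value calculation is what aligns incentives with accuracy: anyone who believes the market's odds are wrong can profit, on average, only by trading toward the truth as they see it.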

As far as the CFTC is concerned, people buying and selling claims about political futures deserve the same treatment as people buying and selling claims about pork futures: Heavy regulations, enforcement actions, and bans. Co-authors Josh Blackman, Miriam A. Cherry, and I described in this recent op-ed why the CFTC’s animosity to prediction markets threatens the First Amendment.

The CFTC has already managed to scare would-be entrepreneurs away from trying to run real-money prediction markets in the U.S. Now it threatens overseas markets. With luck, the Internet will render the CFTC’s censorship futile, saving the marketplace in ideas from the politics of ignorance.

Why take chances, though? I suggest two policies to protect prediction markets and the honest talk they host. First, the CFTC should implement the policies described in the jointly authored Comment on CFTC Concept Release on the Appropriate Regulatory Treatment of Event Contracts, July 6, 2008. (Aside to CFTC: Your web-based copy appears to have disappeared. Ask me for a copy.)

Second, real-money public prediction markets should make clear that they fall outside the CFTC’s jurisdiction by deploying notices, setting up independent contractor relations with traders, and dealing in negotiable conditional notes. For details, see these papers starting with this one.

[Aside to Jerry and Adam: per my promise.]

[Crossposted at Technology Liberation Front, and Agoraphilia.]