
A recent study by Cecil Bohanon and Michael Hicks at Ball State University’s Digital Policy Institute found that statewide cable franchising has increased broadband deployment.

Half of the US states have now enacted legislation that creates statewide cable franchising. These laws allow new entrants into the video business (principally the phone companies) to get permission to offer video from the state, instead of having to deal with local governments to get cable franchises. Previous research, much of it cited here, found that cable competition reduces cable rates and expands the number of channels available to subscribers. Local franchising often delayed or prevented new competitors from entering the market.

Since the same wires get used to transmit video, telephone, and broadband, Bohanon and Hicks reasoned that opening up entry into cable would also increase competition in broadband and hence increase broadband subscribership. And that’s precisely what their econometric study finds. After controlling for other factors, broadband subscribership is 2-5 percent higher in states that have statewide video franchising. Based on this finding, Bohanon and Hicks estimate that statewide video franchising increased broadband subscribership by about 5 million.
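The flavor of that kind of econometric exercise can be sketched in a few lines. This is purely illustrative: the data are synthetic and the variable names hypothetical, not Bohanon and Hicks's actual specification or dataset.

```python
import numpy as np

# Toy panel: 50 "states" observed for 10 "years" (all numbers made up).
rng = np.random.default_rng(0)
n_states, n_years = 50, 10
state = np.repeat(np.arange(n_states), n_years)
franchise = (state < 25).astype(float)        # half the states "enact" statewide franchising
income = rng.normal(50, 5, n_states)[state]   # a control variable

# True model: franchising lifts subscribership by 3 percentage points
subscribership = 0.20 + 0.03 * franchise + 0.002 * income

# OLS: regress subscribership on the franchising dummy plus the control
X = np.column_stack([np.ones_like(franchise), franchise, income])
beta, *_ = np.linalg.lstsq(X, subscribership, rcond=None)
print(round(beta[1], 4))  # estimated franchising effect → 0.03
```

The point of "controlling for other factors" is exactly what the extra columns of `X` do: without the income control, the franchising coefficient would absorb any systematic differences between adopting and non-adopting states.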

Their study covers the years 1999-2008. Maybe some of these 5 million would eventually have gotten broadband anyway. At worst, this study shows that 5 million subscribers got broadband sooner than they otherwise would have.

The study does not test whether the increase in broadband subscribership occurred because statewide video franchising sped up investment and deployment of infrastructure, or if it simply spurred competition in places where phone and cable companies already had the relevant infrastructure deployed.  I don’t know how one would get the confidential data on broadband investment in order to test this.  But given the large amount of new investment related to broadband, I’d be willing to bet that statewide franchising encouraged both new broadband deployment and more intense competition where infrastructure was already in place.

The Washington Post carried an article earlier this week by Cecilia Kang that noted the Federal Trade Commission could gain enforcement power over online businesses as a result of the financial services legislation under discussion in Congress. Ms. Kang contrasted the possibility of an empowered FTC issuing fast-track regulations against the recent experience of the Federal Communications Commission, which has become bogged down in its search for legal authority to issue net neutrality regulations. 

The comparison is insightful, but not for the reasons you might expect. Part of the debate over the FTC revolves around language in the House financial services bill that would repeal the “Magnuson-Moss” provisions that govern FTC promulgation of consumer protection regulations. (The name comes from the fact that these restrictions on FTC rulemaking were included in the Magnuson-Moss Warranty Act, which got the FTC into the business of regulating car warranties.)

If the FTC wants to regulate some type of general business practice under the FTC Act, it has to establish a factual record substantiating that there is actually a systemic problem that regulation can solve, hold a public hearing, allow cross-examination on factual matters, and conduct an economic analysis of the regulation’s effects.  In short, the commission has to do the homework necessary to demonstrate that its proposed regulation will actually solve a widespread problem that actually exists.

When Tim Muris directed the FTC’s Bureau of Consumer Protection in the early 1980s, he authored an article in Regulation magazine pointing out that when the FTC does careful analysis before issuing a rule, the rule is more likely to benefit consumers, more likely to be upheld in court, and more likely to be issued expeditiously. He contrasted the evidence-based eyeglass rule, which took three years to issue, with the anecdote-based funeral rule, which took ten. Muris noted wryly, “Some critics of my position charge that it is revolutionary to ask a body of lawyers and economists not to impose its own view of proper regulation on the world without first systematically evaluating the problem.” Muris went on to serve as chairman of the FTC from 2001 to 2004, and last month he defended the Magnuson-Moss restrictions in testimony before Congress.

What does this have to do with the FCC?  The FCC lost its case against Comcast on appeal precisely because the FCC tried to take shortcuts. The FCC tried to promote net neutrality by enforcing a set of “principles” that originated in a former chairman’s speech and were never promulgated in a notice-and-comment rulemaking. The FCC commissioners endorsed these principles without investigating whether there was a systemic problem (i.e., more than a few anecdotes of misbehavior). Indeed, Chairman Martin’s Notice of Inquiry on “Broadband Industry Practices,” launched around the same time the FCC took its enforcement action against Comcast, turned up no evidence of a systemic problem. If the FCC now tries to impose net neutrality by reclassifying broadband as a “Title II” common carrier, it will have to do the difficult but necessary work of demonstrating, with real factual evidence, that broadband is more like a common carrier than like the lightly-regulated “information service” the commission previously decided it was.

We don’t need Congress to free the FTC from Magnuson-Moss. Instead, Congress should impose the same requirements on the FCC. Sometimes, taking the time to do your homework leads to better decisions, sooner.

Last week the D.C. Circuit Court of Appeals ruled that the Federal Communications Commission cannot impose net neutrality rules on broadband providers under its “ancillary jurisdiction” under the Communications Act.  If it wants to impose net neutrality, the FCC must first reverse previous decisions and reclassify broadband as a “Title II” common carrier.

Whoa!  The previous two sentences prove that this economist has been spending way too much time around telecom lawyers.

In almost-plain English, the court decision means the FCC cannot impose net neutrality regulations unless it publicly changes its five-headed mind and decides that broadband is much like an old-fashioned telephone monopoly and should be regulated much the same way. 

A lot of regulatory economists pretty much gag at this idea, or worse. Non-economists wonder what triggers this visceral reaction.

Let me explain.  As the recipient of 8 years of excellent Jesuit education, of course I have three reasons.

First, anyone who follows the scholarly literature on economic regulation generally knows that this form of regulation has a pretty checkered track record. In a wide variety of industries, economic regulation has increased prices, inflated costs, stunted innovation, and/or created shortages. In addition, because this regulation transfers enormous amounts of wealth — $75 billion annually in the case of federal telecommunications regulation — it creates enormous incentives for firms to lobby and litigate to bend the rules in their favor. While big corporations may feel they benefit from these expenditures, from a society-wide perspective the fight over wealth transfers is pure waste because it rarely produces anything of value for consumers. 

Utility regulation works best in relatively stagnant industries where a company makes a big capital investment, pays a few employees to run it, and doesn’t need to innovate much; a local water utility is probably the best example. In those kinds of situations, it’s easier for regulators and other outsiders to determine costs, set rates that let the utility earn a reasonable return, and keep the regulated company from gaming the system too much. If you think this describes broadband, well, good luck.

Second, anyone knowledgeable about the economic theory underlying utility regulation (which includes most economists who specialize in the area, and some lawyers) understands that regulation is supposed to be a last resort for “natural monopoly” industries where it’s cheaper to have one firm serve the entire market. A monopolist protected from competition could increase prices, degrade service, or do other things that increase its profits while harming consumers; economic regulation seeks to prevent those behaviors. But if competition is possible, competition is preferable. 

When phone, cable, wireless, and satellite companies bombard us continually with solicitations to switch to their broadband services, and I can see multiple wires running down the street outside my house when I go up on the roof to adjust the satellite dish, it’s pretty darn obvious that broadband is NOT a natural monopoly, even if competition isn’t “perfect.”  Therefore, broadband lacks a key prerequisite for public utility regulation to possibly increase consumer welfare.  Indeed, the most anti-consumer results of economic regulation have occurred when government created monopolies, cartels and/or shortages by imposing this regulation on industries where competition is possible, such as cable TV, trucking, railroads, airlines, oil, and natural gas.

Third, recent economic studies find that the FCC’s decision to classify cable, DSL, and fiber broadband as a less-heavily-regulated “information service” generated a tsunami of investment and spurred competition. See, for example, this study by my GMU colleagues Thomas Hazlett and Anil Caliskan. Some more cites are available on pp. 17-18 of this comment to the FCC. If you don’t believe economic studies, just keep in mind that the aggressive marketing of dirt-cheap entry-level DSL tracks pretty closely with the FCC’s decision that DSL is an information service not subject to Title II regulation.  Coincidence?

So, please excuse those of us regulatory economists who vomit when the subject of Title II comes up. If you check out the links above, perhaps the reaction will be more understandable.

I have not addressed the question of whether it’s realistic to think that reclassification of broadband under Title II could be a workable mechanism to impose just a limited, targeted, surgical, light-handed, smart, data-driven, evidence-based, transparent, transformative, sustainable, green, hybrid, itsy bitsy teenie weeny yellow polka-dot bikini smidgen of net neutrality regulation to prevent only certain forms of anti-consumer discrimination, without imposing the customary broad panoply of public utility price and service regulation. Whether that’s possible in theory, or likely in real-world political practice, is a different issue for a different day. (Whether the other name for that kind of regulation is “antitrust” is also a different issue for a different day.) For the moment, I just wanted to provide some context on the broader Title II issue.

And now I’ll go clean off my shoes.

Several years ago at a conference on universal telecommunications service, one panel moderator noted, “Everything that can be said about universal service has already been said, but not everybody has had a chance to say it, so that’s why we still have these conferences.” After hearings and a study by the Federal Trade Commission, a Federal Communications Commission Notice of Inquiry during the previous administration, the National Broadband Plan, the FCC’s still-open Open Internet proceeding, and Wednesday’s extension of the reply comment period in the Open Internet proceeding, net neutrality is starting to have the same vibe.

That’s why, instead of virtually killing some more virtual trees by writing more lengthy comments and replies, Jerry Brito and I signed onto a declaration by telecommunications researchers which explains that there is no empirical evidence of a systemic problem that would justify net neutrality rules, and these rules might actually ban practices that benefit consumers. Since the world probably doesn’t need another blog post rehashing arguments about this issue, I’ll simply point you to the comment here. It was masterfully written by economist Jeff Eisenach, a veteran of the Federal Trade Commission. (The teeming throngs of humanity who are curious to know whether Jerry and I have any original thoughts to contribute to the issue can read this CommLaw Conspectus article.)

Now that I’ve gotten the shameless self-promotion out of the way, let me MoveOn to a broader point. The debate over net neutrality illustrates how important it is to identify and demonstrate the nature of the problem before trying to solve it.  This applies whether the issue is net neutrality or health care or financial market regulation. Two points in particular bear repeating.

First, ensure that there is empirical evidence of a system-wide problem. The arguments for net neutrality are based on concerns about things the broadband companies might have the ability to do – not empirical proof of widespread abuses that have actually occurred. Less than a handful of famous anecdotes support the argument for net neutrality. Sweeping, systemwide policy changes should only occur when a sweeping, systemwide problem actually exists.

Second, understand the actual nature of the problem. Have a coherent theory of cause and effect that explains why the problem occurs with reasoning that is consistent with what we know about human behavior. Ignoring this point has led to some odd decisions on issues far afield from net neutrality. In 2009, for example, the Department of Energy proposed energy efficiency standards for clothes washers to be used in laundromats and apartment buildings. The justification for the regulation assumed that greedy business owners and landlords willfully ignored opportunities to earn higher profits by investing in energy-efficient appliances! One might argue about whether consumers always identify and act on opportunities to save energy, but assuming that businesses will ignore opportunities to save money is a much bigger stretch.

If you don’t get the problem right, you won’t get the solution right!

As the Wall Street Journal is already reporting, today eBay sustained an important win in its long-running dispute with Tiffany over counterfeit goods sold through its marketplace.  (The full opinion is available here.)

I wrote about this case as my leading example of the legal problems that appear at the border between physical life and digital life, both in “The Laws of Disruption” and a 2008 article for CIO Insight.

To avoid burying the lede, here’s the key point:  for an online marketplace to operate, the burden has to be on manufacturers to police their brands, not the market operator.  Any other decision, regardless of what the law says or does not say, would effectively mean the end of eBay and sites like it.

Broadband Baselines

April 1, 2010

The national broadband plan drafted by Federal Communications Commission staff has a lot of goals in it. Goals for broadband infrastructure deployment include:

  1. Make broadband with 4 Mbps download speeds available to every American
  2. Over the long term, have broadband with 100 Mbps download and 50 Mbps upload speeds available to 100 million American homes, with 50 Mbps downloads available to 100 million homes by 2015
  3. Have the fastest and most extensive wireless broadband networks in the world
  4. Ensure that no state lags significantly behind in 3G wireless coverage
  5. Ensure that every community has access to 1 Gbps broadband service in institutions like schools, libraries, and hospitals

The plan also outlines a number of policy steps that the FCC and other federal agencies could take to help accomplish these goals.

So far, so good. But to truly hold federal agencies accountable for achieving these objectives, we need more than goals, measures, and a list of policy proposals. We also need a realistic baseline that tells us how the market is likely to progress toward these goals in the absence of new federal action, and some way to determine how much the specific policy initiatives affect the amount of the goal achieved.

Here’s what will happen in the absence of a well-defined baseline and analysis that shows how much improvement in the goals is actually caused by federal policies: The broadband plan announces goals. The government will take some actions. Measurement will show that broadband deployment improved, moving the nation closer to achieving the goals. The FCC and other decisionmakers will then claim that their chosen policies have succeeded, because broadband deployment improved.

But in the absence of proof that the policies cause a measurable change in outcomes, this is like the rooster claiming that his crowing makes the sun rise. Scientists call this the “post hoc, ergo propter hoc” fallacy: “B happened after A, therefore A must have caused B.” (Brush up on your Latin a little more, and you’ll even find out what Mercatus means. But I digress.)

Enough abstractions. Let me give a few examples.

The first goal listed above is to ensure that all Americans have access to broadband with 4 Mbps download speeds. In his second comment on my March 17 “Broadband Funding Gap” post, James Riso notes that the plan acknowledges that 5 out of the 7 million households that currently lack access to 4 Mbps broadband will soon be covered by 4th generation wireless. That means coverage for about 71 percent of the households that lack 4 Mbps broadband is already “baked into the cake.”

Accurate accountability must avoid giving future policy changes credit for this increase in deployment, because it was going to happen anyway.  (Of course, policymakers need to avoid taking steps that would discourage this deployment, such as levying the 15 percent universal service fee on 4th generation wireless.) The relevant question for evaluating future policy changes is, “How do they affect deployment to the remaining 2 million households?”
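The baseline accounting here is simple enough to write down explicitly. The coverage figures below just restate the numbers above; the “observed” figure is a made-up illustration of a future measurement, not a real forecast.

```python
# Baseline accounting for the 4 Mbps goal (illustrative numbers).
households_lacking_4mbps = 7_000_000   # households without 4 Mbps today
already_planned_4g       = 5_000_000   # will be covered by planned 4G anyway

baseline_share = already_planned_4g / households_lacking_4mbps
print(f"{baseline_share:.0%} of the gap closes with no new policy")  # → 71%

# Hypothetical future measurement: 6.5 million newly covered households.
observed_newly_covered = 6_500_000
# Credit attributable to new policy is at most the increase over baseline:
policy_credit = observed_newly_covered - already_planned_4g
print(f"{policy_credit:,} households at most attributable to new policy")
```

Without subtracting the baseline, a report would credit policy with all 6.5 million households; with it, the honest upper bound is 1.5 million, which is the whole point of the rooster analogy.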

Similarly, the goal of 50 Mbps to 100 million households by 2015 seems to have been chosen because cable and fiber broadband providers indicate that they plan to cover more than that many homes by 2013 with broadband capable of delivering those speeds (pp. 21-22). Future policy initiatives should get zero credit for contributing toward this goal unless analysis demonstrates that the initiatives increased deployment of very high speed broadband over and above what the companies were already planning.

If you think this point is so basic that it’s not worth mentioning, you haven’t read enough government reports. Post hoc, ergo propter hoc is endemic, and not just on technology-related topics. For example, both sides regularly display this fallacy whenever the unemployment figures get released: “Unemployment increased after Obama’s election, therefore his administration caused the unemployment.” “The recession started when Bush was president, therefore his administration caused the unemployment.” These are at best hypotheses whose truth, untruth, and quantitative significance need to be established by analysis that controls for other factors affecting the results.

Just take this as an advance warning on reporting results of the national broadband plan: Tone down the triumphalism.  

Note: For those of you who just can’t get enough discussion of the national broadband plan, Jerry Brito and I will have a dialog on other aspects of the plan in a future podcast that will be available here on Surprisinglyfree.com.

Are you a fellow Twitter addict who also monitors Internet policy and cyberlaw developments closely? If so, have you noticed that there really isn’t a good Twitter hashtag for this broad and growing issue set?   The #FCC and #FTC hashtags have become catch-alls for a great deal of activity in this area, but they don’t really make sense for other Internet policy issues that those agencies don’t cover. For example, Sec. 230-related issues wouldn’t really fit in either of those. Neither would something about Internet governance, e-commerce taxation, or search engine policy concerns. And just using #Internet doesn’t work because it’s far too broad. #Cyberlaw is probably the best hashtag I’ve found to cover this arena, but it doesn’t get much traction and may also be too narrow since some users might not consider it applicable to digital economics.

So, I’d like to propose #NetPolicy as a catch-all Twitter hashtag for Internet policy matters. It would be a great way to keep track of breaking news, new papers, and upcoming events related to Internet policy issues.

Anyone have thoughts, or a better alternative?

Google v. Everyone

March 23, 2010

I had a long interview this morning with the Christian Science Monitor. Like many of the interviews I’ve had this year, the subject was Google. At the increasingly congested intersection of technology and the law, Google seems to be involved in most of the accidents.

Just to name a few of the more recent pileups, consider the Google books deal, net neutrality and the National Broadband Plan, Viacom’s lawsuit against YouTube for copyright infringement, Google’s very public battle with the nation of China, today’s ruling from the European Court of Justice regarding trademarks, adwords, and counterfeit goods, the convictions of Google executives in Italy over a user-posted video, and the reaction of privacy advocates to the less-than-immaculate conception of Buzz.

In some ways, it should come as no surprise to Google’s legal counsel that the company is involved in increasingly serious matters of regulation and litigation. After all, Google’s corporate goal is the collection, analysis, and distribution of as much of the world’s information as possible, or, as the company puts it, “to organize the world’s information and make it universally accessible and useful.” That’s a goal it has been wildly successful at in its brief history, whether you measure success by use (91 million searches a day) or market capitalization ($174 billion).

As the world’s economy moves from one based on physical goods to one driven by information flow, the mismatch between industrial law and information behavior has become acute, and Google finds itself a frequent proxy in the conflicts.

Michiko Kakutani has a very interesting essay in the New York Times entitled, “Texts Without Contexts,” which does a nice job running through the differences between Internet optimists and pessimists, a topic I’ve spent a great deal of time writing about here. (See: “Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society.”) She surveys many of the books I’ve reviewed and discussed here before by authors such as Neil Postman, Nick Carr, Cass Sunstein, Andrew Keen, Mark Helprin, Jaron Lanier, and others. She notes:

These new books share a concern with how digital media are reshaping our political and social landscape, molding art and entertainment, even affecting the methodology of scholarship and research. They examine the consequences of the fragmentation of data that the Web produces, as news articles, novels and record albums are broken down into bits and bytes; the growing emphasis on immediacy and real-time responses; the rising tide of data and information that permeates our lives; and the emphasis that blogging and partisan political Web sites place on subjectivity. At the same time it’s clear that technology and the mechanisms of the Web have been accelerating certain trends already percolating through our culture — including the blurring of news and entertainment, a growing polarization in national politics, a deconstructionist view of literature (which emphasizes a critic’s or reader’s interpretation of a text, rather than the text’s actual content), the prominence of postmodernism in the form of mash-ups and bricolage, and a growing cultural relativism that has been advanced on the left by multiculturalists and radical feminists, who argue that history is an adjunct of identity politics, and on the right by creationists and climate-change denialists, who suggest that science is an instrument of leftist ideologues.

It’s a great debate, but a very controversial one, of course.  Anyway, go read her entire essay.

Beyond the fact that the Federal Communications Commission (FCC) decided to release the executive summary of its long awaited National Broadband Plan via a PDF of a scanned printed copy, there are other reasons to be concerned about the agency’s ability to centrally plan one of the most important, fast-moving sectors of our economy.  In this video clip, I discussed some of my general reservations with the idea of a gargantuan government industrial policy for the broadband sector, and in this essay I noted how, from what we’ve seen of the plan thus far [Executive Summary], the FCC appears to be engaged in some creative accounting techniques to fund the scheme.

Not everything in The Plan troubles me, however, and I hope to touch on some of the more sensible elements in a future post. But, as I was reading through it, I flagged 5 regulatory hot potatoes in the plan that threaten to derail the entire thing.  In this regard, the parallels between the National Broadband Plan and the debate over health care “reform” are really quite striking. Indeed, it appears the Administration has once again settled upon a “go for broke” (potentially quite literally!) strategy. In both cases, they appear hell-bent on trying to do it all in the form of One Big Plan. Now, I won’t lie to you; such everything-plus-the-kitchen-sink public policy gambits make me nervous based simply on the sheer scale of the undertaking. When Washington tries to regulate massive chunks of the economy using bloated bills and bureaucracies inside the Beltway, it troubles me greatly. But even if the sound of Big Government on Steroids doesn’t raise your blood pressure, one would hope that the prospect of political gridlock and litigation hell would force advocates to scale back their ambitions a tad. After all, what good is a plan that can never pass or be implemented?

That’s why I was rather surprised to see these 5 regulatory initiatives teed up in the National Broadband Plan:

(1) Return of the Forced Access Regulatory Nightmare? The Plan says the FCC will “undertake a comprehensive review of wholesale competition rules to help ensure competition in fixed and mobile broadband services,” as my friend Randy May of the Free State Foundation notes.