Articles by Jerry Ellig

Jerry Ellig is a senior research fellow at the Mercatus Center at George Mason University. He has also served as deputy director of the Office of Policy Planning at the Federal Trade Commission and as a senior economist at the Joint Economic Committee of the US Congress.


Last week the D.C. Circuit Court of Appeals ruled that the Federal Communications Commission cannot impose net neutrality rules on broadband providers using its “ancillary jurisdiction” under the Communications Act.  If it wants to impose net neutrality, the FCC must first reverse previous decisions and reclassify broadband as a “Title II” common carrier service.

Whoa!  The previous two sentences prove that this economist has been spending way too much time around telecom lawyers.

In almost-plain English, the court decision means the FCC cannot impose net neutrality regulations unless it publicly changes its five-headed mind and decides that broadband is much like an old-fashioned telephone monopoly and should be regulated much the same way. 

A lot of regulatory economists pretty much gag at this idea, or worse. Non-economists wonder what triggers this visceral reaction.

Let me explain.  As the recipient of 8 years of excellent Jesuit education, of course I have three reasons.

First, anyone who follows the scholarly literature on economic regulation generally knows that this form of regulation has a pretty checkered track record. In a wide variety of industries, economic regulation has increased prices, inflated costs, stunted innovation, and/or created shortages. In addition, because this regulation transfers enormous amounts of wealth — $75 billion annually in the case of federal telecommunications regulation — it creates enormous incentives for firms to lobby and litigate to bend the rules in their favor. While big corporations may feel they benefit from these expenditures, from a society-wide perspective the fight over wealth transfers is pure waste because it rarely produces anything of value for consumers. 

Utility regulation works best in relatively stagnant industries where a company makes a big capital investment, pays a few employees to run it, and doesn’t need to innovate much; a local water utility is probably the best example. In those kinds of situations, it’s easier for regulators and other outsiders to determine costs, set rates that let the utility earn a reasonable rate of return, and keep the regulated company from gaming the system too much. If you think this describes broadband, well, good luck.

Second, anyone knowledgeable about the economic theory underlying utility regulation (which includes most economists who specialize in the area, and some lawyers) understands that regulation is supposed to be a last resort for “natural monopoly” industries where it’s cheaper to have one firm serve the entire market. A monopolist protected from competition could increase prices, degrade service, or do other things that increase its profits while harming consumers; economic regulation seeks to prevent those behaviors. But if competition is possible, competition is preferable. 

When phone, cable, wireless, and satellite companies bombard us continually with solicitations to switch to their broadband services, and I can see multiple wires running down the street outside my house when I go up on the roof to adjust the satellite dish, it’s pretty darn obvious that broadband is NOT a natural monopoly, even if competition isn’t “perfect.”  Therefore, broadband lacks a key prerequisite for public utility regulation to possibly increase consumer welfare.  Indeed, the most anti-consumer results of economic regulation have occurred when government created monopolies, cartels and/or shortages by imposing this regulation on industries where competition is possible, such as cable TV, trucking, railroads, airlines, oil, and natural gas.

Third, recent economic studies find that the FCC’s decision to classify cable, DSL, and fiber broadband as a less-heavily-regulated “information service” generated a tsunami of investment and spurred competition. See, for example, this study by my GMU colleagues Thomas Hazlett and Anil Caliskan. Some more cites are available on pp. 17-18 of this comment to the FCC. If you don’t believe economic studies, just keep in mind that the aggressive marketing of dirt-cheap entry-level DSL tracks pretty closely with the FCC’s decision that DSL is an information service not subject to Title II regulation.  Coincidence?

So, please excuse those of us regulatory economists who vomit when the subject of Title II comes up. If you check out the links above, perhaps the reaction will be more understandable.

I have not addressed the question of whether it’s realistic to think that reclassification of broadband under Title II could be a workable mechanism to impose just a limited, targeted, surgical, light-handed, smart, data-driven, evidence-based, transparent, transformative, sustainable, green, hybrid, itsy bitsy teenie weeny yellow polka-dot bikini smidgen of net neutrality regulation to prevent only certain forms of anti-consumer discrimination, without imposing the customary broad panoply of public utility price and service regulation. Whether that’s possible in theory, or likely in real-world political practice, is a different issue for a different day. (Whether the other name for that kind of regulation is “antitrust” is also a different issue for a different day.) For the moment, I just wanted to provide some context on the broader Title II issue.

And now I’ll go clean off my shoes.

Several years ago at a conference on universal telecommunications service, one panel moderator noted, “Everything that can be said about universal service has already been said, but not everybody has had a chance to say it, so that’s why we still have these conferences.” After hearings and a study by the Federal Trade Commission, a Federal Communications Commission Notice of Inquiry during the previous administration, the National Broadband Plan, the FCC’s still-open Open Internet proceeding, and Wednesday’s extension of the reply comment period in the Open Internet proceeding, net neutrality is starting to have the same vibe.

That’s why, instead of virtually killing some more virtual trees by writing more lengthy comments and replies, Jerry Brito and I signed onto a declaration by telecommunications researchers which explains that there is no empirical evidence of a systemic problem that would justify net neutrality rules, and these rules might actually ban practices that benefit consumers. Since the world probably doesn’t need another blog post rehashing arguments about this issue, I’ll simply point you to the comment here. It was masterfully written by economist Jeff Eisenach, a veteran of the Federal Trade Commission. (The teeming throngs of humanity who are curious to know whether Jerry and I have any original thoughts to contribute to the issue can read this CommLaw Conspectus article.)

Now that I’ve gotten the shameless self-promotion out of the way, let me MoveOn to a broader point. The debate over net neutrality illustrates how important it is to identify and demonstrate the nature of the problem before trying to solve it.  This applies whether the issue is net neutrality or health care or financial market regulation. Two points in particular bear repeating.

First, ensure that there is empirical evidence of a system-wide problem. The arguments for net neutrality are based on concerns about things the broadband companies might have the ability to do – not empirical proof of widespread abuses that have actually occurred. Less than a handful of famous anecdotes support the argument for net neutrality. Sweeping, systemwide policy changes should only occur when a sweeping, systemwide problem actually exists.

Second, understand the actual nature of the problem. Have a coherent theory of cause and effect that explains why the problem occurs with reasoning that is consistent with what we know about human behavior. Ignoring this point has led to some odd decisions on issues far afield from net neutrality. In 2009, for example, the Department of Energy proposed energy efficiency standards for clothes washers to be used in laundromats and apartment buildings. The justification for the regulation assumed that greedy business owners and landlords willfully ignored opportunities to earn higher profits by investing in energy-efficient appliances! One might argue about whether consumers always identify and act on opportunities to save energy, but assuming that businesses will ignore opportunities to save money is a much bigger stretch.

If you don’t get the problem right, you won’t get the solution right!

Broadband Baselines

April 1, 2010

The national broadband plan drafted by Federal Communications Commission staff has a lot of goals in it. Goals for broadband infrastructure deployment include:

  1. Make broadband with 4 Mbps download speeds available to every American
  2. Over the long term, make broadband with 100 Mbps download and 50 Mbps upload speeds available to 100 million American homes, with an interim milestone of 50 Mbps downloads to 100 million homes by 2015
  3. Have the fastest and most extensive wireless broadband networks in the world
  4. Ensure that no state lags significantly behind in 3G wireless coverage
  5. Ensure that every community has access to 1 Gbps broadband service in institutions like schools, libraries, and hospitals

The plan also outlines a number of policy steps that the FCC and other federal agencies could take to help accomplish these goals.

So far, so good. But to truly hold federal agencies accountable for achieving these objectives, we need more than goals, measures, and a list of policy proposals. We also need a realistic baseline that tells us how the market is likely to progress toward these goals in the absence of new federal action, and some way to determine how much the specific policy initiatives affect the amount of the goal achieved.

Here’s what will happen in the absence of a well-defined baseline and analysis that shows how much improvement in the goals is actually caused by federal policies: The broadband plan announces goals. The government will take some actions. Measurement will show that broadband deployment improved, moving the nation closer to achieving the goals. The FCC and other decisionmakers will then claim that their chosen policies have succeeded, because broadband deployment improved.

But in the absence of proof that the policies cause a measurable change in outcomes, this is like the rooster claiming that his crowing makes the sun rise. Logicians call this the “post hoc, ergo propter hoc” fallacy: “B happened after A, therefore A must have caused B.” (Brush up on your Latin a little more, and you’ll even find out what Mercatus means. But I digress.)

Enough abstractions. Let me give a few examples.

The first goal listed above is to ensure that all Americans have access to broadband with 4 Mbps download speeds. In his second comment on my March 17 “Broadband Funding Gap” post, James Riso notes that the plan acknowledges that 5 of the 7 million households that currently lack access to 4 Mbps broadband will soon be covered by 4th generation wireless. That means coverage for roughly 71 percent of the households that lack 4 Mbps broadband is already “baked into the cake.”

Accurate accountability must avoid giving future policy changes credit for this increase in deployment, because it was going to happen anyway.  (Of course, policymakers need to avoid taking steps that would discourage this deployment, such as levying the 15 percent universal service fee on 4th generation wireless.) The relevant question for evaluating future policy changes is, “How do they affect deployment to the remaining 2 million households?”
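The baseline arithmetic here is simple enough to make explicit. Below is a quick sketch using the plan’s figures as reported above; the variable names are mine, not the plan’s:

```python
# Households lacking 4 Mbps broadband, per the national broadband plan
lacking_4mbps = 7_000_000
# Of those, households that 4th-generation wireless is already slated to reach
covered_by_4g = 5_000_000

# Share of the gap that closes with no new federal action
baked_in_share = covered_by_4g / lacking_4mbps
remaining = lacking_4mbps - covered_by_4g

print(f"Deployment already baked into the cake: {baked_in_share:.0%}")  # 71%
print(f"Households future policy must still reach: {remaining:,}")  # 2,000,000
```

Any honest evaluation of new policy initiatives should measure their effect on those remaining 2 million households only.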

Similarly, the goal of 50 Mbps to 100 million households by 2015 seems to have been chosen because cable and fiber broadband providers indicate that they plan to cover more than that many homes by 2013 with broadband capable of delivering those speeds (pp. 21-22). Future policy initiatives should get zero credit for contributing toward this goal unless analysis demonstrates that the initiatives increased deployment of very high speed broadband over and above what the companies were already planning.

If you think this point is so basic that it’s not worth mentioning, you haven’t read enough government reports. Post hoc, ergo propter hoc is endemic, and not just on technology-related topics. For example, both sides regularly display this fallacy whenever the unemployment figures get released: “Unemployment increased after Obama’s election, therefore his administration caused the unemployment.” “The recession started when Bush was president, therefore his administration caused the unemployment.” These are at best hypotheses whose truth, untruth, and quantitative significance need to be established by analysis that controls for other factors affecting the results.

Just take this as an advance warning on reporting results of the national broadband plan: Tone down the triumphalism.  

Note: For those of you who just can’t get enough discussion of the national broadband plan, Jerry Brito and I will have a dialog on other aspects of the plan in a future podcast that will be available here on Surprisinglyfree.com.

The Federal Communications Commission released the full version of its National Broadband Plan yesterday — all 11+ megabytes of it. A quick read (!) of the 300+ page document reveals that the problem of broadband “availability” is not nearly as big as the numbers highlighted in the plan would lead one to believe. If you’re careful to read the caveats and the numbers in the plan that don’t get a lot of emphasis, the problem of people who lack access to broadband is quite manageable.

The plan states that 14 million Americans lack access to terrestrial broadband capable of delivering a download speed of 4 megabits per second (Mbps). Making broadband of this speed available to all Americans would cost $24 billion more than the likely revenues from sale of the service.

(To calculate the dollar figure, the report’s authors estimated the stream of future costs and revenues from extending 4 Mbps broadband to places where it does not currently exist, then “discounted” them to present values to make the costs and revenues comparable.  The $24 billion “funding gap” is thus a present discounted value.)
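For readers who haven’t met a present discounted value before, here is a minimal sketch of the mechanics. The 7 percent discount rate, 20-year horizon, and $2 billion annual net cost are invented for illustration; they are not the plan’s figures:

```python
# Present value of a hypothetical stream of net costs (costs minus revenues).
# Rate, horizon, and cash flow are illustrative assumptions, not plan data.
rate = 0.07                # assumed annual discount rate
annual_net_cost = 2.0e9    # assumed net cost per year, in dollars
years = 20

# Discount each year's net cost back to the present and sum the stream
pv = sum(annual_net_cost / (1 + rate) ** t for t in range(1, years + 1))
print(f"Present value of the stream: ${pv / 1e9:.1f} billion")  # ≈ $21.2 billion
```

A dollar of cost ten years out counts for less than a dollar today, which is why the plan’s $24 billion figure is a single present-value number rather than a raw sum of annual outlays.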

Several key assumptions drive these estimates.

First, the plan explicitly declined to include satellite when it measured availability of broadband.

Second, even if the plan’s authors wanted to include satellite, the choice of the 4 Mbps benchmark also excludes all but the most expensive residential satellite broadband plans.  Perhaps more importantly, the 4 Mbps benchmark also allows the plan to ignore “third generation” wireless Internet as an option for households located in places that don’t have wired Internet.

These are important omissions, because the plan reports that 98 percent of Americans live in places that have 3G wireless Internet, while 95 percent of Americans have access to wired broadband capable of delivering 4 Mbps downloads. If we include 3G wireless Internet, only 2 percent of Americans live in places where broadband is not available, rather than 5 percent. In other words, including wireless broadband in the calculation cuts the size of the problem by more than half!  If we include satellite, the number of Americans who don’t have broadband available must be truly minuscule.
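That availability arithmetic can be laid out in a few lines, using the coverage shares the plan reports (the simplifying assumption that 3G coverage subsumes the wired footprint is mine, not the plan’s):

```python
# Share of Americans with broadband available, per the plan's figures
wired_coverage = 0.95   # 4 Mbps-capable wired broadband
with_3g = 0.98          # covered once 3G wireless is also counted

unserved_wired_only = 1 - wired_coverage   # 5 percent
unserved_with_3g = 1 - with_3g             # 2 percent

print(f"Unserved counting wired only: {unserved_wired_only:.0%}")
print(f"Unserved counting 3G too: {unserved_with_3g:.0%}")
```

Going from 5 percent unserved to 2 percent is a reduction of more than half, which is the point of the paragraph.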

Why is 4 Mbps the goal, anyway? The plan does not explain this in great detail, but it appears 4 Mbps was chosen because that’s the average speed broadband subscribers currently receive in the US, and the plan treats the average as the speed experienced by the “typical” broadband user. The only problem is that other figures in the plan show 4 Mbps is not the speed experienced by the “typical” US broadband user. The same graph that shows the average broadband speed is 4.1 Mbps (on page 21) also shows that the median speed is 3.1 Mbps. Half of broadband users have speeds above the median, and half have speeds below it; that’s the definition of a median. When the median is 25 percent below the average, it’s simply not accurate to say that the average shows the speed that a “typical” user receives. The typical user receives a speed slower than 4 Mbps.
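To see how a mean can overstate the “typical” experience, consider a toy set of connection speeds, chosen here to mirror the plan’s reported mean of 4.1 Mbps and median of 3.1 Mbps (the individual numbers are invented for illustration, not FCC data):

```python
import statistics

# Hypothetical speeds (Mbps) for nine subscribers; a couple of very fast
# connections pull the mean well above what the middle subscriber gets.
speeds = [1.5, 2, 2.5, 3, 3.1, 3.5, 4, 8, 9.3]

print(f"mean   = {statistics.mean(speeds):.1f} Mbps")    # 4.1
print(f"median = {statistics.median(speeds):.1f} Mbps")  # 3.1
```

The two fast connections raise the average without moving the median at all, which is exactly why the median is the better measure of the “typical” user.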

The 4 Mbps figure is also way above the goals other nations have set for broadband; the plan shows that other countries typically seek to ensure that all citizens can connect to broadband at speeds between 0.5 and 2 Mbps. A goal in that neighborhood would surely allow most 3G wireless services to count as broadband when estimating availability.

That $24 billion “funding gap” also deserves comment. That’s the amount of subsidy the plan estimates will be required to make 4 Mbps broadband available to all Americans.  If you read the plan carefully, you will also find that a whopping $14 billion of that is required to bring broadband to the highest-cost two-tenths of one percent of American housing units — 250,000 homes  (page 138). That works out to $56,000 per housing unit!

One wonders whether most Americans would be willing to spend $56,000 per home to ensure that these last few folks can get broadband that’s as fast as the FCC’s broadband planners have decided they deserve. Here’s another option. A basic satellite broadband package costs about $70 per month. Giving these 250,000 expensive-to-reach households satellite broadband would only cost about $200 million a year. It would cost less than half of that if we actually expect these consumers to pay part of the cost — maybe the same $40 per month the rest of us pay in urban and suburban areas?

That cuts the broadband “funding gap” to $10 billion, plus maybe $100 million a year for the satellite subscriptions. If we abandon the arbitrary 4 Mbps definition of “acceptable” broadband speed, so that 3G wireless counts as broadband, the gap would be maybe half that size (since more than half of the people who don’t have wired broadband available do have 3G wireless available).
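The whole back-of-the-envelope above fits in a few lines. The plan supplies the $24 billion gap, the $14 billion high-cost slice, and the 250,000 homes; the $70 satellite price and the $40 consumer contribution are this post’s assumptions:

```python
total_gap = 24e9         # plan's estimated funding gap (present value)
high_cost_gap = 14e9     # slice for the costliest 0.2% of housing units
high_cost_homes = 250_000

per_home = high_cost_gap / high_cost_homes
print(f"Wired subsidy per hardest-to-reach home: ${per_home:,.0f}")  # $56,000

# Alternative: satellite service at an assumed $70/month for those homes
satellite_annual = high_cost_homes * 70 * 12
print(f"Full satellite subsidy: ${satellite_annual / 1e6:.0f} million/year")  # $210M

# If those households pay the ~$40/month urban rate, subsidize the difference
net_annual = high_cost_homes * (70 - 40) * 12
print(f"Net satellite subsidy: ${net_annual / 1e6:.0f} million/year")  # $90M

remaining_gap = total_gap - high_cost_gap
print(f"Remaining wired funding gap: ${remaining_gap / 1e9:.0f} billion")  # $10B
```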

 And guess what — the broadband plan identifies about $15.5 billion in current subsidies that the FCC could repurpose to support broadband. In other words, the FCC has the ability to solve the broadband funding gap all by itself, without a dime of new money from taxpayers, telephone subscribers, or broadband subscribers!

I’m surprised the plan didn’t point that out; coulda made the five commissioners look like real heroes.

The FCC today released an executive summary of its National Broadband Plan, which is supposed to be delivered to Congress tomorrow.  Of course, executive summaries by their nature are brief and usually don’t explain the underlying logic and evidence supporting the conclusions. Here are a few highlights, some possible interpretations, and things to look for when the full plan gets released tomorrow:

Recommendation: “Undertake a comprehensive review of wholesale competition rules to help ensure competition in fixed and mobile broadband.” This could signal that the FCC plans to re-impose “unbundling” or “line sharing” regulations, which would require broadband companies to let competitors use their lines and other facilities at regulated rates. Such initiatives would likely undermine broadband deployment and investment.  Economic research by my GMU colleague Tom Hazlett and others finds that broadband investment, competition, and deployment in the US took off only after the FCC eliminated line-sharing requirements. Christina Forsberg and I summarized a lot of this research here.

Recommendation: “Make 500 MHz of spectrum available for broadband within ten years … Enable incentives and mechanisms to repurpose spectrum.” This is a fantastic recommendation. A Mercatus Center review of the costs of federal telecommunications regulations found that federal spectrum allocation, which prevents spectrum from being reallocated to uses that consumers value highly (like broadband), is by far the costliest federal regulation affecting telecom and the Internet. This recommendation indicates the FCC leadership would like to auction a lot more spectrum and share the proceeds with existing users (like broadcasters) in order to overcome resistance to reallocation. It’s not quite a market in spectrum, but it might be the closest the FCC can come.

Recommendation: “Broaden the USF contribution base to ensure USF remains sustainable over time.” Uh-oh. I’m not sure what this means, but if it means that broadband subscribers will have to start paying into the FCC’s universal service fund (USF), watch out! Most economic studies find that consumer demand for broadband is very price-sensitive. That means if the FCC slaps broadband with universal service fees (which currently exceed 10 percent), we’ll see a big drop in broadband subscribership — maybe by 4-7 million subscribers. This is, of course, precisely the opposite of what the FCC wants to accomplish!
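For the curious, that 4-7 million figure is consistent with a standard elasticity calculation. The subscriber base and elasticity range below are my illustrative assumptions (the studies in question simply find demand “very price-sensitive”), not FCC numbers:

```python
subscribers = 70e6   # assumed US broadband subscriptions, circa 2010
fee = 0.10           # universal service fee as a share of the monthly bill

# Assumed own-price elasticities of broadband demand (absolute value)
for elasticity in (0.6, 1.0):
    lost = subscribers * fee * elasticity
    print(f"elasticity {elasticity}: about {lost / 1e6:.1f} million fewer subscribers")
```

With those assumptions, a 10 percent fee implies roughly 4 to 7 million lost subscriptions, matching the range in the post.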

Recommendation: “Reform intercarrier compensation, which provides implicit subsidies to telephone companies by eliminating per minute charges over the next ten years…” Another excellent idea.  “Intercarrier compensation” refers to payments phone companies make when they hand traffic off to each other. Small, rural phone companies usually receive the highest per minute payments — as much as 15-30 cents per minute! This is a huge markup on long-distance phone service — another price-sensitive service!

Recommendation: Provide subsidies so that rural areas can have broadband with download speeds of 4 Mbps.  It will be interesting to read in the full plan where this 4 Mbps figure came from. Does it reflect the speed of service that a lot of Americans currently have, so these subsidies are just supposed to help equalize opportunities for rural residents? Or does it reflect some balancing of the costs and benefits of subsidizing broadband in rural areas?  Or is this a magic number experts believe subscribers need, regardless of the choices consumers actually make in the marketplace and regardless of what it costs?

The executive summary also lists a set of goals, such as ensuring that every American has the ability to subscribe to “robust” broadband service, having 100 million households with access to 100 Mbps broadband, and ensuring that the US has the fastest and most extensive wireless networks of any nation.  When the full plan comes out, look carefully at whether or how the FCC plans to measure accomplishment of these goals.  More importantly, look to see whether the FCC explains how it will quantify how much its own policies actually contribute to these goals over time. The FCC is famous for NOT doing these kinds of things, so let’s see if the broadband plan signals a new era in accountability.

An Associated Press story this morning by Eileen AJ Connelly provides our latest example of Regulatory Whack-A-Mole, known to scholars as “term substitution.”

Bank of America announced that it will discontinue charging overdraft fees on debit cards. This comes in response to new regulations that prohibit banks from charging overdraft fees unless the consumer has consented to the fee.  Since the bank has no way of getting your consent when you walk into Starbucks and perpetrate an overdraft while buying your latte macho grande and muffin, it simply won’t let the transaction go through.

Wa-Hoo, another victory for consumers. Well, not quite. Customers who place a high value on not being embarrassed in Starbucks are arguably worse off. (How do you return a latte macho grande if you find out you don’t have enough money to pay for it after your coffee concierge has mixed it?) More seriously, customers who might want to use an overdraft for a more substantial purchase will no longer have this option.

I wonder about the argument that regulators are saving hapless, uninformed consumers. The AP article reveals that 93 percent of overdraft fees are generated by 14 percent of customers — “serial overdrafters.” That means there are a lot of folks out there who repeatedly try to use their debit cards as a source of credit, albeit an expensive one. I don’t know about you, but it would only take one or two overdraft fees before I’d realize it’s cheaper to keep a $25 balance in my account than to pay more than that in multiple overdraft fees. If most overdrafters have done this more than once, they must know they will be charged a fee and have decided that’s the lesser of multiple evils. So why take this choice away from them?

Point-of-sale overdrafts may not be the only casualty of this regulation. The article quotes banking analyst Robert Meara’s prediction that banks might curtail free checking, which many apparently offer as a loss leader to generate fee income. A smaller stream of fee income makes “free checking” less attractive for banks.

Which consumers does this ultimately hurt? I can think of one group: people with low incomes who can’t afford checking account fees and  use debit cards responsibly.  

Somehow I doubt that was the regulators’ intention.

I was slow to adopt broadband. So maybe it’s also appropriate that I was slow to read John Horrigan’s highly informative survey on broadband adoption released by the Federal Communications Commission on February 23. Or maybe it’s fortuitous, because the delay let me take a look to see what messages the news media took away from this survey.

Two clear messages appear in the news coverage.  The first is a variant of the screaming headline the FCC put on its own press release: “93 Million Americans Disconnected from Broadband Opportunities.” You’ll find this as the headline or lead paragraph in coverage by the New York Times and AFP.

The second type of message highlights the main reasons one-third of the population does not subscribe to broadband. “FCC Survey Shows Need to Teach Broadband Basics,” notes the headline on an Associated Press story. According to the survey, the three main obstacles to broadband adoption are cost, lack of digital literacy, and non-adopters’ perception that broadband is not sufficiently relevant to their lives.  (I got a chuckle when I saw that non-adopters said they would be willing to pay $25, on average, for broadband; that’s the magic price that finally induced me to give in and sign up!)

But whoa, what’s missing here?  Our old friend Availability. Broadband was supposed to be some kind of nouveau public works project that would take hundreds of billions of dollars to bring to fruition, because many Americans lack access to broadband. “Build it and they will come!” “Pour that concrete information superhighway!” “Stimulate the economy!”

The FCC survey tells an interesting story about availability:

Of the … non-adopters, 12 percent say they cannot get broadband where they live. This translates into a 4 percent share of Americans—on the basis of their reports on infrastructure availability in their neighborhood—who say they are unable to obtain broadband because it is not available. This means that 31 percent of all Americans can get service but do not. (p. 5)

The survey also notes that 10 percent of rural respondents say broadband is not available where they live.  I don’t mean to sound insensitive, but that’s all?  Heck, I’d have guessed a higher percentage than that.   

To put the numbers in perspective: 4 percent of Americans say they don’t have broadband because it isn’t available, while almost three times as many — 10 percent — lack broadband because they think the Internet is irrelevant to their lives.

Is availability a problem in some places?  Sure. But the FCC survey shows it isn’t nearly the size of problem we’d been led to believe. So let’s hope the National Broadband Plan’s discussion of availability is similarly circumscribed and appropriately targeted.

Debate over the regulatory status of broadband heated up this week as trade associations and major broadband companies sent a letter to the Federal Communications Commission arguing strenuously against reclassification of broadband as “telecommunications service” subject to regulation under Title II of the Communications Act. One implication of Title II regulation is that broadband could be regulated like a public utility. Comparisons of broadband to services like electricity or railroads, which I discussed last week, also raise the prospect of public utility regulation. 

Classic public utility regulation restricts entry and regulates prices to prevent firms from charging excessive prices.  It’s typically used in situations where competition is believed to be impossible (or, where pre-existing policy decisions have created monopolies that aren’t going to go away very soon).

Broadband is not a monopoly; it is an oligopoly. Contrary to popular perception, that is not synonymous with “evil.” Although both monopoly and oligopoly end in “-opoly,” that doesn’t mean broadband competitors will charge monopoly prices, or even somewhat excessive prices.  The only firm conclusion that emerges from economic literature on oligopoly is, “anything’s possible, depending on the specific facts and circumstances.”

But there are also firm conclusions that emerge from economic literature on public utility regulation.  Just about every time the federal government has tried to impose public utility regulation on an oligopoly, it has ended up enforcing a cartel.  This is what happened in the past with railroads, trucking, airlines, and brokerage firms. There are a few times federal price regulation did not enforce cartels in oligopolistic or competitive industries. In those cases, it usually created shortages  — most notably gasoline and natural gas in the 1970s.

Title II regulation is not necessarily synonymous with public utility regulation. Title II could be used to impose some “nondiscrimination” requirements, without necessarily directly regulating broadband providers’ prices or profits.

But anyone who actually wants the FCC to regulate broadband providers’ prices and profits needs to read the peer-reviewed economics literature on the actual effects of public utility regulation in practice on the federal level. (More literature is cited here.) Then they need to explain why the results in broadband would be different.  And the explanation needs to be better than “We know better now, we’re smart, and we promise.”

Railroading Broadband?

February 18, 2010

FCC Chairman Julius Genachowski’s comparison of broadband with electricity in a speech this week has generated mixed reviews in the blogosphere. Manny Ju says that this shows Genachowski “gets it” — that he understands the transformational power of broadband and how it will come to be regarded as a ubiquitous necessity in the years ahead. Scott Cleland is more alarmed: “The open question here is electricity transmission is regulated as a public utility. Is the FCC Chairman’s new metaphor intended to extend to how broadband should be regulated?”

It may surprise some technophiles, but this kind of discussion even predates electricity. The advent of the railroads in the 19th century brought similar arguments.  Railroads were usually a heck of a lot cheaper way of hauling goods and people across land than the next best alternative at the time: wagons. Railroads were “The Next Big Thing” that no town could do without — especially if the town lacked access to navigable waters. Lawmakers handed out subsidies (often in the form of land grants), then regulated railroads to control perceived abuses, such as discriminatory pricing for different kinds of traffic or traffic between different locations. Henry Carter Adams, the godfather of economic regulation in the U.S., said all shippers deserved “equality before the railroads.” Even today, commentators lament the rural towns that people abandoned because they lacked rail access. Deja vu all over again! 

As long as we’re deja-vuing, let’s remember a few little problems America encountered down the railroad regulatory track:

1. Subsidies created “excess capacity” — that is, more capacity than customers were willing to pay for. In some cases, subsidies attracted shady operators into the railroad business whose main goal was to get land grants or sell diluted stock offerings to the public, not build and operate railroads. 

2. Regulation ended up cartelizing railroads and propping up rail rates, which faced downward pressure because of the excess capacity.

3. When another low-cost, convenient alternative (trucking) came along in the 1930s, truckers got pulled into the cartel when they too were placed under Interstate Commerce Commission regulation to keep them from undercutting rail rates.

4. Despite cartelization, by the late 1970s, 21 percent of the nation’s railroad track was operated by bankrupt railroads, even though the railroads had shed unprofitable passenger service to Amtrak earlier in the decade. Part of the reason was excessive costs: Because access to freight rail service was still considered a right, regulation prevented railroads from abandoning money-losing lines. Part of the reason was restraints on competition: The regulatory passion for “fair” pricing kept railroads from competing aggressively with each other or with truckers. When the Southern Railway introduced its 100-ton “Big John” grain hopper cars in the 1960s, for example, it couldn’t offer shippers lower rates in exchange for high volume until it appealed an Interstate Commerce Commission ruling all the way to the Supreme Court.

By the late 1970s, a Democratic president, a bipartisan majority in Congress, and economists across the political spectrum agreed that railroad regulation needed a radical overhaul. Regulatory reforms made it easier for railroads to abandon unprofitable service, in many cases turning track over to new, lower-cost short lines and regional railroads. Prices for more than 90 percent of rail traffic were effectively deregulated. At the same time, Congress deregulated rates and entry on interstate trucking routes. This encouraged rail-truck competition and also allowed each mode to specialize in serving those markets it could serve at lowest cost.

Rail rates fell, and railroads came out of bankruptcy. The current system is hardly perfect, but most economic research suggests that most consumers, shippers, and railroads are much better off now than they were under the old regulatory system.  (For reviews of scholarly research on this, check out Clifford Winston’s paper here  or my article here.)

Will we repeat the cycle with broadband? I don’t know, but to this railfan, the current broadband debate is looking soooo retro — as in 19th century!

The Federal Communications Commission released its 102-page fiscal year 2011 budget request to Congress this week.  Here are some fascinating factoids about the agency that I’ll pass on without commentary, beyond saying that they caught my attention:

  • The FCC has hired “close to 54 data experts, statisticians, econometricians, economists, and other expertise” to help with the National Broadband Plan mandated under the Recovery Act. These are “term employees,” meaning they’re not permanent, but the FCC says it needs more permanent hires to work on broadband after the plan is done. (p. 2)
  • The commission asks for a “budget” of $352.5 million. (p. 1) But its total requested spending actually tops $440 million, because it also asks for authority to spend $85 million of spectrum auction proceeds to cover the cost of auctions. (p. 5)
  • The administration proposes to give the FCC authority to charge user fees for unauctioned spectrum licenses, with projected revenues totaling $4.8 billion through 2020. (p. 6)
  • The FCC commits to 24 “outcome-focused performance goals.” (pp. 16-29) Most of these goals are phrased as activities, not accomplishments, with lots of verbs like “enact,” “encourage,” “facilitate,” “enforce,” “promote,” “work to,” “foster,” “advocate,” and “maintain.” In some cases, one can identify the actual concrete outcome by looking at additional wording or performance targets. It’s clear, for example, that the FCC wants to make sure that all Americans have access to broadband. In other cases, the concrete outcome, or how we would know if it is accomplished, is not clear. For example, the only targets listed under the goal “Promote access to telecommunications services to all Americans” are targets for enforcement actions rather than measures of whether the FCC has actually accomplished the desired outcome.
  • The FCC has been supported almost entirely by regulatory fees assessed on regulated companies, with virtually no direct appropriations of tax dollars since fiscal year 2003 (p. 31).
  • Spectrum auctions have generated more than $51.9 billion for the U.S. Treasury. (p. 33)