Articles by Jerry Ellig

Jerry Ellig is a senior research fellow at the Mercatus Center at George Mason University. He has also served as deputy director of the Office of Policy Planning at the Federal Trade Commission and as a senior economist at the Joint Economic Committee of the US Congress.


You may have seen this recent article about Lila Kerr and Lauren Theis — two Rice University undergraduates who figured out how to turn a kitchen “salad spinner” into a centrifuge that can separate blood into plasma and red cells in about 20 minutes.  The inventors hope it will have a lot of applications in developing countries, because it will allow clinics to check blood samples for anemia on location and in real time, instead of transporting blood samples miles to the nearest facility with a centrifuge.

If the field tests go well, the inventors surely deserve to be lauded for the lives their invention will save. 

But I also think the students should be recognized for another aspect of their feat — namely, they figured out how to turn a really lame and pretty useless kitchen device into something useful! We have one of these (someplace). One attempted use was enough. I’m glad they found a way to unlock the true potential of this technology.

The Federal Communications Commission has an open proceeding in which it seeks advice on how to repurpose universal service subsidies for phone service in high-cost areas to subsidize broadband instead. The FCC apparently wants to subsidize broadband with a minimum download speed of 4 megabits per second (mbps) and upload speed of 1 mbps. These are the goals proposed in the commission’s National Broadband Plan.

I’m no lawyer, but I wonder if the FCC can do this legally. Section 254 of the Telecommunications Act of 1996 lays out criteria the FCC is supposed to consider when it decides whether to provide universal service subsidies for new services in addition to phone service. One of the criteria is that the new service must be subscribed to by a “substantial majority” of residential consumers.

Sixty-five percent of Americans have broadband at home. (National Broadband Plan, p. 167)  But a minority of residential customers subscribe to broadband that meets the FCC’s 4 mbps/1 mbps definition. According to the FCC’s Omnibus Broadband Initiative technical report on the “Availability Gap” (p. 43), 48 million subscribers have download speeds of 4 mbps or higher. More subscribers – 53 million – have broadband download speeds of 3 mbps or lower. And 35 percent of Americans have no broadband at all. These figures imply that a “substantial majority” of Americans have not subscribed to broadband that meets the National Broadband Plan’s proposed definition.
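For readers who want to check the arithmetic, here is a rough back-of-the-envelope version in Python. It is only a sketch, not the FCC’s methodology: it assumes the 48 million / 53 million split cited above covers essentially all home broadband subscribers, and applies the 65 percent home-adoption figure to translate the subscriber split into a share of all Americans.

```python
# Back-of-the-envelope check of the figures cited above (a sketch, not
# the FCC's own methodology; assumes the 48M / 53M split is
# representative of all home broadband subscribers).

home_broadband_share = 0.65   # share of Americans with broadband at home
subs_4mbps_up = 48e6          # subscribers with download speeds >= 4 mbps
subs_3mbps_down = 53e6        # subscribers with download speeds <= 3 mbps

# Share of subscribers meeting the 4 mbps threshold
share_of_subs = subs_4mbps_up / (subs_4mbps_up + subs_3mbps_down)

# Share of all Americans meeting the threshold
share_of_americans = home_broadband_share * share_of_subs

print(f"{share_of_subs:.0%} of subscribers, {share_of_americans:.0%} of Americans")
```

However you slice it, the share of Americans subscribing at 4 mbps or better comes out well under half — nowhere near a “substantial majority.”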

Based on figures in the technical report, I calculated that approximately 59 percent of Americans subscribe to broadband with a download speed of 768 kbps or higher. Perhaps this figure qualifies as a “substantial majority,” but surely the 4 mbps/1 mbps definition does not.

A reasonable person might also question whether even 59 percent counts as a “substantial majority” for the purpose of declaring broadband a service eligible for subsidy. Surely Section 254 requires a “substantial majority” in part to ensure that consumers who have chosen not to subscribe to a service do not bear the injustice of having to subsidize the provision of that service to others. It is clear from the FCC’s figures that most of the 35 percent of American households without broadband have it available but choose not to subscribe. Therefore, subsidizing even 768 kbps broadband would force many consumers to pay universal service assessments to provide others with a subsidized service that they themselves have decided is not worth the cost.

We’ll have to wait and see how the FCC addresses this issue once it starts creating a universal service program for broadband.

National Economic Council Director Lawrence Summers made a major policy speech yesterday at the New America Foundation, announcing the administration’s plan to find an additional 500 megahertz of spectrum for wireless broadband service by the end of the decade. The spectrum will come from two places: federal agencies that currently under-utilize their spectrum, and commercial users who volunteer to participate in “incentive auctions.”

In an incentive auction, the current spectrum user receives part of the proceeds in exchange for making the spectrum available for reallocation. Within the current US system of spectrum allocation, it’s about as close as we can come to allowing spectrum holders to sell their spectrum licenses to someone else who can put the spectrum to a more valuable use. 

Summers even mentioned broadcasters specifically, noting that a local television station with a few million dollars of revenue may currently control spectrum worth hundreds of millions of dollars. Federal agencies would get to use some of the proceeds to adopt “state-of-the-art communications.” Presumably this would include new equipment that doesn’t use so much spectrum.

In his speech, Summers gave appropriate credit to the Federal Communications Commission, which surfaced many of these ideas in its National Broadband Plan. Even more appropriately, the former Harvard University president and academic economist assigned proper credit for the original source of the idea: 

Most of the freed-up spectrum will be auctioned off for use by mobile broadband providers. As the great law and economics scholar Ronald Coase originally pointed out, auctions ensure that spectrum is devoted to its most productive uses because it is determined by investors’ willingness to pay for it.

There are, of course, a few unanswered questions. How much of the spectrum will actually get auctioned for mobile broadband, rather than reserved for unlicensed use? Will the buyers have to use the spectrum for mobile broadband, or will the license be sufficiently broad that they could use it for other forms of personal communication that perhaps haven’t even been invented yet? Do we really have to wait ten years for this? Will the Ronald Coase Institute get any royalties for the government’s use of its namesake’s intellectual property? (Academics will recognize the joke in the last question.)

For now I’ll just say, “Bravo, Dr. Summers!”

This is a post for all those broadband fans out there who want to talk about something today besides the Federal Communications Commission’s decision to take comments on which legal classification it should use to regulate broadband.

A recent FCC survey revealed that 80 percent of home broadband users do not know the speed of their broadband service. I can easily imagine how this statistic could be spun to “prove” that consumers are woefully uninformed and the broadband market must be plagued with “market failures” because consumers do not have even the basic information they need to make intelligent decisions.

Before we go down that road, let me explain, based on my own experience, why this is a non-issue.

I’m part of that 80 percent. I do not know the speed of my broadband service at home. I know that when I signed up several years ago, I selected the slowest and cheapest broadband speed the provider offered. I also know that this speed is still plenty fast for anything we need to do at home (and usually faster than the speed at my university office). I remain blissfully ignorant of the actual speed, even though it would be very easy for me to find out by looking at the materials I received when I signed up or checking the provider’s web site.

In economic jargon, I am “rationally ignorant” of my home broadband speed. I don’t know (or remember) the speed, but to me this information is not worth the 45 seconds it would take me to find out. And that also means any FCC initiatives to “improve consumer information” or “educate” me about it will not, for me, be worth the time and money the FCC might spend on them.

If some of our Internet applications were not working in a satisfactory manner, we would probably do an online speed test, check to see what other speeds our provider offers, and check offers from competing providers. All of these steps would be easy and would require no FCC policy initiatives to facilitate (beyond making sure that the providers aren’t lying about what speeds they will provide).

I’m probably not alone.  The same survey reveals that 50 percent of Americans are satisfied with their broadband speeds, and another 41 percent are “somewhat satisfied.” So, 91 percent of consumers are more or less satisfied, even though 80 percent don’t know their speeds.

It would have been quite useful and instructive if the FCC survey had included an additional question: “Is your broadband speed adequate for the Internet applications you want to use?” And then cross-tabulate the responses with the responses on knowledge of broadband speed. Wanna bet that a substantial majority of people who do not know their speed would also have said that it is adequate?

Surely there are some broadband customers who use applications that require specific (fast) speeds, and these customers have a greater need to know what speed they’re receiving. That’s why providers tell prospective customers what speed tiers they offer. And that’s why one can find multiple web-based speed tests. This information is not hard to find if you want it.

But for some of us, it just ain’t worth it. And shame on anyone who tries to use my willful ignorance as an excuse for some new policy initiative. Rational ignorance is bliss, and I’m a bliss-ter.

We all pay “universal service” assessments on our phone bills.  It’s even broken out separately; go look. It’s probably just a matter of time before the Federal Communications Commission proposes to slap universal service assessments on broadband service to help pay for universal service subsidies for broadband service. The national broadband plan, after all, calls for “broadening” the universal service funding base.

If the commission reclassifies broadband as a “Title II” telecommunications service, this will be virtually automatic because the Telecommunications Act of 1996 says telecommunications providers must contribute toward the FCC’s universal service fund. If the commission doesn’t reclassify broadband, it could still require contributions — just like it imposed universal service assessments on VOIP without classifying VOIP as telecommunications.

After the FCC starts using universal service funds to subsidize broadband for poor people and rural households, the logic will be seductively compelling: “Broadband receives subsidies, so it’s only fair that broadband pays into the fund.”

Forget the ensuing howls about “taxing the Internet.”  I want to talk about another aspect of this.  Would imposing universal service assessments on broadband actually further the FCC’s goals in its national broadband plan?

[Photo: Irish setter chasing its tail. Credit: nawtydawg.]

The FCC wants to make broadband available to all Americans, regardless of where they live. Ideally, the FCC would like us all to subscribe, regardless of our income or where we live. The problem with imposing universal service assessments on broadband is that this would increase the price, leading subscribership to be lower than it would otherwise be.

This effect might be big or it might be little. But before making a decision about imposing universal service assessments on broadband, the FCC ought to know the size of the effect and how it compares to the increase in subscribership that would result from the subsidies.

To figure out how universal service assessments might affect broadband subscribership, we need to know how responsive broadband subscription is to changes in price. Economists call this the “price elasticity of demand.” The most recent study I’ve seen — and the only one cited in the FCC’s technical paper underlying the national broadband plan — estimates the elasticity of broadband demand was about -0.69 in 2008. That means a 1 percent increase in price would lead to a 0.69 percent decrease in subscribership. Other, earlier studies find much higher demand elasticities. But to be conservative, let’s use -0.69.

Current universal service assessments on interstate telecommunications are about 15 percent. About 66.6 million households had broadband in 2008. A 15 percent increase in the price of broadband would reduce subscribership by about 6.9 million households (15% × 0.69 × 66.6 million).

If the FCC imposed universal service assessments on broadband, it might be able to lower the rate, since it would be collecting assessments from a broader base than just telephone service. Suppose the FCC could lower the assessment to 10 percent, more in line with the historical norm. A 10 percent increase in the price of broadband would reduce subscribership by about 4.6 million households (10% × 0.69 × 66.6 million).
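The arithmetic behind those two estimates can be written out in a few lines. This is a linear approximation using the −0.69 elasticity: a sketch only, since elasticity estimates vary and a straight-line extrapolation from a 2008 point estimate overstates the precision.

```python
# Linear approximation of lost subscribership from a universal service
# assessment, using the -0.69 demand elasticity cited above (a sketch;
# elasticity estimates vary, and a linear extrapolation is rough).

elasticity = -0.69   # price elasticity of broadband demand (2008 estimate)
households = 66.6e6  # US households with broadband in 2008

def lost_subscribers(price_increase):
    """Approximate drop in subscribership from a fractional price increase."""
    return -elasticity * price_increase * households

print(f"15% assessment: {lost_subscribers(0.15) / 1e6:.1f} million fewer households")
print(f"10% assessment: {lost_subscribers(0.10) / 1e6:.1f} million fewer households")
```

Plugging in the 15 and 10 percent assessments reproduces the 6.9 million and 4.6 million household figures above.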

So we’re going to reduce broadband subscribership by 4.6-6.9 million households in order to provide subsidies to increase broadband subscribership. If the funds currently spent to subsidize phone service in rural areas were spent on broadband, that would be enough money to close the “funding gap” and make broadband available to the 7 million homes the FCC says are currently unserved or underserved.

Not all of them will subscribe, so we can’t assume these subsidies will increase subscribership by 7 million. About 65 percent of Americans currently have broadband at home. If 65 percent of unserved or underserved households choose to subscribe once broadband becomes available, that would be 4.55 million new subscribers.
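Netting the two effects out takes only a few lines. This is a sketch using the article’s own round numbers (7 million unserved households, 65 percent take-up, and the 4.6-6.9 million subscriber loss estimated from the elasticity), not a forecast.

```python
# Netting the subsidy gain against the assessment-induced loss, using
# the article's round numbers (a sketch, not a forecast).

unserved = 7.0e6   # households without qualifying broadband available
take_up = 0.65     # assumed adoption rate once service becomes available

gain = unserved * take_up            # new subscribers from subsidies
loss_low, loss_high = 4.6e6, 6.9e6   # subscribers lost to higher prices

print(f"gain: {gain / 1e6:.2f} million new subscribers")
print(f"net effect: {(gain - loss_high) / 1e6:+.2f} to "
      f"{(gain - loss_low) / 1e6:+.2f} million")
```

Under these assumptions the net change runs from roughly break-even to a loss of more than 2 million subscribers.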

In short, it looks like subjecting all broadband to universal service assessments to pay for rural broadband subsidies would either be a wash or reduce subscribership on net. Paying for universal broadband service with assessments on broadband service will give the FCC a lot to do, but it won’t advance the subscribership goals of the national broadband plan. 

There are other ways to raise the money without this perverse effect. Historically, local telephone subscription has been very insensitive to price, so one option would be for the FCC to simply impose a universal service charge per phone number instead of the current percentage fee.  (Low-income households who have “Lifeline” service or use low-cost prepaid wireless plans could be charged a lower fee without sacrificing much revenue.)

Another option would be for Congress to earmark some revenues from upcoming spectrum auctions to fund universal broadband service, and reduce the universal service assessments on our phone bills accordingly.

Reasonable people can differ on whether, or by how much, the federal government should subsidize broadband where it is not currently available. But if we’re gonna do it, there’s no sense in funding it with a mechanism that reduces broadband subscription elsewhere.

Back on St. Paddy’s Day, I offered a few comments on the “funding gap” identified in the FCC’s just-released national broadband plan. Since then, the FCC has put out a notice of proposed rulemaking and notice of inquiry seeking public comment on reforms that would allow its universal service fund to subsidize broadband. The FCC has also released a 137-page technical paper that details how the staff calculated the broadband “availability gap” and funding gap.

So, now there’s more to chew on, and another round of online mastication would be timely given the open FCC proceeding.  Here are three big issues:

1. Definition of broadband

The plan announced a goal of making broadband with actual download speeds of 4 mbps available to all Americans.  In the plan, this goal appeared to be based on the actual average speed of broadband service (4 mbps), even though the median speed is just 3.1 mbps (p. 21). The technical paper, however, also projects that, based on past growth rates in broadband speed, “the median will likely be higher than 4 mbps by the end of 2010.” (p. 43)  Contrary to what I thought back in March, it appears the FCC is justifying the 4 mbps goal based on the median speed, not the average. 

The technical report also argues that 4 mbps is necessary to run high-speed video, which a “growing portion of subscribers” (not including me) apparently use. (p. 43) So, if the broadband plan achieves its goals, every American will have the opportunity to subscribe to Internet access capable of delivering high-quality porn! Fortunately, the technical report uses a different and more productive example — streamed classroom lectures.

Reasonable people could still question whether the median is the appropriate benchmark to guide government actions intended to equalize broadband access opportunities.  The technical report includes a helpful graphic that shows the most common broadband speed users actually buy is 2 mbps, and 38 percent of all subscribers have speeds of 2 mbps or less. (p. 43) The FCC staff’s model calculates that if the goal were set at 1.5 mbps, the number of “unserved” households would fall from 7 million to 6.3 million, and the required subsidy would fall from $18.6 billion to $15.3 billion. (p. 45) 

If almost half of broadband subscribers have decided that something less than 4 mbps is perfectly adequate, that suggests 4 mbps may go far beyond what is necessary to ensure that all Americans have access to basic broadband service. So, that 4 mbps goal is still questionable.

2. Omission of 3G wireless

The 4 mbps goal allowed the FCC to ignore third generation wireless when it estimated the “availability gap.” The technical paper shows that 95 percent of households have 4 mbps broadband available. About 3 percent of households have no broadband available, while 2 percent have broadband available at speeds ranging from 384 kbps – 3 mbps. (p. 17)  That 2 percent probably includes households with slow DSL and 3G wireless.

The technical paper also revealed that it did not include service from fixed Wireless Internet Service Providers because of data limitations. (p. 25) These providers serve 2 million subscribers in rural areas (p. 66), so the omission potentially accounts for a large chunk of the households considered “unserved.” There’s no telling how many, since apparently the data aren’t available.

Back in March, I guesstimated that the 7 million household “availability gap” might overstate the size of the problem by more than half, simply because 3G wireless is available to 98 percent of American households. Looks like my guesstimate is pretty much in line with the more detailed figures in the FCC technical paper.

 3. Role of satellite

The broadband plan did not count satellite broadband when assessing availability. The technical paper (pp. 89-94) provides a much more detailed explanation of the capacity constraints the FCC staff believes will prevent satellite broadband from serving more than a couple million subscribers. (The current satellite subscriber base is approximately 900,000.)

The technical paper pointed out that satellites are expensive and take three years to build. (p. 92) To put the time frame in perspective, that’s about as long as the FCC and the Federal-State Joint Board on Universal Service have been discussing universal service subsidies for broadband. Lord knows we shouldn’t make consumers wait that long!

There is, however, something a little asymmetrical about the way the FCC staff treated satellite and other forms of broadband. The point of estimating the broadband availability gap was to determine how much of a subsidy would be required to induce the private sector to build the infrastructure to close the gap. But while the study assumed that the subsidies would call forth the requisite cable, DSL, and wireless infrastructure within some unnamed but acceptable time frame, it decided that three years is just too long to wait for satellite infrastructure to expand. So, satellite plays a minimal role in the FCC’s plan.

Yet even this minimal role has a big impact. To its credit, the technical paper calculated how satellite broadband could dramatically slash the cost of serving the most expensive 250,000 homes. It estimated (pp. 91-92) that the net present value of subsidies required to serve these homes with satellite would range between $800 million and $2 billion — compared to a $13.4 billion subsidy required to serve these homes with terrestrial broadband. (This implies an annual subsidy of $105-255 million, which is pretty close to my March 17 guesstimate of $100-200 million.)

So, satellite broadband could help prevent costs from skyrocketing, even assuming it plays only the limited role envisioned in the FCC staff’s analysis.

The UK’s Daily Mail reports that Phil Bissett, a 62-year-old former gravedigger, transformed a steel casket into a street-legal single-seat automobile that does 100 mph, using the engine from his daughter’s 1972 VW. He acquired the casket — you guessed it — on eBay.

[Photo caption: “Digging in: Phil Bissett has dubbed his crazy new creation ‘Holy Smoke’.”]

Now here’s where it gets interesting. The casket originally cost 1500 British pounds. He got it for just 98 pounds — about $146 at today’s exchange rate.  That’s 93 percent off!  The article doesn’t say how much he paid for the assorted spare parts from other vehicles needed to turn the casket into an automobile, nor does it explain what his daughter is doing for transportation now that the engine from her car powers his deathmobile.  Still, it’s a nice-looking little sports car, and I’ll bet it cost less and is more reliable than that fine piece of British automotive engineering I used to own, an MG Midget.

Bissett told the reporter, “I’ve learned never to go on the internet when you’ve had a drink. My friend said I’d never be able to turn it into a car but I knew I could.”

This must be what the wonks mean when they say the Internet is an “enabling technology.”

(Be sure to check out the Daily Mail link above to see the cool photos!)

A recent study by Cecil Bohanon and Michael Hicks at Ball State University’s Digital Policy Institute found that statewide cable franchising has increased broadband deployment.

Half of the US states have now enacted legislation that creates statewide cable franchising. These laws allow new entrants into the video business (principally the phone companies) to get permission to offer video from the state, instead of having to deal with local governments to get cable franchises. Previous research, much of it cited here, found that cable competition reduces cable rates and expands the number of channels available to subscribers. Local franchising often delayed or prevented new competitors from entering the market.

Since the same wires get used to transmit video, telephone, and broadband, Bohanon and Hicks reasoned that opening up entry into cable would also increase competition in broadband and hence increase broadband subscribership. And that’s precisely what their econometric study finds. After controlling for other factors, broadband subscribership is 2-5 percent higher in states that have statewide video franchising. Based on this finding, Bohanon and Hicks estimate that statewide video franchising increased broadband subscribership by about 5 million.

Their study covers the years 1999-2008. Maybe some of these 5 million would eventually have gotten broadband anyway. At worst, this study shows that 5 million subscribers got broadband sooner than they otherwise would have.

The study does not test whether the increase in broadband subscribership occurred because statewide video franchising sped up investment and deployment of infrastructure, or if it simply spurred competition in places where phone and cable companies already had the relevant infrastructure deployed.  I don’t know how one would get the confidential data on broadband investment in order to test this.  But given the large amount of new investment related to broadband, I’d be willing to bet that statewide franchising encouraged both new broadband deployment and more intense competition where infrastructure was already in place.

The Washington Post carried an article earlier this week by Cecilia Kang that noted the Federal Trade Commission could gain enforcement power over online businesses as a result of the financial services legislation under discussion in Congress. Ms. Kang contrasted the possibility of an empowered FTC issuing fast-track regulations against the recent experience of the Federal Communications Commission, which has become bogged down in its search for legal authority to issue net neutrality regulations. 

The comparison is insightful, but not for the reasons you might expect. Part of the debate over the FTC revolves around language in the House financial services bill that would repeal the “Magnuson-Moss” provisions that govern FTC promulgation of consumer protection regulations. (The name comes from the fact that these restrictions on FTC rulemaking were included in the Magnuson-Moss Warranty Act, which got the FTC into the business of regulating car warranties.)

If the FTC wants to regulate some type of general business practice under the FTC Act, it has to establish a factual record substantiating that there is actually a systemic problem that regulation can solve, hold a public hearing, allow cross-examination on factual matters, and conduct an economic analysis of the regulation’s effects.  In short, the commission has to do the homework necessary to demonstrate that its proposed regulation will actually solve a widespread problem that actually exists.

When Tim Muris directed the FTC’s Bureau of Consumer Protection in the early 1980s, he authored an article in Regulation magazine pointing out that when the FTC does careful analysis before issuing a rule, the rule is more likely to benefit consumers, more likely to be upheld in court, and more likely to be issued expeditiously. He contrasted the evidence-based eyeglass rule, which took three years to issue, with the anecdote-based funeral rule, which took ten. Muris noted wryly, “Some critics of my position charge that it is revolutionary to ask a body of lawyers and economists not to impose its own view of proper regulation on the world without first systematically evaluating the problem.” Muris went on to serve as chairman of the FTC from 2001 to 2004, and last month he defended the Magnuson-Moss restrictions in testimony before Congress.

What does this have to do with the FCC? The FCC lost its case against Comcast on appeal precisely because it tried to take shortcuts. The FCC tried to promote net neutrality by enforcing a set of “principles” that originated in a former chairman’s speech and were never promulgated in a notice-and-comment rulemaking. The FCC commissioners endorsed these principles without investigating whether there was a systemic problem (i.e., more than a few anecdotes of misbehavior). Indeed, Chairman Martin’s Notice of Inquiry on “Broadband Industry Practices,” launched around the same time the FCC took its enforcement action against Comcast, turned up no evidence of a systemic problem. If the FCC now tries to impose net neutrality by reclassifying broadband as a “Title II” common carrier, it will have to do the difficult but necessary work of demonstrating, with real factual evidence, that broadband is more like a common carrier than like the lightly regulated “information service” the commission previously decided it was.

We don’t need Congress to free the FTC from Magnuson-Moss. Instead, Congress should impose the same requirements on the FCC. Sometimes, taking the time to do your homework leads to better decisions, sooner.

Wine (and beer) lovers who want to order hard-to-get vintages online have benefited greatly from federal court decisions that say state alcohol laws cannot discriminate against out-of-state sellers. Federal legislation introduced last week could threaten electronic commerce as it further entrenches middlemen who normally profit from every bottle of alcohol that passes from producers to consumers.

To understand what’s going on, you have to know something about Commerce Clause litigation. I’m not a lawyer, though I once played the teetotaling William Jennings Bryan character in a high school production of Inherit the Wind.  This proves my motives are pure. And since a lot of lawyers practice economics without a license, I figure I’ll return the favor.

The Commerce Clause of the US Constitution says that Congress, not the states, can regulate interstate commerce. A longstanding judicial interpretation, the “dormant” Commerce Clause, holds that if Congress has not chosen to regulate some aspect of interstate commerce, that means Congress doesn’t want the states to regulate it either.  So, normally a state can regulate interstate commerce only if Congress has given explicit permission.

If state law discriminates against out-of-state sellers who compete with in-state sellers, the state is regulating interstate commerce.  A state is not allowed to do this unless it can prove the discrimination is necessary to accomplish some clear state purpose that cannot be accomplished in some other way. States have to present evidence that proves these points, not just make arguments. 

The 21st Amendment, which repealed Prohibition, gave states the right to regulate alcohol.  Recent court cases involving direct wine shipment clarified that when states regulate alcohol, they must still obey the Commerce Clause. This makes good sense. Imagine if the 21st Amendment freed states from the rest of the Constitution when they regulate alcohol. The police could break into your house without warning if they imagined you might give your 20-year-old a beer, but they’d still need a search warrant if they thought you were cooking meth. 

In Granholm v. Heald (2005), the Supreme Court said that states could either allow in-state and out-of-state sellers to ship wine directly to consumers, or prohibit it for both, but states couldn’t ban direct shipment for out-of-state sellers and allow it for in-state sellers. In response, most states have liberalized their direct shipment laws rather than making them more restrictive. In Family Wine Makers of California v. Jenkins (2008), federal courts said that an ostensibly neutral law that had a discriminatory effect on out-of-state sellers was also unconstitutional. Massachusetts had enacted a law that allowed only wineries producing 30,000 gallons or less to ship directly to consumers; the production cap was large enough to allow all in-state wineries to direct ship but small enough to exclude 637 larger out-of-state wineries that produce 98 percent of all wine in the United States. The judge’s opinion essentially said, “By their fruits you shall know them,” and it reserved special grapes of wrath for the blatantly protectionist motives voiced by advocates of the law. Massachusetts appealed this decision to the First Circuit Court of Appeals, lost, and on April 12 decided not to appeal to the Supreme Court.

On April 15, Massachusetts Rep. Bill Delahunt introduced federal legislation that would turn alcoholic Commerce Clause litigation sideways. The legislation makes four big changes in the rules of the game:

  1. It says that states may not “facially discriminate without justification.” This standard might reverse Granholm, because the state laws were clearly discriminatory but the states offered justifications. It would likely reverse Family Wine Makers, because the law was “facially” neutral but had discriminatory effects. (Of course, if this thing passes, I’d be delighted to see a consumer or winery plaintiff prove me wrong.)
  2. It repeals the “dormant” Commerce Clause for alcohol by stating that congressional silence on interstate commerce in alcohol should not be interpreted as a prohibition on state regulation of interstate commerce in alcohol.
  3. It shifts the burden of proof by requiring that anyone challenging a state alcohol law must prove “by clear and convincing evidence” that the law is invalid. Normally, states have the obligation to present evidence that a discriminatory law accomplishes a state purpose and is no more discriminatory than necessary.  
  4. Any state law that burdens interstate commerce or contradicts any other federal law (!) would be upheld unless the person challenging it proves that the state law has no effect on temperance, orderly markets, tax collection, the structure of the distribution system, or underage drinking.  Since there’s plenty of economic evidence that state alcohol laws increase prices, a state could argue its laws reduce consumption and promote temperance, and the law would be upheld.  In other words, any state alcohol law that harms consumers by increasing prices would automatically be OK, even if it blatantly conflicted with other federal laws (such as antitrust laws, which are intended to protect consumers from the high prices associated with monopoly) or the Commerce Clause.

Word on the street is that the biggest pushers of this legislation are the beer wholesalers. Since most of this litigation has involved wine, what’s going on here?

The real goal of this legislation is not harassing wineries that want to ship a few bottles to out-of-state customers. The real goal is to preserve anti-competitive state laws that force brewers, wine makers, and distillers to market most of their product through beer, wine, and spirits wholesalers, instead of marketing directly to retailers and restaurants. The proposed legislation would effectively insulate these state laws from challenge under the Commerce Clause, federal antitrust laws, or any other federal laws that might give alcohol producers and consumers some leverage to break the wholesalers’ lock on the market.

Call it states’ rights kool-aid with a chaser of economic protectionism.  A strange brew indeed.