A British telecom executive alleges that Verizon and AT&T may be overcharging corporate customers approximately $9 billion a year for wholesale “special access” services, according to the Financial Times.
The Federal Communications Commission is presently evaluating proprietary data from both providers and purchasers of high-capacity private line (i.e., special access) services. Some competitors want nothing less than for the FCC to regulate Verizon’s and AT&T’s prices and terms of service. There’s a real danger the FCC could be persuaded, as it has been in the past, to set wholesale prices at or below cost in the name of promoting competition. That discourages investment in the network by incumbents and new entrants alike.
As researcher Susan Gately explained in 2007, a study by her firm claimed $8.3 billion in special access “overcharges” in 2006. She predicted they could reach $9.0-$9.5 billion in 2007. This would mean that special access overcharges haven’t increased at all in the past seven to eight years, implying that Verizon and AT&T must not be doing a very good job “abusing their landline monopolies to hurt competitors” (the words of the Financial Times writer).
As I wrote in 2009, researchers at both the National Regulatory Research Institute (NRRI) and National Economic Research Associates (NERA) pointed out that Gately and her colleagues relied on extremely flawed FCC accounting data. This is why the FCC required data collection from providers and purchasers in 2012, the results of which are not yet publicly known. Both the NRRI and NERA studies suggested the possibility that accusations of overcharging could be greatly exaggerated. If Verizon and AT&T were over-earning, their competitors would find it profitable to invest in their own facilities instead of seeking more regulation.
Verizon and AT&T are responsible for much of the investment in the network. Many of the firms that entered the market as a result of the 1996 telecom act have been reluctant to invest in competitive facilities, preferring to lease facilities at low regulated prices. The FCC has always expressed a preference for multiple competing networks (i.e., facilities-based competition), but taking the profit out of special access is sure to defeat this goal by making it more economical to lease.
I’ve been thinking about the “right to try” movement a lot lately. It refers to the growing movement (especially at the state level here in the U.S.) to allow individuals to experiment with alternative medical treatments, therapies, and devices that are restricted or prohibited in some fashion (typically by the Food and Drug Administration). I think there are compelling ethical reasons for allowing citizens to determine their own course of treatment in terms of what they ingest or what medical devices they use, especially when they are facing the possibility of death and have exhausted all other options.
But I also favor a more general “right to try” that allows citizens to make their own health decisions in other circumstances. Such a general freedom entails some risks, of course, but the better way to deal with those potential downsides is to educate citizens about the trade-offs associated with various treatments and devices, not to forbid them from seeking them out at all.
The Costs of Control
But this debate isn’t just about ethics. There’s also the question of the costs associated with regulatory control. Practically speaking, with each passing day it becomes harder and harder for governments to control unapproved medical devices, drugs, therapies, etc. Correspondingly, that significantly raises the costs of enforcement and makes one wonder exactly how far the FDA or other regulators will go to stop or slow the advent of new technologies.
I have written about this “cost of control” problem in various law review articles as well as my little Permissionless Innovation book and pointed out that, when enforcement challenges and costs reach a certain threshold, the case for preemptive control grows far weaker simply because of (1) the massive resources that regulators would have to pour into the task of crafting a workable enforcement regime; and/or (2) the massive loss of liberty it would entail for society more generally to devise such solutions. With the rise of the Internet of Things, wearable devices, mobile medical apps, and other networked health and fitness technologies, these issues are going to become increasingly ripe for academic and policy consideration.
At the same time that FilmOn, an Aereo look-alike, is seeking a compulsory license to broadcast TV content, free market advocates in Congress and officials at the Copyright Office are trying to remove this compulsory license. A compulsory license to copyrighted content gives parties like FilmOn the use of copyrighted material at a regulated rate without the consent of the copyright holder. There may be sensible objections to repealing the TV compulsory license, but transaction costs (the ostensible inability to acquire the numerous permissions to retransmit TV content) should not be one of them.
Yesterday, the White House Council of Economic Advisers released an important new report entitled “Occupational Licensing: A Framework for Policymakers” (PDF, 76 pgs.). The report highlighted the costs that outdated or unneeded licensing regulations can have on diverse portions of the citizenry. Specifically, the report concluded that:
the current licensing regime in the United States also creates substantial costs, and often the requirements for obtaining a license are not in sync with the skills needed for the job. There is evidence that licensing requirements raise the price of goods and services, restrict employment opportunities, and make it more difficult for workers to take their skills across State lines. Too often, policymakers do not carefully weigh these costs and benefits when making decisions about whether or how to regulate a profession through licensing.
The report supported these conclusions with a wealth of evidence. In that regard, I was pleased to see that research from Mercatus Center-affiliated scholars was cited in the White House report (specifically on pg. 34). Mercatus Center scholars have repeatedly documented the costs of occupational licensing and offered suggestions for how to reform or eliminate unnecessary licensing practices. Most recently, my colleagues and I have explored the costs of licensing restrictions for new sharing economy platforms and innovators. The White House report cited, for example, the recently released Mercatus paper on “How the Internet, the Sharing Economy, and Reputational Feedback Mechanisms Solve the ‘Lemons Problem,’” which I co-authored with Christopher Koopman, Anne Hobson, and Chris Kuiper. And it also cited a new essay by Tyler Cowen and Alex Tabarrok on “The End of Asymmetric Information.”
The FCC is being dragged, reluctantly it appears, into disputes that resemble the infamous beauty contests of bygone years, where the agency takes on the impossible task of deciding which wireless services deliver more benefits to the public. Two novel technologies used for wireless broadband, TLPS and LTE-U, reveal the growing tensions in unlicensed spectrum. The two technologies are different and pose slightly different regulatory issues, but each is an attempt to bring wireless Internet to consumers. Their advocates believe these technologies will provide better service than existing wifi technology and will also improve wifi performance. Their major similarity is that others, namely wifi advocates, object that the unlicensed bands are already too crowded and that these new technologies will cause interference to existing users.
The LTE-U issue is new and developing. The TLPS proceeding, on the other hand, has been pending for a few years, and there are warning signs the FCC may enter into beauty contests (choosing which technologies are entitled to free spectrum) once again.
“Why hasn’t Europe fostered the kind of innovation that has spawned hugely successful technology companies?” asks James B. Stewart in an important new column for the New York Times (“A Fearless Culture Fuels U.S. Tech Giants“).
That’s a great question, and one that I have tried to answer in a series of recent essays. (See, for example, “Europe’s Choice on Innovation” and “Embracing a Culture of Permissionless Innovation.”) What I have suggested in those essays is that the starkly different outcomes on either side of the Atlantic in terms of recent economic growth and innovation can primarily be explained by cultural attitudes toward risk-taking and failure. “For innovation and growth to blossom, entrepreneurs need a clear green light from policymakers that signals a general acceptance of risk-taking—especially risk-taking that challenges existing business models and traditional ways of doing things,” I have argued. And the most powerful proof of this is to examine the amazing natural experiment that has played out on either side of the Atlantic over the past two decades with the Internet and the digital economy.
For example, an annual Booz & Company report on the world’s most innovative companies revealed that 9 of the top 10 most innovative companies are based in the U.S. and that most of them are involved in computing and digital technology. None of them are based in Europe, however. Another recent survey revealed that the world’s 15 most valuable Internet companies (based on market capitalizations) have a combined market value of nearly $2.5 trillion, but none of them are European while 11 of them are U.S. firms. Again, it is America’s tech innovators that dominate that list.
Many European officials and business leaders are waking up to this grim reality and are wondering how to reverse this situation. In his Times essay, Stewart quotes Danish economist Jacob Kirkegaard of the Peterson Institute for International Economics, who notes that Europeans “all want a Silicon Valley. . . . But none of them can match the scale and focus on the new and truly innovative technologies you have in the United States. Europe and the rest of the world are playing catch-up, to the great frustration of policy makers there.”
On Thursday, it was my great pleasure to participate in a Washington Legal Foundation (WLF) event on “Online Privacy Regulation: The Challenge of Defining Harm.” The entire event video can be found on YouTube here, but down below I pasted the clip of just my remarks. Other speakers at the event included: FTC Commissioner Maureen K. Ohlhausen; John B. Morris, Jr., the Associate Administrator and Director of Internet Policy at the U.S. Department of Commerce’s National Telecommunications and Information Administration; and Katherine Armstrong, Counsel at the law firm of Hogan Lovells. Glenn Lammi of the WLF moderated the session.
My remarks drew upon a few recent law review articles I have published relating digital privacy debates to previous debates over free speech and online child safety issues. (Here are those articles: 1, 2, 3).
Along with colleagues at the Mercatus Center at George Mason University, I am releasing two major new reports today dealing with the regulation of the sharing economy. The first report is a 20-page filing to the Federal Trade Commission that we are submitting to the agency for its upcoming June 9th workshop on “The ‘Sharing’ Economy: Issues Facing Platforms, Participants, and Regulators.” We have been invited to participate in that event and I will be speaking on the fourth panel of the workshop. The filing I am submitting today for that workshop was co-authored with my Mercatus colleagues Christopher Koopman and Matt Mitchell.