Search Results for “a la carte” – Technology Liberation Front

The “A La Carte” Wars Come to an End (April 12, 2019)

A decade ago, a heated debate raged over the benefits of “a la carte” (or “unbundling”) mandates for cable and satellite TV operators. Regulatory advocates said consumers wanted to buy TV channels individually to lower costs. The FCC under former Republican Chairman Kevin Martin came close to mandating a la carte regulation.

But the math just didn’t add up. A la carte mandates, many economists noted, would actually cost consumers just as much (or even more) once they repurchased all the individual channels they desired. And it wasn’t clear people really wanted a completely atomized one-by-one content shopping experience anyway.

Throughout media history, bundles of many different sorts have been used across many different sectors (books, newspapers, music, etc.). This was because consumers often enjoyed the benefits of getting diverse content delivered to them in an all-in-one package. Bundling also helped media operators create and sustain a diversity of content using creative cross-subsidization schemes. The traditional newspaper format and business model is perhaps the greatest example of media bundling: the classifieds and sports sections helped cross-subsidize hard news (especially local reporting). See this 2008 essay by Jeff Eisenach and me for more details on the economics of a la carte.

Yet, with the rise of cable and satellite television, some critics protested the use of bundles for delivering content. Even though it was clear that the incredible diversity of 500+ channels on pay TV was directly attributable to strong channels cross-subsidizing weaker ones, many regulatory advocates said we would be better off without bundles. Moreover, they said, online video markets could show us the path forward in the form of radically atomized content options and cheaper prices.

Flash-forward to today. As this Wall Street Journal article points out, online video providers are rejecting a la carte and recreating content bundles to keep a diversity of programming flowing. This happened in unregulated markets without any FCC rules. YouTube, Hulu, PlayStation, and many other online video providers are creating new bundles and monetization schemes.

It is also worth noting that this same sort of “re-bundling” of content is happening with online news sources and other digital platforms as various sites struggle to find content monetization schemes that can sustain diverse, high-quality content in the Digital Era. Content bundling and various paywall schemes are helping them do so.

The lesson here is that the economics of content creation and delivery are dynamic, challenging, and extremely hard to predict. Mandating “a la carte” unbundling of content sounded smart and well-intentioned to many people a decade ago, but it proved to be problematic even in highly competitive online markets. Thankfully, we did not mandate unbundling by law. We waited and watched to see how it naturally played out in various markets. We now have a better feel for how big a mistake mandatory a la carte likely would have been in practice.

Infrastructure Control as Innovation Regulation (August 10, 2018)

The ongoing ride-sharing wars in New York City are interesting to watch because they signal the potential move by state and local officials to use infrastructure management as an indirect form of innovation control or competition suppression. It is getting harder for state and local officials to defend barriers to entry and innovation using traditional regulatory rationales and methods, which are usually little more than a front for cronyist protectionism schemes. Now that the public has increasingly enjoyed new choices and better services in this and other fields thanks to technological innovation, it is very hard to convince citizens they would be better off without more of the same.

If, however, policymakers claim that they are limiting entry or innovation based on concerns about the supposed negative effects of disruptive actors on local infrastructure (in the form of traffic or sidewalk congestion, aesthetic nuisance, deteriorating infrastructure, etc.), that narrative can perhaps make it easier to sell the resulting regulations to the public or, more importantly, the courts. Going forward, I suspect that this will become a commonly used playbook for many state and local officials looking to limit the reach of new technologies, including ride-sharing companies, electric scooters, driverless cars, drones, and many others.

To be clear, infrastructure control is both (a) a legitimate state and local prerogative; and (b) something that has been used in the past to control innovation and entry in other sectors. But I suspect that this approach is about to become far more prevalent because a full-frontal defense of barriers to innovation is far more likely to face serious public and legal challenges. For example, limiting ride-sharing competition in NYC on the grounds that it hurts local taxi cartels is unappealing to citizens and the courts alike. So, NYC is now making it all about traffic congestion. Even if that regulatory rationale is bunk, it is a much harder narrative to counter in the court of public opinion or the courts of law. For that reason, we can expect more and more state and local governments to flip the narrative on innovation regulation in this fashion going forward.

How should defenders of innovation and competition respond to state and local efforts to use infrastructure control as an indirect form of innovation regulation? First, call them out on it if it really is just naked protectionism by another name. Second, to the extent there may be something to their asserted concerns about infrastructure problems, propose alternative solutions that do not freeze innovation and new entry outright. The best approach is to borrow a page out of Coase’s playbook and use smarter pricing and property rights solutions. Or perhaps use unique funding mechanisms for new and better infrastructure that could accommodate ongoing entry and innovation.

For example, my Mercatus colleague Salim Furth recently penned a column (“Let Private Companies Pay for More Bike Lanes”) in which he noted how the electric scooter company Bird has offered cities a dollar a day per scooter to help build protected bike lanes. In doing so, Furth notes, Bird is:

offering to enter the long tradition of private provision of public goods. The original subway lines were private. Private institutions have frequently built or maintained public parks. Radio broadcasts, a textbook example of a public good, are largely private in the US. Companies often provide public entertainment because they benefit from the attraction.

In a similar way, Uber has already supported usage-based road pricing to alleviate congestion. We could imagine still other examples like this for emerging technology companies. Drone manufacturers could help create or pay for “aerial sidewalks” or easements so they can deliver goods more efficiently. Scooter and dockless bike companies could help pay for bike and scooter paths, either directly or through promotional efforts. Driverless car fleet providers could help build or cover the cost of new parking garages, or pay for road improvements that would help make autonomous systems work better in local communities.

That is the pro-consumer, pro-innovation path forward. Hopefully, state and local officials will embrace such forward-looking reform ideas instead of seeking to indirectly control new entry and competition under the guise of infrastructure management.

How Should Privacy Be Defined? A Roadmap (August 6, 2018)

Privacy is an essentially contested concept. It evades clear definition, and when scholars do define it, they do so inconsistently. So, what are we to do with this fractured term? Ryan Hagemann suggests a bottom-up approach. Instead of beginning from definitions, we should be building a folksonomy of privacy harms:

By recognizing those areas in which we have an interest in privacy, we can better formalize an understanding of when and how it should be prioritized in relation to other values. By differentiating the harms that can materialize when it is violated by government as opposed to private actors, we can more appropriately understand the costs and benefits in different situations.

Hagemann aims to route around definitional problems by exploring the spaces where our interests intersect with the concept of privacy, in our relations to government, to private firms, and to other people. It is a subtle but important shift in outlook that is worth exploring.

Hagemann’s colleague Will Wilkinson laid out the benefits of this kind of philosophical exercise, which comes to me via Paul Crider. Wilkinson traces it back to the very beginnings of liberal thought, and his account takes a bit to wind up:

Thomas Reid, the Scottish Enlightenment philosopher, pointed out that there are two ways to construct an account of what it means to really know something, rather than just believing it to be true. The first way is to develop an abstract theory of knowledge—a general criterion that separates the wheat of knowledge from the chaff of mere opinion—and then see which of our opinions qualify as true knowledge. Reid noted that this method tends to lead to skepticism, because it’s hard, if not impossible, to definitively show that any of our opinions check off all the boxes these sort of general criteria tend to set out.

That’s why Descartes ends up in a pickle and Hume leaves us in a haze of uncertainty. It’s all a big mistake, Reid said, because the belief that I have hands, for example, is on much firmer ground than any abstract notions about the nature of true knowledge that I might dream up. If my theory implies that I don’t really know that I have hands, that’s a reason to reject the theory, not a reason to be skeptical about the existence of my appendages.

According to Reid, a better way to come up with a theory of knowledge is to make a list of the things we’re very sure that we really know. Then, we see if we can devise a coherent theory that explains how we know them.

The 20th century philosopher Roderick Chisholm called these two ways of theorizing about knowledge “methodism”—start with a general theory, apply it, and see what, if anything, counts as knowledge according to the theory—and “particularism”—start with an inventory of things that we’re sure we know and then build a theory of knowledge on top of it.

Hagemann is right to build privacy on the particularism of Wilkinson, Reid and Chisholm. Given the changing nature of technology, we should take a regular “inventory of things that we’re sure we know” about privacy and then build theories on top of it.

Indeed, privacy scholarship finds its genesis in this method. While many have gotten hung up on the rights talk in the “Right to Privacy”, Warren and Brandeis actually aim “to consider whether the existing law affords a principle which can properly be invoked to protect the privacy of the individual; and, if it does, what the nature and extent of such protection is.” The article looks to previous law to construct a principle for “recent inventions and business methods.” This is particularism applied to privacy.

Only a handful of court cases are actually reviewed in the article, the most important of which is Marian Manola v. Stevens & Myers. Marian Manola was a classically trained comic opera prima donna who had a string of altercations with her company, where Stevens was the manager. About a year before the case, the New York Times carried a story describing a dispute between Manola and another actor in the McCaull Opera Company. She refused to go on stage after the actor pushed her on stage, and Benjamin Stevens apparently “ignored her until she returned to her duty.” About a year later, Stevens set up the photographer Myers in a box as a stunt to boost sales. Manola sued both of them. Today, the case would be cited in the right-to-publicity literature.

Still, Warren and Brandeis were trying to survey the land of privacy harms and then build a principle on top of it.

Whether through particularism or methodism, these ways of constructing knowledge frame the moral ground, creating a field where privacy advocates and privacy scholars can converse. What unites these two groups, then, is their common rhetoric about the contours of privacy harms. And so, what constitutes a harm is still the central question in privacy policy.

Realities of Zero Rating and Internet Streaming Will Confront the FCC in 2016 (January 12, 2016)

For tech policy progressives, 2015 was a great year. After a decade of campaigning, network neutrality advocates finally got the Federal Communications Commission to codify regulations that require Internet service providers to treat all traffic the same as it crosses the network and is delivered to customers.

Yet the rapid way broadband business models, always tenuous to begin with, are being overhauled may throw some damp linens on their party. More powerful smartphones, the huge uptick in Internet streaming, and improved WiFi technology are just three factors driving this shift.

As regulatory mechanisms lag market trends in general, they can’t help but be upended along with the industry they aim to govern. Looking ahead to the coming year, the consequences of 2015’s regulatory activism will create some difficult situations for the FCC.

Zero rating will clash with net neutrality

The FCC’s biggest question will be whether “zero rating,” also known as “toll-free data,” is permissible under its new Open Internet rules. Network neutrality prohibits an ISP from favoring one provider’s content over another’s. Yet by definition, that’s what zero rating does: an ISP agrees not to count data generated by a specific content provider against a customer’s overall bandwidth cap. Looked at from another angle, instead of charging more for enhanced quality (the Internet “toll road” network neutrality is designed to prevent), zero rating offers a discount for downgraded transmission. ISPs, particularly bandwidth-constrained wireless companies, are replacing “all-you-can-eat” data with tiered pricing plans that place a monthly limit on total data used and assess additional charges on consumers who go beyond the cap. That makes zero rating agreements critical in allowing companies like Alphabet (formerly Google), Facebook, and Netflix, which were among the most vocal supporters of network neutrality, to keep users regularly engaged.
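To make the mechanics concrete, here is a minimal metering sketch (my illustration only; the partner names and the 10 GB cap are hypothetical, not drawn from any actual carrier’s plan): the carrier simply exempts partner traffic when tallying usage against the cap.

```python
# Hypothetical zero-rating meter: partner traffic is exempted from the cap.
ZERO_RATED = {"netflix.example", "hulu.example"}  # hypothetical partners
DATA_CAP_BYTES = 10 * 1024**3                     # hypothetical 10 GB monthly cap

def meter_usage(sessions):
    """Tally (provider, bytes) pairs; only non-partner traffic is metered."""
    metered = sum(b for host, b in sessions if host not in ZERO_RATED)
    unmetered = sum(b for host, b in sessions if host in ZERO_RATED)
    return metered, unmetered

metered, unmetered = meter_usage([("netflix.example", 6 * 1024**3),
                                  ("smallstartup.example", 6 * 1024**3)])
overage = max(0, metered - DATA_CAP_BYTES)  # overage charges hit only metered use
```

The same six gigabytes is free when it comes from a partner and counts against the cap when it comes from anyone else. That is precisely the differential treatment the Open Internet rules were written to police.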

T-Mobile has been aggressive with zero rating, having reached agreements with Netflix, Hulu, HBO Now, and SlingTV for its Binge On feature. Facebook, another network neutrality advocate, has begun lobbying for zero rating exceptions outside the U.S. Facebook founder and CEO Mark Zuckerberg told a tech audience in India, where net neutrality has been a long-standing rule, that zero rating is not a violation, a contention that some tech bloggers immediately challenged.

When it came to net neutrality rulings, the FCC may have hoped it would only have to deal with disputes dealing with the technical sausage-making covered by the “reasonable network management” clause in the Title II order (to be fair, zero rating involves some data optimization). But any ruling that permits zero rating would collapse its entire case for network neutrality. The Electronic Frontier Foundation, another vocal net neutrality supporter, understands this explicitly, and wants the FCC to nip zero rating in the bud.

The problem is that zero-rating is not anti-consumer, but a healthy, market-based response to bandwidth limitations. Even though ISPs are treating data differently, customers get access to more entertainment and content without higher costs. Bottom line: consumers get more for their money. For providers like Alphabet and Facebook, which rely on advertising, there stands to be substantial return on investment. Unlike blanket regulation, it’s voluntary, sensitive to market shifts and not coercive.

How long before the companies that lobbied for network neutrality begin their semantic gymnastics and demand exemptions for zero rating? The Court of Appeals may make the question moot by overturning Title II reclassification outright. But failing that, expect some of the big Silicon Valley tech companies to start their rhetorical games soon.

Internet streaming will confound the FCC

The zero rating controversy is just one more outgrowth of the rise in Internet streaming.

For the past seven years, the FCC’s regulatory policy has been based on the questionable assertion that cable and phone companies are monopoly bottlenecks.

Title II reclassification is aimed at preventing ISPs from using these perceived bottlenecks to extract higher costs from content providers. Yet at the same time, the FCC, in keeping with its cable/telco/ISPs-are-monopolies mindset, depends on them to fund its universal service and e-rate funds and fulfill its public interest mandate by carrying broadcast feeds from local television stations.

The simple fact is that the local telephone, cable and ISP bundlers are not monopolies. The 463,000 subscribers the top 8 cable companies lost in the second quarter of 2015 are getting their TV entertainment from somewhere. Those who are not cutting the cord completely are reducing their service: Another study estimated that 45 percent of U.S. households reduced the level of cable or satellite service in 2014.

Consumers are replacing their cable bundle with streaming options such as Roku, Amazon Fire, Apple TV and Google Play. These companies aggregate and optimize the Internet video for big screen TVs and home entertainment centers. Broadcast and basic cable programs are usually free (but carry ads); other programming can be purchased by subscription (Netflix, HBO Now) or on demand (iTunes, Amazon). While in many cases consumers retain their broadband connection, that remains their only purchase from the cable or telephone company. But even that might be optional, too. Millennial consumers are comfortable using free WiFi services or zero-rated wireless plans like T-Mobile’s Binge On.

But as consumers cut the cord, cable revenues go down. When cable revenues drop, so does the funding for all those FCC pet causes. The question is how hard the FCC will push to require streaming services to pay universal service fees or to include local TV feeds among their channel offerings. Under current law, the FCC has no regulatory jurisdiction over streaming applications unless, as with Title II, it tries to play fast and loose with legal definitions. The FCC has never been shy about overreaching, and as early as October 2014 Chairman Tom Wheeler suggested that IP video aggregators could be considered multichannel video programming distributors, a term that to date has been applied only to cable television companies.

Ironically, streaming stands to meet two long-held progressive policy goals—a la carte programming selection and structural separation of the companies that build and manage physical broadband networks from the companies that provide the applications that ride on them. Cable and Internet bundles are so 2012! Yet 2016 finds the FCC woefully unprepared for this shift. In fact, last we looked, it was encouraging small towns to borrow millions of dollars to get into the cable TV business.

Over the past seven years, the FCC has pursued Internet regulations from an ideological perspective, treating regulation as a necessary component of the overall business ecosystem. In truth, regulation is supposed to serve consumer interests and should be applied to address extant problems, not imposed as a precautionary measure. Unfortunately, the FCC has chosen to ignore market realities and apply rules that fit its own deliberate misperceptions. The Commission’s looming inability to find consistency in enforcing its own edicts is a problem solely of its own making.

What Should the FTC Do about State & Local Barriers to Sharing Economy Innovation? (May 12, 2015)

The Federal Trade Commission (FTC) is taking a more active interest in state and local barriers to entry and innovation that could threaten the continued growth of the digital economy in general and the sharing economy in particular. The agency recently announced it would be hosting a June 9th workshop “to examine competition, consumer protection, and economic issues raised by the proliferation of online and mobile peer-to-peer business platforms in certain sectors of the [sharing] economy.” Filings are due to the agency in this matter by May 26th. (Along with my Mercatus Center colleagues, I will be submitting comments and also releasing a big paper on reputational feedback mechanisms that same week. We have already released this paper on the general topic.)

Relatedly, just yesterday, the FTC sent a letter to Michigan policymakers about restricting entry by Tesla and other direct-to-consumer sellers of vehicles. Michigan passed a law in October 2014 prohibiting such direct sales. The FTC’s strongly-worded letter decries the state’s law as “protectionism for independent franchised dealers” noting that “current provisions operate as a special protection for dealers—a protection that is likely harming both competition and consumers.” The agency argues that:

consumers are the ones best situated to choose for themselves both the vehicles they want to buy and how they want to buy them. Automobile manufacturers have an economic incentive to respond to consumer preferences by choosing the most effective distribution method for their vehicle brands. Absent supportable public policy considerations, the law should permit automobile manufacturers to choose their distribution method to be responsive to the desires of motor vehicle buyers.

The agency cites the “well-developed body of research on these issues strongly suggests that government restrictions on distribution are rarely desirable for consumers” and the staff letter continues on to utterly demolish the bogus arguments set forth by defenders of the blatantly self-serving, cronyist law. (For more discussion of just how anti-competitive and anti-consumer these laws are in practice, see this January 2015 Mercatus Center study, “State Franchise Law Carjacks Auto Buyers,” by Jerry Ellig and Jesse Martinez.)

The FTC’s letter is another example of how the agency can take steps using its advocacy tools to explain to state and local policymakers how their laws may be protectionist and anti-consumer in character. Needless to say, this also has ramifications for how the agency approaches parochial restraints on entry and innovation affecting the sharing economy.

In our forthcoming Mercatus Center comments to the FTC for its June 9th sharing economy workshop, Christopher Koopman, Matt Mitchell, and I will address many issues related to the sharing economy and its regulation. Beyond addressing all five of the specific questions asked in the Commission’s workshop notice, we also include a discussion about “Federal Responses to Local Anticompetitive Regulations.” Down below I have reproduced the current rough draft of that section of our filing in the hope of getting input from others. Needless to say, the idea of the FTC aggressively using its advocacy efforts or even federal antitrust laws to address state and local barriers to trade and innovation will make some folks uncomfortable–especially on federalism grounds. But we argue that a good case can be made for the agency using both its advocacy and antitrust tools to address these issues. Let us know what you think.

The Federal Trade Commission possesses two primary tools to address public restraints of trade created by state and local authorities: advocacy and antitrust.[1]

Through its advocacy program, the Commission can provide specific comments to state and local officials regarding the effects of both proposed and existing regulations.[2] Commissioner Joshua Wright has noted that, “For many years, the FTC has used its mantle to comment on legislation and regulation that may restrain competition in a way that harms consumers.”[3] Thus, at a minimum, the Commission can and should shine light on parochial governmental efforts to restrain trade and limit innovation throughout the sharing economy.[4] By shining more light on state or local anti-competitive rules, the Commission will hopefully make governments, or their surrogate bodies (such as licensing boards), more transparent about their practices and more accountable for laws or regulations that could harm consumer welfare. However, to be successful, the Commission’s advocacy efforts depend upon the willingness of state and local legislators and regulators to heed its advice.[5]

The Commission has already used its advisory role in its recent guidance to state and local policymakers regarding the regulation of ridesharing services. The Commission noted then that “a regulatory framework should be responsive to new methods of competition,” and set forth the following vision regarding what it regards as the proper approach to parochial regulation of passenger transportation services:

Staff recommends that a regulatory framework for passenger vehicle transportation should allow for flexibility and adaptation in response to new and innovative methods of competition, while still maintaining appropriate consumer protections. [Regulators] also should proceed with caution in responding to calls for change that may have the effect of impairing new forms or methods of competition that are desirable to consumers. . . .  In general, competition should only be restricted when necessary to achieve some countervailing procompetitive virtue or other public benefit such as protecting the public from significant harm.[6]

This represents a reasonable framework for addressing concerns about parochial regulation of the sharing economy more generally.

Unfortunately, in areas relevant to the regulation of the sharing economy (e.g., taxicab regulations and rules governing home and apartment rentals) anticompetitive regulations have remained on the books—and in some instances have expanded—in spite of more than 30 years of Commission comment and advocacy.[7]  In fact, as Public Citizen noted in a recent Supreme Court filing:

[M]any more occupations are regulated than ever before, and most boards doing the regulating—in both traditional and new professions—are dominated by industry members who compete in the regulated market. Those board member-competitors, in turn, commonly engage in regulation that can be seen as anticompetitive self-protection. The particular forms anticompetitive regulations take are highly varied, the possibilities seemingly limited only by the imaginations of the board members.[8]

In these instances, the Commission’s antitrust enforcement authority may need to be utilized when its advocacy efforts fall short with regard to regulations that favor incumbents by limiting competition and entry.[9] Many academics have endorsed expanded antitrust oversight of public barriers to trade and innovation.[10] As Commissioner Wright has argued, “the FTC is in a good position to use its full arsenal of tools to ensure that state and local regulators do not thwart new entrants from using technology to disrupt existing marketplace.”[11] He notes specifically that he is “quite confident that a significant shift of agency resources away from enforcement efforts aimed at taming private restraints of trade and instead toward fighting public restraints would improve consumer welfare.”[12] We agree.

The Supreme Court’s recent decision in North Carolina State Board of Dental Examiners v. Federal Trade Commission made it clear that local authorities cannot claim broad immunity from federal antitrust laws.[13] This is particularly true, the Court noted, “where a State delegates control over a market to a nonsovereign actor,” such as a professional licensing board consisting primarily of members of the affected interest being regulated.[14] “Limits on state-action immunity are most essential when a State seeks to delegate its regulatory power to active market participants,” the Court held, “for dual allegiances are not always apparent to an actor and prohibitions against anticompetitive self-regulation by active market participants are an axiom of federal antitrust policy.”[15]

The touchstone of this case and the Court’s related jurisprudence in this area is political accountability.[16] State officials must (1) “clearly articulate” and (2) “actively supervise” licensing arrangements and regulatory bodies if they hope to withstand federal antitrust scrutiny.[17] The Court clarified this test in N.C. Dental holding that “the Sherman Act confers immunity only if the State accepts political accountability for the anticompetitive conduct it permits and controls.”[18] In other words, if state and local officials want to engage in protectionist activities that restrain trade in pursuit of some other countervailing objective, then they need to own up to it by being transparent about their anticompetitive intentions and then actively oversee the process after that to ensure it is not completely captured by affected interests.[19]

Some might argue that this does not go far enough to eradicate anti-competitive barriers to trade at the state or local level that could restrain the innovative potential of the sharing economy. While that may be true, some limits on the Commission’s federal antitrust discretion are necessary to avoid impinging upon legitimate state and local priorities.

Over time, it is our hope that by empowering the public with more options, more information and better ways to shine light on bad actors, the sharing economy will continue to make many of those old regulations unnecessary. Thus, in line with Commissioner Maureen Ohlhausen’s wise advice, the Commission should encourage state and local officials to exercise patience and humility as they confront technological changes that disrupt traditional regulatory systems.[20]

But when parochial regulators engage in blatantly anti-competitive activities that restrain trade, foster cartelization, or harm consumer welfare in other ways, the Commission can act to counter the worst of those tendencies.[21] The Commission’s standard of review going forward was appropriately articulated by Commissioner Wright recently when he noted that, “in the context of potentially disruptive forms of competition through new technologies or new business models, we should generally be skeptical of regulatory efforts that have the effect of favoring incumbent industry participants.”[22]

Such parochial protectionist barriers to trade and innovation will become even more concerning as the potential reach of so many sharing economy businesses grows larger. The boundary between intrastate and interstate commerce is sometimes difficult to determine for many sharing economy platforms. Clearly, much of the commerce in question occurs within the boundaries of a state or municipality, but sharing economy services also rely upon Internet-enabled platforms with a broader reach. To the extent state or local restrictions on sharing economy operations create negative externalities in the form of “interstate spillovers,” the case for federal intervention is strengthened.[23] It would be preferable if Congress chose to deal with such spillovers using its Commerce Clause authority (Art. 1, Sec. 8 of the Constitution),[24] but the presence of such negative externalities might also bolster the case for the Commission’s use of antitrust to address parochial restraints on trade.


[1]     See Maureen K. Ohlhausen, Reflections on the Supreme Court’s North Carolina Dental Decision and the FTC’s Campaign to Rein in State Action Immunity, before the Heritage Foundation, Washington, DC, March 31, 2015, at 19-20.

[2]     Id., at 20. (“The primary goal of such advocacy is to convince policymakers to consider and then minimize any adverse effects on competition that may result from regulations aimed at preventing various consumer harms.”) Also see James C. Cooper and William E. Kovacic, “U.S. Convergence with International Competition Norms: Antitrust Law and Public Restraints on Competition,” Boston University Law Review, Vol. 90, No. 4, (August 2010): 1582, “Competition advocacy helps solve consumers’ collective action problem by acting within the regulatory process to advocate for regulations that do not restrict competition unless there is a compelling consumer protection rationale for imposing such costs on citizens.”).

[3]     Joshua D. Wright, “Regulation in High-Tech Markets:  Public Choice, Regulatory Capture, and the FTC,” Remarks of Joshua D. Wright Commissioner, Federal Trade Commission at the Big Ideas about Information Lecture Clemson University, Clemson, South Carolina, April 2, 2015, at 15, https://www.ftc.gov/public-statements/2015/04/regulation-high-tech-markets-public-choice-regulatory-capture-ftc.

[4]     Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1610, (“Competition agencies could devote greater resources to conduct research to measure the effects of public policies that restrict competition. A research program could accumulate and analyze empirical data that assesses the consumer welfare effects of specific restrictions. Such a program could also assess whether the stated public interest objectives of government restrictions are realized in practice.”)

[5]     Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1582, (“The value of competition advocacy should be measured by (1) the degree to which comments altered regulatory outcomes times (2) the value to consumers of those improved outcomes. For all practical purposes, however, both elements are difficult to measure with any degree of certainty.”).

[6]     Federal Trade Commission, Staff Comments Before the Colorado Public Utilities Commission In The Matter of The Proposed Rules Regulating Transportation By Motor Vehicle, 4 Code of Colorado Regulations, (March 6, 2013), http://ftc.gov/os/2013/03/130703coloradopublicutilities.pdf.

[7]     Marvin Ammori, “Can the FTC Save Uber,” Slate, March 12, 2013, http://www.slate.com/articles/technology/future_tense/2013/03/uber_lyft_sidecar_can_the_ftc_fight_local_taxi_commissions.html (noting that, “not only does the FTC have the authority to take these cities to impartial federal courts and end their anticompetitive actions; it also has deep expertise in taxi markets and antitrust doctrines.”) Also see, Edmund W. Kitch, “Taxi Reform—The FTC Can Hack It,” Regulation, May/June 1984, http://object.cato.org/sites/cato.org/files/serials/files/regulation/1984/5/v8n3-3.pdf.

[8]     Brief of Amici Curiae Public Citizen in Support of Respondent, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 24.

[9]     Brief of Antitrust Scholars as Amici Curiae in Support of Respondent, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 6, 2014): 24, (“Antitrust review is entirely appropriate for curbing the excesses of occupational licensing because the anticompetitive effect has a similar effect on the market—and in particular consumers—as does traditional cartel activity.”)

[10]   See Mark A. Perry, “Municipal Supervision and State Action Antitrust Immunity,” The University of Chicago Law Review, Vol. 57, (Fall 1990): 1413-1445; William J. Martin, “State Action Antitrust Immunity for Municipally Supervised Parties,” The University of Chicago Law Review, Vol. 72, (Summer, 2005): 1079-1102; Jarod M. Bona, “The Antitrust Implications of Licensed Occupations Choosing Their Own Exclusive Jurisdiction,” University of St. Thomas Journal of Law & Public Policy, Vol 5, (Spring 2011): 28-51; Ingram Weber “The Antitrust State Action Doctrine and State Licensing Boards,” The University of Chicago Law Review, Vol. 79, (2012); Aaron Edlin and Rebecca Haw, “Cartels by Another Name:  Should Licensed Occupations Face Antitrust Scrutiny?,” University of Pennsylvania Law Review, Vol. 162, (2014): 1093-1164.

[11]   Wright, “Regulation in High-Tech Markets,” at 28-9.

[12]   Wright, “Regulation in High-Tech Markets,” at 29.

[13]   North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015).

[14]   Id.

[15]   Id. Also see Edlin & Haw, “Cartels by Another Name,” at 1143, (“Who could seriously argue that an unsupervised group of competitors appointed to regulate their own profession can be counted on to neglect their selfish interests in favor of the state’s?”); Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 3, (“Antitrust immunity for private parties who act under color of state law is especially problematic, given that anticompetitive conduct is most likely to occur when private parties are in a position to exploit government’s regulatory powers.”)

[16]   See Maureen K. Ohlhausen, Reflections on the Supreme Court’s North Carolina Dental Decision and the FTC’s Campaign to Rein in State Action Immunity, before the Heritage Foundation, Washington, DC, March 31, 2015, at 16, https://www.ftc.gov/public-statements/2015/03/reflections-supreme-courts-north-carolina-dental-decision-ftcs-campaign, (“states need to be politically accountable for whatever market distortions they impose on consumers.”); Edlin & Haw, “Cartels by Another Name,” at 1137, (“political accountability is the price a state must pay for antitrust immunity.”)

[17]   See Federal Trade Commission, Office of Policy and Planning, Report of the State Action Task Force (2003): 54, (“clear articulation requires that a state enunciate an affirmative intent to displace competition and to replace it with a stated criterion. Active supervision requires the state to examine individual private conduct, pursuant to that regulatory regime, to ensure that it comports with that stated criterion. Only then can the underlying conduct accurately be deemed that of the state itself, and political responsibility for the conduct fairly placed with the state.”) This test has been developed and refined in a variety of cases over the past 35 years. See: California Retail Liquor Dealers Ass’n v. Midcal Aluminum, Inc., 445 U.S. 97 (1980); Cmty. Comm’ns Co., Inc. v. City of Boulder, 455 U.S. 40, 48-51 (1982); City of Columbia v. Omni Outdoor Advertising, Inc., 499 U.S. 365 (1991); FTC v. Ticor Title Ins. Co., 504 U.S. 621 (1992).

[18]   North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015).

[19]   Edlin & Haw, “Cartels by Another Name,” at 1156. (“Requiring that the state place its imprimatur on regulation is at least better than the status quo, in which states too often delegate self-regulation to professionals and walk away.”) See also North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015) (“[Federal antitrust] immunity requires that the anticompetitive conduct of nonsovereign actors, especially those authorized by the State to regulate their own profession, result from procedures that suffice to make it the State’s own.”).

[20]  Maureen K. Ohlhausen, Commissioner, Fed. Trade Commission, “Regulatory Humility in Practice,” Remarks of the American Enterprise Institute, Washington, D.C. (April 1, 2015).

[21]   Edlin & Haw, “Cartels by Another Name,” at 1094, (“state action doctrine should not prevent antitrust suits against state licensing boards that are comprised of private competitors deputized to regulate and to outright exclude their own competition, often with the threat of criminal sanction.”). See also Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 2, 21, http://www.americanbar.org/content/dam/aba/publications/supreme_court_preview/BriefsV4/13-534_resp_amcu_plf-cato.authcheckdam.pdf, (noting that courts “should presume strongly against granting state-action immunity in antitrust cases.  It makes little sense to impose powerful civil and criminal punishments on private parties who are deemed to have engaged in anti-competitive conduct, while exempting government entities—or, worse, private parties acting under the government’s aegis—when they engage in the exact same conduct. . . . “Whatever one’s opinion of antitrust law in general, there is no justification for allowing states broad latitude to disregard federal law and erect private cartels with only vague instructions and loose oversight.”)

[22]   Wright, “Regulation in High-Tech Markets,” at 7.

[23]   FTC, Report of the State Action Task Force, 44, (“an unfortunate gap has emerged between scholarship and case law. Although many of the leading commentators have expressed serious concern regarding problems posed by interstate spillovers, their thinking has yet to take root in the law. Such spillovers undermine both economic efficiency and some of the same political representation values thought to be protected by principles of federalism.”); Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 13, (“Allowing states expansive power to exempt private actors from antitrust laws would also disrupt national economic policy by encouraging a patchwork of state-established entities licensed to engage in cartel behavior. This would disrupt interstate investment and consumer expectations, and would have spillover effects across state lines.”) Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1598, (“When a state exports the costs attendant to its anticompetitive regulatory scheme to those who have not participated in the political process, however, there is no political backstop; arguments for immunity based on federalism concerns are severely weakened, if not wholly eviscerated, in these situations.”)

[24]   See Adam Thierer, The Delicate Balance: Federalism, Interstate Commerce, and Economic Freedom in the Technological Age (Washington, DC: The Heritage Foundation, 1998): 81-118.

Net Neutrality and the Dangers of Title II (September 26, 2014)

There are several “flavors” of net neutrality–Eli Noam at Columbia University estimates there are seven distinct meanings of the term–but most net neutrality proponents agree that reinterpreting the 1934 Communications Act and “classifying” Internet service providers as Title II “telecommunications” companies is the best way forward. Proponents argue that ISPs are common carriers and therefore should be regulated much like common carrier telephone companies. Last week I filed a public interest comment about net neutrality and pointed out why the Title II option is unwise and possibly illegal.

For one, courts have defined “common carriers” in such a way that ISPs don’t look much like common carriers. It’s also unlikely that ISPs can be classified as telecommunications providers because Congress defines “telecommunications” as the transmission of information “between or among points specified by the user.” Phone calls are telecommunications because callers are selecting the endpoint–a person associated with the known phone number. Even simple web browsing, however, requires substantial processing by an ISP that often coordinates several networks, servers, and routers to bring the user the correct information, say, a Wikipedia article or Netflix video. Under normal circumstances, this process is completely mysterious to a user. By classifying ISPs as common carriers and telecommunications providers, therefore, the FCC invites immense legal risk.
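To see why the statutory phrase “points specified by the user” fits awkwardly, consider a small sketch (my illustration, not anything from the filing itself): the user supplies only a hostname, and the actual endpoints are chosen by DNS servers, content delivery networks, and the ISP’s routing.

```python
# Illustrative only: the user "specifies" a name, not network endpoints.
import socket

def endpoints_for(hostname, port=443):
    """Return the IP addresses a browser might actually connect to."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})  # sockaddr -> IP string

# The same hostname can resolve to different addresses for different
# users, times, and locations; none of them is chosen by the user.
print(endpoints_for("en.wikipedia.org"))
```

The transmission, in other words, runs between points the network picks, not points the user specifies.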

As I’ve noted before, prioritized data can provide consumer benefits, and stringent net neutrality rules would harm the development of new services on the horizon. Title II, in making the Internet more “neutral,” is anti-progress and is akin to trying to put the toothpaste back in the tube. The Internet has never been neutral, as computer scientist David Clark and others point out, and it’s getting less neutral all the time. VoIP phone service is already prioritized for millions of households. VoLTE will do the same for wireless phone customers.

It’s a largely unreported story that many of the most informed net neutrality proponents, including President Obama’s former chief technology officer, are fine with so-called “fast lanes”–particularly if it’s the user, not the ISP, selecting the services to be prioritized. There is general agreement that prioritized services are demanded by consumers, but Title II would have a predictable chilling effect on new services because of the regulatory burdens.

MetroPCS, for example, a small wireless carrier with about 3% market share, attempted to sell a purportedly non-neutral phone plan that allowed unlimited YouTube viewing and was pilloried for it by net neutrality proponents. MetroPCS, chastened, dropped the plan. Under Title II, a small ISP or wireless carrier wouldn’t dream of attempting such a thing.

In the comment, I note other undesirable effects of Title II, including that it undermines the position the US has held publicly for years that the Internet is different than traditional communications.

If the FCC further intermingles traditional telecommunications with broadband, it may increase the probability of the [International Telecommunications Union] extending sender-pays or other tariffing and tax rules to the exchange of Internet traffic. Several countries proposed instituting sender-pays at a contentious 2012 ITU forum and the United States representatives vigorously fought sender-pays for the Internet. Many developing countries, particularly, would welcome such a change in regulations, because, as Mercatus scholar Eli Dourado found, sender-pays rules “allow governments to export some of their statutory tax burden.” New foreign tariffing rules would function essentially as a transfer of wealth from popular US-based companies like Facebook and Google to corrupt foreign governments and telephone cartels.

Finally, I note that classifying ISPs as common carriers weakens the enforcement of antitrust and consumer protection laws. Generally, it is difficult to bring antitrust lawsuits in extensively regulated industries. After filing my comment, I learned that the FTC also filed a comment noting, similarly, that its Section 5 authority would be limited if the FCC goes the Title II route. Brian Fung and others have since written about this interesting political and legal development. This detrimental effect on antitrust enforcement should weigh against Title II regulation.

There are substantial drawbacks to Title II regulation of ISPs and the FCC should exercise regulatory humility and its traditional hands-off approach to the Internet. In the end, Title II would harm investment in nascent technologies and network upgrades. The harms to consumers and small carriers, particularly, would be immense. It almost makes one think that comedy sketches and “death of the Internet” reporting don’t lead to good public policy.

More Information

See my presentation (36 minutes) on net neutrality and “fast lanes” on the Mercatus website.

Problems with Precautionary Principle-Minded Tech Regulation & a Federal Robotics Commission (September 22, 2014)

If there are two general principles that unify my recent work on technology policy and innovation issues, they would be as follows. To the maximum extent possible:

  1. We should avoid preemptive and precautionary-based regulatory regimes for new innovation. Instead, our policy default should be innovation allowed (or “permissionless innovation”) and innovators should be considered “innocent until proven guilty” (unless, that is, a thorough benefit-cost analysis has been conducted that documents the clear need for immediate preemptive restraints).
  2. We should avoid rigid, “top-down” technology-specific or sector-specific regulatory regimes and/or regulatory agencies and instead opt for a broader array of more flexible, “bottom-up” solutions (education, empowerment, social norms, self-regulation, public pressure, etc.) as well as reliance on existing legal systems and standards (torts, product liability, contracts, property rights, etc.).

I was very interested, therefore, to come across two new essays that make opposing arguments and proposals. The first is this recent Slate oped by John Frank Weaver, “We Need to Pass Legislation on Artificial Intelligence Early and Often.” The second is Ryan Calo’s new Brookings Institution white paper, “The Case for a Federal Robotics Commission.”

Weaver argues that new robot technology “is going to develop fast, almost certainly faster than we can legislate it. That’s why we need to get ahead of it now.” In order to preemptively address concerns about new technologies such as driverless cars or commercial drones, “we need to legislate early and often,” Weaver says. Stated differently, Weaver is proposing “precautionary principle”-based regulation of these technologies. The precautionary principle generally refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

Calo argues that we need “the establishment of a new federal agency to deal with the novel experiences and harms robotics enables” since there exist “distinct but related challenges that would benefit from being examined and treated together.” These issues, he says, “require special expertise to understand and may require investment and coordination to thrive.”

I’ll address Weaver’s and Calo’s proposals in turn.

Problems with Precautionary Regulation

Let’s begin with Weaver’s proposed approach to regulating robotics and autonomous systems.

What Weaver seems to ignore—and which I discuss at greater length in my latest book—is that “precautionary” policy-making typically results in technological stasis and lost opportunities for economic and social progress. As I noted in my book, if we spend all our time living in constant fear of worst-case scenarios—and premising public policy upon such fears—it means that best-case scenarios will never come about. Wisdom and progress are born from experience, including experiences that involve risk and the possibility of occasional mistakes and failures. As the old adage goes, “nothing ventured, nothing gained.”

More concretely, the problem with “permissioning” innovation is that traditional regulatory policies and systems tend to be overly rigid, bureaucratic, costly, and slow to adapt to new realities. Precautionary-based policies and regulatory systems focus on preemptive remedies that aim to predict the future and head off hypothetical problems that may never come about. As a result, preemptive bans or highly restrictive regulatory prescriptions can limit innovations that yield new and better ways of doing things.

Weaver doesn’t bother addressing these issues. He instead advocates regulating “early and often” without stopping to think through the potential costs of doing so. Yet, all regulation has trade-offs and opportunity costs. Before we rush to adopt rules based on knee-jerk negative reactions to new technology, we should conduct comprehensive benefit-cost analysis of the proposals and think carefully about what alternative approaches exist to address whatever problems we have identified.

Incidentally, Weaver also does not acknowledge the contradiction inherent in his thinking when he says robotic technology “is going to develop fast, almost certainly faster than we can legislate it. That’s why we need to get ahead of it now.” Well, if robotic technology is truly developing “faster than we can legislate it,” then “getting out ahead of it” would be seemingly impossible! Unless, that is, he envisions regulating robotic technologies so stringently as to effectively bring new innovation to a grinding halt (or banning it altogether).

To be clear, my criticisms should not be read to suggest that zero regulation is the best option. There are plenty of thorny issues that deserve serious policy consideration and perhaps even some preemptive rules. But how potential harms are addressed matters deeply. We should exhaust all other potential nonregulatory remedies first — education, empowerment, transparency, etc. — before resorting to preemptive controls on new forms of innovation. In other words, ex post (or after the fact) solutions should generally trump ex ante (preemptive) controls.

I’ll say more on this point in the conclusion since my response addresses general failings in Ryan Calo’s Federal Robotics Commission proposal, to which we now turn.

Problems with a Federal Robotics Commission

Moving on to Calo, it is important to clarify what he is proposing because he is careful not to overstate his case in favor of a new agency for robotics. He elaborates as follows:

“The institution I have in mind would not “regulate” robotics in the sense of fashioning rules regarding their use, at least not in any initial incarnation. Rather, the agency would advise on issues at all levels—state and federal, domestic and foreign, civil and criminal—that touch upon the unique aspects of robotics and artificial intelligence and the novel human experiences these technologies generate. The alternative, I fear, is that we will continue to address robotics policy questions piecemeal, perhaps indefinitely, with increasingly poor outcomes and slow accrual of knowledge. Meanwhile, other nations that are investing more heavily in robotics and, specifically, in developing a legal and policy infrastructure for emerging technology, will leapfrog the U.S. in innovation for the first time since the creation of steam power.”

Here are some of my concerns with Calo’s proposed Federal Robotics Commission.

Will It Really Just Be an Advisory Body?

First, Calo claims he doesn’t want a formal regulatory agency, but something more akin to a super-advisory body. He does, however, sneak in that disclaimer that he doesn’t envision it to be regulatory “at least not in any initial incarnation.” Perhaps, then, he is suggesting that more formal regulatory controls would be in the cards down the road. It remains unclear.

Regardless, I think it is a bit disingenuous to propose the formation of a new governmental body like this and pretend that it will not someday soon come to possess sweeping regulatory powers over these technologies. Now, you may well feel that that is a good thing. But I fear that Calo is playing a bit of a game here by asking the reader to imagine his new creation would merely stick to an advisory role.

Regulatory creep is real. There just aren’t many examples of agencies being created solely for their advisory expertise that did not then get into the business of regulating the technology or topic included in the agency’s name. And in light of some of Calo’s past writing and advocacy, I can’t help but think he is actually hoping the agency comes to take on a greater regulatory role over time. Regardless, I think we can bank on that happening, and there are reasons to worry about it, as noted above and as I will elaborate below.

Incidentally, if Calo is really more interested in furthering just this expert advisory capacity, there are plenty of other entities (including non-governmental bodies) that could play that role. How about the National Science Foundation, for example? Or how about a multi-stakeholder body consisting of many different experts and institutions? I could go on, but you get the point. A single point of action is also a single point of failure. I don’t want just one big robotics bureaucracy making policy or even advising. I’d prefer a more decentralized approach, and one that doesn’t carry a (potential) big regulatory club in its hand.

Public Choice / Regulatory Capture Problems

Second, Calo underestimates the public choice problems of creating a sector-specific or technology-specific agency just for robotics. To his credit, he does admit that “agencies have their problems, of course. They can be inefficient and are subject to capture by those they regulate or other special interests.” He also notes he has criticized other agencies for various failings. But he does not say anything more on this point.

Let’s be clear. There exists a long and lamentable history of sector-specific regulators being “captured” by the entities they regulate. To read the ugly reality, see my compendium, “Regulatory Capture: What the Experts Have Found.” That piece documents what leading academics of all political stripes have had to say about this problem over the past century. No one ever summarized the nature and gravity of this problem better than the great Alfred Kahn in his masterpiece, The Economics of Regulation: Principles and Institutions (1971):

“When a commission is responsible for the performance of an industry, it is under never completely escapable pressure to protect the health of the companies it regulates, to assure a desirable performance by relying on those monopolistic chosen instruments and its own controls rather than on the unplanned and unplannable forces of competition. [. . . ] Responsible for the continued provision and improvement of service, [the regulatory commission] comes increasingly and understandably to identify the interest of the public with that of the existing companies on whom it must rely to deliver goods.” (pgs. 12, 46)

The history of the Federal Communications Commission (FCC) is highly instructive in this regard and was documented in a 66-page law review article I penned with Brent Skorup entitled, “A History of Cronyism and Capture in the Information Technology Sector,” (Journal of Technology Law & Policy, Vol. 18, 2013). Again, it doesn’t make for pleasant reading. Time and time again, instead of serving the “public interest,” the FCC served private interests. The entire history of video marketplace regulation is one of the most sickening examples to consider, since there are almost eight decades’ worth of case studies of the broadcast industry using regulation as a club to beat back new entry, competition, and innovation. [Skorup and I have another paper discussing that specific history and how to go about reversing it.] This history is important because, in the early days of the Commission, many proponents thought the FCC would be exactly the sort of “expert” independent agency that Calo envisions his Federal Robotics Commission would be. Needless to say, things did not turn out so well.

But the FCC isn’t the only guilty offender in this regard. Go read the history of how airlines so effectively cartelized their industry following World War II with the help of the Civil Aeronautics Board. Thankfully, President Jimmy Carter appointed Alfred Kahn to clean things up in the 1970s. Kahn, a life-long Democrat, came to realize that the problem of capture was so insidious and inescapable that abolition of the agency was the only realistic solution to make sure consumer welfare would improve. As a result, he and various other Democrats in the Carter Administration and in Congress worked together to sunset the agency and its hideously protectionist, anti-consumer policies. (Also, please read this amazing 1973 law review article on “Economic Regulation vs. Competition,” by Mark Green and Ralph Nader if you need even more proof of why this is such a problem.)

In other words, the problem of regulatory capture is not something one can casually dismiss. The problem is still very real and deserves more consideration before we propose creating new agencies, even “advisory” agencies. At a minimum, when proposing new agencies, you need to get serious about what sort of institutional constraints you would put in place to make sure that history does not repeat itself. Because if you don’t, various large, well-heeled, and politically connected robotics companies could come to capture any new “Federal Robotics Commission” in very short order.

Can We Clean Up Old Messes Before Building More Bureaucracies?

Third, speaking of agencies, if the alphabet soup of regulatory agencies we already have in place is not capable of handling “robotics policy” right now, can we talk about reforming them (or perhaps even getting rid of a few of them) first? Why must we pile yet another sector-specific or technology-specific regulator on top of the many that already exist? That’s just a recipe for more red tape and potential regulatory capture. Unless you believe there is value in creating bureaucracy for the sake of creating bureaucracy, there is no excuse for not phasing out agencies that have failed in their original mission, or whose mission is now obsolete, for whatever reason. This is a fundamental “good government” issue that politicians and academics of all stripes should agree on.

Calo indirectly addresses this point by noting that “we have agencies devoted to technologies already and it would be odd and anomalous to think we are done creating them.” Curiously, however, he spends no time talking about those agencies or asking whether they have done a good job. Again, the heart of Calo’s argument comes down to the assertion that another specialized, technology-specific “expert” agency is needed because there are “novel” issues associated with robotics. Well, if it is true, as Calo suggests, that we have been down this path before (and we have), and if you believe our economy or society has been made better off for it, then you need to prove it. Because the objection to creating another regulatory bureaucracy is not based simply on distaste for Big Government; it comes down to two simple questions: (1) Do these things work? And (2) is there a better alternative?

This is where Calo’s proposal falls short. There is no effort to prove that technocratic or “scientific” bureaucracies, on net, are worth their expense (to taxpayers) or cost (to society, innovation, etc.) when compared to alternatives. Of course, I suspect this is where Calo and I might part ways regarding what metrics we would use to gauge success. I’ll save that discussion for another day and shift to what I regard as the far more serious deficiency of Calo’s proposal.

Do We Become Global Innovation Leaders Through Bureaucratic Direction?

Fourth, and most importantly, Calo does not offer any evidence to prove his contention that we need a sector-specific or technology-specific agency for robotics in order to develop or maintain America’s competitive edge in this field. Moreover, he does not acknowledge how his proposal might have the exact opposite result. Let me spend some time on this point because this is what I find most problematic about his proposal.

In his latest Brookings essay and his earlier writing about robotics, Calo keeps suggesting that we need a specialized federal agency for robotics to avoid “poor outcomes” due to the lack of “a legal and policy infrastructure for emerging technology.” He even warns us that other countries that are looking into robotics policy and regulation more seriously “will leapfrog the U.S. in innovation for the first time since the creation of steam power.”

Well, on that point, I must ask: Did America need a Federal Steam Agency to become a leader in that field? Because unless I missed something in history class, steam power developed fairly rapidly in this country without any centralized bureaucratic direction. Or how about a more recent example: Did America need a Federal Computer Commission or Federal Internet Commission to obtain or maintain a global edge in computing, the Internet, or the Digital Economy?

To the contrary, we took the EXACT OPPOSITE approach. It’s not just that no new agencies were formed to guide the development of computing or the Internet in this country. It’s that our government made a clear policy choice to break with the past by rejecting top-down, command-and-control regulation by unelected bureaucrats in some shadowy Beltway agency.

Incidentally, it was Democrats who accomplished this. While many Republicans today love to crack wise-ass comments about Al Gore and the Internet while simultaneously imagining themselves to be the great defenders of Internet freedom, the reality is that we have the Clinton Administration and one of its most liberal members—Ira Magaziner—to thank for the most blessedly “light-touch,” market-oriented innovation policy the world has ever seen.

What did Magaziner and the Clinton Administration do? They crafted the amazing 1997 Framework for Global Electronic Commerce, a statement of the Administration’s principles and policy objectives toward the Internet and the emerging digital economy. It recommended reliance upon civil society, contractual negotiations, voluntary agreements, and ongoing marketplace experiments to solve information age problems. First, “the private sector should lead. The Internet should develop as a market driven arena not a regulated industry,” the Framework recommended. “Even where collective action is necessary, governments should encourage industry self-regulation and private sector leadership where possible.” Second, “governments should avoid undue restrictions on electronic commerce” and “parties should be able to enter into legitimate agreements to buy and sell products and services across the Internet with minimal government involvement or intervention.”

I’ve argued elsewhere that the Clinton Administration’s Framework “remains the most succinct articulation of a pro-freedom, innovation-oriented vision for cyberspace ever penned.” Of course, this followed the Administration’s earlier move to allow the full commercialization of the Internet, which was even more important. The policy disposition they established with these decisions resulted in an unambiguous green light for a rising generation of creative minds who were eager to explore this new frontier for commerce and communications. And, to reiterate, they did it without any new bureaucracy.

If You Regulate “Robotics,” You End Up Regulating Computing & Networking

Incidentally, I do not see how we could create a new Federal Robotics Commission without it also becoming a de facto Federal Computing Commission. Robotics — along with the many technologies and industries it already includes (driverless cars, commercial drones, the Internet of Things, etc.) — is becoming a hot policy topic, and proposals for regulation are already flying. These robotic technologies are developing on top of the building blocks of the Information Revolution: microprocessors, wireless networks, sensors, “big data,” and so on.

Thus, I share Cory Doctorow’s skepticism about how one could logically separate “robotics” from these other technologies and sectors for regulatory purposes:

I am skeptical that “robot law” can be effectively separated from software law in general. … For the life of me, I can’t figure out a legal principle that would apply to the robot that wouldn’t be useful for the computer (and vice versa).

In his Brookings paper, Calo responded to Doctorow’s concern as follows:

the difference between a computer and a robot has largely to do with the latter’s embodiment. Robots do not just sense, process, and relay data. Robots are organized to act upon the world physically, or at least directly. This turns out to have strong repercussions at law, and to pose unique challenges to law and to legal institutions that computers and the Internet did not.

I find this fairly unconvincing. Just because robotic technologies have a physical embodiment does not mean their impact on society is all that much more profound than that of computing, the Internet, and digital technologies. Consider all the hand-wringing going on today in cybersecurity circles about how hacking, malware, or various other types of digital attacks could take down entire systems or economies. I’m not saying I buy all that “technopanic” talk (and here are about three dozen of my essays arguing the contrary), but the theoretical ramifications are nonetheless on par with dystopian scenarios about robotics.

The Alternative Approach

Of course, it certainly may be the case that some worst-case scenarios are worth worrying about in both cases—for robotics and computing, that is. Still, is a Federal Robotics Commission or a Federal Computing Commission really the sensible way to address those issues?

To the contrary, this is why we have a Legislative Branch! So many of the problems of our modern era of dysfunctional government are rooted in an unwise delegation of authority to administrative agencies. Far too often, congressional lawmakers delegate broad, ambiguous authority to agencies instead of facing up to the hard issues themselves. This results in waste, bloat, inefficiencies, and an endless passing of the buck.

There may very well be some serious issues raised by robotics and AI that we cannot ignore, and which may even require a little preemptive, precautionary policy. And the same goes for general computing and the Internet. But that is not a good reason to just create new bureaucracies in the hope that some set of mythical technocratic philosopher kings will ride in to save the day with their supposed greater “expertise” about these matters. Either you believe in democracy or you don’t. Running around calling for agencies and unelected bureaucrats to make all the hard choices means that “the people” have even less of a say in these matters.

Moreover, there are many other methods of dealing with robotics and the potential problems robotics might create than through the creation of new bureaucracy. The common law already handles many of the problems that both Calo and Weaver are worried about. To the extent robotic systems are involved in accidents that harm individuals or their property, product liability law will kick in.

On this point, I strongly recommend another new Brookings publication. John Villasenor’s outstanding April white paper, “Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation,” correctly argues that,

“when confronted with new, often complex, questions involving products liability, courts have generally gotten things right. … Products liability law has been highly adaptive to the many new technologies that have emerged in recent decades, and it will be quite capable of adapting to emerging autonomous vehicle technologies as the need arises.”

Thus, instead of trying to micro-manage the development of robotic technologies in an attempt to plan for every hypothetical risk scenario, policymakers should be patient while the common law evolves and liability norms adjust. Traditionally, the common law has dealt with products liability and accident compensation in an evolutionary way through a variety of mechanisms, including strict liability, negligence, design defects law, failure to warn, breach of warranty, and so on. There is no reason to think the common law will not adapt to new technological realities, including robotic technologies. (I address these and other “bottom-up” solutions in my new book.)

In the meantime, let’s exercise some humility and restraint here and avoid heavy-handed precautionary regulatory regimes or the creation of new technocratic bureaucracies. And let’s not forget that many solutions to the problems created by new robotic technologies will develop spontaneously and organically over time as individuals and institutions learn to cope and “muddle through,” as they have many times before.


Crovitz Nails It on Software Patents and the Federal Circuit https://techliberation.com/2013/12/16/crovitz-nails-it-on-software-patents-and-the-federal-circuit/ https://techliberation.com/2013/12/16/crovitz-nails-it-on-software-patents-and-the-federal-circuit/#respond Mon, 16 Dec 2013 16:38:42 +0000 http://techliberation.com/?p=73994

Gordon Crovitz has an excellent column in today’s Wall Street Journal in which he accurately diagnoses the root cause of our patent litigation problem: the Federal Circuit’s support for extensive patenting in software.

Today’s patent mess can be traced to a miscalculation by Jimmy Carter, who thought granting more patents would help overcome economic stagnation. In 1979, his Domestic Policy Review on Industrial Innovation proposed a new Federal Circuit Court of Appeals, which Congress created in 1982. Its first judge explained: “The court was formed for one need, to recover the value of the patent system as an incentive to industry.” The country got more patents—at what has turned out to be a huge cost. The number of patents has quadrupled, to more than 275,000 a year. But the Federal Circuit approved patents for software, which now account for most of the patents granted in the U.S.—and for most of the litigation. Patent trolls buy up vague software patents and demand legal settlements from technology companies. Instead of encouraging innovation, patent law has become a burden on entrepreneurs, especially startups without teams of patent lawyers.

I was pleased that Crovitz cites my new paper with Alex Tabarrok:

A system of property rights is flawed if no one can know what’s protected. That’s what happens when the government grants 20-year patents for vague software ideas in exchange for making the innovation public. In a recent academic paper, George Mason researchers Eli Dourado and Alex Tabarrok argued that the system of “broad and fuzzy” software patents “reduces the potency of search and defeats one of the key arguments for patents, the dissemination of information about innovation.”

Current legislation in Congress makes changes to patent trial procedure in an effort to reduce the harm caused by patent trolling. But if we really want to solve the trolling problem once and for all, and to generally have a healthy and innovative patent system, we need to get at the problem of low-quality patents, especially in software. The best way to do that is to abolish the Federal Circuit, which has consistently undermined limits on patentable subject matter.

3 Cell Phone Unlocking Bills Introduced—What Would They Accomplish? https://techliberation.com/2013/03/16/3-cell-phone-unlocking-bills-introduced-what-would-they-accomplish/ https://techliberation.com/2013/03/16/3-cell-phone-unlocking-bills-introduced-what-would-they-accomplish/#respond Sat, 16 Mar 2013 07:49:26 +0000 http://techliberation.com/?p=44006

In the past couple weeks, three bills addressing the legality of cell phone unlocking have been introduced in the Senate:

  • Sens. Leahy, Grassley, Franken, and Hatch’s “Unlocking Consumer Choice and Wireless Competition Act” (S.517)
  • Sen. Ron Wyden’s “Wireless Device Independence Act” (S.467)
  • Sen. Amy Klobuchar’s “Wireless Consumer Choice Act” (S.481)

This essay will explain how these bills would affect users’ ability to lawfully unlock their cell phones.

Background

If you buy a new cell phone from a U.S. wireless carrier and sign a multi-year service contract, chances are your phone is “locked” to your carrier. This means if you want to switch carriers, you’ll first need to unlock your phone. Your original carrier may well be happy to lend you a helping hand—but, if not, unlocking your phone may violate federal law.

The last few months have seen an explosion of public outcry over this issue, with a recent White House “We the People” petition calling for the legalization of cell phone unlocking garnering over 114,000 signatures—and a favorable response from the Obama administration. The controversy was sparked in October 2012, when a governmental ruling (PDF) announced that unlocking cell phones purchased after January 26, 2013 would violate a 1998 federal law known as the Digital Millennium Copyright Act (the “DMCA”).

Under this law’s “anti-circumvention” provisions (17 U.S.C. §§ 1201-05), it is generally illegal to “circumvent a technological measure” that protects a copyrighted work. Violators are subject to civil penalties and, in serious cases, criminal prosecution.

However, the law includes an escape valve: it empowers the Librarian of Congress, in consultation with the Register of Copyrights, to periodically determine if any users’ “ability to make noninfringing uses . . . of a particular class of copyrighted works” is adversely affected by the DMCA’s prohibition on circumventing access controls. Based on these determinations, the Librarian may promulgate rules exempting classes of circumvention from the DMCA’s ban.

One such exemption, originally granted in 2006 and renewed in 2010, permits users to unlock their cell phones without their carrier’s permission. (You may be wondering why phone unlocking is considered an access control circumvention—it’s because unlocking requires the circumvention of limits on user access to a mobile phone’s bootloader or operating system, both of which are usually copyrighted.)

But late last year (2012), when the phone unlocking exemption came up for its triennial review, the landscape had evolved regarding a crucial legal question: do cell phone owners own a copy of the operating system software installed on their phone, or are they merely licensees of the software?

Until a few years ago, the leading authority on what it means to own a copy of a computer program was the 2nd Circuit’s 2005 opinion in Krause v. Titleserv, Inc., 402 F.3d 119. There, the court held that a person owns a copy of software if he “exercises sufficient incidents of ownership over a copy of the program to be sensibly considered the owner of the copy . . . .” As the Copyright Office noted in its 2012 recommendation to the Librarian of Congress, the 2006 and 2010 rules exempting cell phone unlocking from the DMCA reflected an understanding, based in part on the holding in Krause, that a typical cell phone owner exercises a level of dominion over her device (and its digital contents) more akin to traditional property ownership than the licensed use of property owned by another.

But in 2010, the 9th Circuit took a very different approach in Vernor v. Autodesk, Inc., 621 F.3d 1102, in which the court held that a “software user is a licensee rather than an owner of a copy where the copyright owner (1) specifies that the user is granted a license; (2) significantly restricts the user’s ability to transfer the software; and (3) imposes notable use restrictions.” Because a typical cell phone owner is bound by a “click-wrap” agreement that significantly restricts her ownership rights in her phone’s operating system, she’s arguably a licensee of the software—not an owner of a copy—according to Vernor.

In light of the Vernor-Krause circuit split, combined with a pronounced trend toward more permissive carrier unlocking policies in recent years, the Librarian of Congress substantially curtailed the exemption for cell phone unlocking for all new phones purchased after January 26, 2013. Today, an owner of a new phone may unlock it only if “the operator of the wireless communications network to which the handset is locked has failed to unlock it within a reasonable period of time following a request by the owner of the wireless telephone handset, and when circumvention is initiated by the owner, an individual consumer, who is also the owner of the copy of the computer program in such wireless telephone handset . . . .”

So it is that cell phone unlocking is now in many cases a violation of federal law. (For more background, check out the writings of Timothy Lee at Ars Technica, Derek Khanna at The Atlantic, and Mike Masnick at Techdirt.)

How would the bills recently introduced in Congress address the cell phone unlocking issue? Let’s take a look at each bill.

The Unlocking Consumer Choice and Wireless Competition Act

To begin with the simplest of the cell phone unlocking bills, Sens. Leahy, Grassley, Franken, and Hatch’s Unlocking Consumer Choice and Wireless Competition Act (S.517) would simply amend the Code of Federal Regulations, replacing the pertinent paragraph from the Librarian of Congress’s 2012 rulemaking (codified at 37 C.F.R. § 201.40(b)(3)) with its more permissive 2010 analogue. The bill also tasks the Librarian of Congress with determining whether to extend the unlocking exemption to other wireless devices (e.g., mobile broadband-enabled tablets), based on the DMCA’s usual rulemaking criteria.

By restoring the broad DMCA exemption for phone unlocking in force from 2006 to 2012, S.517 addresses the problem at hand without going too far. It neither forces carriers to help users unlock their phones, nor limits carriers’ ability to recover damages from subscribers who breach their contracts. Rather, the bill would simply shield users who unlock their cell phones from the DMCA’s harsh penalties. In striking this balance, S.517 deserves credit for aiming to solve a discrete problem with a narrowly-tailored solution.

But would S.517’s fix last? Given that “[n]othing in [the] Act alters . . . the authority of the Librarian of Congress under [the DMCA],” S.517 would presumably leave unchanged the substantial deference enjoyed by the Librarian regarding his decisions about which circumvention tools to exempt—including cell phone unlocking tools. If, three years from now, the Librarian boldly decides that his 2012 decision to curtail the phone unlocking exemption was correct, and thus restores the language currently in force, Congress will be back at square one.

For a more lasting solution, Congress could act under the Congressional Review Act (“CRA”) to pass a resolution expressing its disapproval of the Librarian’s 2012 rule. If both houses of Congress were to pass such a resolution, and the President were to sign it, the narrow cell phone unlocking rule would be nullified—permanently. And the Librarian couldn’t simply reissue the rule, as a rule nullified under the CRA “may not be reissued in substantially the same form.” 5 U.S.C. § 801(b)(2).

Admittedly, this would be a novel use of the CRA. Congress has historically used the law’s disapproval procedure to review rules promulgated by “ordinary” federal agencies (i.e., agencies that are entirely within the Executive Branch). Nevertheless, the Library of Congress is arguably an “agency” for purposes of the CRA insofar as it promulgates rules of general applicability. As the D.C. Circuit recently held in Intercollegiate Broad. Sys., Inc. v. Copyright Royalty Bd., when the Library of Congress exercises its “powers . . . to promulgate copyright regulations . . . the Library is undoubtedly a ‘component of the Executive Branch.'” 684 F.3d 1332, 1341-42 (D.C. Cir. 2012) (citing Free Enterprise Fund v. Public Company Accounting Oversight Bd., 130 S.Ct. 3138, 3163 (2010)).

The Wireless Device Independence Act

Sen. Ron Wyden’s Wireless Device Independence Act (S.467) is the only cell phone unlocking bill that actually amends the DMCA. It would add to section 1201 a clause specifying that modifying software on a mobile device so that it operates on a different network is exempt from the law. While his colleagues dance around the underlying problem—the DMCA itself—Sen. Wyden tackles it head-on. To his credit, this approach embodies Congress exercising its proper constitutional role. If the legislative branch is dissatisfied with how an agency has exercised its statutorily delegated authority, the legislature ought to respond by amending the agency’s enabling statute.

However, S.467 contains a potentially massive loophole: it only exempts from DMCA liability “user[s] [who] legally own[] a copy of the computer program” installed on their mobile phone. In other words, the bill would do nothing for users who are mere licensees of the software installed on their phone. This may not matter for residents of the three states under the jurisdiction of the Second Circuit, where Krause controls—but for cell phone owners in the Ninth Circuit, where Vernor controls, S.467 is unlikely to offer much relief. Because most mobile operating systems are accompanied by click-wrap contracts that impose significant use and transfer restrictions on users, under Vernor these users are considered licensees, rather than owners of a copy of the operating system.

If the Wireless Device Independence Act were enacted, therefore, most Americans wishing to unlock their cell phones would still face significant legal uncertainty regarding their potential liability under the DMCA. To remedy this, the bill could extend its safe harbor to encompass cell phone unlocking by licensees, as well as owners, of software.

The Wireless Consumer Choice Act

Sen. Amy Klobuchar, along with Sens. Mike Lee and Richard Blumenthal, takes a very different approach from her colleagues in the Wireless Consumer Choice Act (S.481). The bill’s full text is worth posting (PDF):

Pursuant to its authorities under title III of the Communications Act of 1934 . . . the [FCC], not later than 180 days after the date of enactment of this Act, shall direct providers of commercial mobile services and commercial mobile data services to permit the subscribers of such services, or the agent of such subscribers, to unlock any type of wireless device used to access such services. Nothing in this Act alters, or shall be construed to alter, the terms of any valid contract between a provider and a subscriber.

Note the absence of any explicit amendments to the DMCA or related regulations, or any mention of circumvention tools. Instead, the bill empowers the FCC to regulate carriers’ unlocking policies, yet leaves the DMCA intact. This drafting decision has led some commentators to pan the legislation, questioning its effectiveness and scope.

While I too have serious concerns about S.481, I think Sina Khanifar (who started the White House petition about cell phone unlocking) may be incorrect to suggest the bill “doesn’t do anything at all.” It seems to me that S.481 would alter the DMCA’s unwritten contours, albeit in narrow ways.

How can a law that doesn’t even mention the DMCA effectively “rewrite” its anti-circumvention provisions? Consider that S.481 and the DMCA’s section 1201 both purport to deal with the subject of cell phone unlocking. To borrow a term from legal Latin, the two laws are in pari materia (“upon the same subject”). While section 1201 focuses on the general issue of circumvention of copyright access controls without mentioning cell phone unlocking, S.481 specifically and exclusively addresses cell phone unlocking.

So how would a court reconcile S.481 with section 1201 if a mobile subscriber were sued for unlocking his cell phone despite his full compliance with the carrier’s service contract? Here’s an excerpt from the leading treatise on statutory interpretation, Sutherland Statutory Construction, summarizing how courts have historically sought to reconcile incompatible statutes:

Where one statute deals with a subject in general terms and another deals with a part of the same subject in a more detailed way, the two should be harmonized if possible. But if two statutes conflict, the general statute must yield to the specific statute involving the same subject . . . .

2B Sutherland Statutory Construction § 51:5 (7th ed.) (internal citations omitted).

The DMCA, it seems, must yield to S.481—at least as far as contractually-authorized cell phone unlocking is concerned. As Sean Flaim points out, if you unlock your phone with help from your carrier, it cannot be said that you’ve “circumvented” a technological measure. Thus, under S.481, carriers would lose their existing ability under the DMCA (17 U.S.C. § 1203) to sue a subscriber who has unlocked his phone without breaching his service contract. Similarly, the law might deny the DMCA’s civil remedies to other rights holders—say, mobile operating system creators—against consumers who unlock their phones without breaching any contractual provisions. S.481 also purports to eliminate criminal liability in such situations; as Sen. Mike Lee explained in a joint statement announcing the bill, “[c]onsumers shouldn’t have to fear criminal charges if they want to unlock their cell phones and switch carriers.”

But courts could just as well construe S.481 to effect none of these changes. There is no such thing as stare decisis when it comes to statutory construction. If Congress wanted to alter the DMCA, courts might reason, Congress would have done just that. S.481 simply requires that carriers help off-contract subscribers unlock their phones, so why read into the statute a meaning that conflicts with other laws?

Perhaps there are persuasive reasons for trying to tweak the DMCA without actually amending the law, but I’m not aware of any. Given how widely courts vary in interpreting vague statutes, it’s awfully risky to gamble that judges reviewing S.481 would correctly divine Congress’s intent if it becomes law.

Another worrisome aspect of S.481 is its expansion of the FCC’s regulatory authority to encompass cell phone unlocking. While this grant of authority may seem innocuous, Congress should think twice before involving the FCC in mobile carriers’ decisions about when to permit subscribers to unlock their phones. If the FCC is tasked with policing carriers’ policies regarding cell phone unlocking, the agency might interpret this narrow grant of jurisdiction as a grant of “ancillary authority” to dictate the contours of mobile service contracts (not that the FCC isn’t already eager to regulate this space). The FCC is notorious for taking an extremely broad view of its own powers; as the Electronic Frontier Foundation has warned, the FCC’s willingness to overreach “raises the specter of discretionary FCC regulation of the Internet not just in the area of net neutrality, but also in a host of other areas.”

Given the FCC’s historically limited understanding of how markets work, unleashing it on the wireless industry is especially unwise. This isn’t a market in need of regulation; in fact, consumers enjoy plenty of choices among devices, carriers, and payment plans. If you want to buy the latest smartphone sans carrier lock, chances are you can order it today and have it on your doorstep tomorrow. If anything, Congress should be exploring ways to shrink the FCC’s role in the mobile communications space, among others.

Conclusion

Like co-liberator Jerry Brito, I think the ideal public policy approach to cell phone unlocking is fairly straightforward. If I own a cell phone, I should be free to modify its software (or hardware) so that it works on any carrier’s network—unless I’ve agreed in contract not to unlock my phone. If I go ahead and unlock my phone anyway, I owe my carrier compensation for its damages resulting from my breach—which are typically specified in advance in the form of an early termination fee. If the contract doesn’t specify an early termination fee, I owe my carrier damages equal to the amount necessary to put the carrier in the same position it would have been in had I held up my end of the bargain. This is the common law in action, simple yet elegant.
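For the no-ETF case, that compensation is just expectation damages, and the arithmetic is simple. Here is a toy version in Python (every number is invented; this is a sketch of the common-law rule, not any carrier’s actual terms):

```python
# Toy expectation-damages arithmetic (all numbers hypothetical).
term_months, margin_per_month = 24, 25   # contract length; carrier's expected monthly margin
months_served = 12                       # I unlock and leave halfway through

# Expectation damages put the carrier where full performance would have left it:
# the margin it expected to earn over the months I skipped out on.
damages = (term_months - months_served) * margin_per_month
print(damages)  # 300 -- roughly the figure a declining early-termination fee tracks
```

Real early termination fees typically decline over the life of the contract for exactly this reason: the carrier’s unrecovered subsidy shrinks with each month served.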

Notice that the approach I’ve outlined makes no mention of the Copyright Act. That a particular type of wrongful conduct happens to involve a copyrighted work doesn’t necessarily make it proper to invoke the copyright laws. While I support robust copyright protection, tweaking the operating software installed on my own phone so that it will operate on my preferred mobile carrier’s network is a far cry from actionable copyright infringement. The potential market for Apple’s iOS, Google’s Android, or Windows Phone 8 suffers no adverse effect if a user unlocks her smartphone so she can switch carriers. As the Copyright Office explained in 2006:

[T]he access controls do not appear to actually be deployed in order to protect the interests of the copyright owner or the value or integrity of the copyrighted work; rather, they are used by wireless carriers to limit the ability of subscribers to switch to other carriers, a business decision that has nothing whatsoever to do with the interests protected by copyright.

This is not to say that carriers are wrong to limit some subscribers’ ability to switch networks. To the contrary, American consumers enjoy substantial benefits thanks to the availability of carrier-subsidized, locked cell phones, as George Ford, Thomas Koutsky, and Larry Spiwak argue in A Policy and Economic Exploration of Wireless Carterfone Regulation, 25 Santa Clara Computer & High Tech. L.J. 647 (2009). The question is thus not whether consumers should be permitted to unlock their cell phones, but what legal regime(s) should deter wrongful unlocking. As Jerry rightly argues, contract law affords mobile carriers a far more appropriate set of remedies for wrongful unlocking than the Copyright Act does.

Cell phone unlocking may be a fairly clear-cut issue, but the broader debate over whether, and to what extent, federal laws should ban tools that circumvent technological measures protecting copyrighted works is anything but straightforward. Critics of the DMCA’s anti-circumvention provisions offer powerful arguments why Congress shouldn’t be in the business of banning technologies, but there remains a fine line between selling lock-picking tools and helping people unlawfully pick locks. In a forthcoming essay, I’ll explore the anti-circumvention debate in greater detail.

For a scholarly treatment of the interplay between the DMCA and cell phone unlocking, check out Daniel J. Corbett’s article, Would You Like That iPhone Locked or Unlocked?: Reconciling Apple’s Anticircumvention Measures with the DMCA, 8 U. Pitt. J. Tech. L. Pol’y 8 (2008).

The free market case for cell phone unlocking https://techliberation.com/2013/03/05/the-free-market-case-for-cell-phone-unlocking/ https://techliberation.com/2013/03/05/the-free-market-case-for-cell-phone-unlocking/#comments Tue, 05 Mar 2013 21:24:18 +0000 http://techliberation.com/?p=43964

Conservatives and libertarians believe strongly in property rights and contracts. We also believe that businesses should compete on a level playing field without government tipping the scales for anyone. So, it should be clear that the principled position for conservatives and libertarians is to oppose the DMCA anti-circumvention provisions that arguably prohibit cell phone unlocking.

Indeed, it’s no surprise that it is conservatives and libertarians—former RSC staffer Derek Khanna and Rep. Jason Chaffetz (R–Utah)—who are leading the charge to reform the laws.

In its response to the petition on cell phone unlocking, the White House got it right when it said: “[I]f you have paid for your mobile device, and aren’t bound by a service agreement or other obligation, you should be able to use it on another network.”

Let’s parse that.

If you have paid for your mobile device, it’s yours, and you should be able to do with it whatever you want. That’s the definition of property rights. If I buy a bowling ball at one bowling alley, I don’t need anyone’s permission to use it in another alley. It’s mine.

Here comes the caveat, though. I don’t need anyone’s permission unless I have entered into an agreement to the contrary. If I got a great discount on my bowling ball in exchange for a promise that for the next two years I’d only use it at Donny’s Bowling Alley, then I am bound to that contract and I can’t very well go off and use it at Walter’s Alley. But once those two years are up, the ball is mine alone and I can do with it whatever I want. Again, that’s the definition of property, and the same should be true for cell phones or any other device.

So how is it that after you have paid for a phone, and you no longer have a contractual obligation with a carrier, that they can still prevent you from using it on another network? The answer is that they are manipulating copyright law to gain an unfair advantage.

For one thing, it’s a bit of a farce. In theory the DMCA’s anti-circumvention provisions exist to protect copyrighted works by making it illegal to circumvent a digital lock that limits access to a creative work. That kind of makes sense when it comes to, say, music that is wrapped in DRM (and indeed the DMCA was targeted at piracy). But what is the creative work that is being protected in cell phones? It’s not clear there is any, but ostensibly it’s the phone’s baseband firmware. It doesn’t pass the laugh test to say that Americans are clamoring to unlock their phones in order to pirate the firmware.

No, Americans don’t want to pirate firmware. They simply want to use their phones as they see fit and carriers and phone makers are misusing the DMCA to make out-of-contract and bought-and-paid-for phones less valuable. That’s bad enough, but what should really upset conservatives and libertarians is that they are employing the power of the state to gain this unfair advantage.

If I use my bowling ball at Walter’s Alley while I’m still under contract to Donny’s, the only remedy available to Donny is to sue me for breach. If he was smart, Donny probably included an “early termination” clause in the contract that spelled out the damages. What Donny can’t do is call the police and have me arrested, nor will he have access to outsized statutory damages. Yet that’s what the DMCA affords device makers and carriers. They are using the power of the state to deny the property rights of others and to secure for themselves rights they could not get through contract law.

Where the White House’s response gets it wrong, however, is in involving the FCC and the NTIA. This is not a telecommunications policy issue; it’s a copyright issue. It’s not just cell phone makers and carriers that are misusing the DMCA. Device makers have applied the same technique to garage door openers, printers, and other devices. Yet a telecom framing seems to be how the White House is approaching the issue. From their petition response:

The Obama Administration would support a range of approaches to addressing this issue, including narrow legislative fixes in the telecommunications space that make it clear: neither criminal law nor technological locks should prevent consumers from switching carriers when they are no longer bound by a service agreement or other obligation.

If Congress acts to fix this mess, it should not limit itself to just a narrow provision that exempts cell phone unlocking from the DMCA. In fact, this is an opportunity for conservatives and libertarians in Congress to act on principle and propose a comprehensive fix to the DMCA in the name of respecting property rights. I for one would love to see that challenge put to the President.

Finally, it should be made clear that contrary to what some folks are suggesting, by involving the FCC the White House is not endorsing a “Carterfone for wireless”—the idea that carriers should not be allowed to limit how consumers can use their devices, even through contract. The White House response was quite clear that agreements that bind consumers to a particular carrier should still be allowed. And it makes perfect sense.

Today Verizon announced that it activated a record 6.2 million iPhones in its fourth quarter. What accounts for this feat? CFO Fran Shammo explains:

This past fourth quarter, you … had really one thing happen that never happened before, especially with Verizon Wireless, and that was for the first time ever, because of the iPhone 5 launch, we had the 4 at free. So it was the first time ever you could get a free iPhone on the Verizon Wireless network.

A free iPhone is a great deal for consumers who can’t or don’t want to pay for the $450 device up front. The only way carriers can make these offers is in exchange for a promise from the consumer to stay with the carrier for a fixed amount of time and to pay a penalty if they don’t. That’s a win-win-win for the consumer, the carrier and the phone maker—and it’s possible just with the contract law we know and love.

Sports Channels and A La Carte Cable Pricing https://techliberation.com/2013/01/26/sports-channels-and-a-la-carte-cable-pricing/ https://techliberation.com/2013/01/26/sports-channels-and-a-la-carte-cable-pricing/#comments Sun, 27 Jan 2013 00:17:55 +0000 http://techliberation.com/?p=43515

Matt Yglesias today responded with a post of his own to a NYT article about sports channels and cable pricing by Brian Stelter that Yglesias believed had “bad analysis.” I’m here to defend Stelter a little bit, because I think Yglesias was too harsh and erred in his own post about the nature of cable bundling. Yglesias’ posts on cable bundling are good, and especially valuable because his Slate and ThinkProgress audiences are not the most receptive to economic justifications for perceived unfair corporate pricing schemes. In part due to him, I suspect, you rarely hear econ and business bloggers calling for a la carte pricing of cable channels.

And Yglesias is certainly right that you can’t really complain about the price of your cable package, which includes the few channels you watch plus the sports channels you don’t watch, because you obviously value the channels more than the price you pay per month, even if the sports are a “waste.” He falters when he says:

So since those channels are worth $60 to you, even if unbundling happens your cable provider is going to find a way to charge you approximately $60 for them. Because at the end of the day, you’re paying your cable provider for access to the channels you do watch—not for access to the channels you don’t watch. The channels you don’t watch are just there. If the channels you do watch are worth $60 to you, then $60 is what you’ll pay for them.

It would be an amazing price discrimination scheme if it were true that cable operators could figure out how to charge each subscriber approximately what he values his favorite channels at. Cable companies don’t currently have that ability. Even a la carte distributors, like Amazon Prime with their video offerings, don’t charge you exactly what you value TV shows and movies at. The efficiency of bundling cable channels arises not because cable companies are charging everyone their reservation price, as Yglesias suggests. Bundling is efficient because, in a high fixed-cost industry like cable, channel bundles provide cost savings that outweigh the costs of providing “wasted” channels consumers don’t watch.
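To see why a single bundle price is a long way from charging each subscriber his reservation price, consider a toy calculation with invented valuations (a sketch for illustration, not data about any actual cable system):

```python
# Three hypothetical subscribers; each tuple is a monthly willingness to
# pay for (news, sports), in dollars. All numbers are invented.
from itertools import product

subscribers = {"Alice": (40, 10), "Bob": (10, 40), "Carol": (30, 15)}

def bundle_revenue(price):
    # A subscriber buys the bundle only if its price does not exceed
    # her combined valuation of the channels inside it.
    return sum(price for news, sports in subscribers.values()
               if news + sports >= price)

def alacarte_revenue(p_news, p_sports):
    # Under a la carte, each channel must sell on its own.
    return sum((p_news if news >= p_news else 0) +
               (p_sports if sports >= p_sports else 0)
               for news, sports in subscribers.values())

best_bundle = max(range(101), key=bundle_revenue)
best_menu = max(product(range(101), repeat=2),
                key=lambda p: alacarte_revenue(*p))

print("bundle:", best_bundle, "->", bundle_revenue(best_bundle))      # 45 -> 135
print("a la carte:", best_menu, "->", alacarte_revenue(*best_menu))   # (30, 40) -> 100
```

With these made-up numbers, the revenue-maximizing bundle price is $45, which is below what two of the three subscribers would willingly pay. The operator is not extracting anyone’s exact reservation price, yet the single bundle still out-earns the best a la carte menu ($135 versus $100) before we even count the administrative costs discussed below.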

I think the main point of Stelter’s article is right and Yglesias is incorrect. It’s conceivable that most customers would actually see sustained lower cable prices if sports channels were someday offered as premium channels, like Showtime and HBO. If Stelter is faulted for anything, it’s that he mentioned the phrase “a la carte,” since it seems like his sources only alluded to a partial breakup of the current bundle (making sports a premium offering), not a wholesale a la carte offering. Stelter quoted a former DOJ antitrust lawyer and anonymous cable executives who say that increasing sports channel prices may make the cable bundle so pricey that cable operators will be forced to break up the bundle, and I see no reason to question their assessments.

I’ll attempt to illustrate what the cable executives are trying to avoid. Bundling components like cable channels lowers costs for providers. If you imagine an a la carte world, it’s plain the costs escalate. Instead of everyone picking from a menu of 3 or 4 bundles from a cable provider, every single subscriber household would have a different customized selection. Cable companies would have to ensure everyone is receiving their requested channels, frequently make corrections and updates, and incur other costs.

Not to mention, a la carte would eliminate many channels currently in existence because there is a cross-subsidy business model in place that makes low-demand channels available in the first place. (A la carte would especially harm religious, African-American, and other niche programming. Currently, these niche content creators have to market their channels only to a few cable and satellite companies for carriage. With a la carte, they would have to engage in nationwide and expensive marketing campaigns to all their likely customers, which is why these smaller firms typically oppose a la carte.) A la carte, then, is costly to both cable and content providers. Offering only a few bundles eliminates many costs.

However, when the price of the bundle increases with more expensive sports programming, as the Stelter piece describes, you lose customers because the bundle has become too expensive. Eventually, it becomes more cost-effective to spin off some sports channels as premium channels, charge those sports customers more, and offer a lower-priced package to everyone else to gain customers. And I suspect sports viewers have relatively inelastic demand (nothing ruins my fall weekend like a Bears blackout on the East Coast), so the losses from a sports unbundling could be minimal.
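A back-of-the-envelope sketch makes the executives’ logic concrete. Every number below is invented, and real carriage economics are far messier, but it shows the basic trade-off:

```python
# Hypothetical: keep sports in one big bundle, or spin it off as a premium tier?
sports_fans, non_fans = 40, 60      # subscriber counts (all figures invented)
cost_sports, cost_base = 20, 10     # per-subscriber programming fees

# Option 1: one $45 bundle; assume pricey sports fees push 30% of non-fans to drop.
bundle_subs = sports_fans + 0.7 * non_fans
bundle_profit = bundle_subs * (45 - cost_sports - cost_base)

# Option 2: a $25 base package everyone keeps, plus a $25 sports tier for fans only.
split_profit = ((sports_fans + non_fans) * (25 - cost_base)
                + sports_fans * (25 - cost_sports))

print(f"bundle: {bundle_profit:.0f}, split: {split_profit:.0f}")  # bundle: 1230, split: 1700
```

Once enough price-sensitive non-fans start walking away from the big bundle, the split wins even though the operator gives up the cross-subsidy from non-fans to sports programming.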

If there’s a lesson, it’s that this all goes back to Coase and his tautological but helpful theory of the firm. We know where efficient firm boundaries are based on where firm boundaries are. That is, the current cable packages could be disintegrated if it’s too costly to maintain them. In a dynamic market like cable, it may one day be efficient to break up the current bundle, charge everyone less, and make some sports channels premium channels.

FCC Should State the Obvious: Telephone Service Is Not a Monopoly https://techliberation.com/2012/12/19/fcc-should-state-the-obvious-telephone-service-is-not-a-monopoly/ https://techliberation.com/2012/12/19/fcc-should-state-the-obvious-telephone-service-is-not-a-monopoly/#respond Wed, 19 Dec 2012 16:39:11 +0000 http://techliberation.com/?p=43365

Today, the United States Telecom Association (USTA) asked the Federal Communications Commission (FCC) to declare that incumbent telephone companies are no longer monopolies. Ten years ago, when most households had “plain old telephone service,” this request would have seemed preposterous. Today, when only one in three homes has a phone line, it is merely stating the obvious: Switched telephone service has no market power at all.

The FCC already knows that plain old telephone service is no longer a “dominant” service (“dominance” is more likely when a service has a market share exceeding 60%). Last year, the FCC’s Technological Advisory Council found that the legacy, circuit switched telephone network “no longer functions as a universal communications infrastructure” and telephone service “does not provide anything close to the services and capabilities” of wired and wireless broadband Internet access services.

The FCC also knows that outdated regulations premised on the historical primacy of telephone networks are discouraging investment in the modern Internet infrastructure that is necessary for the United States to remain competitive in a global economy. To its credit, the FCC has begun “eliminating barriers to the transformation of today’s telephone networks into the all-IP broadband networks of the future.” Based on an idea pioneered by Commissioner Ajit Pai, the FCC recently formed an agency-wide Technology Transitions Task Force to provide recommendations for modernizing our nation’s communications policies.

The USTA petition has a very limited scope compared to the TTTF. The petition does not include broadband or “special access” services and does not seek to deregulate telephone service. It asks only that incumbent telephone companies providing plain old telephone service receive regulatory treatment similar to that received by wireless providers, cable operators, and VOIP providers. Today, telephone companies designated as “dominant” are subject to unique regulatory requirements regarding pricing, tariff filings, and market entry and exit that are inapplicable to their competitors.

These unique regulatory requirements are premised on the presumption that telephone companies have “market power” – i.e., that they can raise prices without losing customers to competitors. Telephone companies may have possessed such market power during the Carter Administration when the current regulatory regime was adopted. But today, incumbent telephone companies whose prices are capped by the FCC are losing 10% of their customers to competitive alternatives every year. Given the rate at which telephone companies are losing customers when they cannot raise prices as a regulatory matter, it is preposterous to continue presuming that they could raise prices as an economic matter. It is more realistic to presume that plain old telephone service will lose customers at any price as consumers migrate to services with superior capabilities.
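A quick compounding check shows how fast a steady 10% annual loss erodes the subscriber base (a simple arithmetic illustration of the figure cited above, not a forecast):

```python
# At a steady 10% annual loss, how long until the subscriber base is halved?
share, years = 1.0, 0
while share > 0.5:
    share *= 0.9
    years += 1
print(years, "years ->", round(share, 2))  # 7 years -> 0.48
```

At that rate, the remaining base halves roughly every seven years, regardless of what regulated prices are set at.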

Though the relief sought by USTA is a small step toward regulatory modernization, it is an essential one that the FCC can take immediately under existing precedent. In 1995, the FCC concluded that AT&T should be reclassified as “non-dominant” in the “long distance” market after its share of that market declined from approximately 90% to 60% during the preceding decade. Last October, the FCC eliminated the presumption prohibiting cable operators from entering into exclusive programming arrangements with their affiliates because cable’s share of the video market had dropped from approximately 95% to 57% since the presumption was adopted. It is obvious that switched telephone service – with a national market share that is approximately half that of the long distance and cable services the FCC found lacked market power – should receive similar treatment.

“There is nothing more deceptive than an obvious fact.” It is obvious that switched telephone services are no longer capable of supporting the economic and social goals of our nation. It is also obvious that our future success depends on a rapid transition to an all-Internet infrastructure.

The USTA petition asks the FCC to state the obvious while the FCC’s new Task Force conducts a more holistic review of our nation’s outdated communications policies. Eliminating the presumption that plain old telephone service is “dominant” would promote confidence for private investment in Internet infrastructure and bring us one step closer to realizing the full potential and opportunity of Internet transformation for consumers. That’s progress that benefits everyone.

Important Cyberlaw & Info-Tech Policy Books (2012 Edition) https://techliberation.com/2012/12/17/important-cyberlaw-info-tech-policy-books-2012-edition/ https://techliberation.com/2012/12/17/important-cyberlaw-info-tech-policy-books-2012-edition/#comments Mon, 17 Dec 2012 19:23:44 +0000 http://techliberation.com/?p=39701

The number of major cyberlaw and information tech policy books being published annually continues to grow at an astonishing pace, so much so that I have lost the ability to read and review all of them. In past years, I put together end-of-year lists of important info-tech policy books (here are the lists for 2008, 2009, 2010, and 2011) and I was fairly confident I had read just about everything of importance that was out there (at least that was available in the U.S.). But last year that became a real struggle for me and this year it became an impossibility. A decade ago, there was merely a trickle of Internet policy books coming out each year. Then the trickle turned into a steady stream. Now it has turned into a flood. Thus, I’ve had to become far more selective about what is on my reading list. (This is also because the volume of journal articles about info-tech policy matters has increased exponentially at the same time.)

So, here’s what I’m going to do. I’m going to discuss what I regard to be the five most important titles of 2012, briefly summarize a half dozen others that I’ve read, and then I’m just going to list the rest of the books out there. I’ve read most of them but I have placed an asterisk next to the ones I haven’t.  Please let me know what titles I have missed so that I can add them to the list. (Incidentally, here’s my compendium of all the major tech policy books from the 2000s and here’s the running list of all my book reviews.)

As I do each year, I need to repeat a few disclaimers.  First, what qualifies as an “important” info-tech policy book is highly subjective, but I would define it as a title that many people — especially scholars in the field — are currently discussing and that we will likely be referencing for many years to come.  But I “weight” books in the sense that narrowly-focused titles lose a few points. For example, books that deal mostly with privacy issues, copyright law, or antitrust policy are docked a few points relative to “big picture” info-tech policy books that offer a broader exploration of policy issues and which offer more wide-ranging recommendations.

Second, almost all of the books included have something profound to say about Internet policy (either directly or indirectly) and the more profound and clear the policy recommendations or implications, the higher the titles rank in terms of importance on my list.

Third, and most importantly: Just because a book appears on this list does not necessarily mean I agree with everything in it.  In fact, as was the case in previous years, I found much with which to disagree in most of the books listed here. Simply put, the cyber-liberty I cherish is a real loser in both academic and public policy circles these days. It has very few defenders today. So, if this were simply a list of my personal favorite books, there would only be 2 or 3 titles on it. Instead, this is my effort to list important books in the field, regardless of whether I agree with the content and conclusions found in those titles.

OK, on to the list.

(1) Rebecca MacKinnon – Consent of the Networked: The Worldwide Struggle for Internet Freedom

Rebecca MacKinnon’s book was the most important information technology policy book released in 2012 because it: (1) presented a splendid history of the ideas and forces shaping Internet policy debates globally; (2) offered policy insights that were extremely relevant to breaking developments in this field; and (3) set forth a call-to-arms to global Internet activists and gave them a new way of framing their issue advocacy.

MacKinnon is a former journalist and her outstanding reporting skills are on display throughout the text. Her coverage of China’s efforts to regulate the Net is outstanding. She also surveys some of the recent policy fights here and abroad over issues such as online privacy, Net neutrality regulation, free speech matters, and the copyright wars. The book demands attention for this historical work and analysis alone.

Even more importantly, however, MacKinnon makes a forceful argument for how to think about Internet freedom and democracy in new digital worlds. Her book is an attempt to take the Net freedom movement to the next level; to formalize it and to put in place a set of governance principles that will help us hold the “sovereigns of cyberspace” more accountable. Many of her proposals are quite sensible. But, as I noted in my much longer review of the book, I had a real problem with MacKinnon’s use of the term “digital sovereigns” or “sovereigns of cyberspace” and the loose definition of “sovereignty” that pervades her narrative. She too often blurs and equates private power and political power, and she sometimes leads us to believe that the problem of dealing with the mythical nation-states of “Facebookistan” and “Googledom” is somehow on par with the problem of dealing with actual sovereign power — government power — over digital networks, online speech, and the world’s Netizenry.

Despite these nitpicks, MacKinnon has many other ideas about Net governance in the book that are less controversial and entirely sensible in my opinion. She wants to “expand the technical commons” by building and distributing more tools to help activists and make organizations more transparent and accountable. These would include circumvention and anonymization tools, software and programs that allow both greater data security and portability, and devices and network systems to expand the range of communication and participation, especially in more repressed countries. She would also like to see netizens “devise more systematic and effective strategies for organizing, lobbying, and collective bargaining with the companies whose service we depend upon — to minimize the chances that terms of service, design choices, technical decisions, or market entry strategies could put people at risk or result in infringement of their rights.” This also makes sense as part of a broader push for improved corporate social responsibility.

Regarding the role of law, MacKinnon has a mixed view. She says: “There is a need for regulation and legislation based on solid data and research (as opposed to whatever gets handed to legislative staffers by lobbyists) as well as consultation with a genuinely broad cross-section of people and groups affected by the problem the legislation seeks to solve, along with those likely to be affected by the proposed solutions.” Of course, that’s a fairly ambiguous standard that could open the door to excessive political meddling with the Net if we’re not careful. Overall, though, she acknowledges how regulation so often lags far behind innovation. “A broader and more intractable problem with regulating technology companies is that legislation appears much too late in corporate innovation and business cycles,” she rightly notes.

MacKinnon’s book will be of great interest to Internet policy scholars and students, but it is also accessible to a broader audience interested in learning more about the debates and policies that will shape the future of the Internet and digital networks for many years to come. One other note: MacKinnon’s clearly worded prose and cool-headed tone deserve praise and emulation. The book serves as a model for how to write a thoughtful Internet policy book, even if you don’t agree with all her conclusions or recommendations.

My complete review of Consent of the Networked can be found here.

(2) Susan Crawford – Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age

Susan Crawford’s book was probably my least favorite title of 2012, but that doesn’t mean I can discount its significance within this field. Crawford has made herself a widely-recognized and highly-charged figure in the world of Internet policy through her work as an activist, an academic, and even a government official. In Captive Audience, she doesn’t even try to hide her self-described “radicalized” views on communications policy anymore and in the process she solidifies her role as the ringleader of the growing movement to impose centralized, top-down government control on America’s broadband infrastructure.

What is most astonishing about Captive Audience is the way Crawford so audaciously waxes nostalgic for the days of regulated monopoly. Simply put, Crawford doesn’t believe that capitalism or competition have any role to play in the provision of broadband networks and services. “No competitive pressure will force these companies to act [in the public interest],” she argues on the last page of the manifesto. “Americans,” she claims, “have allowed a naive belief in the power and beneficence of the free market to cloud their vision.” She suggests we should just give up our false hope that markets can deliver such an important service and get on with the task of converting broadband into a full-blown regulated public utility.

Her proposed solutions read like the typical Big Government grab-bag of policy proposals: more government spending, more government ownership, and more government regulation (forced access regulation and rate controls) for any private carriers that are allowed to remain in operation as de facto handmaidens of the state. Crawford’s perfect world scenario would seem to be some sort of amalgam of the U.S. Postal Service and the federal highway program. While both programs have sought to provide an important service to the masses, it goes without saying that both are also absolute basket cases in terms of service management and economic viability. But, for the sake of argument, let’s say that Crawford is right and that public ownership and comprehensive government management is the way to go. Where will all this money come from for all the new government activity Crawford desires? Apparently it grows on trees because she isn’t ever willing to admit that we find ourselves in the midst of a major fiscal crisis that likely constrains the ability of governments to make these investments themselves. Luckily, private wireline and wireless broadband providers have been investing tens of billions in infrastructural upgrades in recent years (don’t take my word for it, read what the Progressive Policy Institute has to say), a fact that Crawford conveniently ignores.

More importantly, Crawford never fully confronts the fact that the era of regulated monopoly she cherishes was an unmitigated cronyist disaster for consumers. That era had nothing to do with the “public interest” and everything to do with protecting the private interests of regulated entities — namely, Ma Bell on the communications side and broadcasters on the media side. She also doesn’t address the lackluster state of innovation during the 70 or so years when communications and media markets were under the tight grip of federal and state regulators, who controlled rates, restricted new entry, and discouraged innovation at virtually every juncture. If one is going to recommend a return to the regulatory past, they had better grapple with that uncomfortable, anti-consumer, anti-innovation history. Crawford utterly fails to do so in Captive Audience.

While the book is nominally about broadband regulation, the bulk of it is actually dedicated to taking on one company — Comcast — and specifically picking apart its recent merger with NBC Universal. For Crawford, the Comcast-NBC deal represented something akin to the Mayan apocalypse of media policy. She wants us to believe that the deal has forever solidified Comcast’s grasp on both programming and broadband markets. Comcast chief Brian Roberts is presented as the nefarious villain of the narrative; Crawford paints him as a cross between Gordon Gekko and Mr. Burns from “The Simpsons.” Usually such neurotic narratives are reserved for Rupert Murdoch and how he is supposedly plotting mass media domination to brainwash the minds of the masses. But Crawford suggests that Roberts is the new Bond villain du jour and chapter after chapter is devoted to demonizing him, his father, and other execs at Comcast. She argues that “Comcast now owns the Internet in America” and that the company is “squeezing independent online video” providers out of the market.

Despite all this hand-wringing, the situation in the video marketplace has never looked brighter. Crawford fails to put things in historical perspective and examine consumer choices in this market today relative to the past — a point I made in this debate with her last year. Of course, she probably didn’t want to seriously examine that evidence because by every metric available — and I published an entire report called Media Metrics a few years ago proving this — Americans have more and better viewing options at their disposal than ever before in history. We have more channels and more content available over more platforms (cable, satellite, telco, online, DVD, mail, etc.) and more devices than ever before. Consumers have an unprecedented ability to access, record, time-shift, interact with, and even manipulate and redistribute video content. Of course, all this choice and quality comes at a cost, as Crawford continuously complains throughout the text. Apparently, in her view, all these great new programming options and technologies should just fall to us like manna from heaven with no price tag attached.

If you want to see what the opposite of Internet freedom and digital capitalism looks like, look no further than this book. It is the definitive articulation of the cyber-planner’s ethos. Of course, that’s also what makes Captive Audience one of the most important books of 2012. But if you really must read such one-sided propaganda — since this book will, no doubt, be assigned in many cyberlaw and media studies classes across America — then I encourage you to also read Christopher Yoo’s Dynamic Internet and Randy May’s edited collection of essays on Communications Law and Policy in the Digital Age, both of which are mentioned below. Both of those books offer a refreshingly level-headed examination of the true state of this marketplace. I’d also recommend you check out these recent essays by Bret Swanson and Richard Bennett for a hard look at the shoddy numbers and assumptions underlying many of the broadband policy critiques you hear out there today from Crawford and others.

(3) John Palfrey & Urs Gasser – Interop: The Promise and Perils of Highly Interconnected Systems

What makes Palfrey & Gasser’s book so important is that the authors aim to develop “a normative theory identifying what we want out of all this interconnectivity” that the information age has brought us. They correctly note “there is no single, agreed-upon definition of interoperability” and that “there are even many views about what interop is and how it should be achieved.” Generally speaking, they argue increased interoperability — especially among information networks and systems — is a good thing because it “provides consumers greater choice and autonomy,” “is generally good for competition and innovation,” and “can lead to systemic efficiencies.”

But they wisely acknowledge that there are trade-offs, too, noting that “this growing level of interconnectedness comes at an increasingly high price.” Whether we are talking about privacy, security, consumer choice, the state of competition, or anything else, Palfrey and Gasser argue that “the problems of too much interconnectivity present enormous challenges both for organizations and for society at large.” Their chapter on privacy and security offers many examples, but one need only look at one’s own digital existence to realize the truth of this paradox. The more interconnected our information systems become, and the more intertwined our social and economic lives become with those systems, the greater the possibility of spam, viruses, data breaches, and various types of privacy or reputational problems. Interoperability giveth and it taketh away.

Ultimately, however, the authors fail to develop a clear standard for when interoperability is good and when governments should take steps to facilitate or mandate it. They argue that “there is no single form or optimal amount of interoperability that will suit every circumstance” and that “most of the specifics of how to bring interop about [must] be determined on a case-by-case basis.” Yet, Palfrey and Gasser also make it clear they want government(s) to play an active role in ensuring optimal interoperability. They say they favor “blended approaches that draw upon the comparative advantages of the private and public sector,” but they argue that government should feel free to tip or nudge interoperability determinations in superior directions to satisfy “the public interest.” “If deployed with skill,” they argue, “the law can play a central role in ensuring that we get as close as possible to optimal levels of interoperability in complex systems.”

The fundamental problem with this “public interest” approach to interoperability regulation is that it is no better than the “I-know-it-when-I-see-it” standard we sometimes see at work in the realm of speech regulation. It’s an empty vessel, and if it is the lodestar by which policymakers make determinations about the optimal level of interoperability, then it leaves markets, innovators, and consumers subject to the arbitrary whims of what a handful of politicians or regulators think constitutes “optimal interoperability,” “appropriate standards,” and “best available technology.”

In my absurdly long review of their book, I offered an alternative framework that suggests patience, humility, and openness to ongoing marketplace experimentation as the primary public policy virtues that lawmakers should instead embrace. Ongoing marketplace experimentation with technical standards, modes of information production and dissemination, and interoperable information systems, is almost always preferable to the artificial foreclosure of this dynamic process through state action. The former allows for better learning and coping mechanisms to develop while also incentivizing the spontaneous, natural evolution of the market and market responses. The latter (regulatory foreclosure of experimentation) limits that potential.

Defining “optimal interoperability” is not just difficult, as Palfrey and Gasser suggest; I would argue that it is a pipe dream. Sometimes consumers demand a certain amount of interoperability, and they usually get it. But it seems equally obvious that consumers don’t always demand perfect interoperability. Just look at your iPhone or Xbox for proof. Quite often, a lack of interoperability helps firms finance important new products and services while simultaneously ensuring users a tailored and potentially more secure and satisfying experience. Importantly, however, non-interoperability also spurs new forms of innovation from rivals looking to leap-frog the old front-runners. Progress flows from this never-ending cycle of technological change and industrial churn.

In sum, we cannot define or determine “optimal interoperability” in an a priori fashion; only ongoing experimentation can help us determine what truly lies in “the public interest.” Despite my different approach and conclusions, Palfrey and Gasser’s book perfectly frames what should be a very interesting ongoing debate over these issues and for that reason will be required reading on this subject for years to come.

Again, my longer review of Palfrey and Gasser’s book can be found here, and listen to John Palfrey’s podcast discussion with Jerry Brito here.

(4) Christopher Yoo – The Dynamic Internet: How Technology, Users, and Businesses are Transforming the Network

Christopher Yoo’s book was my personal favorite of the year, but it won’t capture as much interest and recognition as some of the other titles on this list. The book offers a concise overview of how Internet architecture has evolved and a principled discussion of the public policies that should govern the Net going forward. Yoo makes two straightforward arguments. First, the Internet is changing. In Part 1 of the book, Yoo offers a layman-friendly overview of the changing dynamics of Internet architecture and engineering. He documents the evolving nature of Internet standards, traffic management and congestion policies, spam and security control efforts, and peering and pricing policies. He also discusses the rise of peer-to-peer applications, the growth of mobile broadband, the emergence of the app store economy, and what the explosion of online video consumption means for ongoing bandwidth management efforts. Those are the supply-side issues. Yoo also outlines the implications of changes in the demand-side of the equation, such as changing user demographics and rapidly evolving demands from consumers. He notes that these new demand-side realities of Internet usage are resulting in changes to network management and engineering, further reinforcing changes already underway on the supply-side.

Yoo’s second point in the book flows logically from the first: as the Internet continues to evolve in such a highly dynamic fashion, public policy must as well. Yoo is particularly worried about calls to lock in standards, protocols, and policies from what he regards as a bygone era of Internet engineering, architecture, and policy. “The dramatic shift in Internet usage suggests that its founding architectural principles from the mid-1990s may no longer be appropriate today,” he argues. “[T]he optimal network architecture is unlikely to be static. Instead, it is likely to be dynamic over time, changing with the shifts in end-user demands,” he says. Thus, “the static, one-size-fits-all approach that dominates the current debate misses the mark.”

Yoo makes a particularly powerful case for flexible network pricing policies. His outstanding chapter on “The Growing Complexity of Internet Pricing” offers an excellent overview of the changing dynamics of pricing in this arena and explains why experimentation with different pricing methods and business models must be allowed to continue. Getting pricing right is essential, Yoo notes, if we hope to ensure ongoing investment in new networks and services. He also notes how foolish it is to expect the government to come in and save the day through massive infrastructure investment to cover the hundreds of billions of dollars needed to continue to build out high-speed services.

Throughout the second half of his book, Yoo explains why it would be a disaster for consumers and high-tech innovation if policymakers limited pricing flexibility and experimentation with new business models and technological standards. He argues that public policy should generally seek to avoid ex ante forms of preemptive, prophylactic Internet regulation and instead rely on an ex post approach when and if things go wrong. Essentially, he wants policymakers to embrace “techno-agnosticism” toward ongoing debates over standards, protocols, business models, pricing methods, and so on. Lawmakers should not be preemptively tilting the balance in one direction or the other or, worse yet, restricting experimentation that can help us find superior solutions.

And even under that model of retrospective review, Yoo makes it clear throughout the book that there should be a very high bar established before any regulation is pursued. This is particularly true because of the First Amendment values at stake when the government attempts to regulate speech platforms. In Chapter 9 of the book, Yoo walks the reader through all the relevant case law on this front and makes it clear how “the Supreme Court has repeatedly recognized that the editorial discretion exercised by intermediaries serves important free speech values.” Yoo also makes the case that a certain degree of intermediation helps serve consumer needs by helping them more easily find the content and services they desire. Law should not seek to constrain that and, under current Supreme Court First Amendment jurisprudence, it probably cannot.

To me, Yoo’s approach strikes the right balance for Net governance and public policy in the information age. It all comes down to flexibility and freedom. If the Internet and all modern digital technologies are to thrive, we must reject the central planner’s mindset that dominated the analog era and forever bury all the static thinking it entailed.

My complete review of Yoo’s Dynamic Internet is here.

(5) Brett Frischmann – Infrastructure: The Social Value of Shared Resources

Frischmann’s book offers a nice contrast with Yoo’s in that it suggests a far more ambitious role for the state in shaping the future of digital networks and online platforms. Although not strictly a book about information technology infrastructure, Frischmann spends a great deal of time making the case for greater government action in the realm of communications policy and for open access and Net neutrality regulation in particular. (There’s also a chapter on intellectual property issues that tech policy wonks will find of interest). The book is a veritable paean to open access regulation; Frischmann aims to persuade the reader that “society is better off sharing infrastructure openly” and devotes considerable energy to hammering that point home in one context after another.

In my review of the book, which was part of a 2-day symposium on the book over at the Concurring Opinions blog, I took Frischmann’s book to task for its almost complete absence of public choice insights and his general disregard for thorny “supply-side” questions.  Frischmann is so single-mindedly focused on making the “demand-side” case for better appreciating how open infrastructures “generate spillovers that benefit society as a whole” and facilitate various “downstream productive activities,” that he short-changes the supply-side considerations regarding how infrastructure gets funded and managed to begin with.

The book also ignores the omnipresent threat of regulatory capture and the fact that any major infrastructure regulatory system big enough and important enough to be captured by special interests and affected parties often will be. Frischmann acknowledges the problem of capture in just a single footnote in the book and admits that “there are many ways in which government failures can be substantial,” but he asks the reader to quickly dispense with any worries about government failure since he believes “the claims rest on ideological and perhaps cultural beliefs rather than proven theory or empirical fact.”  I found that assertion outrageous and argued that, to the contrary, decades of scholarship have empirically documented the reality of government failure and its costs to society, as well as the plain old-fashioned inefficiency often associated with large-scale government programs. For infrastructure projects in particular, the combination of these public choice factors usually adds up to massive inefficiencies and cost overruns.

For those reasons, I argued in my review that society would be better off adopting a “3-P” approach to infrastructure management: privatize, property-tize, and price. But Frischmann is dead set against such thinking and makes it clear that everything must be subservient to the goal of “openness” and commons-based management. Unsurprisingly, therefore, this leads him to suggest that we need “a dramatic shift — perhaps a paradigm shift — away from the conventional position favoring market provisioning and markets ‘free’ from government intervention.” But the problem with that reasoning, as I pointed out in my review, is that most of the infrastructure that Frischmann cites as failing us today is already managed in the fashion he favors! Nonetheless, he wants to pile on still more commons-based government control / ownership solutions even though they are the primary cause of our infrastructure problems today. In this sense, Frischmann’s approach parallels Susan Crawford’s in her book Captive Audience, discussed above. They both seek to gloss over the ugly realities of traditional public infrastructure (mis-)management and they imply that we just need to build a better breed of bureaucrats who will somehow be immune to all the problems of the past. Needless to say, I don’t place much faith in such efforts.

Despite these serious deficiencies, students and scholars studying infrastructure theory will benefit from Frischmann’s excellent treatment of public goods and social goods; spillovers and externalities; proprietary versus commons systems management; common carriage policies and open access regulation; congestion pricing strategies; and the debate over price discrimination for infrastructural resources. He at least does a nice job outlining these concepts and controversies, even if he ultimately fails to make the case for radically expanding government control of infrastructural resources.

Again, you can read my entire review of Frischmann’s book here.


— Other Major Releases in 2012 —

Julie E. Cohen – Configuring the Networked Self: Law, Code, and the Play of Everyday Practice

Cohen’s book represents an effort to move “beyond the bounds of traditional liberal political theory” by transcending what she labels the traditional “information-as-freedom” versus “information-as-control” paradigms. Her aim is to promote “cultural environmentalism” and “the structural conditions of human flourishing.” She argues that “a commitment to human flourishing demands a more critical stance toward the market-driven evolution of network architectures.” In other words, don’t trust markets.

I didn’t find her case very convincing and it didn’t help that the book is filled with impenetrable prose that sometimes leaves the reader’s head a bit numb. (Two representative samples: “With respect to space, surveillance employs a twofold dynamic of containerization and affective modulation in order to pursue large-scale behavioral modification.” … and… “Here the performative impulse introduces static into the circuits of the surveillant assemblage; it seeks to reclaim bodies and reappropriate spaces.” Say what? Write in plain English, professor!)

The closing chapter also includes a strange reinterpretation of Ludditism. Cohen argues: “the tale of the Luddites poses an important challenge for scholars and policy makers in the emerging networked information society. If technologies do not have natural trajectories, it is our obligation to seek pathways of development that promote the well-being of situated, embodied users and communities. When our preferred policy prescriptions persistently produce information architectures and institutions that undermine human flourishing in critical ways, it is time to question them and to experiment with ways of doing better.”  Hmmm… I’m not sure I want to know what that would mean in practice!

Regardless, Cohen’s book has a lot to say about modern privacy and copyright battles and will be of great interest to scholars in those specific fields of study.  You can find all the chapters online here.

Cole Stryker – Hacking the Future: Privacy, Identity, and Anonymity on the Web

Stryker’s Hacking the Future provides a concise overview of the battles over online anonymity that have raged since the Net’s early days and he outlines the many new threats to it. “What we are seeing is an all-out war on anonymity, and thus free speech, waged by a variety of armies with widely diverse motivations, often for compelling reasons,” he says. The book will be of great use to those covering ongoing policy debates over cybersecurity, the “nymwars” and online authentication / identification debates, post-Arab Spring political activism & “hacktivism,” encryption issues, social networking privacy, troll culture and cyberbullying, and much more. Stryker makes a strong case for the continuing importance of online anonymity but isn’t scared to ask hard questions about the trade-offs society faces when some can mask their online identities. But he also explores the question of whether anonymity can survive given recent technological and policy-related developments, both of which aim to make individuals more identifiable online. I particularly enjoyed Chapter 10’s breakdown of the “Faces of Anonymity,” in which Stryker crafts a detailed taxonomy of anonymous character types online.

He also offers a run-down of the tools and steps that people can take advantage of if they want to ensure their anonymity / privacy online, including: cookie blocking, private browsing tools, disabling HTML in email and limiting or disabling browser extensions, clearing browser histories, and using encryption tools, proxy servers, and VPN tunneling. “The question we have to ask ourselves,” Stryker notes, is “Does the accessibility of these anonymizing technologies make the world a safer, more equitable, better place?” He answers: “It’s difficult to measure, but their abolition certainly wouldn’t.” He also draws this interesting parallel with efforts to regulate firearms: “The logic here is not unlike that used by those who oppose gun control: if guns are made illegal, then only criminals will have guns, leaving well-meaning folks defenseless. The reasoning is compelling within the identity space,” he argues, “regardless of what you might think about the merits of gun control.”

Two other notes: First, Wide Open Privacy: Strategies For The Digital Life by J.R. Smith & Siobhan MacDermott makes a nice complement to Hacking the Future. It also offers a breakdown of privacy-enhancing technologies and outlines other strategies to safeguard your online anonymity. Second, if you are interested in digging even deeper into the LulzSec side of this story, you should check out Parmy Olson’s We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency. It’s a splendid history but doesn’t have as much to say about the various policy issues that Stryker tackles in Hacking the Future. Or just listen to Olson’s podcast discussion with Jerry Brito. Speaking of that Brito character…

Jerry Brito (ed.) – Copyright Unbalanced: From Incentive to Excess

My Mercatus Center colleague Jerry Brito put together this important collection of essays by various conservative and libertarian authors to highlight growing concerns about copyright policy. Contributors include Tom W. Bell, David G. Post, Reihan Salam, Patrick Ruffini, Tim Lee, Christina Mulligan, and Eli Dourado (also of Mercatus). Their essays suggest that the tide may be turning against copyright among free market analysts. Their chapters explore the increasing complexity of copyright law and the rising costs associated with its enforcement and make a powerful case for reform of, or at least restraints on, the current copyright system. The consensus seemed to revolve around a few key reforms: significantly shortened copyright terms, the reintroduction of formalities (i.e., registration), and limits on criminal prosecution and civil asset forfeiture. The authors also make a strong case that public choice problems pervade today’s copyright system and that we should be concerned that cronyism is increasingly creeping into the politics of copyright law and its seemingly endless expansion.

If you are interested in a different take on IP issues to balance out Brito’s collection, I’d recommend picking up the forthcoming Laws of Creation: Property Rights in the World of Ideas by Ronald A. Cass and Keith N. Hylton. It’s a 2013 release but it is already in stock. I’m reading an advance copy from the publisher right now and will likely have more to say about it in a forthcoming post.

Randolph J. May (ed.) – Communications Law and Policy in the Digital Age: The Next Five Years

My former colleague Randy May put together this nice collection of essays by some of America’s leading communications and media policy scholars, including Bruce Owen, Christopher Yoo, James Speta, Daniel Lyons and others. The authors offer a generally skeptical take on the expansion of communications and broadband regulation and the growing power of the Federal Communications Commission over these markets. In particular, many of the contributors take the FCC to task for sketchy assertions of jurisdiction and the agency’s efforts to expand its imperial regulatory ambitions without always having the clear statutory authority to do so. The chapters by James Speta and Seth Cooper are particularly good in that regard. Admin law geeks will eat them up.

Those analysts following the ongoing Net neutrality wars will also find the book informative, even if they disagree with the generally skeptical take on the issue from contributors. Spectrum and universal service policy wonks will also appreciate the excellent chapters on those two issues from Michele P. Connolly and Daniel A. Lyons, respectively. And the closing chapter by Bruce Owen is, like everything Bruce does, a masterpiece. Owen is probably the most respected media economist on the planet and his decades of experience in this field shines through in his powerful essay on “Communications Policy Reform, Interest Groups, and Legislative Capture.” He crafts a political economy of the regulatory state and points out that the explosion of rent-seeking and legislative/regulatory capture in this sector is unlikely to dissipate. “Therefore,” Owen argues, “communications policy likely will continue to be subject to welfare-suppressing regulation because such regulation is consistent with the interests of legislators,” who are often beholden to special interests and their campaign dollars.

Joshua Gans – Information Wants to Be Shared

I really enjoyed this book. It’s an insightful exploration of modern media economics filled with interesting questions and scenarios about how information markets will evolve in the future. What will sustain movies, music, books, local reporting, and so on in the future? Gans does a terrific job making these issues easy to understand and doesn’t try to evangelize as much as the many others who have written on these issues. If you’ve read and enjoyed Carl Shapiro and Hal Varian’s classic text, Information Rules, then you will find Gans’ book to be the perfect complement.

Gans doesn’t have a lot to say about public policy, however. This is really more of a business book suited for industry analysts and business school students. Nonetheless, some of its implications for policy are clear since many of these business model debates boil over into the policy arena.

P.S. I should mention that, even if you don’t pick up his new book, you should be following Gans’ “Digitopoly” blog. It is always worth reading.

Andrew Keen – Digital Vertigo: How Today’s Online Social Revolution Is Dividing, Diminishing, and Disorienting Us

If you’re into ‘the-whole-world-is-going-to-Hell-and-the-Internet-is-to-blame’ screeds, Andrew Keen will never disappoint. In Digital Vertigo as well as his earlier book, The Cult of the Amateur, Keen is grumpy about, well, just about everything under the sun. In the earlier book, it was the Web 2.0 world of blogging and “amateur” content creation — most notably Wikipedia and YouTube — that earned Keen’s wrath. In the new book, it is users themselves and the social sharing sites and technologies that they favor that Keen goes off on.

Specifically, Keen is worried that our increased reliance on new online and interactive technologies is spawning a “hypervisible age of great exhibitionism” that sacrifices privacy and individuality at the altar of sharing and social status-seeking. He also makes sweeping claims that we are now living in “a world in which many of us have forgotten what it means to be human,” or that “we are forgetting who we really are.” As I noted in my Forbes review of the book, it’s classic technopanic talk. Not only does Keen fail to substantiate such claims, but he also doesn’t bother to even offer the reader any sort of practical plan for how to achieve a more balanced digital life.

Bruce Schneier – Liars & Outliers: Enabling the Trust that Society Needs to Thrive

Security expert Bruce Schneier’s latest book was a terrific read and easily one of my favorites of the year. It wasn’t a book about technology policy per se, but it certainly has important ramifications for it. Schneier explains how four “societal pressures” combine to help create and preserve trust within society. Those pressures include: (1) Moral pressures; (2) Reputational pressures; (3) Institutional pressures; and (4) Security systems. By “dialing in” these societal pressures in varying degrees, trust is generated over time within groups. Of course, these societal pressures also fail on occasion, Schneier notes. He explores a host of scenarios — in organizations, corporations, and governments — when trust breaks down because defectors seek to evade the norms and rules the society lives by. These defectors are the “liars and outliers” in Schneier’s narrative and his book is an attempt to explain the complex array of incentives and trade-offs that are at work and which lead some humans to “game” systems or evade the norms and rules others follow.

Indeed, Schneier’s book serves as an excellent primer on game theory as he walks readers through complex scenarios such as the prisoner’s dilemma, the hawk-dove game, the free-rider problem, the bad apple effect, principal-agent problems, the game of chicken, the race to the bottom, capture theory, and more. These problems are all quite familiar to economists, psychologists, and political scientists, who have spent their lives attempting to work through these scenarios. Schneier has provided a great service here by making game theory more accessible to the masses and giving it practical application to a host of real-world issues.
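For readers new to this material, the defection logic at the heart of Schneier’s narrative is easy to see in miniature. Here is a minimal sketch of the classic prisoner’s dilemma; the payoff numbers are illustrative choices of mine, not figures from the book:

```python
# The classic prisoner's dilemma, the first of the scenarios Schneier
# covers. Payoffs below are illustrative, not from the book; each entry
# maps (A's choice, B's choice) to (A's payoff, B's payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # mutual trust: best joint outcome
    ("cooperate", "defect"):    (0, 5),  # the defector free-rides
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection: worst for all
}

# Whatever B does, A scores higher by defecting -- yet both players would
# prefer mutual cooperation to mutual defection. Closing that gap is what
# Schneier's four societal pressures are for.
for b in ("cooperate", "defect"):
    a_coop = payoffs[("cooperate", b)][0]
    a_defect = payoffs[("defect", b)][0]
    print(f"If B plays {b}: A gets {a_coop} cooperating vs. {a_defect} defecting")
```

The same dominance-of-defection logic reappears in the free-rider problem and several of the other scenarios Schneier catalogs.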

The most essential lesson Schneier teaches us is that perfect security is an illusion, and this is where the implications for tech policy come in. We can rely on those four societal pressures in varying mixes to mitigate problems like theft, terrorism, fraud, online harassment, and so on, but it would be foolish and dangerous to believe we can eradicate such problems completely. “There can be too much security,” Schneier explains, because, at some point, constantly expanding security systems and policies will result in rapidly diminishing returns. Trying to eradicate every social pathology would bankrupt us and, worse yet, “too much security system pressure lands you in a police state,” he correctly notes.

Despite these challenges, Schneier reminds us that there is cause for optimism. Humans adapt better to social change than they sometimes realize, usually by tweaking the four societal pressures Schneier identifies until a new balance emerges. While liars and outliers will always exist, society will march on.

See my longer review of Schneier’s excellent book over at Forbes. I highly recommend you pick up Liars & Outliers no matter what your field of study. It is outstanding.


… and still more titles from 2012 (* asterisk means I didn’t find time to finish them)…

… and, again, here are the lists of important books from 2008, 2009, 2010, and 2011.

State Film Industry Incentives: A Growing Cronyism Fiasco https://techliberation.com/2012/12/05/state-film-industry-incentives-a-growing-cronyism-fiasco/ https://techliberation.com/2012/12/05/state-film-industry-incentives-a-growing-cronyism-fiasco/#comments Wed, 05 Dec 2012 15:14:35 +0000 http://techliberation.com/?p=43088

Someone should consider making a movie about wasteful state-based film industry subsidies. It has become quite a cronyist fiasco in a very short period of time.

Some background: State and local tax incentives for movie production have expanded rapidly over the past decade. These inducements include tax credits, sales tax exemptions, cash rebates, direct grants, and tax or fee reductions for lodging or locational shooting. In 2002, only five states offered such inducements for movie production. By the end of 2009, forty-five states had some sort of incentives in place to lure film producers.

In 2010, the film industry received an estimated $1.5 billion in financial commitments from these programs. Unsurprisingly, these incentives have proven very popular with movie studios. Of the nine motion pictures that were nominated for Best Picture at the Academy Awards in 2012, five had received taxpayer-funded rebates, tax credits, and subsidies from state governments. “The Help” received a Mississippi spending rebate of $3,547,780 and “The Tree of Life” received $434,253 from Texas. In February 2012, Best Picture-nominee “Moneyball” received as much as $5.8 million from the state of California. It had grossed over $75 million at the box office. More recently, the biopic “Lincoln” received roughly $3.5 million in tax incentives from the Virginia Film Office.

Many state and local governments offer these inducements in the hope of attracting new jobs and investment; others simply seek to bill themselves as “the new Hollywood.” As William Luther of the Tax Foundation notes, “From politicians’ point of view, bringing Hollywood to town is the best of all possible photo opportunities—not just a ribbon-cutting to announce new job creation but a ribbon-cutting with a movie or TV star.” But it seems as if the glamor and prestige associated with films and celebrities have trumped sound economics since there is no evidence these tax incentives help state or local economies.

“Based on fanciful estimates of economic activity and tax revenue, states are investing in movie production projects with small returns and taking unnecessary risks with taxpayer dollars,” noted a 2010 Tax Foundation study. “In return, they attract mostly temporary jobs that are often transplanted from other states.” Studies of specific state incentive programs confirm this finding, almost universally finding minuscule revenue gains for every dollar of film subsidies offered. The adjoining table, derived from a meta-survey of film incentives studies by the Center on Budget & Policy Priorities, illustrates how much revenue was lost per net job created by film tax credits as well as how little revenue each program generated for every dollar of subsidy awarded.

State           Net Revenue Foregone per Net Job Created    Revenue Gained per Dollar of Subsidy Claimed
Massachusetts   $88,000                                     $0.16
Connecticut     $33,400                                     $0.07
Louisiana       $16,100                                     $0.13
Louisiana       $14,100                                     $0.18
Michigan        $44,561                                     $0.11
New Mexico      $13,400                                     $0.14
New Mexico      ($400)                                      $1.50
Pennsylvania    $13,000                                     $0.24
New York        ($2,000)                                    $1.90
Arizona         $23,676                                     $0.28

(Figures in parentheses denote a net revenue gain per job rather than a loss.)

The only two studies that have revealed positive results for such film incentive programs were both conducted by Ernst and Young on behalf of the New York and New Mexico film offices. All others have shown consistent negative returns. (If you exclude those two Ernst and Young studies that were done for the film offices, the average revenue gained across those other programs is just 16 cents for every dollar of subsidy granted to the film industry. Stated differently, that’s an 84% net loss for these programs. Truly astonishing numbers.)
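As a quick back-of-the-envelope check on that 16-cent figure, here is a minimal sketch (in Python) that averages the feedback-effect numbers from the table above, leaving out the two Ernst and Young rows:

```python
# Average the "revenue gained per dollar of subsidy" figures from the table
# above, excluding the two Ernst and Young studies (New Mexico's $1.50 and
# New York's $1.90 rows).
feedback_per_dollar = [
    0.16,  # Massachusetts
    0.07,  # Connecticut
    0.13,  # Louisiana
    0.18,  # Louisiana
    0.11,  # Michigan
    0.14,  # New Mexico
    0.24,  # Pennsylvania
    0.28,  # Arizona
]

avg = sum(feedback_per_dollar) / len(feedback_per_dollar)
print(f"Average revenue gained per subsidy dollar: ${avg:.2f}")  # $0.16
print(f"Implied net loss per subsidy dollar: {1 - avg:.0%}")     # 84%
```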

Recently, some states have begun abandoning or limiting film incentive programs or at least taking a hard look at their effectiveness. Iowa, for example, suspended its film program in 2009 after an investigation revealed a scandal involving much waste and abuse. Ten criminal cases were brought and seven people were eventually convicted. Michigan Governor Rick Snyder has also started reining in that state’s film program as evidence has mounted that it has failed to create local jobs and has cost the state a great deal of tax revenue. Check out yesterday’s excellent New York Times article by Louise Story for all the gory details.

In sum, film tax credit cronyism puts taxpayers at risk without any corresponding benefits to them or the state.  Glamor-seeking and state pride seem to be the primary motivational factors driving state legislators to engage in such economically illogical behavior. It’s like “smokestack-chasing” for the Information Age, except in this case you don’t even have a factory left in town after your economic development efforts go bust. This cronyist activity benefits no one other than film studios. States should end their film incentive programs immediately.

Additional Reading:

The Troubling Persistence of Policy Clichés https://techliberation.com/2012/08/21/the-troubling-persistence-of-policy-cliches/ https://techliberation.com/2012/08/21/the-troubling-persistence-of-policy-cliches/#comments Tue, 21 Aug 2012 21:18:46 +0000 http://techliberation.com/?p=42073

A cable TV monopoly is imminent and high prices loom, at least as far as the Associated Press is concerned.

That was the angle of a widely syndicated AP story last week reporting that in the second quarter of this year, landline phone companies lost broadband subscribers while cable companies gained market share.

Beneath the lead, Peter Svensson, AP technology reporter, wrote:

The flow of subscribers from phone companies to cable providers could lead to a de facto monopoly on broadband in many areas of the U.S., say industry watchers. That could mean a lack of choice and higher prices.

In the news business, the second graph is usually referred to as the “nut” graph. It encapsulates the significance of the story, that is, why it’s news.

It’s interesting that Svensson, with either support or input from his editors, jumped on the “de facto” monopoly angle. There could be any number of reasons why cable broadband is outpacing telco DSL, beginning with superior speed (to be fair, an aspect noted in the lead).

However, AP defaulted to the clichéd narrative that the telecom, Internet and media technology markets inevitably bend toward monopoly (see here, here, here and here for just a sample). Moreover, that the money quote came from Susan Crawford, President Obama’s former special assistant for science, technology and innovation policy, and a vocal advocate of broad industry regulation, was all the more reason it should have been countered with some acknowledgement of the growing data on how consumer behavior is changing when it comes to TV viewing. Arguably, at least, the cable companies, far from heading toward monopoly, are sailing into competitive headwinds stirred up by video on demand services such as Netflix, Hulu and iTunes.

What numbers does AP base its monopoly supposition on? Their own tally from separate phone company reports finds that the eight largest phone companies in the U.S. collectively lost 70,000 broadband subscribers between April and June. Meanwhile, the top four public cable companies reported a gain of 290,000 subscribers. Assuming most of the 70,000 the telcos lost was DSL-to-cable churn, that still leaves cable with a net gain of 220,000 broadband subscribers (although some likely switched from satellite). So another way to read these numbers is that U.S. broadband subscriptions increased by nearly a quarter-million households in the second quarter. Too much of a smiley face? OK.
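The arithmetic behind that alternative reading is simple enough to sketch:

```python
# AP's Q2 2012 tallies: the eight largest phone companies lost broadband
# subscribers while the top four public cable companies gained them.
telco_change = -70_000
cable_change = 290_000

# Even on the worst-case assumption that every lost telco subscriber
# churned straight to cable, cable's gain beyond churn -- which is also the
# net growth of the whole broadband market -- works out to:
net_growth = telco_change + cable_change
print(f"Net new broadband subscribers: {net_growth:,}")  # 220,000
```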

Then consider this:

Balancing the AP’s reporting of cable dominance is the Convergence Consulting Group’s finding that, between 2008 and 2011, 2.65 million people dropped cable entirely in favor of alternative viewing methods. Separate research from Nielsen tracked with this, finding that the number of households paying a multichannel provider last year declined by 1.5 million, which suggests the rate of cord-cutting is increasing.

In an excellent analysis of these trends, Engadget’s Brad Hill, no fan of cable, looks at how the cable companies are slowly getting boxed in between the on-demand alternatives and their traditional tiered pricing model, which day-by-day appears less and less price-competitive.

My purpose here has not been to pile on AP’s or the mainstream media’s technology reporting. But the danger in defaulting to the monopoly angle is that it reinforces erroneous perceptions that persist in policy circles. I can’t predict how it’s going to turn out, but if consumers are to be served, broadband providers, as well as companies in any other segment of the digital economy, need the freedom to respond to market conditions. Regulations that restrain now-competitive companies as if they were once-and-future monopolies are not going to promote innovation. If progress is to be made on broadband policy that truly benefits consumers, lawmakers and regulators have to approach the industry as it exists today—not as it was one, three, five or ten years ago. I’ll be the first to say legacy perceptions are hard to dismiss. But responsible reporting and analysis contributes to greater clarity and does not reinforce outdated notions.

FTC sacrifices the rule of law for more flexibility; Commissioner Ohlhausen wisely dissents https://techliberation.com/2012/08/02/ftc-sacrifices-the-rule-of-law-for-more-flexibility-commissioner-ohlhausen-wisely-dissents/ https://techliberation.com/2012/08/02/ftc-sacrifices-the-rule-of-law-for-more-flexibility-commissioner-ohlhausen-wisely-dissents/#comments Thu, 02 Aug 2012 17:32:24 +0000 http://techliberation.com/?p=41877

On July 31 the FTC voted to withdraw its 2003 Policy Statement on Monetary Remedies in Competition Cases.  Commissioner Ohlhausen issued her first dissent since joining the Commission, pointing out the folly and the danger in the Commission’s withdrawal of its Policy Statement.

The Commission supports its action by citing “legal thinking” in favor of heightened monetary penalties and the Policy Statement’s role in dissuading the Commission from following this thinking:

It has been our experience that the Policy Statement has chilled the pursuit of monetary remedies in the years since the statement’s issuance. At a time when Supreme Court jurisprudence has increased burdens on plaintiffs, and legal thinking has begun to encourage greater seeking of disgorgement, the FTC has sought monetary equitable remedies in only two competition cases since we issued the Policy Statement in 2003.

In this case, “legal thinking” apparently amounts to a single 2009 article by Einer Elhauge.  But it turns out Einer doesn’t represent the entire current of legal thinking on this issue.  As it happens, Josh Wright and Judge Ginsburg looked at the evidence in 2010 and found no evidence of increased deterrence (of price fixing) from larger fines:

If the best way to deter price-fixing is to increase fines, then we should expect the number of cartel cases to decrease as fines increase. At this point, however, we do not have any evidence that a still-higher corporate fine would deter price-fixing more effectively. It may simply be that corporate fines are misdirected, so that increasing the severity of sanctions along this margin is at best irrelevant and might counter-productively impose costs upon consumers in the form of higher prices as firms pass on increased monitoring and compliance expenditures.

Commissioner Ohlhausen points out in her dissent that there is no support for the claim that the Policy Statement has led to sub-optimal deterrence and quite sensibly finds no reason for the Commission to withdraw the Policy Statement.  But even more importantly Commissioner Ohlhausen worries about what the Commission’s decision here might portend:

The guidance in the Policy Statement will be replaced by this view: “[T]he Commission withdraws the Policy Statement and will rely instead upon existing law, which provides sufficient guidance on the use of monetary equitable remedies.”  This position could be used to justify a decision to refrain from issuing any guidance whatsoever about how this agency will interpret and exercise its statutory authority on any issue. It also runs counter to the goal of transparency, which is an important factor in ensuring ongoing support for the agency’s mission and activities. In essence, we are moving from clear guidance on disgorgement to virtually no guidance on this important policy issue.

An excellent point.  If the standard for the FTC issuing policy statements is the sufficiency of the guidance provided by existing law, then arguably the FTC need not offer any guidance whatever.

But as we careen toward a more and more active role on the part of the FTC in regulating the collection, use and dissemination of data (i.e., “privacy”), this sets an ominous precedent.  Already the Commission has managed to side-step the courts in establishing its policies on this issue by, well, never going to court.  As Berin Szoka noted in recent Congressional testimony:

The problem with the unfairness doctrine is that the FTC has never had to defend its application to privacy in court, nor been forced to prove harm is substantial and outweighs benefits.

This has led Berin and others to suggest — and the chorus will only grow louder — that the FTC clarify the basis for its enforcement decisions and offer clear guidance on its interpretation of the unfairness and deception standards it applies under the rubric of protecting privacy.  Unfortunately, the Commission’s reasoning in this action suggests it might well not see fit to offer any such guidance.

[Cross posted at TruthontheMarket]

]]>
https://techliberation.com/2012/08/02/ftc-sacrifices-the-rule-of-law-for-more-flexibility-commissioner-ohlhausen-wisely-dissents/feed/ 1 41877
Waiting for the Next Fred Kahn https://techliberation.com/2012/07/16/waiting-for-the-next-fred-kahn/ https://techliberation.com/2012/07/16/waiting-for-the-next-fred-kahn/#comments Tue, 17 Jul 2012 01:58:38 +0000 http://techliberation.com/?p=41733

It is unlikely there has ever been a more important figure in the history of regulatory policy than Alfred Kahn. As I noted in this appreciation upon his passing in December 2010, his achievements as both an academic and a policymaker in this arena were monumental. His life was the very embodiment of the phrase “ideas have consequences.” His ideas changed the world profoundly, and all consumers owe him a massive debt of gratitude for reversing the anti-consumer regulatory policies that stifled competition, choice, and innovation. It was also my profound pleasure to get to know Fred personally over the last two decades of his life and to enjoy his spectacular wit and unparalleled charm. He was the most gracious and entertaining intellectual I have ever interacted with and I miss him dearly.

As I noted in my earlier appreciation, Fred was a self-described “good liberal Democrat” who was appointed by President Jimmy Carter to serve as Chairman of the Civil Aeronautics Board in 1977 and promptly set to work with other liberals, such as Sen. Ted Kennedy, Stephen Breyer, and Ralph Nader, to dismantle anti-consumer airline cartels that had been sustained by government regulation. These men achieved a veritable public policy revolution in just a few short years. Not only did they comprehensively deregulate airline markets, but they also got rid of the entire regulatory agency in the process. Folks, that is how you end crony capitalism once and for all!

Who could imagine such a thing happening today? It’s getting hard for me to believe it could. The cronyist cesspool of entrenched Washington special interests doesn’t want it. Neither do the regulators, of course. Nor do any Democrats or Republicans on the Hill. And all those self-anointed “consumer advocates” running around D.C. scream bloody murder at the very thought. All these people are happy with the regulatory status quo because it guarantees them power and influence–even if it screws consumers and stifles innovation in the process.

And so, when I reach my most pessimistic depths of despair like this, I go back and read Fred. I remember what one man accomplished through the power of ideas, and I hope to myself that there’s another Fred out there ready to come to the Beltway, shake things up, and start clearing out the morass of anti-consumer, anti-innovation regulations that pervade so many fields–but especially communications and technology.

I could cite endless wisdom from his 2-volume masterwork, The Economics of Regulation, but instead I will encourage you to pick that up if it’s not already on your shelf. It will forever change the way you think about economic regulation. Instead, I will leave you with a few things from Fred that you might not have seen before since they appeared in two obscure speeches delivered just a year apart to the American Bar Association. Just imagine being in the crowd when a sitting regulator delivered these remarks to a bunch of bureaucrats, regulatory lawyers, and industry fat cats. Oh my, how they must have all cringed!

Remarks before the American Bar Association, New York, August 8, 1978:

I believe that one substantive regulatory principle on which we can all agree is the principle of minimizing coercion: that when the government presumes to interfere with peoples’ freedom of action, it should bear a heavy burden of proof that the restriction is genuinely necessary…

Remarks before the American Bar Association, Dallas, TX, August 15, 1979:

I think it unquestionable that there is a basic difference between the regulatory mentality and the philosophical approach of relying on the competitive market to restrain people. The regulator has a very high propensity to meddle; the advocate of competition, to keep his hands off. The regulator prefers order; competition is disorderly. The regulator prefers predictability and reliability; competition has the virtue as well as the defect that its results are unpredictable. Indeed, it is precisely because of the inability of any individual, cartel or government agency to predict tomorrow’s technology or market opportunities that we have a general preference for leaving the outcome to the decentralized market process, in which the probing of these opportunities is left to diffused private profit-seeking. The regulator prefers instead to rely on selected chosen instruments, whom he offers protection from competition in exchange for the obligation to serve, as well as, often, transferring income from one group of customers to another — that is, using the sheltered, monopoly profits from the lucrative part of the business to subsidize the provision of service to other, worthy groups of customers. No matter that the social obligations are often ill-defined, and sometimes not defined or enforced at all; the protectionist bias of regulation is unmistakable.
]]>
https://techliberation.com/2012/07/16/waiting-for-the-next-fred-kahn/feed/ 1 41733
DirecTV’s Viacom Gambit https://techliberation.com/2012/07/12/directv%e2%80%99s-viacom-gambit/ https://techliberation.com/2012/07/12/directv%e2%80%99s-viacom-gambit/#comments Thu, 12 Jul 2012 22:17:40 +0000 http://techliberation.com/?p=41664

I suppose there’s something to be said for the fact that two days into DirecTV’s shutdown of 17 Viacom programming channels (26 if you count the HD feeds) no congressman, senator or FCC chairman has come forth demanding that DirecTV reinstate them to protect consumers’ “right” to watch SpongeBob SquarePants.

Yes, it’s another one of those dust-ups between studios and cable/satellite companies over the cost of carrying programming. Two weeks ago, DirecTV competitor Dish Network dropped AMC, IFC and WE TV. As with AMC and Dish, Viacom wants a bigger payment—in this case 30 percent more—from DirecTV to carry its channel line-up, which includes Comedy Central, MTV and Nickelodeon. DirecTV balked, wanting to keep its own prices down. Hence, as of yesterday, those channels are not available pending a resolution.

As I have said in the past, Washington should let both these disputes play out. For starters, despite some consumer complaints, demographics might be in DirecTV’s favor. True, Viacom has some popular channels with popular shows. But they all skew to younger age groups that are turning to their tablets and smartphones for viewing entertainment. At the same time, satellite TV service likely skews toward homeowners, a slightly older demographic. It could be that DirecTV’s research and the math show that dropping Viacom will not cost it too many subscribers.

This is the new reality of TV viewing. If you are willing to wait a few days, you can download most of Comedy Central’s latest episodes for free (although John Bergmayer at Public Knowledge reports that, in a move related to the DirecTV dispute, Comedy Central has pulled The Daily Show episodes from its site; they remain available on Hulu).

What’s more, in a decision that should raise eyebrows all around, AMC said it will allow Dish subscribers to watch the season premiere of its hit series Breaking Bad online this Sunday, simultaneous with the broadcast/cablecast. This decision should be the final nail in the coffin of the regulatory claim of a “cable programming bottleneck.” Obviously, studios have other means of reaching their audience, and are willing to use them when they have to.

Meanwhile, a Michigan user, commenting on the DirecTV-Viacom fight, told the MLive web site that “I love [DirecTV] compared to everyone else. I get local channels, I get sports channels. I wouldn’t have chosen if it was a problem.”

Now if Congress or the FCC were to step in and require satellite and cable companies to carry programming on behalf of Hollywood, the irony would be rich. Recall that just a few years ago, Congress and the FCC were pushing for a la carte regulation that would have required cable companies to break up their channel packages and let consumers pick the ones they want. Even the Parents Television Council is glomming onto this, as reported in the Washington Post, although not precisely from a libertarian perspective.

“The contract negotiation between DirecTV and Viacom is the latest startling example of failure in the marketplace through forced product bundling,” said PTC President Tim Winter in a statement calling on Congress and the FCC to examine the issue. “The easy answer is to allow consumers to pick and pay for the cable channels they want,” he said.

Winter’s mistake is that he views DirecTV’s challenge to Viacom as marketplace failure. Quite the contrary: it is a sure sign of a functional marketplace when one party feels it has the leverage to say no to a supplier’s aggressive price increase. And while I would be against a ruling forcing cable and satellite companies to construct a la carte alternatives, market evolution may soon take us there on its own, though perhaps not in the way activists imagined.

I’ll hazard a guess that today’s viewer paradigm isn’t so much “I never watch such-and-such a channel” as “I only watch one show on such-and-such a channel.” When Dish cuts off AMC and DirecTV cuts off Comedy Central et al., they are banking that their customers won’t miss the station, just a handful of shows that they will be motivated enough to find elsewhere, if they haven’t done so already.

It might take a pencil and paper, but there is enough price transparency for a budget-minded video consumer to calculate the best balance between multichannel TV program platforms like satellite and cable, pay-per-view video, free and paid digital downloads and DVD rentals. The cable cord (or satellite link) may be difficult to cut completely, but the $200-a-month bill packed with multiple premium channel packages is endangered. The video consumer of the near future might still keep cable or satellite for ESPN’s Monday Night Football, but turn to Netflix for Mad Men, iTunes for Breaking Bad, and the bargain DVD bin for a season box set of Dora the Explorer videos. DirecTV and Dish Network are responding to these economics by confronting studios on their distribution strategy. The studios, for their part, may find they can’t aggressively exploit other digital channels and keep raising rates for multichannel operators.
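For what it’s worth, that pencil-and-paper exercise might look something like the following minimal sketch. Every price here is a hypothetical placeholder, not a quote from any actual provider:

```python
# Hypothetical 2012-era monthly prices -- illustrative only, not real quotes.
bundle_bill = 200  # loaded multichannel package with premium tiers

# A mix-and-match alternative built around the handful of shows one actually watches.
unbundled_mix = {
    "basic satellite tier (keeps ESPN)": 45,
    "Netflix streaming": 8,
    "iTunes season passes (amortized per month)": 25,
    "bargain-bin DVD box sets": 10,
}

mix_total = sum(unbundled_mix.values())
print(f"Bundle: ${bundle_bill}/mo; mix: ${mix_total}/mo; savings: ${bundle_bill - mix_total}/mo")
```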

While disputes like this are messy for consumers in the short term, the resolution will be a consumer win because it will force multichannel operators and the studios to adapt to actual changes in consumer behavior, not a policymaker’s construct of a competitive supply chain. Washington would be wise to stay out.

]]>
https://techliberation.com/2012/07/12/directv%e2%80%99s-viacom-gambit/feed/ 3 41664
Real Talk on Net Neutrality https://techliberation.com/2012/05/09/real-talk-on-net-neutrality/ https://techliberation.com/2012/05/09/real-talk-on-net-neutrality/#comments Wed, 09 May 2012 19:59:52 +0000 http://techliberation.com/?p=41098

A lot of people are talking about this New York Times article on net neutrality, which highlights the effect on Netflix of Comcast launching its own video platform on the Xbox that is exempt from Comcast’s bandwidth limitations. While this policy may indeed result in more customers for Comcast’s video services and fewer for Netflix’s in the short run, I don’t think that critics are seriously thinking through the economics of Internet service before they speak.

The economics of running a large ISP is one of fixed costs. When you introduce large fixed costs, a lot of consumers’ ordinary economic intuition becomes worse than useless. If Comcast incurs a lot of fixed costs from building a network, someone has to pay for them. Suppose that the fixed cost is currently divided between TV subscription and advertising revenue and Internet service revenue. If Comcast’s TV revenues collapse because everyone is switching to Netflix, where will Comcast get the revenue to pay its high fixed costs? You guessed it: it will have to raise the price of Internet service.

To give a dramatically oversimplified example, suppose that TV service and Internet service each cost $50/month and Comcast has $90/customer/month in fixed costs and $10/customer/month in TV content licensing costs. If all of Comcast’s customers drop TV service and switch to Netflix, which costs $8/month, Comcast loses its $10/month licensing expense but it still has $90/month in fixed costs for maintaining its network. It will have to raise the price of its Internet service to $90/month to recover those costs. Consumers will now pay $90/month to Comcast for Internet service and $8/month to Netflix for TV service, for a total of $98/month, which is $2 less than they were paying before.
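To make the arithmetic explicit, here is a minimal sketch of the example above. The dollar figures are the deliberately oversimplified ones from the paragraph, not real Comcast numbers:

```python
# Deliberately oversimplified figures from the example above -- not real Comcast data.
FIXED_COST = 90      # network fixed cost per customer per month
LICENSING_COST = 10  # TV content licensing per customer per month
NETFLIX_PRICE = 8    # competing video service, per month

# Before: customers buy TV ($50) and Internet ($50); Comcast breaks even.
before_revenue = 50 + 50
before_costs = FIXED_COST + LICENSING_COST  # also $100

# After: every customer drops Comcast TV. Licensing costs vanish, but the
# fixed cost must now be recovered from Internet service alone.
internet_price_after = FIXED_COST  # $90/month just to break even
after_total = internet_price_after + NETFLIX_PRICE

print(f"Before: ${before_revenue}/month to Comcast")
print(f"After:  ${after_total}/month total")  # $98 -- only $2 less than before
```

The $90 of fixed cost does not vanish when the TV revenue does; it simply migrates into the Internet price.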

However, Comcast’s “non-neutral” Xbox service could improve on this for some customers, assuming that customers are heterogeneous. Suppose that critics’ worst fear comes true and I am the  only Comcast customer to switch from Comcast video to Netflix. Then Comcast’s pricing does not have to change, I pay $58/month, and other customers continue paying $100/month, just as they were before. This pricing policy is great for me, the most elastic customer. If you are a Netflix subscriber, therefore, you benefit from Comcast’s non-neutral Xbox service.

But what about the inelastic customers? They have to pay more. However, it is economically efficient—and this can be proven rigorously—for the less elastic customers to pay a higher share of the fixed cost. Given that we’re going to have a network with a large fixed cost, the question we should be asking is, “What is the most efficient way of paying that fixed cost?” And the answer is, in many cases, in a non-neutral way.
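The rigorous version of that claim is the classic Ramsey pricing result. Here is a standard statement of the inverse elasticity rule for reference; the notation is mine, not the post’s:

```latex
% Ramsey pricing: recovering a fixed cost with the least welfare loss
% requires each service's percentage markup over marginal cost to be
% inversely proportional to its own-price elasticity of demand.
\[
  \frac{p_i - c_i}{p_i} \;=\; \frac{\lambda}{\varepsilon_i}
\]
% p_i           : price of service i
% c_i           : marginal cost of service i
% \varepsilon_i : own-price elasticity of demand for service i
% \lambda       : a constant scaled so total markups just cover the fixed cost
```

In plain English: load more of the shared cost onto the customers least likely to walk away, and less onto those most likely to.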

The bottom line is that there is a lot of wishful thinking when it comes to net neutrality. In many respects, it reminds me of the simpleton’s dream of à la carte cable, as if pricing of $0.50/channel in a bundle of 100 channels could be extended to customers buying only 5 channels. Fools! You must pay the fixed cost somehow. And the best, most efficient way of splitting up this fixed cost is not equally, and certainly not at taxpayer expense, which is completely unfair to taxpayers who do not value the service, but inversely with demand elasticity. This means the network should always be non-neutral to some extent, balanced of course against our willingness to pay more as consumers for a neutral Internet.
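And to spell out the à la carte arithmetic, a quick sketch with made-up numbers in the same spirit as the Comcast example above:

```python
# Made-up illustrative numbers -- not actual cable cost data.
BUNDLE_PRICE = 50.0      # 100-channel bundle: a $0.50/channel "sticker" price
FIXED_COST = 40.0        # per-subscriber fixed cost, owed no matter how many channels
CONTENT_COST_PER_CHANNEL = 0.10

naive_price = 5 * (BUNDLE_PRICE / 100)                       # the a la carte dream: $2.50
breakeven_price = FIXED_COST + 5 * CONTENT_COST_PER_CHANNEL  # reality: $40.50

print(f"Naive 5-channel price:      ${naive_price:.2f}")
print(f"Break-even 5-channel price: ${breakeven_price:.2f}")
```

Whatever the exact numbers, the per-channel “sticker” price tells you almost nothing about what a small bundle would have to cost.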

]]>
https://techliberation.com/2012/05/09/real-talk-on-net-neutrality/feed/ 659 41098
Video from Internet Tax Policy Event https://techliberation.com/2012/03/22/video-from-internet-tax-policy-event/ https://techliberation.com/2012/03/22/video-from-internet-tax-policy-event/#respond Thu, 22 Mar 2012 20:45:03 +0000 http://techliberation.com/?p=40497

On Monday it was my great pleasure to participate in a Cato Institute briefing on Capitol Hill about “Internet Taxation: Should States Be Allowed to Tax outside Their Borders?” Also speaking was my old friend Dan Mitchell, a senior fellow with Cato. From the event description: “State officials have spent the last 15 years attempting to devise a regime so they can force out-of-state vendors to collect sales taxes, but the Supreme Court has ruled that such a cartel is not permissible without congressional approval. Congress is currently considering the Main Street Fairness Act, a bill that would authorize a multistate tax compact and force many Internet retailers to collect sales taxes for the first time. Is this sensible? Are there alternative ways to address tax “fairness” concerns in this context?”

Watch the video for our answers. Also, here’s the big paper that Veronique de Rugy and I penned for Cato on this back in 2003, and here’s a shorter recent piece we did for Mercatus.

]]>
https://techliberation.com/2012/03/22/video-from-internet-tax-policy-event/feed/ 0 40497
Thinking about the Future of Broadband & FCC Reform https://techliberation.com/2011/11/12/thinking-about-the-future-of-broadband-fcc-reform/ https://techliberation.com/2011/11/12/thinking-about-the-future-of-broadband-fcc-reform/#comments Sat, 12 Nov 2011 15:17:49 +0000 http://techliberation.com/?p=39020

It was my pleasure this week to host a terrific panel discussion about the future of broadband policy and FCC reform featuring Raymond Gifford, a Partner at the law firm of Wilkinson Barker Knauer, LLP,  Jeffrey Eisenach, a Managing Director and Principal at Navigant Economics and an Adjunct Professor at George Mason University Law School, and Howard Shelanski, Professor of Law at Georgetown Law School who previously served as Chief Economist for the Federal Communications Commission and as a Senior Economist for the President’s Council of Economic Advisers at the White House. We discussed two new papers by Gifford and Eisenach on these issues.

Gifford discussed his new Mercatus Center Working Paper on “The Continuing Case for Serious Communications Law Reform.” Gifford’s paper outlines what substantive FCC reform would entail and considers what antitrust agencies and enforcement can teach us about the way the FCC should work going forward.  Eisenach discussed his important new paper on “Theories of Broadband Competition,” which similarly considers how competition oversight of broadband markets could be modeled after modern antitrust principles.  Shelanski offered his thoughts on both papers. It was an interesting discussion and I encourage you to watch the entire thing.

During the discussion period, we debated the likelihood that serious communications policy / FCC reform could occur in the current political environment.  I argued that the stars just don’t line up at this time to achieve such reforms. However, keep in mind that many deregulatory experiments in the past started slowly and then something sparked sudden action.  Scholars have noted (see McCraw’s “Prophets of Regulation”) that sometimes just a couple of key players (such as Alfred Kahn in the airline context) were able to change the underlying dynamics of deregulation very rapidly and push through long-lasting reforms.

The key difference between then and now, of course, is that, back then, liberal Democrats in Congress and the Carter Administration came to understand how regulation was having a deleterious impact on marketplace competition and consumer welfare.  I simply cannot find a single Democrat who makes that same case today for the communications or media sectors.  And if telecom / media reform remains a highly politically charged, partisan issue, then the hopes for reform remain quite slim. But I haven’t given up all hope just yet!

Anyway, watch the event video for more discussion on this matter.

]]>
https://techliberation.com/2011/11/12/thinking-about-the-future-of-broadband-fcc-reform/feed/ 2 39020
The Alternative to the Speier-Womack Internet Tax Proposal https://techliberation.com/2011/10/14/the-alternative-to-the-speier-womack-internet-tax-proposal/ https://techliberation.com/2011/10/14/the-alternative-to-the-speier-womack-internet-tax-proposal/#comments Fri, 14 Oct 2011 14:09:15 +0000 http://techliberation.com/?p=38680

Reps. Jackie Speier (D-Calif.) and Steve Womack (R-Ark.) have introduced “The Marketplace Equity Act,” which would open the floodgates to anything-goes State-based taxation of the Internet and interstate commerce. The bill essentially sacrifices constitutional fairness at the altar of “tax fairness.” Building on concerns raised by state and local officials as well as “bricks-and-mortar” retailers, Speier and Womack claim that, as “a matter of states’ rights” and “leveling the playing field,” Congress should bless state efforts to impose sales tax collection obligations on interstate (“remote”) companies. The measure would allow States to do so using one of three rate structures: (1) a single blended state/local rate; (2) a single maximum State rate; or (3) the actual local jurisdiction destination rate + the State rate (so long as the State “make(s) available adequate software to remote sellers that substantially eases the burden of collecting at multiple rates within the State.”)

This builds on a long-standing effort by some States to devise a multistate sales tax compact to collude and impose taxes on interstate transactions. In the Senate, Sen. Dick Durbin (D-IL) has floated legislation (“The Main Street Fairness Act”) that would bless such a state-based de facto national sales tax regime for the Internet.

There is a better way to achieve fairness without sacrificing tax competition or opening the doors to unjust, unconstitutional, and burdensome state-based taxation of interstate sales. In a new Mercatus Center essay, “The Internet, Sales Taxes, and Tax Competition,” Veronique de Rugy and I argue that:

Apart from getting chronic state overspending under control, a better solution to the states’ fiscal problems than a tax cartel that imposes burdensome tax collection obligations on out-of-state vendors would be tax competition.  Congress should adopt an “origin-based” sourcing rule for any states seeking to impose sales tax collection obligations on interstate vendors. This rule would be in line with Constitutional protections for interstate commerce, allow for the continued growth of the digital economy, and ensure excessive, inefficient taxes do not burden companies and consumers.

Vero and I have laid out this alternative plan in much greater detail in this 2003 Cato white paper, “The Internet Tax Solution: Tax Competition, Not Tax Collusion.” As we explain in our new paper:

In this system, states would tax all sales inside their borders equally, regardless of the buyer’s residence or the ultimate location of consumption. Under that model, all sales would be “sourced” to the seller’s principal place of business and taxed accordingly. This is, after all, how sales taxes have traditionally worked. A Washington, DC, resident who buys a television in Virginia, for instance, is taxed at the origin of sale in Virginia regardless of whether he brings the television back into the District. Each day in America, there are millions of cross-border transactions that are taxed only at the origin of the sale; no questions are asked about where the buyer will consume the good. Policy makers should extend the same principle to cross-border sales involving mail order and the Internet. Under this approach, Internet shoppers would pay the sales tax of the state where the online retailer is based.
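To see how simple the collection logic becomes under an origin rule, here is a minimal sketch; the states and rates are hypothetical stand-ins, not actual tax figures:

```python
# Hypothetical state sales tax rates -- not actual figures.
STATE_RATES = {"VA": 0.05, "DC": 0.06, "WA": 0.065}

def origin_based_tax(sale_price, seller_state, buyer_state=None):
    """Under origin sourcing, only the seller's home-state rate matters;
    the buyer's location never enters the calculation."""
    return sale_price * STATE_RATES[seller_state]

# A DC resident buying a $500 television from a Virginia vendor pays
# Virginia's rate, exactly as a walk-in customer would.
print(origin_based_tax(500, seller_state="VA", buyer_state="DC"))  # 25.0
```

A destination-based version of the same function would instead need the buyer’s full local jurisdiction, that jurisdiction’s rates, and its product definitions, which is exactly the multistate complexity the origin rule avoids.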

An origin-based sourcing rule has several advantages over the destination-based system States favor.

  1. It would eliminate constitutional concerns because only companies within a state or local government’s borders would be taxed.
  2. An origin-based system would do away with the need for prohibitively complex multistate collection arrangements because states would tax transactions at the source, not at the final point of consumption.
  3. An origin-based system also would protect buyers’ privacy rights, eliminating the need to collect any special or unique information about a buyer and to use third-party tax collectors to gather such information.
  4. It would also preserve local jurisdictional tax authority whereas a harmonization proposal would create a de facto national sales tax system that would exclude local governments.
  5. Finally, because it is more politically and constitutionally feasible, an origin tax may actually maximize the amount of tax collected for states by making compliance easier and incorporating currently untaxed activities.

In closing, it is important to address the misguided claim at the heart of the Speier-Womack bill that this is a “states’ rights” issue. Let’s be clear about what real federalism is all about. Federalism is not about “states’ rights.” States have powers and responsibilities, and under the Constitution — at least the proper interpretation of it — they have wide-ranging flexibility to pursue different governance approaches. But that power is not unlimited. America abandoned its first constitution, the Articles of Confederation, after little more than a decade in part because untrammeled state authority was discouraging interstate trade and commerce. In their wisdom, the authors of our present Constitution made sure to include Article I, Sec. 8, Clause 3 — the so-called “Commerce Clause” — which created and protected what might best be thought of as the world’s first free trade zone: the United States of America. It remains one of the greatest achievements in constitutional and commercial history.

Thus, properly understood, federalism is about a healthy tension among competing units of government. Each has a different role and set of responsibilities, and this tension bolsters the checks and balances at the heart of our constitutional republic. [I outline all this in far more detail in my 1999 book, The Delicate Balance: Federalism, Interstate Commerce and Economic Freedom in the Technological Age.]

In the context of Internet tax policy, this means that the tax power of the States can be legitimately constrained by the federal government to ensure that the interstate market is not unduly burdened with unjust levies. States certainly retain the power to impose whatever levies they wish on those actors who have a substantial physical presence in their geographic confines. That is, they can tax their own exports. Taxing imports from another State, however, is an entirely different matter, and one that necessarily requires some degree of federal oversight to ensure America’s free trade zone is preserved and protected.

An origin-based sourcing rule accomplishes that goal while also leaving States the discretion to impose taxes on their own exports if they so choose. The fact that this system would lead to heated tax competition among the States is a feature, not a bug.

]]>
https://techliberation.com/2011/10/14/the-alternative-to-the-speier-womack-internet-tax-proposal/feed/ 7 38680
Internet Taxes, “Main Street Fairness” & the Origin-Based Alternative https://techliberation.com/2011/08/02/internet-taxes-main-street-fairness-the-origin-based-alternative/ https://techliberation.com/2011/08/02/internet-taxes-main-street-fairness-the-origin-based-alternative/#comments Tue, 02 Aug 2011 14:50:24 +0000 http://techliberation.com/?p=37980

The debate over the imposition of sales tax collection obligations on interstate vendors is heating up again at the federal level with the introduction of S. 1452, “The Main Street Fairness Act.” [pdf]  The measure would give congressional blessing to a multistate compact that would let states impose sales taxes on interstate commerce, something usually blocked by the Commerce Clause of the U.S. Constitution.  Senator Dick Durbin (D-IL) introduced the bill in the Senate along with Tim Johnson (D-SD) and Jack Reed (D-RI).  The measure is being sponsored in the House of Representatives by John Conyers (D-MI) and Peter Welch (D-VT). At this time, there are no Republican co-sponsors, even though Sen. Mike Enzi was rumored to be considering co-sponsorship of the measure before introduction.

Without any Republicans on board the effort, the measure may not advance very far in Congress. Nonetheless, to the extent the measure gets any traction, it is worth itemizing a few of the problems with this approach. My Mercatus Center colleague Veronique de Rugy and I have done some work on this issue together in the past and we are planning a short new paper on the topic. It will build on this lengthy Cato Institute paper we authored together in 2003, “The Internet Tax Solution: Tax Competition, Not Tax Collusion.” The key principle we set forth was this: “Congress must… take an affirmative stand against efforts by state and local governments to create a collusive multistate tax compact to tax interstate sales.” “It would be wrong,” we argued, “for members of Congress to abdicate their responsibility to safeguard the national marketplace by giving the states carte blanche to tax interstate commercial activities through a tax compact. The guiding ethic of this debate must remain tax competition, not tax collusion.”

Proponents of simply extending current sales tax collection obligations to interstate sales will claim that the so-called “Streamlined Sales and Use Tax Agreement” (SSTUA) they want Congress to bless has solved the compliance cost and complexity problem associated with taxing “remote” interstate sales. Yet, as I pointed out in my recent Forbes essay, “The Internet Taxman Cometh,” this 200-page “simplification” effort remains a Swiss cheese tax system, riddled with loopholes and complexities that could burden vendors, especially mom-and-pop operators. America’s estimated 7,400 local taxing jurisdictions still have many different definitions and exemptions that complicate the sales tax code. For example, is a cookie a “candy” (which is taxed in most jurisdictions) or a “baked good” (which is typically tax-exempt)? Thus, forcing online vendors to collect local taxes would create significant burdens on interstate commerce.
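To make that compliance burden concrete, here is a minimal sketch of the lookup a vendor would need under destination sourcing; the jurisdictions, classifications, and rates below are invented for illustration:

```python
# Invented jurisdictions, definitions, and rates -- purely illustrative.
RATES = {
    "Jurisdiction A": {"candy": 0.07, "baked good": 0.0},
    "Jurisdiction B": {"candy": 0.06, "baked good": 0.0},
}
# The same cookie is classified differently under each code's definitions.
CLASSIFICATION = {
    "Jurisdiction A": "candy",       # e.g., a flour-content test says "candy"
    "Jurisdiction B": "baked good",  # same product, different legal definition
}

def destination_tax(price, jurisdiction):
    category = CLASSIFICATION[jurisdiction]  # vendor must track every definition
    return price * RATES[jurisdiction][category]

print(f"A: ${destination_tax(3.00, 'Jurisdiction A'):.2f}")  # $0.21 -- taxed as candy
print(f"B: ${destination_tax(3.00, 'Jurisdiction B'):.2f}")  # $0.00 -- exempt baked good
```

Multiply that lookup by roughly 7,400 jurisdictions, each with its own definitions and exemptions, and the burden on a mom-and-pop vendor becomes clear.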

This is not to say there aren’t some legitimate tax “fairness” arguments in play here. It really is unfair that “Main Street” vendors are burdened with significant tax collection responsibilities while others are not. But “fairness” cuts many ways. It’s also unfair and unconstitutional to require out-of-state vendors to collect sales taxes on behalf of a jurisdiction where they have no physical presence. After all, at least in theory, those who are taxed should expect to receive some benefit for it. Interstate vendors receive no benefit but bear all the cost.

To the extent we want to “level the playing field,” therefore, one approach is to cut or eliminate sales taxes on in-state vendors. Of course, that’s a tough pill for many states and localities to swallow. If they got their profligate spending habits under control, however, that might be easier.

Another alternative would be the creation of a national Internet sales tax that would avoid the complexity problem by imposing a single rate and set of definitions on all vendors. But that just opens the door to a new federal tax base, which would grow to be burdensome in other ways at a time when American consumers and companies are already over-taxed. I doubt the idea would get much traction in Congress, anyway.

Perhaps the best alternative would be to switch the sourcing methodology for state sales tax collection obligations from destination-based to “origin-based.”  Stated differently, the rule would be “you can tax your own exports, not the imports from other states.” Here’s how Veronique and I summarized an origin-based solution in our old Cato paper:

under an origin-based sourcing rule—also referred to as a “seller state,” “vendor-state,” or “source-based” rule by some scholars—all interstate sales through all channels (traditional stores or cyber-retailers) would be taxed at the point of sale (meaning the company’s “principal place of business”) instead of at the point of destination, if the state or locality chooses to impose a tax. All goods within a given state or locality would be taxed at the locally applicable rate no matter how they were purchased and no matter where they were consumed.  This option would take care of most of the problems posed by the destination-based methodology that is favored by most state and local policymakers today.

Specifically, an origin-based sourcing rule would have the following advantages:

  • Minimize the burden on sellers by requiring them to know and abide by the tax rates and regulations within their principal place of business instead of the rates and definitions of thousands of different taxing jurisdictions.
  • Ensure tax parity between Main Street vendors and interstate sellers.
  • Do away with the need for a multistate collection arrangement such as the SSTUA by eliminating any need to trace interstate transactions to the final point of consumption.
  • Remove nexus uncertainties and constitutional concerns, because only companies within a state or local government’s borders would be taxed.
  • Largely remove any need for continued reliance on the use tax because all transactions would henceforth be sourced to the origin of sale and collected immediately by the vendor at that point.
  • Respect buyers’ privacy rights by eliminating the need to collect any special or unique information about a buyer, and  by not using third-party tax collectors to gather information about buyers.
  • Respect federalism principles and enhance jurisdictional tax competition  by permitting each state to determine its  own tax policies and encouraging healthy state-by-state tax rivalry.
  • Preserve local jurisdictional tax authority, whereas harmonization proposals like the SSTUA would create a de facto national sales tax system and run roughshod over local governments.
  • Because it is more politically and constitutionally feasible, it may maximize the amount of tax collected for states by making compliance easier and incorporating activities that are currently untaxed.

Please see the old Cato paper for more details and answers to potential objections, but I hope it’s clear why an “origin-based” solution offers a sensible way to break the current logjam and achieve tax “fairness” in the process.

Some state officials will object to the vigorous tax competition spawned by an origin-based sourcing rule. But that’s a feature, not a bug! Tax competition is good for consumers and the continued vitality of American federalism. A multistate tax compact, by contrast, would encourage tax collusion and let states too easily raise rates on interstate sales.

Moreover, I think it bears repeating that state officials have been at this for 15 years and still have not found a way to truly simplify their sales taxes and get around constitutional limitations on the taxation of interstate activity. An origin-based system, therefore, may offer the only way for them to finally tax the Internet and interstate sales.  I’d prefer they scale back their taxing ways, of course, but to the extent they insist on pushing out the boundaries of their tax authority, an origin-based solution — not the “Main Street Fairness Act” — is the only sensible, constitutional way for them to do so.

 

]]>
https://techliberation.com/2011/08/02/internet-taxes-main-street-fairness-the-origin-based-alternative/feed/ 4 37980
The Bono-Mack Bill & Giving APA Authority to the FTC to Redefine “PII” https://techliberation.com/2011/07/19/the-bono-mack-bill-giving-apa-authority-to-the-ftc-to-redefine-pii/ https://techliberation.com/2011/07/19/the-bono-mack-bill-giving-apa-authority-to-the-ftc-to-redefine-pii/#comments Wed, 20 Jul 2011 03:47:57 +0000 http://techliberation.com/?p=37845

A month ago, Rep. Mary Bono Mack introduced a bill (and staff memo) “To protect consumers by requiring reasonable security policies and procedures to protect data containing personal information, and to provide for nationwide notice in the event of a security breach.” These are perhaps the two least objectionable areas for legislating “on privacy” and there’s much to be said for both concepts in principle. Less clear-cut is the bill’s data minimization requirement for the retention of personal information.

But as I finally get a chance to look at the bill on the eve of the July 20 Subcommittee markup, I note one potentially troubling procedural aspect: giving the FTC authority to redefine personally identifiable information (PII) without the procedural safeguards that normally govern the FTC’s operations. The scope of this definition would be hugely important in the future, both because of the security, breach notification, and data minimization requirements attached to it, and because this definition would likely be replicated in future privacy legislation—and changes to this term in one area would likely follow in others.

The bill (p. 28) provides a fairly common-sensical definition of “personal information”:

an individual’s first name or initial and last name, or address, or phone number, in combination with any 1 or more of the following data elements for that individual… [including a social security number, driver’s license or other identity number, financial account number, etc.]

The bill then gives the FTC authority to redefine PII in the future, limiting that authority to situations where:

(i) … such modification is necessary … as a result of changes in technology or practices and will not unreasonably impede technological innovation or otherwise adversely affect interstate commerce; and (ii) … if the Commission determines that access to or acquisition of the additional data elements in the event of a breach of security would create an unreasonable risk of identity theft, fraud, or other unlawful conduct and that such modification will not unreasonably impede technological innovation or otherwise adversely affect interstate commerce.

This is an admirable attempt to make the statute flexible and forward-looking without giving the FTC carte blanche to redefine “PII”—easily the single most important term when it comes to regulating the flow of data in our information economy. But I fear even these prudent measures may not be enough if the FTC can use the streamlined Administrative Procedure Act (APA) rulemaking process. Yes, of course, that’s the same process used by most federal agencies, but it’s not what the FTC generally uses—and for good reason. Commissioner Kovacic explained “Mag-Moss” in his 2010 Senate testimony on this issue:

Magnuson-Moss rulemaking, as this authority is known, requires more procedures than those needed for rulemaking pursuant to the Administrative Procedure Act.  These include two notices of proposed rulemaking, prior notification to Congress, opportunity for an informal hearing, and, if issues of material fact are in dispute, cross-examination of witnesses and rebuttal submissions by interested persons.

Kovacic isn’t against all grants of APA authority to the FTC:

In addition, over the past 15 years, there have been a number of occasions where Congress has identified specific consumer protection issues requiring legislative and regulatory action. In those specific instances, Congress has given the FTC authority to issue rules using APA rulemaking procedures…. Except where Congress has given the FTC a more focused mandate to address particular problems, beyond the FTC Act’s broad prohibition of unfair or deceptive acts or practices, I believe that it is prudent to retain procedures beyond those encompassed in the APA [i.e., Magnuson-Moss].

Kovacic’s cautiousness about this largely stems from his desire to protect the FTC from repeating the over-reach in the late 1970s that caused even the Washington Post to brand the agency the “National Nanny” and a heavily Democratic Congress to try to briefly shut down the agency, heavily slash its funding and require additional procedural safeguards—a history I’ve written about here and here, and the subject of a PFF event I ran in April 2010. (Of course, Howard Beales wrote the definitive history of this saga.) Kovacic continues:

The lack of a more focused mandate and direction from Congress, reflected in legislation with relatively narrow tailoring, could result in the FTC undertaking initiatives that ultimately arouse Congressional ire and lead to damaging legislative intervention in the FTC’s work…. Through specific, targeted grants of APA rulemaking authority, Congress makes a credible commitment not to attack the Commission when the agency exercises such authority

So what might Commissioner (and former FTC Chairman) Kovacic say about Rep. Bono-Mack’s bill? Unfortunately, he’s retiring from the Commission in September, so we may not actually hear an official answer from him (and FTC Commissioners generally don’t opine about pending legislation anyway unless asked to do so). But I’ll wager he’d applaud the requirements for redefinition and, in principle, he’d be open to giving the FTC APA authority in a narrow area. But I think he’d wonder whether redefining a term as critical as “personal information” is really a “specific,” “targeted,” or “focused” mandate given what’s at stake—in particular, the data minimization requirement, which could swallow much of online data collection if “personal information” were defined too broadly.

Rep. Bono-Mack is clearly well aware of these dangers, given the evident thought that went into writing the twin requirements for redefinition I quoted above. But it’s well worth asking whether they’ll be enough to prevent abuse of the power to redefine PII. At the very least, this seems like a question worth considering very, very carefully before the bill moves forward.

Other thoughts?

Update: It appears that an amendment sponsored by Reps. Marsha Blackburn (R-TN) and Pete Olson (R-TX) passed today, removing the grant of APA rulemaking authority at issue here—a relief!

]]>
https://techliberation.com/2011/07/19/the-bono-mack-bill-giving-apa-authority-to-the-ftc-to-redefine-pii/feed/ 2 37845
George Will & Jeff Jacoby on Internet Sales Taxes & “Tax Fairness” https://techliberation.com/2011/05/02/george-will-jeff-jacoby-on-internet-sales-taxes-tax-fairness/ https://techliberation.com/2011/05/02/george-will-jeff-jacoby-on-internet-sales-taxes-tax-fairness/#comments Mon, 02 May 2011 15:03:38 +0000 http://techliberation.com/?p=36552

I was pleased to see columnists George Will of The Washington Post and Jeff Jacoby of The Boston Globe take on the Internet sales tax issue in two smart recent essays. Will’s Post column (“Working Up a Tax Storm in Illinois”) and Jacoby’s piece, “There’s No Fairness in Taxing E-Sales,” are both worth a read. They are very much in line with my recent Forbes column on the issue (“The Internet Tax Man Cometh”), as well as this recent op-ed by CEI’s Jessica Melugin, which Ryan Radia discussed here in his recent essay “A Smarter Way to Tax Internet Sales.”

I was particularly pleased to see both Will and Jacoby take on bogus federalism arguments in favor of allowing States to form a multistate tax cartel to collect out-of-state sales taxes.  Senators Dick Durbin (D-IL) and Mike Enzi (R-WY) will soon introduce the “Main Street Fairness Act,” which would force all retailers to collect sales tax for states that join a formal compact. It’s a novel—and regrettable—ploy to get around constitutional hurdles to taxing out-of-state vendors. Sadly, it is gaining support in some circles based on twisted theories of what federalism is all about. Real federalism is about a tension between various levels of government and competition among the States, not a cozy tax cartel.

Will rightly notes that “Federalism — which serves the ability of businesses to move to greener pastures — puts state and local politicians under pressure, but that is where they should be, lest they treat businesses as hostages that can be abused.” And Jacoby argues that an “origin-based” sales tax sourcing rule is the more sensible solution to leveling the tax playing field:

The current system is far fairer than the one [Senator] Durbin wants. Bricks-and-mortar merchants charge sales taxes based on their physical location. The same rule applies to online merchants. A Pennsylvania tobacco shop doesn’t collect Ohio sales taxes whenever it sells a humidor to a visitor from Ohio. Amazon shouldn’t have to, either.

Jacoby also addresses the “tax fairness” argument as follows:

All other things being equal, consumers no doubt prefer a tax-free shopping experience. But all other things are rarely equal. E-retailers (or mail-order catalogs) may have a price advantage, but well-run “Main Street’’ businesses have competitive advantages of their own. They attract customers with eye-catching window displays. They play up local ties and neighborhood loyalty. They give shoppers the chance to see, feel, or try on items before buying them. They enable the serendipitous joys of browsing. They don’t charge for shipping. And they offer potential customers a degree of personal service and warmth that no website can match.

And Will says:

[bricks and mortar] stores have the competitive advantage of local loyalties and customers being able to handle merchandise. Besides, Main Street stores pay sales taxes to support local police, fire and rescue, sewage, schools and other services. If Amazon’s Seattle headquarters catches fire, will Champaign, Ill., firefighters extinguish it?

Anyway, read both columns and stay tuned: this fight is about to get hot once again.

]]>
https://techliberation.com/2011/05/02/george-will-jeff-jacoby-on-internet-sales-taxes-tax-fairness/feed/ 128 36552
For The Last Time: The Bell System Monopoly Is Not Being Rebuilt https://techliberation.com/2011/04/22/for-the-last-time-the-bell-system-monopoly-is-not-being-rebuilt/ https://techliberation.com/2011/04/22/for-the-last-time-the-bell-system-monopoly-is-not-being-rebuilt/#comments Fri, 22 Apr 2011 18:26:37 +0000 http://techliberation.com/?p=36381

Believe it or not, this argument is being trotted out as part of the pressure from consumer activist groups against AT&T’s proposed acquisition of T-Mobile. The title of a Senate Judiciary hearing on the merger, scheduled for May 11, even asks, “Is Humpty Dumpty Being Put Back Together Again?”

It seems that because the deal would leave AT&T and Verizon as the country’s two leading wireless service providers, the blogosphere is aflutter with worries that we are returning to the bad old days when AT&T pretty much owned all of the country’s telecom infrastructure.

It is true that AT&T and Verizon trace their history back to the antitrust case the Justice Department filed in 1974, which ended in the 1984 divestiture of then-AT&T’s 22 local telephone operating companies, which were regrouped into seven regional holding companies.

Over the nearly three decades since, there has been gradual consolidation, each time accompanied by an uproar that the Bell monopoly days were returning. But those claims miss the essential goal of the Bell break-up, and why, even though those seven “Baby Bell” companies have been integrated into three, there’s no going back to the pre-divestiture AT&T.

The Bell System monopoly was vertically integrated. Not only did it have a monopoly on local services, it operated the only long-distance company, it handled all incoming and outgoing international calls, and, most important, its wholly-owned subsidiaries, Bell Labs and Western Electric, developed, manufactured and sold all network equipment from the switches and cable to the phone in your home and office.

The claim that the AT&T and T-Mobile merger will remake the Bell System is undone by recalling why AT&T was broken up in the first place. It had little to do with it being a monopoly provider of residential telephone service. Remember, in the final judgment, the seven spin-off companies retained their local monopolies.

The problem was that AT&T was its own supply chain. As such, in the 1960s and 70s, as the computer industry was going through massive upheaval because of rapid and disruptive strides being made by semiconductor companies, AT&T remained insulated in a bubble. Since AT&T only bought from AT&T, AT&T could dictate the pace of telecom technology evolution, say from mechanical switches, to electronic, to digital. This was virtually the opposite of what was happening with computers and data networking, where central mainframe-based architectures were disintermediated by distributed computing. IBM and Sperry were giving way to Digital Equipment Corp. and Wang, and ultimately Microsoft and Apple.

It’s arguable, at least, that the demand for faster data networking, driven by the trend toward distributed intelligence, created the policy pressure for the Bell break-up and competitive telecom in general. MCI, which provided the first long-distance alternative for businesses, appeared on the scene in the late 70s. At the same time, spurred by the 1968 Carterfone decision that permitted end-users to attach their own terminal equipment to the AT&T network, intense competition broke out for office phone systems, especially those that could integrate data networking. More and more, it seemed as if AT&T’s ironclad grip on the U.S. public network was an obstacle to innovation, not the enabler it had purported itself to be for decades (and one of the legs on which it rested its whole “natural monopoly” argument).

In fact, conventional wisdom at the time was that the government was going to force AT&T to divest Western Electric and Bell Labs in order to create a competitive market for network infrastructure. Divestiture accomplished this somewhat, because it separated local exchange infrastructure from AT&T’s control. Ironically, it was market forces that accomplished what regulators had hoped for when, in 1996, AT&T divested Western Electric (as part of the Lucent Technologies spin-off), because by then AT&T itself was the Bell companies’ biggest competitor, and that was straining its ability to sell into that segment.

So while the divested Bell companies have re-merged, even to the point of consuming their former parent, they have no control over the supply chain, and therefore, cannot control prices or product development the way AT&T did pre-break-up.

Keeping things in the context of wireless for now, as that’s what’s driving the AT&T and T-Mobile deal, it’s clear consumers are impatient for the latest smartphone models. Even as a merged unit, AT&T and T-Mobile cannot arbitrarily choose when and where to release new technology like the Bell System once could. Quite the opposite: there continues to be an ongoing race as to which company can deliver the most popular phones on the best terms. Case in point was the hoopla surrounding Verizon’s introduction of the iPhone earlier this year. That was accompanied by price cuts as well as hints of the new iPhone model expected in the fall. In the meantime, a geek war has broken out over the utility and relative benefits of Apple’s iOS-based iPhone and Google’s Android operating system. Certainly the debate gets confusing, overwrought and tiresome, but that’s because consumers can vote with their pocketbooks. In the Bell System days, there were no such dialogues because there was no such choice.

Most other arguments fall apart, too. Size by itself is not an antitrust argument. Nor is duopoly or triopoly. It takes a certain level of capital and heft to operate a nationwide network, and the fact that post-merger, there will still be three national companies competing alongside regional players speaks to the competitiveness of the industry. AT&T and Verizon have similar market share numbers, and although Sprint lags, it has a healthy share of the government sector. It is not as weak as the media suggests.

Market share also is an imprecise measure of competition and consumer harm. A company with 80 percent market share may be doing nothing illegal. It may hold that level because low prices and innovative products yield loyal customers. Cisco Systems, which makes Internet routers and switches, is a great example. It dominates the segment, yet does so through aggressive innovation, quality products and strong customer support.

At the same time, we have seen companies whose market shares pundits have deemed unassailable wilt in the face of a newcomer who can provide more utility or expose a weakness. Witness Firefox against Internet Explorer, Facebook against MySpace, and, in wireless devices, Apple against Nokia.

Others have raised the customer service issue–that AT&T consistently ranks low in customer service surveys. This metric by itself cannot be used as evidence of “customer harm” because there is no predicting how it might change with the addition of T-Mobile (which has good customer satisfaction ratings). Measurements are also subjective. Everyone complains about the phone company. Yet, in AT&T’s case, the equally measurable popularity of the iPhone seems to offset those complaints. It also undermines the market share argument–begging the question of why a company whose service is reportedly so inferior poses such a threat to competition. But just to be snarky: if antitrust approval hinged on customer service, United Airlines would never have been allowed to merge with Continental.

A valid antitrust case must show the merger will allow AT&T to illegally or unfairly limit options for consumers. In U.S. antitrust law, this usually means determining whether a dominant company can use its size to undermine or drive out otherwise healthy competitors by controlling access to other parts of the supply chain—such as manufacturing, transportation or distribution. In modern antitrust jurisprudence, leveraging size to speed innovation, respond to market needs, or lure investment dollars is not seen as unfair or illegal. This distinction guards against the use of courts to protect or prop up uncompetitive companies. (European antitrust, however, is a different animal).

The AT&T/T-Mobile merger is a sign of a maturing market, not the reconstitution of a monopoly that existed 30 years ago in an environment very different from today’s. Bottom line: economies of scale could not sustain seven regional telecommunications companies. Far from “unthinkable,” as one-time FCC Chairman Reed Hundt once declared, their consolidation was inevitable. A few prescient analysts, including Victor Schnee and Allan Tumolillo in the landmark study “Taking Over Telephone Companies,” predicted this very thing as far back as 1990.

Broadly speaking, we are entering a new phase of service provision, where wireless stands to be a much more competitive “last mile” technology for broadband. This will shuffle the players and the stakes again. The former AT&T companies dominate wireless, to be sure, but on the wireline side, cable companies have the competitive advantage. The right approach to this merger would be to view it in the context of the evolving broadband market. With this perspective, AT&T and T-Mobile won’t be one wireless company among three, but one national broadband player among six or seven.

Let the competition grow.

]]>
https://techliberation.com/2011/04/22/for-the-last-time-the-bell-system-monopoly-is-not-being-rebuilt/feed/ 6 36381
Remembering What Regulatory Capture Looked Like: The Airline Experience https://techliberation.com/2011/04/11/remembering-what-regulatory-capture-looked-like-the-airline-experience/ https://techliberation.com/2011/04/11/remembering-what-regulatory-capture-looked-like-the-airline-experience/#respond Tue, 12 Apr 2011 02:51:23 +0000 http://techliberation.com/?p=36203

This week, my colleague Jerry Brito asked me to guest lecture to his George Mason University law school class on regulatory process. He asked me to talk about one of my favorite topics: the sad, sordid history of regulatory capture. Regular readers will recall the compendium I posted here a few months ago [and that I continue to update] of selected passages from books and papers penned by various economists and political scientists who have studied this issue.

Again, it doesn’t make for pretty reading, but the lesson that history teaches is vital: No matter how noble the “public interest” goals of regulatory advocates or their specific proposals, the only thing that really counts is what regulation means in practice.  Regrettably, all too often, regulation is “captured” by various interests and used to their advantage, or at least to the disadvantage of potential competitors, new entrants, and innovation.

While I was gathering some materials for the case study portion of my lecture — which incorporates the history of telecommunications monopolization, broadcast industry regulatory shenanigans, and transportation / airlines fiascos — I figured I had to post a passage from one of my favorite books on regulation of all time: Thomas K. McCraw’s brilliant Pulitzer Prize-winning 1984 book, Prophets of Regulation. In his chapter on the late, great Alfred Kahn, the father of airline deregulation, McCraw recounts the history of the Civil Aeronautics Board (CAB) from its creation in the 1940s up until the time of Kahn’s ascendancy to CAB chairman in the Carter Administration (and then the CAB’s eventual deregulation and abolition). Here’s the key passage from that history:

“Clearly, in passing the Civil Aeronautics Act [of 1938], Congress intended to bring stability to airlines. What is not clear is whether the legislature intended to cartelize the industry. Yet this did happen. During the forty years between passage of the act of 1938 and the appointment of [Alfred] Kahn to the CAB chairmanship, the overall effect of board policies tended to freeze the industry more or less in its configuration of 1938. One policy, for example, forbade price competition. Instead the CAB ordinarily required that all carriers flying a certain route charge the same rates for the same class of customer. […] A second policy had to do with the CAB’s stance toward the entry of new companies into the business. Charged by Congress with the duty of ascertaining whether or not ‘the public interest, convenience, and necessity’ mandated that new carriers should receive a certificate to operate, the board often ruled simply that no applicant met these tests. In fact, over the entire history of the CAB, no new trunkline carrier had been permitted to join the sixteen that existed in 1938. And those sixteen, later reduced to ten by a series of mergers, still dominated the industry in the 1970s. All these companies… developed into large companies under the protective wing of the CAB. None wanted deregulation.” (p. 263)

To reiterate: Zero new competitors were allowed. Zero price competition was allowed. And very little service innovation was permitted. It was a comfy little protected cartel from start to finish. It’s no wonder that “none wanted deregulation”!  Folks, if that isn’t the very definition of regulatory capture, I don’t know what is.

This is what makes Fred Kahn’s achievement all the more monumental.  Beyond his obvious mastery of the subject and rigorous documentation of regulatory failure in action, it was Fred’s sheer force of will and amazing spirit that provided the spark to get the deregulation of this mess moving forward. Against all odds — and with the help of some fellow liberal Democrats like Ted Kennedy, Ralph Nader, and Stephen Breyer — Fred did it.

And consumers owe him a huge debt of gratitude for it. Prices plummeted following the CAB’s abolition and countless new industry faces have come and gone since deregulation. Things haven’t been perfect by any stretch of the imagination, but can you imagine how much worse off we would have been absent Fred Kahn’s bold move to break the regulatory capture logjam and “free the skies” for competition?

Something to think about next time someone tells you that regulation is always in consumers’ best interests.

[P.S. – I posted a short obit here late last December remembering Fred Kahn upon his passing. I hope you read it if you haven’t already.  I still wish I could hear Fred speak just one more time. What a joy and inspiration that man was. I always treasured my moments with him. I still have the final email he sent me from early 2010 sitting in my inbox. I just can’t bring myself to delete it for some reason. He had passed along a nice note about a paper I had recently written. I practically cried when I read his note. One of my intellectual heroes had not only read something I penned but he had actually liked it! And he was 92 when he sent it to me.  Blows my mind.]

Chairman Genachowski and his Howling Commissioners: Reading the Net Neutrality Order (Part I) https://techliberation.com/2010/12/30/chairman-genachowski-and-his-howling-commissioners-reading-the-net-neutrality-order-part-i/ https://techliberation.com/2010/12/30/chairman-genachowski-and-his-howling-commissioners-reading-the-net-neutrality-order-part-i/#comments Thu, 30 Dec 2010 22:27:41 +0000 http://techliberation.com/?p=33907

At the last possible moment before the Christmas holiday, the FCC published its Report and Order on “Preserving the Open Internet,” capping off years of largely content-free “debate” on the subject of whether or not the agency needed to step in to save the Internet.

In the end, only FCC Chairman Julius Genachowski fully supported the final rules.  His two Democratic colleagues concurred in the vote (one approved in part and concurred in part), and issued separate opinions indicating their belief that stronger measures and a sounder legal foundation were required to withstand likely court challenges.  The two Republican Commissioners vigorously dissented, which is not the norm in this kind of regulatory action.  Independent regulatory agencies, like the U.S. Courts of Appeals, strive for and generally achieve consensus in their decisions.

So for now we have a set of “net neutrality” rules that a bi-partisan majority of the last Congress, along with industry groups and academics, strongly urged the agency not to adopt, and which were deemed unsatisfactory by four of the five Commissioners.  It’s hardly a moment of pride for the agency, which has been distracted by the noise around these proceedings since Genachowski was first confirmed by the Senate.  Important work freeing up radio spectrum for wireless Internet, reforming the corrupt Universal Service Fund, and promoting the moribund National Broadband Plan have all been sidelined.

How did we get here?  In October 2009, the agency first proposed new rules, but its efforts were sidetracked by an April 2010 court decision that held the agency lacked authority to regulate broadband Internet.  After flirting with the dangerous (and likely illegal) idea of “reclassifying” broadband to bring it under the old telephone rules, sanity seemed to return.  Speaking to state regulators in mid-November, the Chairman made no mention of net neutrality or reclassification, saying instead that “At the FCC, our primary focus is simple: the economy and jobs.”

Just a few days later, at a Silicon Valley event, the Chairman seemed to reverse course, promising that net neutrality rules would be finalized.  He also complimented the “very smart lawyers” in his employ who had figured out a way to do it without the authorization of Congress, which has consistently failed to pass enabling legislation since the idea first surfaced in 2003.  (Most recently, Democratic Congressman Henry Waxman floated a targeted net neutrality bill days before the mid-term elections, but never introduced it.)

From then until the Commission’s final meeting before the new Congress comes to town in January, Commissioners and agency watchers lobbied hard and feigned outrage over the most recent version of the rules, which the agency did not make public until after the final vote was taken on Dec. 21.  In oral comments delivered at the December meeting, two commissioners complained that they hadn’t seen the version they were to vote on until midnight the night before the vote.  Journalists covering the event didn’t have the document all five Commissioners referenced repeatedly in their spoken comments, and had to wait two more days for all the separate opinions to be collated.

Why the Midnight Order?  FCC Commissioners do not serve at the whim of Congress or the President, so the mid-term election results technically had no effect on the chances of agency action.  Chairman Genachowski has had the votes all along to approve pretty much anything he wants, and he will for the remainder of his term.

Even with a Republican House, legislation to block or overturn FCC actions is unlikely.  The Republicans would have to get Democratic support in the Senate, and perhaps overcome a Presidential veto.

But Republicans could use net neutrality as a bargaining chip in future negotiations, and the House can make life difficult for the agency by holding up its budget or by increasing its oversight of the agency, forcing the Chairman to testify and respond to written requests so much as to tie the agency in knots.

So doing something as Congress was nearly adjourned and too busy to do much but bluster was perhaps the best chance the Chairman had for getting something, anything, in the Federal Register.

More likely, the agency was simply punting the problem.  Tired of the rancor and distraction of net neutrality, the new rules—incomplete, awkward, and without a solid legal foundation—move the issue from the offices of the FCC to the courts and Congress.  That will still tie up agency resources and waste even more taxpayer money, of course, but now the pressure of industry and “consumer advocate” groups will change its focus.  Perhaps this was the only chance the Chairman had of getting any real work done.

The Report and Order

Too much ink has already been spilled on both the substance and the process of this order, but there are a few tidbits from the documents that are worth calling out.  In this post, I look at the basis for issuing what the agency itself calls “prophylactic rules.”  In subsequent posts, I’ll look at the final text of the rules themselves and compare them to the initial draft, as well as to alternatives offered by Verizon and Google and Congressman Waxman.  Another post will review the legal basis on which the rules are being issued, and likely legal challenges to the agency’s authority.  I’ll also examine the FCC’s proposed approach to enforcement of the rules.

“Prophylactic” Rules

Even the FCC acknowledges that the “problem” these new rules solve doesn’t actually exist…yet.  The rules are characterized as “prophylactic” rules—a phrase that appears eleven times in the 87-page report.  The report fears that the lack of robust broadband competition in much of the U.S. (how many sets of redundant broadband infrastructure do consumer advocates want companies to build out, anyway?) could lead to ISPs using their market influence to squeeze content providers, consumers, or both.

This hasn’t happened in the ten years broadband Internet has been growing in both capability and adoption, of course, but still, there’s a chance.  As the report (¶ 21) puts it in challenged grammar, “broadband providers potentially face at least three types of incentives to reduce the current openness of the Internet.”

We’ll leave to the side for now the undiscussed potential that these new rules will themselves cause unintended negative consequences for the future development or deployment of technologies built on top of the open Internet.  Instead, let’s look at the sum total of the FCC’s evidence, collected over the course of more than a year with the help of advocates who believe the “Internet as we know it” is at death’s door, that broadband providers are lined up to destroy the technology that, ironically, is the source of their revenue.

To prove that these “potential” incentives are not “speculative or merely theoretical,” the FCC cites precisely four examples between 2005 and 2010 where it believes broadband providers have threatened the open Internet (¶ 35).  These are:

1. A local ISP that was “a subsidiary of a telephone company” settled claims it had interfered with Voice over Internet Protocol (VoIP) applications used by its customers.

2. Comcast agreed to change its network management techniques after the company was caught slowing or blocking packets using the BitTorrent protocol (the subject of the 2010 court decision holding the agency lacked jurisdiction over broadband Internet).

3. After a mobile wireless provider contracted with an online payment service, the provider “allegedly” blocked customers’ attempts to use competing services to pay for purchases made with mobile devices.

4. AT&T initially restricted the types of applications, including VoIP and Slingbox, that customers could use on their Apple iPhone.

In the world of regulatory efficiency, this much attention being focused on just four incidents of potential or “alleged” market failures is a remarkable achievement indeed.  (Imagine if the EPA, FDA, or OSHA reacted with such energy to the same level of consumer harm.)

But in legal parlance, regulating on such a microscopically thin basis goes well beyond mere “pretense”; it’s downright embarrassing that the agency couldn’t come up with more to justify its actions.  Of the incidents, (1) and (2) were resolved quickly through existing agency authority; (3) was merely alleged and apparently did not even lead to a complaint filed with the FCC (the footnote here is to comments filed by the ACLU, so it’s unclear who is being referenced); and (4) was resolved, as the FCC acknowledges, when customers put pressure on Apple and on AT&T, the sole iPhone network provider, to allow the applications.

Even under the rules adopted, (2) would almost surely still be allowed.  The Comcast case involved use of the BitTorrent protocol.  Academic studies performed since 2008 (by which time the protocol had been put to more legal uses) find that over 99% of BitTorrent traffic still involves unlicensed copyright infringement.  Thus the vast majority of the traffic involved is not “lawful” traffic and, therefore, is not subject to the rules.  The no-blocking rule (§ 8.5) only prohibits blocking of “lawful content, applications, services or non-harmful devices.”  (emphasis added)

Indeed, the FCC encourages network providers to move more aggressively to block customers who use the Internet to violate intellectual property law.  In ¶ 111, the Report makes crystal clear that the new rules “do not prohibit broadband providers from making reasonable efforts to address the transfer of unlawful content or unlawful transfers of content… open Internet rules should not be invoked to protect copyright infringement….”  (Perhaps the FCC, which continues to refer to BitTorrent as an “application,” or believes it to be a website, simply doesn’t understand how the BitTorrent protocol actually works.)
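A quick aside to make the protocol-versus-application point concrete. The snippet below (my own illustration, in Python; the sample metainfo values are hypothetical, not taken from any real torrent) decodes the “bencoded” metadata that makes up a .torrent file. BitTorrent is a published data format plus a wire protocol that many independent client applications implement; calling it “an application,” or treating it as a website, misses what it is, in roughly the way that calling HTTP “a website” would.

def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at index i; return (value, next index)."""
    c = data[i:i + 1]
    if c == b"i":                          # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                          # list: l<items>e
        i += 1
        items = []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if c == b"d":                          # dictionary: d<key><value>...e
        i += 1
        d = {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            val, i = bdecode(data, i)
            d[key] = val
        return d, i + 1
    colon = data.index(b":", i)            # byte string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

# Hypothetical .torrent metainfo: a tracker URL plus a file description.
sample = b"d8:announce30:http://tracker.example.com/ann4:infod4:name8:file.iso6:lengthi1048576eee"
meta, _ = bdecode(sample)
print(meta[b"announce"])                       # where a client asks for peers
print(meta[b"info"][b"name"], meta[b"info"][b"length"])

Any program that can parse this metadata and speak the peer wire protocol joins the swarm; there is no single “BitTorrent” application or site for an ISP, or a regulator, to point at.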

Under the more limited wireless rules adopted, (3) and (4) would probably still be allowed as well.  We don’t know enough about (3) to really understand what is “alleged” to have happened, but the no-blocking rule (§ 8.5) says only that mobile broadband Internet providers “shall not block consumers from accessing lawful websites, subject to reasonable network management; nor shall such person block applications that compete with the provider’s voice or video telephony service, subject to reasonable network management.”

A mobile payment application wouldn’t seem to be included in that limitation, and in the case of the iPhone, it was Apple, not AT&T, that wanted to limit VoIP.

Even so, the Report makes clear that the wireless rule (¶ 102) doesn’t apply to app stores: “The prohibition on blocking applications that compete with a broadband provider’s voice or video telephony services does not apply to a broadband provider’s operation of application stores or their functional equivalent.”  So if incidents (3) and (4) turned on the rejection of proposed apps for the respective mobile devices, there would still be no violation under the new rules.

And the caveat for “reasonable network management” (§ 8.11(d)) says only that a practice is “reasonable if it is appropriate and tailored to achieving a legitimate network purpose, taking into account the particular network architecture of the broadband Internet access service.”  Voice and video apps, depending on how they have been implemented, can put particular strain on a wireless broadband network.  Blocking particular VoIP apps, or apps like Slingbox, might be allowed, in other words.
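For readers wondering what “reasonable network management” can look like in engineering terms, here is a minimal sketch (in Python; the rates, class names, and policy are my own assumptions for illustration, not anything taken from the Order) of a token-bucket traffic shaper, the standard alternative to outright blocking: a flow that exceeds its sustained rate is delayed or dropped at the margin rather than forbidden.

import time

class TokenBucket:
    """Token-bucket shaper: packets spend tokens; tokens refill at a fixed
    rate, so short bursts are tolerated but the long-run rate is capped."""

    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec    # sustained rate allowed for the flow
        self.capacity = burst_bytes       # how far above the rate a burst may go
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Refill tokens for elapsed time, then return True if this packet
        may pass now, or False if it should be queued or dropped (shaped)."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

# Hypothetical policy: on a congested cell, video flows are shaped to
# roughly 2 Mbps (250,000 bytes/sec) instead of being blocked outright.
video = TokenBucket(rate_bytes_per_sec=250_000, burst_bytes=500_000)
for size in (1500, 1500, 1500):           # three full-size packets
    print(video.allow(size))              # True while the burst allowance lasts

Whether shaping video traffic this way counts as “reasonable network management” or as prohibited blocking is precisely the line-drawing question the rule’s language leaves open.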

So that’s it.  Only four or fewer actual examples of non-open behavior by ISPs in ten years.  And the rules adopted to curb such behavior would probably apply, at best, only to (1), the case of Madison River, a local telephone carrier with six hundred employees, in a matter the FCC agreed to drop without a formal finding of any kind nearly six years ago.

But maybe these aren’t the real problems.  Maybe the real problem is, as many regulatory advocates argue vaguely, the lack of “competition” for broadband.  Since the first deployment of high-speed Internet, multiple technologies have been used to deliver access to consumers, including DSL (copper), coaxial cable, satellite, cellular (3G and now 4G), wireless (WiFi and WiMax), and broadband over power lines.  According to the National Broadband Plan, 4% of the U.S. population still doesn’t have access to any of these alternatives.  In many parts of the country, only two providers are available and in others, the offered speeds of alternatives vary greatly, leaving high-bandwidth users without effective alternatives.

If lack of competition is the problem, though, why not solve that problem?  Well, perhaps the FCC would rather sidestep the issue, since it has demonstrated it is the wrong agency to encourage more competition.  The FCC, for example, has supported legal claims by states that they can prohibit municipalities from offering wireless service, and has dragged its feet on approving trials for broadband over power lines—the best hope for much of the 4% who today have no broadband option, most of whom live in rural areas which already have power line infrastructure.

Indeed, if there are anti-competitive behaviors now or in the future, existing antitrust law, enforceable by either the Department of Justice or the Federal Trade Commission, provides much more powerful tools both to prosecute and remedy activities that genuinely harm consumers.

It’s hard, by comparison, to find many examples in the long history of the FCC where it has used its sometimes vast authority to solve a genuine problem.  The Carterfone decision, which Commissioner Copps cites enthusiastically in his concurrence, and (finally) the opening of long distance telephony to competition, certainly helped consumers.  But both (and other examples) could also be seen as undoing harm caused by the agency in the first place.  And both dealt with technologies and applications that were mature.  Why does anyone believe the FCC can “prophylactically” solve a problem dealing with an emerging, rapidly-evolving new technology that has thrived in the last decade in part because it was unregulated?

The new rules, which are aimed at ensuring “edge” providers do not need to get “permission to innovate” from ISPs, may have the unintended effect of requiring ISPs, and edge providers, to get “permission to innovate” from the FCC.  That hardly seems like a risk worth taking for a problem that hasn’t presented itself.

Alfred Kahn – An Appreciation https://techliberation.com/2010/12/28/alfred-kahn-an-appreciation/ https://techliberation.com/2010/12/28/alfred-kahn-an-appreciation/#comments Tue, 28 Dec 2010 15:31:45 +0000 http://techliberation.com/?p=33886

I was very sad to learn this morning of the death of Alfred Kahn, the brilliant economist known as “the father of airline deregulation.”  He was 93.  He was a gracious and gregarious man who never failed to have a smile on his face and to make those around him smile even more.  He will be missed.

Kahn has been an inspiration to an entire generation of regulatory analysts and economists. His 2-volume masterwork, The Economics of Regulation, has served as our bible and provided us with a framework to critically analyze the efficacy of government regulation. I have cited it in more of my papers and essays than any other book or article. The book was that big of a game-changer, as was Kahn’s time in government.  A self-described “good liberal Democrat,” Kahn was appointed by President Jimmy Carter to serve as Chairman of the Civil Aeronautics Board in 1977 and promptly set to work with other liberals, such as Sen. Ted Kennedy, Stephen Breyer, and Ralph Nader, to dismantle anti-consumer cartels that had been sustained by government regulation. These men understood that consumer welfare was better served by innovative, competitive markets than by captured regulators, who talked a big game about serving “the public interest” but were typically busy stifling innovation and market entry.

His academic and policy achievements were significant, but what I will most remember about him is that, in a field not known for lively personalities or exciting discussions, Kahn was a consistent source of great wit and entertainment. He always managed to make even the most dreadfully boring of regulatory topics interesting and entertaining. Everyone would go away happy from a Fred Kahn talk.  Moreover, in a policy arena characterized by bitter intellectual bickering and endless bad-mouthing, Kahn always rose above the fray and held himself out to be a model of maturity and respectfulness. I have never heard a single person say a bad word about Alfred Kahn. Not one. That’s saying something in the field of regulatory policy!

One quick story about one of my interactions with Fred.  Back in 1994, someone in DC was hosting a lunch on telecom and regulatory policy and Kahn was the guest of honor. Knowing this in advance, I brought along my copy of The Economics of Regulation hoping for an inscription from Fred.  I handed it to him — I think my hands were shaking as if I were a teenage girl meeting the Jonas Brothers — and asked Fred for a simple autograph. He took a close look at my well-worn book, with scribblings in every margin, Post-It notes all over it, and every other page dog-eared for one reason or another.  The book was that important to me.  Seeing this, Fred flashed me one of his signature big grins and laughed as he wrote on the first page: “To a man of obviously excellent judgment!”  He handed it back to me and said, “I wish everyone cared enough about my book to deface it like that!”

We did, Fred. We did. Thank you for it, everything you taught us, and the example you set for all of us. You will not be forgotten.

Regulatory Capture: What the Experts Have Found https://techliberation.com/2010/12/19/regulatory-capture-what-the-experts-have-found/ https://techliberation.com/2010/12/19/regulatory-capture-what-the-experts-have-found/#comments Mon, 20 Dec 2010 00:58:22 +0000 http://techliberation.com/?p=33727

[Note: This post is updated regularly as I discover relevant old or new material.]

“Regulatory capture” occurs when special interests co-opt policymakers or political bodies — regulatory agencies, in particular — to further their own ends.  Capture theory is closely related to the “rent-seeking” and “political failure” theories developed by the public choice school of economics.  Another term for regulatory capture is “client politics,” which, according to James Q. Wilson, “occurs when most or all of the benefits of a program go to some single, reasonably small interest (an industry, profession, or locality) but most or all of the costs will be borne by a large number of people (for example, all taxpayers).”  (James Q. Wilson, Bureaucracy, 1989, at 76).

While capture theory cannot explain all regulatory policies or developments, it does provide an explanation for the actions of political actors with dismaying regularity.  Because regulatory capture theory conflicts mightily with romanticized notions of “independent” regulatory agencies or “scientific” bureaucracy, it often evokes a visceral reaction and a fair bit of denialism.  (See, for example, the reaction of New Republic’s Jonathan Chait to Will Wilkinson’s recent Economist column about the prevalence of corporatism in our modern political system.)  Yet, countless studies have shown that regulatory capture has been at work in various arenas: transportation and telecommunications; energy and environmental policy; farming and financial services; and many others.

I thought it might be useful to build a compendium of quotes from various economists and political scientists who have studied the regulatory process throughout history and identified regulatory capture or client politics as a major problem.  I would greatly appreciate having others suggest additional quotes and studies to add to this list since I plan to update it frequently and eventually work all of this into a future paper or book. [Note: I have updated this compendium over a dozen times since the original post, so please check back for updates.]

The following list is chronological and begins, surprisingly, with the thoughts of progressive hero Woodrow Wilson…

Woodrow Wilson, The New Freedom: A Call For the Emancipation of the Generous Energies of a People (1913) at 201-202:

“If the government is to tell big business men how to run their business, then don’t you see that big business men have to get closer to the government even than they are now? Don’t you see that they must capture the government, in order not to be restrained too much by it? Must capture the government? They have already captured it. Are you going to invite those inside to stay? They don’t have to get there. They are there.”

A. C. Pigou, The Economics of Welfare (1920), Ch. 20, Para. #4:

“It is not sufficient to contrast the imperfect adjustments of unfettered private enterprise with the best adjustment that economists in their studies can imagine. For we cannot expect that any public authority will attain, or will even whole-heartedly seek, that ideal. Such authorities are liable alike to ignorance, to sectional pressure and to personal corruption by private interest. A loud-voiced part of their constituents, if organised for votes, may easily outweigh the whole.”

Anthony Downs, “An Economic Theory of Political Action in a Democracy,” 65 Journal of Political Economy 2 (1957), 135-150, at 136:

“…even if social welfare could be defined, and methods of maximizing it could be agreed upon, what reason is there to believe that the men who run the government would be motivated to maximize it? To state that they should do so does not mean that they will.”

Ronald Coase, “The Federal Communications Commission,” 2 Journal of Law and Economics (1959), 1-40, at 37. In commenting on the fact that many lawmakers bemoaned “the extent to which pressure is brought to bear on the [FCC] by politicians and businessmen,” Coase said “that this should be happening is hardly surprising.”  He continued on:

“When rights, worth millions of dollars, are awarded to one businessman and denied to others, it is no wonder if some applicants become overanxious and attempt to use whatever influence they have (political and otherwise), particularly as they can never be sure what pressure the other applicants may be exerting.”

Milton Friedman, Capitalism & Freedom (1962) at 140:

“the pressure on the legislature to license an occupation rarely comes from the members of the public . . . On the contrary, the pressure invariably comes from the occupation itself.”

Harold Demsetz, “Why Regulate Utilities?,” 11(1) Journal of Law and Economics (Apr., 1968), at 61.

“…in utility industries, regulation has often been sought because of the inconvenience of competition.”

Richard Posner, “Natural Monopoly and Its Regulation,” 21(3) Stanford Law Review 548 (Feb., 1969):

“Because regulatory commissions are of necessity intimately involved in the affairs of a particular industry, the regulators and their staffs are exposed to strong interest group pressures.  Their susceptibility to pressures that may distort economically sound judgments is enhanced by the tradition of regarding regulatory commissions as ‘arms of the legislature,’ where interest-group pressures naturally play a vitally important role.”

George Stigler, “The Theory of Economic Regulation,” 2(1) Bell Journal of Economics and Management Science, (1971), 3-21 at 3:

“…as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefits.”

George Stigler, “Can Regulatory Agencies Protect the Consumer?” in The Citizen and the State: Essays on Regulation (1975), at 183:

“Regulation and competition are rhetorical friends and deadly enemies: over the doorway of every regulatory agency save two should be carved: ‘Competition Not Admitted.’ The Federal Trade Commission’s doorway should announce, ‘Competition Admitted in Rear,’ and that of the Antitrust Division, ‘Monopoly Only by Appointment.’”

Theodore J. Lowi, The End of Liberalism: The Second Republic of the United States (2nd Ed., 1969, 1979) at 280:

“a considerable proportion of federal regulation, regardless of its own claim to consumer protection, has the systematic effect of constituting and maintaining a sector of the economy or the society. These are the policies of receivership by regulation.”

Alfred Kahn, The Economics of Regulation: Principles and Institutions (1971):

“When a commission is responsible for the performance of an industry, it is under never completely escapable pressure to protect the health of the companies it regulates, to assure a desirable performance by relying on those monopolistic chosen instruments and its own controls rather than on the unplanned and unplannable forces of competition.” (p. 12) “Responsible for the continued provision and improvement of service, [the regulatory commission] comes increasingly and understandably to identify the interest of the public with that of the existing companies on whom it must rely to deliver goods.” (p. 46)

Mark Green and Ralph Nader, “Economic Regulation vs. Competition: Uncle Sam the Monopoly Man,” Yale Law Journal 82, no. 5 (April 1973), 876:

“a kind of regular personnel interchange between agency and industry blurs what should be a sharp line between regulator and regulatee, and can compromise independent regulatory judgment. In short, the regulated industries are often in clear control of the regulatory process.”

Richard B. McKenzie and Gordon Tullock, Modern Political Economy: An Introduction to Economics (1978) at 220:

“although regulation is begun with the good intentions of those who promote and pass the laws, somewhere along the line regulators may become pawns of the regulated firms.”

Milton and Rose Friedman, Free to Choose (1980) at 193:

“Every act of intervention establishes positions of power.  How that power will be used and for what purposes depends far more on the people who are in the best position to get control of that power and what their purposes are than on the aims and objectives of the initial sponsors of the intervention.”

Barry M. Mitnick, The Political Economy of Regulation: Creating, Designing, and Removing Regulatory Forms (New York: Columbia University Press, 1980), at 38:

“Much relatively recent research has argued that regulation was often sought by industries for their own protection, rather than being imposed in some ‘public interest.’ Although the distinction is not always made clear in this recent literature, we may add that regulation which is not directly sought at the outset is generally ‘captured’ later on so it behaves with consistency to the industry’s major interests, or at least has been observed to behave in this manner.”

Barry Weingast, “Regulation, Reregulation and Deregulation: The Foundation of Agency-Clientele Relationships,” 44 Law and Contemporary Problems (1981), pp. 147-77, at 151:

“Often, agencies are the vehicle for this endeavor. Agency heads and commission members, anxious to further their careers and goals (including large budgets, power and prestige, and completing their own pet projects and policy initiatives), depend upon service to interest groups and key committee members for their success.”

George Gilder, Wealth & Poverty (New York: Bantam Books, 1981), p. 283:

“One reason for government resistance to change is that the process of creative destruction can attack not only an existing industry, but also the regulatory apparatus that subsists on it; and it is much more difficult to retrench a bureaucracy than it is to bankrupt a company. A regulatory apparatus is a parasite that can grow larger than its host industry and become in turn a host itself, with the industry reduced to parasitism, dependent on the subsidies and protections of the very government body that initially sapped its strength.”

Bruce Yandle, “Bootleggers and Baptists — The Education of a Regulatory Economist,” Regulation, Vol. 3, No. 3 (May/June 1983), p. 13:

“what do industry and labor want from the regulators? They want protection from competition, from technological change, and from losses that threaten profits and jobs. A carefully constructed regulation can accomplish all kinds of anticompetitive goals of this sort, while giving the citizenry the impression that the only goal is to serve the public interest.”

Thomas K. McCraw, Prophets of Regulation (Cambridge, MA: Harvard University Press, 1984), p. 263 [recounting the history of the Civil Aeronautics Board up until the time of Alfred Kahn’s ascendancy to chairman and its eventual deregulation and abolition]:

“Clearly, in passing the Civil Aeronautics Act [of 1938], Congress intended to bring stability to airlines. What is not clear is whether the legislature intended to cartelize the industry. Yet this did happen. During the forty years between passage of the act of 1938 and the appointment of [Alfred] Kahn to the CAB chairmanship, the overall effect of board policies tended to freeze the industry more or less in its configuration of 1938. One policy, for example, forbade price competition. Instead the CAB ordinarily required that all carriers flying a certain route charge the same rates for the same class of customer. […] A second policy had to do with the CAB’s stance toward the entry of new companies into the business. Charged by Congress with the duty of ascertaining whether or not ‘the public interest, convenience, and necessity’ mandated that new carriers should receive a certificate to operate, the board often ruled simply that no applicant met these tests. In fact, over the entire history of the CAB, no new trunkline carrier had been permitted to join the sixteen that existed in 1938. And those sixteen, later reduced to ten by a series of mergers, still dominated the industry in the 1970s. All these companies… developed into large companies under the protective wing of the CAB. None wanted deregulation.”

Robert Higgs, Crisis and Leviathan: Critical Episodes in the Growth of American Government (1987) p. 8:

“The government’s regulatory agencies have created or sustained private monopoly power more often than they have precluded or reduced it.  This result was exactly what  many interested parties desired from government regulation, though they would have been impolitic to have said so in public.”

Jeffrey M. Berry, The Interest Group Society (1989) p. 151:

“The ties between interest groups and [regulatory] agencies can become too close. A persistent criticism by political scientists is that agencies that regulate businesses are overly sympathetic to the industries they are responsible for regulating.  Critics charge that regulators often come from the businesses they regulate and thus naturally see things from an industry point of view.  Even if regulators weren’t previously involved in the industry, they have been seen as eager to please powerful clientele groups rather than have them complain to the White House or to the agency’s overseeing committees in Congress.”

Jonathan Emord, “The Electronic Press and the Industry Capture Movement,” Chapter 11 of Freedom, Technology and the First Amendment (1991), p. 146 (discussing the early history of radio licensing):

“The minutes of the First National Radio Conference in 1922 reveal that even at this early date, industry leaders clamored for government limits on the number of licenses issued; they sought protection against entry by new licensees. For its part, the government desired control over the industry’s structure and programming content. Certain members of Congress, joined by [Secretary of Commerce Herbert] Hoover, agreed with broadcast industry leaders that the system of broadcasting in the United States would be brought within the federal government’s control. The classic rent/content control quid pro quo soon developed: in exchange for regulatory controls on industry structure and programming content, industry leaders would be granted restrictions on market entry that they wanted. These restrictions would ensure monopoly rents for licensees and would provide the government with assurance that the broadcast industry would not oppose regulatory controls.”

David Schoenbrod, Power Without Responsibility: How Congress Abuses the People Through Delegation (New Haven, CT: Yale University Press, 1993), p. 13:

“Agency heads are usually not apolitical and, indeed, concentrated interests often prevail more easily in an agency than they can in Congress. Effective participation in agency lawmaking usually requires expensive legal representation as well as close connections to members of Congress who will pressure the agency on one’s behalf. The agency itself is often closely linked with the industry it regulates. Not only large corporations, but also labor unions, cause-based groups, and other cohesive minority interests sometimes can use delegation to triumph over the interests of the larger part of the general public, which lacks the organization, finances, and know-how to participate as effectively in the administrative process.”

Douglass North, “Economic Performance through Time,” 84 American Economic Review 3, (1994), 359-363, at p. 360:

“Institutions are not necessarily or even usually created to be socially efficient; rather they, or at least the formal rules, are created to serve the interests of those with the bargaining power to create new rules.”

P.A. McNutt, The Economics of Public Choice (1996), pp. 105-6:

“The more successful the interest group becomes the greater the probability that it will be in a position to impact on the policy making process of successive governments. … Aspiring monopolists will retain lobbyists to assure a favourable outcome and devote resources to the acquisition of the monopoly right.  A government will more than likely grant monopoly privileges to various groups of politically influential people.  Cartels and anti-competitive behaviour will be maintained and politicians will react to the demands of the more vociferous and well organised interest groups.”

Andrew Odlyzko, “Privacy, Economics, and Price Discrimination on the Internet,” July 27, 2003, p. 12:

“It is now widely accepted that the passage of the Interstate Commerce Act of 1887 was not a pure triumph of the populist movement and its allies in the anti-railroad camp. The railway industry largely decided that regulation was in its best interests and acquiesced in and even encouraged government involvement. This is often portrayed as the insidious capture of the regulators by the industry they regulate. There is certainly much evidence to support this view.”

Lawrence Lessig, “Reboot the FCC,” Newsweek, December 23, 2008:

“Economic growth requires innovation. Trouble is, Washington is practically designed to resist it. Built into the DNA of the most important agencies created to protect innovation, is an almost irresistible urge to protect the most powerful instead. The FCC is a perfect example. … With so much in its reach, the FCC has become the target of enormous campaigns for influence. Its commissioners are meant to be “expert” and “independent,” but they’ve never really been expert, and are now openly embracing the political role they play. Commissioners issue press releases touting their own personal policies. And lobbyists spend years getting close to members of this junior varsity Congress.”

Thomas Frank, “Obama and Regulatory Capture,” Wall Street Journal, June 24, 2009:

“There are powerful institutions that don’t like being regulated. Regulation sometimes cuts into their profits and interferes with their business. So they have used the political process to sabotage, redirect, defund, undo or hijack the regulatory state since the regulatory state was first invented. The first federal regulatory agency, the Interstate Commerce Commission, was set up to regulate railroad freight rates in the 1880s. Soon thereafter, Richard Olney, a prominent railroad lawyer, came to Washington to serve as Grover Cleveland’s attorney general. Olney’s former boss asked him if he would help kill off the hated ICC. Olney’s reply, handed down at the very dawn of Big Government, should be regarded as an urtext of the regulatory state: ‘The Commission… is, or can be made, of great use to the railroads. It satisfies the popular clamor for a government supervision of the railroads, at the same time that that supervision is almost entirely nominal. Further, the older such a commission gets to be, the more inclined it will be found to take the business and railroad view of things. … The part of wisdom is not to destroy the Commission, but to utilize it.'”

Tim Wu, The Master Switch: The Rise and Fall of Information Empires (2010), p. 308:

“Again and again in the histories I have recounted, the state has shown itself an inferior arbiter of what is good for the information industries. The federal government’s role in radio and television from the 1920s through the 1960s, for instance, was nothing short of a disgrace…. Government’s tendency to protect large market players amounts to an illegitimate complicity … [particularly its] sense of obligation to protect big industries irrespective of their having become uncompetitive.”

David J. Farber & Gerald R. Faulhaber, “Net Neutrality: No One Will Be Satisfied, Everyone Will Complain,” The Atlantic, December 21, 2010:

“When the FCC asserts regulatory jurisdiction over an area of telecommunications, the dynamic of the industry changes. No longer are customer needs and desires at the forefront of firms’ competitive strategies; rather firms take their competitive battles to the FCC, hoping for a favorable ruling that will translate into a marketplace advantage. Customer needs take second place; regulatory “rent-seeking” becomes the rule of the day, and a previously innovative and vibrant industry becomes a creature of government rule-making.”

Holman Jenkins, “Let’s Restart the Green Revolution,” Wall Street Journal, February 2, 2011, (regarding how misguided agricultural & environmental policies are hurting consumers):

“When some hear the word ‘regulation,’ they imagine government rushing to the defense of consumers. In the real world, government serves up regulation to those who ask for it, which usually means organized interests seeking to block a competitive threat. This insight, by the way, originated with the left, with historians who went back and reconstructed how railroads in the U.S. concocted federal regulation to protect themselves from price competition. We should also notice that an astonishingly large part of the world has experienced an astonishing degree of stagnation for an astonishingly long time for exactly such reasons.”

Bruce Schneier, Liars & Outliers: Enabling the Trust that Society Needs to Thrive (New York: John Wiley & Sons, Inc., 2012), p. 204.

“There’s one competing interest that’s unique to enforcing institutions, and that’s the interest of the group the institution is supposed to watch over. If a government agency exists only because of the industry, then it is in its self-preservation interest to keep that industry flourishing. And unless there’s some other career path, pretty much everyone with the expertise necessary to become a regulator will be either a former or future employee of the industry with the obvious implicit and explicit conflicts. As a result, there is a tendency for institutions delegated with regulating a particular industry to start advocating the commercial and special interests of that industry. This is known as regulatory capture, and there are many examples both in the U.S. and in other countries.”

Bruce Owen, “Communication Policy Reform, Interest Groups, and Legislative Capture” (Stanford, CA: Stanford Institute for Economic Policy Research, January 19, 2012), SIEPR Discussion Paper No. 11-006, p. 2. Owen argues that it is the legislative branch, not the regulatory agencies themselves, where regulatory capture takes root:

“It is rather legislative oversight and budget committees and their chairs that are (willingly) captured by special interests in the first instance. One could equally say that legislators capture the special interests, seeking campaign funding. The behavior of regulatory agencies simply reflects the preferences of their congressional masters. Regulators generally seek to please their committees, not to defy them.”

Mark Zachary Taylor, The Politics of Innovation: Why Some Countries Are Better Than Others at Science and Technology (Oxford University Press, 2016), p. 213:

“political resistance to technological change can obstruct or warp otherwise ‘good’ S&T [science and technology] policy. Time and again, the losing interest groups created by scientific progress or technological change have been able to convince politicians to block, slow, or alter government support for scientific and technological progress. They support taxes, regulations, subsidies, procurement policies, spending, and so forth that obstruct progress in new S&T, and favor the status quo S&T. The losers and their political representatives have interfered with markets, public institutions and policies, and even the scientific debate itself–whatever they can to protect their interests.”

