Is it “insane” for free market oriented thinkers to support the AT&T/T-Mobile merger?  Although AT&T says there are five wireless providers to choose from in 18 of 20 major markets, Milton Mueller argues that 93 percent of wireless subscribers prefer a seamless, nationwide provider.  If the merger is approved, there will be only three such providers.

A market dominated by three major providers is neither competitive nor noncompetitive as a definitional matter.  Factual analysis is necessary to determine competitiveness.

It may also be premature to dismiss the competitive significance of the more than one hundred providers currently delivering nationwide service through the voluntary roaming agreements common in the industry, or to assume that the FCC’s possible doubling of the spectrum available for wireless services will not affect the structure of the industry.

The trouble with antitrust generally is the possibility that government will choose to protect weak or inefficient competitors, thus preventing the meaningful competition that attracts private investment and leads to innovation, better services, and lower prices.  Antitrust is supposed to protect consumers, not politically influential producers.  Although this sounds simple in theory, it can get confusing in practice.  As free market oriented thinkers, we do not want government picking winners and losers.


On the podcast this week, Gavin Andresen, project lead of the open source, decentralized, and anonymous virtual currency project Bitcoin, talks about the project. Andresen explains how the peer-to-peer currency functions, what allows Bitcoin to operate without a central bank, why it doesn’t have to rely on intermediaries, and how it overcomes the double-spending problem. He also discusses the project’s implications for government regulation, what attracted him to the project, and Bitcoin inventor Satoshi Nakamoto’s motivation for creating the currency.
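
For listeners who want a rough feel for the double-spending problem before diving in, here is a deliberately toy sketch in Python. It is not Bitcoin’s actual protocol (which relies on proof-of-work mining and a public block chain of transactions); it only illustrates the core idea that a shared record of who owns what lets every participant reject a coin that has already been spent.

```python
# Toy illustration of double-spend rejection against a shared ledger.
# This is NOT Bitcoin's real protocol (which uses proof-of-work and a
# chain of blocks); it only shows the basic idea that every node checks
# a new transaction against the full public history before accepting it.

class ToyLedger:
    def __init__(self):
        self.unspent = {}   # coin_id -> current owner
        self.history = []   # accepted transactions

    def mint(self, coin_id, owner):
        """Create a new coin (stands in for mined coins in the real system)."""
        self.unspent[coin_id] = owner

    def transfer(self, coin_id, sender, recipient):
        """Accept the transfer only if the coin is still owned by the sender;
        a second spend of the same coin is rejected."""
        if self.unspent.get(coin_id) != sender:
            return False  # double spend (or unknown coin): reject
        self.unspent[coin_id] = recipient
        self.history.append((coin_id, sender, recipient))
        return True

ledger = ToyLedger()
ledger.mint("coin-1", "alice")
print(ledger.transfer("coin-1", "alice", "bob"))    # True: first spend accepted
print(ledger.transfer("coin-1", "alice", "carol"))  # False: double spend rejected
```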


To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?

I was surprised to read a defense of the AT&T-T-Mobile merger here.

Let’s begin at the beginning and ask why this merger is happening. It’s not as if AT&T is gaining dominance the way Google gained it in search and advertising, or the way Intel did in chips: i.e., through low prices, superior products and customer loyalty. No, last time I looked AT&T was the carrier with the lowest customer satisfaction ratings, some of the highest prices and one of the weakest network performance metrics. In my opinion there is no reason for this merger to take place other than to make life easier for AT&T by reducing competitive pressures on it. AT&T seems to be driven by the following calculus. It can either grow its services and its network under the harsh constraints of market pricing and competition, or it can attempt to reduce the field to an oligopoly with tacit price controls by using its size and financial bulk to eliminate a pest who keeps downward pressure on pricing and service requirements. I think it is rational for AT&T to try to get away with the latter. I think it’s insane for free market oriented thinkers to support it.

Larry Downes can’t argue with the extremely high level of market concentration and the scary HHI measurements that the merger would produce. So he plays the game that clever antitrust advocates always play: shift the market definition. Downes argues that “both Justice and the FCC have consistently concluded that wireless markets are essentially local.” I see no citation to any specific document supporting Downes’s claim, but if the DOJ and the FCC have concluded that “local” means “my metropolitan area,” they are wrong.
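
For readers who haven’t met the HHI: the Herfindahl-Hirschman Index is simply the sum of the squared market shares (in percentage points) of every firm in the market. Under the DOJ and FTC’s 2010 merger guidelines, anything above 2,500 counts as highly concentrated, and an increase of more than 200 points in such a market is presumed likely to enhance market power. The short Python sketch below uses hypothetical, rounded national shares purely to illustrate the calculation, not actual 2011 subscriber figures.

```python
# Herfindahl-Hirschman Index: the sum of squared market shares (in percent).
# The shares below are hypothetical round numbers chosen only to illustrate
# the calculation, not actual 2011 wireless subscriber shares.

def hhi(shares_in_percent):
    return sum(s ** 2 for s in shares_in_percent)

pre_merger  = [33, 32, 17, 11, 7]   # hypothetical: Verizon, AT&T, Sprint, T-Mobile, all others
post_merger = [33, 43, 17, 7]       # hypothetical: AT&T absorbs T-Mobile's share

print(hhi(pre_merger))    # 2572 -- above the 2,500 "highly concentrated" line
print(hhi(post_merger))   # 3276 -- an increase of roughly 700 points
```

On anything like these numbers, a nationally defined market is highly concentrated before the merger and dramatically more so after it, which is exactly why the fight over market definition matters.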

Let’s reacquaint everyone with a very basic but pertinent fact: 93% of the wireless users in the U.S. are served by the national carriers. This number (the proportion served by national as opposed to regional providers) has generally increased over the past decade, driven by demand-side requirements, mergers, and supply-side efficiencies. The choices of consumers have rendered a decisive verdict negating Downes’s claim. Whether it’s voice or data, people expect and want seamless national service; a small but significant segment wants transnational compatibility as well.

This expansion in the scope of service will only intensify as we move from a primarily voice-driven market to a data-driven one. Carriers that have to impose roaming charges and interconnection fees on their users will not be competitive. Nor will they be able to attract the interest of the cutting-edge handset manufacturers and service developers. Can you imagine Apple signing an iPhone exclusivity deal with Cricket?

It is no accident that the dominant mobile network operators have national brands and national footprints. Most Americans travel outside their metro areas at least once a month, and go places farther away than that at least once or twice a year. The 93% who choose a national carrier are rationally calculating that it pays not to have to guess the service-area limits of their provider. Of course, a highly budget-constrained segment of the market will accept limited local service for a lower price. To say that those smaller providers are in the same market as a T-Mobile or AT&T is not plausible. They occupy a niche. And if one allows a major merger like this on the grounds that these tiny players constitute a competitive alternative to the likes of AT&T, what will you say when the last of these local providers is gobbled up?

How about that “spectrum efficiency” argument? Downes, like the AT&T Corp., makes the same claim that the old AT&T made when it said there should be no microwave-based competition in long distance. As a matter of pure engineering efficiency, it is of course true that a single, optimizing planner can make better use of limited spectrum bands than multiple, competitive providers. But then, that argument applies to any and all carriers (an AT&T-Verizon merger, for example) and to any resource – that’s why it was used by the socialists of the 19th century to claim that capitalism was inherently wasteful and inefficient. Dynamic efficiencies of competition typically benefit the public more than a few allocative efficiencies. And there are plenty of ways for AT&T to expand network capacity without merging.

But there is an interesting twist to this line of reasoning. Notice how the “market is local” claim suddenly disappears. AT&T needs to take over a smaller national rival, according to Downes, so it can “accelerate deployment of nationwide mobile broadband using LTE technology, including expansion into rural areas.” Voila! Once we start talking about spectrum efficiencies and the promotion of universal service we take a nationwide perspective, not a local one. Doesn’t this obvious contradiction make anyone suspicious?

Notice also the ominous historical overtones of AT&T’s claim that it will be able to promote universal broadband service in rural areas if it has a stronger monopoly… er, if it gains consolidation efficiencies. Hey, rural areas don’t have congested spectrum, do they? What’s stopping AT&T from doing that now? If it needs help to do it, where are the subsidies going to come from? Would more market power make that possible? One cannot help but ask: Is AT&T doing this to get more spectrum, or is it trying to pull a neo-Theodore Vail and promise the government that it will subsidize rural access if it has more market power?

Bottom line: this is a step too far back toward the days of a single telephone company. If you support a competitive industry in which the public and legislators can reasonably rely on market forces as the primary industry regulator, this merger has to be stopped. On the other hand, if you welcome the growing pressures for regulating carriers and making them the policemen and chokepoints for network control, a bigger AT&T is just what the doctor ordered.

So a few weeks ago I hit up Adam Thierer, who has done and continues to do great work on all things regulation, for some materials for a project I was working on regarding the precautionary principle in the digital space. It turns out Adam was in the middle of his own Digital Precautionary Principle piece as well. I’ll take our simpatico as a sign that this phenomenon may actually be taking place and that I’m not paranoid. (If you haven’t read his earlier piece on TLF, please do so.)

While my piece on the DPP is coming, hopefully this week, I’ll start things off with my article in today’s RealClearMarkets.com on regulation and risk, and on how regulatory agencies engage in traditional “risk aversion behavior” to the detriment of the risk takers (aka entrepreneurs) in the private market. A smarter approach to regulation would weigh the benefits and the risks of NOT regulating as well. Too often the discussion starts from the premise that something has to be done and asks only how to minimize the negative impacts, rather than asking whether anything should be done at all, or whether we should instead rely on the trial-and-error mechanisms that markets use.

While the piece isn’t targeted directly at the technology industry, I think it can apply there just as much as any other industry.


Following AT&T’s announcement last month of its planned acquisition of T-Mobile USA, pundits and other oddsmakers have settled in for a long tour of duty. The media are already clogged with speculation, much of it uninformed, about the chances that the $39 billion deal—larger even than last year’s merger of Comcast and NBC Universal—will be approved.

Both the size of the deal and previous consolidation in the communications industry lead some analysts and advocates to doubt the transaction will or ought to survive the regulatory process.

Though the complex review process could take a year or perhaps even longer, I’m confident that the deal will go through—as it should. To see why, one need only look to previous merger reviews by the Department of Justice and the Federal Communications Commission, both of which must approve the AT&T deal.

Yesterday the FBI effectively [shut down](http://thehill.com/blogs/hillicon-valley/technology/156429-fbi-shuts-down-online-poker-sites) three of the largest online gambling sites and indicted their executives. From a tech policy perspective, these events highlight how central intermediary control is to the regulation of the internet.

Department of Justice lawyers were able to take down the sites using the same tools we’ve [seen DHS use](http://techland.time.com/2011/02/17/operation-protect-our-children-accidentally-shutters-84000-sites/) against alleged pirate and child porn sites: they seize the domain names. Because the sites are hosted overseas (where online gambling is legal), the feds can’t physically shut down the servers, so they do the next best thing. They get a seizure warrant for the domain names that point to the servers and [force the domain name registrars](http://pokerati.com/2011/04/15/poker-panic-11-update-on-domain-name-seizures/) to point them instead to a government IP address, such as [50.17.223.71](http://50.17.223.71). The most popular TLDs, including .com, .net, .org, and .info, have registrars that are American companies within U.S. jurisdiction.
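
To see concretely what a seizure looks like from the outside, here is a minimal Python sketch (standard library only; the domain name is hypothetical) that checks whether a name now resolves to the government’s banner address rather than to the site’s own servers.

```python
# A domain "seizure" works at the naming layer, not at the servers: the name
# is repointed so that ordinary DNS resolution returns the government's IP
# instead of the site's. The domain below is hypothetical; 50.17.223.71 is
# the government address cited in the post.

import socket

SEIZED_BANNER_IP = "50.17.223.71"

def resolves_to_seizure_banner(domain):
    """Return True if the domain currently resolves to the seizure-banner IP."""
    try:
        return socket.gethostbyname(domain) == SEIZED_BANNER_IP
    except socket.gaierror:
        return False  # name doesn't resolve at all

print(resolves_to_seizure_banner("seized-poker-site-example.com"))  # hypothetical name
```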

Payment processors are another intermediary point of control for the federal government. The indictments revealed yesterday relate to violations of the [Unlawful Internet Gambling Enforcement Act](http://www.firstamendment.com/site-articles/UIEGA/), which makes it illegal for banks and processors like Visa, MasterCard, and PayPal to let consenting adults use their money to gamble online. According to the DOJ, in order to let them bet, the poker sites “arranged for the money received from U.S. gamblers to be disguised as payments to hundreds of non-existent online merchants purporting to sell merchandise such as jewelry and golf balls.” ([PDF](http://www.wired.com/images_blogs/threatlevel/2011/04/scheinbergetalindictmentpr.pdf))

Now, imagine if there were no intermediaries.

[In my TIME.com Techland column today, I write about Bitcoin](http://techland.time.com/2011/04/16/online-cash-bitcoin-could-challenge-governments/), a completely decentralized and anonymous virtual currency that I think will be revolutionary.

>Because Bitcoin is an open-source project, and because the database exists only in the distributed peer-to-peer network created by its users, there is no Bitcoin company to raid, subpoena or shut down. Even if the Bitcoin.org site were taken offline and the Sourceforge project removed, the currency would be unaffected. Like BitTorrent, taking down any of the individual computers that make up the peer-to-peer system would have little effect on the rest of the network. And because the currency is truly anonymous, there are no identities to trace.

And if a P2P currency can make it so that there is no fiscal intermediary to regulate, how about a distributed DNS system so that there are no registrars to coerce? This is something Peter Sunde of Pirate Bay fame [has been working on](http://www.wired.co.uk/news/archive/2010-12/02/peter-sunde-p2p-dns). These ideas may sound radical and far-fetched, but if we truly want to see an online regime of “[denationalized liberalism](http://techliberation.com/2010/11/28/mueller%E2%80%99s-networks-and-states-classical-liberalism-for-the-information-age/),” as Milton Mueller puts it, then getting rid of the intermediaries in the net’s infrastructure might be the best path forward.

Again, check out [my piece in TIME](http://techland.time.com/2011/04/16/online-cash-bitcoin-could-challenge-governments/) for a thorough explanation of Bitcoin and its implications. I plan to be writing about it a lot more and devote some of my research time to it.

When legislation or regulation is what you rely on for privacy protection, your privacy protection relies on political consensus staying the same. When political consensus changes, your privacy can go away.

Witness the Department of Education’s proposed change to FERPA regulations—the Family Educational Rights and Privacy Act—to make more data about students available to more people. The privacy protections that have applied until now are unlikely to withstand the Education Department’s belief that using data about students is more important.

To anyone who relied on FERPA for privacy protection: Oops!

In the ongoing copyright debates, areas of common ground are seemingly few and far between. It’s easy to forget that not all approaches to combating copyright infringement are mired in controversy. One belief that unites many stakeholders across the spectrum is that more efforts are needed to educate Internet users about copyright. The Internet has spawned legions of amateur content creators, but not all of the content that’s being created is original. Indeed, a great deal of online copyright infringement stems from widespread ignorance of copyright law and its penalties.

For its part, Google yesterday unveiled “Copyright School” for YouTube users. As Justin Green explains on the official YouTube blog, users whose accounts have been suspended for allegedly uploading infringing content will be required to watch this video and then correctly answer questions about it before their account is reinstated.

Of course, boiling down the basics of copyright into a four-and-a-half-minute video is not an easy task, to put it mildly. (The authoritative treatment of copyright law, Nimmer on Copyright, fills an 11-volume treatise.) Copyright geeks and fans of “remix culture” will appreciate that Google’s video touches on fair use and includes links to in-depth resources for users to learn more about copyright. It will be interesting to see how Google’s effort influences the behavior of YouTube users and the incidence of repeat infringement.


While most folks have been obsessing over their income taxes the past few weeks, Jerry Brito and I have been obsessing about a non-tax: the universal service assessments on our phone bills.

More specifically, the Federal Communications Commission has asked for comments on its plan to gradually turn the current phone subsidy program in high-cost rural areas into a broadband subsidy program in high-cost rural areas. This opens up a big tangled can of worms.  Comments are due Monday.  We deal with two issues in our comment:

Definition of broadband: Thankfully, the FCC is asking for comments on its proposal to define broadband as 4 Mbps download/1 Mbps upload. This is an important decision with a big effect on the size of the program. The 4 Mbps definition more than doubles the number of households considered “unserved,” because it doesn’t count 3G wireless, slower DSL, or slower satellite service as broadband. It also raises the cost of the subsidies by requiring more expensive forms of broadband.

The definition fails to fit the factors the 1996 Telecom Act says the FCC is supposed to consider when determining what communications services qualify for universal service subsidies.  A download speed of 4 Mbps is not “essential” for online education; most online education providers say any broadband speed or even dialup is satisfactory. Nor is that speed “essential” for public safety; the biggest barrier to public safety broadband deployment is creation of an interoperable public safety network, which has nothing to do with USF subsidies. And the proposed speed is not subscribed to by a “substantial majority” of US households.  The most recent FCC statistics indicate that the fastest broadband download speed subscribed to by a “substantial majority” of US households is probably 768 kbps.

Definition of performance measures: Fifteen years after passage of the legislation that authorized the high-cost universal service subsidies, the FCC has proposed to measure the program’s outcomes. Actually, the FCC wants to measure intermediate outcomes like deployment, subscribership, and urban-rural rate comparability — not ultimate outcomes like expanded economic and social opportunities for people in rural areas. But it’s a start … provided that the FCC actually figures out how the subsidies have affected these intermediate outcomes, rather than just measuring trends and attributing any positive trends to the universal service subsidies. We have some suggestions on how to do this.
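
To give a flavor of that distinction with made-up numbers (a toy sketch, not the specific method proposed in our comment): credit the program only with the change in subscribership beyond what otherwise comparable, unsubsidized areas experienced over the same period.

```python
# Toy difference-in-differences illustration with made-up numbers.
# The raw trend shows subscribership rising in subsidized areas, but it also
# rose in comparable unsubsidized areas; the program's estimated effect is
# the difference between the two changes, not the raw trend itself.

subsidized   = {"before": 0.60, "after": 0.75}   # hypothetical adoption rates
unsubsidized = {"before": 0.58, "after": 0.70}   # hypothetical comparison areas

raw_trend = subsidized["after"] - subsidized["before"]                 # 0.15
counterfactual_trend = unsubsidized["after"] - unsubsidized["before"]  # 0.12
program_effect = raw_trend - counterfactual_trend                      # 0.03

print(f"Raw trend: {raw_trend:.2f}, estimated program effect: {program_effect:.2f}")
```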

Our full comment is available here.