On Forbes this morning, I have a long meditation on what Protect IP says about the current state of the Internet content wars. Copyright, patent, and trademark are under siege from digital technology, and for now at least are clearly losing the arms race.
The new bill isn’t exactly the nuclear option in the fight between the media industries and everyone else, but it does signal increased desperation.
POLITICO reports that a bill aimed at combating so-called “rogue websites” will soon be introduced in the U.S. Senate by Sen. Patrick Leahy. The legislation, entitled the PROTECT IP Act, will substantially resemble COICA (PDF), a bill that was reported unanimously out of the Senate Judiciary Committee late last year but did not reach a floor vote. As more details about the new bill emerge, we’ll likely have much more to say about it here on TLF.
I discussed my concerns about and suggested changes to the COICA legislation here last November; the PROTECT IP Act reportedly contains several new provisions aimed at mitigating concerns about the statute’s breadth and procedural protections. However, as Mike Masnick points out on Techdirt, the new bill — unlike COICA — contains a private right of action, although that right may not permit rights holders to disable infringing domain names. Also unlike COICA, the PROTECT IP Act would apparently require search engines to cease linking to domain names that a court has deemed to be “dedicated to infringing activities.”
For a more in-depth look at this contentious and complex issue, check out the panel discussion that the Competitive Enterprise Institute and TechFreedom hosted last month. Our April 7 event explored the need for, and concerns about, legislative proposals to combat websites that facilitate and engage in unlawful counterfeiting and copyright infringement. The event was moderated by Juliana Gruenwald of National Journal. The panelists included me, Danny McPherson of VeriSign, Tom Sydnor of the Association for Competitive Technology, Dan Castro of the Information Technology & Innovation Foundation, David Sohn of the Center for Democracy & Technology, and Larry Downes of TechFreedom.
A UK government report issued this week warns that climate change, in addition to threatening many different parts of everyday life, also threatens the Information and Communications Technology (ICT) industry. The report, available online, argues that regulatory measures must be taken to lessen the threat of rising temperatures and stormy weather, which it says would have adverse effects on the radio waves on which communications technology relies.
Specifically, the report’s authors assume that rising temperatures and rainy storms will interfere with radio waves. This presumes that the rising temperatures and rainy storms are themselves a foregone conclusion. For the sake of argument, let’s assume they are correct.
The study asserts that rising temperatures will cause cell towers to lose efficiency, but offers no scientific data to back up the claim. A skeptical reader might note, anecdotally, that cell towers are sited in all sorts of conditions all over the globe and are engineered for the temperatures in which they operate. Towers sited in Alaska are presumably built to handle the extreme cold; otherwise the cell provider would not waste money placing them there. Likewise, a tower sited in Arizona has to be designed for temperatures above 100 degrees. And at last count, wireless service is available in both Alaska and Arizona.
There are business, technical, and legal reasons why the order stands on unsteady ground, which the article looks at in detail.
The order, by encouraging artificial competition in nationwide mobile broadband, could also undermine arguments against AT&T’s merger with T-Mobile USA.
How so? If every regional, local, or rural carrier can offer their customers access to the nationwide coverage of Verizon, AT&T, or Sprint, on terms overseen for “commercial reasonableness” by the FCC, what’s the risk of consumer harm from combining AT&T and T-Mobile’s infrastructure? Indeed, doing so would create stronger nationwide 3G and 4G networks for other carriers to use. In that sense, it’s actually pro-competitive, and a pragmatic solution to spectrum exhaustion.
Believe it or not, this argument is being trotted out as part of the pressure from consumer activist groups against AT&T’s proposed acquisition of T-Mobile. The title of a Senate Judiciary Committee hearing on the merger, scheduled for May 11, even asks, “Is Humpty Dumpty Being Put Back Together Again?”
It seems that because the deal would leave AT&T and Verizon as the country’s two leading wireless service providers, the blogosphere is aflutter with worries that we are returning to the bad old days when AT&T pretty much owned all of the country’s telecom infrastructure.
It is true that AT&T and Verizon trace their history back to the antitrust case the Justice Department filed in 1974, which ended in the 1984 divestiture of then-AT&T’s 22 local telephone operating companies and their regrouping into seven regional holding companies.
In the decades since divestiture, there has been gradual consolidation, each time accompanied by an uproar that the Bell monopoly days were returning. But those claims miss the essential goal of the Bell break-up, and why, even though those seven “Baby Bell” companies have been consolidated into three, there’s no going back to the pre-divestiture AT&T.
Like Milton, I’m very worried about the political vulnerabilities that might arise if the wireless sector grows more concentrated. Still, I think it’s a big mistake to legitimize one repressive incarnation of coercive state power (antitrust intervention) to reduce the likelihood that another incarnation (information control) will intensify. This approach is not only defeatist, as Hance argues, but it also requires a tactical assessment that rests on several dubious assumptions.
First, Milton overestimates the marginal risk that the AT&T – T-Mobile deal will pave the way for an information control regime. The wireless market isn’t static; the disappearance of T-Mobile as an independent entity (which may well occur regardless of whether this deal goes through) hardly means we’re forever “doomed” to live with 3 nationwide wireless players. With major spectrum auctions likely on the horizon, and the possibility of existing spectrum holdings being combined in creative ways, the eventual emergence of one or more nationwide wireless competitors is quite possible — especially if, as skeptics of the AT&T – T-Mobile deal often argue, the wireless market underperforms in the years following the acquisition.
More importantly, network operators, like almost all Internet gatekeepers, face mounting pressure from their users not to facilitate censorship, surveillance, and repression. Case in point: AT&T is a leading member of the Digital Due Process coalition (to which I also belong) that’s urging Congress to substantially strengthen the 1986 federal statute that governs law enforcement access to private electronic communications. Consider that AT&T’s position on this major issue is at odds with the official position of the same Justice Department that’s currently reviewing the AT&T – T-Mobile deal. Would a docile, subservient network operator challenge its state overseers so publicly?
Is it “insane” for free market oriented thinkers to support the AT&T/T-Mobile merger? Although AT&T says consumers have five wireless providers to choose from in 18 of 20 major markets, Milton Mueller argues that 93 percent of wireless subscribers prefer a seamless, nationwide provider. If the merger is approved, there would be only three such providers.
A market dominated by three major providers is neither competitive nor noncompetitive as a definitional matter. Factual analysis is necessary to determine competitiveness.
And it may be premature to conclude that there is no competitive significance to the more than one hundred providers currently delivering nationwide service on the basis of voluntary roaming agreements that are common in the industry, or to assume that the FCC’s possible doubling of the amount of spectrum available for wireless services will not affect the structure of the industry.
The trouble with antitrust generally is the possibility that government will choose to protect weak or inefficient competitors, thus preventing the meaningful competition that attracts private investment and leads to innovation, better services, and lower prices. Antitrust is supposed to protect consumers, not politically influential producers. Although this sounds simple in theory, it can get confusing in practice. As free market oriented thinkers, we do not want government picking winners and losers.
So a few weeks ago I hit up Adam Thierer, who has done and continues to do great work on all things regulation, for some materials for a project I was working on regarding the precautionary principle in the digital space. Turns out Adam was in the middle of his own Digital Precautionary Principle piece as well. I’ll take our simpatico as a sign that this phenomenon may actually be taking place and that I’m not paranoid. (If you haven’t read his earlier piece on TLF, please do so.)
While my piece on the DPP is coming, hopefully this week, I’ll start things off with my article in today’s RealClearMarkets.com on regulation and risk, and on how regulatory agencies engage in traditional “risk aversion behavior” to the detriment of the risk takers (aka entrepreneurs) in the private market. A smarter approach to regulating would weigh both the benefits and the risks of NOT regulating. Too often the discussion assumes that something has to be done and asks only how to minimize the negative impacts, rather than asking whether we should be doing anything at all, or whether we should instead encourage the trial-and-error mechanisms that markets rely on.
While the piece isn’t targeted directly at the technology industry, I think it can apply there just as much as any other industry.
Following AT&T’s announcement last month of its planned acquisition of T-Mobile USA, pundits and other oddsmakers have settled in for a long tour of duty. The media is already clogged with speculation, much of it uninformed, about the chances that the $39 billion deal, larger even than last year’s merger of Comcast and NBC Universal, will be approved.
Both the size of the deal and previous consolidation in the communications industry lead some analysts and advocates to doubt the transaction will or ought to survive the regulatory process.
Though the complex review process could take a year or perhaps even longer, I’m confident that the deal will go through—as it should. To see why, one need only look to previous merger reviews by the Department of Justice and the Federal Communications Commission, both of which must approve the AT&T deal.
While most folks have been obsessing over their income taxes the past few weeks, Jerry Brito and I have been obsessing about a non-tax: the universal service assessments on our phone bills.
More specifically, the Federal Communications Commission has asked for comments on its plan to gradually turn the current phone subsidy program in high-cost rural areas into a broadband subsidy program in high-cost rural areas. This opens up a big tangled can of worms. Comments are due Monday. We deal with two issues in our comment:
Definition of broadband: Thankfully, the FCC is asking for comments on its proposal to define broadband as 4 Mbps download/1 Mbps upload. This is an important decision with a big effect on the size of the program. The 4 Mbps threshold more than doubles the number of households considered “unserved,” because it doesn’t count 3G wireless, slower DSL, or slower satellite service as broadband. It also raises the cost of the subsidies by requiring more expensive forms of broadband.
The definition fails to fit the factors the 1996 Telecom Act says the FCC is supposed to consider when determining what communications services qualify for universal service subsidies. A download speed of 4 Mbps is not “essential” for online education; most online education providers say any broadband speed or even dialup is satisfactory. Nor is that speed “essential” for public safety; the biggest barrier to public safety broadband deployment is creation of an interoperable public safety network, which has nothing to do with USF subsidies. And the proposed speed is not subscribed to by a “substantial majority” of US households. The most recent FCC statistics indicate that the fastest broadband download speed subscribed to by a “substantial majority” of US households is probably 768 kbps.
Definition of performance measures: Fifteen years after passage of the legislation that authorized the high-cost universal service subsidies, the FCC has proposed to measure the program’s outcomes. Actually, the FCC wants to measure intermediate outcomes like deployment, subscribership, and urban-rural rate comparability — not ultimate outcomes like expanded economic and social opportunities for people in rural areas. But it’s a start … provided that the FCC actually figures out how the subsidies have affected these intermediate outcomes, rather than just measuring trends and claiming the universal service subsidies caused any positive trends observed. We have some suggestions on how to do this.