December 2013

I didn’t have nearly as much time this year to review the steadily growing stream of information policy books that were released. The end-of-year lists I put together in the past were fairly comprehensive (see 2008, 2009, 2010, 2011 and 2012), but I got sidetracked with seven law review articles and an eBook project and had almost no time for book reviews, or even general blogging for that matter.

So, I’ve just listed some of the more notable titles from 2013 even though I didn’t find the time to describe them all. The first couple are the titles that I believe will have the most lasting influence on information technology policy debates. Needless to say, just because I believe that some of these titles will have an impact on policy going forward does not mean I endorse the perspectives or recommendations in any of them. And that would certainly be the case with my choice for most important Net policy book of the year, Ian Brown and Chris Marsden’s Regulating Code. Their book does a wonderful job mapping the unfolding universe of Internet “co-regulation” and “multi-stakeholderism,” but their defense of a more politicized information policy future leaves lovers of liberty like me utterly demoralized.

The same could be said of many other titles on the list. As I noted in concluding several reviews over the past year, liberty is increasingly a loser in Internet policy circles these days. And it’s not just neo-Marxist rants like McChesney’s Digital Disconnect or Lanier’s restatement of the Unabomber Manifesto, Who Owns the Future? The sad reality is that pretty much everybody these days has a pet peeve they want addressed through pure power politics because, you know, something must be done! The very term “Internet freedom” has already been grotesquely contorted into something akin to an open mandate for governments to meticulously plan virtually every facet of economic and social activity in the Information Age.

Anyway, despite that caveat, many interesting books were released in 2013 on an ever-expanding array of specific information policy topics. Here’s the list of everything that landed on my desk over the past year.

Retransmission consent came under attack again this month, and two long-awaited bills on the subject have finally been introduced—the Next Generation Television Marketplace Act (H.R. 3720) by Rep. Steve Scalise, and the Video CHOICE (Consumers Have Options in Choosing Entertainment) Act (H.R. 3719) by Rep. Anna G. Eshoo.

The American Cable Association’s Matthew M. Polka has reiterated his view that the process whereby cable and satellite TV providers negotiate with broadcasters for the right to retransmit broadcast signals is a “far cry from the free market,” and Alan Daley and Steve Pociask with the American Consumer Institute claim that retransmission consent jeopardizes the Broadcast Television Spectrum Incentive Auction.

As Jeff Eisenach pointed out at the Hudson Institute, “Congress created retransmission consent in 1992 to take the place of the property rights that it and the FCC abrogated. Prior to 1992, broadcasters weren’t permitted to charge anyone for retransmitting their signals.”

My response to Free State Foundation’s blog post, “Understanding the Un-Free Market for Retrans Consent Is the First Step for Reforming It”

The Free State Foundation (FSF) questioned my most recent blog post at RedState, which noted that the American Television Alliance’s (ATVA) arguments supporting FCC price regulation of broadcast television content are inconsistent with the arguments its largest members make against government intervention proposed by net neutrality supporters. FSF claimed that my post created a “false equivalency” between efforts to modify an existing regulatory regime and efforts to impose new regulations in a previously free market.

FSF’s “false equivalency” theory is a red herring that is apparently intended to distract from the substantive issues I raised. The validity of the economic arguments related to two-sided markets discussed in my blog post doesn’t depend on the regulatory status of the two-sided markets those arguments address. The notion that the existence of regulation in the video marketplace gives ATVA a free pass to say anything it wants without heed for intellectual consistency is absurd.

I suspect FSF knows this. Its blog post does not dispute that ATVA’s arguments at the FCC are inconsistent with the arguments its largest members make against net neutrality; in fact, FSF failed to address the ATVA petition at all. Though the FSF blog was ostensibly prompted by my post at RedState, FSF decided to “leave the merits of ATVA’s various proposals to others” (except me, apparently).

FSF’s decision to avoid the merits of ATVA’s arguments at the FCC (the subject of my blog post) raises the question: What was the FSF blog actually about? It appears FSF wrote the blog to (1) reiterate its previous (and misleading) analyses of the video programming market, and (2) argue that the Next Generation Television Marketplace Act “represents the proper direction” for reforming it.

To be clear, I haven’t previously addressed either issue. But, in the spirit of collegial dialogue initiated by FSF, I discuss them briefly in this blog.

In an op-ed at CNN, Ryan Calo argues that the real drone revolution will arrive when ordinary people can own and operate app-enabled drones. Rather than being dominated by a few large tech companies, drones should develop along the lines of the PC model: they should be purchasable by consumers and they should run third-party software or apps.

The real explosion of innovation in computing occurred when devices got into the hands of regular people. Suddenly consumers did not have to wait for IBM or Apple to write every software program they might want to use. Other companies and individuals could also write a “killer app.” Much of the software that makes personal computers, tablets and smartphones such an essential part of daily life now has been written by third-party developers.

[…]

Once companies such as Google, Amazon or Apple create a personal drone that is app-enabled, we will begin to see the true promise of this technology. This is still a ways off. There are certainly many technical, regulatory and social hurdles to overcome. But I would think that within 10 to 15 years, we will see robust, multipurpose robots in the hands of consumers.

I agree with Ryan that a world where only big companies can operate drones is undesirable. His vision of personal drones meshes well with my argument in Wired that we should see airspace as a platform for innovation.

This is why I am concerned about the overregulation of drones. Big companies like Amazon, Apple, and Google will always have legal departments that will enable them to comply with drone regulations. But will the rest of us be able to? There are economies of scale in regulatory compliance. If we’re not careful, we could regulate the little guy out of drones entirely, and then only big companies will be able to own and operate them. This is something I’m looking at closely in advance of the FAA proceedings on drones in 2014.

Everyone seems to be worried about Bitcoin’s carbon footprint lately. Last week, an article on Quartz claimed that Bitcoin miners are spending $17 million per day on electricity in order to reap $4.4 million worth of bitcoins. And yesterday, Pando Daily ran a piece that ominously warned about Bitcoin’s carbon footprint.

One problem with both of these pieces is that they seem to rely on electricity consumption estimates from blockchain.info. While this site is great for getting stats about the Bitcoin network, it’s not a great source for estimating electricity consumption. Blockchain.info clearly states that it is using an estimate of 650 watts per gigahash [per second, I assume] in its electricity calculations. While this may have been a good estimate of the efficiency of the Bitcoin network when the page was first created, the network has become much more efficient since then. Archive.org shows that the 650 W/GH/s figure was used on the earliest cached copy of the page, from December 2, 2011; yes, that is over two years ago.
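To see just how sensitive these headline numbers are to the efficiency figure, here is a minimal Python sketch of the back-of-the-envelope calculation at issue; the network hashrate, electricity price, and ASIC-era efficiency number below are illustrative assumptions on my part, not measured values.

```python
# A rough sketch of the electricity-cost estimate at issue. Every input
# here is an illustrative assumption, not a measured value.

def daily_electricity_cost(hashrate_ghs, watts_per_ghs, usd_per_kwh):
    """Estimate a daily electricity bill, in dollars, for a network
    running at hashrate_ghs GH/s with the given efficiency."""
    total_watts = hashrate_ghs * watts_per_ghs   # total power draw (W)
    kwh_per_day = total_watts * 24 / 1000.0      # W over 24 hours -> kWh
    return kwh_per_day * usd_per_kwh

HASHRATE_GHS = 6.0e6   # hypothetical network hashrate (~6 PH/s)
USD_PER_KWH = 0.15     # hypothetical average electricity price

# blockchain.info's stale 2011-era figure vs. an assumed ASIC-era figure.
for label, efficiency in [("650 W/GH/s (2011-era)", 650.0),
                          ("2 W/GH/s (ASIC-era, assumed)", 2.0)]:
    cost = daily_electricity_cost(HASHRATE_GHS, efficiency, USD_PER_KWH)
    print(f"{label}: ${cost:,.0f} per day")
```

With the stale 650 W/GH/s figure, the estimate lands around $14 million per day, in the neighborhood of the Quartz claim; with a plausible ASIC-era efficiency it falls by more than two orders of magnitude. The alarming headline numbers are driven almost entirely by that one outdated assumption.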

Robert Scoble, Startup Liaison Officer at Rackspace, discusses his recent book, Age of Context: Mobile, Sensors, Data and the Future of Privacy, co-authored by Shel Israel. Scoble believes that over the next five years we’ll see a tremendous rise in wearable computers, building on interest we’ve already seen in devices like Google Glass. Much like the desktop, laptop, and smartphone before it, the wearable computer, Scoble predicts, represents the next wave of groundbreaking innovation. Scoble answers questions such as: How will wearable computers help us live our lives? Will they become as common as the cellphone is today? Will we have to sacrifice privacy for these devices to better understand our preferences? How will sensors in everyday products help companies improve the customer experience?

Gordon Crovitz has an excellent column in today’s Wall Street Journal in which he accurately diagnoses the root cause of our patent litigation problem: the Federal Circuit’s support for extensive patenting in software.

Today’s patent mess can be traced to a miscalculation by Jimmy Carter, who thought granting more patents would help overcome economic stagnation. In 1979, his Domestic Policy Review on Industrial Innovation proposed a new Federal Circuit Court of Appeals, which Congress created in 1982. Its first judge explained: “The court was formed for one need, to recover the value of the patent system as an incentive to industry.”

The country got more patents—at what has turned out to be a huge cost. The number of patents has quadrupled, to more than 275,000 a year. But the Federal Circuit approved patents for software, which now account for most of the patents granted in the U.S.—and for most of the litigation. Patent trolls buy up vague software patents and demand legal settlements from technology companies. Instead of encouraging innovation, patent law has become a burden on entrepreneurs, especially startups without teams of patent lawyers.

I was pleased that Crovitz cites my new paper with Alex Tabarrok:

A system of property rights is flawed if no one can know what’s protected. That’s what happens when the government grants 20-year patents for vague software ideas in exchange for making the innovation public. In a recent academic paper, George Mason researchers Eli Dourado and Alex Tabarrok argued that the system of “broad and fuzzy” software patents “reduces the potency of search and defeats one of the key arguments for patents, the dissemination of information about innovation.”

Current legislation in Congress makes changes to patent trial procedure in an effort to reduce the harm caused by patent trolling. But if we really want to solve the trolling problem once and for all, and to generally have a healthy and innovative patent system, we need to get at the problem of low-quality patents, especially in software. The best way to do that is to abolish the Federal Circuit, which has consistently undermined limits on patentable subject matter.

Join TechFreedom on Thursday, December 19, the 100th anniversary of the Kingsbury Commitment, AT&T’s negotiated settlement of antitrust charges brought by the Department of Justice that gave AT&T a legal monopoly in most of the U.S. in exchange for a commitment to provide universal service.

The Commitment is hailed by many not just as a milestone in the public interest but as the bedrock of U.S. communications policy. Others see the settlement as the cynical exploitation of lofty rhetoric to establish a tightly regulated monopoly — and the beginning of decades of cozy regulatory capture that stifled competition and strangled innovation.

The decision to forgo distribution is referred to as a “blackout” in the cable context and “blocking” in the Internet context, but the economic considerations affecting such negotiations are substantially the same.

The American Television Alliance (ATVA), a coalition composed primarily of cable and satellite TV operators, is using the playbook of net neutrality proponents in a bid to convince the Federal Communications Commission (FCC) to regulate prices for broadcast television content. The goal of ATVA’s cable and satellite members is to increase their profit margins by convincing the government to artificially lower the cost of programming they resell to consumers. I suspect the goal of ATVA’s non-profit members, e.g., Public Knowledge and the New America Foundation, is to solidify the FCC’s flawed rationale for adopting net neutrality rules in 2010, which imposed restrictions on market arrangements between Internet Service Providers (ISPs) and Internet content providers without finding a market failure.

Many of ATVA’s cable members are also ISPs that have routinely argued against the imposition of net neutrality regulations in the market for Internet services. By supporting ATVA, these same companies appear to have abandoned the intellectual foundation for their opposition to net neutrality. Are they now signaling their intent to embrace net neutrality regulation of the Internet?

It’s encouraging to see more congressional movement in repurposing federal spectrum for commercial use. This week, a bill rewarding federal agencies for ending or moving their wireless operations passed a House committee. The bipartisan Federal Spectrum Incentive Act of 2013 allows agencies to benefit when they voluntarily give up their spectrum for FCC auction.

In the past, an agency could receive a portion of auction proceeds, but only to compensate it for relocating its systems. Agencies complained, sensibly, that this arrangement did little to encourage them to give up spectrum. Federal agencies had to go through the hassle of modifying their wireless equipment and sharing spectrum with another agency but were left no better off than before. In some cases, the complications of sharing spectrum made them worse off, so there was a risk of downside and no upside.

This House bill provides that an agency can keep 1% of auction proceeds in addition to relocation costs. With this additional carrot, the hope is that agencies will be more willing to modify their equipment and make room for mobile broadband carriers.

The bill is a good start, but I think it’s a little too restrictive. A one percent claim on auction receipts seems insufficient to induce dramatically improved agency participation. Given how poorly federal agencies use spectrum, Congress should be doing much more to force agencies to justify their spectrum usage. Additionally, how agencies can use that 1% benefit seems too limited. The bill allows the funds to be used (1) to offset sequestration cuts and (2) to compensate other agencies if they agree to share spectrum. Some journalists are reporting that agencies can use the funds to expand existing programs, but I don’t see that language in the proposed bill. It wouldn’t be a bad idea, though, to have fewer restrictions on the payments, since that would likely increase agency participation.
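To put that one percent in perspective, here is a toy calculation, with purely hypothetical dollar figures, comparing an agency’s net position under the old reimbursement-only rules with the bill’s incentive payment.

```python
# Toy comparison of an agency's payoff for vacating a band under the old
# reimbursement-only rules vs. the proposed 1% incentive. All dollar
# figures are hypothetical.

AUCTION_PROCEEDS = 2.0e9   # hypothetical winning bids for the band
INCENTIVE_SHARE = 0.01     # the bill's 1% share of auction proceeds

# Old rules: relocation costs are reimbursed, so the agency nets zero
# (or worse, if sharing spectrum creates uncompensated headaches).
old_rules_net = 0.0

# Proposed bill: relocation costs are still covered, plus 1% of proceeds.
new_rules_net = INCENTIVE_SHARE * AUCTION_PROCEEDS

print(f"Old rules:     agency nets ${old_rules_net:,.0f}")
print(f"Proposed bill: agency nets ${new_rules_net:,.0f}")
```

Even on a hypothetical $2 billion auction, the agency’s upside is only $20 million, which suggests why a one percent share may be too small a carrot to meaningfully change agency behavior.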

Further Reading:

See my Mercatus paper on the subject of repurposing federal spectrum.