Today I’ll be testifying at a Senate Commerce Committee hearing on online privacy and commercial data collection issues. In my remarks, I make three primary points:

  1. First, no matter how well-intentioned, restrictions on data collection could negatively impact the competitiveness of America’s digital economy, as well as consumer choice.
  2. Second, it is unwise to place too much faith in any single, silver-bullet solution to privacy, including “Do Not Track,” because such schemes are easily evaded or defeated and often fail to live up to their billing.
  3. Finally, with those two points in mind, we should look to alternative and less costly approaches to protecting privacy that rely on education, empowerment, and targeted enforcement of existing laws. Serious and lasting privacy protection requires a layered, multifaceted approach incorporating many solutions.

The testimony also contains four appendices elaborating on some of these themes.

Down below, I’ve embedded my testimony, a list of 10 recent essays I’ve penned on these topics, and a video in which I explain “How I Think about Privacy” (which was taped last summer at an event at the University of Maine’s Center for Law and Innovation). Finally, the best summary of my work on these issues can be found in this recent Harvard Journal of Law & Public Policy article, “The Pursuit of Privacy in a World Where Information Control is Failing.” (This is the first of two complementary law review articles I will be releasing this year dealing with privacy policy. The second, which will be published early this summer by the George Mason University Law Review, is entitled “A Framework for Benefit-Cost Analysis in Digital Privacy Debates.”)

The Cato Institute is seeking a “researcher to support a campaign to educate the public and policymakers on the implications of biometric identification systems related to immigration policy reforms.”

The better applicants will know how the many different governmental systems work—legislation, appropriation, regulation, procurement, grant-making, and so on—and will have the zeal to chase down all the ways the national ID builders are using them to advance their cause.

Immigration reform legislation in the Senate that features a vast expansion of E-Verify is yet another reason to join the fight against having a national ID in the United States.

Remember all the businesses, internet techies and NGOs who were screaming about an “ITU takeover of the Internet” a year ago? Where are they now? Because this time, we actually need them.

May 14 – 21 is Internet governance week in Geneva. We have declared it so because there will be three events that week for the global community concerned with global internet governance. On May 14-16, the International Telecommunication Union (ITU) holds its World Telecommunication Policy Forum (WTPF). This year it is devoted to internet policy issues. With the polarizing results of the Dubai World Conference on International Telecommunications (WCIT) still reverberating, the meeting will revisit debates about the role of states in Internet governance. Next, on May 17 and 18, the Graduate Institute of International and Development Studies and the Global Internet Governance Academic Network (GigaNet) will hold an international workshop on “The Global Governance of the Internet: Intergovernmentalism, Multi-stakeholderism and Networks.” Here, academics and practitioners will engage in what should be a more intellectually substantive debate on modes and principles of global Internet governance.

Last but not least, the UN Internet Governance Forum will hold its semi-annual consultations to prepare the program and agenda for its next meeting in Bali, Indonesia. The IGF consultations are relevant because, to put it bluntly, it is the failure of the IGF to bring governments, the private sector and civil society together in a commonly agreed platform for policy development that is partly responsible for the continued tension between multistakeholder and intergovernmental institutions. Whether the IGF can get its act together and become more relevant is one of the key issues going forward.


Today, Jerry Brito, Adam Thierer and I filed comments on the FAA’s proposed privacy rules for “test sites” for the integration of commercial drones into domestic airspace. I’ve been excited about this development ever since I learned that Congress had ordered the FAA to complete the integration by September 2015. Airspace is a vastly underutilized resource, and new technologies are just now becoming available that will enable us to make the most of it.

In our comments, we argue that airspace, like the Internet, could be a revolutionary platform for innovation:

Vint Cerf, one of the “fathers of the Internet,” credits “permissionless innovation” for the economic benefits that the Internet has generated. As an open platform, the Internet allows entrepreneurs to try new business models and offer new services without seeking the approval of regulators beforehand.

Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators.

And in Wired today, I argue that preemptive privacy regulation is unnecessary and unwise:

Regulation at this juncture requires our over-speculating about which types of privacy violations might arise. Since many of these harms may never materialize, pre-emptive regulation is likely to overprotect privacy at the expense of innovation.

Frankly, it wouldn’t even work. Imagine if we had tried to comprehensively regulate online privacy before allowing commercial use of the internet. We wouldn’t have even known how to. We wouldn’t have had the benefit of understanding how online commerce works, nor could we have anticipated the rise of social networking and related phenomena.

I expect we’ll all be hearing more about commercial drones in the near future. See Jerry’s piece in Reason last month or Larry Downes’s great post at the HBR blog for more.

The DOJ’s recommendation would likely reduce the amount of revenue produced by the incentive auction and risk leaving the public safety network unfunded (as the economist who led the design of the most successful auction in FCC history will explain in this webinar on Thursday). The unsubstantiated, speculative increase in commercial competition the DOJ says could occur if the FCC picks winners and losers in the incentive auction is a poor justification for continuing to deny our nation’s first responders the network they need to protect the safety of every American.

Beyond enforcing the antitrust laws, the Antitrust Division of the Department of Justice (DOJ) advocates for competition policy in regulatory proceedings initiated by Executive Branch and independent agencies, including the Federal Communications Commission (FCC). In this role, the DOJ works with the FCC on mergers involving communications companies and occasionally provides input in other FCC proceedings. The historical reputation of the DOJ in this area has been one of impartial engagement and deliberate analysis based on empirical data. The DOJ’s recent filing (DOJ filing) on mobile spectrum aggregation jeopardizes that reputation, however, by recommending that the FCC “ensure” Sprint Nextel and T-Mobile obtain a nationwide block of mobile spectrum in the upcoming broadcast incentive auction.

The new “findings” in the DOJ filing fail to cite any factual record and are inconsistent with the DOJ’s factual findings in recent merger proceedings, which contain extensive factual records. The DOJ filing blithely relies on a discriminatory evidentiary presumption to insinuate that Verizon and AT&T are “warehousing” spectrum, and then uses that presumption to support a proposed remedy that bears no rational relationship to factual findings the DOJ has actually made. The absence of any empirical evidence supporting the relevant conclusions in the DOJ filing gives it the appearance of a political document rather than a deliberative work product crafted to the Justice Department’s traditionally substantive and impartial standards. The FCC, an independent agency that prides itself on being fact-based and data-driven, should give this screed no weight.

Paul J. Heald, professor of law at the University of Illinois Urbana-Champaign, discusses his new paper “Do Bad Things Happen When Works Enter the Public Domain? Empirical Tests of Copyright Term Extension.”

The international debate over copyright term extension for existing works turns on the validity of three empirical assertions about what happens to works when they fall into the public domain. Heald discusses a study he carried out with Christopher Buccafusco that found all three assertions suspect. In the study, they show that audiobooks made from public domain bestsellers are significantly more available than those made from copyrighted bestsellers. They also demonstrate that recordings of public domain and copyrighted books are of equal quality.

Since copyrighted works will once again begin to fall into the public domain starting in 2018, Heald says, it’s likely that content owners will ask Congress for yet another term extension. He argues that his empirical findings suggest it should not be granted.


A couple of weeks ago I wrote that bitcoin’s valuation doesn’t really matter for the currency to effectively function as a medium of exchange. Now comes word from none other than the proprietor of the notorious Silk Road encrypted black market that indeed the recent wild volatility has not affected the transactions on his site. As Andy Greenberg reports:

In a rare (and brief) public statement sent to me, the Dread Pirate Roberts (DPR) said that despite Silk Road’s reliance on Bitcoin, commerce on the site hasn’t been seriously hurt by Bitcoin’s wild rise and fall. “Bitcoin’s foundation, its algorithms and network, don’t change with the exchange rate,” the pseudonymous site administrator writes. “It is just as important to the functioning of Silk Road at $1 as it is at $1,000. A rapidly changing price does have some effect, but it’s not as big as you might think.”

Silk Road’s customers, after all, aren’t generally interested in Bitcoin’s worth as an investment vehicle, so much as in how it makes it possible to privately buy heroin, cocaine, pills or marijuana. They use Bitcoin because it’s not issued or stored by banks and doesn’t require any online registrations, and thus offers a certain amount of anonymity. …

Silk Road has built-in protections against Bitcoin’s spikes and crashes. Although purchases on Silk Road can only be made with Bitcoin, sellers on the site have the option to peg their prices to the dollar, automatically adjusting them based on Bitcoin’s current exchange rate as defined by the central Bitcoin exchange Mt. Gox. To insulate those sellers against Bitcoin fluctuations, the eBay-like drug site also offers a hedging service. Sales are held in escrow until buyers receive their orders via mail, and vendors are given the choice to turn on a setting that pegs the escrow’s value to the dollar, with Silk Road itself covering any losses or taking any gains from Bitcoin’s swings in value that occur while the drugs are in transit. So while Bitcoin’s crash last week from $237 to less than $100 means that the Dread Pirate Roberts was likely forced to pay out much of the extra gains Silk Road made from Bitcoin’s rise, most of his sellers were protected from those price changes and continued to trade their drugs for Bitcoins despite the currency’s plummeting value.

What this shows is that Silk Road is separating the “unit of account” function of money from the “medium of exchange” function. Prices are denominated in dollars (as a unit of account) but payments are made in bitcoin (as a medium of exchange). Hedging is used to smooth out volatility.
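
To make that split concrete, here is a minimal Python sketch of the dollar-pegged escrow mechanics the Greenberg excerpt describes. The function names and figures are hypothetical illustrations of the idea, not Silk Road’s actual implementation:

```python
# Hypothetical sketch: prices are denominated in dollars (unit of account),
# but payment and escrow happen in bitcoin (medium of exchange).

def btc_amount(usd_price, usd_per_btc):
    """Convert a dollar-denominated price into the bitcoin amount actually paid."""
    return usd_price / usd_per_btc

def settle_pegged_escrow(usd_price, rate_at_purchase, rate_at_delivery):
    """Pay the seller the BTC equivalent of the original dollar price at delivery;
    the marketplace absorbs (or pockets) the difference caused by the exchange-rate
    move while the goods were in transit."""
    btc_paid = btc_amount(usd_price, rate_at_purchase)  # buyer's payment, held in escrow
    btc_owed = btc_amount(usd_price, rate_at_delivery)  # preserves the seller's dollar value
    marketplace_delta = btc_paid - btc_owed             # negative when BTC falls: site covers the loss
    return btc_owed, marketplace_delta

# Example using the swing from the excerpt: a $100 order placed when BTC
# traded at $237, delivered after the crash to $100.
seller_payout, site_delta = settle_pegged_escrow(100.0, 237.0, 100.0)
print(f"Seller receives {seller_payout:.4f} BTC, still worth $100")
print(f"Marketplace covers a {-site_delta:.4f} BTC shortfall")
```

The seller’s payout is settled in dollar terms, so only the marketplace carries exchange-rate risk while the goods are in transit; that is what insulates vendors from bitcoin’s volatility.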


The Information Economy Project at the George Mason University School of Law is hosting a conference tomorrow, Friday, April 19. The conference title is “From Monopoly to Competition or Competition to Monopoly? U.S. Broadband Markets in 2013.” There will be two morning panels featuring discussion of competition in the broadband marketplace and the social value of “ultra-fast” broadband speeds.

We have a great lineup, including keynote addresses from Commissioner Joshua Wright of the Federal Trade Commission and Dr. Robert Crandall of the Brookings Institution.

The panelists include:

Eli Noam, Columbia Business School

Marius Schwartz, Georgetown University, former FCC Chief Economist

Babette Boliek, Pepperdine University School of Law

Robert Kenny, Communications Chambers (U.K.)

Scott Wallsten, Technology Policy Institute

The panels will be moderated by Kenneth Heyer (Federal Trade Commission) and Gus Hurwitz (University of Pennsylvania), respectively. A continental breakfast will be served at 8:00 am, and a buffet lunch will be provided. We expect to adjourn at 1:30 pm. You can find an agenda here and can RSVP here. Space is limited and we expect a full house, so those interested are encouraged to register as soon as possible.

Why did the government impose a completely different funding mechanism on the Internet than on the Interstate Highway System? There is no substantive distinction between the shared use of local infrastructure by commercial “edge” providers on the Internet and the shared use of local infrastructure by commercial “edge” providers (e.g., FedEx) on the highways.

In Part 1 of this post, I described the history of government intervention in the funding of the Internet, which has been used to exempt commercial users from paying for the use of local Internet infrastructure. The most recent intervention, known as “net neutrality,” was ostensibly intended to protect consumers, but in practice it requires that consumers bear all the costs of maintaining and upgrading local Internet infrastructure while content and application providers pay nothing. This consumer-funded commercial subsidy model is the opposite of the approach the government took when funding the Interstate Highway System: the federal government makes commercial users pay more for their use of the highways than consumers do. This fundamental difference in approach is why net neutrality advocates abandoned the “information superhighway” analogy promoted by the Clinton Administration during the 1990s.

The US Patent and Trademark Office is starting to recognize that it has a software patent problem and is soliciting suggestions for how to improve software patent quality. A number of parties such as Google and EFF have filed comments.

I am on record against the idea of patenting software at all. I think it is too difficult for programmers, as they are writing code, to constantly check whether they are violating existing software patents, which are not, after all, easy to identify. Furthermore, any complex piece of software is likely to violate hundreds of patents owned by competitors, which makes license negotiation costly and far from straightforward.

However, given that the abolition of software patents seems unlikely in the medium term, there are some good suggestions in the Google and EFF briefs. They both note that the software patents granted to date have been overbroad, equivalent to patenting headache medicine in general rather than patenting a particular molecule for use as a headache drug.

This argument highlights one significant problem with patent systems generally: they depend on extremely high-quality review of patent applications to function effectively. If we’re going to have patents for software, or anything else, we need to take the review process seriously. Consequently, I would favor whatever increase in patent application fees is necessary to ensure that the quality of review is rock solid. Give the USPTO the resources it needs to comply with existing patent law, which seems to preclude such overbroad patents. Simply applying patent law consistently would reduce some of the problems with software patents.

Higher fees would also function as a Pigovian tax on patenting, disincentivizing patent protection for minor innovations. This is desirable because the licensing cost of these minor innovations is likely to exceed the social benefits the patents generate, if any.
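
As a toy illustration of that screening effect (the values below are hypothetical, not empirical estimates): if inventors file only when a patent’s expected private value exceeds the filing fee, then raising the fee prunes the lowest-value patents first.

```python
# Hypothetical screening model for a Pigovian filing fee: an inventor
# files only when the patent's expected private value exceeds the fee,
# so higher fees screen out minor innovations first.

inventions = [50, 300, 1_500, 20_000, 250_000]  # assumed private values, in dollars

for fee in (100, 1_000, 10_000):
    filed = [value for value in inventions if value > fee]
    print(f"fee ${fee:>6,}: {len(filed)} of {len(inventions)} filed -> {filed}")
```

On these assumed numbers, a $10,000 fee screens out the three lowest-value filings while leaving the genuinely valuable inventions patentable, which is the intended effect of the tax.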

While major patent reform remains preferable, many of the steps proposed by Google and EFF would be good marginal policy improvements. I hope the USPTO considers these proposals carefully.