April 2011

User-driven websites — also known as online intermediaries — frequently come under fire for disabling user content due to bogus or illegitimate takedown notices. Facebook is at the center of the latest controversy involving a bogus takedown notice. On Thursday morning, the social networking site disabled Ars Technica’s page after receiving a DMCA takedown notice alleging the page contained copyright infringing material. While details about the claim remain unclear, given that Facebook restored Ars’s page yesterday evening, it’s a safe bet that the takedown notice was without merit.

Understandably, Ars Technica wasn’t exactly pleased that its Facebook page — one of its top sources of incoming traffic — was shut down for seemingly no good reason. Ars was particularly disappointed by how Facebook handled the situation. In an article posted yesterday (and updated throughout the day), Ars co-founder Ken Fisher and senior editor Jacqui Cheng chronicled their struggle in getting Facebook to simply discuss the situation with them and allow Ars to respond to the takedown notice.

Facebook took hours to respond to Ars’s initial inquiry, and didn’t provide a copy of the takedown notice until the following day. Several other major tech websites, including ReadWriteWeb and TheNextWeb, also covered the issue, noting that Ars Technica is just the latest in a series of websites to have had its Facebook page wrongly disabled. In a follow-up article posted today, Ars elaborated on what happened and offered some tips to Facebook on how it could have better handled the situation.

It’s totally fair to criticize how Facebook deals with content takedown requests. Ars is right that the company could certainly do a much better job of handling the process, and Facebook will hopefully re-evaluate its procedures in light of this widely publicized snafu. In calling out Facebook’s flawed approach to dealing with takedown requests, however, Ars Technica doesn’t do justice to the larger, more fundamental problem of bogus takedown notices.

Continue reading →

When it comes to information control, everybody has a pet issue and everyone will be disappointed when law can’t resolve it. I was reminded of this truism while reading a provocative blog post yesterday by computer scientist Ben Adida entitled “(Your) Information Wants to be Free.” Adida’s essay touches upon an issue I have been writing about here a lot lately: the complexity of information control — especially in the context of individual privacy. [See my essays on “Privacy as an Information Control Regime: The Challenges Ahead,” “And so the IP & Porn Wars Give Way to the Privacy & Cybersecurity Wars,” and this recent FTC filing.]

In his essay, Adida observes that:

In 1984, Stewart Brand famously said that information wants to be free. John Perry Barlow reiterated it in the early 90s, and added “Information Replicates into the Cracks of Possibility.” When this idea was applied to online music sharing, it was cool in a “fight the man!” kind of way. Unfortunately, information replication doesn’t discriminate: your personal data, credit cards and medical problems alike, also want to be free. Keeping it secret is really, really hard.

Quite right. We’ve been debating the complexities of information control in the Internet policy arena for the last 20 years and I think we can all now safely conclude that information control is hugely challenging regardless of the sort of information in question. As I’ll note below, that doesn’t mean control is impossible, but the relative difficulty of slowing or stopping information flows of all varieties has increased exponentially in recent years.

But Adida’s more interesting point is the one about the selective morality at play in debates over information control. That is, people generally expect or favor information freedom in some arenas, but then get pretty upset when they can’t crack down on information flows elsewhere. Indeed, some people can get downright religious about the whole “information-wants-to-be-free” thing in some cases and then, without missing a beat, turn around and talk like information totalitarians in the next breath. Continue reading →

Thanks to all of you who have sent comments about the new cybersecurity paper Tate Watkins and I wrote. It’s been getting a good reception.

James Fallows of *The Atlantic*, for example, [noted yesterday](http://www.theatlantic.com/technology/archive/2011/04/two-fascinating-exhibits-on-data-security/237891/) that the paper “represents a significant libertarian-right voice of concern about this latest expansion of the permanent national-security surveillance state,” and that while we shouldn’t underestimate cyber risks, “the emphasis on proportionate response, and the need to guard other values, comes at the right time. We should debate these threats rather than continuing to cower.”

Today I wanted to bend your ears (or eyes, I guess) with another excerpt. The subject today is the “if you only knew what we know” rationale for government action. I’m happy to see that Sen. Sheldon Whitehouse has [a new bill](http://www.fas.org/blog/secrecy/2011/04/cyber_secrecy.html) getting right at the problem of over-classification, which allows leaders to get away with “just trust us” rhetoric. The excerpt is after the jump.
Continue reading →

I’ve written a long article this morning for CNET (see “Privacy panic debate: Whose data is it?”) on the discovery of the iPhone location tracking file and the utterly predictable panic response that followed. Its life cycle follows precisely the crisis model Adam Thierer has so frequently and eloquently traced, most recently here on TLF.

In particular, the CNET article takes a close and serious look at Richard Thaler’s column in Saturday’s New York Times, “Show us the data. (It’s ours, after all.)” Thaler uses the iPhone scare as an occasion to propose a regulatory fix to the “problem” of users being unable to access, in “computer-friendly form,” copies of the information “collected on” them by merchants. Continue reading →

Today my colleague [Tate Watkins](http://shortsentences.org/) and I are releasing [a new working paper on cybersecurity policy](http://mercatus.org/publication/loving-cyber-bomb-dangers-threat-inflation-cybersecurity-policy). Please excuse my patently sleep-deprived mug while I describe it here:



Over the past few years there has been a steady drumbeat of alarmist rhetoric coming out of Washington about potential catastrophic cybersecurity threats. For example, at a Senate Armed Services Committee hearing last year, Chairman Carl Levin said that “cyberweapons and cyberattacks potentially can be devastating, approaching weapons of mass destruction in their effects.” Proposed responses include increased federal spending on cybersecurity and the regulation of private network security practices.

The rhetoric of “[cyber doom](http://mercatus.org/publication/beyond-cyber-doom)” employed by proponents of increased federal intervention, however, lacks clear evidence of a serious threat that can be verified by the public. As a result, the United States may be witnessing a bout of threat inflation.

Threat inflation, [according to Thrall and Cramer](http://books.google.com/books?id=EzUtuTOIfTEC&lpg=PP1&ots=3AQmVD2Slb&dq=AMERICAN%20FOREIGN%20POLICY%20AND%20THE%20POLITICS%20OF%20FEAR&pg=PP1#v=onepage&q&f=false), is a concept in political science that refers to “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.” Different actors—including members of Congress, defense contractors, journalists, policy experts, academics, and civilian, military, and intelligence officials—will each have their own motives for contributing to threat inflation. When a threat is inflated, the marketplace of ideas on which a democracy relies to make sound judgments—in particular, the media and popular debate—can become overwhelmed by fallacious information. The result can be unwarranted public support for misguided policies.

The run-up to the Iraq War illustrates the dynamic of threat inflation. After 9/11, the Bush Administration decided to invade Iraq to oust Saddam Hussein. Lacking any clear casus belli, the administration sought popular and congressional support for war by promoting several rationales that ultimately proved baseless.
Continue reading →

On the podcast this week, Jane Yakowitz, a visiting assistant professor at Brooklyn Law School, discusses her new paper about data anonymization and privacy regulation, Tragedy of the Data Commons. Citing privacy concerns, legal scholars and privacy advocates have recently called for tighter restrictions on the collection and dissemination of public research data. Yakowitz first explains why these concerns are overblown, arguing that scholars have misinterpreted the risks of anonymized data sets. She then discusses the social value of the data commons, noting the many useful studies that likely wouldn’t have been possible without it. Finally, she suggests why the data commons is undervalued, citing the disparate reactions to similar statistical releases by OkCupid and Facebook, and offers a few policy recommendations for preserving it.

To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?

It is disappointing that the Obama administration, which campaigned against George W. Bush’s poor record on civil liberties protection, is pursuing a course that aims to limit Fourth Amendment rights when it comes to the use of location tracking technology.

The Washington Post reported yesterday that the Obama administration has petitioned the U.S. Supreme Court to overturn a ruling last year by the U.S. Court of Appeals for the D.C. Circuit that forces police to obtain a warrant before tracking the movements of a suspect using a global positioning device.

The petition is significant because lower courts have split on the issue, and the Supreme Court, if it takes the case, could establish a uniform rule going forward. In the case at hand, United States v. Antoine Jones, the D.C. Circuit sided with the defendant, who was accused of being a major cocaine dealer, and overturned his conviction, ruling that D.C. police violated the Fourth Amendment by using a GPS device to track Jones’ movements for a month without a warrant. Appellate courts in New York and California, on the other hand, have ruled in favor of police in similar cases.

Continue reading →

Here is [a chart](http://bitcoincharts.com/charts/mtgoxUSD#rg180ztgCzm1g10zm2g25) of the Bitcoin-dollar exchange rate for the past six months. The arrow marks the date [my column on the virtual currency](http://techland.time.com/2011/04/16/online-cash-bitcoin-could-challenge-governments/) was published on TIME.com. The day after that piece ran, the Bitcoin exchange rate [reached an all-time high of $1.19](http://www.bitcoinnews.com/post/4703632837/daily-2011-04-17). Yesterday, just over a week later, [it was pushing $2](http://www.bitcoinnews.com/post/4897524633/daily2011-04-24).

A wiser fella than myself once said that correlation is not causation, and no doubt my article was just one contributing factor in Bitcoin’s recent run-up. Bitcoin is simply getting more and more mainstream attention, and with that attention come more speculators and more speculation about mainstream adoption. The chart above lends a lot of credence to Tim Lee’s [bubble critique](http://timothyblee.com/2011/04/18/the-bitcoin-bubble/), so I wanted to make sure I wasn’t giving that argument short shrift.

There may well be a Bitcoin bubble, and it may even be likely. But again, I think that misses the greater point about what Bitcoin represents. Bitcoin may be tulips and the bubble may burst, but the innovation—distributed, anonymous payments—is here to stay. Napster went bust, but its innovation presaged BitTorrent, which endures. Could the Bitcoin project itself go bust? Certainly, but the solution to the double-spending problem I’ve been talking about will be taken up and improved by others, just as others picked up and ran with Napster’s innovation.
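For readers who haven’t followed the technical details, here is a minimal, hypothetical sketch of the general idea behind that double-spending fix: a toy proof-of-work chain, not Bitcoin’s actual protocol, with made-up transactions and an arbitrary difficulty target. Each block commits to the hash of the block before it, so undoing an old payment means redoing the work for that block and every block mined after it.

```python
import hashlib

DIFFICULTY = "0000"  # toy target: a valid block hash must start with four zero hex digits

def mine_block(prev_hash: str, transactions: str) -> tuple[str, int]:
    """Search for a nonce whose hash meets the difficulty target (proof of work)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}|{transactions}|{nonce}".encode()).hexdigest()
        if digest.startswith(DIFFICULTY):
            return digest, nonce
        nonce += 1

# Each block commits to the previous block's hash, so rewriting an old payment
# (a double spend) would require redoing the proof of work for that block and
# for every block after it; that is what makes cheating expensive.
chain, prev = [], "genesis"
for txs in ["Alice pays Bob 5", "Bob pays Carol 3", "Carol pays Dave 1"]:
    block_hash, nonce = mine_block(prev, txs)
    chain.append({"prev": prev, "txs": txs, "nonce": nonce, "hash": block_hash})
    prev = block_hash

for block in chain:
    print(block["txs"], "->", block["hash"][:16])
```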

I want to start thinking through the practical and legal implications of that innovation. If you don’t think the innovation could ever allow for a useful store of value, then mine is a fool’s errand. I guess I’m betting on the success of a censorship-resistant currency.

Consumers are buying more and more stuff from online retailers located out of state, and state and local governments aren’t happy about it. States argue that this trend has shrunk their brick-and-mortar sales tax base, causing them to lose out on tax revenues. (While consumers in most states are legally required to remit use taxes each year on goods and services purchased out of state, few comply with this practically unenforceable rule.)

CNET’s Declan McCullagh recently reported that a couple of U.S. Senators are pushing for a bill that would require many Internet retailers to collect sales taxes on behalf of states in which they have no “nexus” (physical presence).

In his latest Forbes.com column, “The Internet Tax Man Cometh,” Adam Thierer argues against this proposed legislation. He points out that while cutting spending should be the top priority of state governments, the dwindling brick and mortar tax base presents a legitimate public policy concern. However, Thierer suggests an alternative to “deputizing” Internet retailers as interstate sales tax collectors:

The best fix might be for states to clarify tax sourcing rules and implement an “origin-based” tax system. Traditional sales taxes are already imposed at the point of sale, or origin. If you buy a book in a Seattle bookstore, the local sales tax rate applies, regardless of where you “consume” it. Why not tax Net sales the same way? Under an origin-based sourcing rule, all sales would be sourced to the principal place of business for the seller and taxed accordingly.

Origin-based taxation is a superb idea, as my CEI colleague Jessica Melugin explained earlier this month in the San Jose Mercury News in an op-ed critiquing California’s proposed affiliate nexus tax:

An origin-based tax regime, based on the vendor’s principal place of business instead of the buyer’s location, will address the problems of the current system and avoid the drawbacks of California’s plan. This keeps politicians accountable to those they tax. Low-tax states will likely enjoy job creation as businesses locate there. An origin-based regime will free all retailers from the accounting burden of reporting to multiple jurisdictions. Buyers will vote with their wallets, “choosing” the tax rate when making decisions about where to shop online and will benefit from downward pressure on sales taxes. Finally, brick-and-mortar retailers would have the “even playing field” they seek.

Congress should exercise its authority over interstate commerce and produce legislation to fundamentally reform sales taxes to an origin-based regime. In the meantime, California legislators should resist the temptation to tax those beyond their borders. Might we suggest an awards show tax?
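To make the sourcing difference concrete, here is a minimal sketch with made-up rates and jurisdictions (none of these numbers come from the pieces quoted above): under destination-style collection a remote seller must look up and remit the buyer’s rate for every state it ships to, while under origin sourcing every sale is taxed at the seller’s home rate.

```python
# Toy comparison of destination- vs. origin-based sales tax sourcing.
# Rates and jurisdictions are hypothetical, for illustration only.
RATES = {"WA": 0.095, "OR": 0.0, "CA": 0.0875}

def destination_tax(price: float, buyer_state: str) -> float:
    """Nexus-style collection: the seller tracks and remits the buyer's state rate."""
    return round(price * RATES[buyer_state], 2)

def origin_tax(price: float, seller_state: str) -> float:
    """Origin sourcing: every sale is taxed at the seller's home-state rate."""
    return round(price * RATES[seller_state], 2)

price = 20.00
print(destination_tax(price, "CA"))  # 1.75; seller must handle every buyer state's rules
print(origin_tax(price, "WA"))       # 1.90; seller reports to one jurisdiction, its own
```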

Continue reading →

Melissa Yu is the winner of first prize in the middle school category of C-SPAN’s StudentCam 2011 competition. Her video, “Net Neutrality: The Federal Government’s Role in Our Online Community,” is an eight-minute look at the push for regulation of Internet service, with an emphasis, appropriate for students, on how the three branches of government have each been involved in the story so far.

Many TLF readers already know the story and the key players, but if you haven’t been following along, or if you want a refresher, here’s a better video than I could have produced in eighth grade. Or now. Congratulations, Melissa Yu!