I’ve already tweeted about it, but if you are following Internet privacy debates and have not yet had the chance to read Lauren Weinstein’s new paper, “Do-Not-Track, Doctor Who, and a Constellation of Confusion,” it is definitely worth a look. Weinstein, founder of the Privacy Forum, zeroes in on two related issues that I have made the focus of much of my work on this topic: (1) the fact that Do Not Track is seemingly viewed by some as a silver-bullet quick fix for online privacy concerns but will be far more complicated to enforce in practice, and (2) that Do Not Track regulation will likely have many unintended consequences, most of which are going unexplored by proponents.
For example, Weinstein says:
Do-not-track in actuality encompasses an immensely heterogeneous mosaic of issues and considerations, not appropriately subject to simplistic approaches or “quick fix” solutions. Approaching this area without a realistic appreciation of such facts is fraught with risks and the potential for major undesirable collateral damages to businesses, organizations, and individuals. Attempts to portray these controversies as “black or white” topics subject to rapid or in some cases even unilaterally imposed resolutions may be politically expedient, but are ultimately both childish and dangerous. […]
Above all, we should endeavor to remember that tracking issues both on and off the Internet are in reality part of a complicated whole, a multifaceted set of problems — and very importantly — potentials as well. The decisions that we make now regarding these issues will likely have far-ranging implications and effects on the Internet for many years to come, perhaps for decades.
Wired reports that a recent federal court decision would make it possible for a private-sector employee to be found in violation of the Computer Fraud and Abuse Act simply for violating their employer’s data policies, without any real “hacking” having occurred. This applies not only to data access, like grabbing data via a non-password-protected computer, but also to unauthorized use, such as emailing or copying data the employee might otherwise have permission to access.
On its face, this doesn’t seem entirely unreasonable. Breaking and entering is a crime, but so is casually walking into a business or home and taking things that aren’t yours, so it seems like data theft, even without any “hacking,” should be a crime. For the law to be otherwise would create a “but he didn’t log out” defense for would-be data thieves.
But what about unauthorized use? Is there a physical-property equivalent of this? Could I be criminally liable for using the corporate car to drag race against my neighbor, or would I only be fired and potentially sued in civil court? Does this new interpretation of the CFAA simply expand the scope of the law into realms already covered, perhaps more appropriately, by statutes that specifically address trade secrets or other sensitive information in a broader way that doesn’t involve computing technology?
Judge Tena Campbell noted in the dissent that under the ruling, “any person who obtains information from any computer connected to the internet, in violation of her employer’s computer-use restrictions, is guilty of a federal crime.” So, perhaps this is a case of the court overreaching in an incredibly dramatic fashion.
I hope my lawyerly co-bloggers can weigh in on this issue.
I was particularly pleased to see both Will and Jacoby take on bogus federalism arguments in favor of allowing States to form a multistate tax cartel to collect out-of-state sales taxes. Senators Dick Durbin (D-IL) and Mike Enzi (R-WY) will soon introduce the “Main Street Fairness Act,” which would force all retailers to collect sales tax for states that join a formal compact. It’s a novel—and regrettable—ploy to get around constitutional hurdles to taxing out-of-state vendors. Sadly, it is gaining support in some circles based on twisted theories of what federalism is all about. Real federalism is about a tension between various levels of government and competition among the States, not a cozy tax cartel.
Will rightly notes that “Federalism — which serves the ability of businesses to move to greener pastures — puts state and local politicians under pressure, but that is where they should be, lest they treat businesses as hostages that can be abused.” And Jacoby argues that an “origin-based” sales tax sourcing rule is the more sensible solution to leveling the tax playing field.
User-driven websites — also known as online intermediaries — frequently come under fire for disabling user content due to bogus or illegitimate takedown notices. Facebook is at the center of the latest controversy involving a bogus takedown notice. On Thursday morning, the social networking site disabled Ars Technica’s page after receiving a DMCA takedown notice alleging the page contained copyright infringing material. While details about the claim remain unclear, given that Facebook restored Ars’s page yesterday evening, it’s a safe bet that the takedown notice was without merit.
Understandably, Ars Technica wasn’t exactly pleased that its Facebook page — one of its top sources of incoming traffic — was shut down for seemingly no good reason. Ars was particularly disappointed by how Facebook handled the situation. In an article posted yesterday (and updated throughout the day), Ars co-founder Ken Fisher and senior editor Jacqui Cheng chronicled their struggle in getting Facebook to simply discuss the situation with them and allow Ars to respond to the takedown notice.
Facebook took hours to respond to Ars’s initial inquiry, and didn’t provide a copy of the takedown notice until the following day. Several other major tech websites, including ReadWriteWeb and TheNextWeb, also covered the issue, noting that Ars Technica is the latest in a series of websites to have had their Facebook pages wrongly disabled. In a follow-up article posted today, Ars elaborated on what happened and offered some tips to Facebook on how it could have better handled the situation.
It’s totally fair to criticize how Facebook deals with content takedown requests. Ars is right that the company could certainly do a much better job of handling the process, and Facebook will hopefully re-evaluate its procedures in light of this widely publicized snafu. In calling out Facebook’s flawed approach to dealing with takedown requests, however, Ars Technica doesn’t do justice to the larger, more fundamental problem of bogus takedown notices.
In 1984, Stewart Brand famously said that information wants to be free. John Perry Barlow reiterated it in the early 90s, and added “Information Replicates into the Cracks of Possibility.” When this idea was applied to online music sharing, it was cool in a “fight the man!” kind of way. Unfortunately, information replication doesn’t discriminate: your personal data, credit cards and medical problems alike, also want to be free. Keeping it secret is really, really hard.
Quite right. We’ve been debating the complexities of information control in the Internet policy arena for the last 20 years and I think we can all now safely conclude that information control is hugely challenging regardless of the sort of information in question. As I’ll note below, that doesn’t mean control is impossible, but the relative difficulty of slowing or stopping information flows of all varieties has increased exponentially in recent years.
But Adida’s more interesting point is the one about the selective morality at play in debates over information control. That is, people generally expect or favor information freedom in some arenas, but then get pretty upset when they can’t crack down on information flows elsewhere. Indeed, some people can get downright religious about the whole “information-wants-to-be-free” thing in some cases and then, without missing a beat, turn around and talk like information totalitarians in the next breath.
Thanks to all of you who have sent in your comments about the new cybersecurity paper Tate Watkins and I released. It’s been getting a good reception.
James Fallows of *The Atlantic*, for example, [noted yesterday](http://www.theatlantic.com/technology/archive/2011/04/two-fascinating-exhibits-on-data-security/237891/) that the paper “represents a significant libertarian-right voice of concern about this latest expansion of the permanent national-security surveillance state,” and that while we can’t underestimate cyber risks, “the emphasis on proportionate response, and the need to guard other values, comes at the right time. We should debate these threats rather than continuing to cower.”
Today I wanted to bend your ears (or eyes, I guess) with another excerpt. The subject today is the “if you only knew what we know” rationale for government action. I’m happy to see that Sen. Sheldon Whitehouse has [a new bill](http://www.fas.org/blog/secrecy/2011/04/cyber_secrecy.html) getting right at the problem of over-classification that allows leaders to get away with “just trust us” rhetoric. The excerpt is after the jump.
I’ve written a long article this morning for CNET (See “Privacy panic debate: Whose data is it?”) on the discovery of the iPhone location tracking file and the utterly predictable panic response that followed. Its life-cycle follows precisely the crisis model Adam Thierer has so frequently and eloquently traced, most recently here on TLF.
In particular, the CNET article takes a close and serious look at Richard Thaler’s column in Saturday’s New York Times, “Show us the data. (It’s ours, after all.)” Thaler uses the iPhone scare as an occasion to propose a regulatory fix to the “problem” of users being unable to access, in “computer-friendly form,” copies of the information “collected on” them by merchants.
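As a purely illustrative aside, here is a minimal sketch of what access to one’s own data in “computer-friendly form” might look like in practice: pulling recent location fixes out of a local SQLite file. The file name, table, and column names below are hypothetical placeholders for the sake of the example, not the actual schema of the iPhone’s location cache or of any merchant’s database.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical example: reading location fixes from a local SQLite file.
# "location_cache.db", the "location_fixes" table, and its columns are
# illustrative placeholders, not any vendor's real schema.
DB_PATH = "location_cache.db"

def read_recent_fixes(limit=10):
    """Return the most recent (timestamp, latitude, longitude) rows."""
    conn = sqlite3.connect(DB_PATH)
    try:
        rows = conn.execute(
            "SELECT timestamp, latitude, longitude "
            "FROM location_fixes ORDER BY timestamp DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        conn.close()
    # Convert Unix timestamps to UTC datetimes for readability.
    return [
        (datetime.fromtimestamp(ts, tz=timezone.utc), lat, lon)
        for ts, lat, lon in rows
    ]

if __name__ == "__main__":
    for ts, lat, lon in read_recent_fixes():
        print(f"{ts.isoformat()}  {lat:.5f}, {lon:.5f}")
```

The point of the sketch is simply that “computer-friendly form” is not exotic: a handful of lines against a standard file format is all it takes to read, or audit, this kind of record.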
Today my colleague [Tate Watkins](http://shortsentences.org/) and I are releasing [a new working paper on cybersecurity policy](http://mercatus.org/publication/loving-cyber-bomb-dangers-threat-inflation-cybersecurity-policy). Please excuse my patently sleep-deprived mug while I describe it here:
Over the past few years there has been a steady drumbeat of alarmist rhetoric coming out of Washington about potential catastrophic cybersecurity threats. For example, at a Senate Armed Services Committee hearing last year, Chairman Carl Levin said that “cyberweapons and cyberattacks potentially can be devastating, approaching weapons of mass destruction in their effects.” Proposed responses include increased federal spending on cybersecurity and the regulation of private network security practices.
The rhetoric of “[cyber doom](http://mercatus.org/publication/beyond-cyber-doom)” employed by proponents of increased federal intervention, however, lacks clear evidence of a serious threat that can be verified by the public. As a result, the United States may be witnessing a bout of threat inflation.
Threat inflation, [according to Thrall and Cramer](http://books.google.com/books?id=EzUtuTOIfTEC&lpg=PP1&ots=3AQmVD2Slb&dq=AMERICAN%20FOREIGN%20POLICY%20AND%20THE%20POLITICS%20OF%20FEAR&pg=PP1#v=onepage&q&f=false), is a concept in political science that refers to “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.” Different actors—including members of Congress, defense contractors, journalists, policy experts, academics, and civilian, military, and intelligence officials—will each have their own motives for contributing to threat inflation. When a threat is inflated, the marketplace of ideas on which a democracy relies to make sound judgments—in particular, the media and popular debate—can become overwhelmed by fallacious information. The result can be unwarranted public support for misguided policies.
The run-up to the Iraq War illustrates the dynamic of threat inflation. After 9/11, the Bush Administration decided to invade Iraq to oust Saddam Hussein. Lacking any clear casus belli, the administration sought popular and congressional support for war by promoting several rationales that ultimately proved baseless.
On the podcast this week, Jane Yakowitz, a visiting assistant professor at Brooklyn Law School, discusses her new paper about data anonymization and privacy regulation, Tragedy of the Data Commons. Citing privacy concerns, legal scholars and privacy advocates have recently called for tighter restrictions on the collection and dissemination of public research data. Yakowitz first explains why these concerns are overblown, arguing that scholars have misinterpreted the risks of anonymized data sets. She then discusses the social value of the data commons, noting the many useful studies that likely wouldn’t have been possible without a data commons. She finally suggests why the data commons is undervalued, citing disparate reactions to similar statistical releases by OkCupid and Facebook, and offers a few policy recommendations for the data commons.
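For readers who want a concrete picture of what is being debated, here is a minimal sketch of the kind of de-identification step that produces a public research data set: direct identifiers are dropped and quasi-identifiers are coarsened before release. The field names are made up for illustration, and real de-identification practice (and Yakowitz’s analysis of it) is considerably more involved than this.

```python
# Minimal illustrative sketch of de-identification before data release:
# drop direct identifiers and coarsen quasi-identifiers. Field names are
# invented for this example; real standards are far more detailed.

def deidentify(record):
    """Return a release-ready copy of one survey record."""
    return {
        # Direct identifiers (name, email) are dropped entirely.
        "age_band": f"{(record['age'] // 10) * 10}s",  # 37 -> "30s"
        "zip3": record["zip"][:3],                     # 5-digit ZIP -> first 3 digits
        "response": record["response"],                # the research payload itself
    }

raw = [
    {"name": "A. Smith", "email": "a@example.com", "age": 37,
     "zip": "43210", "response": "yes"},
    {"name": "B. Jones", "email": "b@example.com", "age": 62,
     "zip": "94107", "response": "no"},
]

print([deidentify(r) for r in raw])
```

The policy argument is over how much re-identification risk survives steps like these, and whether that residual risk justifies restricting the research uses of the resulting data commons.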
It is disappointing that the Obama administration, which campaigned against George W. Bush’s poor record on civil liberties protection, is pursuing a course that aims to limit Fourth Amendment rights when it comes to the use of location tracking technology.
The Washington Post reported yesterday that the Obama administration has petitioned the U.S. Supreme Court to overturn a ruling last year by the U.S. Court of Appeals for the D.C. Circuit that forces police to obtain a warrant before tracking the movements of a suspect using a global positioning device.
The petition is significant because various state laws conflict over procedure, and the Supreme Court, if it takes the case, could establish a uniform rule going forward. In the case at hand, United States v. Antoine Jones, the D.C. court sided with the defendant, overturning the conviction of Jones, who was accused of being a major cocaine dealer, and ruling that D.C. police violated the Fourth Amendment by using a GPS device to track Jones’ movements for one month without a warrant. Appellate courts in New York and California, on the other hand, have ruled in favor of police in similar cases.