“[There’s No Data Sheriff on the Wild Web](http://www.nytimes.com/2011/05/08/weekinreview/08bilton.html),” an article by Nick Bilton in this weekend’s *New York Times*, points out that no federal law punishes massive breaches of personal information like those in the recent Epsilon and Sony cases.

>“There needs to be new legislation and new laws need to be adopted” to protect the public, said Senator Richard Blumenthal, Democrat of Connecticut, who has been pressing Sony to answer questions about its data breach and what the company did to avoid it. “Companies need to be held accountable and need to pay significantly when private and confidential information is imperiled.”

>But how? Privacy experts say that Congress should pass legislation regulating companies if they collect certain types of information. If such laws existed today, they say, Sony could be held responsible for failing to properly protect the data by employing up-to-date security on its systems.

>Or at the very least, companies would be forced to update their security systems. In underground online forums last week, hackers said Sony’s servers were severely outdated and infiltrating them was relatively easy.

While there may be no law requiring site operators to keep their networks updated and secure, it’s not as if they currently have no incentive to do so, and it’s not as if they are completely unaccountable. Witness the (at least) two lawsuits already filed against Sony: [one in Canada](http://ingame.msnbc.msn.com/_news/2011/05/03/6577819-sony-declines-to-testify-before-congress-as-1-billion-lawsuit-filed) for $1 billion and [one in the U.S.](http://ingame.msnbc.msn.com/_news/2011/04/27/6544610-sony-sued-could-bleed-billions-following-playstation-network-hack) seeking class action status. Not to mention that the PlayStation Network is still down and losing money, and that Sony’s reputation has taken a serious hit. Are you now more or less likely to buy a PlayStation as your next console?

To the extent we do need legislation, it’s not to tell firms to keep their Apache servers up to date. Plenty of terrible things happen to a firm that doesn’t take the security of its customers’ data seriously; Sony is living proof of that. Adding a criminal fine to the pile likely won’t improve private incentives. What prescriptive legislation might do, however, is put federal bureaucrats in charge of security standards, which is not a good thing in my book.

The missing incentive here might be the incentive to disclose that a breach has occurred. Rep. Mary Bono Mack [has suggested that she might introduce legislation](http://thehill.com/blogs/hillicon-valley/technology/159581-gop-rep-sony-playing-the-victim-in-hacker-attack) to require such disclosures. Such legislation may well be responding to a real and harmful information asymmetry. If a firm could preserve such an asymmetry, then the usual incentives wouldn’t work.

Rather than trying to legislatively predict and preempt security breaches, when it comes to the security of personal information it might be better to pursue a policy of transparency and resiliency. As I explain in my [latest TIME Techland piece](http://techland.time.com/2011/05/08/why-your-personal-information-wants-to-be-free/), we may now be in a world where it’s next to impossible to ensure that at least some of our private personal information that is digitized and connected to the net won’t be compromised. Attempting to put that genie back in the bottle might be not only futile, but counterproductive. Instead, we may be better served by being informed when our data is compromised, seeking civil redress, and learning to cope with the new reality. As I write in the piece:

>On net, the fact that we now live in a hyper-connected world where information can’t be controlled is a good thing. The cultural, social, economic and political benefits of such a transparent system will likely outweigh the price we pay in privacy and security. And that’s especially the case if we learn to live with that reality.

>Human beings are incredibly resilient, and faced with a new environment, we adapt. When major changes take place—from natural disasters to the Industrial Revolution—we learn to live in the new context, but only if we acknowledge the new reality. We need to get used to this new world in which information can’t be controlled.

>Maybe a new social norm will develop that accepts that everyone will have embarrassing facts about them online, and that it’s OK because we’re human. Maybe if we assumed that data breaches are inevitable, we wouldn’t give up on securing networks, but we might do more to cope. For example, the technology exists to make all credit card numbers single-use to a particular vendor, so they’re of little value to hackers.

>Welcome to the new world. Information wants to be free. The Net interprets information control as damage and routes around it. Get used to it.
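The single-use card numbers mentioned in the excerpt above could work roughly as follows. This is a minimal sketch of one possible scheme, not any issuer’s actual implementation: the function name, token format, and HMAC construction are all my own illustration. The idea is that each token is bound to one vendor and one transaction, so a token stolen in a breach is worthless anywhere else.

```python
import hashlib
import hmac
import secrets

def issue_single_use_token(card_number: str, vendor_id: str, issuer_key: bytes) -> str:
    """Hypothetical issuer-side token derivation (illustrative only)."""
    # A fresh random nonce makes every token unique, so a captured
    # token cannot be replayed for a second charge.
    nonce = secrets.token_hex(8)
    digest = hmac.new(
        issuer_key,
        f"{card_number}|{vendor_id}|{nonce}".encode(),
        hashlib.sha256,
    ).hexdigest()
    # Binding the vendor id into the MAC means the issuer would only
    # honor this token for a charge submitted by that vendor; the real
    # card number never leaves the issuer.
    return f"{vendor_id}:{nonce}:{digest[:16]}"

issuer_key = secrets.token_bytes(32)
token_a = issue_single_use_token("4111111111111111", "vendor-a", issuer_key)
token_b = issue_single_use_token("4111111111111111", "vendor-b", issuer_key)
print(token_a)  # e.g. "vendor-a:<nonce>:<truncated MAC>" — useless to vendor-b
```

A vendor that suffers a breach would leak only dead tokens, which is exactly the kind of resilience-over-prevention approach the piece argues for.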

Late last week, the Project on Government Oversight’s Danielle Brian took a little umbrage at a Huffington Post piece by former U.S. Deputy Chief Technology Officer Beth Noveck, who had been implementing the Obama Administration’s Open Government Initiative until she recently returned to New York Law School.

Brian’s piece suggests a slight schism in the transparency community, between what I believe are the “insider” and “outsider” camps. Brian leaves to the end a crucial point: “[C]an’t the two camps in the open government world peacefully co-exist? There’s just too much work to be done for us to get bogged down in denigrating each others’ agendas.” They most certainly can.

Noveck was a bit dismissive of the open government movement as perceived by much of the transparency community. “Many people, even in the White House,” she wrote, “still assume that open government means transparency about government.” Actually, Noveck continued, open government is “open innovation or the idea that working in a transparent, participatory, and collaborative fashion helps improve performance, inform decisionmaking, encourage entrepreneurship, and solve problems more effectively. By working together as team [sic] with government in productive fashion, the public can then help to foster accountability.”

Visualize the difference between these two approaches: open government as a tool for public oversight and open government as a tool for public participation. When open government is about public oversight, the wording connotes the public looking down from above on the work its servants are doing. When open government is about collaboration, the public is at best an equal partner, allowed to participate in the work of governing. Noveck’s unfortunate language choice treats accountability as a kind of dessert to which the public will be entitled when it has donated sufficient energies to making the government work better.

The administration’s December 2009 open government memorandum predicted this divide. In calling for each agency to publish three “high-value data sets,” it said:

>High-value information is information that can be used to increase agency accountability and responsiveness; improve public knowledge of the agency and its operations; further the core mission of the agency; create economic opportunity; or respond to need and demand as identified through public consultation.

As I noted at the time, it’s a very broad definition.

Without more restraint than that, public choice economics predicts that the agencies will choose the data feeds with the greatest likelihood of increasing their discretionary budgets or the least likelihood of shrinking them. That’s data that “further[s] the core mission of the agency” and not data that “increase[s] agency accountability and responsiveness.” It’s the Ag Department’s calorie counts, not the Ag Department’s check register.

Noveck wants us to put the calorie counts to use. Brian wants to see the check register.

There is no fundamental tension between these two agendas; both are doable at the same time. The difference between them is that one is the openness agenda of the insider: using transparency, participation, and collaboration to improve the functioning of government as it now exists.

The openness agenda of the outsider seeks information about the management, deliberation, and results of the government and its agencies. It is a reform (or “good government”) agenda that may well realign the balance of power between the government and the public. That may sound scary—it certainly complicates some things for insiders—but the “outsider” agenda is shared by groups across the ideological and political spectra. Its content sums to better public oversight and a better-functioning democracy, things insiders are not positioned to oppose.

I think these things will also reduce the public’s demand for government, or at least reduce the cost of delivering what it currently demands. But others who share the same commitment to transparency see it as likely to validate federal programs, root out corruption, and so on (a point I made in opening Cato’s December 2008 policy forum, “Just Give Us the Data!”). There are no losers in this bet: better-functioning programs and reduced corruption are better for fans of limited government than poorly functioning programs and corruption.

Forward on all fronts! The existence of two camps is interesting, but not confounding to the open government movement.

Reps. Edward Markey (D-Mass.) and Joe Barton (R-Texas) have released a discussion draft of their forthcoming “Do Not Track Kids Act of 2011.”  I’ve only had a chance to give it a quick read, but the bill, which is intended to help safeguard kids’ privacy online, has two major regulatory provisions of interest:

(1) New regulations aimed at limiting data collection about children and teens, including (a) an expansion of the Children’s Online Privacy Protection Act (COPPA) of 1998 that would build upon COPPA’s “verifiable parental consent” model; (b) a new “Digital Marketing Bill of Rights for Teens”; and (c) limits on the collection of geolocation information about both children and teens.

(2) An Internet “Eraser Button” for kids to help them wipe out embarrassing facts they have placed online but later come to regret. Specifically, the bill would require online operators, “to the extent technologically feasible, to implement mechanisms that permit users of the website, service, or application of the operator to erase or otherwise eliminate content that is publicly available through the website, service, or application and contains or displays personal information of children or minors.” This is loosely modeled on a similar idea currently being considered in the European Union, the so-called “right to be forgotten” online.

Both of these proposals were originally floated by the child safety group Common Sense Media (CSM) in a report released last December. It’s understandable why some policymakers and child safety advocates like CSM would favor such steps: they fear that there is simply too much information about kids online today, or that kids are voluntarily placing far too much personal information online that could come back to haunt them in the future. These are valid concerns, but there are both practical and principled reasons to be worried about the regulatory approach embodied in the Markey-Barton “Do Not Track Kids Act.”

For Forbes.com this morning, I take a close look at last month’s controversial FCC order requiring facilities-based wireless carriers to negotiate data roaming agreements with other carriers.

There are business, technical, and legal reasons why the order stands on unsteady ground, which the article looks at in detail.

The order, by encouraging artificial competition in nationwide mobile broadband, could also undermine arguments against AT&T’s merger with T-Mobile USA.

How so? If every regional, local, or rural carrier can offer its customers access to the nationwide coverage of Verizon, AT&T, or Sprint, on terms overseen for “commercial reasonableness” by the FCC, what’s the risk of consumer harm from combining AT&T and T-Mobile’s infrastructure? Indeed, doing so would create stronger nationwide 3G and 4G networks for other carriers to use. In that sense, it’s actually pro-competitive, and a pragmatic solution to spectrum exhaustion.

I spaced out and completely forgot to post a link here to my latest Forbes column, which came out over the weekend. It’s a look back at last week’s hullabaloo over “Apple, The iPhone, and a Locational Privacy Techno-Panic.” In it, I argue:

Some of the concerns raised about the retention of locational data are valid. But panic, prohibition and a “privacy precautionary principle” that would preemptively block technological innovation until government regulators give their blessings are not valid answers to these concerns. The struggle to conceptualize and protect privacy rights should be an evolutionary and experimental process, not one micro-managed at every turn by regulation.

I conclude the piece by noting that:

Public pressure and market norms also encourage companies to correct bone-headed mistakes like the locational info retained by Apple.  But we shouldn’t expect less data collection or less “tracking” any time soon.  Information powers the digital economy, and we must learn to assimilate new technology into our lives.

Read the rest here. And if you missed the essay Larry Downes posted here on the same subject last week, make sure to check it out.

With news today that the Department of Justice is [extending its probe](http://thehill.com/blogs/hillicon-valley/technology/158909-justice-department-extends-atat-probe) of the AT&T – T-Mobile merger, and that the FCC [has received](http://www.washingtonpost.com/blogs/post-tech/post/consumers_give_fcc_an_earful_on_atandt_bid_to_buy_t_mobile/2011/05/02/AFX0VScF_blog.html) thousands of comments on the issue, the FCC’s soon-to-be-released Wireless Competition Report is taking on even greater importance.

Last year’s report was [the first in 15 years not to find the market “effectively competitive.”](http://techliberation.com/2010/05/21/the-underlying-desperation-at-the-fcc/) As a result, expectations are high for the new annual report. How it determines the state of competition in the wireless market could affect regulatory policy and how the Commission looks at mergers.

Join the Mercatus Center at George Mason University’s [Technology Policy Program](http://mercatus.org/technology-policy-program) for a discussion of these issues, including:

– What does a proper analysis of wireless competition look like?
– What should we expect from the FCC’s report this year?
– How should the FCC address competition in the future?

Our panel will feature [**Thomas W. Hazlett**](http://mason.gmu.edu/~thazlett/), Professor of Law & Economics, George Mason University School of Law; [**Joshua D. Wright**](http://mason.gmu.edu/~jwrightg/), Assistant Professor of Law, George Mason University School of Law; [**Robert M. Frieden**](http://comm.psu.edu/people/rmf5), Professor of Telecommunications & Law, Penn State University; and [**Harold Feld**](http://www.publicknowledge.org/user/1540), Legal Director, Public Knowledge

**When:** Wednesday, May 18, 2011, 4 – 5:30 p.m. (with a reception to follow)

**Where:** George Mason University’s Arlington Campus, just ten minutes from downtown Washington. (Founders Hall, Room 111, 3351 N. Fairfax Drive, Arlington, VA)

To RSVP for yourself and your guests, please contact Megan Gandee at 703-993-4967 or [mmahan@gmu.edu](mailto:mmahan@gmu.edu) no later than May 16, 2011. If you can’t make it to the Mercatus Center, you can watch this discussion live online at mercatus.org.

A federal judge in Illinois has refused to allow a plaintiff to match IP addresses to individual names in a piracy case, finding that an IP address alone, without any other evidence, is too unreliable a means of identifying the actual perpetrator and, as such, violates the rights of those caught up in what he termed a “fishing expedition.”

In his decision, Judge Harold Baker pointed to one of several recent cases in which paramilitary-style police raids on the residences of persons suspected of downloading child pornography turned up nothing. What had happened was that the real culprit had used the household’s unsecured wireless Internet connection.


On this week’s podcast, Jessica Litman, professor of law at the University of Michigan Law School and one of the country’s foremost experts on copyright, discusses her new essay, Reader’s Copyright. Litman talks about the origins of copyright protection and explains why readers’, viewers’, and listeners’ interests have diminished in importance over time. She points out that copyright would be pointless without readers and claims that the system is not designed to serve creators or potential creators exclusively. Litman also discusses differences between public and private protections and talks about readers’ rights that she believes should be made more explicit in copyright law.


To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?

I’ve spent a great deal of time here defending “techno-optimism” or “Internet optimism” against various attacks through the years, so I was interested to see Cory Doctorow, a novelist and Net activist, take on the issue in a new essay at Locus Online.  I summarized my own views on this issue in two recent book chapters. Both chapters appear in The Next Digital Decade and are labeled “The Case for Internet Optimism.” Part 1 is sub-titled “Saving the Net From Its Detractors” and Part 2 is called “Saving the Net From Its Supporters.” More on my own thoughts in a moment. But let’s begin with Doctorow’s conception of the term.

Doctorow defines “techno-optimism” as follows:

In order to be an activist, you have to be… pessimistic enough to believe that things will get worse if left unchecked, optimistic enough to believe that if you take action, the worst can be prevented. […]

Techno-optimism is an ideology that embodies the pessimism and the optimism above: the concern that technology could be used to make the world worse, the hope that it can be steered to make the world better.

What this definition suggests is that Doctorow has a very clear vision of what constitutes “good” vs. “bad” technology or technological developments. He turns to that dichotomy next as he seeks to essentially marry “techno-optimism” to a devotion to the free/open software movement and a rejection of “proprietary technology.”

Here’s a doozy for the cyber-hype files. After it was announced that CIA Director Leon Panetta would take over at the Department of Defense, Rep. Jim Langevin, co-chair of the CSIS cybersecurity commission and author of comprehensive cybersecurity legislation, put out [a statement that read in part](http://thehill.com/blogs/hillicon-valley/technology/158383-house-dem-says-panetta-understands-cybersecurity):

>“I am particularly pleased to know that Director Panetta will have a full appreciation for the increasing sense of urgency with which we must approach cybersecurity issues. Earlier this year, Panetta warned that ‘the next Pearl Harbor could very well be a cyberattack.’”

That’s from a [statement made](http://abcnews.go.com/News/cia-director-leon-panetta-warns-cyber-pearl-harbor/story?id=12888905) by Panetta to a House intelligence panel in February, and it’s an example of unfortunate rhetoric that Tate Watkins and I cite in [our new paper](http://mercatus.org/publication/loving-cyber-bomb-dangers-threat-inflation-cybersecurity-policy). Pearl Harbor left over two thousand persons dead and pushed the United States into a world war. There is no evidence that a cyber-attack of comparable effect is possible.

What’s especially unfortunate about that kind of alarmist rhetoric, apart from the fact that it unduly scares citizens, is that it is often made in support of comprehensive cybersecurity legislation, like that introduced by Rep. Langevin. That bill [gives DHS the authority](http://www.govtrack.us/congress/billtext.xpd?bill=h112-1136&version=ih&nid=t0%3Aih%3A386) to issue security standards for private owners of critical infrastructure and to audit them for compliance.

What qualifies as critical infrastructure? The bill has an expansive definition, so let’s hope that the “computer experts” cited in [this National Journal story](http://www.nextgov.com/nextgov/ng_20110429_3808.php) on the Sony PlayStation breach are not the ones doing the interpreting:

>While gaming and music networks may not be considered “critical infrastructure,” the data that perpetrators accessed could be used to infiltrate other systems that are critical to people’s financial security, according to some computer experts. Stolen passwords or profile information, especially codes that customers have used to register on other websites, can provide hackers with the tools needed to crack into corporate servers or open bank accounts.

It’s not hard to imagine a logic under which everything comes to be considered “critical infrastructure” because, you know, everything’s connected to the network. We need to be very careful about legislation that grants great power on the basis of vague definitions, little evidence, and lots of fear.