Privacy, Security & Government Surveillance

Earlier today on Twitter, I listed what I thought were the Top 5 “Biggest Internet Policy Issues of 2012.” In case you don’t follow me on Twitter — and shame on you if you don’t! — here were my choices:

  1. Copyright wars reinvigorated post-SOPA; tide starting to turn in favor of copyright reform. [TLF posts on copyright.]
  2. Privacy still red-hot w ECPA reform, online advertising regs & kids’ privacy issues all pending. [TLF posts on privacy.]
  3. WCIT makes Internet governance / NetFreedom a major issue worldwide. [TLF posts on Net governance.]
  4. Antitrust threat looms larger w pending Google case + Apple books investigation. [TLF posts on antitrust.]
  5. Cybersecurity regulatory push continues in both legislative (CISPA) & executive branch. [TLF posts on cybersecurity.]

Lists like these are entirely subjective, of course, but I am basing my list on the general amount of chatter I tended to see and hear about each topic over the course of the year.

What do you think the top tech policy issues of the year were?

As I noted in an addendum to my previous post, less than an hour after I posted an essay about how the District of Columbia’s subsidy deal with LivingSocial was potentially set to unravel, I received a call from two representatives of the D.C. Mayor’s office asking me to clarify a few aspects of the deal. The tone and substance of the call was courteous and professional from the start, and I told them I would be happy to post a quick update to my essay letting readers know of the points that they wanted stressed.

After I did so, however, I kept thinking how strange it was that I received such a quick response from the Mayor’s office about my little post. After all, I can’t imagine that the Technology Liberation Front is on the top of their morning reading list! I just figured that someone in the Mayor’s office probably had a Google Alert set up that caught it.  But then, as luck would have it, I was reading through the Wall Street Journal at lunch and came across a story entitled, “In D.C., Social-Media Surveillance Pays Off” by Sarah Portlock. She reports that:

The local government in the nation’s capital is paying hundreds of thousands of dollars to a startup to gather comments on Twitter, Facebook and other online message boards as well as the government’s own website. The data help form a letter grade for the bureaucracies that handle drivers licenses, building permits and the like. These social-media analytics services are already common for businesses such as restaurants and hotel chains that want to go beyond the comment cards most customers ignore. The D.C. experiment suggests governments are beginning to mirror the private sector in seeking real-time unvarnished feedback.

The D.C. government apparently has a 2-year $670,000 contract with newBrandAnalytics, Inc. to gather social media feedback and insights about the District.  So, I figure that’s how the folks in the D.C. Mayor’s office stumbled upon my little rant. I had posted a link to my essay on both Twitter and Google+ and they probably got an immediate report back about it.

In any event, that got me wondering about how people are going to respond to this sort of “surveillance” of social media sites and activities by governments. Continue reading →

Consumers should be aware that “government transparency” also applies to the data consumers voluntarily provide to the FCC when they participate in a government-run broadband measurement program.

The most egregious aspect of these broadband measurement programs, however, is that the FCC kept the public in the dark for more than a year by failing to disclose that its mobile testing apps were collecting user locations (by latitude and longitude) and unique handset identification numbers that the FCC’s contractors can make available to the public.

The Federal Communications Commission (FCC) recently announced a new program to measure mobile broadband performance in the United States. The FCC believes it is “difficult” for consumers to get detailed information about their mobile broadband performance, and that “transparency on broadband speeds drives improvement in broadband speeds.” The FCC does not, however, limit transparency to broadband speeds. Consumers should be aware that “government transparency” also applies to the data consumers voluntarily provide to the FCC when they participate in a government-run broadband measurement program. Information collected by the FCC about individual consumers may be “routinely disclosed” to other federal agencies, states, or local agencies that are investigating or prosecuting a civil or criminal violation. Some personal information, including individual IP address, mobile handset location data, and unique handset identification numbers, may be released to the public.

This blog post describes the FCC’s broadband measurement programs and highlights the personal data that may be disclosed about those who participate in them. Continue reading →

The privacy debate has been increasingly shaped by an apparent consensus that de-identifying sets of personally identifying information doesn’t work.  In particular, this has led the FTC to abandon the PII/non-PII distinction on the assumption that re-identification is too easy.  But a new paper shatters this supposed consensus by rebutting the methodology of Latanya Sweeney’s seminal 1997 study of re-identification risks, which, in turn, shaped HIPAA’s rules for de-identification of health data and the larger privacy debate ever since.

This new critical paper, “The ‘Re-Identification’ of Governor William Weld’s Medical Information: A Critical Re-Examination of Health Data Identification Risks and Privacy Protections, Then and Now” was published by Daniel Barth-Jones, an epidemiologist and statistician at Columbia University. After carefully re-examining the methodology of Sweeney’s 1997 study, he concludes that re-identification attempts will face “far-reaching systemic challenges” that are inherent in the statistical methods used to re-identify. In short, re-identification turns out to be harder than it seemed—so our identity can more easily be obscured in large data sets. This more nuanced story must be understood by privacy law scholars and public policy-makers if they want to realistically assess current privacy risks posed by de-identified data—not just for health data, but for all data.
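To make the statistical question concrete, here is a minimal, purely hypothetical Python sketch of the linkage attack at the heart of Sweeney’s study: joining “de-identified” records to a public roster (such as a voter roll) on quasi-identifiers like ZIP code, birth date, and sex. All names, records, and field values below are invented for illustration; this is not code or data from either paper.

```python
from collections import Counter

# Hypothetical "de-identified" medical records: names removed, but
# quasi-identifiers (ZIP code, birth date, sex) retained.
medical_records = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "M", "diagnosis": "dx1"},
    {"zip": "02138", "dob": "1962-03-14", "sex": "F", "diagnosis": "dx2"},
    {"zip": "02139", "dob": "1945-07-31", "sex": "M", "diagnosis": "dx3"},
]

# Hypothetical public roster (e.g., a voter roll) linking the same
# quasi-identifiers to names.
voter_roll = [
    {"name": "W. Weld", "zip": "02138", "dob": "1945-07-31", "sex": "M"},
]

def reidentify(records, public):
    """Link de-identified records to names where the quasi-identifier
    combination matches exactly and is unique within the records."""
    key = lambda r: (r["zip"], r["dob"], r["sex"])
    counts = Counter(key(r) for r in records)
    matches = []
    for person in public:
        k = (person["zip"], person["dob"], person["sex"])
        if counts[k] == 1:  # combination is unique -> apparent link
            rec = next(r for r in records if key(r) == k)
            matches.append((person["name"], rec["diagnosis"]))
    return matches

print(reidentify(medical_records, voter_roll))  # [('W. Weld', 'dx1')]
```

Barth-Jones’s critique bears directly on the confidence of such a link: uniqueness must hold in the underlying population, not merely in the sample at hand, and public rosters are incomplete, so apparent uniqueness can be spurious.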

The importance of Barth-Jones’s paper is underscored by the example of Vioxx, which stayed on the market years longer than it should have because of HIPAA’s privacy rules, resulting in between 88,000 and 139,000 unnecessary heart attacks and 27,000-55,000 avoidable deaths—as University of Arizona Law Professor Jane Yakowitz Bambauer explained in a recent Huffington Post piece.

Ultimately, overstating the risk of re-identification causes policymakers to strike the wrong balance in the trade-off of privacy with other competing values.  As Barth-Jones and Yakowitz have suggested, policymakers should instead focus on setting standards for proper de-identification of data that are grounded in a rigorous statistical analysis of re-identification risks.  A safe harbor for proper de-identification, combined with legal limitations on re-identification, could protect consumers against real privacy harms while still allowing the free flow of data that drives research and innovation throughout the economy.

Unfortunately, the Barth-Jones paper has not received the attention it deserves.  So I encourage you to consider writing about this, or just take a moment to share it with your friends on Twitter or Facebook.

Adam Thierer, senior research fellow at the Mercatus Center at George Mason University, discusses recent calls for nationalizing Facebook or at least regulating it as a public utility. Thierer argues that Facebook is not a public good in any formal economic sense, and that nationalizing the social network would be a big step in the wrong direction. He argues that nationalizing the network is neither the only nor the most effective means of solving the privacy concerns that surround Facebook and other social networks. Nor is Facebook a monopoly, he says, arguing that customers have many other choices. Thierer also points out that regulation is not without its problems, including the potential that a regulator will be captured by the regulated network, thus making monopoly a self-fulfilling prophecy.

Listen to the Podcast

Download MP3

Related Links

I have always found it strange that the ACLU speaks with two voices when it comes to user empowerment as a response to government regulation of the Internet. That is, when responding to government efforts to regulate the Internet for online safety or speech purposes, the ACLU stresses personal responsibility and user empowerment as the first-order response. But as soon as the conversation switches to online advertising and data collection, the ACLU suggests that people are basically sheep who can’t possibly look out for themselves and, therefore, increased Internet regulation is essential. They’re not the only ones adopting this paradoxical position. In previous essays I’ve highlighted how both EFF and CDT do the same thing. But let me focus here on ACLU.

Writing today on the ACLU “Free Future” blog, ACLU senior policy analyst Jay Stanley cites a new paper that he says proves “the absurdity of the position that individuals who desire privacy must attempt to win a technological arms race with the multi-billion dollar internet-advertising industry.” The new study Stanley cites says that “advertisers are making it impossible to avoid online tracking” and that it isn’t paternalistic for government to intervene and regulate if the goal is to enhance user privacy choices. Stanley wholeheartedly agrees. In this and other posts, he and other ACLU analysts have endorsed greater government action to address this perceived threat on the grounds that, in essence, user empowerment cannot work when it comes to online privacy.

Again, this represents a very different position from the one that ACLU has staked out and brilliantly defended over the past 15 years when it comes to user empowerment as the proper and practical response to government regulation of objectionable online speech and pornography. For those not familiar, beginning in the mid-1990s, lawmakers started pursuing a number of new forms of Internet regulation — direct censorship and mandatory age verification were the primary methods of control — aimed at curbing objectionable online speech. In case after case, the ACLU rose up to rightly defend our online liberties against such government encroachment. (I was proud to have worked closely with many former ACLU officials in these battles.) Most notably, the ACLU pushed back against the Communications Decency Act of 1996 (CDA) and the Child Online Protection Act of 1998 (COPA) and they won landmark decisions for us in the process. Continue reading →

Nicolas Christin, Associate Director of the Information Networking Institute at Carnegie Mellon University, discusses the Silk Road anonymous online marketplace. Silk Road is a site where buyers and sellers can exchange goods much like eBay and Craigslist. The difference is that the identities of both buyers and sellers are anonymous and goods are exchanged for bitcoins rather than traditional currencies. Because of this anonymity, the site has developed a reputation as a popular online portal for buying and selling drugs, which has caused some politicians to call for the site to be investigated and closed by law enforcement. Despite all of this, the Silk Road remains a very stable marketplace with a very good track record of consumer satisfaction. Christin conducted an extensive empirical study of the site, which he discusses.

Download

President Obama seems to be poised once again to use executive powers to get what Congress won’t give him.

In this case, it’s the imposition of a sweeping set of cybersecurity mandates and regulations on the private sector. My latest commentary at Reason.org addresses the problems of the original Cybersecurity Act, which did not muster enough support in the Senate to get to a vote, and why a White House decision to implement it by executive order simply expands the government’s surveillance and data-gathering power while doing little to secure the nation’s information infrastructure.

Find the commentary here.

In the worlds of technology and government, I’m fond of joking, paranoia is just having a long time-horizon. Advances in data processing will make identifiable what is now anonymous. That “voluntary” pilot program will become full-fledged and mandatory.

But we need not apply the paranoid principle to the White House’s handling of the petition I started a few weeks ago asking the White House to have TSA follow the law. The petition ended on time. There’s no good evidence that its ending was hastened to cut off a late run at getting to 25,000 signatures.

Some folks had gotten the idea that we would have until midnight last Thursday, but it expired around mid-day. That’s about the same time of day I created the petition weeks earlier, which is consistent with my assumption that the system is designed to expire petitions automatically when their allotted time has elapsed.

We could kvetch about losing some momentum when the petition function went down for a few hours around the time a great story came out on Wired’s Threat Level blog. But the folks at Whitehouse.gov added a full day to all petitions to make up for the maintenance outage. The time to complain was then, and I didn’t, so that complaint has expired.

There’s lots of other stuff that is interesting about all this. Continue reading →

It was my honor today to be a panelist at a Hill event on “Apps, Ads, Kids & COPPA: Implications of the FTC’s Additional Proposed Revisions,” which was co-sponsored by the Family Online Safety Institute and the Association for Competitive Technology. It was a free-wheeling discussion, but I prepared some talking points for the event that I thought I would share here for anyone interested in my views about the Federal Trade Commission’s latest proposed revisions to the Children’s Online Privacy Protection Act (COPPA).

________

The Commission deserves credit for very wisely ignoring calls by some to extend the coverage of COPPA’s regulatory provisions from children under 13 all the way up to teens under 18.

  • That would have been a constitutional and technical enforcement nightmare. But the FTC realized that long ago and abandoned any thought of doing so. That is a huge win, since we won’t be revisiting the COPA age verification wars.
  • That being said, with each tweak or expansion of COPPA, the FTC opens the door a bit wider to a discussion of some sort of age verification or age stratification scheme for the Internet.
  • And we know from recent AG activity (recall the old MySpace age verification battle) and Hill activity (i.e., the Markey-Barton bill) that there remains an appetite for doing something more to age-segment Internet populations.

Continue reading →