Privacy, Security & Government Surveillance

Read my take at Cato@Liberty.

So reports the Wall Street Journal:

Lawmakers working to craft a new comprehensive immigration bill have settled on a way to prevent employers from hiring illegal immigrants: a national biometric identification card all American workers would eventually be required to obtain.

It’s the natural evolution of the policy called “internal enforcement” of immigration law, as I wrote in my Cato Institute paper, “Franz Kafka’s Solution to Illegal Immigration.”

Once in place, watch for this national ID to regulate access to financial services, housing, medical care and prescriptions—and, of course, serve as an internal passport.

A couple weeks ago the Google Books Settlement fairness hearing took place in New York City, where Judge Denny Chin heard dozens of oral arguments discussing the settlement’s implications for competition, copyright law, and privacy. The settlement raises a number of very challenging legal questions, and Judge Chin’s decision, expected to come down later this spring, is sure to be a page-turner no matter how he rules.

My work on the Google Books Settlement has focused on reader privacy concerns, which have been a major point of contention between Google and civil liberties groups like EFF, ACLU, and CDT. While I agree with these groups that existing legal protections for sensitive user information stored by cloud computing providers are inadequate, I do not believe that reader privacy should factor into the court’s decision on whether to approve or reject the settlement.

I elaborated on reader privacy in an amicus curiae brief I submitted to the court last September. I argued that because Google Books will likely earn a sizable portion of its revenues from advertising, placing strict limits on data collection (as EFF and others have advocated) would undercut Google’s incentive to scan books, ultimately hurting the very authors whom the settlement is supposed to benefit. While the settlement is not free from privacy risks, such concerns aren’t unique to Google Books nor are they any more serious than the risks surrounding popular Web services like Google search and Gmail. Comparing Google Book Search to brick-and-mortar libraries is inapt, and like all cloud computing providers, Google has a strong incentive to safeguard user data and use it only in ways that benefit users and advertisers.

Continue reading →

Here’s a great conversation at Slate.com about Shane Harris’ new book The Watchers.

We’ll be having the author here at Cato on March 10th for a similar discussion of his book and the growth of the surveillance state.

Register here.

Jim Harper and I have been having one of our periodic tussles over the Lower Merion school laptop spying case. Jim thinks the search in this case may pass Fourth Amendment muster; I disagree.

This is especially tricky because the facts are still very much unclear, but I’m going to follow Orin Kerr in assuming that the facts are roughly as follows. (I also, incidentally, follow Kerr in his conclusions: The statutory claims are mostly spurious; the Fourth Amendment claim is legitimate.)

Harriton High School issues its students personal laptops, which are required for class, and normally are also taken home by the students. Student Blake Robbins, however, had apparently been issued a temporary “loaner” laptop while his normal one was in for repairs. According to school rules, this laptop was supposed to remain on campus because he had not paid an insurance fee for it, but he took it home with him anyway. Exactly what happened next is not entirely clear, but at some point someone at the school appears to have registered it as missing on the school’s asset management and security system.

The system works as follows. Each laptop periodically checks in with the school server whenever it is online—it sends a “heartbeat”—registering its identity, the IP address from which it’s connected, and some basic system data. It also, among other things, checks whether it has been reported missing or stolen. If it has, depending on the settings specified, it activates a security protocol which causes it to check in more frequently and may also involve taking a series of still images with its built-in webcam and submitting them back to the server for review. One of those images, presumably because it showed something the school’s techs thought might be drugs, was subsequently passed along to a school administrator. Again, any of this could be wrong, but assume these facts for now.
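To fix ideas, here is a minimal sketch of what a check-in loop like the one described above might look like. Everything in it is invented for illustration (the endpoint, the asset tag, the intervals, the reply fields), and the webcam capture is left as a stub; this is a sketch of the reported behavior, not the vendor’s actual code.

```typescript
// Minimal sketch of an asset-tracking "heartbeat" loop like the one
// described above. The endpoint, asset tag, intervals, and reply fields
// are all invented for illustration.

const SERVER = "https://assets.example-school.edu/heartbeat"; // hypothetical
const NORMAL_INTERVAL_MS = 15 * 60 * 1000; // routine check-in
const FLAGGED_INTERVAL_MS = 60 * 1000;     // stepped-up check-in once reported missing

interface HeartbeatReply {
  reportedMissing: boolean; // has this machine been flagged on the server?
  captureStills: boolean;   // does the active security profile request webcam images?
}

async function checkIn(): Promise<HeartbeatReply> {
  // Report identity and basic system data; the server replies with status.
  // (The server learns the connecting IP address from the request itself.)
  const res = await fetch(SERVER, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      assetTag: "LAPTOP-1234",       // hypothetical identifier
      hostname: "student-loaner-07", // hypothetical
      localTime: new Date().toISOString(),
    }),
  });
  return (await res.json()) as HeartbeatReply;
}

async function heartbeatLoop(): Promise<void> {
  for (;;) {
    const reply = await checkIn();
    if (reply.reportedMissing && reply.captureStills) {
      // In the system as described, a webcam still would be captured here
      // and uploaded for staff review; left as a stub in this sketch.
    }
    const wait = reply.reportedMissing ? FLAGGED_INTERVAL_MS : NORMAL_INTERVAL_MS;
    await new Promise((resolve) => setTimeout(resolve, wait));
  }
}

heartbeatLoop();
```

The legally interesting step is the last one: once the server flags the machine, the client starts shipping images of whoever is in front of it, wherever it happens to be.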

Our baseline is that private homes enjoy the very highest level of Fourth Amendment protection, and that whenever government agents engage in non-consensual monitoring that reveals any information about activity in the interior of the home, that’s a violation of the right against unreasonable search. There are some forms of public search that may be deemed reasonable without a court order, such as the so-called Terry stop, but “searches and seizures inside a home without a warrant are presumptively unreasonable absent exigent circumstances” (United States v. Karo). Obviously, an ordinary search for stolen property cannot be “exigent.” Karo is actually worth lingering on for a moment. There, a can of ether fitted with a covert tracking beeper had been sold to suspects who were involved in cocaine processing:

Continue reading →

Governments are exercising more and more control over individuals using financial systems and communications systems.

We spend most of our time here on communications. Here’s a look at how things are shaping up in the financial area:

http://www.youtube.com/watch?v=5mUdDBYeg_g

Fellow TLFer Julian Sanchez has written (twice) at Cato@Liberty on the big school-using-laptops-to-spy-on-kids case.

Indulging my contrarian habit, I’m taking a little bit of a different view, though not necessarily an inconsistent one. While it seems to me an error that the school district issued laptops with a potentially invasive security system without fully informing parents, I think a lot more facts have to come out before we reach legal conclusions.

I started to feel some contrary comin’ on when I read the lengthy commentary of a parent at the school, posted on a privacy colleague’s Facebook wall. Among other things, she said:

The minor in question is a truly bad kid. [cites supporting facts] He had broken two laptop computers and had been issued a loaner computer with the explicit instructions not to take it off school property. It disappeared from the school and when questioned he told the school it had been stolen from him. There is quite a bit of theft and laptops had been a target. The kids seemed to know about the security system in place, I didn’t know about it which I think was wrong — the school has apologized for this. The school activated the security system realized the computer was in use and the webcam took a still shot. The minor in question was sitting in front of the webcam, the rumor is with drugs. The photo was sent to the police which apparently was standard procedure for stolen property and not related to anything else.

Maybe the “drugs” were Mike & Ike’s candies. The plaintiff’s lawyer says so. (Consider the veracity of a kid explaining things to his parents and their counsel, though, and of a trial lawyer seeking to lead a class action.)

Sugar pills or not, if the laptop is AWOL from school—presumptively stolen—I don’t see that it would be unreasonable to use the security system to discover its location, and the camera to capture images of who is using it. If there are statutes that would prevent that, I think a court would find a way to avoid applying them, be it on the theory that the putative thief assumed the risk of being surveilled, unclean hands, or some other basis.

The reporting and commentary have been a little overwrought. Better facts will determine what law should apply. Parents at the school have started a Facebook group to discuss this and share the rest of the story, given that the school district has, well, lawyered up.

I tipped a reporter at an outlet I respect about this parent’s version of events. The reporter was alternately dismissive of sources that weren’t “official” and highly defensive when I suggested that her writing and reporting appeared to be preserving controversy rather than getting to the bottom of things. So much for relying on media—even new media—for getting information out.

Maybe spun-up outrage will cause better policies in this area than would otherwise result. Maybe we’ll learn that the security system was used for routine, inappropriate spying on kids. But as a legal case, there’s a lot more to be learned before we should draw conclusions.

Cyber Shockwave FAIL


From my undulating perch on an elliptical machine last night, I saw that CNN was broadcasting a strange roundtable event called “cyber.shockwave”—they occasionally displayed a subhead saying something like “you were warned.”

It was a group of (mostly) former Bush Administration officials sitting around making their pitch that we should be frightened about yet another menace and that our salvation is to run to the arms of government (especially if it’s controlled by their party). The CNN airing of it was an illustration of how politics and public policy are collapsing together with entertainment—reality TV, specifically. The government “experts” were actors in a play dressed up as a newscast.

This post at “Crabbyolbastard Ruminates” captures my sense of what was going on. (“I see that we as a country are being led by blithering Luddites . . .”) As reported by Crabbyol’, the ideas they discussed included: pulling the plug on the Internet, pulling the plug on the cell phone networks, and nationalizing the telco and power companies.

D33PT00T tweets, cleverly, “ok my phn doesn’t work & Internet doesn’t work – ths guys R planning 2 run arnd w/ bullhorns ‘all is well remain calm!'”

Maybe it’s coincidence that Republicans dominated the scene. It was an event put together by the “Bipartisan Policy Center.” But that just goes to show that there is bipartisan agreement on one thing in Washington, D.C.: The government should control more of the society.

The U.S. federal government is not where the action is on “cybersecurity.” It is the responsibility of coders, device manufacturers, network operators, data holders, and ordinary computer users. The CNN broadcast of this event misled viewers into thinking that cybersecurity is the government’s responsibility and that the government will lead any response to security failures.

Heaven help us if that becomes the reality.

If a tree falls in the forest, who cares who hears it?

But when we “publish,” “speak” or “share” online, we often do care who hears it. While millions of users eagerly share huge amounts of information about themselves and their activities by posting status updates, photos, videos, events, etc., nearly everyone would rather limit some of their sharing to a select circle of contacts. For some users (and in some situations), that circle might be quite small, while it could be very large or unlimited for other users or situations. How public is too public when it comes to what we share about ourselves? Personalizing our audience is something we each have to decide for ourselves depending on the circumstances—what I would call “publication privacy.” (It’s a potentially ambiguous term, I’ll grant you, since “publication” still doesn’t obviously refer to user-generated content in everyone’s mind, but I think it’s more clear than “Sharing Privacy,” since “publication” is a subset of the information we “share” about ourselves.)

For all the talk about the “Death of Privacy“—be that good, bad, or simply inevitable—publication privacy is thriving. Twitter, most famously, offers users only the binary choice of either locking down their entire feed (so that you have to approve requests to “follow” you) or making it public to everyone on the service. But just in the last two months, we’ve seen a sea change in the ability of users to manage their publication privacy.

Facebook’s Publication Controls

First, in December, Facebook began offering users the ability to control access to each and every piece of content they share.
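Conceptually, the change is from one account-wide visibility setting to an audience attached to each item. A minimal sketch of that model, with invented names (this is not Facebook’s API), might look like:

```typescript
// Minimal sketch of a per-item audience model; names are invented for
// illustration and this is not Facebook's API.

type Audience =
  | { kind: "everyone" }
  | { kind: "friends" }
  | { kind: "custom"; allowed: Set<string> }; // specific user IDs

interface PostedItem {
  id: string;
  authorId: string;
  body: string;
  audience: Audience; // chosen per item, not once per account
}

function canView(
  item: PostedItem,
  viewerId: string,
  friendsOf: (userId: string) => Set<string>,
): boolean {
  if (viewerId === item.authorId) return true; // authors always see their own items
  switch (item.audience.kind) {
    case "everyone":
      return true;
    case "friends":
      return friendsOf(item.authorId).has(viewerId);
    case "custom":
      return item.audience.allowed.has(viewerId);
  }
}
```

The point of the design is that the access decision is made per item at view time, so a user can be wide open about some posts and tightly restrictive about others.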

Continue reading →

At the FTC’s second Exploring Privacy roundtable at Berkeley in January, many of the complaints about online advertising centered on how difficult it was to control the settings for Adobe’s Flash player, which is used to display ads, videos and a wide variety of other graphic elements on most modern webpages, as well as the potential for unscrupulous data collectors to “re-spawn” standard (HTTP) cookies even after a user deleted them, simply by referencing the Flash cookie on a user’s computer from that domain—thus circumventing the user’s attempt to clear out their own cookies. Adobe responded to the first criticism by promising to include better privacy management features in Flash 10.1, and to the second by condemning such re-spawning and calling for “a mix of technology tools and regulatory efforts” to deal with the problem (including FTC enforcement). (Adobe’s filing offers a great history of Flash, a summary of its use and an introduction to Flash Cookies, which Adam Marcus detailed here.)
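To make the re-spawning complaint concrete, here is a minimal sketch of the pattern as critics described it: a page keeps a copy of its tracking ID in a Flash local shared object (LSO) and quietly restores the HTTP cookie whenever the user deletes it. The getFlashValue/setFlashValue bridge functions are hypothetical stand-ins for calls into a Flash movie, and “uid” is an invented cookie name; this is illustrative, not any vendor’s actual code.

```typescript
// Illustrative sketch of the "re-spawning" pattern described above; not
// any vendor's actual code. getFlashValue/setFlashValue are hypothetical
// bridges into a Flash movie's local shared object (LSO) storage.

declare function getFlashValue(key: string): string | null;
declare function setFlashValue(key: string, value: string): void;

function readCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
  return match ? decodeURIComponent(match[1]) : null;
}

function writeCookie(name: string, value: string): void {
  document.cookie = `${name}=${encodeURIComponent(value)}; path=/; max-age=31536000`;
}

function respawnTrackingId(): string {
  // Prefer the HTTP cookie; fall back to the Flash copy if the user has
  // deleted the cookie (browser cookie controls didn't touch LSOs); mint
  // a fresh ID only if both are gone.
  const id = readCookie("uid") ?? getFlashValue("uid") ?? crypto.randomUUID();
  writeCookie("uid", id);   // silently re-creates the cookie if it was cleared
  setFlashValue("uid", id); // keep the Flash copy in sync for next time
  return id;
}
```

Clearing cookies in the browser does nothing against this, because the Flash copy survives; that asymmetry is exactly what the roundtable complaints were about.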

Earlier this week (and less than three weeks later), Adobe rolled out Flash 10.1, which offers an ingenious solution to the problem of how to manage Flash cookies: Flash now simply integrates its privacy controls with Internet Explorer, Firefox and Chrome (and will soon do so with Safari). So when the user turns on “private browsing mode” in these browsers, Flash cookies will be stored only temporarily, allowing users to use the full functionality of the site, but the Flash Player will “automatically clear any data it might store during a private browsing session, helping to keep your history private.” That’s a pretty big step and an elegantly simple solution to the problem of how to empower users to take control of their own privacy. Moreover:

Flash Player separates the local storage used in normal browsing from the local storage used during private browsing. So when you enter private browsing mode, sites that you previously visited will not be able to see information they saved on your computer during normal browsing. For example, if you saved your login and password in a web application powered by Flash during normal browsing, the site won’t remember that information when you visit the site under private browsing, keeping your identity private.
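The separation Adobe describes is easy to picture as two disjoint per-site stores, with the private one discarded when the session ends. A rough sketch, with invented names (not Flash internals):

```typescript
// Sketch of the storage separation Adobe describes above: per-site data is
// kept in two disjoint stores, and the private-mode store is discarded when
// the private session ends. Names are illustrative, not Flash internals.

class SiteStorage {
  private normal = new Map<string, Map<string, string>>();   // persists across sessions
  private private_ = new Map<string, Map<string, string>>(); // private session only

  private store(privateMode: boolean) {
    return privateMode ? this.private_ : this.normal;
  }

  set(site: string, key: string, value: string, privateMode: boolean): void {
    const s = this.store(privateMode);
    if (!s.has(site)) s.set(site, new Map());
    s.get(site)!.set(key, value);
  }

  get(site: string, key: string, privateMode: boolean): string | undefined {
    // A site visited in private mode cannot see values it saved during
    // normal browsing, and vice versa.
    return this.store(privateMode).get(site)?.get(key);
  }

  endPrivateSession(): void {
    this.private_.clear(); // everything saved privately is forgotten
  }
}
```

Because lookups in private mode never consult the normal store, a site cannot even detect that it has seen you before, which is what keeps the saved-password example above from leaking.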

Continue reading →