Read my take at Cato@Liberty.
Keeping politicians' hands off the Net & everything else related to technology
So reports the Wall Street Journal:
Lawmakers working to craft a new comprehensive immigration bill have settled on a way to prevent employers from hiring illegal immigrants: a national biometric identification card all American workers would eventually be required to obtain.
It’s the natural evolution of the policy called “internal enforcement” of immigration law, as I wrote in my Cato Institute paper, “Franz Kafka’s Solution to Illegal Immigration.”
Once in place, watch for this national ID to regulate access to financial services, housing, medical care and prescriptions—and, of course, serve as an internal passport.
A couple weeks ago the Google Books Settlement fairness hearing took place in New York City, where Judge Denny Chin heard dozens of oral arguments discussing the settlement’s implications for competition, copyright law, and privacy. The settlement raises a number of very challenging legal questions, and Judge Chin’s decision, expected to come down later this spring, is sure to be a page-turner no matter how he rules.
My work on the Google Books Settlement has focused on reader privacy concerns, which have been a major point of contention between Google and civil liberties groups like EFF, ACLU, and CDT. While I agree with these groups that existing legal protections for sensitive user information stored by cloud computing providers are inadequate, I do not believe that reader privacy should factor into the court’s decision on whether to approve or reject the settlement.
I elaborated on reader privacy in an amicus curiae brief I submitted to the court last September. I argued that because Google Books will likely earn a sizable portion of its revenues from advertising, placing strict limits on data collection (as EFF and others have advocated) would undercut Google’s incentive to scan books, ultimately hurting the very authors whom the settlement is supposed to benefit. While the settlement is not free from privacy risks, such concerns aren’t unique to Google Books nor are they any more serious than the risks surrounding popular Web services like Google search and Gmail. Comparing Google Book Search to brick-and-mortar libraries is inapt, and like all cloud computing providers, Google has a strong incentive to safeguard user data and use it only in ways that benefit users and advertisers.
Here’s a great conversation at Slate.com about Shane Harris’ new book The Watchers.
We’ll be having the author here at Cato on March 10th for a similar discussion of his book and the growth of the surveillance state.
Jim Harper and I have been having one of our periodic tussles over the Lower Merion school laptop spying case. Jim thinks the search in this case may pass Fourth Amendment muster; I disagree.
This is especially tricky because the facts are still very much unclear, but I’m going to follow Orin Kerr in assuming that the facts are roughly as follows. (I also, incidentally, follow Kerr in his conclusions: The statutory claims are mostly spurious; the Fourth Amendment claim is legitimate.)

Harriton High School issues its students personal laptops, which are required for class and normally are also taken home by the students. Student Blake Robbins, however, had apparently been issued a temporary “loaner” laptop while his normal one was in for repairs. According to school rules, this laptop was supposed to remain on campus because he had not paid an insurance fee for it, but he took it home with him anyway. Exactly what happened next is not entirely clear, but at some point someone at the school appears to have registered it as missing on the school’s asset management and security system.

The system works as follows. Whenever it is online, each laptop periodically checks in with the school server—it sends a “heartbeat”—registering its identity, the IP address from which it’s connected, and some basic system data. It also, among other things, checks whether it has been reported missing or stolen. If it has, depending on the settings specified, it activates a security protocol that causes it to check in more frequently and may also involve taking a series of still images with its built-in webcam and submitting them back to the server for review. One of those images, presumably because it showed something the school’s techs thought might be drugs, was subsequently passed along to a school administrator. Again, any of this could be wrong, but assume these facts for now.
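For readers curious what that check-in loop looks like mechanically, here is a minimal sketch in Python. Everything in it—the field names, the intervals, the placeholder IP address—is an illustrative assumption on my part, not the actual software the school district ran; the network and webcam calls are stubbed out as injected functions.

```python
# Hypothetical sketch of the "heartbeat" check-in described above.
# Field names, intervals, and the placeholder IP are assumptions,
# not details of the school's actual asset-tracking software.
import json
import time
import uuid

NORMAL_INTERVAL = 60 * 15   # routine check-in every 15 minutes (assumed)
TRACKING_INTERVAL = 60      # once flagged missing, check in every minute (assumed)


def build_heartbeat(ip_address):
    """Assemble the identity and basic system data sent on each check-in."""
    return {
        "machine_id": hex(uuid.getnode()),  # hardware-derived identifier
        "ip": ip_address,
        "timestamp": time.time(),
    }


def tick(send, is_flagged_missing, capture_webcam_still):
    """One check-in cycle; returns how long to sleep before the next one.

    `send`, `is_flagged_missing`, and `capture_webcam_still` stand in for
    the network and camera calls, injected so the sketch is self-contained.
    """
    beat = build_heartbeat("203.0.113.7")  # placeholder IP
    send(json.dumps(beat))
    if is_flagged_missing(beat["machine_id"]):
        # "Security protocol": submit a webcam still for review
        # and shorten the check-in interval.
        send(capture_webcam_still())
        return TRACKING_INTERVAL
    return NORMAL_INTERVAL
```

A real client would just call `tick` in a loop and sleep on the returned interval; the point is only that the laptop volunteers its location data, and the escalation to webcam capture is triggered remotely by a flag on the server.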
Our baseline is that private homes enjoy the very highest level of Fourth Amendment protection, and that whenever government agents engage in non-consensual monitoring that reveals any information about activity in the interior of the home, that’s a violation of the right against unreasonable search. There are some forms of public search that may be deemed reasonable without a court order, such as the so-called Terry stop, but “searches and seizures inside a home without a warrant are presumptively unreasonable absent exigent circumstances” (United States v. Karo). Obviously, an ordinary search for stolen property cannot be “exigent.” Karo is actually worth lingering on for a moment. There, a can of ether fitted with a covert tracking beeper had been sold to suspects who were involved in cocaine processing:
Continue reading →
Governments are exercising more and more control over individuals using financial systems and communications systems.
We spend most of our time here on communications. Here’s a look at how things are shaping up in the financial area:
http://www.youtube.com/watch?v=5mUdDBYeg_g
From my undulating perch on an elliptical machine last night, I saw that CNN was broadcasting a strange roundtable event called “cyber.shockwave”—they occasionally displayed a subhead saying something like “you were warned.”
It was a group of (mostly) former Bush Administration officials sitting around making their pitch that we should be frightened about yet another menace and that our salvation is to run to the arms of government (especially if it’s controlled by their party). The CNN airing of it was an illustration of how politics and public policy are collapsing together with entertainment—reality TV, specifically. The government “experts” were actors in a play dressed up as a newscast.
This post at “Crabbyolbastard Ruminates” captures my sense of what was going on. (“I see that we as a country are being led by blithering Luddites . . .”) As reported by Crabbyol’, the ideas they discussed included: pulling the plug on the Internet, pulling the plug on the cell phone networks, and nationalizing the telco and power companies.
D33PT00T tweets, cleverly, “ok my phn doesn’t work & Internet doesn’t work – ths guys R planning 2 run arnd w/ bullhorns ‘all is well remain calm!'”
Maybe it’s coincidence that Republicans dominated the scene. It was an event put together by the “Bipartisan Policy Center.” But that just goes to show that there is bipartisan agreement on one thing in Washington, D.C.: The government should control more of the society.
The U.S. federal government is not where the action is on “cybersecurity.” It is the responsibility of coders, device manufacturers, network operators, data holders, and ordinary computer users. The CNN broadcast of this event misled viewers into thinking that cybersecurity is the government’s responsibility and that the government will lead any response to security failures.
Heaven help us if that becomes the reality.
If a tree falls in the forest, who cares who hears it?
But when we “publish,” “speak” or “share” online, we often do care who hears it. While millions of users eagerly share huge amounts of information about themselves and their activities by posting status updates, photos, videos, events, etc., nearly everyone would rather limit some of their sharing to a select circle of contacts. For some users (and in some situations), that circle might be quite small, while it could be very large or unlimited for other users or situations. How public is too public when it comes to what we share about ourselves? Personalizing our audience is something we each have to decide for ourselves depending on the circumstances—what I would call “publication privacy.” (It’s a potentially ambiguous term, I’ll grant you, since “publication” still doesn’t obviously refer to user-generated content in everyone’s mind, but I think it’s clearer than “sharing privacy,” since “publication” is a subset of the information we “share” about ourselves.)
For all the talk about the “Death of Privacy”—be that good, bad, or simply inevitable—publication privacy is thriving. Twitter, most famously, offers users only the binary choice of either locking down their entire feed (so that you have to approve requests to “follow” you) or making it public to everyone on the service. But just in the last two months, we’ve seen a sea change in the ability of users to manage their publication privacy.
First, in December, Facebook began offering users the ability to control access to each and every piece of content they share—like so: