Julian Sanchez – Technology Liberation Front (https://techliberation.com)
Keeping politicians' hands off the Net & everything else related to technology

The New SOPA: Now With Slightly Less Awfulness!
https://techliberation.com/2011/12/14/the-new-sopa-now-with-slightly-less-awfulness/ (Wed, 14 Dec 2011 19:48:45 +0000)

On Thursday, the House Judiciary Committee is slated to take up the misleadingly named Stop Online Piracy Act, an Internet censorship bill that will do little to actually stop piracy. In response to an outpouring of opposition from cybersecurity professionals, First Amendment scholars, technology entrepreneurs, and ordinary Internet users, the bill’s sponsors have cooked up an amended version that trims or softens a few of the most egregious provisions of the original proposal, bringing it closer to its Senate counterpart, PROTECT-IP. But the fundamental problem with SOPA has never been these details; it’s the core idea of creating an Internet blacklist, which means everything I say in this video still holds true.

Let’s review the main changes. Three new clarifying clauses have been added up front: the first two make clear that SOPA is not meant to create an affirmative obligation for site owners to monitor user content (good!) or mandate the implementation of technologies as a condition of compliance with the law (also good!). But the underlying incentives created by the statute push strongly in that direction whether or not it’s a formal requirement: What else do we imagine sites threatened under this law because of user-uploaded content or links will do to escape liability? A third clause says the bill shouldn’t be construed in a way that would impair the security or integrity of the network—which is a bit like slapping a label on a cake stipulating that it shouldn’t be construed to make you fat. These are all nice sentiments, but they remind me of the old philosophers’ joke: “You’ve obviously misinterpreted my theory; I didn’t intend for it to have any counterexamples!”

The big changes in the section establishing court-ordered blocking of supposed “rogue” sites appear to be intended to respond to the objections of cybersecurity professionals and network engineers, who pointed out that requiring falsification of Domain Name System records to redirect users from banned domains would interfere with a major government-supported initiative to secure the Internet against such hijacking. The updated language explicitly disavows the idea of redirection, removes a hard five-day deadline for compliance, and (crucially) says that any DNS operator (like your ISP) has fully satisfied its obligations under the statute if it simply fails to respond to DNS queries for blacklisted sites.

This is bad for transparency, in both the engineering and democratic senses of that term, insofar as it makes a government block indistinguishable from a technical failure, but it does, in a sense, address the direct conflict with DNSSEC. But as network engineers point out, a well-designed application implementing DNSSEC isn’t just going to give up when it doesn’t get a valid, cryptographically signed reply: it’s going to try other DNS servers (including servers outside US jurisdiction) until it finds one that answers.
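
To make that fallback behavior concrete, here is a minimal sketch in Python (using the dnspython package) of a client that simply tries other resolvers when its default one fails to answer. The resolver addresses are placeholders, and full DNSSEC validation is omitted; the point is only that a non-answer from one server is not the end of the lookup.

```python
# Sketch: fall back to other resolvers when the default one won't answer.
# Resolver addresses are illustrative placeholders.
import dns.resolver
import dns.exception

RESOLVERS = [
    ["192.0.2.53"],     # placeholder: the ISP's resolver, which may refuse to answer
    ["8.8.8.8"],        # a public resolver
    ["198.51.100.53"],  # placeholder: a resolver outside US jurisdiction
]

def lookup(name: str):
    for nameservers in RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = nameservers
        resolver.lifetime = 3.0  # give up quickly and move on
        try:
            answer = resolver.resolve(name, "A")
            return [rr.address for rr in answer]
        except dns.exception.DNSException:
            continue  # no usable reply from this server; try the next one
    return None

if __name__ == "__main__":
    print(lookup("example.com"))
```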

There are two possibilities here. The first is that application designers don’t design their software properly to implement DNSSEC for fear of liability under the statute’s anti-circumvention provisions, which would be a Very Bad Thing. The second is that they’re assured they won’t be held liable for good design, in which case this whole elaborate censorship process—which was never going to be particularly effective against people who actually want to find pirated content—becomes a truly farcical pantomime, in which nobody running reasonably up-to-date clients even notices the nominal “blocking,” beyond a few seconds’ delay in resolving the “blocked” site. Now, if we’ve got to have an Internet censorship law, a completely impotent one is surely the best kind, but it becomes a bit mysterious what the point of all this is, beyond providing civil libertarians with a chuckle at the vast amount of money Hollywood has wasted ramming this thing through.

The other big change is to the private right of action, which previously would have allowed any copyright holder to unilaterally compel payment processors and ad networks to cut off sites that it merely accuses of infringement, or enabling infringement, or (in a baffling specimen of tortured language) taking “deliberate actions to avoid confirming a high probability” that the site would be used for infringement. That last little hate crime against English is mercifully absent from the revised SOPA, which also makes clear that only foreign sites are covered; a judge is now required to actually issue an order before intermediaries are obligated to sever ties.

Which ultimately goes to show that the original proposal was so profoundly wretched that you can improve it a great deal, and still have a very bad idea. This is still, as many legal scholars have correctly observed, censorship by slightly circuitous economic means. The involvement of a judge should (knock on wood) weed out the most obviously frivolous complaints, but it still makes it far too easy for U.S. corporations to effectively destroy foreign Internet sites based on a one-sided proceeding in U.S. courts.

These changes are somewhat heartening insofar as they evince some legislative interest in addressing the legitimate concerns that have been raised thus far. But the problem with SOPA and PROTECT-IP isn’t that they need to be tweaked in order to get the details of an Internet censorship system right. There is no “right” way to do Internet censorship, and the best version of a bad idea remains a bad idea.

Wiretap Law Online: A Second Look at Paxfire
https://techliberation.com/2011/09/15/wiretap-law-online-a-second-look-at-paxfire/ (Thu, 15 Sep 2011 14:56:49 +0000)

A few days ago, Ars Technica asked me to comment on a class action lawsuit against Paxfire, a company that partners with Internet Service Providers for the purpose of “monetizing Address Bar Search and DNS Error traffic.” The second half of that basically means fixing URL typos, so when you accidentally tell your ISP you want the webpage for “catoo.org,” they figure out you probably mean Cato. The more controversial part is the first half: When users type certain trademarked terms into a unified address/search bar (but not a pure search bar, or a search engine’s own home page), Paxfire directs the user to the page of paying affiliates who hold the trademark. So, for instance, if I type “apple” into the address bar, Paxfire might take me straight to Apple’s home page, even though Firefox’s default behavior would be to treat it as a search for the term “apple” via whatever search engine I’ve selected as my default.

The question at the heart of the suit is: Does this constitute illegal wiretapping? A free tip if you ever want to pose as an online privacy expert: For basically any question about how the Electronic Communications Privacy Act applies to the Internet, the correct answer is “It’s complicated, and the law is unclear.” Still, being a little fuzzy on the technical details of how Paxfire and the ISP accomplished this, I thought about the end result without focusing too much on how it was arrived at. The upshot is that Paxfire (if we take their description of their practices at face value) only ends up logging a small subset of terms submitted via address bars, which are at least plausibly regarded as user attempts to specify addressing information, not communications content. In other words, I basically treated the network as a black box and thought about the question in terms of user intent: If someone who punches “apple” into their search bar is almost always trying to tell their ISP to take them to Apple’s website, that’s addressing information, which ISPs have a good deal of latitude to share with anyone but the government under federal law. And it can’t be wiretapping to route the communication through Paxfire, because that’s how the Internet works: Your ISP sends your packets through a series of intermediary networks owned by other companies and entities, and their computers obviously need to look at the addressing information on those packets in order to deliver them to the right address. So on a first pass, it sounded like they were probably clear legally.

Now I think that’s likely wrong. My mistake was in not thinking clearly enough about the mechanics. Because, of course, neither your ISP nor Paxfire see what you type into your address bar; they see specific packets transmitted to them by your browser. And it turns out that the way they pull out the terms you’ve entered in a search bar is, in effect, by opening a lot of envelopes addressed to somebody else.

Some quick Internet 101: When you type “apple.com” into your address bar, your browser first checks with your ISP (or, if you’re a techie, maybe with some other Domain Name System server you’ve specified) to look up the computer-friendly numerical address corresponding to the human-friendly URL. Then the browser sends a GET request—basically just a packet saying “give me this page please”—to the IP address of the machine where it thinks apple.com lives. But if you just type “apple” into a lot of modern browsers, then depending on their settings, they may not pass that on to your ISP’s DNS server at all. Instead, the browser recognizes that you’ve entered something that isn’t formatted like a URL, and sends a packet straight to your default search engine, whose content is “please give me a page of results for the search term apple.” That’s annoying to the ISP, because it means they get cut out of an opportunity to monetize your eyeballs by (for instance) charging Apple to send you straight there, or delivering you their own search results (with their own ads).
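
A simplified sketch of that branching logic, in Python, may help. The hostname heuristic and the search-engine URL are stand-ins; real browsers use more elaborate rules, but the basic fork (resolve-and-fetch versus send-a-search-query) is what matters here.

```python
# Sketch of the address-bar fork: URL-ish input gets resolved and fetched;
# anything else becomes a query to the default search engine.
import socket
import urllib.parse

SEARCH_ENGINE = "https://search.example.com/search"  # placeholder default engine

def handle_address_bar_input(text: str) -> str:
    looks_like_hostname = "." in text and " " not in text
    if looks_like_hostname:
        url = text if "://" in text else "http://" + text
        socket.gethostbyname(urllib.parse.urlparse(url).hostname)  # DNS lookup via the ISP's resolver
        return url  # the browser would now send a GET request to that address
    # Not URL-shaped: package the text as a search query instead.
    return SEARCH_ENGINE + "?" + urllib.parse.urlencode({"q": text})

print(handle_address_bar_input("apple.com"))  # direct fetch
print(handle_address_bar_input("apple"))      # search query
```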

According to some network researchers who explained their findings at EFF’s site, here’s how Paxfire and some of its ISP partners have apparently solved the “problem.” When your browser goes to look up the IP address of your default search engine—that is, when it asks your ISP’s domain name server where it can find “Bing.com” or “Google.com”—the ISP just systematically lies. It tells your browser that one of Paxfire’s servers is really Google.* Paxfire then acts as an invisible proxy, or “man in the middle”: It looks at the request your browser was trying to submit to Google, and in most cases resubmits the identical request to Google itself, then passes along Google’s response without logging anything. An ordinary user wouldn’t notice that Paxfire had been involved at all. But! When their servers see a search that both originated from a browser’s address bar (the search parameters apparently reveal this) and matches their list of trademarked terms, they’ll log the query and instead return their own page.
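
Here is a hedged sketch, in Python, of the decision rule the researchers describe: pass most queries through untouched, but log and redirect the ones that both appear to come from an address bar and match a trademark list. The "sourceid" check and the affiliate table are hypothetical stand-ins for illustration, not Paxfire's actual implementation.

```python
# Sketch of the proxy's decision rule: mostly pass through, but log and
# redirect address-bar searches that match a trademark list.
from urllib.parse import urlparse, parse_qs

TRADEMARK_AFFILIATES = {"apple": "http://www.apple.com/"}  # illustrative only

def log_query(query: str):
    print("logged:", query)

def handle_search_request(url: str):
    params = parse_qs(urlparse(url).query)
    query = (params.get("q") or [""])[0].lower()
    source = (params.get("sourceid") or [""])[0]  # hypothetical address-bar marker
    from_address_bar = "navclient" in source

    if from_address_bar and query in TRADEMARK_AFFILIATES:
        log_query(query)                             # the interception at issue
        return ("REDIRECT", TRADEMARK_AFFILIATES[query])
    return ("PASS_THROUGH", url)                     # forward to the real search engine

print(handle_search_request("http://www.google.com/search?sourceid=navclient&q=apple"))
print(handle_search_request("http://www.google.com/search?q=apple+pie+recipe"))
```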

The crucial point here is that by the time the packet gets to Paxfire, it’s no longer ambiguous whether “apple” was supposed to be an address or a search term. By the time it gets to Paxfire, “apple” is the content of a message addressed to Google, which reads “please send me search results for apple, and by the way, I’m asking from a Firefox address bar.” The mechanics are opaque to the average user, but Paxfire is in effect combing through all these messages to find the ones that maybe, possibly, perchance the user really meant to be an address rather than a search request, because they don’t really understand how their browsers work. And thaaat’s kinda wiretappy.

Except, of course, it’s still complicated. If an ordinary citizen taps your phone or your Internet connection, they’re guilty of wiretapping (a felony, for those keeping score at home) the instant they “acquire” the communication, regardless of what they do with it. If I rig your computer to send me copies of all your emails (without your consent), it makes no difference whether I ever read any of them or use them for any purpose. If, for some bizarre reason, I’ve done this and then set my system to automatically erase the e-mails upon receipt, I’m still equally guilty of illegal “interception” of your communications. (It’s a separate offense to “use” or “disclose” communications that have been illegally intercepted.) The crime occurs at the instant of acquisition.

The rules are different for telecom companies, because the only way to have communications on a packet switched network is for various intermediaries to “acquire” your communications in order to pass them along. So the Wiretap Act’s definition of “interception” explicitly excludes acquisition by a provider’s computers in the “ordinary course of business.” It’s a separate offense for the provider to “divulge” the contents of a communication to any third party, and here there’s no loose “ordinary course of business” exception. Third-party disclosure is allowed when it’s a “necessary incident” of providing the communications service, or when the contents are passed to an entity whose facilities are used to forward the communication to its intended recipient, or with the consent of one of the parties to the communication. (There are a bunch of other exceptions, but they’re not relevant here.) The interesting wrinkle here is that while for most of us, it’s simple to determine when we’ve “intercepted” a communication, for telecom providers it’s kind of complicated: Unlike the rest of us, they’re allowed to acquire and disclose other people’s communications in the ordinary course of business, so whether an illegal interception has occurred doesn’t just depend on where the data goes, but on what they’re doing with it, and why.

Unsurprisingly, Paxfire’s reply to the suit against them seeks to invoke the “ordinary course of business” exception, among other arguments. Exactly what qualifies as the “ordinary course of business” in this rapidly changing industry is an open question, and circuit court rulings are all over the map, with none directly on point. If that were what we had to assess, I might say flip a coin. But the standard for “divulging” to third parties is more stringent—which makes sense, when you think about it. The law essentially gives providers more leeway in deciding what kinds of internal monitoring or processing are necessary, but sets a higher bar for disclosure to others.

Claiming that redirection of search traffic to Paxfire is a “necessary incident” of service seems like a nonstarter. The more obvious out for the ISPs here is 18 U.S.C. 2511(3)(b)(iii), authorizing disclosure to a person whose facilities are used to forward the message. But a little common sense is needed here: Anyone eavesdropping on a realtime packet-switched communication would normally forward the intercepted packets to their intended recipients. We can’t read this as a sort of blanket loophole for wiretapping executed as a man-in-the-middle attack.

What seems dispositive here is that, while Paxfire is ultimately forwarding most query packets to their intended recipients, ISPs aren’t routing traffic through Paxfire as a means of getting it to the intended recipient, Google. The more direct way to achieve that would be not lying to our browsers when they ask for Google’s IP address, and letting our requests go through normally. Rather, the only rationale for routing the traffic to Paxfire is what they do other than the normal routing and forwarding Internet switches do. The only difference Paxfire makes is that it sometimes doesn’t just forward packets to their intended recipients in the normal way, but sends the user to some affiliate’s page instead. It would make a kind of nonsense of the statute to apply the forwarding exception to these circumstances.

Perhaps counterintuitively, it’s not nearly as clear that Paxfire itself falls on the wrong side of the law here, because a court might well regard their actions as covered by the telecom provider’s “ordinary course of business” restriction on the statutory definition of “interception.” If everyone whose traffic was routed through Paxfire had clearly given informed consent to the filtering and occasional rerouting of their search queries, what Paxfire’s doing would clearly be legal, and one could argue it’s really the ISP’s problem to ensure they’re allowed to pass along the traffic they do.

That, of course, brings us back to the crucial question of consent. All of this is moot if the ISPs had the informed consent of their subscribers to do this. Paxfire says they all did, pointing to privacy policies like this one on RCN’s website. But it’s not clear that this does or should meet the relatively high standard for consent to interception under the Wiretap Act. Congress clearly wanted to establish a pretty strong presumption against the interception of communications content. In this case, that means that when monitoring or disclosure go beyond the “ordinary course” and “necessary incident” exceptions, it seems appropriate to demand that each individual whose communications are intercepted have actual, specific, effective notice that their communications are subject to interception. In considering a case involving workplace monitoring of an employee’s personal calls, the 11th Circuit gave an indication of the stringency of the consent requirement:

It is clear, to start with, that Watkins did not actually consent to interception of this particular call. Furthermore, she did not consent to a policy of general monitoring. She consented to a policy of monitoring sales calls but not personal calls. This consent included the inadvertent interception of a personal call, but only for as long as necessary to determine the nature of the call. So, if Little’s interception went beyond the point necessary to determine the nature of the call, it went beyond the scope of Watkins’ actual consent.

Consent under title III is not to be cavalierly implied. Title III expresses a strong purpose to protect individual privacy by strictly limiting the occasions on which interception may lawfully take place.

Your mileage may differ, but that sounds hard to square with the claim that “consent” exists for each user provided that whichever member of the household pays the bills checked a box next to a link to a dozen pages of dense legal boilerplate, which studies suggest nobody actually reads. Title III, after all, is to a substantial extent a regulation of telecom operators themselves—which means it would be contrary to the purpose of the statute to let them so easily disclaim liability, and to pile broad new exceptions atop the detailed list Congress created.

The key point to bear in mind here is that the strong statutory presumption against interception is of enormous benefit to the providers. People normally don’t scrutinize the legal boilerplate on ISP privacy policies, I want to suggest, because they take for granted that wiretapping is illegal, and that ISPs will not, in fact, routinely allow marketers to sift through the contents of their private communications. If users and customers had to fear that a communications provider was likely to assert the right to do this, based on item D(3) on page 12… a lot of providers would lose a hell of a lot of business, because most people don’t want to have to get a JD in order to be able to feel confident they can communicate securely. I know some of my libertarian friends will say it would be better if everyone did have to pay close attention to all these clickwrap contracts, but in the world we currently live in, people do rely on the strong statutory default prohibiting interception and disclosure. Providers whose business depends pretty heavily on consumer expectations of a strong default shouldn’t be allowed to turn around and assert that the default is actually so weak as to be almost trivially overcome when it might permit them to rake in a few extra bucks on the side.

* Actually, they’ve apparently stopped proxying Google specifically, but roll with me for illustrative purposes.

Tracking and Trade-Offs
https://techliberation.com/2010/12/07/tracking-and-trade-offs/ (Tue, 07 Dec 2010 21:16:57 +0000)

While I harbor plenty of doubts about the wisdom or practicability of Do Not Track legislation, I have to cop to sharing one element of Nick Carr’s unease with the type of argument we often see Adam and Berin make with respect to behavioral tracking here.  As a practical matter, someone who is reasonably informed about the scope of online monitoring and moderately technically savvy already has an array of tools available to “opt out” of tracking. I keep my browsers updated, reject third party cookies and empty the jar between sessions, block Flash by default, and only allow Javascript from explicitly whitelisted sites. This isn’t a perfect solution, to be sure, but it’s a decent barrier against most of the common tracking mechanisms that interferes minimally with the browsing experience. (Even I am not quite zealous enough to keep Tor on for routine browsing.) Many of us point to these tools as evidence that consumers have the ability to protect their privacy, and argue that education and promotion of PETs is a better way of dealing with online privacy threats. Sometimes this is coupled with the claim that failure to adopt these tools more widely just goes to show that, whatever they might tell pollsters about an abstract desire for privacy, in practice most people don’t actually care enough about it to undergo even mild inconvenience.

That sort of argument seems to me to be very strongly in tension with the claim that some kind of streamlined or legally enforceable “Do Not Track” option will spell doom for free online content as users begin to opt-out en masse. (Presumably, of course, The New York Times can just have a landing page that says “subscribe or enable tracking to view the full article.”) If you think an effective opt-out mechanism, included by default in the major browsers, would prompt such massive defection that behavioral advertising would be significantly undermined as a revenue model, logically you have to believe that there are very large numbers of people who would opt out if it were reasonably simple to do so, but aren’t quite geeky enough to go hunting down browser plug-ins and navigating cookie settings. And this, as I say, makes me a bit uneasy. Because the hidden premise here, it seems, must be that behavioral advertising is so important to supplying this public good of free content that we had better be really glad that the average, casual Web user doesn’t understand how pervasive tracking is or how to enable more private browsing, because if they could do this easily, so many people would make that choice that it would kill the revenue model.  So while, of course, Adam never says anything like “invisible tradeoffs are better than visible ones,” I don’t understand how the argument is supposed to go through without the tacit assumption that if individuals have a sufficiently frictionless mechanism for making the tradeoff themselves, too many people will get it “wrong,” making the relative “invisibility” of tracking (and the complexity of blocking it in all its forms) a kind of lucky feature.

There are, of course, plenty of other reasons for favoring self-help technological solutions to regulatory ones. But as between these two types of arguments, I think you probably do have to pick one or the other.

Laptop Spying and the Fourth Amendment
https://techliberation.com/2010/02/25/laptop-spying-and-the-fourth-amendment/ (Thu, 25 Feb 2010 18:07:28 +0000)

Jim Harper and I have been having one of our periodic tussles over the Lower Merion school laptop spying case.  Jim thinks the search in this case may pass Fourth Amendment muster; I disagree.

This is especially tricky because the facts are still very much unclear, but I’m going to follow Orin Kerr in assuming that the facts are roughly as follows. (I also, incidentally, follow Kerr in his conclusions: The statutory claims are mostly spurious; the Fourth Amendment claim is legitimate.)  Harriton High School issues its students personal laptops, which are required for class, and normally are also taken home by the students.  Student Blake Robbins, however, had apparently been issued a temporary “loaner” laptop while his normal one was in for repairs.  According to school rules, this laptop was supposed to remain on campus because he had not paid an insurance fee for it, but he took it home with him anyway. Exactly what happened next is not entirely clear, but at some point someone at the school appears to have registered it as missing on the school’s asset management and security system. The system works as follows. Each laptop periodically checks in with the school server whenever it is online—it sends a “heartbeat”—registering its identity, the IP address from which it’s connected, and some basic system data. It also, among other things, checks whether it has been reported missing or stolen.  If it has, depending on the settings specified, it activates a security protocol which causes it to check in more frequently and may also involve taking a series of still images with its built-in webcam and submitting them back to the server for review. One of those images, presumably because it showed something the school’s techs thought might be drugs, was subsequently passed along to a school administrator.  Again, any of this could be wrong, but assume these facts for now.

Our baseline is that private homes enjoy the very highest level of Fourth Amendment protection, and that whenever government agents engage in non-consensual monitoring that reveals any information about activity in the interior of the home, that’s a violation of the right against unreasonable search. There are some forms of public search that may be deemed reasonable without a court order, such as the so-called Terry stop, but “searches and seizures inside a home without a warrant are presumptively unreasonable absent exigent circumstances” (United States v. Karo). Obviously, an ordinary search for stolen property cannot be “exigent.” Karo is actually helpful to linger on for a moment. There, a can of ether fitted with a covert tracking beeper had been sold to suspects who were involved in cocaine processing:

This case thus presents the question whether the monitoring of a beeper in a private residence, a location not open to visual surveillance, violates the Fourth Amendment rights of those who have a justifiable interest in the privacy of the residence. Contrary to the submission of the United States, we think that it does. At the risk of belaboring the obvious, private residences are places in which the individual normally expects privacy free of governmental intrusion not authorized by a warrant, and that expectation is plainly one that society is prepared to recognize as justifiable. Our cases have not deviated from this basic Fourth Amendment principle. […] In this case, had a DEA agent thought it useful to enter the Taos residence to verify that the ether was actually in the house and had he done so surreptitiously and without a warrant, there is little doubt that he would have engaged in an unreasonable search within the meaning of the Fourth Amendment. For purposes of the Amendment, the result is the same where, without a warrant, the Government surreptitiously employs an electronic device to obtain information that it could not have obtained by observation from outside the curtilage of the house.

Similarly in Kyllo v. United States: “In the home, our cases show, all details are intimate details, because the entire area is held safe from prying government eyes.” So there’s an incredibly strong presumption to overcome.

Now, to be sure, we can imagine fact patterns where we would not want to count the monitoring as a search: If someone steals a wireless surveillance camera, we don’t want to say that somehow the owner conducts a “search” when, without his knowledge or consent, and without his taking any action, the camera is placed inside a home. But it’s important to remember the way the software works here. The laptop issues a “heartbeat” and, in the process, reveals the IP address from which it is connecting.  In other words, it has already announced that it is not on school grounds. From that information alone, it can often be determined whether the laptop is in a public or residential location, and typically that information can also be used in conjunction with a subpoena to determine whose network the laptop was checking in from.  So the activation of the camera here occurs (1) after the laptop has registered itself as being off public school grounds and (2) after it has already provided information that would typically be adequate to learn the location of the laptop, had it actually been stolen.
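
A minimal sketch, assuming a made-up heartbeat format and placeholder address ranges, of how much that check-in already reveals before any camera is involved:

```python
# Sketch: the heartbeat's source IP alone tells the school whether the laptop
# is on campus. The heartbeat fields and network ranges are placeholders, not
# the actual product's format.
import ipaddress

SCHOOL_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # placeholder campus range

def classify_heartbeat(heartbeat: dict) -> str:
    addr = ipaddress.ip_address(heartbeat["source_ip"])
    if any(addr in net for net in SCHOOL_NETWORKS):
        return "on campus"
    # Off campus: the IP alone would typically support a subpoena to the ISP
    # to identify whose network the machine is on. No camera needed.
    return "off campus (residential or other network)"

print(classify_heartbeat({"asset_id": "HHS-1234", "source_ip": "198.51.100.7"}))
```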

Now, Jim doesn’t say a whole lot about why he thinks the presumptive unreasonableness of warrantless home searches is overcome here; just that he thinks it is:

[If] the laptop is AWOL from school—presumptively stolen—I don’t see that it would be unreasonable to use the security system to discover its location, and the camera to capture images of who is using it.

The problem is that the relevant case law is full of some pretty strong statements to the effect that any such search is unreasonable without a warrant, even when there’s much less intimate disclosure than a webcam makes possible.  In a context where these laptops are routinely taken home by minor students—even if this particular one should not have been—and where there’s a registration of the IP address in advance of the camera activation, it seems fair to expect the school to understand that activation of the camera carries significant risk of capturing images from the home. In Karo, again, the police did not know in advance that the can of ether would necessarily end up in a private home—but when their monitoring revealed information from the home, it nevertheless violated the Fourth Amendment. The result might be different in a case where the school had strong reason to believe that the webcam was on school property in another public place, but that’s not the case here.

Now, in the comments and in our e-mail exchanges, Jim invokes the “plain view” doctrine in a way that I think is problematically circular.  The idea here is that what can be seen by ordinary visual inspection—either because visible from outside the house through a window, or because apparent to a police officer otherwise lawfully present—doesn’t get Fourth Amendment protection.  So Jim is right that if the activation of the webcam for the purpose of protecting their property against theft were lawful, then the ancillary information thereby revealed would be admissible in court. But that doesn’t mean that the risk of exposure of such ancillary information is irrelevant to the reasonableness analysis of the initial search. That is, if a lawful search happens to uncover intimate information unrelated to the purpose of the initial search, it’s certainly fair game. But the risk of revealing protected information is highly relevant to the antecedent question of whether the search is reasonable.  In Illinois v. Caballes, Justice Stevens distinguishes a (lawful) use of a narcotics dog from the thermal imaging surveillance at issue in Kyllo:

Critical to that decision was the fact that the device was capable of detecting lawful activity–in that case, intimate details in a home, such as “at what hour each night the lady of the house takes her daily sauna and bath.” Id., at 38. The legitimate expectation that information about perfectly lawful activity will remain private is categorically distinguishable from respondent’s hopes or expectations concerning the nondetection of contraband in the trunk of his car. A dog sniff conducted during a concededly lawful traffic stop that reveals no information other than the location of a substance that no individual has any right to possess does not violate the Fourth Amendment.

In other words, a monitoring method that reveals only the presence or absence of contraband—perhaps even in the home—might not count as a Fourth Amendment search, because possession of contraband (or stolen property) is not a fact in which anybody has a reasonable expectation of privacy. But that exception applies only to such highly limited and targeted searches—it cannot be invoked when there is a risk that information in which there is a reasonable expectation of privacy might be revealed. Mere location of the laptop would be covered by the Caballes exception; the use of the webcam clearly falls outside its bounds.

Also relevant here is Arizona v. Hicks. Police lawfully entered an apartment to investigate a bullet which had been fired through the floor of the apartment above. While there, one officer lifted up a record turntable and recorded a serial number, which he later used to establish that the turntable had been stolen. The plain view doctrine, Justice Scalia held, was inapplicable, because the serial number was only visible after the officer lifted the turntable to expose it. Though their presence in the apartment was lawful, Scalia explained, the decision to lift the turntable could only be sustained on the basis of a probable cause belief that it would disclose evidence of a crime—and this is the case, note, even when the “search” here revealed nothing more intimate than the serial number on what proved to be stolen property!

With that in mind, let’s return to the webcam case. We know that in previous cases where they’ve activated this tracking feature, laptops erroneously registered as missing or stolen turned out to be in a classroom, being used by students, all along. Even if we defer to their suspicion that a laptop has been stolen—and I think it’s an open question whether that’s the appropriate description of a student improperly taking home the laptop he was issued—experts appear to be in agreement that the legitimate purpose of locating it could have been accomplished far less intrusively. The CEO of the company that sells the tracking software avers that  the webcam feature “really doesn’t serve any purpose” and intends to remove it from the next version of the program.

So to recap: Monitoring that reveals even trivial and non-intimate private information about a home is presumptively unreasonable. Remote activation of a laptop webcam without notice or consent is indisputably a search revealing details about the interior of a home, and potentially far, far more intimate details than were at issue in either Kyllo or Karo. Very narrow searches meant to detect the presence or absence of contraband in a private place may be exempt from that presumption, but only when this is the only information thereby revealed. Even when an investigator is lawfully present (telepresent?) in a home, actions that further reveal facts about objects in the home are separate searches subject to a probable cause standard. There is simply no plausible way to shoehorn this case into any of the recognized exceptions to the strong Fourth Amendment protection to which dwellings are subject. If, armed with information about the IP address from which a laptop has checked in, school officials wish to conduct more intrusive surveillance of the home to gather evidence, they should contact the police and seek a probable cause warrant—not play Batman with cameras in a teenager’s bedroom.

Defining “Paternalism” Online
https://techliberation.com/2010/02/12/defining-paternalism-online/ (Fri, 12 Feb 2010 19:24:47 +0000)

Since some of my cobloggers have taken to using the phrase “Privacy Paternalists” to describe some advocates of privacy regulation, I want to suggest a distinction growing out of the discussion on Berin’s Google Buzz post below.

I think that it’s clear there is such a thing as a “privacy paternalist”—and there are not a few among folks I consider allies on other issues.  They’re the ones who are convinced that anyone who values privacy less highly than they do must be confused or irrational. A genuine privacy paternalist will say that even if almost everyone understands that Amazon keeps track of their purchases to make useful recommendations, this collection must be prohibited because they’re not thinking clearly about what this really means and may someday regret it.

There’s actually a variant on this view that I won’t go into at length, but which I don’t think should be classed as strictly paternalist.  Call this the “Prisoner’s Dilemma” view of privacy.  On this account, there are systemic consequences to information sharing, such that we each get some benefit from participating in certain systems of disclosure, but would all be better off if nobody did.  The merits of that kind of argument probably need to be taken up case-by-case, but whatever else might be wrong with it, the form of the argument is not really paternalistic, since the claim is that (most) individuals have a system-level preference that runs contrary to their preference for disclosure within the existing system.

The objections to Buzz, however, don’t really look like this. The claim is not that people’s foolish choices to disclose should be overridden for their own protection. The claim, rather, is that the system is designed in a way that makes it too easy to disclose information without choosing to do so in any meaningful way. Now, if I can log into your private database as user “J’ OR T=T”, you probably need to learn to set up SQL better.  But it is not terribly persuasive of me to argue that criticism of my breach is “paternalistic,” since after all you made your database accessible online to anyone who entered that login. It is substantially more persuasive if I have logged in as “guest” because you had enabled anonymous logins in the hope that only your friends would use them. On the Internet, the difference between protecting information from a user’s own (perhaps ill-advised) disclosure and protecting it from exploitation by an attacker ultimately, in practice, comes down to expectations. (The same is true in the physical world, though settled expectations make this less salient: Preventing me from getting into the boxing ring is paternalism; declaring the park a “boxing ring” by means of a Post-It note at the entrance is a pretext for assault.)
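
For readers who want the mechanics of that analogy spelled out, here is a toy sqlite3 example (with the injection string adjusted slightly so it actually runs) contrasting a query that pastes user input into SQL with a parameterized one, and with a deliberately enabled "guest" login:

```python
# Toy illustration only: why a crafted "username" defeats naive string
# concatenation, whereas a guest account is using access the operator granted.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice'), ('guest')")

def naive_login(username: str) -> bool:
    # Vulnerable: user input is pasted straight into the SQL statement.
    sql = "SELECT COUNT(*) FROM users WHERE name = '%s'" % username
    return db.execute(sql).fetchone()[0] > 0

def safe_login(username: str) -> bool:
    # Parameterized: the input is treated as data, never as SQL.
    sql = "SELECT COUNT(*) FROM users WHERE name = ?"
    return db.execute(sql, (username,)).fetchone()[0] > 0

print(naive_login("J' OR 'T'='T"))  # True: the injected clause matches every row
print(safe_login("J' OR 'T'='T"))   # False: no user literally has that name
print(safe_login("guest"))          # True: the access the operator enabled
```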

What expectations are reasonable ultimately has to be an empirical question.  If we want to establish whether a particular protocol for information sharing is meaningfully consensual, it is not especially helpful to set the bar by appeal to some a priori preference for thinking of people as “smart” or “stupid.”  We should actually try to find out: “When people click this button in this context, do they understand what they are agreeing to? Is the clarity of the notice commensurate with the potential consequences?”  If it turns out that many actual users are dismayed and angry about what they have supposedly “agreed” to, it ought to throw into serious doubt the premise that they have agreed at all. And especially when it is the users themselves complaining, paternalism seems like an odd label to apply. The very limited empirical data we have suggests that people generally do not have a very clear understanding of how information about them is being used or may be used. In the case of Buzz, I’m not entirely sure about what is shared with whom under what conditions given different settings—and I study privacy and tech for a living.

One might say that this is an unflattering or obnoxious observation to make because it implies that we’re all stupid and irresponsible, at least for some values of “stupid.”  Unfortunately, it does not therefore become less true. If people are genuinely concerned and confused, they do not become less so if you suggest that only stupid people would be concerned and confused. If you insist that they stop being concerned and confused, because their concern and confusion logically support the position of regulators, they will thank you for directing them toward a group of people who are operating with a more accurate model of the world and start writing checks to EPIC.

With all due respect to the Red Queen, I think the right approach here is verdict first, sentence afterward. First, let’s try to learn what people expect and believe about online privacy practices, what assumptions about time and cognitive capacity are reasonable, and so on. Maybe we need more time in this new space—the Internet has been around long enough, but interconnected social networking sites as a mass phenomenon are still relatively novel—so that users and sites can negotiate the right set of expectations, but it’d still be useful to have a way of tracking whether and how quickly this is actually happening.

Only after you’ve got this factual foundation is it even possible to define “paternalism” adequately in this context. Whether a rule is “paternalistic” can’t really be determined by looking at the rule itself, or even at the rule in combination with the beliefs and expectations a fully informed and perfectly rational being without time constraints might form. It depends what the facts about real people’s beliefs and expectations are. A rule based on too pessimistic a picture will be paternalistic in effect; one based on too sanguine a picture will fail to protect people from being abused.  An adequately protective rule, of course, need not be enforced by government. Privacy advocates and ordinary users can speak up and pressure firms to adopt better practices if they don’t want to lose market share. When the practices and expectations really are out of sync, this will work, and users will appreciate it. But they’re probably going to notice if it’s always the advocates of regulation who are drawing attention to genuine areas of concern, while libertarians predictably insist there are no infidels in Baghdad.

The Community is the Search Engine
https://techliberation.com/2010/01/23/the-community-is-the-search-engine/ (Sat, 23 Jan 2010 20:46:19 +0000)

With China’s Internet filtering back in the spotlight, this is as good a time as any to rewatch Clay Shirky’s excellent TED talk on the political implications of the ongoing media revolution—with a fascinating case study of a recent episode in the People’s Republic.

Two points that probably deserve emphasis. The first is that the explosion of user generated content in one sense makes the control of search engines even more important for a regime that’s trying to limit access to politically inconvenient information. You can block access to Amnesty International, and you can even try to play whack-a-mole with all the mirrors that pop up, but when the ideas you’re trying to suppress can essentially crop up anywhere, a strategy that relies on targeting sites is going to be hopeless. The search engine is a choke point: You can’t block off access to every place where someone might talk about the Tiananmen massacre, but if you can lock down people’s capacity to search for “Tiananmen massacre,” you can do the next best thing, which is making it very difficult for people to find those places. There are always innumerable workarounds for simple text filters (“Ti@n@nm3n”) but if people are looking for pages, the searchers and the content producers need to converge on the same workaround, by which point the authorities are probably aware of it as well and able to add it to the filter. It’s the same reason people who want to shut down illegal BitTorrent traffic have to focus on the trackers.
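
A toy sketch of that cat-and-mouse, with an invented blacklist and substitution table, shows why the censor's countermove is always just another filter entry, and why it works at all only because searchers and publishers have to converge on the same spelling:

```python
# Toy keyword filter: a naive blacklist misses trivial respellings, and each
# workaround the censor learns about just gets folded into the filter.
BLACKLIST = {"tiananmen massacre"}

SUBSTITUTIONS = str.maketrans({"@": "a", "3": "e", "1": "i", "0": "o"})

def blocked(query: str, normalize: bool = False) -> bool:
    q = query.lower()
    if normalize:
        q = q.translate(SUBSTITUTIONS)  # the censor's catch-up move
    return any(term in q for term in BLACKLIST)

print(blocked("Ti@n@nm3n massacre"))                  # False: naive filter evaded
print(blocked("Ti@n@nm3n massacre", normalize=True))  # True: filter updated to catch it
```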

The second point, however, is that social media also erodes the value of the search engine as a choke point, because it transforms the community itself into the search engine. For many broad categories of question I might want answered, I will get better information more rapidly by asking Twitter than by asking Google. Marshall McLuhan called media “the extensions of man,” because they amplify and extend the function of our biological nervous systems: The screen as prosthetic eye, the speaker as prosthetic ear, the book or the database as external memory storage. The really radical step is to make our nervous systems extensions of each other—to make man the extension of man. That’s hugely more difficult to filter effectively because it makes the generation of the medium’s content endogenous to the use of the medium. You can ban books on a certain topic because a static object gives you a locus of control; a conversation is a moving target. Hence, as Shirky describes, China just had to shut down Twitter on the Tiananmen anniversary, because there was no feasible way to filter it in realtime.

An analogy to public key encryption might be apt here. The classic problem of secure communications was that you needed a secure channel to transmit the key: The process of securing your transmission against attack was itself a point of vulnerability. You had to openly agree to a code before you could start speaking in code. The classic problem of free communication is that the censors can see the method by which you’re attempting to evade censorship. Diffie-Hellman handshaking solves the security problem because an interactive connection between sufficiently smart systems lets you negotiate an idiosyncratic set of session keys without actually transmitting it. A conversation can similarly negotiate its own terms; given sufficient ingenuity, I can make it clear to a savvy listener that I intend for us to discuss Tiananmen in such-and-such a fashion, and the most you can do with any finite set of forbidden terms and phrases is slow the process down slightly.
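
For the curious, here is a toy Diffie-Hellman exchange in Python. The numbers are absurdly small and the parameters unauthenticated, so this is purely illustrative, but it shows the relevant property: both sides compute the same secret, and the secret itself never crosses the wire.

```python
# Toy Diffie-Hellman exchange; real implementations use large safe primes
# (or elliptic curves) and authenticated parameters.
import random

p, g = 23, 5                    # public parameters (toy-sized)

a = random.randrange(2, p - 1)  # Alice's private value
b = random.randrange(2, p - 1)  # Bob's private value

A = pow(g, a, p)                # Alice sends A in the clear
B = pow(g, b, p)                # Bob sends B in the clear

alice_secret = pow(B, a, p)     # computed locally by Alice
bob_secret = pow(A, b, p)       # computed locally by Bob

assert alice_secret == bob_secret
print("shared secret:", alice_secret)  # never transmitted, yet identical
```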

This is a big part of why, pace folks like Tim Wu, I’ll still allow myself to get into the spirit of ’96 every now and again. They can, to be sure, resolve to shut down Twitter and try to throw enough people in jail to intimidate folks into “self discipline,” as they charmingly term it. But the strategies of control available become hugely more costly when the function of the medium is less to connect people with information than to connect them to each other.

Surveillance, Security, & the Google Breach
https://techliberation.com/2010/01/14/surveillance-security-the-google-breach/ (Thu, 14 Jan 2010 16:00:18 +0000)

Yesterday’s bombshell announcement that Google is prepared to pull out of China rather than continuing to cooperate with government Web censorship was precipitated by a series of attacks on Google servers seeking information about the accounts of Chinese dissidents.  One thing that leaped out at me from the announcement was the claim that the breach “was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” That piqued my interest because it’s precisely the kind of information that law enforcement is able to obtain via court order, and I was hard-pressed to think of other reasons they’d have segregated access to user account and header information.  And as Macworld reports, that’s precisely where the attackers got in:

That’s because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.

This is hardly the first time telecom surveillance architecture designed for law enforcement use has been exploited by hackers. In 2005, it was discovered that Greece’s largest cellular network had been compromised by an outside adversary. Software intended to facilitate legal wiretaps had been switched on and hijacked by an unknown attacker, who used it to spy on the conversations of over 100 Greek VIPs, including the prime minister.

As an eminent group of security experts argued in 2008, the trend toward building surveillance capability into telecommunications architecture amounts to a breach-by-design, and a serious security risk. As the volume of requests from law enforcement at all levels grows, the compliance burdens on telecoms grow also—making it increasingly tempting to create automated portals to permit access to user information with minimal human intervention.

The problem of volume is front and center in a leaked recording released last month, in which Sprint’s head of legal compliance revealed that their automated system had processed 8 million requests for GPS location data in the span of a year, noting that it would have been impossible to manually serve that level of law enforcement traffic.  Less remarked on, though, was his speculation that someone who downloaded a phony warrant form and submitted it to a random telecom would have a good chance of getting a response—and one assumes he’d know if anyone would.

The irony here is that, while we’re accustomed to talking about the tension between privacy and security—to the point where it sometimes seems like people think greater invasion of privacy ipso facto yields greater security—one of the most serious and least discussed problems with built-in surveillance is the security risk it creates.

Cross-posted from Cato@Liberty.

The Virtual Fourth Amendment
https://techliberation.com/2009/12/10/the-virtual-fourth-amendment/ (Thu, 10 Dec 2009 15:24:51 +0000)

At Berin’s suggestion, cross-posting from Cato@Liberty:

I’ve just gotten around to reading Orin Kerr’s fine paper “Applying the Fourth Amendment to the Internet: A General Approach.”  Like most everything he writes on the topic of technology and privacy, it is thoughtful and worth reading.  Here, from the abstract, are the main conclusions:

First, the traditional physical distinction between inside and outside should be replaced with the online distinction between content and non-content information. Second, courts should require a search warrant that is particularized to individuals rather than Internet accounts to collect the contents of protected Internet communications. These two principles point the way to a technology-neutral translation of the Fourth Amendment from physical space to cyberspace.

I’ll let folks read the full arguments to these conclusions in Orin’s own words, but I want to suggest a clarification and a tentative objection.  The clarification is that, while I think the right level of particularity is, broadly speaking, the person rather than the account, search warrants should have to specify in advance either the accounts covered (a list of e-mail addresses) or the method of determining which accounts are covered (“such accounts as the ISP identifies as belonging to the target,” for instance).  Since there’s often substantial uncertainty about who is actually behind a particular online identity, the discretion of the investigator in making that link should be constrained to the maximum practicable extent.

The objection is that there’s an important ambiguity in the physical-space “inside/outside” distinction, and how one interprets it matters a great deal for what the online content/non-content distinction amounts to. The crux of it is this: Several cases suggest that surveillance conducted “outside” a protected space can nevertheless be surveillance of the “inside” of that space. The grandaddy in this line is, of course, Katz v. United States, which held that wiretaps and listening devices may constitute a “search” though they do not involve physical intrusion on private property. Kerr can accommodate this by noting that while this is surveillance “outside” physical space, it captures the “inside” of communication contents. But a greater difficulty is presented by another important case, Kyllo v. United States, with which Kerr deals rather too cursorily.

In Kyllo, the majority—led, perhaps surprisingly, by Justice Scalia!—found that the use without a warrant of a thermal imaging scanner to detect the use of marijuana growing lights in a private residence violated the Fourth Amendment. As Kerr observes, the crux of the disagreement between the majority and the dissent had to do with whether the scanner should be considered to be gathering private information about the interior of the house, or whether it only gathered information (about the relative warmth of certain areas of the house) that might have been obtained by ordinary observation from the exterior of the house.  No great theoretical problem, says Kerr: That only shows that the inside/outside line will sometimes be difficult to draw in novel circumstances. Online, for instance, we may be unsure whether to regard the URL of a specific Web page as mere “addressing” information or as “content”—first, because it typically makes it trivial to learn the content of what a user has read, and second, because URLs often contain the search terms manually entered by users. A similar issue arose with e-mail subject lines, which now seem by general consensus to be regarded as “content” even though they are transmitted in the “header” of an e-mail.

Focus on this familiar (if thorny) line drawing problem, however, misses what is important about the Kyllo case, and the larger problem it presents for Kerr’s dichotomy: Both the majority and the dissent seemed to agree that a more sophisticated scanner capable of detecting, say, the movements of persons within the house, would have constituted a Fourth Amendment search. But reflect, for a moment, on what this means given the way thermal imaging scanners operate. Infrared radiation emitted by objects within the house unambiguously ends up “outside” the house: A person standing on the public street cannot help but absorb some of it. What all the justices appeared to agree on, then, is that the collection and processing of information that is unambiguously outside the house, and is conducted entirely outside the house, may nevertheless amount to a search because it is surveillance of and yields information about the inside of the house. This means that there is a distinction between the space where information is acquired and the space about which it is acquired.

This matters for Kerr’s proposed content/non-content distinction, because in very much the same way, sophisticated measurement and analysis of non-content information may well yield information about content. A few examples may help to make this clear. Secure Shell (SSH) is an encrypted protocol for secure communications. In its interactive mode, SSH transmits each keystroke as a distinct packet—and this packet transmission information is non-content information of the sort that might be obtained, say, using a so-called pen/trap order, issued using a standard of mere “relevance” to an investigation, rather than the “probable cause” required for a full Fourth Amendment search—the same standard Kerr agrees should apply to communications. Yet there are strong and regular patterns in the way human beings type different words on a standard keyboard, such that the content of what is typed—under SSH or any realtime chat protocol that transmits each keystroke as a packet—may be deducible from the non-content packet transmission data given sufficiently advanced analytic algorithms. The analogy to the measurement and analysis of infrared radiation in Kyllo is, I think, quite strong.
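
Here is an illustrative sketch of that kind of inference, with invented timestamps and word profiles; real analyses rely on statistical models trained on large keystroke-timing datasets, but the shape of the computation is the same: timing in, candidate content out.

```python
# Sketch: infer likely typed content from packet timestamps alone.
# Profiles and timestamps are invented for illustration.

# Per-candidate expected gaps (in seconds) between successive keystrokes.
CANDIDATE_PROFILES = {
    "cat": [0.14, 0.21],
    "dog": [0.25, 0.11],
}

def intervals(timestamps):
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def score(observed, profile):
    # Lower is better: mean absolute deviation from the candidate's profile.
    return sum(abs(o - p) for o, p in zip(observed, profile)) / len(profile)

# One packet per keystroke, as with SSH in interactive mode.
packet_times = [10.000, 10.142, 10.351]
gaps = intervals(packet_times)

best = min(CANDIDATE_PROFILES, key=lambda w: score(gaps, CANDIDATE_PROFILES[w]))
print(gaps, "-> best guess:", best)
```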

It is not hard to come up with a plethora of similar examples. By federal statute, records of the movies a person rents enjoy substantial privacy protection, and the standard for law enforcement to obtain them—a showing of probable cause that the records are “relevant,” plus prior notice to the consumer—is higher than what’s required for a mere pen/trap. Yet precise analysis of the size of a file transmitted from a service like Netflix or iTunes could easily reveal either the specific movie or program downloaded, or at the least narrow it down to a reasonably small field of possibilities. Logs of the content-sensitive advertising served by a service like Gmail to a particular user may reveal general information about the contents of user e-mails. Sophisticated social network analysis based on the calling or e-mailing patterns of multiple users may reveal, not specific communications contents, but information about the membership and internal structure of various groups and organizations. That amounts to revealing the “contents” of group membership lists, which could have profound First Amendment implications in light of a string of Supreme Court precedents making it clear that state-compelled disclosure of such lists may impermissibly burden the freedom of expressive association even when it does not run afoul of Fourth Amendment privacy protections. And to run back to Kyllo: as “smart” appliances and networked computing become ever more pervasive, analysis of non-content network traffic may reveal enormous amounts of information about the movements and activities of people within private homes.
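The file-size example is especially easy to sketch. The catalog below and the observed byte count are invented, and a real catalog would be far larger and messier, but the logic is just a lookup: an observer who sees nothing but the total size of an encrypted transfer can often narrow a download to a handful of titles.

# Illustrative only: both the catalog sizes and the observed transfer
# size are invented. The point is that matching a single non-content
# number (total bytes) against a list of known file sizes can identify,
# or nearly identify, the content downloaded.

CATALOG_BYTES = {
    "Title A": 1_482_003_112,
    "Title B": 1_481_998_770,
    "Title C": 2_104_551_930,
    "Title D": 734_220_018,
}

def plausible_titles(observed_bytes, tolerance=0.001):
    """Return every title whose known file size is within `tolerance`
    (as a fraction of the file size) of the observed transfer size."""
    return [
        title for title, size in CATALOG_BYTES.items()
        if abs(size - observed_bytes) <= tolerance * size
    ]

# Total bytes seen crossing the wire for one encrypted session:
print(plausible_titles(1_482_100_000))
# ['Title A', 'Title B'], so the field of possibilities is already tiny.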

Here’s one way to describe the problem: The combination of digital technology and increasingly sophisticated analytic methods has complicated the intuitive link between what is directly observed or acquired and what is ultimately subject to surveillance in a broader sense. The natural move here is to try to draw a distinction between what is directly “acquired” and what is learned by mere “inference” from the information acquired. I doubt such a distinction will hold up. It takes a lot of sophisticated processing to turn ambient infrared radiation into an image of the interior of a home; the majority in Kyllo was not sympathetic to the argument that this was mere “inference.” Strictly speaking, after all, the data pulled off an Internet connection is nothing but a string of ones and zeroes. It is only a certain kind of processing that renders it as the text of an e-mail or an IM transcript. If a different sort of processing can derive the same transcript—or at least a fair chunk of it—from the string of ones and zeroes representing packet transmission timing, should we presume there’s a deep constitutional difference?

That is not to say there’s anything wrong with Kerr’s underlying intuition. But it does, I think, suggest that new technologies will increasingly demand that privacy analysis not merely look at what is acquired but at what is done with it. In a way, the law’s hyperfocus on the moment of acquisition as the unique locus of Fourth Amendment blessing or damnation is the shadow of the myopically property-centric jurisprudence the Court finally found to be inadequate in Katz. As Kerr intimates in his paper, shaking off the digital echoes of that legacy—with its convenient bright lines—is apt to make things fiendishly complex, at least in the initial stages. But I doubt it can be avoided much longer.

No, Seriously, U.S. Broadband Competition Sucks https://techliberation.com/2009/10/12/no-seriously-u-s-broadband-competition-sucks/ https://techliberation.com/2009/10/12/no-seriously-u-s-broadband-competition-sucks/#comments Mon, 12 Oct 2009 18:57:15 +0000 http://techliberation.com/?p=22535

Ok, I didn’t say anything last month when Jerry—albeit with some caveats—cited that FCC stat about how 88 percent of zip codes have four or more broadband providers. But now I see my friend Peter Suderman relying on the same figure over at Reason. And friends don’t let friends use FCC broadband data.

First, since a zip code is considered to be “served” by a provider if it has a single subscriber in that area, this is not a terribly helpful measure of competition, which is a function of what you can get at any given household. More importantly, the definition of “broadband” here is a blazing 200 Kbps in one direction—or about 1/20th the average broadband connection speed in the U.K., itself the slowpoke of Europe. A third of the connections they’re calling “broadband” don’t even reach that pathetic speed in both directions. Of the two-thirds that do manage to reach it both ways, almost half are slower than 2.5 Mbps in the faster direction.

Mobile companies are by far the most common “broadband” providers in their sample, with “at least some presence” in 99% of their zip codes, so at least one of those four providers is almost certainly a mobile company. It’s probably a lot more than that: In only 63% of zip codes were both cable and ADSL subscribers reported—and remember, that doesn’t even tell us whether any households actually had even the choice between a cable and an ADSL provider. So we can see how easily you get to four providers under this scheme: You just have to live in a zip code with, let’s say, an incumbent cable company offering what passes for “real” broadband in the U.S., plus even spotty coverage under a 3G network (average downstream speed 901–1,940 Kbps, depending on your provider), and a couple of conventional cellular carriers with EDGE or EV-DO coverage that just squeaks over the 200 Kbps bar. Congratulations, you’re a lucky beneficiary of U.S. “broadband competition.” Woo.
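Here’s a toy sketch of why that counting method flatters the numbers. The zip code, the block names, and the provider names are all invented; the only point is the gap between a provider reporting any subscriber anywhere in the zip and a provider a particular household can actually order service from.

# Invented data for one hypothetical zip code. Under the zip-level
# methodology criticized above, a provider "serves" the whole zip if it
# reports a single subscriber anywhere in it, which says nothing about
# what any particular household can buy.

# provider -> blocks within the zip where it reports any subscribers
REPORTED_PRESENCE = {
    "Cable Co":       {"block-1", "block-2", "block-3"},
    "3G Carrier":     {"block-1"},
    "Cell Carrier A": {"block-2"},
    "Cell Carrier B": {"block-3"},
}

def providers_in_zip():
    """Zip-level count: any reported subscriber anywhere in the zip counts."""
    return sum(1 for blocks in REPORTED_PRESENCE.values() if blocks)

def providers_at(block):
    """Household-level count: providers actually present at this block."""
    return sum(1 for blocks in REPORTED_PRESENCE.values() if block in blocks)

print(providers_in_zip())       # 4, i.e. "four or more providers" on paper
print(providers_at("block-3"))  # 2, what a household on that block really sees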

Look, I think the average person in this country understands that their broadband options are pretty crap, and there’s not much percentage in telling them to ignore their lying eyes and check out some dubious numbers. If the argument against net neutrality depends on the idea that we currently have robust competition in real broadband, well, the argument is in a lot of trouble. What I find much more compelling is the idea that, with 4G rollouts on the horizon, we may actually get adequate broadband competition in the near-to-medium term, and might want to be wary about rushing into regulatory solutions that not only assume the status quo, but could plausibly help entrench it.

Addendum: That Pew survey I cited in the previous post did ask individual home broadband subscribers how many providers they had to choose from. Obviously, that sample excludes people without any broadband access, but 21% (and 30% of rural users) said they only had a single provider, and only a quarter of those who had multiple providers said they had as many as four. Since average prices appeared to be lower where more competition was present, and assuming that, ceteris paribus, you get higher adoption when prices are lower, even these numbers seem likely to overstate the actual degree of choice Americans enjoy.

Class and Gov 2.0 https://techliberation.com/2009/10/12/class-and-gov-2-0/ https://techliberation.com/2009/10/12/class-and-gov-2-0/#comments Mon, 12 Oct 2009 17:30:59 +0000 http://techliberation.com/?p=22517

Rose Afriyie from Feministing wants to know why, amid all the enthusiastic talk of “Gov 2.0” under Obama, we’re not hearing about the “digital divide,” about which there used to be so much tearing of hair and rending of garments:

I, for one, am a little concerned that in all this technology talk, particularly with respect to government agencies moving information online, not a word was mentioned about the Digital Divide. It’s not news that low-income people of color and women are devastatingly impacted by decreased access to technology. But as states and state agencies experience budget constraints, activists must keep an eye out to insure that these creative measures are sensitive to the needs of these communities. Data consolidation is one thing, but how will “automated government services” impact consumers? More specifically, how much computer literacy will be needed to interact with these agencies? I’m not saying that agencies should stay in the Stone Age per se; But, before these agencies pull a George Jetson, they should assess the technological literacy of their communities through surveys or other methods. Also, they should use some of the savings from implementing these new high tech programs to invest in more free Wi-Fi hotspot locations and free technology education workshops–that run at night and provide childcare.

One reason might be that it’s hard to imagine the growth curve for Internet adoption being a whole lot steeper than it is. According to the most recent Pew survey, the percentage of adults in households with home broadband rose from 55% to 63% over the past year. As with adoption of all new technologies, lower income households are behind—but that just means they’re lagging by a few years on the same rapid growth trajectory. For households with annual incomes under $20,000, home broadband rose from 25% of households to 35% in 2009. That’s pretty similar to the curve we saw with television adoption, and if the trend from here roughly tracks TV, we should expect something damn near ubiquity within about five years, which is how long I’d expect it would take to get the kinks worked out of all these online government services anyway. And obviously, that doesn’t count all the people who don’t have broadband at home but have some other access—via work, friends, family, libraries, or cafes. (Also, the government just pumped $4 billion into “stimulating” broadband growth, with another $3 billion in the offing—although that money is, I think unwisely, focused on building pipe to underserved but sparsely populated rural areas rather than improving service and increasing uptake in cities.) All of which is to say, it would be mindbogglingly shortsighted to hold off on rolling out Gov 2.0 services just because a target community might have low rates of Internet use today.

The same Pew survey suggests that a significant chunk of non–Internet users simply don’t see the value to them in getting online. (This is actually hard to disentangle from other reasons given: The claim that it’s too expensive or too much effort to acquire the skills is always implicitly relative to some expected benefit.) Without wanting to trivialize the very great advantage that comes from growing up in a computer-using household or having access to the Internet at school, I doubt most people need some kind of “Internet literacy” class at the community college to figure out how to use a Web browser. They need to see how it would actually benefit their lives enough to justify spending some time screwing around with one to learn how it works, maybe asking the neighbor kid for help as necessary. One of the best ways to ensure that members of disadvantaged communities get online is to make it clear that it’s worth their while to be there—by, say, increasing useful online access to important government services. You don’t need a high school diploma to understand Firefox, but if your education doesn’t qualify you for the sorts of office jobs where computer skills are required, it might not be clear why you should bother learning. Gov 2.0 might be one answer. Partly for that reason, surveys to assess the level of technological literacy in the community are likely to be worse than useless: The pace of change is too rapid for such metrics to be accurate for long anyway, and all the more when the community members’ motivation to become technologically literate is substantially affected by how much value these agencies are providing online.

There are also relatively few dimensions on which putting services online doesn’t make things easier for people with other barriers to access. Consider all the ways someone with a tough work schedule and limited education or language skills is better off, say, filling out an aid application online.  Instructions can be provided in a wide array of languages, and if necessary in audio as well as text. Detailed explanations for potentially confusing form items can be provided a click away without cluttering the page. The form itself can, to some extent, check for errors or do necessary math. It can be accessed and submitted when the applicant finds it convenient, not just during business hours. And insofar as online access reduces the burden on the brick-and-mortar facilities, even those who don’t avail themselves of it directly should benefit. Again, it’s hard to imagine that the barriers to online access are actually going to be higher for disadvantaged users than the barriers to in-person, telephonic, or paper-and-pen access.

Instead of depriving people of all these benefits while they commission studies and convene working groups, agencies should hew to the motto “release early, release often.” The clearest evidence of the community’s needs will come from how and how much they interact with the agency online. The best means of ensuring that the community has the necessary technological literacy to do that is to provide something valuable enough to make it worth acquiring. And the best evangelists and teachers will be the early adopters.

Net Neutrality and Architecture Avoidance https://techliberation.com/2009/09/22/net-neutrality-and-architecture-avoidance/ https://techliberation.com/2009/09/22/net-neutrality-and-architecture-avoidance/#comments Tue, 22 Sep 2009 13:27:48 +0000 http://techliberation.com/?p=21766

If I can amplify a bit on a post at the Cato blog earlier today, I want to clarify that I fully agree that some of the ISP behaviors net neutrality proponents have identified as demanding a regulatory response really are seriously problematic. My point of departure is that I’d rather see if there are narrower grounds for addressing the objectionable behaviors than making sweeping rules about network architecture. So in the case of Comcast’s throttling of BitTorrent, which is the big one that seems to confirm the fears of the neutralists, I think it’s significant that for a long while the company was—”lying about” assumes intent, so I’ll charitably go with “misrepresenting”—their practices. And I don’t think you need any controversial premises about optimal network management to think that it’s impermissible for a company to charge a fee for a service, and then secretly cripple that service. So without even having to hit the more controversial “nondiscrimination” principle Julius Genachowski proposed on Monday, you can point to this as a failure of the “transparency” principle, about which I think there’s a good deal more consensus. Now, there are bigger guns out there looking for dodgy filtering practices these days, so I’d expect the next attempt at this sort of thing to get caught more quickly, but by all means, enforce transparency about business practices too. Consumers have a right to get the service they’ve bought without having to be 1337 haxx0rz to discover how they’re being shortchanged. But before we get the feds involved in writing code for ISP routers, I’d like to see whether that proves sufficient to limit genuinely objectionable deviations from neutrality.

There’s a hoary rule of jurisprudence called the canon of constitutional avoidance. It means, very crudely, that judges don’t decide broad constitutional questions—they don’t go mucking with the basic architecture of the legal system—when they have some narrower grounds on which to rule. So if, for instance, there are two reasonable interpretations of a statute, one of which avoids a potential conflict with a constitutional rule, judges are supposed to prefer that interpretation. It’s not always possible, of course: Sometimes judges have to tackle the big, broad questions. But it’s supposed to be something of a last resort. Lawyers and civil liberties advocates, of course, tend to get more animated by those broad principles, whether the First Amendment or end-to-end. But there’s often good reason to start small—to look to the specific fact patterns of problem cases and see whether there are narrower bases for resolution. It may turn out that in the kinds of cases that neutralists rightly warn could harm innovation, it’s not one big principle, but a diverse array of responses or fixes that will resolve the different issues. In a case like this one, perhaps a mix of mandated transparency, consumer demand, and user adaptation (e.g. encrypting traffic) will get you the same (or a better) result than an architectural mandate.

One reason to prefer narrower solutions is that the more sweeping your fix is, the broader and more unpredictable the effects will tend to be. So, in the Cato post, I floated the possibility that a neutrality mandate might skew investment incentives, and I’d like to elaborate a little on what I had in mind there.
In wireline, we have a legacy system where the open Internet transits over the very same coax cables as more traditional television signals, which now include an array of not-so-traditional services like On Demand. Now, neutrality advocates are pretty explicit that they’re totally cool with this, though there’s nothing more discriminatory and closed than cable TV, where the menu of content you can access is rigidly determined by your service provider. Not only are these signals sharing (finite) space on a wire, they’re often bundled in one package, so consumers pay a discounted price for getting their TV and Internet together. I may even have two Comcast wires from the same line coming into my TV set, allowing me to download the same show from Comcast’s On Demand or the Playstation Store. But what Comcast can’t do, consistent with principles of neutrality, is fold their video offerings into the data stream with priority for their own packets, which would allow me to download the same array of movies and shows at the speed I’m used to, rather than at the somewhat lower speed at which I can download Playstation content. Obviously there are numerous reasons cable companies continue to maintain segregated networks, some of which, again, have to do with cable being legacy tech. I’m not really interested in getting tangled in the question of the real-world conditions under which it would be more efficient to combine them. I am interested in the possibility that if it were more efficient, an overly broad rule designed as a response to a narrow problem with BitTorrent throttling could nevertheless provide a strong incentive to keep them segregated—and, ironically, for the very type of reason neutrality rules are supposed to make moot: to avoid cannibalizing their video offerings.

In the wireless context, think of a technology like MediaFLO. That stands for Forward Link Only—a one-way video stream from a tower to a mobile device, with interactivity provided by a conventional 3G connection. There are various reasons, again, why it might be efficient for spectrum to be allocated to this delivery mechanism rather than having people download their video on all fours with every other packet on a generic LTE Internet connection. But it seems to me that a really bad reason to allocate spectrum this way is that you’ve got a regulatory asymmetry that lets you take advantage of cross-subsidies for content delivered this way at high speed, but not if you want to prioritize it on the all-purpose 4G network. Just to be clear, I’m not claiming this particular thing will happen—I have no idea whether it’s remotely probable for any specific market or technology. But I very much doubt anyone can say how significant this type of allocative bias would be—certainly not five years down the line, with whatever other standards are in the offing. And that’s the problem: To weigh the effects of the broader rule, you need to start factoring in effects like these, which seems like an impossible task. But if your concern is that the owners of the physical layer are going to leverage their control of the platform to privilege their content on the network, it seems like you’ve got to be equally concerned about whether they’ll privilege networks for their content. Put another way, it seems like there’s a potential tension between a policy of neutrality within the network and a policy that’s neutral across networks.
I can’t predict how serious an issue that will be in two years or ten, and if I had to bet I’d put my money on the open, neutral network beating out some wireless Minitel. All the old walled-garden online services of the ’90s turned out to be no competition for the unfettered Internet… and it’s for this very reason that I expect packet discrimination to be a losing proposition for ISPs, with or without regulation. But there are reasons things at least might be different for wireless. Until we know, I’d rather stick with the narrowest available fixes to such particular problems as do crop up, and then figure out as we go whether a broader remedy is needed, than have an overbroad fix that prompts some further lurching correction when we figure out, belatedly, what unintended second-order effects our first solution created.

Addendum: My friend Tom Lee from the Sunlight Foundation (confusingly distinct from colleague Tim Lee!) has some characteristically smart things to say, and suggests that while disagreement is bound to persist, arguments in this space appear to be getting less hysterical and stupid. Which, if true, would mean Net Neutrality is on some kind of countercyclical trend to… every single other aspect of American political discourse. Hope springs eternal.

Addendum II: From Tom’s post:
[T]he FCC is essentially saying that if ISP, Inc. is interested in undertaking some network monkey business, it would behoove them to get on the phone with Washington before they get on the phone with Cisco.  This is a burden, I suppose, but network-wide changes are a big enough deal and pursued at a sufficiently careful pace that I don’t think it’s likely to be a particularly onerous one.
So, the thing we all like about end-to-end is that it enables innovation by decentralizing experimentation—you don’t need permission to connect a new application or device to the network. What I like about end-to-end in markets is that you don’t need permission to hook up a new business model either. Obviously, ISPs are vastly fewer and far slower moving than coders and users. Maybe he’s right that this makes the burden low relative to the gains. But I’d like to see some of the neutralists go a little fractal and turn that geek candlepower to the question of how the market itself might be maintained as a more open platform, rather than just looking for the best network management strategies. Benkler has suggested a spectrum commons as a means of introducing last-mile competition—a real Public Option, as it were—and something like that strikes me as more attractive than duopoly, whether or not it’s regulated duopoly.

Addendum III: Jon Zittrain notes that he had a similar thought in his book The Future of the Internet (and How to Stop It):
The cable television experience is a walled garden. Should a cable or satellite company choose to offer a new feature in the lineup called the “Internet channel,” it could decide which Web sites to allow and which to prohibit. It could offer a channel that remains permanently tuned to one Web site, or a channel that could be steered among a preselected set of sites, or a channel that can be tuned to any Internet destination the subscriber enters so long as it is not on a blacklist maintained by the cable or satellite provider. Indeed, some video game consoles are configured for broader Internet access in this manner. Puzzlingly, parties to the network neutrality debate have yet to weigh in on this phenomenon.
Online Privacy and Regulation by Default https://techliberation.com/2009/09/17/online-privacy-and-regulation-by-default/ https://techliberation.com/2009/09/17/online-privacy-and-regulation-by-default/#comments Thu, 17 Sep 2009 17:00:16 +0000 http://techliberation.com/?p=21645

My colleague Jim Harper and I have been having a friendly internal argument about Internet privacy regulation that strikes me as having potential implications for other contexts, so I thought I might as well pick it up here in case it’s of interest to anyone else. Unsurprisingly, neither of us is particularly sanguine about elaborate regulatory schemes—and I’m sympathetic to the general tenor of his recent post on the topic. But unlike Jim, as I recently wrote here, I can think of two rules that might be appropriate: a notice requirement that says third-party trackers must provide a link to an ordinary-language explanation of what information is being collected, and for what purpose, combined with a clear rule making those stated privacy policies enforceable in court. Jim regards this as paternalistic meddling with online markets; I regard it as establishing the conditions for the smooth functioning of a market. What do those differences come down to?

First, a question of expectations. Jim thinks it’s unreasonable for people to expect any privacy in information they “release” publicly—and when he’s talking about messages posted to public fora or Facebook pages, that’s certainly right. But it’s not always right, and as we navigate the Internet our computers can be coaxed into “releasing” information in ways that are far from transparent to the ordinary user. Consider this analogy. You go to the mall to buy some jeans; you’re out in public and clearly in plain view of many other people—most of whom, in this day and age, are probably carrying cameras built into their cell phones. You can hardly complain about being observed, and possibly caught on camera, as you make your way to the store. But what about when you make your way to the changing room at The Gap to try on those jeans? If the management has placed an unobtrusive camera behind a mirror to catch shoplifters, can the law require that the store post a sign informing you that you’re being taped in a location and context where—even though it’s someone else’s property—most people would expect privacy? Current U.S. law does, and really it’s just one special case of the law laying down default rules to stabilize expectations.  I think Jim sees the reasonable expectation in the online context as “everything is potentially monitored and archived all the time, unless you’ve explicitly been warned otherwise.” Empirically, this is not what most people expect—though they might begin to as a result of a notice requirement.

Now, as Jim well knows, there are many cases in which the law sets defaults to stabilize expectations. Under the common law doctrine of implied warranty, when you go out and buy a toaster, you do not explicitly write out a contract in which it’s stipulated that the thing will turn on when you get home and plug it in, that it will toast bread without bursting into flames, and so on. Markets would not function terribly well if you did have to do this constantly. Rather, it’s understood that there are some minimal expectations built into the transaction—toasters toast bread!—unless the seller provides explicit notice that this is an “as is” sale. This brings us to a second point of divergence: Like Jim, I think the evolutionary mechanism of the common law is generally the best way to establish these market-structuring defaults. Unlike Jim, I think sometimes it’s appropriate to resort to statute instead. This story from Techdirt should suggest why:

It’s still not entirely clear what online agreements are actually enforceable and which aren’t. We’ve seen cases go both ways, with a recent ruling even noting that terms that are a hyperlink away, rather than on the agreement page itself, may be enforceable. But the latest case, involving online retailer Overstock went in the other direction. A court found that Overstock’s arbitration requirement was unenforceable, because, as “browserwrap,” the user was not adequately notified. Eventually, it seems that someone’s going to have to make it clear what sorts of online terms are actually enforceable (if any). Until then, we’re going to see a lot more lawsuits like this one.

Evolutionary mechanisms are great, but they’re also slow, incremental, and in the case of the common law typically parasitic on the parallel evolution of broader social norms and expectations. That makes it an uneasy fit with novel and rapidly changing technological platforms for interaction. The tradeoff is that, while it’s slow, the discovery process tends to settle on efficient rules. But sometimes having a clear rule is actually more important—maybe significantly more important—than getting the rule just right. These features seem to me to weigh in favor of allowing Congress, not to say what standards of privacy must look like, but to step in and lay down public default rules that provide a stable basis for informed consumers and sellers to reach their own mutually beneficial agreements.

Finally, there’s the question of whether it’s constitutionally appropriate for federal legislators, rather than courts, to make that kind of decision. I scruple to say how “the Founders intended” the Constitution to apply to e-commerce, but even on a very narrow reading of the Commerce Clause, this seems to fall safely within the purview of a power to “make regular” commerce between the several states by establishing uniform rules for transactions across a network that pays no heed to state boundaries. A patchwork of divergent standards imposed by judges and state legislators does not strike me as an especially market-friendly response to people’s online privacy concerns, but that appears to be the alternative. If there’s a way to address those concerns that’s both constitutionally appropriate and works by enabling informed choice and contract rather than nannying consumers or micromanaging business practices, then it seems to me that it makes sense for supporters of limited government to point that solution out.

Cross-posted from Cato-at-Liberty.

I’m Always Doing Seven Things; I Write the Code for Brain Implants https://techliberation.com/2009/09/15/im-always-doing-seven-things-i-write-the-code-for-brain-implants/ https://techliberation.com/2009/09/15/im-always-doing-seven-things-i-write-the-code-for-brain-implants/#comments Tue, 15 Sep 2009 14:11:20 +0000 http://techliberation.com/?p=21528

Thanks to Adam for the kind introduction; for folks to whom I’m unfamiliar, my Ars Technica archive has the bulk of my tech writing over the past year and change, though plenty of it is straight reporting now well past its expiration date. It’s been suggested that for openers I cross-post last week’s Cato @ Liberty thumbsucker on behavioral advertising regulation, which riffs on some of the commentary here, but in the interest of avoiding redundancy, I’ll just do the digest version and let the curious click through. Since they say that on your first day in lockup you should pick a fight with the biggest mofo in the yard, I’ll excerpt the part where I disagree with Berin a bit:

First, while it’s certainly true that there are privacy advocates who seem incapable of grasping that not all rational people place an equally high premium on anonymity, it strikes me as unduly dismissive to suggest, as Berin Szoka does, that it’s inherently elitist or condescending to question whether most users are making informed choices about their privacy. If you’re a reasonably tech-savvy reader, you probably know something about conventional browser cookies, how they can be used by advertisers to create a trail of your travels across the Internet, and how you can limit this.  But how much do you know about Flash cookies? Did you know about the old CSS hack I can use to infer the contents of your browser history even without tracking cookies? And that’s without getting really tricksy. If you knew all those things, congratulations, you’re an enormous geek too — but normal people don’t.  And indeed, polls suggest that people generally hold a variety of false beliefs about common online commercial privacy practices.  Proof, you might say, that people just don’t care that much about privacy or they’d be attending more scrupulously to Web privacy policies — except this turns out to impose a significant economic cost in itself.

I still end up rejecting most of the proposed arguments for regulation, though a couple of the suggested rules (notice requirement, liquidated damages for intentional breach of stated privacy policy) struck me as more defensible, if not especially urgent.

That aside, I want to get down to the more important business of suggesting a TLF theme song: The Magnetic Fields’ sardonic “Technical (You’re So)” (whence the title of this post),  in which wordsmith/crooner Stephin Merritt delivers such lines as: “There are no papers on you /  The laws don’t cover what you do / You and your think-tank entourage / Are all counterculture demigods” and “You’re a Libertarian / The death of the left was you / You look like Herbert Von Karajan / You live underneath the zoo.”  Sure, they’re meant as mockery when Merritt sings them, but then, “queer” used to be a pejorative too. Reappropriation, baby.

Also, rhyming “Libertarian” with “Von Karajan” is the greatest act of poetry in music since Sting paired “He starts to shake and cough” with “the old man in / that book by Nabokov.”  Fact.
