Privacy, Security & Government Surveillance

Today’s Washington Post has a story entitled U.S. Web-Tracking Plan Stirs Privacy Fears. It’s about the reversal of an ill-conceived policy adopted nine years ago to limit the use of cookies on federal Web sites.

In case you don’t already know this, a cookie is a short string of text that a server sends a browser when the browser accesses a Web page. Cookies allow servers to recognize returning users so they can serve up customized, relevant content, including tailored ads. Think of a cookie as an eyeball – who do you want to be able to see that you visited a Web site?
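For the curious, the mechanics are simple: the server’s response includes a `Set-Cookie` header, and the browser sends the name/value pair back on later requests to that site. Here’s a small sketch using Python’s standard `http.cookies` module — the cookie name, value, and domain are made up for illustration:

```python
from http.cookies import SimpleCookie

# A server sets a cookie via a response header like this one
# (hypothetical values for illustration):
raw = "session_id=abc123; Domain=example.com; Path=/; Max-Age=3600"

cookie = SimpleCookie()
cookie.load(raw)

# The browser stores the name/value pair and replays it on every
# later request to example.com, letting the server recognize you.
morsel = cookie["session_id"]
print(morsel.value)        # abc123
print(morsel["domain"])    # example.com
```

That replayed identifier is the “eyeball”: whoever set the cookie can recognize you when you return.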

Your browser lets you control what happens with the cookies offered by the sites you visit. You can issue a blanket refusal of all cookies, you can accept all cookies, and you can decide which cookies to accept based on who is offering them. Here’s how:

  • Internet Explorer: Tools > Internet Options > “Privacy” tab > “Advanced” button: Select “Override automatic cookie handling” and choose among the options, then hit “OK” and then “Apply.”

I recommend accepting first-party cookies – offered by the sites you visit – and blocking third-party cookies – offered by the content embedded in those sites, like ad networks. (I suspect Berin disagrees!) Or ask to be prompted about third-party cookies just to see how many there are on the sites you visit. If you want to block or allow specific sites, select the “Sites” button to do so. If you selected “Prompt” in cookie handling, your choices will populate the “Sites” list.

  • Firefox: Tools > Options > “Privacy” tab: In the “cookies” box, choose among the options, then hit “OK.”

I recommend checking “Accept cookies from sites” and leaving unchecked “Accept third party cookies.” Click the “Exceptions” button to give site-by-site instructions.
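The first-party/third-party distinction these settings turn on is, at bottom, a domain comparison: does the cookie’s domain match the site in your address bar? A rough sketch of that check, with hypothetical domains (real browsers consult the Public Suffix List and more, so treat this as a simplification):

```python
from urllib.parse import urlsplit

def is_third_party(page_url: str, cookie_domain: str) -> bool:
    """Rough heuristic: a cookie is third-party when its domain is
    neither the host of the page you visited nor a parent of it.
    Browsers use the Public Suffix List; this is a simplification."""
    host = urlsplit(page_url).hostname or ""
    cookie_domain = cookie_domain.lstrip(".")
    return not (host == cookie_domain or host.endswith("." + cookie_domain))

# A cookie from the site itself is first-party...
print(is_third_party("https://news.example.com/story", "example.com"))    # False
# ...while one from an embedded ad network is third-party.
print(is_third_party("https://news.example.com/story", "adnetwork.com"))  # True
```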

There are many other things you can do to protect your online privacy, of course. Because you can control cookies, a government regulation restricting cookies is needless nannying. It may marginally protect you from government tracking – they have plenty of other methods, both legitimate and illegitimate – but it won’t protect you from tracking by others, including entities who may share data with the government.

The answer to the cookie problem is personal responsibility. Did you skip over the instructions above? The nation’s cookie problem is your fault.

If society lacks awareness of cookies, Microsoft (Internet Explorer), the Mozilla Foundation (Firefox), and producers of other browsers (Apple/Safari, Google/Chrome) might consider building cookie education into new browser downloads and updates. Perhaps they should set privacy-protective defaults. That’s all up to the community of Internet users, publishers, and programmers to decide, using their influence in the marketplace. (I suspect Berin is against it!)

Artificially restricting cookies on federal Web sites needlessly hamstrings federal Web sites. When the policy was instituted it threatened to set a precedent for broader regulation of cookie use on the Web. Hopefully, the debate about whether to regulate cookies is over, but further ‘Net nannying is a constant offering of the federal government (and other elitists).

By moving away from the stultifying limitation on federal cookies, the federal government acknowledges that American grown-ups can and should look out for their own privacy.

Google’s New Opt-Out

August 11, 2009

You want privacy, do ya’?



Google Opt Out Feature Lets Users Protect Privacy By Moving To Remote Village

What Unites Advocates of Speech Controls & Privacy Regulation? [pdf]

by Adam Thierer & Berin Szoka, The Progress & Freedom Foundation, Progress on Point No. 16.19

Anyone who has spent time following debates about speech and privacy regulation comes to recognize the striking parallels between these two policy arenas. In this paper we will highlight the common rhetoric, proposals, and tactics that unite these regulatory movements. Moreover, we will argue that, at root, what often animates calls for regulation of both speech and privacy are two remarkably elitist beliefs:

  1. People are too ignorant (or simply too busy) to be trusted to make wise decisions for themselves (or their children); and/or,
  2. All or most people share essentially the same values or concerns and, therefore, “community standards” should trump household (or individual) standards.

While our use of the term “elitism” may unduly offend some who are understandably sensitive to populist demagoguery, our aim here is not to launch a broadside against elitism as Time magazine culture critic William A. Henry III once defined it: “The willingness to assert unyieldingly that one idea, contribution or attainment is better than another.”[1] Rather, our aim here is to critique that elitism which rises to the level of political condescension and legal sanction. We attack not so much the beliefs of some leaders, activists, or intellectuals that they have a better idea of what is in the public’s best interest than the public itself does, but rather the imposition of those beliefs through coercive, top-down mandates.

That sort of elitism—elitism enforced by law—is often the objective of speech and privacy regulatory advocates. Our goal is to identify the common themes that unite these regulatory movements, explain why such political elitism is unwarranted, and make it clear how it threatens individual liberty as well as the future of a free and open Internet. As an alternative to this elitist vision, we advocate an empowerment agenda: fostering an environment in which users have the tools and information they need to make decisions for themselves and their families. Continue reading →

By Eric Beach, Adam Marcus & Berin Szoka

In the first entry of the Privacy Solution Series, Berin Szoka and Adam Thierer noted that the goal of the series is “to detail the many ‘technologies of evasion’ (i.e., empowerment or user ‘self-help’ tools) that allow web surfers to better protect their privacy online.” Before outlining a few more such tools, we wanted to step back and provide a brief overview of the need for, goals of, and future scope of this series.

We started this series because, to paraphrase Smokey the Bear, “Only you can protect your privacy online!” While the law can play a vital role in giving full effect to the Fourth Amendment’s restraint on government surveillance, privacy is not something that can simply be created or enforced by regulation because, as Cato scholar Jim Harper explains, privacy is “the subjective condition that people experience when they have power to control information about themselves.” Thus, when the appropriate technological tools and methods exist and users “exercise that power consistent with their interests and values, government regulation in the name of privacy is based only on politicians’ and bureaucrats’ guesses about what ‘privacy’ should look like.” As Berin has put it:

Debates about online privacy often seem to assume relatively homogeneous privacy preferences among Internet users. But the reality is that users vary widely, with many people demonstrating that they just don’t care who sees what they do, post or say online. Attitudes vary from application to application, of course, but that’s precisely the point: While many reflexively talk about the ‘importance of privacy’ as if a monolith of users held a single opinion, no clear consensus exists for all users, all applications and all situations.

Moreover, privacy and security are both dynamic: The ongoing evolution of the Internet, shifting expectations about online interaction, and the constant revelations of new security vulnerabilities all make it impossible to simply freeze the Internet in place. Instead, users must be actively engaged in the ongoing process of protecting their privacy and security online according to their own preferences.

Our goal is to educate users about the tools that make this task easier. Together, user education and empowerment form a powerful alternative to regulation. That alternative is “less restrictive” because regulatory mandates come with unintended consequences and can never reflect the preferences of all users.

Continue reading →

The new Maine law I blogged about on Sunday is much worse than I thought based on my initial reading. If allowed to stand, it would constitute a sweeping age verification mandate introduced through the back door of “child protection.”

The law, which goes into effect in September, would extend the approach of the Children’s Online Privacy Protection Act (COPPA) of 1998 by requiring “verifiable parental consent” before the collection of “personal information” about kids, not just those under 13 but also adolescents ages 13-17. Unlike other state-level proposals in New Jersey, Illinois, Georgia and North Carolina, Maine’s “COPPA 2.0” law would also cover health information, but would only govern the collection and use of data for marketing purposes (while the FTC has interpreted COPPA to cover essentially any capability for communicating personal information among users).

But the Maine law would go much further than these proposals or COPPA itself by banning transfer or use of such data in anything other than de-identified, aggregate form. Still I took some comfort in the fact that the Maine law, unlike COPPA or these other proposals, lacked the second of COPPA’s two prongs: (i) collection from kids and (ii) collection on sites that are directed at kids. It’s because of the second prong that COPPA applies not only when a site operator knows that it’s collecting information from kids (or merely allowing them to share information with other users), but also when the operator’s site is (like, say, Club Penguin) targeted to kids in terms of its subject matter, branding, interface, etc. Because I initially concluded that the Maine law would apply only to knowing collection, I supposed that it would be less likely to require age verification of all users, as other COPPA 2.0 proposals would—something that would be unlikely to survive a First Amendment challenge based on the harm to online anonymity.

But I was quite wrong. During the PFF Capitol Hill briefing Adam and I held on Monday, Jim Halpert, one of our panelists, noted that the bill imposed “strict liability.”  Continue reading →

In response to Professor Jonathan Zittrain’s op-ed in The New York Times last Monday about online privacy and open platforms (which Adam thoroughly refuted last week), I have a letter to the editor in today’s New York Times:

To the Editor: Re “Lost in the Cloud” (Op-Ed, July 20): In discussing the privacy risks that have accompanied the growth of the Internet, Prof. Jonathan Zittrain rightly bemoans the willingness of governments to violate individuals’ privacy rights. Unfortunately, he proposes new legal restrictions that would stifle online innovation while doing little to enhance consumer privacy. Mr. Zittrain proposes a “fair practices law” that would require companies to release personal data back to users upon request. Such a rule may sound workable, but purging specific data across globally dispersed server farms is no simple endeavor. Who is to pay for the implementation of such privacy procedures — especially for free services like Facebook or Twitter that have yet to turn a profit? A better approach to online privacy is to educate users on safeguarding personal information. Ultimately, however, the only foolproof approach to protecting sensitive data online is to simply not disclose it.

Continue reading →

Maine has just enacted a law severely restricting marketing to kids: the Act To Prevent Predatory Marketing Practices against Minors, summarized by Covington & Burling. Adam and I released a major paper in June about such laws: COPPA 2.0: The New Battle over Privacy, Age Verification, Online Safety & Free Speech. Maine is following the lead of several other states that have tried to expand the Children’s Online Privacy Protection Act (COPPA) of 1998 to cover not just kids under 13 but adolescents as well, and potentially all social networking sites. We discussed at length the problems such laws create, particularly the possibility that large numbers of adults would, for the first time, be subject to age verification mandates before accessing (or participating in) the growing range of sites with social networking capabilities. This, in turn, would significantly “chill” free speech online by undermining anonymity.

Like COPPA 2.0 proposals in New Jersey (simply extending COPPA to cover adolescents) and Illinois (applying COPPA to most social networking sites), the Maine law tries to build on COPPA’s “verifiable parental consent” requirement for the 13-17 audience as well as those under 13.

On the one hand, the Maine law goes much further than these other COPPA 2.0 proposals. While the original bill was limited to the Internet and wireless communications, the final bill applies to all communications. The bill also covers “health-related” information (HRI) as well as “personal information” (PI). On the other hand, the Maine law is somewhat narrower than other COPPA 2.0 proposals and COPPA itself in that it applies only to “marketing or advertising products, goods or services.” While COPPA is commonly misunderstood to cover only marketing, it actually covers essentially any “collection” (broadly defined) of personal information from kids for any purpose—including merely giving kids access to communications functionality that might let them share personal information with other users (even if the site itself is not “collecting” that information in the commonly understood sense).

Continue reading →

From the Portland Press Herald:

Wanted: 250 Maine drivers willing to let a stranger put a black box under their dashboard. The reward: $895 and the opportunity to speak their minds about the highway tax experiment to a researcher. University of Iowa researchers are seeking 250 motorists in Cumberland, York and Sagadahoc counties willing to have a computer tracking system installed in their cars for 10 months. The system could someday be used to tax drivers according to the number of miles they drive, rather than the amount of gasoline they consume.

This not only gets the award for most Orwellian government program of the week, but also wins the irony-in-incentives bonus prize. The new tax is meant to make up for the loss in gas-tax revenue from more fuel-efficient cars and folks using less gas during the recession. In doing so, this black-box tax would essentially punish motorists for driving more efficient cars, which is supposed to be a goal of the gas tax (other than raising revenue).

Bottom line: If you need more money for highways, build more tolls or raise the gas tax, don’t track your citizens.

by Eric Beach & Adam Thierer

In our ongoing “Privacy Solutions Series” we have been outlining various user-empowerment or user “self-help” tools that allow Internet users to better protect their privacy online. These tools and methods form an important part of a layered approach that we believe offers a more effective alternative to government-mandated regulation of online privacy. [See entries 1, 2, 3, 4] In this installment, we will be exploring CCleaner, a free Windows-based tool created by UK-based software developer Piriform that scrubs your computer’s hard drive and cleans its registry. We’ll describe how CCleaner helps you destroy data and protect your private information.

Whenever you move files to the recycling bin and subsequently purge the recycling bin, the affected files remain on your computer. In other words, deleting files from the recycling bin does not remove them from the computer. The reason for this is important and, in many ways, beneficial.

In some respects, many computer file systems work like an old library catalog system. A file is like a catalog card and contains the reference to the actual place on the hard drive where the information contained in the file is stored. When a user deletes a file, the computer does not actually clean all the affected hard drive space. Instead, to extend the analogy, the computer simply removes the card catalog entry that points to the hard drive space where the file is contained and frees up this space for new files. The reason this is usually beneficial is that cleaning the hard drive space occupied by a file can take a while. If you want evidence of this, look no further than the length of time required to reformat a hard drive (reformatting a hard drive actually clears the disk’s contents).

The practical implication of the way hard drives work is that when you delete an important memo from your computer, it is not actually gone. Similarly, when you clear your browsing history, it is not gone. The bottom line is that an individual who can access your hard drive (a thief, the government, etc.) could view many or all of the files you deleted.

The solution to this problem is to ensure that when a file is deleted, the space on the hard drive occupied by that file is not simply flagged as available space but is entirely rewritten with unintelligible data. One of the best programs for accomplishing this is CCleaner (which formerly stood for “Crap Cleaner”!).
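The overwrite-before-delete idea can be sketched in a few lines of Python. The `shred` function below is a hypothetical illustration of the technique, not how CCleaner itself works; note too that on SSDs and journaling filesystems, overwriting at the file level is not a firm guarantee of destruction.

```python
import os
import tempfile

def shred(path: str, passes: int = 1) -> None:
    """Overwrite a file's bytes with random data, then delete it,
    so the freed disk space no longer holds the old contents.
    (Illustrative sketch only; SSDs and journaling filesystems
    may retain copies that a file-level overwrite cannot reach.)"""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with noise
            f.flush()
            os.fsync(f.fileno())       # force the overwrite to disk
    os.remove(path)

# Demo: a "secret" memo is overwritten before it is unlinked.
fd, memo = tempfile.mkstemp()
os.write(fd, b"confidential draft")
os.close(fd)
shred(memo)
print(os.path.exists(memo))  # False
```

An ordinary `os.remove` would only drop the “catalog card”; the overwrite step is what destroys the underlying data.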

Continue reading →

Evgeny Morozov has an op-ed in the New York Times today that makes the case that cyberattacks are not an existential threat to the country or anything even close. He also argues that more secrecy around cybersecurity is exactly the wrong way to address the problem, citing the old geek adage “given enough eyeballs, all bugs are shallow.” He even explains that “Much of the real computer talent today is concentrated in the private sector,” and that “It’s no secret that many computer science graduates perceive government jobs as an ‘IT ghetto.'”

So far so good. Bravo, in fact. Unfortunately, he suggests that “To inject more talent into government IT jobs, it is necessary to raise their visibility and prestige, perhaps by creating national Tech Corps that could introduce talent into sectors that need it most.”

As Jim Harper has noted, given that cyberattacks may not be as serious a threat as many assume, it might be better to allow the private sector (which has the talent and the incentive) to protect its own infrastructure. DHS can protect the .govs and the military the .mils. The government could benefit from private R&D on run-of-the-mill cybersecurity and focus on protecting critical and secret assets, which in any case should not be connected directly to the wider Internet.