Privacy, Security & Government Surveillance

With China’s Internet filtering back in the spotlight, this is as good a time as any to rewatch Clay Shirky’s excellent TED talk on the political implications of the ongoing media revolution—with a fascinating case study of a recent episode in the People’s Republic.

Two points that probably deserve emphasis. The first is that the explosion of user-generated content in one sense makes the control of search engines even more important for a regime that’s trying to limit access to politically inconvenient information. You can block access to Amnesty International, and you can even try to play whack-a-mole with all the mirrors that pop up, but when the ideas you’re trying to suppress can essentially crop up anywhere, a strategy that relies on targeting sites is going to be hopeless. The search engine is a choke point: You can’t block off access to every place where someone might talk about the Tiananmen massacre, but if you can lock down people’s capacity to search for “Tiananmen massacre,” you can do the next best thing, which is to make it very difficult for people to find those places. There are always innumerable workarounds for simple text filters (“Ti@n@nm3n”), but if people are looking for pages, the searchers and the content producers need to converge on the same workaround, by which point the authorities are probably aware of it as well and able to add it to the filter. It’s the same reason people who want to shut down illegal BitTorrent traffic have to focus on the trackers.
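The cat-and-mouse dynamic of simple text filters can be sketched in a few lines (a toy illustration; the blocklist and the normalization table are hypothetical, not anyone’s actual filter):

```python
# A censor's naive blocklist: flag any text containing a banned phrase.
BLOCKLIST = {"tiananmen massacre"}

def is_blocked(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

# A leetspeak workaround slips past the literal match...
assert not is_blocked("let's talk about the Ti@n@nm3n m@ss@cre")

# ...until the censor adds a normalization step folding common
# character substitutions back into plain letters.
SUBS = str.maketrans("@310$", "aeios")

def is_blocked_v2(text: str) -> bool:
    normalized = text.lower().translate(SUBS)
    return any(term in normalized for term in BLOCKLIST)

assert is_blocked_v2("let's talk about the Ti@n@nm3n m@ss@cre")
```

The asymmetry in the text is visible here: the workaround only helps searchers if writers converge on the same spelling, and once they do, the censor can fold it into the filter just as cheaply.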

The second point, however, is that social media also erodes the value of the search engine as a choke point, because it transforms the community itself into the search engine. For many broad categories of question I might want answered, I will get better information more rapidly by asking Twitter than by asking Google. Marshall McLuhan called media “the extensions of man,” because they amplify and extend the function of our biological nervous systems: The screen as prosthetic eye, the speaker as prosthetic ear, the book or the database as external memory storage. The really radical step is to make our nervous systems extensions of each other—to make man the extension of man. That’s hugely more difficult to filter effectively because it makes the generation of the medium’s content endogenous to the use of the medium. You can ban books on a certain topic because a static object gives you a locus of control; a conversation is a moving target. Hence, as Shirky describes, China just had to shut down Twitter on the Tiananmen anniversary, because there was no feasible way to filter it in real time.

An analogy to public key encryption might be apt here. The classic problem of secure communications was that you needed a secure channel to transmit the key: The process of securing your transmission against attack was itself a point of vulnerability. You had to openly agree to a code before you could start speaking in code. The classic problem of free communication is that the censors can see the method by which you’re attempting to evade censorship. The Diffie-Hellman handshake solves the security problem because an interactive connection between sufficiently smart systems lets you negotiate an idiosyncratic set of session keys without actually transmitting them. A conversation can similarly negotiate its own terms; given sufficient ingenuity, I can make it clear to a savvy listener that I intend for us to discuss Tiananmen in such-and-such a fashion, and the most you can do with any finite set of forbidden terms and phrases is slow the process down slightly.
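The mechanics of that negotiation can be made concrete with a toy sketch of the exchange itself (a minimal illustration, assuming Python; the prime here is far too small for real security, and real deployments also authenticate the endpoints):

```python
import secrets

# Toy Diffie-Hellman key exchange. Everything an eavesdropper sees
# (p, g, and both public values) is not enough to recover the key.
p = 2**127 - 1   # a Mersenne prime; real groups are 2048 bits or more
g = 3            # public generator

# Each side picks a private exponent and publishes only g^x mod p.
alice_secret = secrets.randbelow(p - 2) + 1
bob_secret = secrets.randbelow(p - 2) + 1
alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# Each side raises the other's public value to its own private
# exponent; both arrive at g^(ab) mod p without ever sending it.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)
assert alice_key == bob_key
```

The point of the analogy: the shared secret is never transmitted, just as the “terms” of a sufficiently ingenious conversation never appear on the wire in any form a finite blocklist can match.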

This is a big part of why, pace folks like Tim Wu, I’ll still allow myself to get into the spirit of ’96 every now and again. They can, to be sure, resolve to shut down Twitter and try to throw enough people in jail to intimidate folks into “self discipline,” as they charmingly term it. But the strategies of control available become hugely more costly when the function of the medium is less to connect people with information than to connect them to each other.

Following up from Adam’s post on Hillary Clinton’s speech on global Internet freedom, here’s an interesting blog post from Nora von Ingersleben at ACT. Nora was the lucky (and only) person at the event to ask a question of our Secretary of State. Her question centered on the practical: while it is all well and good that companies should “do the right thing,” there are real-world consequences when a company doesn’t comply with a legal request. How can off-shore employees be protected?

QUESTION: Nora von Ingersleben with the Association for Competitive Technology. Madame Secretary, you mentioned that U.S. companies have to do the right thing, not just what is good for their profits. But what if I am a U.S. company and I have a subsidiary in China and the Chinese Government is coming after my guys for information and, you know, we have resisted but now my guys have been taken to jail, my equipment is being hauled away. In that situation, what can the State Department do? Or what will the State Department do?

SECRETARY CLINTON: Well, we obviously speak out on those individual cases. And we are, as I said, hoping to engage in a very candid and constructive conversation with the Chinese Government. We have had a positive year of very open discussions with our Chinese counterparts. I think we have established a foundation of understanding. We disagree on important issues with them. They disagree on important issues with us. They have our perspective; we have our perspective. But obviously, we want to encourage and support increasing openness in China because we believe it will further add to the dynamic growth and the democratization on the local level that we see occurring in China.

Brad Smith, Microsoft’s Senior Vice President and General Counsel, addressed the Brookings Institution earlier this week, calling for government to get involved to enhance the safety, security and privacy of the “Cloud.” (Here’s a transcript of his remarks.)

Smith alluded to the fact that cloud computing is undergoing a powerful transformation and correctly pointed out that, even though millions of Americans are using cloud computing platforms today (and have been for years), the vast majority of them have no real concept of what cloud computing actually is or does — and neither do most policymakers.

This speech was very well timed, given the current Google-China kerfuffle from the past couple of weeks. Essentially, who is in charge of the data in the cloud? How can we guarantee that best practices are being used by providers? And, what role will the federal government play in the regulation of this powerful emerging technology? Continue reading →

There’s been a lot of hand-wringing lately about Google’s recent acquisitions of Teracent (ad-personalization) and AdMob (mobile ads), as well as Apple’s response, buying AdMob’s rival Quattro Wireless. Jeff Chester, true to form, quickly fired off an angry letter to FTC Chairman Jon Leibowitz, ranting about how the Google/AdMob deal would harm consumer privacy with the same vague fulminations as ever:

Google amasses a goldmine of data by tracking consumers’ behavior as they use its search engine and other online services. Combining this information with information collected by AdMob would give Google a massive amount of consumer data to exploit for its benefit.

Yup, that’s right, it’s all part of Google’s grand conspiracy to exploit (and eventually enslave) us all—and Apple is just a latecomer to this dastardly game. It’s not as if that data about users’ likely interests might, oh, I don’t know… actually help make advertising more relevant—and thus increase advertising revenues for the mobile applications/websites that depend on advertising revenues to make their business models work. No, of course not! Greedy capitalist scum like Google and Apple don’t care about anyone but themselves, and just want to extract every last drop of “surplus value” (as Marx taught us) from The Worker. (Never mind that in 4Q2009 Google generated $1.47 billion for website owners who use Google AdSense to sell ads on their sites—up 17% over 4Q2008—or that Apple has a strong incentive to maximize revenues for its iPhone app developers.) Internet users of the world, unite!  You have nothing to lose but all those “free” content and services thrown at your feet! Continue reading →

Over this past week, a lot of people were making hay over this recent ReadWriteWeb story, “Facebook’s Zuckerberg Says The Age of Privacy is Over.” Seems that some people were taking issue with Facebook founder Mark Zuckerberg’s suggestion that Facebook’s recent site policy changes, which generally encouraged more sharing of information, were in line with public expectations.  Most people put words in Zuckerberg’s mouth and accused him of saying that “privacy is over” or that he claimed he “is a prophet,” neither of which he actually said.  But let’s ignore the fact that some people made stuff up and get back to the point: What set people off about Facebook’s recent site changes and Zuckerberg’s rationalization of them?

I think it goes back to the fact that a lot of people want to have their cake and eat it too. “It is the paradox of the cyber era,” notes Washington Post columnist Michael Gerson: We are “a nation of exhibitionists demanding privacy.”  Indeed, that’s true, but there’s a good reason why this so-called “privacy paradox” exists. As Larry Downes, author of the brilliant new book, The Laws of Disruption, argues:

People value their privacy, but then go out of their way to give it up. There’s nothing paradoxical about it. We do value privacy. It’s just that we’re willing to trade it for services we value even more. Consumers intuitively look at the information being requested and decide whether the value they receive for disclosing it is worth the cost of their privacy. (p. 80)

That’s exactly right. When confronted with real world choices about privacy and information sharing, we often are willing to accept some trade-offs in exchange for something of value. But when we are asked about this process we are loath to admit that we would willingly engage in such privacy-for-services trade-offs even if we do it every day of our lives.  As Michael Arrington of TechCrunch rightly points out:

Continue reading →

Yesterday’s bombshell announcement that Google is prepared to pull out of China rather than continuing to cooperate with government Web censorship was precipitated by a series of attacks on Google servers seeking information about the accounts of Chinese dissidents.  One thing that leaped out at me from the announcement was the claim that the breach “was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” That piqued my interest because it’s precisely the kind of information that law enforcement is able to obtain via court order, and I was hard-pressed to think of other reasons they’d have segregated access to user account and header information.  And as Macworld reports, that’s precisely where the attackers got in:

That’s because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.

This is hardly the first time telecom surveillance architecture designed for law enforcement use has been exploited by hackers. In 2005, it was discovered that Greece’s largest cellular network had been compromised by an outside adversary. Software intended to facilitate legal wiretaps had been switched on and hijacked by an unknown attacker, who used it to spy on the conversations of over 100 Greek VIPs, including the prime minister.

As an eminent group of security experts argued in 2008, the trend toward building surveillance capability into telecommunications architecture amounts to a breach-by-design, and a serious security risk. As the volume of requests from law enforcement at all levels grows, the compliance burdens on telecoms grow also—making it increasingly tempting to create automated portals that permit access to user information with minimal human intervention.

The problem of volume is front and center in a leaked recording released last month, in which Sprint’s head of legal compliance, Paul Taylor, revealed that their automated system had processed 8 million requests for GPS location data in the span of a year, noting that it would have been impossible to manually serve that level of law enforcement traffic.  Less remarked on, though, was Taylor’s speculation that someone who downloaded a phony warrant form and submitted it to a random telecom would have a good chance of getting a response—and one assumes he’d know if anyone would.

The irony here is that, while we’re accustomed to talking about the tension between privacy and security—to the point where it sometimes seems like people think greater invasion of privacy ipso facto yields greater security—one of the most serious and least discussed problems with built-in surveillance is the security risk it creates.

Cross-posted from Cato@Liberty.

by Adam Thierer & Berin Szoka, Progress Snapshot 6.1

Stephanie Clifford of the New York Times posted a very interesting article this week summarizing a recent “on-the-record chat” the Times staff had with Federal Trade Commission (FTC) Chairman Jon Leibowitz and FTC Bureau of Consumer Protection chief David Vladeck.  The interview [discussed by Braden here] is profoundly important in that it reveals an alarming disconnect regarding the relationship between “privacy” regulation and the future of media, which were the subjects of their discussion with Times staff.  Namely, Leibowitz and Vladeck apparently fail to appreciate how the delicate balance between commercial advertising and journalism is at risk precisely because of the sort of regulations they apparently are ready to adopt.  Because the value of online advertising depends on data about its effectiveness and consumers’ likely interests, and because advertising is indispensable to funding media, what’s ultimately at stake here is nothing short of the future of press freedom.

The “Day of Reckoning” Is Upon Us

Leibowitz and Vladeck spend the first half of the Times interview wringing their hands about “privacy policies,” the declarations made by websites and advertising networks about their data collection and use practices (for which the FTC can and must hold them accountable).  But the two feel that privacy policies don’t adequately inform consumers.  Chairman Leibowitz claims that online companies “haven’t given consumers effective notice, so they can make effective choices.”  And Mr. Vladeck states that advise-and-consent models “depended on the fiction that people were meaningfully giving consent.” But he and the FTC seem ready to abandon the notice and choice model because the “literature is clear” that few people read privacy policies, Vladeck told the Times.  He and Leibowitz continue:

“Philosophically, we wonder if we’re moving to a post-disclosure era and what that would look like,” Mr. Vladeck said. “What’s the substitute for it?” He said the commission was still looking into the issue, but it hoped to have an answer by June or July, when it plans to publish a report on the subject. Mr. Leibowitz gave a hint as to what might be included: “I have a sense, and it’s still amorphous, that we might head toward opt-in,” Mr. Leibowitz said.

This clearly foreshadows the regulatory endgame we have long suspected was coming.  When the FTC released its “Self-Regulatory Principles for Online Behavioral Advertising” eleven months ago, we asked: “What’s the Harm & Where Are We Heading?”  Their answers to both questions have become clearer with each new calculated comment—all apparently intended to slowly “turn up the heat” on the advertising industry so that the proverbial frog will stay in the pot until the water finally boils.  Leibowitz’s FTC has simply dodged the “harm” question with a four-part strategy: Continue reading →

Google’s policy blog just announced that Google, along with several other companies around the world, has been subjected to Chinese-sponsored cyber attacks.  As a result, Google will stop censoring search results on Google.cn and may close its Chinese offices.

This decision is refreshing.  Despite over two decades of easing restrictions on its people, the Chinese regime remains brutally oppressive and continues to commit heinous crimes against its own people.  In a world that’s all too eager to look the other way so it can cash in on China’s economic boom, Google has decided to forgo profits and take a stand against this oppressive regime.

I hope that many other companies follow Google’s lead.  Perhaps even the US government could do so, but so long as China owns one out of every four dollars of foreign-held US debt, Google shouldn’t count on it.

Last year there was discussion of a possible return of the FCC’s “Fairness Doctrine” that used to apply to broadcasters. This year, we should all be aware of the FTC’s stepped-up rhetoric toward an “Unfairness Doctrine” for privacy: an increased effort toward enforcing the “unfair” part of Section 5 of the FTC Act, which prohibits unfair or deceptive practices.

Historically, the FTC’s approach to privacy has been one of notice and consent: hold companies to the word of their privacy policies — if companies say one thing and then do another, the FTC goes after them for being deceptive. This is the “deceptive” part of the FTC’s power to enforce the law against unfair or deceptive commercial practices.

For privacy, we really haven’t seen the “unfair” part being enforced. But if public comments from high-ranking officials are any indicator (and they are), that’s about to change.

A recent New York Times article summarizes its interview with FTC Chairman Jon Leibowitz and David Vladeck, chief of the FTC’s Bureau of Consumer Protection. It’s another insight into how aggressive the commission wants to be toward privacy.

Advise-and-consent “depended on the fiction that people were meaningfully giving consent,” Mr. Vladeck said. “The literature is clear” that few people read privacy policies, he said.

But even if people did read privacy policies, Vladeck still doesn’t think it is fair that people give consent to data practices, often in exchange for free services: Continue reading →

Today I appeared on CNBC [video here and embedded down below] to discuss concerns about emerging “smart-sign” technology, which could give rise to a new generation of interactive retail advertising and marketing efforts. This is in the news because, as Don Clark and Nick Wingfield report today in The Wall Street Journal (“Intel, Microsoft Offer Smart-Sign Technology: Retailers, Product Marketers Could Discern Viewer, Make Choices on What to Display and Transfer Coupons Via Phone“), Intel and Microsoft have announced that:

they will collaborate to help companies create and use new forms of digital signs. By exploiting Intel chips and Microsoft software, the companies hope to bring more interactivity to such devices and help retailers customize marketing offers to consumers. Signs equipped with cameras and specialized software could recognize the age, gender and height of people in front of them, and tell what products and images received the most attention, the companies said. By gathering information about which messages are more effective, they add, traditional retailers could develop marketing approaches that better counter Web-based competitors. “Every year retailers lose more ground to online [sellers], and they have to do something about that,” said Joe Jensen, general manager of Intel’s embedded computing division.

Below, I have jotted down a few thoughts about the rise of “digital signage” and more targeted forms of retail marketing, only a few of which I was able to get across in this short TV spot. I think it’s an exciting new development for both retailers and consumers, for the reasons I explain:

http://plus.cnbc.com/rssvideosearch/action/player/id/1383744249/code/cnbcplayershare Continue reading →