Privacy, Security & Government Surveillance

The WSJ ran a front-page, above-the-fold headline screaming that Facebook had suffered a privacy breach. But as Steve DelBianco discusses over at the NetChoice blog, today’s WSJ “breach” is all smoke and no fire.

The WSJ is saying that some of Facebook’s applications are accidentally sharing the public username on my Facebook page, in violation of the company’s privacy policy. This story was nothing like a breach where my credit card numbers or sensitive personal information was leaked or hacked. A closer look at the issue indicates that there is far more smoke than fire in the WSJ piece.
Moreover, the WSJ should step back from using tabloid-style headlines to attract eyeballs (and advertising revenue) to its research and writing. The breathless headline is clearly meant to feed the privacy beast that is increasingly in danger of doing far more harm than good.

While details are still forthcoming, it appears that the issue at hand involves external actions between application developers and advertising companies. Facebook has stepped up and is holding third parties accountable to existing privacy requirements.

Late last month, the National Research Council released a book entitled “Biometric Recognition: Challenges and Opportunities” that exposes the many difficulties with biometric identification systems. Popular culture has portrayed biometrics as nearly infallible, but it’s just not so, the report emphasizes. Especially at scale, biometrics will encounter a lot of challenges, from engineering problems to social and legal considerations.

“[N]o biometric characteristic, including DNA, is known to be capable of reliably correct individualization over the size of the world’s population,” the report says. (page 30) As with analog, in-person identification, biometrics produces a probabilistic identification (or exclusion), but not a certain one. Many biometrics change with time. Due to injury, illness, and other causes, a significant number of people do not have biometric characteristics like fingerprints and irises, requiring special accommodation.

At the scale often imagined for biometric systems, even a small number of false positives or false negatives (referred to in the report as false matches and false nonmatches) will produce considerable difficulties. “[F]alse alarms may consume large amounts of resources in situations where very few impostors exist in the system’s target population.” (page 45)

Consider a system that produces a false negative, excluding someone from access to a building, one time in a thousand. If there aren’t impostors attempting to defeat the biometric system on a regular basis, the managers of the system will quickly come to assume that the system is always mistaken when it produces a “nonmatch” and they will habituate to overruling the biometric system, rendering it impotent.
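To make the base-rate arithmetic behind these two problems concrete, here is a back-of-the-envelope sketch in Python. The error rates and traffic figures are illustrative assumptions, not numbers taken from the NRC report:

```python
# Back-of-the-envelope arithmetic for a biometric checkpoint.
# All figures below are illustrative assumptions, not numbers from the NRC report.

false_match_rate = 0.001     # odds an innocent person is wrongly matched (false positive)
false_nonmatch_rate = 0.001  # odds a legitimate user is wrongly rejected (false negative)

entries_per_day = 5_000      # assumed daily scans at the checkpoint
impostors_per_day = 1        # genuine impostors are rare

# Expected legitimate users locked out each day. At these rates the staff
# sees several false rejections daily and learns to overrule the system.
false_rejections = (entries_per_day - impostors_per_day) * false_nonmatch_rate
print(f"Expected false rejections per day: {false_rejections:.1f}")

# Watch-list-style screening flips the problem: with so few real impostors,
# most alarms point at innocent people.
true_alarms = impostors_per_day * (1 - false_nonmatch_rate)
false_alarms = (entries_per_day - impostors_per_day) * false_match_rate
print(f"Share of alarms that are real impostors: "
      f"{true_alarms / (true_alarms + false_alarms):.0%}")
```

With these assumed rates, roughly five legitimate users are turned away every day, and about five of every six alarms point at an innocent person: exactly the resource drain and habituation problem the report describes.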

Context is everything. Biometric systems have to be engineered for particular usages, keeping the interests of the users and operators in mind, then tested and reviewed thoroughly to see if they are serving the purpose for which they’re intended. The report debunks the “magic wand” capability that has been imputed to biometrics: “[S]tating that a system is a biometric system or uses ‘biometrics’ does not provide much information about what the system is for or how difficult it is to successfully implement.” (page 60)

By Adam Thierer & Berin Szoka

Last Friday, Common Sense Media (CSM) held an event (video) at the National Press Club featuring the chairmen of the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC). The regulatory activist group released a new poll on children and privacy (Exec Summary & Full Survey). Unfortunately, like almost every other privacy-related poll, theirs is geared more toward fueling a privacy panic than toward exploring the real-world trade-offs between legislating “greater privacy” (a hopelessly abstract concept in most conversations) and losing the consumer benefits of data sharing: innovation in online services and the quality and quantity of services and content supported by data-driven advertising.

What better way to drum up Congressional support for paternalistic privacy legislation (restrictions on online data use) than by asserting that this is what the electorate already wants? The poll asks whether “Congress should update laws that relate to online privacy and security for children and teens.” Three-fifths (61% of parents, 62% of adults) said yes. But earlier in the survey, only 16% knew that the Children’s Online Privacy Protection Act of 1998 already prohibits “online companies… from collecting or using personal information from children under the age of thirteen without a parent’s permission.” (53% weren’t sure.) If parents don’t know what Congress has already done, how meaningful is it for them to say they think Congress needs to do more? (There’s a reason we don’t have direct democracy.)

Indeed, how useful are such polls anyway? Ultimately, what they really tell us is that if you ask parents—or adults in general—whether they’re concerned about protecting kids, of course most will say yes, because nobody wants to think of themselves as the kind of person who doesn’t care about kids.

This bias becomes even more problematic when the choice at issue involves such stark trade-offs—especially when we’re talking about throwing a wrench (restrictions on data use and collection) in the economic engine that has again and again provided funding for media and services that users just won’t pay for. As we’ve noted here before, privacy polls and surveys reveal only what the public will tell pollsters in response to the particular questions asked. On privacy, those questions are almost invariably designed to elicit responses suggesting an urgent need for more laws and government action. Even the fairest of these surveys is no substitute for real-world experiments in which people make real choices, in real time, often with real money, and face many real trade-offs.

Earlier this month, a coalition of ad and marketing associations made public a new self-regulatory program for behavioral advertising (or, as we like to call it, “interest-based advertising”). Will it be enough to satisfy the members of Congress champing at the privacy bit when they get back in November?

Hopefully. But it all depends on ad network uptake and user adoption. FTC Chairman Jon Leibowitz’s wait-and-see attitude toward the self-regulatory effort probably sums up the thoughts of many pro-regulatory privacy advocates. According to Politico’s Morning Tech, Leibowitz said:

We commend industry’s effort to get a broad group of industry leaders on board. However, the effectiveness of this effort will depend on how, and the extent to which, the opt-out is actually implemented and enforced – all of which is yet to be seen. We also urge industry to make sure that the opt-out is easy for consumers to find, use, and understand.

Making it easy for consumers is what the advertising option icon is all about. It’s a just-in-time “heads-up” accompanying ads that lets users learn more about why they’re seeing a particular ad. In the future, it will also let users opt out. Ad networks will pay a license fee for the right to display the icon and must submit to ongoing compliance monitoring.

It’s the compliance part that’s interesting. The Better Advertising project is a new company formed specifically for the self-regulatory program. According to Internet Retailer, “the Council of Better Business Bureaus and the Direct Marketing Association, a trade group for direct-to-consumer marketers and retailers, will begin monitoring compliance with the program early next year.”

Let’s hope the coalition moves quickly and successfully, before Congress does….

(Second in a series.)

The Register quotes security guru Bruce Schneier saying: “Facebook is the worst [privacy] offender – not because it’s evil but because its market is selling user data to its commercial partners.”

Facebook’s business model is to guide advertisements on its site toward users based on their interests as revealed by data about them. It is not to sell data about users. Selling data about users would undercut its advertising business.

It’s easy to misspeak in extemporaneous comments, and The Register is not your most careful media outlet. But we’ve almost got enough data points to show a consistent practice of misrepresentation on Bruce Schneier’s part. Perhaps that should be actionable as an unfair or deceptive practice under Section 5 of the FTC Act.

In a post here last month on “Two Paradoxes of Privacy Regulation,” I discussed some of the interesting — and to me, troubling — similarities between rising calls for online privacy regulation and ongoing attempts to enact various types of controls on online speech or expression. In that essay, I argued that while most privacy advocates are First Amendment supporters where content regulation is concerned, they abandon their free speech values and corresponding constitutional tests when it comes to privacy regulation. When the topic of debate shifts from concerns about potentially objectionable content to the free movement of personal information, personal responsibility and self-regulation become the last option, not the first. Privacy advocates typically ignore, downplay, or denigrate user-empowerment tools, even though many of those same advocates endorse “self-help” efforts as the superior method of dealing with objectionable speech or media content. In essence, they are claiming self-help is the right answer in one context but not the other. Ironically, then, privacy advocates and moral conservatives actually share much in common: they are using the same playbook to advance their goals, rejecting personal responsibility and user-empowerment tools and techniques in favor of government control over their respective issues.

Keeping that insight in mind, I want to take this comparison a step further and suggest that what really unites these two movements is a general conservatism about how our online lives and online business should be governed. For the moral conservatives, that instinct is well understood: they want to hold the line against what they believe is a decaying moral order by restricting access to potentially objectionable speech or content — dirty words, violent video games, online porn, or whatever else. The conservatism of the modern privacy movement is less obvious at first blush. I suspect that many privacy conservatives would not consider themselves “conservative” at all, and they might even be highly offended at being grouped in with moral conservatives who seek to wield government power to control online speech and expression. Nonetheless, the two groups share a common trait — an innate hostility to the impact of technological and social change within the realm of “rights” or values they care about. In their respective arenas, both reject the evolutionary dynamism of the free marketplace and long for a return to a simpler and supposedly better time.

Well, then, this post (via Adam Shostack) is for you!

“Dissent” goes through the numbers revealed in the first year of data breach reporting under the Health Insurance Portability and Accountability Act regulations. The post gives extremely light treatment to the possibility—indeed, the likelihood—of noncompliance with the regulations due to unawareness of breaches or judgments that reporting is more dangerous than not reporting.

But one also must wonder . . . Why does this matter?

Data breach notification is the grown-up version of the schoolyard taunt: “Your epidermis is showing!” The questions are: What part of the epidermis? And what social or economic consequences does it have?

Of course, these statistics may be interesting and relevant to security professionals, but harm is where the rubber hits the road for consumer protection. (See this interesting colloquy recently on Concurring Opinions.) Some data breaches have some relationship to consumer harm, but gross breach statistics don’t seem to be a window onto harm prevention.

The details of Tyler Clementi’s case are slowly revealing themselves. He was the Rutgers University freshman whose sex life was exposed on the Internet when fellow students Dharun Ravi and Molly Wei placed a webcam in his dorm room, transmitting the images that it captured in real time on the Internet. Shortly thereafter, Clementi committed suicide.

Whether Ravi and Wei acted out of anti-gay animus, titillation about Clementi’s sexual orientation, or simply titillation about sex, their actions were utterly outrageous, offensive, and outside of the bounds of decency. Moreover, according to Middlesex County, New Jersey prosecutors, they were illegal. Ravi and Wei have been charged with invasion of privacy.

This is what invasion of privacy looks like. It’s the outrageous, offensive, truly galling revelation of private facts like what happened in this case. Over the last 120 years, common law tort doctrine has evolved to find that people have a right not to suffer such invasions. New Jersey has apparently enshrined that right in a criminal statute.

The story illustrates how quaint are some of the privacy “invasions” we often discuss, such as the tracking of people’s web surfing by advertising networks. That information is not generally revealed in any meaningful way. It is simply being used to serve tailored ads.

This event also illustrates how privacy law is functioning in our society. It’s functioning fairly well. Law, of course, is supposed to reflect deeply held norms. Privacy norms—like the norm against exposing someone’s sexual activity without consent—are widely shared, so that the laws backing up those norms are rarely violated.

It is probably a common error to believe that law is “working” when it is exercised fairly often, with fines and penalties doled out with some regularity. Holders of this view see law—more accurately, legislation—as a tool for shaping society. Many of them would like to end the societal debate about online privacy, establishing a “uniform national privacy standard.” But nobody knows what that standard should be. The more often legal actions are brought against online service providers, the stronger the signal that online privacy norms are unsettled. That privacy debate continues, and it should.

It is not debatable that what Ravi and Wei did to Tyler Clementi was profoundly wrong. That was a privacy invasion.

At the Safe Internet Alliance event earlier this week there was a surprising amount of agreement on one aspect of sharing information on the Internet: eliminating the fear factor.

“Facts, not fear” was a meme throughout the event. Rep. Boucher discussed how comprehensive privacy legislation encourages Internet use because consumers wouldn’t need to worry about how their information is protected. And Josh Gottheimer of the FCC cited a study showing that one of the main reasons people don’t have broadband is, as he called it, the “fear factor.”

For increased use and adoption of the Internet and online services, cutting through the fear is key. That’s why I stressed that one of the main goals of any group discussing privacy-related public policies should be to distinguish legitimate concerns from overreactions.

For online safety, there was a period just a year or two ago when we saw a lot of rhetoric, but not a lot of facts, about the real risks and likely threats kids face online. Today the discussion is less fear-based and, as a result, much more productive for making the Internet safer. The NTIA OSTWG report stressed this fact-based approach.

Today, privacy is where the online safety debate was a few years ago. There’s a similar danger of overreaction, where rhetoric may crowd out productive solutions. But there’s also a risk of being too glib on either side: pro-regulatory privacy advocates may not value the need for legitimate revenue models, while businesses may sometimes dismiss legitimate privacy concerns.

Ultimately it may come down to a question of who decides. Whether the issue is default settings or what counts as personal information, is it government, companies, or consumers who decide? I’ll tip my hand here: I think the key is for consumers both to understand the decisions they make and to be allowed to make them.

Fear not: NetChoice looks forward to working with the Safe Internet Alliance and policymakers on privacy issues.

I’d like to recommend Sonia Arrison’s recent article on the need to update the Electronic Communications Privacy Act (ECPA). She makes a good case for why citizens should be worried about the government’s ability to invade their privacy when they keep data in the cloud. And citizens are customers, so online businesses worry that people may use their services less. But here’s another angle on why we need to update ECPA: to promote online safety. From an excellent analysis by Becky Burr, ECPA reform:

- Would establish uniform, clear, and easily understood rules about when and what kind of judicial review is needed by law enforcement to access electronic content; and
- Would, by clarifying the applicable rules, enable business to respond more quickly and with greater confidence to law enforcement requests and to avail themselves of hosted productivity technology.

Right now the law is muddled, and online services have a hard time distinguishing legitimate requests from overreaching ones. When the law is clarified, businesses and law enforcement can (with appropriate legal process) share information that helps find sexual predators and other online miscreants.