Privacy, Security & Government Surveillance

The day many had expected is finally here. This Reuters headline says it all: Senators seek crackdown on “Bitcoin” currency.

The main target of Sens. Chuck Schumer and Joe Manchin is Silk Road–the online illicit drug bazaar run via the TOR network–but bitcoin, the currency of choice on Silk Road, is also in their sights. (Sens. Roy Blunt and Claire McCaskill are getting in on the action as well.) In a recent letter, Schumer and Manchin asked the DOJ and DEA to shut down Silk Road and “seize” the website’s domain. More to the point, in his press conference, which you can watch here, Schumer said that bitcoin is “an online form of money laundering used to disguise the source of money, and to disguise who’s both selling and buying the drug.”

As the DOJ and DEA plan a response and this issue develops, I thought I’d offer some initial thoughts:

  • Bitcoin is digital cash, and like any form of cash, it can be used for good or for ill. Because, like all cash, it is largely anonymous, it will be used by people looking to evade official scrutiny. That could mean contributing anonymously to unpopular causes like Wikileaks, but it could also mean buying drugs online. We don’t ban hard-to-trace paper cash because we understand that there’s nothing inherently bad about it; it’s what people do with it that can be problematic. Bitcoin should be treated the same way.

Having said what I think ought to be, what’s really interesting is what will be, regardless of normative values. That is, can Silk Road and bitcoin actually be “cracked down” on?

  • The federal government is no doubt going to go after Silk Road. This sets up another “natural experiment” like the one presented by LulzSec taking bitcoin donations. Given that the site exists as a .onion address–an anonymous hidden service on the TOR network–will the feds be able to find who’s behind it and shut it down? We’ll see. They certainly won’t be able to “seize the domain” as Schumer and Manchin’s letter suggests. If a year from now the site is still operating, will we be able to say that government does not “possess any methods of enforcement we have true reason to fear”?

  • If the federal government seeks to go after bitcoin, it won’t be able to take down the network. That’s just impossible as far as I can tell. The weakest link in the bitcoin ecosystem, however, is the exchanges, like Mt Gox. These allow you to trade your bitcoins for dollars and vice versa. At this point, there’s not a lot you can buy with bitcoins, so the ability to trade them for a widely accepted currency is important.
    According to Gavin Andresen, the lead developer of the bitcoin project, Mt Gox “is careful to comply with all anti-money-laundering laws and regulations.” I’d love to know more about this. As far as I can tell, we know very little about who runs Mt Gox and how they comply with the law.

  • Even if the federal government is able to shut down Silk Road and exchanges like Mt Gox, we will quickly see others take their place. Silk Road will be supplanted by another anonymart (to use Kevin Kelly’s phrase), and we’ll see a replay of the drug war we know too well from meatspace. As for exchanges, we’ll see new ones pop up, likely in jurisdictions with liberal banking laws, and it will be interesting to see if Congress tries to make it illegal for financial institutions and payment processors to deal with them, just as it has made it illegal to deal with offshore online casinos. What I hope we’ll see emerge is a properly licensed and legally compliant domestic exchange that is as committed to fighting money laundering as Citibank. That would certainly help test bitcoin’s legality. This great paper by Reuben Grinberg gives me hope that, for now at least, there’s nothing inherently illegal about trading bitcoins.

Facebook announced yesterday that it had finished most of the global roll-out of its photo tag-suggestion feature, begun in the U.S. last December. Now ZDNet reports that European privacy regulators are already planning a probe. Emil Protalinski writes:

“Tags of people on pictures should only happen based on people’s prior consent and it can’t be activated by default,” Gerard Lommel, a Luxembourg member of the so-called Article 29 Data Protection Working Party, told BusinessWeek. Such automatic tagging “can bear a lot of risks for users” and the group of European data protection officials will “clarify to Facebook that this can’t happen like this.”

No doubt our friends at the Extra-Paternalist Internet Cops (EPIC) will jump into the fray with another of their many complaints to the FTC, dripping with outrage that Facebook has “opted us into” this feature. But what’s the big deal, really?  Emil explains how things work:

When you upload new photos, Facebook uses software similar to that found in many photo editing tools to match your new photos to other photos you’re tagged in. Similar photos are grouped together and, whenever possible, Facebook suggests the name(s) of your friend(s) in the photos. In other words, the square that magically finds faces in a photo now suggests names of your Facebook friends to streamline the tagging process, especially with the same friends in multiple uploaded photos.

Lifehacker explains how easy it is for Facebook users to opt out of having their friends see the automatically generated suggestion to tag their face (as Facebook did in its own announcement):

  1. Head to your Privacy Settings and click on Customize Settings.
  2. Scroll down to the “Suggest Photos of Me to Friends” setting and hit “Edit Settings”.
  3. In the drop-down on the right, hit “Disable”.

See the screenshots here. So, in short: The feature that’s upsetting the privacy regulationistas is a feature that saves us time and effort in tagging our friends in photos we upload—unless our friends have opted out of having their photos auto-suggested.


If you’re like me, you woke up at the crack of dawn today to maximize your enjoyment of World IPv6 Day. Don’t want to miss a minute! If you’re like me, you’ll also say untruthful things as a very dry form of sarcasm. I hope you got that.

Whatever your interest in IPv6—learn more by reading this heresy—you should take interest in whether the next generation of the Internet protocol will erode or enhance your ability to protect privacy. That’s a question that’s been gnawing at me for a long time.

IPv4 was designed without enough numbers to accommodate the worldwide, multiple-device Internet we’ve got today. IPv5 seems to have disappeared—and I’m desperate to know what happened to it. (see above re: sarcasm) Now we’re talking about IPv6, a major feature of which is that it has enough numbers to assign one to every device on the globe.

IPv6’s ginormous number space is great for simplifying the maintenance of quality communications on the modern Internet, but it could suck for privacy. You see, if every device can be assigned a permanent number, that number will act as a permanent identifier, and lots of privacy-reducing inferences can be drawn. I.e., “If I saw this IP number before, it’s probably the same device and the same person I dealt with before.” Communications and interactions that don’t require or benefit from tracking become trackable anyway. We lose a structural protection of privacy.

Luckily, the designers of the IPv6 protocol thought of that. Christopher Parsons explains in a thorough post from last year that the IPv6 protocol calls for rolling assignment of randomized numbers for initiators of communications. A Web server has to have a fixed address, of course. It’s the target of communications requests, and people need to know where to find it. But the computers that ask for content from such servers do not. IPv6 allows those devices to have transient, pretty darn random numbers that change with regularity. This way, the records of your surfing that come to rest in servers all over the world cannot be combined into a dossier of everything you ever did online. Your computer’s IP address does not become your de facto worldwide identifier.
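The mechanism Parsons describes is standardized as the IPv6 "privacy extensions" (RFC 4941): a host keeps generating fresh, random interface identifiers and appends them to its network's routing prefix. Here's a rough sketch of the idea in Python—an illustration of the concept, not a real implementation; the function name and example prefix are my own:

```python
import ipaddress
import os

def temporary_ipv6(prefix: str) -> ipaddress.IPv6Address:
    """Combine a /64 routing prefix with a freshly randomized 64-bit
    interface identifier, roughly in the spirit of RFC 4941."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "expects a /64 prefix"
    # Random 64-bit interface identifier. Real implementations also
    # clear the universal/local bit to mark the ID as locally generated.
    iid = int.from_bytes(os.urandom(8), "big") & ~(1 << 57)
    return ipaddress.IPv6Address(int(net.network_address) | iid)

addr1 = temporary_ipv6("2001:db8:1234:5678::/64")
addr2 = temporary_ipv6("2001:db8:1234:5678::/64")
# The two addresses share the routing prefix (so packets still find the
# host's network) but differ in the host portion, so servers logging
# them over time can't link the visits to one device.
```

Because only the low 64 bits change, routing is unaffected; the randomness lives entirely in the part of the address that would otherwise act as a stable identifier.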

But here’s the question: To what extent is this part of IPv6 being implemented? Are the organizations implementing IPv6 including randomized numbers for initiators of communications? Parsons has a clever turn of phrase suggesting one reason why they may not: “the ‘security institutions’ are better at dissolving privacy protections than the privacy community is at enshrining privacy in law.” It could also be simply that there’s some cost associated with IPv6’s randomization.

So, does anyone know the status of randomization in the IPv6 protocol? Is it being implemented?

The good news, I think, is that it seems fairly easy to test whether an ISP is deploying IPv6 in full or short-cutting on randomization. Set up a server out there, ping it with a consistent communication, and see if it sees the communication coming from a consistent IP address. If it does, then IPv6 randomization is not working. That’s a problem.
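The server half of that test could be sketched like this—a toy sketch, with the port number and helper names my own invention; a real check would collect connections from a known client over days, not minutes:

```python
import socket

def addresses_vary(seen):
    """True if observed source addresses change across connections,
    which suggests IPv6 privacy addressing is in effect."""
    return len(set(seen)) > 1

def observe_sources(port=8080, samples=3):
    """Tiny IPv6 listener that records the source address of each
    incoming connection, then reports whether the addresses varied."""
    seen = []
    with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("::", port))
        srv.listen()
        for _ in range(samples):
            conn, peer = srv.accept()
            seen.append(peer[0])  # source address as the server sees it
            conn.close()
    # Same address on every connection => randomization is not working.
    return addresses_vary(seen)
```

If a known client pings this listener repeatedly and `observe_sources` returns `False`, that client's ISP (or OS) is presenting a stable address to the world.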

Given the wisdom of “trust but verify,” I suppose this is not only an appeal for information about present practice, but a request that some group of technical smarties out there set up a system for routine verification that IPv6 randomization is fully and properly implemented by Internet service providers and other major deployers of Internet protocol. If you’ve already done it, do tell! Thanks!

On May 26th, it was my great pleasure to participate in a panel discussion on “Growing Up with the Mobile Net,” which was co-sponsored by the Congressional Internet Caucus and Common Sense Media. It was a conversation about kids’ privacy, online safety, teen free speech rights, anonymity, and the possibility of expanding the Children’s Online Privacy Protection Act (COPPA) and implementing the so-called “Internet Eraser Button.”

I was joined on the panel by Jules Polonetsky, Co-chair and Director of the Future of Privacy Forum, and Alan Simpson, Vice President of Policy at Common Sense Media. And the session was very ably moderated, as always, by the supremely objective Tim Lordan.*  We really unpacked the “Eraser Button” and “right to be forgotten” notion and thought through the ramifications. And the discussion about the extent of First Amendment rights for teenagers was also interesting.

The video for this 48-minute session can be found on the Congressional Internet Caucus YouTube page here and is embedded below.

Note: During the session, Tim Lordan claimed that he takes no position and that if anyone says he takes positions on issues, he will slap a super-injunction on them. Well, I say Tim Lordan is brimming with positions and he’s letting them fly at every juncture. In fact, I’ve never met anyone so full of controversial positions in my life as Tim Lordan! OK, so sue me Tim!

My latest Forbes column is entitled “With Freedom of Speech, The Technological Genie Is Out of the Bottle,” and it’s a look back at the amazing events that unfolded over the past week in the U.K. regarding privacy, free speech, and Twitter. I’m speaking, of course, about the “super-injunction” mess. I relate this episode to the ongoing research Jerry Brito and I are doing examining the increasing challenges of information control.

I begin by noting that:

When it comes to freedom of speech in the age of Twitter, for better or worse, the genie is out of the bottle. Controlling information flows on the Internet has always been challenging, but new communications technologies and media platforms make it increasingly difficult for governments to crack down on speech and data dissemination now that the masses are empowered. The most recent exhibit in the information control follies comes from the United Kingdom, where in the span of just one week the country’s enhanced libel law procedure was rendered a farce.

I go on to explain how Britain’s super-injunction regulatory regime unraveled so quickly and why it’s unlikely to be effectively enforceable in the future. Read the entire essay over at Forbes and then also check out Jerry’s Time TechLand editorial from last week, “Twitter’s Super-Duper U.K. Censorship Trouble.” I also just saw this piece by British defamation expert John Maher: “Law Playing Catch-up with New Media.” It’s worth a read.

“Cloud computing” is the term for applications that are handled by third-party software and storage on the Internet, like Google Docs and QuickBooks Online, as opposed to programs like Microsoft Word and Quicken, which you load and access from your PC.

Gmail and Hotmail were early examples of cloud computing. The cloud computing concept has since expanded to include popular applications like photo editing and sharing, money management and social networking. It also takes in the increasing number of cloud-based storage services, like Dropbox, which allows you to port documents from client to client, and Carbonite, which performs near real-time back-up of data and documents on your PC.

What most Americans don’t realize is that data stored in the cloud is not protected by the Fourth Amendment the way that same data is if stored on a PC, CD or detachable hard drive in the home. My op-ed in the Washington Times today outlines this problem, and points to a new bill in Congress, S.1011, introduced last week by Sen. Patrick Leahy (D-VT), as a big step toward closing this loophole. S.1011, also cited by Berin here, extends the due process provisions against illegal wiretapping in the existing Electronic Communications Privacy Act (ECPA) to personal data stored in data centers owned and operated by third parties.

As online services and applications evolve, it is critical that these due process protections extend to cloud services. Public cloud infrastructure, applications and platforms are growing at 25 percent per year, according to International Data Corp., a market research firm specializing in high-tech. IDC found that, as of year-end 2010, 56 percent of Internet users use webmail services, 34 percent store personal pictures online, 7 percent store personal videos, 5 percent pay to store files and 5 percent back up their hard drive to a website. These numbers are all expected to grow rapidly.

But this is about more than convenience or personal preference for data storage. Internet applications are becoming geared for the “cloud.” Cloud computing will be the easiest, cheapest and most efficient way users can access personal data on any device, in any location, at any time. It’s not simply an option in the way one chooses to manage data. Cloud computing is becoming necessary to go about one’s daily business. Legal protections need to be there.

Last week, I spoke to a group of Capitol Hill staffers about the current debate over online privacy policy. The topic is red-hot right now with 6 major bills pending and plenty of international and state-based activity percolating. I offered the staffers an overview of these issues as well as an alternative vision for how we might handle privacy concerns going forward.

I have embedded the video of my briefing below and it can also be found on the Mercatus website here. And the slide deck I used that day can also be found down below or over on Scribd here.

As Sonia Arrison mentioned here on Friday, the State of California is currently considering legislation that could, in the name of enhancing online privacy, impose burdensome new regulatory mandates on the Internet. Sonia has a nice column at TechNewsWorld discussing this. I also wrote about the same issue in my Forbes column this week, which is entitled, “The State of California Versus the Internet.” Specifically, I discuss SB 242, “The Social Networking Privacy Act,” and SB 761, the so-called Do Not Track bill, and argue that: “What unifies these two measures is a general lack of understanding about the way the Internet and digital technology work. Both measures fail to appreciate the global nature of the Internet and would raise a host of unintended consequences.”

While the best of intentions drive these measures, they will be complicated to enforce in practice and could have a devastating impact on the California economy in the process. “If California wants to reestablish itself as the home of high-tech innovation,” I argue, “it needs to realize heavy-handed Net controls are not the ticket to either economic progress or job-creation.” Moreover, “These laws could be challenged in court since state-based regulation of the Internet raises constitutional issues. The Commerce Clause of the Constitution was designed to block the sort of parochial burdens on interstate commerce that these measures would establish.”

Jump over to Forbes to read the rest. Let’s hope California policymakers realize what a mistake they are making before it’s too late. If they don’t, Congress will need to preempt this regulation of interstate commerce if it’s not immediately challenged in court and overturned.

Social widgets, such as the now-ubiquitous Facebook “Like” button and Twitter “Tweet” button, offer users a convenient way to share online content with their friends and followers. These widgets have recently come under scrutiny for their privacy implications. Yesterday, The Wall Street Journal reported that Facebook, Twitter, and Google are informed each time a user visits a webpage that contains one of the respective company’s widgets:

Internet users tap Facebook Inc.’s “Like” and Twitter Inc.’s “Tweet” buttons to share content with friends. But these tools also let their makers collect data about the websites people are visiting. These so-called social widgets, which appear atop stories on news sites or alongside products on retail sites, notify Facebook and Twitter that a person visited those sites even when users don’t click on the buttons, according to a study done for The Wall Street Journal.

It wasn’t exactly a secret that social widgets “phone home.” However, the Journal’s story shed new light on how the firms that offer social widgets handle the data they glean regarding user browsing habits. Facebook and Google reportedly store this data for a limited period of time — two weeks and 90 days, respectively — and, importantly, the data isn’t recorded in a way that can be tied back to a user (unless, of course, the user affirmatively decides to “like” a webpage). Twitter reportedly records browsing data as well, but deletes it “quickly.”

Assuming the companies effectively anonymize the data they glean from their social widgets, privacy-conscious users have little reason to worry. I’m not aware of any evidence that social widget data has been misused or breached. However, as Pete Warden reminded us in an informative O’Reilly Radar essay posted earlier this week, anonymizing data is harder than it sounds, and supposedly “anonymous” data sets have been successfully de-anonymized on several occasions. (For more on the de-anonymization of data sets, see Arvind Narayanan and Vitaly Shmatikov’s 2008 research paper on the topic).
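To see why “anonymous” isn’t as strong as it sounds, here is a toy illustration (all data invented): two datasets that share no names can still be joined on quasi-identifiers like ZIP code and birth date, re-linking browsing records to people.

```python
# "Anonymized" browsing log: no names, but quasi-identifiers remain.
anonymized = [
    {"zip": "20036", "birth": "1980-03-14", "visited": "example.org/health"},
    {"zip": "94105", "birth": "1975-07-02", "visited": "example.org/news"},
]

# A separate, public dataset containing the same quasi-identifiers.
public_roster = [
    {"name": "Alice", "zip": "20036", "birth": "1980-03-14"},
    {"name": "Bob",   "zip": "94105", "birth": "1975-07-02"},
]

def reidentify(anon_rows, aux_rows):
    """Join the two datasets on their shared quasi-identifiers."""
    index = {(r["zip"], r["birth"]): r["name"] for r in aux_rows}
    return {index[(r["zip"], r["birth"])]: r["visited"]
            for r in anon_rows
            if (r["zip"], r["birth"]) in index}

# reidentify(anonymized, public_roster) links each browsing record back
# to a name, even though the "anonymized" data contained none.
```

This is the same basic move behind the de-anonymization results Narayanan and Shmatikov describe; stripping names alone rarely suffices when enough auxiliary data exists.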


For those who wonder about the latest craziness coming from California, here is a summary.  It’s truly shocking that California policy makers are going after Silicon Valley, since it is one of the reasons the economy hasn’t completely tanked.

From my recent TNW column:

Facebook is having a tough month. First, it was revealed that the company hired a PR firm to portray competitor Google in a negative light, and now it is facing an even worse scenario: government regulation.

The Social Networking Privacy Act (SB 242) introduced into the California Senate by Sen. Ellen Corbett, D-San Leandro, would force any social networking site to make new users choose their privacy settings when they register and make the default settings private except for the user’s name and city of residence. This is a huge challenge to Facebook CEO Mark Zuckerberg who has argued that making personal data public is the new “social norm.”

Clearly, the battle over what constitutes the appropriate social norm is up for grabs. According to Corbett, “you shouldn’t have to sign in and give up your personal information before you get to the part where you say ‘please don’t share my personal information.'”

This might sound like common sense at first, but someone should remind the senator that signing up for Facebook is voluntary. No one is required to log in or give up their data.

In addition to its stipulations about privacy settings, the bill would force social networking sites to remove any personally identifying information that a user wants to delete and would allow parents to edit their children’s Facebook profiles.

Suddenly the horror that “Mom’s on Facebook” could mean a lot more than potential embarrassment for kids. For those under 18, it might mean deletion of one’s online identity.

[…]

Read more here.