Privacy, Security & Government Surveillance

Reading professor Siva Vaidhyanathan’s recent op-ed in the New York Times, one could reasonably assume that Facebook is now seriously tackling the enormous problem of dangerous information. In detailing his takeaways from a recent hearing with Facebook’s COO Sheryl Sandberg and Twitter CEO Jack Dorsey, Vaidhyanathan explained:

Ms. Sandberg wants us to see this as success. A number so large must mean Facebook is doing something right. Facebook’s machines are determining patterns of origin and content among these pages and quickly quashing them.

Still, we judge exterminators not by the number of roaches they kill, but by the number that survive. If 3 percent of 2.2 billion active users are fake at any time, that’s still 66 million sources of potentially false or dangerous information.

One thing is clear about this arms race: It is an absurd battle of machine against machine. One set of machines creates the fake accounts. Another deletes them. This happens millions of times every month. No group of human beings has the time to create millions, let alone billions, of accounts on Facebook by hand. People have been running computer scripts to automate the registration process. That means Facebook’s machines detect the fakes rather easily. (Facebook says that fewer than 1.5 percent of the fakes were identified by users.)

But it could be that, in its zeal to tamp down criticism from all sides, Facebook has overcorrected and is now over-moderating. The fundamental problem is that it is nearly impossible to know the true amount of disinformation on a platform. For one, there is little agreement on what kind of content needs to be policed. It is doubtful everyone would agree on what constitutes fake news, how it differs from disinformation or propaganda, and how all of that differs from hate speech. But more fundamentally, even if everyone agreed on what should be taken down, it is still not clear that algorithmic filtering methods could perfectly approximate that standard. Continue reading →
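A rough back-of-the-envelope sketch helps show why “perfect” algorithmic filtering is unrealistic at this scale. The 2.2 billion and 3 percent figures come from the op-ed quoted above; the 1 percent error rates are purely hypothetical, chosen only to illustrate the arithmetic:

```python
# Back-of-the-envelope sketch of moderation at platform scale.
# The 2.2 billion and 3 percent figures are cited above; the 1 percent
# error rates are hypothetical, for illustration only.

ACTIVE_USERS = 2_200_000_000        # active accounts (figure cited above)
FAKE_RATE = 0.03                    # share assumed fake (figure cited above)
FALSE_POSITIVE_RATE = 0.01          # hypothetical: real accounts wrongly flagged
FALSE_NEGATIVE_RATE = 0.01          # hypothetical: fake accounts missed

fake_accounts = ACTIVE_USERS * FAKE_RATE
real_accounts = ACTIVE_USERS - fake_accounts

wrongly_flagged = real_accounts * FALSE_POSITIVE_RATE    # over-moderation
fakes_missed = fake_accounts * FALSE_NEGATIVE_RATE       # under-moderation

print(f"Fake accounts at 3 percent:    {fake_accounts:,.0f}")
print(f"Real accounts wrongly flagged: {wrongly_flagged:,.0f}")
print(f"Fakes that still slip through: {fakes_missed:,.0f}")
```

Even a filter that is right 99 percent of the time in both directions wrongly flags on the order of 21 million legitimate accounts while still letting hundreds of thousands of fakes through, which is exactly the tension between over-moderation and under-moderation described above.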

A growing number of voices are raising concerns about privacy rights and data security in the wake of news of data breaches and potential influence operations. The European Union (EU) recently adopted the heavily restrictive General Data Protection Regulation (GDPR), which favors individual privacy over innovation or the right to speak. While there has been some discussion of potential federal legislation related to data privacy, none of these attempts has truly gained traction beyond existing special protections for vulnerable users (like children) or specific categories of information (like healthcare and financial data). Some states, most notably California, are attempting to solve this perceived problem of data privacy on their own, but they often create bigger problems by passing potentially unconstitutional and poorly drafted laws.

Continue reading →

In preparation for a Federalist Society teleforum call that I participated in today about the compliance costs of the EU’s General Data Protection Regulation (GDPR), I gathered together some helpful recent articles on the topic and put together some talking points. I thought I would post them here and try to update this list in coming months as I find new material. (My thanks to Andrea O’Sullivan for a major assist on coming up with all this.)

Key Points:

  • GDPR is no free lunch; compliance is very costly
      • All regulation entails trade-offs, no matter how well-intentioned rules are
      • $7.8 billion estimated compliance cost for U.S. firms already
      • Punitive fines can reach €20 million or 4 percent of global firm revenue, whichever is greater (see the sketch following this list)
      • Vagueness of language leads to considerable regulatory uncertainty — no one knows what “compliance” looks like
      • Even EU member states do not know what compliance looks like: 17 of 24 regulatory bodies polled by Reuters said they were unprepared for GDPR
  • GDPR will hurt competition & innovation; favors big players over small
      • Google, Facebook & others beefing up compliance departments (EU official Vera Jourova: “They have the money, an army of lawyers, an army of technicians and so on.”)
      • Smaller firms exiting or dumping data that could be used to provide better, more tailored services
      • PwC survey found that 88% of companies surveyed spent more than $1 million on GDPR preparations, and 40% more than $10 million.
      • Before GDPR, half of all EU ad spend went to Google. The first day after it took effect, an astounding 95 percent went to Google.
      • In essence, with the GDPR, the EU is surrendering on the idea of competition being possible going forward
      • The law will actually benefit the same big companies that the EU has been going after on antitrust grounds. Meanwhile, the smaller innovators and innovations will suffer.
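To make the penalty structure in the bullets above concrete, here is a minimal Python sketch of the fine ceiling, the greater of €20 million or 4 percent of worldwide annual revenue; the revenue figures in the example are hypothetical, chosen only to show how the ceiling scales:

```python
# Minimal sketch of the GDPR maximum-fine ceiling: the greater of
# EUR 20 million or 4 percent of worldwide annual revenue.
# The revenue figures below are hypothetical, for illustration only.

def max_gdpr_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound on a fine for the most serious infringements."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

for revenue in (50_000_000, 500_000_000, 100_000_000_000):
    fine = max_gdpr_fine(revenue)
    print(f"Revenue EUR {revenue:>15,} -> max fine EUR {fine:>14,.0f} "
          f"({fine / revenue:.0%} of revenue)")
```

The asymmetry is worth noting: for a small firm the flat €20 million floor can amount to a large share of (or even exceed) annual revenue, while the largest platforms never face more than 4 percent, which reinforces the point that the burden falls hardest on smaller players.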

Continue reading →

On Friday, the Supreme Court ruled on Carpenter v. United States, a case involving cell-site location information. In the 5-to-4 decision, the Court declared that “The Government’s acquisition of Carpenter’s cell-site records was a Fourth Amendment search.” What follows is a roundup of reactions and comments on the decision. Continue reading →

Two weeks ago, as Facebook CEO Mark Zuckerberg was getting grilled by Congress during a two-day media circus set of hearings, I wrote a counterintuitive essay about how it could end up being Facebook’s greatest moment. How could that be? As I argued in the piece, with an avalanche of new rules looming, “Facebook is potentially poised to score its greatest victory ever as it begins the transition to regulated monopoly status, solidifying its market power, and limiting threats from new rivals.”

With the probable exception of Google, no firm other than Facebook likely has enough lawyers, lobbyists, and money to deal with the layers of red tape and corresponding regulatory compliance headaches that lie ahead. That’s true both here and especially abroad in Europe, which continues to pile on new privacy and “data protection” regulations. While such rules come wrapped in the very best of intentions, there’s just no getting around the fact that regulation has costs. In this case, the unintended consequence of well-intentioned data privacy rules is that the emerging regulatory regime will likely discourage (or potentially even destroy) the chances of getting the new types of innovation and competition that we so desperately need right now.

Others now appear to be coming around to this view. On April 23, both the New York Times and The Wall Street Journal ran feature articles with remarkably similar titles and themes. The New York Times article by Daisuke Wakabayashi and Adam Satariano was titled, “How Looming Privacy Regulations May Strengthen Facebook and Google,” and The Wall Street Journal’s piece, “Google and Facebook Likely to Benefit From Europe’s Privacy Crackdown,” was penned by Sam Schechner and Nick Kostov.

“In Europe and the United States, the conventional wisdom is that regulation is needed to force Silicon Valley’s digital giants to respect people’s online privacy. But new rules may instead serve to strengthen Facebook’s and Google’s hegemony and extend their lead on the internet,” note Wakabayashi and Satariano in the NYT essay. They continue on to note how “past attempts at privacy regulation have done little to mitigate the power of tech firms.” This includes regulations like Europe’s “right to be forgotten” requirement, which has essentially put Google in a privileged position as the “chief arbiter of what information is kept online in Europe.”
Continue reading →

With Facebook CEO Mark Zuckerberg in town this week for a political flogging, you might think that this is the darkest hour for the social networking giant. Facebook stands at a regulatory crossroads, to be sure. But allow me to offer a cynical take, and one based on history: Facebook is potentially poised to score its greatest victory ever as it begins the transition to regulated monopoly status, solidifying its market power, and limiting threats from new rivals.

By slowly capitulating to critics (both here and abroad) who are thirsty for massive regulation of the data-driven economy, Facebook is setting itself up as a servant of the state. In the name of satisfying some amorphous political “public interest” standard and fulfilling a variety of corporate responsibility objectives, Facebook will gradually allow itself to be converted into a sort of digital public utility or electronic essential facility.

That sounds like trouble for the firm until you realize that Facebook is one of the few companies that can sacrifice a pound of flesh like that and remain alive. As layers of new regulatory obligations are applied, they will become formidable barriers to new innovations and to the very competitors that the public so desperately needs right now to offer us better alternatives. Gradually, Facebook will recognize this and go along with the regulatory schemes. And eventually it will become the biggest defender of all of it.

Welcome to Facebook’s broadcast industry moment. The firm is essentially in the same position the broadcast sector was about a century ago when it started cozying up to federal lawmakers. Over time, broadcasters would warmly embrace an expansive licensing regime that would allow all parties—regulatory advocates, academics, lawmakers, bureaucrats, and even the broadcasters themselves—to play out the fairy tale that broadcasters would be good “public stewards” of the “public airwaves” to serve the “public interest.”

Alas, the actual listening and viewing public got royally shafted in this deal. Continue reading →

This article originally appeared at techfreedom.org.

Today, Rep. Michael McCaul (R-TX) and Sen. Mark Warner (D-VA) introduced legislation to create a blue-ribbon commission that would examine the challenges encryption and other forms of digital security pose to law enforcement and national security. The sixteen-member commission will be made up of experts from law enforcement, the tech industry, privacy advocacy groups, and other important stakeholders in the debate, and will be required to present an initial report after six months and final recommendations within a year.

In today’s Tech Policy Podcast, TechFreedom President Berin Szoka and Ryan Hagemann, the Niskanen Center’s technology and civil liberties policy analyst, discussed the commission’s potential.

“I see this commission as an ideal resting place for this debate,” Hagemann said. “Certainly what we’re trying to avoid is pushing through any sort of knee-jerk legislation that Senators Feinstein or Burr would propose, especially in the wake of a new terrorist attack.”

“I share the chairman’s concerns that since we’re not making any headway on these issues in the public forum, what is really needed here is for Congress to take some level of decisive action and get all of the people who have something to gain as well as something to lose in this debate to just sit down and talk through the issues that all parties have,” he continued.

“I think it’s going to come out and say that there is no middle ground on end-to-end encryption, but it’s probably going to deal with the Apple situation very specifically,” Szoka said. “I think you’re going to see some standard that is going to be probably a little more demanding upon law enforcement than what law enforcement wants under the All Writs Act.”

This article was originally posted on techfreedom.org

On January 11, TechFreedom joined nearly 200 organizations, companies, and experts from more than 40 countries in urging world leaders to support strong encryption and to reject any law, policy, or mandate that would undermine digital security. In France, India, the U.K., China, the U.S., and beyond, governments are considering legislation and other proposals that would undermine strong encryption. The letter is now open to public support and is hosted at https://www.SecureTheInternet.org.

The letter concludes:

Strong encryption and the secure tools and systems that rely on it are critical to improving cybersecurity, fostering the digital economy, and protecting users. Our continued ability to leverage the internet for global growth and prosperity and as a tool for organizers and activists requires the ability and the right to communicate privately and securely through trustworthy networks.

“There’s no middle ground on encryption,” said Tom Struble, Policy Counsel at TechFreedom. “You either have encryption or you don’t. Any vulnerability imposed for government use can be exploited by those who seek to do harm. Privacy in communications means governments must not ban or restrict access to encryption, or mandate or otherwise pressure companies to implement backdoors or other security vulnerabilities into their products.”

This article originally appeared at techfreedom.org

Yesterday, the FTC reiterated its age-old formula: there are benefits, there are risks, and here are some recommendations on what we regard as best practices. The report summarizes the workshop the agency held in October 2014, “Big Data: A Tool for Inclusion or Exclusion?”

Commissioner Ohlhausen issued a separate statement, saying the report gave “undue credence to hypothetical harms” and failed to “consider the powerful forces of economics and free-market competition,” which might avoid some of the hypothetical harms in the report.

“The FTC is essentially saying, ‘there are clear benefits to Big Data and there may also be risks, but we have no idea how large they are,’” said Berin Szoka. “That’s not surprising, given that not a single economist participated in the FTC’s Big Data workshop. The report repeats a litany of ‘mights,’ ‘concerns’ and ‘worries’ but few concrete examples of harm from Big Data analysis — and no actual analysis. Thus, it does little to advance understanding of how to address real Big Data harms without inadvertently chilling forms of ‘discrimination’ that actually help underserved and minority populations.”

“Most notably,” continued Szoka, “the report makes much of a single news piece suggesting that Staples charged higher prices online to customers who lived farther away from a Staples store — which was cherry-picked precisely because it’s so hard to find examples where price discrimination results in higher prices for poor consumers. The report does not mention the obvious response: if consumers are shopping online anyway, comparison shopping is easy. So why would we think this would be an effective strategy for profit-maximizing firms?”

“The FTC can do a lot better than this,” concluded Szoka. “The agency has an entire Bureau of Economics, which the Bureau of Consumer Protection stubbornly refuses to involve in its work — presumably out of the misguided notion that economic analysis is somehow anti-consumer. That’s dead wrong. As with previous FTC reports since 2009, this one’s ‘recommendations’ will have essentially regulatory effect. Moreover, the report announces that the FTC will bring Section 5 enforcement actions against Big Data companies that have ‘reason to know’ that their customers will use their analysis tools ‘for discriminatory purposes.’ That sounds uncontroversial, but all Big Data involves ‘discrimination’; the real issue is harmful discrimination, and that’s not going to be easy for Big Data platforms to assess. This kind of vague intermediary liability will likely deter Big Data innovations that could actually help consumers — like more flexible credit scoring.”

This article originally appeared at techfreedom.org

WASHINGTON D.C. — Yesterday, the Federal Trade Commission announced that it had reached a settlement with Wyndham Hotels over charges that the company had “unreasonable” data security. In 2009, Russian hackers stole customer information, including credit card numbers, from Wyndham hotel systems. The company initially refused to settle an FTC enforcement action, becoming the first to challenge the FTC’s approach to data security in federal court. The FTC has used a decade of settlements with dozens of companies to establish fuzzy de facto standards for data security. In August, the Third Circuit denied Wyndham’s appeal of the district court’s decision to let the case proceed.

“The FTC has, once again, avoided having a federal court definitively answer fundamental questions about the constitutionality of the FTC’s approach to data security,” said Berin Szoka, President of TechFreedom, which joined an amicus brief in the case. “The FTC will no doubt claim the Third Circuit vindicated its approach, but all the court really said was that Wyndham’s specific practices may have been unfair. Indeed, the appeals court agreed with Wyndham that the FTC’s so-called ‘common law of consent decrees’ cannot provide the ‘fair notice’ required by the Constitution’s Due Process clause. This implied that the FTC needs to do much more to guide companies on what ‘reasonable’ data security would be. By settling the case, the FTC avoided having the district court resolve those questions.”

“It’ll take years for another case to work its way through the courts,” explained Szoka. “LabMD’s recent victory before the FTC’s chief administrative law judge is encouraging, and may allow a federal court to weigh in on the requirements of Section 5’s amorphous unfairness standard, if the full Commission overrules the ALJ. But that case focuses more on how the FTC weighs costs and benefits in each enforcement action than on how much guidance the agency provides to industry.”

“It’s high time Congress reasserted itself here,” concluded Szoka. “The FTC has demonstrated little willingness to change from within, and we can’t wait for the courts to address these questions. Congress needs to put the FTC on sounder footing across the board — from data security to privacy and other consumer protection issues. Far from hamstringing the agency, requiring better explanation of what the law requires and weighing of costs and benefits would actually help consumers — both by promoting better business practices and by avoiding FTC actions that end up harming consumers. Such common sense reforms should be bipartisan, just as they were back in 1980, the last time Congress really checked the FTC’s vast discretion.”

Szoka is co-author, along with Geoffrey Manne and Gus Hurwitz, of the FTC: Technology & Reform Project’s initial report, “Consumer Protection & Competition Regulation in a High-Tech World: Discussing the Future of the Federal Trade Commission,” which critiques the FTC’s processes and suggests areas where the FTC, the courts and Congress could improve how the FTC applies its sweeping unfairness and deception powers in data security, privacy and other cases, especially related to technology.